c0mmando

joined 1 year ago

While authentic videos from this US campaign season show some of the leading actors proving, by their own behavior, that truth can indeed be stranger than fiction (in this case, stranger than any deepfake) – Big Tech continues its obsession with deepfake technology as a serious threat.

A threat of such proportions, as far as the likes of Meta are concerned – or are pressured to be concerned – that it calls for some fairly drastic measures.

Take, for example, a new patent application filed by the giant, detailing a method of authenticating users by combining vocalization – “and skin vibration.”

… and what? The filing reveals that this kind of biometric authentication uses not only a person’s voice but also the way speaking causes that person’s skin tissue to vibrate.

This level of “creepiness” in biometric information collection and use is explained as a need to solve security problems that come with activating systems only with one’s voice. That’s because, Meta says, voice can be “generated or impersonated.”

But, some experts say, if skin vibration serves as “a second factor” – then that protects against deepfakes.

Meta doesn’t state if it thinks that what’s true of voice also applies to fingerprints – but the “skin vibration authentication” is supposed to replace both fingerprints and passwords in device activation. Needless to say, Meta insists that “user experience” is improved by all this.

Meta talks about things like smart glasses and mixed reality headsets as use cases where the technology from this new patent can be applied – yet that’s a whole lot of very invasive biometrics-based authentication for a very small market.

For now, those are some of the example devices with the built-in “vibration measurement assembly” that makes this method possible – but once in place, the tech could be used in almost any type of device, and for different purposes.

 

The US Court of Appeals for the Fourth Circuit published its opinion in the United States v. Chatrie case, which concerns alleged violations of the Fourth Amendment.

We obtained a copy of the opinion for you here.

This constitutional amendment is supposed to protect against unreasonable (including warrantless) searches.

At the center is Google, and how the giant’s collection of users’ locations, then accessed by others to locate a person, might constitute a violation.

In a 2-1 vote, the appellate court decided that accessing Google location data is not a search.

The case was originally heard by a district court, and concerned location data used to identify a bank robber. The warrant was based on the mass, indiscriminate surveillance method known as “geofencing.”
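To make the mechanics concrete: a geofence warrant effectively asks the provider to run a query like the one sketched below over its stored location records. This is a purely hypothetical reconstruction – Google’s actual location pipeline and schema are not public – but it captures the dragnet shape of the request: every device inside a given circle during a given time window.

```python
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class LocationRecord:
    device_id: str     # hypothetical schema; the real one is not public
    lat: float
    lon: float
    timestamp: int     # Unix seconds

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(a))

def geofence(records, lat, lon, radius_m, t_start, t_end):
    """Return every device seen inside the circle during the time window."""
    return {
        r.device_id
        for r in records
        if t_start <= r.timestamp <= t_end
        and haversine_m(r.lat, r.lon, lat, lon) <= radius_m
    }
```

Note that nothing in such a query names a suspect: it sweeps in every device that happened to be in the area, which is exactly why critics call it indiscriminate.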

In 2022, that court found that collecting this data and making it available to law enforcement did amount to a search performed in contravention of the Fourth Amendment.

This was viewed as unconstitutional, and the court was not satisfied that (in this case) location information collected this way passed legal muster.

Two years on, Circuit Court judges Jay Richardson and Harvie Wilkinson concluded that the search of location data was no search at all – at least not in their understanding of the Fourth Amendment. The dissenting opinion came from Judge James Wynn.

Judge Richardson states that the appellant (the defendant in the appeals proceedings), Okello Chatrie, “did not have a reasonable expectation of privacy” in the Google location history the government accessed during the two hours he was “geofenced” by Google – plus, Chatrie “volunteered” the data in the first place (by using Google and its location feature).

The Circuit Court, which extensively cited the 2018 Carpenter v. United States ruling, also seems to go into the meaning of privacy, and possibly to try to redefine it. Namely: do “only” two hours of a person’s life (monitored by Google and then accessed by law enforcement) count? Not really, as the majority opinion put it:

“All the government had was an ‘individual trip viewed in isolation,’ which, standing alone, was not enough to enable deductions about ‘what (Chatrie) does repeatedly, what he does not do, and what he does ensemble.’”

And – “Chatrie voluntarily exposed his location information to Google by opting in to Location History.”

Apart from future implications regarding geofencing, there’s a life hack hidden in this ruling as well: just to be on the safe side, never opt in to Google’s surveillance schemes.

 

AT&T is facing severe criticism following a substantial data breach where hackers accessed the call records of “NEARLY ALL” its mobile subscribers, totaling approximately 109 million individuals.

This doesn’t just affect AT&T customers; it affects everyone those customers have interacted with.

In a statement to Reclaim The Net, the telecommunications giant confirmed that the breach occurred between April 14 and April 25, 2024, involving its Snowflake storage. Snowflake, a provider that facilitates large-scale data warehousing and analytics in the cloud, is now under scrutiny for security lapses in the wake of multiple breaches facilitated by stolen credentials.

Recently, the security firm Mandiant identified a financially motivated hacker group, known as “UNC5537,” targeting Snowflake users. This has led to a series of data thefts, prompting Snowflake to implement stricter security measures, including mandatory multi-factor authentication for its administrators.

The stolen data includes call and text metadata from May 1 to October 31, 2022, as well as records from January 2, 2023. This metadata encompasses telephone numbers, interaction counts, and aggregate call durations, affecting not only AT&T’s direct customers but also those of various mobile virtual network operators (MVNOs).

AT&T took immediate action upon discovering the breach, engaging with cybersecurity experts and contacting the FBI. According to an official statement, the FBI, along with the Department of Justice (DOJ), evaluated the breach’s implications for national security and public safety, which led to delays in public disclosure, sanctioned on May 9 and June 5, 2024. The FBI emphasized its role in assisting victims of cyberattacks and the importance of early communication with law enforcement in such incidents.

“We have taken steps to close off the illegal access point,” AT&T continued in its statement. “We are working with law enforcement in its efforts to arrest those involved in the incident. We understand that at least one person has been apprehended.”

Customers should take several proactive steps to protect their personal information and reduce potential risks:

Be Wary of Phishing Attempts

Hackers may attempt to use stolen data to craft convincing phishing emails or texts. Customers should be cautious about unsolicited communications asking for personal information or urging them to click on suspicious links.

Use MFA (Multi-Factor Authentication)

While passwords were not compromised in this breach, enabling MFA wherever available can enhance security on all digital accounts. Avoid using text messages for account verification – that is, codes a company sends you by SMS to access your account. It is much safer to use a two-factor authentication app, which generates codes on your own device.
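For the curious: authenticator apps typically implement TOTP (RFC 6238), where a secret shared once with the service, combined with the current time, produces a short-lived code – nothing ever travels over the SMS network. A minimal sketch in Python using only the standard library (the secret shown is a placeholder, not a real credential):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the big-endian time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret -> e.g. "492039"
```

Because the code is derived locally from the shared secret and the clock, there is nothing for an SMS interceptor or SIM-swapper to steal in transit.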

Avoid Using Standard Phone Calls and SMS Text Messages as Much as Possible

Phone carriers, by virtue of their central role in facilitating communications, inherently collect and store vast amounts of metadata related to phone calls and text messages. This metadata, which includes details such as call times, durations, and the numbers involved, can be highly sensitive. Despite its non-content nature, metadata can reveal intricate details about a person’s life, habits, and social networks (a short sketch after the list below shows just how revealing it can be). Here are some reasons why phone carriers are often more vulnerable to metadata leaks:

Large Data Stores: Phone carriers manage enormous volumes of data daily. Each call or text generates metadata that is logged and stored. The sheer volume of this data makes it a significant target for hackers, and managing its security can be challenging.

Regulatory Requirements: Carriers are often required by law to retain metadata for certain periods for lawful intercept capabilities and other regulatory reasons. This obligation to store data can increase the risk of breaches, as older, possibly less secure systems may be used for storage.

Complex Systems and Integration: The infrastructure of telecom companies is complex and often integrated with various legacy systems and third-party services. Each integration point can introduce vulnerabilities, potentially offering hackers multiple entry points to access and extract data.

Insufficient Encryption Practices: While the content of communications might be encrypted, the metadata often is not. This oversight can leave sensitive information exposed to anyone who gains unauthorized access to the system.

High Value for Surveillance and Advertising: Metadata is extremely valuable for surveillance purposes, as well as for targeted advertising. This makes it a lucrative target for unauthorized actors, including state-sponsored groups and cybercriminals looking to monetize the data.

Delayed Disclosure: Carriers might delay disclosing data breaches due to ongoing investigations or national security implications, as seen in the AT&T breach. This delay can exacerbate the problem, increasing the window during which stolen data can be misused.

Underestimation of Metadata Sensitivity: There is often a misconception that metadata is not as sensitive as direct communication content. This misunderstanding can lead to less rigorous security measures being applied to protect this type of data.

Economic and Technical Resources: Despite having significant resources, phone carriers may prioritize cost-saving measures over the implementation of state-of-the-art security solutions. Additionally, updating and securing sprawling networks can be technically challenging and expensive.
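As promised above, here is a toy sketch of how little code it takes to turn “non-content” call records into a picture of someone’s relationships and habits. The records and field layout are invented purely for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical call-detail records: (caller, callee, unix_time, seconds)
cdrs = [
    ("alice", "bob",    1714550400, 320),
    ("alice", "clinic", 1714636800,  95),
    ("alice", "bob",    1714723200, 610),
    ("alice", "clinic", 1715241600, 120),
]

# Who someone talks to, and how often: a social graph from metadata alone.
contacts = Counter(callee for caller, callee, _, _ in cdrs if caller == "alice")

# Total talk time per contact reveals the strength of each tie.
talk_time = defaultdict(int)
for caller, callee, _, secs in cdrs:
    if caller == "alice":
        talk_time[callee] += secs

print(contacts.most_common())  # [('bob', 2), ('clinic', 2)]
print(dict(talk_time))         # {'bob': 930, 'clinic': 215}
# Repeated calls to a clinic, a lawyer, or a hotline are revealing
# even though no call content was ever recorded.
```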

Use end-to-end encrypted apps to communicate instead and encourage family and friends to do the same.

Using apps that offer end-to-end encryption (E2EE) is crucial for maintaining privacy and security, especially in the wake of breaches like the one experienced by AT&T, where call data was exposed. Here’s why E2EE apps are a better choice:

Enhanced Privacy Protection: End-to-end encryption ensures that messages, calls, and files are encrypted on the sender’s device and only decrypted on the recipient’s device. This means that no one in between, not even the service providers or potential interceptors, can read or listen to the content. This is crucial when the metadata (like call logs and contact numbers) is exposed, as the content of the communications remains secure. (A minimal code sketch of this mechanism follows the list below.)

Security Against Interception: E2EE is particularly important for protecting against potential eavesdropping. Even if a hacker can access transmission lines or servers, they cannot decrypt the encrypted data without the unique keys held only by the sender and receiver.

Prevention of Third-Party Access: In cases where service providers are subpoenaed for user data, they cannot hand over what they do not have access to. E2EE means the service provider does not have the decryption keys and therefore cannot access the content of the communications, offering an additional layer of legal protection.

Reduced Risk of Data Breaches: If a data breach occurs and encrypted data is stolen, the information remains protected because it is unreadable without the decryption keys. This significantly reduces the risk associated with data theft.

Trust and Compliance: Using E2EE can help companies build trust with their customers by showing a commitment to privacy and security. It can also help in complying with privacy regulations and standards, which increasingly mandate the protection of personal data.

Mitigation of Damage from Breaches: While encryption does not prevent data from being stolen, it devalues the data, making it useless to the thief. This is particularly important in incidents where sensitive information is at risk of being exposed.
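As a minimal sketch of the end-to-end encryption mechanism described above, here is the basic public-key “box” pattern using the PyNaCl library (a Python binding for libsodium). Real messengers such as Signal add key ratcheting, authentication, and group handling on top, so treat this as the concept rather than a production protocol:

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device;
# the private keys never leave it.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# Anyone intercepting `ciphertext` (an ISP, a server, a hacker)
# sees only random-looking bytes.

# Only Bob, holding his private key, can decrypt.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```

The service provider only ever relays the ciphertext, which is why it has nothing usable to hand over when subpoenaed and nothing readable to lose in a breach.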

Given these advantages, users are strongly advised to prefer communication apps and services that offer robust end-to-end encryption. This not only protects the content of their communications but also serves as a critical defense mechanism in today’s digital and often vulnerable cyber landscape.

AT&T has provided a FAQ page where customers can find out if their data was involved in the breach. It’s important for customers to use these resources to assess their exposure.

 

Tony Blair Institute’s Future of Britain Conference 2024 (co-organized with My Life My Say) seems to have gone out of its way to cover, with a positive spin, pretty much all the key plans and schemes contested by rights advocates – digital ID inevitably among them.

One of the panelists, former Indian Minister of State for Electronics, Information Technology, Skill Development and Entrepreneurship Rajeev Chandrasekhar, was there to praise a major set of goals aimed at ushering in digital ID and payments by the end of the decade.

The “umbrella” for achieving that is what’s known as the digital public infrastructure (DPI) – a buzzword shared by the UN, the EU, the WEF, and Bill Gates’ Foundation.

At the same time, Chandrasekhar downplayed privacy fears associated with digital ID and revealed that his country was working with others to push the initiative.

The host asserted that introducing digital identity is “so important for the transformation of a country” (he didn’t specify in which direction this transformation is supposed to go).

But Chandrasekhar made sure to talk about the positives, such as that the system, Aadhaar, which at this time provides 1.2 billion Indians with digital identities, is helping improve on what was previously seen as his country’s “dysfunctional governance.” And he appears to suggest that the notion once common in Asia – that this type of scheme is only good for countries like China but not democracies – is shifting.

The perception (or fact-based belief) that aggressive digitization and privacy are ultimately incompatible is “a false binary,” he said.

And despite the many instances of Aadhaar being the target of data breaches and hacks, and the ensuing concerns for the safety of people’s personal data, Chandrasekhar sought to downplay these dangers – by citing the legislative tools in place that are supposed to prevent them.

The former government official said that in India privacy and data protection are fundamental and constitutional rights and that the country has a data protection law. And this, it appears, is Chandrasekhar’s argument that privacy and policies covered by the DPI and digital ID are actually safe.

Chandrasekhar also notes that “if you go down and deep dig a little deep into this, you can figure out solutions that can both protect the individual’s rights to information privacy as well as grow an innovation ecosystem.”

But he does not reveal whether India, or others that he is aware of, are actually “digging a little deeper.”

 

Deployment of facial recognition has received another “endorsement” in the UK – including from London’s Metropolitan Police Director of Intelligence Lindsey Chiswick – during an event co-organized by the Tony Blair Institute for Global Change.

The Future of Britain Conference 2024 was co-hosted by My Life My Say, a charity with links to UK authorities and the US embassy in London.

Despite civil rights groups like Big Brother Watch consistently warning against turning the UK’s high streets into a facial recognition-powered privacy and security nightmare, Chiswick was upbeat about using this technology.

She shared that the results so far have been “really good,” and asserted that this is happening as the Met are “conscious” of privacy concerns, which is far from any pledge that those concerns are being properly addressed – the police are simply aware of them.

Perhaps in line with that attitude, she conveniently left out the fact that the system is essentially a dragnet, scanning the faces of hundreds of thousands of law-abiding citizens in search of a very small number of criminal offenders – sometimes just to make a single arrest.

But while Chiswick directs citizens to the Met website where they can see “transparency” in action – explanations of the legal mandate, and “all sorts of stuff” – she insists that this transparency is much better than what private companies who use the same tech offer.

The idea seems to be to reassure the public not by stating “we respect your privacy and rights, and here is how” – but rather, “we’re less bad than the other guys.”

According to Chiswick, facial recognition opens up a number of “opportunities” (in the mass surveillance effort) – such as crime pattern recognition, traffic management, forensic analysis, and body-worn video analysis.

This high-ranking Met official came across as a major proponent and/or apologist of the controversial tech, describing it as a “game changer” that has already made a “huge difference” in how London is policed.

Chiswick goes into the various techniques used to try to match images (taken by surveillance equipment, and from other sources) – one of them, live facial recognition, being the most contentious.

She promises that the “bespoke watch list” against which live camera feed images are compared is “not massive.”

“That’s being created over time. So it’s bespoke to the intelligence case that sits behind the deployment,” Chiswick said. “If an offender walks past the camera and there’s an alert, that’s a potential match.”

 

The Tony Blair Institute for Global Change and the My Life My Say charity co-hosted the Future of Britain Conference 2024, which heard the Blair organization’s director of health policy, Charlotte Refsum, and other panelists speak in favor of more commercialization and surveillance of health data.

This was one of several controversial issues covered during the event, along two main lines – more surveillance of various types, and combating “disinformation.”

Blair Institute’s choice of organizing partner is telling as well, since My Life My Say, which focuses on getting young people out to vote, lists the UK Cabinet Office and the US embassy in London, as well as the mayor of London, among its past partners or backers.

Regarding health data, Refsum urged the creation of digital health records for all citizens, as well as of a private commercial entity dubbed a “national data trust” – which would be tasked with commercializing access to sensitive health data in the country and generating revenue that way.

Blair himself was less straightforward, as politicians tend to be, but appears to be pushing for digital health records and a national data trust. He seemed somewhat evasive when Refsum asked him about them, speaking instead about the benefits of technology for health in general terms.

Wellcome, another charitable foundation with ties to the UK government – the Department of Health and Social Care – would like to see the National Health Service (NHS) “integrate all the data” it has to achieve a “learning population health system.”

This is according to Wellcome’s Dr. John-Arne Rottingen, who is also a fan of “faster intelligence” and of reaching this goal by feeding massive amounts of data into the schemes.

Rottingen, who is Norwegian, spoke about what he considers a positive example of Scandinavian countries that have already linked access to health data “across the full population.”

In contrast to his learning population health system is the current state of affairs, where this information is “locked in different parts of the system,” noted Rottingen.

He urged researchers in the UK to enter public-private partnerships in order to come up with “insights” that are supposed to provide the driving force for a future “sustainable healthcare system.”

 

EU’s law enforcement agency Europol is another major entity that is setting its sights on breaking encryption.

This time, it’s about home routing and mobile encryption, and the justification is a well-known one: encryption supposedly stands in the way of the ability of law enforcement to investigate.

The overall rationale is that police and other agencies face serious challenges in doing their job (an argument repeatedly proven as false) and that destroying the internet’s currently best available security feature for all users – encryption – is the way to solve the problem.

Europol’s recent paper treats home routing not as a useful security feature but as “a serious challenge for lawful interception.” With home routing, a roaming phone’s traffic is handled and encrypted by its home network, rather than by the local network it is visiting.

We obtained a copy of the paper for you here.

Europol appears to want to operate on trust: the agency “swears” it needs access to this protected traffic simply to catch criminals. And if the feature were gone, ISPs and Europol could have smooth access to traffic.

But if the past decade or so has taught law-abiding citizens anything, it is how, given the right tools, massive government and transnational organizations “seamlessly” slip from lawful to unlawful conduct, and secretive mass surveillance.

Not to mention that tampering with encryption – in this instance available in home routing as part of privacy-enhancing technologies (PET) – means opening a can of worms in security and privacy terms.

It turns out, as ever, that agencies like Europol actually do have other mechanisms to go after criminals, some more controversial than others: one is “voluntary cooperation” by providers outside the EU (in which case Europol has to share information about “persons of interest” using foreign phone cards with other countries); another is issuing an EIO – a European Investigation Order.

But that barely compares to breaking encryption, in terms of setting up the infrastructure for effective mass surveillance. Europol’s complaint about the available procedures naturally doesn’t mention any of that – instead, they talk about “slow EIO replies” that hinder “urgent investigations.”

Europol presents two solutions to the home routing encryption “problem”: one is to disable PET in home routing; the other, a cross-border mechanism inside the EU where “interception requests are quickly processed by service providers.”

 

Earlier this week the EU Commission (EC) published its second report on what it calls “the state of the digital decade,” urging member countries to step up the push to increase access and incentivize the use of digital ID and electronic health records.

At the same time, the bloc is satisfied with how the crackdown on “disinformation,” “online harms,” and the like is progressing.

In a press release, the EC said the report was produced to assess the progress made toward the objectives contained in the Digital Decade Policy Program (DDPP), which targets 2030 as the year of completion.

EU members have now for the first time contributed to the document with analyses of their national “Digital Decade strategic roadmaps.” And, here, the EC is not exactly satisfied: the members’ efforts will not meet the EU’s “level of ambition” if things continue to develop as they currently are, the document warns.

In that vein, while the report is generally upbeat on the uptake of digital ID (eID schemes) and the use of e-Health records, its authors point out that there are “still significant differences among countries” in terms of eID adoption.

To remedy member countries falling short on these issues, it is recommended that they push for increased access to eID and e-Health records in order to meet the objectives set for 2030.

The EU wants to see both these schemes available to 100% of citizens and businesses by that date – and reveals that eID is at this point available to 93% of citizens across the bloc’s 27 member countries, “despite uneven take-up.”

Still, the EC’s report shows that policymakers in Brussels are optimistic that the EU digital ID Wallet will “incentivize” eID use.

And, the document’s authors are happy with the way the controversial Digital Services Act (DSA) is getting enforced. Critics, however, believe it is there to facilitate crackdowns on speech – under the guise of combating “disinformation,” etc.

The EU calls this “strengthening the protection against online harms and disinformation,” while also mentioning that it is launching investigations (into online platforms) to make sure the DSA is enforced.

And in order to reinforce the message that DSA is needed as a force for good, the report asserts that “online risks are on the rise and disinformation has been identified as one of the most destabilizing factors for our societies, requiring comprehensive, coordinated action across borders and actors.”

 

A recent advisory published by cybersecurity vendor Resecurity exposes a trend now developing on the dark web – more and more stolen biometrics-based data is ending up in this corner of the internet.

These revelations, describing the increase in activity of this type as “significant,” highlight the case of Singapore, including its SingPass scheme.

At the same time, they confirm fears that the digital ID and age verification push will sooner or later turn into a privacy nightmare.

In Singapore, every citizen and resident has a SingPass (Singapore Personal Access) digital ID account, which is touted by the authorities in the city-state as their “trusted digital identity” – not to mention a “convenient” one.

Blackhat hackers, however, beg to differ – and it’s hard to imagine that digital ID holders affected by identity theft think of the scheme as in any way “convenient.”

Security researchers say that overall, year-on-year, as many as 230 percent more “vendors” are now selling stolen personal information that often contains facial recognition data, fingerprints, and other biometrics belonging to Singaporeans.

A majority of this data has been up for sale on the XSS dark web forum, according to the same source.

In 2024 thus far, this type of activity peaked in April, following a rise in data breaches where cybercriminals targeted a number of online databases that store this information.

Stolen citizens’ identities are then used for a variety of criminal activities, including fraud, scams, and the creation of deepfakes. But once this kind of floodgate opens, exposing particularly sensitive data, spies and various governments are never far behind the common criminals in exploiting the breaches.

Other than supposedly being “easy and secure,” SingPass gives access to more than 1,700 government and private sector services in Singapore, both online and in person.

But Resecurity said that more than 2,377 of these accounts were compromised last month alone, with the firm adding that their holders have been notified of the discovery.

However, the firm’s advisory noted that in many cases online platforms that suffer data breaches do not disclose these incidents, which means that citizens and residents in Singapore whose identities have been stolen are not even aware of this.

 

Meta last fall came up with an idea of how to comply with the EU’s Digital Markets Act (DMA) (not to be confused with the Digital Services Act (DSA) – considered by critics to be a “censorship law”).

Namely, Meta announced at the time that in order to adhere to DMA, and allow an ad-free “experience” in the EU (but also in the European Economic Area, EEA, and Switzerland) Facebook and Instagram would offer subscriptions to privacy-minded users.

The problem with what Meta calls “a free, inclusive, ad-supported” internet is not just that ads are annoying – it’s that people actually do pay what turns out to be a pretty hefty price, i.e., with their sensitive personal data monetized by the giant for “personalized,” aka, targeted advertising.

But this “opt-out” (for a fee), or alternatively consent to data collection in order to continue using the platforms (“for free”), didn’t go over well in the EU, for reasons presented by the EU’s Commission in its typical barely-human-readable fashion.

As per the EC, the reasons are the following: the proposed Meta scheme “does not allow users to opt for a service that uses less of their personal data but is otherwise equivalent to the ‘personalized ads-based service,’” and “does not allow users to exercise their right to freely consent to the combination of their personal data.”

The enthusiasm of EU Commissioner for Internal Market Thierry Breton for the EC findings published earlier this week – given his previous track record – does tempt onlookers to wonder if this decision really has to do with protecting competitiveness and users in Europe – or is yet another form of pressuring Meta, at a sensitive (political) time.

Whatever the case may be, also in general EU fashion, the Commission’s findings are only the beginning of a lengthy process expected to last for months, as Meta examines the findings and tries to counter the arguments in defense of its position.

What we know, thanks to a spokesperson’s statement, is that Meta will try to prevail in this controversy by, among other things, citing the EU’s top court, the Court of Justice of the European Union (CJEU), as having in 2023 “endorsed” its proposed scheme, which the giant asserts does comply with the DMA.

If this fight is eventually lost, Meta can look forward to parting with fines of up to 10 percent of global turnover, rising to 20 percent for repeat infringement.

 

Secret international discussions have resulted in governments across the world imposing identical export controls on quantum computers, while refusing to disclose the scientific rationale behind the regulations. Although quantum computers theoretically have the potential to threaten national security by breaking encryption techniques, even the most advanced quantum computers currently in public existence are too small and too error-prone to achieve this, rendering the bans seemingly pointless.

The UK is one of the countries that has prohibited the export of quantum computers with 34 or more quantum bits, or qubits, and error rates below a certain threshold. The intention seems to be to restrict machines of a certain capability, but the UK government hasn’t explicitly said this. A New Scientist freedom of information request for a rationale behind these numbers was turned down on the grounds of national security.

France has also introduced export controls with the same specifications on qubit numbers and error rates, as has Spain and the Netherlands. Identical limits across European states might point to a European Union regulation, but that isn’t the case. A European Commission spokesperson told New Scientist that EU members are free to adopt national measures, rather than bloc-wide ones, for export restrictions. “Recent controls on quantum computers by Spain and France are examples of such national measures,” they said. They declined to explain why the figures in various EU export bans matched exactly, if these decisions had been reached independently.

A spokesperson for the French Embassy in London told New Scientist that the limit was set at a level “likely to represent a cyber risk”. They said that the controls were the same in France, the UK, the Netherlands and Spain because of “multilateral negotiations conducted over several years under the Wassenaar Arrangement”.

“The limits chosen are based on scientific analyses of the performance of quantum computers,” the spokesperson told New Scientist. But when asked for clarification on who performed the analysis or whether it would be publicly released, the spokesperson declined to comment further.

The Wassenaar Arrangement is a system adhered to by 42 participating states, including EU members, the UK, the US, Canada, Russia, Australia, New Zealand and Switzerland, that sets controls on the export of goods that could have military applications, known as dual-use technologies. Canada has also implemented a quantum computer export ban with identical wording on 34 qubits.

New Scientist wrote to dozens of Wassenaar states asking about the existence of research on the level of quantum computer that would be dangerous to export, whether that research has been published and who carried it out. Only a few responded.

“We are closely observing the introduction of national controls by other states for certain technologies,” says a spokesperson for the Swiss Federal Department of Economic Affairs, Education and Research. “However, existing mechanisms can already be used to prevent in specific cases exports of such technologies.”

“We are obviously closely following Wassenaar discussions on the exact technical control parameters relating to quantum,” says Milan Godin, a Belgian adviser to the EU’s Working Party on Dual-Use Goods. Belgium doesn’t appear to have implemented its own export restrictions yet, but Godin says that quantum computers are a dual-use technology due to their potential to crack commercial or government encryption, as well as the possibility that their speed will eventually allow militaries to make faster and better plans – including in relation to nuclear missile strikes.

A spokesperson for the German Federal Office for Economic Affairs and Export Control confirmed that quantum computer export controls would be the result of negotiations under the Wassenaar Arrangement, although Germany also doesn’t appear to have implemented any restrictions. “These negotiations are confidential, unfortunately we cannot share any details or information about the considerations of this control,” says the spokesperson.

Christopher Monroe, who co-founded quantum computer company IonQ, says people in the industry have noticed the identical bans and have been discussing their criteria, but he has no information on where they have come from.

“I have no idea who determined the logic behind these numbers,” he says, but it may have something to do with the threshold for simulating a quantum computer on an ordinary computer. This becomes exponentially harder as the number of qubits rises, so Monroe believes that the rationale behind the ban could be to restrict quantum computers that are now too advanced to be simulated, even though such devices have no practical applications.
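That simulability threshold is easy to sanity-check with arithmetic: a classical simulator must hold 2^n complex amplitudes to represent an n-qubit state, so memory doubles with every added qubit. A rough back-of-the-envelope sketch (assuming 16-byte complex128 amplitudes):

```python
# Memory needed to hold the full state vector of an n-qubit
# quantum computer on a classical machine: 2**n amplitudes,
# each a 16-byte double-precision complex number.
for n in (30, 34, 40):
    gib = 2 ** n * 16 / 2 ** 30
    print(f"{n} qubits -> {gib:,.0f} GiB")

# 30 qubits -> 16 GiB      (a laptop)
# 34 qubits -> 256 GiB     (a large server; the figure in the export bans)
# 40 qubits -> 16,384 GiB  (supercomputer territory)
```

On this reading, 34 qubits sits roughly where brute-force simulation stops being feasible on ordinary hardware – though, as noted, no official rationale has been published.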

“The fallacy there is that just because you cannot simulate what the quantum computer is doing doesn’t make it useful. And by severely limiting research to progress in this grey area, it will surely stifle innovation,” he says.

 

FedEx is using AI-powered cameras installed on its trucks to help aid police investigations, a new report has revealed.

The popular postal firm has partnered with a $4 billion surveillance startup based in Georgia called Flock Safety, Forbes reported.

Flock specializes in automated license plate recognition and video surveillance, and already has a fleet of around 40,000 cameras spanning 4,000 cities across 40 states.

FedEx has teamed up with the company to monitor its facilities across the US, but under the deal it is also sharing its Flock surveillance feeds with law enforcement. And it is believed to be one of four multi-billion dollar private companies with this arrangement.

It's led critics to liken the move to rolling out a mass surveillance network - as it emerged that some local police forces are also sharing their Flock feeds with FedEx.

Jay Stanley, a policy analyst with the ACLU, told the Virginian Pilot: 'There's a simple principle that we've always had in this country, which is that the government doesn't get to watch everybody all the time just in case somebody commits a crime.'

'The United States is not China,' he continued. 'But these cameras are being deployed with such density that it's like GPS-tracking everyone.'

In response to Forbes' report that FedEx was part of Flock's surveillance system, he told the outlet: 'It raises questions about why a private company…would have privileged access to data that normally is only available to law enforcement.'

He went on to bill it as 'profoundly disconcerting'.

Flock Safety's cameras are used to track vehicles by their license plates, as well as the make, model, and color of their cars. Other identifying characteristics are also monitored, such as dents and even bumper stickers.

Lisa Femia, staff attorney at the Electronic Frontier Foundation, warned that FedEx's participation could prove problematic because private firms are not subject to the same transparency laws as cops.

This, she told Forbes, could '[leave] the public in the dark, while at the same time expanding a sort of mass surveillance network.'

The Shelby County Sheriff's Office in Tennessee confirmed its partnership with Flock in an email to Forbes.

'We share reads from our Flock license plate readers with FedEx in the same manner we share the data with other law enforcement agencies, locally, regionally, and nationally,' public information officer John Morris told the outlet.

He also confirmed his department had access to FedEx’s Flock feeds.

Its participation was unmasked after Forbes found the name of the force on publicly available lists of data sharing partners - along with others such as the Pittsboro Police Department in Indiana, located just outside of Indianapolis.

Pittsboro police chief Scott King reportedly did not comment on why his department is participating but insisted the force had not requested access to a private system.

'Only those listed under law enforcement,' he said.

Assistant Chief Matthew Fillenwarth of the Greenwood Police Department confirmed his force, also in Indiana, is similarly participating.

The Memphis Police Department also stated it had received camera feeds from FedEx but did not confirm whether these were provided by Flock.

When speaking about networks of license plate readers, Brett Max Kaufman, a senior staff attorney at the American Civil Liberties Union (ACLU), told Forbes: 'The scale of this kind of surveillance is just incredibly massive.'

He went on to describe to the outlet how the warrantless monitoring of citizens en masse was 'quite horrifying'.

FedEx declined to answer questions about its partnership with Flock, saying in a statement: 'We take the safety of our team members very seriously. As such, we do not publicly discuss our security procedures.'

There is no suggestion the partnership is illegal, but some critics suggest it flouts the basic tenets of the Constitution.

For now, it is unclear just how far-reaching the partnership between law enforcement and FedEx actually is, or how much Flock data is being shared.

Forbes also found that FedEx was not alone in its decision to sign up - with Kaiser Permanente, the largest health insurance carrier in the US, also taking part.

The company shared data garnered from Flock cameras with the Northern California Regional Intelligence Center, an intelligence hub that provides support to local and federal police investigations involving major crimes across California's west coast.

'As part of our robust security programs, license plate readers are not only an effective visual deterrent, but the technology has allowed us to collaborate with law enforcement within the parameters of the law,' a spokesperson confirmed.

'The technology has been used in response to warrants and subpoenas, as well as in other scenarios regarding potential or ongoing crimes on the facilities' premises - and it has supported the arrest and prosecution of those committing crimes.'

The cameras were labeled to disclose to passersby they were filming - but she declined to comment when asked about where the company had these cameras deployed.

Meanwhile, police forces around the world have continued to pick up Flock as a partner over the past few years - with more than 1,800 law enforcement agencies taking part.

Overall, more than 3,000 American communities use Flock technology, only ten years since the startup surfaced in 2014.

The firm today is valued at nearly $4 billion, and continues to receive a steady stream of venture capital.

In 2022, it raised an astounding $300 million in just seven months, followed by $38 million in Series B funding in February the following year.

It uses real-time data 'to enable and incentivize safer driving,' a description on its website states - describing the effort as 'the world's first fully digital insurance company for connected and autonomous commercial vehicles.'

'Eliminate crime in your community,' reads a chyron geared toward businesses in the private sector - such as grocery stores.

[–] c0mmando@links.hackliberty.org 2 points 1 week ago (1 children)

also consider any prior activity from this used phone will now be associated with you. when people are considering switching to grapheneos, i typically recommend buying a new pixel 7a in store using cash.

[–] c0mmando@links.hackliberty.org 0 points 4 weeks ago (1 children)

Thanks for the post, I've made links.hackliberty.org available over Tor at http://snb3ufnp67uudsu25epj43schrerbk7o5qlisr7ph6a3wiez7vxfjxqd.onion
