c0mmando

joined 1 year ago

The number of facial recognition searches law enforcement conducted via controversial Clearview AI technology doubled to 2 million over the past year, the company said Thursday.

The number of images stored in the company’s database of faces, which is used to compare biometrics, also has surged, now totalling 50 billion, according to a statement from CEO Hoan Ton-That.

Last November the database contained 40 billion images, Time reported, quoting Ton-That.

Biometric Update first reported the new statistics.

The use of facial recognition technology by law enforcement continues to draw close scrutiny. Some police departments have banned the technology, only to be exposed repeatedly asking nearby departments to run searches on their behalf.

Critics have worried that police will abuse the technology, a fear borne out in an Evansville, Indiana, episode earlier this month. An officer there resigned under pressure after officials found he had been using the technology for personal reasons, searching social media accounts.

Earlier this month, Clearview reached a preliminary settlement in a class action lawsuit accusing it of invading people’s privacy; under the deal, class members whose faces appear in its database would receive a 23 percent stake in the company.

The company’s technology is used by federal law enforcement agencies and police departments nationwide. A vast majority of Americans' faces are in the database.

 

Updated At least two VPNs are no longer available for Russian iPhone users, seemingly after the Kremlin's internet regulatory agency Roskomnadzor demanded Apple take them down.

Red Shield VPN, which is focused on providing its services to Russian users, claims it received a note from Apple saying its VPN was removed from the Russian App Store. The email, which the VPN operator shared on X, says Cupertino had to remove the app from the App Store in Russia because the software did not "conform with all local laws." This comes after the Kremlin apparently spent years trying technological approaches to blocking the VPN.

"Apple's actions, motivated by a desire to retain revenue from the Russian market, actively support an authoritarian regime," Red Shield said in a statement.

"Over the past six years, Russian authorities have blocked thousands of Red Shield VPN nodes but have been unable to prevent Russian users from accessing them. Apple, however, has done this job much more effectively for them.

"This is not just reckless but a crime against civil society. The fact that a corporation with a capitalization larger than Russia's GDP helps support authoritarianism says a lot about the moral principles of that corporation."

Le VPN also says it was removed from the Russian App Store, and shared the same email.

Roskomnadzor has been on a bit of a banning spree lately, and previously tried to pressure Mozilla into removing five apps, including VPNs, from the Mozilla store. However, the Firefox maker reversed its ban after a week and the five apps have since remained up.

Google has apparently received similar requests, Russian internet freedom NGO Roskomsvoboda (not to be confused with Roskomnadzor) told The Register.

"We also know that Google has received similar requests from the Russian regulatory agency and has even notified some proxy services that they might face removal," Roskomsvoboda claims. "However, it has not taken any action so far."

Roskomsvoboda believes eight VPN apps are no longer available on the Russian App Store, including popular ones such as NordVPN, Proton, and Private Internet Access.

However, not all of these VPNs were necessarily taken down recently, or even by Roskomnadzor itself. "We have not received any communication from Apple as we have unlisted our apps from Russian versions of application stores ourselves back in 2023," a NordVPN representative told The Register. We've reached out to Roskomsvoboda for clarification on which VPNs Roskomnadzor itself has banned lately.

Vladimir Putin's Russia has been struggling to get VPNs taken down for years now, and when a Russian senator claimed in October that 2024 would see Roskomnadzor finally crack down on VPNs in a big way, it wasn't clear if anything would come of it. However, it seems Roskomnadzor's fresh focus is on stopping VPN apps from being distributed rather than merely blocking VPN servers.

We have asked the Tim Cook-run Apple to confirm it sent the notifications.

 

A Mississippi state law (introduced as House Bill 1126) that, among other things, requires platforms to implement age verification has been declared largely unconstitutional by the US District Court for the Southern District of Mississippi.

The law was challenged by Big Tech trade group NetChoice in the NetChoice v. Fitch lawsuit. As the law was to come into force on July 1, the plaintiff asked for a preliminary injunction to prevent enforcement.

This has now been granted in part and denied in part by the district court, which found that “a substantial number, if not all, of H.B. 1126’s applications are unconstitutional judged in relation to its legitimate sweep.”

We obtained a copy of the decision for you here.

Observers are now waiting to see how the ruling might fit the – apparent – direction the Supreme Court is giving to lower courts, or how it might affect the high court’s own future decisions.

Related: The 2024 Digital ID and Online Age Verification Agenda

Meanwhile, the Mississippi law is yet another in a series of legislative efforts introduced under the banner of protecting children from predatory behavior online. As summed up by the court, one of the bill’s sections requires “all users” – adults and children – to verify their age.

Verification would be necessary to create an account on what the law refers to as non-excluded internet services, while another provision requires parental consent when a minor opens such an account.

“This burdens adults’ First Amendment rights, and that alone makes it overinclusive,” is how the court explained its ruling against age verification.

Meanwhile, the legislation also sets limitations on the data the relevant online services can collect. In addition, platforms are required to make “commercially reasonable efforts to develop and implement a strategy to prevent or mitigate the known minor’s exposure to harmful material.”

The data collection provision was not contested in the lawsuit, and so the court opinion issued this week does not address it – while granting NetChoice’s motions for a preliminary injunction concerning the other provisions.

The court – unlike some observers, who warn about the harmful knock-on effects of such efforts – sees nothing but good intentions behind bills like H.B. 1126, but it is critical of how vaguely the law delineates its own scope, and even its definition of a digital service provider.

In addition, the court cites the lack of specificity about how tech platforms would go about ascertaining somebody’s parental status, and faults the law in general as being “either overinclusive or underinclusive, or both.”

The case is now expected to move into the appeals stage with the Fifth Circuit.

 

It’s really a no-brainer both for politicians and those crafting the wording and perception of their policies.

Namely – if you want genuinely complex and controversial initiatives (such as those related to mass surveillance and privacy infringements) fast-tracked both in legislatures and the media/public, just frame them as geared toward “child safety.”

Job done. Not many will even attempt to stand up to this, even if arguments in favor are patently disingenuous.

One gets the sense this is what Australia’s “chief censor” – eSafety Commissioner Julie Inman Grant – is there to do, and she seems to understand her assignment well. Whether she succeeds, though, is a whole different question.

For now, Grant is not letting up in her attack on online security and privacy via demands for swift implementation of age verification schemes by online platforms.

Grant is now setting a six-month deadline and threatening mandatory codes unless these platforms play along.

It might bear repeating, and louder, “for the people in the back”: the only way to truly verify anyone’s age online is for adults to present a copy of a government-issued ID to the platforms that rule the internet – platforms themselves ruled by governments.

This effectively destroys online anonymity, and in many countries and under many regimes, people’s (physical) safety.

To her “credit” – Grant does always seem more concerned with how her initiatives are perceived than with what they can realistically achieve.

And so reports say her latest push is to have online platforms implement age verification over the next six months or be forced to do so by a “mandatory code.”

The alternative to platforms voluntarily adopting “child safety rules” is that such rules will eventually be imposed on them.

(The rules in question are related to access to pornography, but also “other inappropriate” content; “suicide” and “eating disorders” are lumped into this, and it’s unclear whether “eating disorders,” as defined by Grant, includes only undereating or overeating as well.)

Effectively, Grant has set October 3 of this year as the deadline for tech companies to tell the Australian government how they plan to implement their own “codes” – before the government does it for them. As any good democratic government does /s.

The scope of the envisaged standards is quite wide: standards for “app stores, websites including pornography and dating websites, search engines, social media platforms, chat services, and even multi-player gaming platforms check(ing) that content is suitable for users,” Grant is quoted as saying.

 

NewsGuard co-founder and co-CEO Steve Brill has published a book, “The Death of Truth” – but he’s not taking any responsibility. On the contrary.

Namely, NewsGuard – promoted to customers as Brill’s “apolitical (misinformation) rating system for news sites” – is often blasted as yet another tool to suppress online speech, and is currently under investigation by Congress for possible First Amendment violations.

But corporate media sing his praises, presenting him as a “media maven.”

A censorship maven more like it, critics would say. And while getting his book promoted, Brill managed to add his name to the steadily growing list of governments, NGOs, and associated figures who are attacking online anonymity.

Along with end-to-end encryption, the ability to interact anonymously is a cornerstone of the internet, but these two key elements that ensure not only privacy but also the security of individuals, companies, etc., have become the two main targets for authoritarian (labeled as such or acting in that spirit) governments.

Brill’s contribution: a set of practical solutions that includes “banning anonymous posting online and funding media literacy programs.”

The problem that this is supposed to fix is, essentially, that social media platforms are not yet fully under control, and therefore neither are their users (and voters).

If anonymity were taken out of the equation, Brill is reported as saying, it would be “easier to sue tech companies for the false content posted on their platforms,” and it would open the door to “waging legal campaigns against social media companies for violating their own terms of service.”

There’s another snippet of a veiled threat aimed at tech companies, concerning what might happen to them if they “misbehave” (such as by letting up on the already extraordinary levels of censorship), especially during a campaign season.

Reporting about Brill’s Washington DC garden party to promote his book and the efforts to “clean up the internet and bring truth back to life” – the Washington Post repeatedly mentions “bad information” as that ominous source of “divisions” and “polarization.”

We’ve been hearing about “misinformation,” “disinformation,” and even “malinformation” that must be fought tooth and nail. But what is “bad information” – could it simply be information that one doesn’t like?

Whatever it is, Brill and his ilk seem willing to dismantle the internet itself, in order to get rid of it.

 

The Supreme Court announced today that it will review a legal challenge to a Texas statute mandating digital ID verification for any websites and apps that could be deemed “harmful to minors.” The law is usually cited in relation to pornographic material, but the broad term “harmful to minors” can apply across many websites, preventing people from interacting with a website without first uploading their ID.

This legal battle revolves around Texas’ age verification bill, introduced in 2023.

The law also compels these sites to present health warnings concerning the alleged psychological dangers associated with pornography consumption. Notably, this labeling requirement does not yet extend to search engines or social media platforms.

Websites that fail to comply with the law face steep fines, including daily civil penalties of up to $10,000 and, if a minor accesses restricted content, potential fines from the Texas attorney general up to $250,000 per instance.

Texas is not alone in implementing such regulations; similar laws are currently active in seven other states and are set to be introduced in more states soon.

The Free Speech Coalition, along with several adult website operators, filed a lawsuit against the bill. Their legal argument is that the law infringes on First Amendment rights. A federal district court initially halted the law’s enforcement just before its implementation on September 1, 2023.

Mandatory digital ID requirements for website and social media use raise significant concerns about the chilling effect on free speech. These requirements can deter online participation due to privacy fears, and undermine the anonymity vital for activists and whistleblowers. Such policies may also lead to self-censorship, as users might avoid sharing controversial opinions out of fear of being easily traced. Additionally, implementing digital IDs poses complex legal, technical, and logistical challenges that could result in bureaucratic errors and data breaches. The major Big Tech ID verification provider AU10TIX was recently reported to have suffered a data leak, though the company says it hasn’t seen evidence of any user data being exploited.

The majority of the panel at the US Court of Appeals for the 5th Circuit concluded that the Texas law is “rationally related to the government’s legitimate interest in preventing minors’ access to pornography,” using the least stringent rational-basis review standard, and thus did not violate the First Amendment. In contrast, Judge Patrick Higginbotham dissented, arguing that the law necessitates strict scrutiny due to its content-based restrictions on adult access to protected speech.

As the 5th Circuit allowed its decision to stand, the Free Speech Coalition and the affected websites escalated the matter to the Supreme Court. Their appeal emphasized the contradiction between the 5th Circuit’s decision and established Supreme Court precedents regarding sexual content and expression. They argue that the law unduly burdens adults’ constitutional rights by requiring the disclosure of personal information, thus increasing the risk of data breaches and privacy violations.

Texas officials defend the legislation, asserting it as a reasonable measure to protect minors from sexually explicit materials and not an undue burden on the porn industry.

[–] c0mmando@links.hackliberty.org 2 points 4 days ago (1 children)

also consider any prior activity from this used phone will now be associated with you. when people are considering switching to grapheneos, i typically recommend buying a new pixel 7a in store using cash.

 

Google is testing facial recognition on one of its campuses, and refusing to be subjected to this is not an option for the giant’s employees.

In other words, opt-out is not a feature of the surveillance scheme – the only option available to employees is to fill out a form declaring that they don’t want the images taken from their company IDs, which are matched against security camera footage, to be stored.

Reports are saying that this is happening in Kirkland, a suburb of Seattle, where facial recognition tech is used to identify employees by the images on their ID badges, in order to keep unauthorized people from entering the premises.

And while badge images are being used during the testing phase, Google representatives have said that won’t be the case in the future – though reports quoting them do not clarify what type of ID, or images, might be used instead.

According to Google and its division behind the project, Security and Resilience Services (GSRS), the purpose is to mitigate possible security risks.

Google “guinea-pigging” its own employees is seen as part of a wider push by the corporation to position itself in the expanding field of AI-driven surveillance development and deployment – even if this adds yet another entry to Google’s already huge “portfolio” of privacy controversies.

A spokesperson for Google insisted that the testing in Kirkland and eventual implementation of the technology is squarely security-driven, and reports mention one serious incident, the 2018 shooting at the YouTube office in California, as justification for the measures now being put in place.

However, there is already evidence that this type of surveillance is also being used to control and discipline employees.

According to an internal document seen by CNBC, the Kirkland experiment is “initially” taking place there, suggesting facial recognition will be deployed elsewhere on Google campuses; and the officially stated goal is to identify persons who “may pose a security risk to Google’s people, products, or locations.”

But even before the more sophisticated and elaborate surveillance trials started, Google used security camera footage to identify a number of employees who protested over labor conditions, as well as over the giant’s Project Nimbus, which involves the company in the conflict in the Middle East. More than 50 people were fired.

 

Those still using Microsoft Windows (now in version 11) as their operating system in 2024 have a lot of experience being left out of the “decision-making process” concerning their own computer and their own data.

This is what closed-source, proprietary software gets you (in addition to a lack of innovation and overall technical quality); but there are even more ways to avoid transparency, and, frankly, disrespect paying customers.

And one is introducing questionable features without even announcing them.

OneDrive – Microsoft’s cloud service – is also available to back up Windows folders like Desktop, Documents, Music, Pictures, Videos… and as it turns out, users don’t even have to agree to this – or even know it’s happening.

Namely, if you are installing Windows 11 (signed into the Microsoft account, as Microsoft prefers), the default is now to upload content from those folders to Microsoft’s cloud. And Microsoft didn’t bother informing their users about this change, compared to the previous installation process, Neowin reported.

“Informing” here means there was no press release – and not even a prompt during installation and setup.

The backup – that is, the syncing of the files – is already under way or done as soon as a fresh install is finished, and users are reportedly only (slowly) becoming aware of the change because of new visual indicators on their desktop shortcuts and folder icons (showing that the backup is in progress or done).

Windows users can still be grateful there are several ways to deal with the situation. One is to go to the OneDrive settings, and then go through several steps (Sync and Backup > Manage Backup…) and uncheck whatever folders should not sync with the Microsoft cloud service.

(But on older versions of OneDrive, the path is Manage Backup > Stop Backup.)

Another way to remedy the situation is to install Windows offline, that is, not signed into the Microsoft account (although it’s not clear what happens once a user signs in after the install – or what might start happening at some later date).

The third method is to remove OneDrive from Windows entirely – or disable it by policy, as sketched below.

And the fourth and best – stop using Windows.
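
For those not ready for that fourth option, the opt-out can also be scripted. Below is a minimal sketch in Python that sets Microsoft’s documented “Prevent the usage of OneDrive for file storage” Group Policy registry value; the path and value name come from that documented policy, but treat this as a sketch and verify it against your own Windows build before relying on it.

```python
# Minimal sketch: disable OneDrive file sync machine-wide via the documented
# "Prevent the usage of OneDrive for file storage" Group Policy value.
# Run from an elevated (Administrator) Python session on Windows.
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\OneDrive"

def disable_onedrive_sync() -> None:
    # Create (or open) the policy key under HKEY_LOCAL_MACHINE and set
    # DisableFileSyncNGSC = 1, which blocks OneDrive sync for all users.
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
        winreg.SetValueEx(key, "DisableFileSyncNGSC", 0, winreg.REG_DWORD, 1)

if __name__ == "__main__":
    disable_onedrive_sync()
    print("OneDrive sync disabled by policy; sign out or reboot to apply.")
```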

 

AU10TIX, an identity verification company operating out of Israel and serving prominent clients like TikTok and more recently Elon Musk’s X, was found to have inadvertently left sensitive user information vulnerable after administrative credentials were exposed online, according to a report from 404 Media.

The company, known for processing photos and drivers’ licenses to verify identities, allegedly had this security lapse exposed by cybersecurity firm spiderSilk, revealing a potential goldmine for hackers.

The exposed data, accessible for over a year, included not only basic identity details such as names, birth dates, and nationalities but also images of the identity documents themselves, such as drivers’ licenses. This breach underscores a growing concern as more platforms, including social networks and adult content sites, demand real identity verification from users, increasing the risk of personal data exposure.

Further complicating the issue, AU10TIX’s services involve sophisticated processes like “liveness detection” and age estimation through photo analysis, indicating the depth of data potentially compromised.

The breach was first detected when credentials stolen by malware were found on a Telegram channel. The channel posted the credentials in March 2023, though they had been harvested back in December 2022. They included passwords and tokens for various services – which, 404 Media suggests, deepens the concern.

In a statement, AU10TIX said “While PII data was potentially accessible, based on our current findings, we see no evidence that such data has been exploited. Our customers’ security is of the utmost importance, and they have been notified.”

X, formerly known as Twitter, has recently introduced a new policy requiring users who earn through its platform—via advertising or paid subscriptions—to verify their accounts using government-issued IDs.

This move, facilitated through a partnership with AU10TIX, was designed to reduce impersonation and fraud. Taking effect immediately for new creators, and by July 1, 2024, for existing ones, the policy aims to enhance authenticity and secure user transactions.

However, it also sparks significant privacy and free speech concerns, as the platform is recognized for championing free expression—a principle often supported by the ability to remain anonymous.

The implementation of mandatory government ID verification by X is part of a wider trend towards digital ID verification in the online and political arenas, raising questions about the impact on free speech and anonymity.

While the intent behind such policies is to improve security and authenticity, they risk infringing on the fundamental rights to privacy and anonymous speech, essential for activists, whistleblowers, and those critical of their governments.

 

More controversy is developing in the UK, this time in Scotland, around the use by law enforcement of cameras equipped with live facial recognition technology.

Reports say that the police in Scotland may intend to start using this tech to catch shoplifters and persons who break bail conditions. But civil rights group Big Brother Watch is warning against any kind of deployment of live facial recognition as incompatible with democracy – primarily because it indiscriminately jeopardizes the privacy of millions of people.

To make sure that doesn’t happen, the non-profit’s head of research, Jake Hurfurt, has told the press that the tech should be banned.

That would also be an improvement from the point of view of legal clarity around how AI and big data are used by law enforcement, since currently, Hurfurt remarked, the government and the police “cobble together patchwork legal justifications to experiment on the public with intrusive and Orwellian technology.”

Big Brother Watch offered another observation: the UK is one of the few countries outside of China and Russia (apparently, even the EU is “scaling back”) that is ramping up this type of surveillance.

The previous heated debate over live facial recognition had to do with the London police; at the moment, the Met’s decision to deploy it – besides being “a multi-million pound mistake” – is also facing a legal challenge, the group said.

They are hopeful this might serve as a teachable moment for the police in Scotland and dissuade them from repeating the same costly “experiment” of trying to usher in a “hi-tech police state.”

Meanwhile, press reports in the UK are confirming that Scotland police are considering using the technology, which works by trying to match images of people recorded by surveillance cameras with existing police databases.

The problem with using this as a method of policing in crowded streets is that it turns every citizen who happens to pass by one of the cameras into a justified – as far as the authorities are concerned – target, as a “potential suspect.”
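
To make those mechanics concrete, here is an illustrative Python sketch of the embedding-and-threshold matching that live facial recognition systems generally rely on; the embedding source, similarity metric, and threshold value are generic assumptions, not details of any deployment Police Scotland has announced.

```python
# Illustrative sketch of watchlist matching: each face a camera captures is
# reduced to an embedding vector (by some face recognition model, assumed
# here) and compared against reference embeddings from a police database.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe: np.ndarray,
                            watchlist: dict[str, np.ndarray],
                            threshold: float = 0.6) -> str | None:
    # Every passer-by becomes a probe; the threshold alone governs how often
    # an innocent face scores a false match against the watchlist.
    best_name, best_score = None, threshold
    for name, reference in watchlist.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

Whether a score clears that threshold is all that separates “passer-by” from “potential suspect.”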

And the target may be shoplifters today – but who knows who it might be tomorrow if, as Big Brother Watch fears, “we’re sleepwalking into a high-tech police state.”

The fear that Scotland may be on the way toward introducing live facial recognition as a police tool originates from a Scottish Police Authority conference on biometrics, where Assistant Chief Constable Andy Freeburn said:

“I think we do need to get into the difficult and potentially divisive topic of live facial recognition technology; we need to look at the limits of AI – and I hope that today is the first step in a wider debate.”

 

Australia’s chief censor, eSafety Commissioner Julie Inman Grant, by her own admission already lives in “a dystopia.” Namely, in an era of the internet where allegedly, and to sum her sentiment up, “nobody’s thinking of the children.”

And so, to make both the dystopia and the internet worse, Grant is sticking to her guns, at least when it comes to continued anti-encryption rhetoric. This is despite the fact that Australia’s proposed new rules have seemingly gotten “watered down” after strong pushback.

It’s a dangerous game Grant and her ilk are playing, considering that encryption is the best known protection that people of any age (and businesses and governments) currently have on the internet.

But this big picture is just something various jurisdictions, like the EU and Australia, refuse to acknowledge – they would rather essentially break the internet instead.

Critics say that’s because the real goal is not to target surveillance at child abusers, but to facilitate mass surveillance of everybody.

And Grant has just made another admission. “Resistance from industry (to proposed anti-encryption measures) during the public consultation this year was more robust than we expected,” she said, noting that a reason for this resistance was fear of widespread government surveillance.

But she dismissed it, saying that “the world we live in today” is already dystopian because adults (such as law enforcement) allegedly have no tools to stop online abuse of children or the promotion of terrorism.

These comments come after the new standards originally announced in Australia in November underwent changes before the final draft was submitted to parliament last Friday.

Namely, it “improves” on the vague language by stating, “companies will not be required to break encryption and will not be required to undertake measures not technically feasible or reasonably practical,” reports say.

At least, this applies to building whatever is considered “a systemic weakness” into the service, and specifically concerning end-to-end encryption, it applies to building “a new” decryption capability.

This is seen by the industry behind messaging platforms, from Apple to Signal, as a win, but Grant’s subsequent reaction via an op-ed clearly shows she is unhappy with the outcome.

And there is nothing to stop the commissioner from, going forward, introducing another proposal, perhaps attacking encryption from another angle, or just amplifying the child safety narrative.

 

When it comes to privacy and overall security of some of people’s most sensitive (financial, but also, “behavioral”) biometric data, massive global banks and payment processors, and burgeoning biometric surveillance was always going to be that perfect “match made in hell.”

And that reality is gradually taking shape. Not only is biometric tech becoming ever more ubiquitous (still, in most countries, without proper legal protections or proper “disclosure” of how and why it is being used) – but behemoths like Mastercard and Visa are realizing they have access to massive amounts of highly monetizable personal data.

The personal information that the likes of Mastercard get with every transaction you make includes not only the number but also the location and the content of a purchase… and soon behavioral patterns start emerging. But it doesn’t stop there.

Meanwhile, the goal (often, but not always) openly talked about is the lucrative business of “sharing” that data for targeted advertising.

But in a possible future Orwellian society – it really would be very useful to the surveillance state in so many different ways.

That clearly is not how the trend is going to be sold to the customer when financial execs speak about it.

Most people might expect this to be happening online, but Mastercard is very hungry for “biometric behavioral data” (the very phrase sounds almost as frightening as the thing is – and it is described by Mastercard itself in this way: “Track(ing) personal actions such as typing style and how you hold your phone, as well as habits such as the time of day you usually login or your usual IP address”).

And the giant is obviously comfortable talking about biometrics being expanded to “a number” of brick-and-mortar stores and their in-store payment systems this year.

“(…) From the consumer point of view, there’s no card, there’s no phone needed at all (at physical checkouts). You just present yourself at a monitor device.”

That’s right – “just yourself” – nothing more, folks. /s

And that’s how a podcast host recently described the “experience,” asking Mastercard Executive Vice President of Identity Products and Innovation Dennis Gamiello to confirm it. “It could be a hand scanner, face scanner, whatever. And then you are authorized,” the host went on, and Gamiello fully agreed that’s how it’s going to work.

Mastercard’s executives are saying that people’s behavioral biometrics will be used – but of course – simply to enhance their “experience” and its perceived convenience.

“We’re actively working with partners around the globe to move to more seamless and secure authentication methods. That’s both the physical biometric, which is what we’re talking about here, as well as behind the scenes. There’s behavioral biometrics,” says a post on Mastercard’s site.

More than that – there is a vision of a future where digital ID will take over to verify payments, and link those with incentives such as reward programs.

[–] c0mmando@links.hackliberty.org 0 points 2 weeks ago (1 children)

Thanks for the post, I've made links.hackliberty.org available over Tor at http://snb3ufnp67uudsu25epj43schrerbk7o5qlisr7ph6a3wiez7vxfjxqd.onion
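
(For anyone curious how a site is typically published as an onion service, here is a minimal sketch using the stem library against a local Tor daemon with its control port enabled; the ports here are assumptions, and the address Tor generates would of course differ from the one above.)

```python
# Minimal sketch: publish a local web server as a Tor onion service with the
# stem library. Assumes a running Tor daemon with "ControlPort 9051" set in
# torrc and a web server listening on localhost:8080.
from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()  # cookie or password auth, per your torrc
    # Map the onion service's port 80 to the local web server on port 8080.
    service = controller.create_ephemeral_hidden_service(
        {80: 8080}, await_publication=True
    )
    print(f"Published at {service.service_id}.onion")
    input("Press Enter to stop the service...")  # service ends with session
```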
