France fines Clearview AI maximum possible for GDPR breaches • TechCrunch


Clearview AI, the controversial facial recognition company that scrapes selfies and other personal data from the internet without consent to feed an AI-powered identity-matching service it sells to law enforcement and others, has been hit with yet another fine.

This comes after it failed to respond to an order issued last year by the CNIL, France’s privacy watchdog, to stop its unlawful processing of French citizens’ information and delete their data.

Clearview responded by ignoring the regulator, thereby adding a third GDPR breach (non-cooperation with the regulator) to the earlier two.

Here is a CNIL summary of Clearview’s violations:

  • Unlawful processing of personal data (violation of Article 6 GDPR)
  • Failure to respect the rights of individuals (Articles 12, 15 and 17 GDPR)
  • Non-cooperation with the CNIL (Article 31 GDPR)

“Clearview AI had two months to comply with the injunctions set out in the formal notice and to justify its compliance to the CNIL. However, it provided no response to this formal notice,” the CNIL wrote in a press release announcing the sanction today [emphasis its].

“The chair of the CNIL therefore decided to refer the matter to the restricted committee, which is in charge of issuing sanctions. On the basis of the information brought to its attention, the restricted committee decided to impose a maximum financial penalty of 20 million euros, according to Article 83 of the GDPR [General Data Protection Regulation].”

The European Union’s General Data Protection Regulation (GDPR) allows for penalties of up to 4% of a company’s worldwide annual turnover for the most serious breaches, or €20 million, whichever is higher. The CNIL’s press release makes clear it is levying the maximum amount possible here.
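For illustration, the Article 83 cap described above can be sketched as a simple calculation. The turnover figures below are hypothetical, chosen only to show when each side of the "whichever is higher" rule applies:

```python
def gdpr_max_penalty(annual_turnover_eur: float) -> float:
    """Maximum fine for the most serious GDPR breaches (Article 83(5)):
    the higher of a flat EUR 20 million or 4% of worldwide annual turnover."""
    FLAT_CAP = 20_000_000  # EUR 20 million
    return max(FLAT_CAP, 0.04 * annual_turnover_eur)

# Hypothetical company turning over EUR 100M: 4% is only EUR 4M,
# so the flat EUR 20M cap governs.
print(gdpr_max_penalty(100_000_000))    # 20000000.0

# Hypothetical company turning over EUR 1bn: 4% (EUR 40M) exceeds the flat cap.
print(gdpr_max_penalty(1_000_000_000))  # 40000000.0
```

This is why a €20 million fine against a relatively small company like Clearview is already the maximum available under the regulation.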

However, whether France will get a penny of this money from Clearview remains an open question.

The US-based privacy scraper has racked up a slew of penalties from other data protection agencies across Europe in recent months, including €20 million fines from Italy and Greece, and a smaller penalty in the UK. But it’s not clear that any money has been handed over to any of these authorities, and they have limited resources (and legal means) to try to pursue Clearview for payment beyond their own borders.

So GDPR sanctions often amount to little more than a warning to stay away from Europe.

Clearview’s PR agency, LakPR Group, sent us this statement following the CNIL penalty, which it attributed to CEO Hoan Ton-That:

“There is no way to determine if a person has French citizenship, purely from a public photo from the internet, and therefore it is impossible to delete data from French residents. Clearview AI only collects publicly available information from the internet, just like any other search engine like Google, Bing or DuckDuckGo.”

The statement goes on to reiterate Clearview’s previous claims that it has no place of business in France or the EU, nor does it undertake any activities that would “otherwise mean it is subject to the GDPR,” as it puts it, adding: “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

(Note: The GDPR has extraterritorial reach, so its former arguments are meaningless, while its claim that it does nothing that would make it subject to the GDPR looks absurd given its database of 20+ billion images scraped from around the world, Europe included, Europe being part of planet Earth…)

The Ton-That statement also repeats a claim frequently made in Clearview’s public responses to the stream of regulatory sanctions its business attracts: that it built its facial recognition technology “with the intent of helping to make communities safer and assisting law enforcement in solving heinous crimes against children, seniors and other victims of unscrupulous acts,” not to profit from the unlawful exploitation of people’s privacy. However, a “pure” motive makes no difference to its requirement, under European law, to have a valid legal basis for processing people’s data in the first place.

“We only collect public data from the open internet and comply with all standards of privacy and law. I am heartbroken by the misinterpretation by some in France, where we do no business, of Clearview AI’s technology to society. My intentions and those of my company have always been to help communities and their people to live better, safer lives.”

Every time it receives a penalty from an international regulator it does the same thing: denying any breach and disputing that the foreign body has any jurisdiction over its business. So its strategy for dealing with findings that its data processing is unlawful appears to be, simply, not to cooperate with regulators outside the United States.

Obviously, that only works if you plan for your senior executives and staff never to set foot in territories where your business is sanctioned, and give up any idea of selling the sanctioned service to overseas customers. (Last year Sweden’s data protection watchdog also fined a local police authority for unlawful use of Clearview, so European regulators can act to clamp down on any domestic demand too, if required.)

Back home, Clearview has finally run up against some legal red lines lately.

Earlier this year it agreed to settle a lawsuit that accused it of violating an Illinois law barring the use of individuals’ biometric data without consent. The settlement saw Clearview agree to some limits on its ability to sell its software to most US companies, yet it still spun the outcome as a “big win,” claiming it would be able to work around the ruling by selling its algorithm (rather than access to its database) to private companies in the US.

Empowering regulators to order the deletion (or market withdrawal) of algorithms trained on unlawfully processed data looks like an important upgrade to their toolboxes if we are to avoid dystopia-fuelling AI.

It so happens that the EU’s incoming AI Act may contain such a power, per legal analysis of the proposed framework.

The bloc also recently presented a plan for an AI Liability Directive, which aims to encourage compliance with the broader AI Act by linking compliance to a reduced risk of AI model makers, deployers, users and so on being sued if their products cause a range of harms, including to people’s privacy.
