Clearview AI Evades Huge U.K. Fine; AI Transparency in Spotlight

19 October 2023 - 2:43 pm

Controversial facial recognition pioneer Clearview AI dodged a £7.5 million fine after a British appeals court ruled in its favor. The court held that the U.K.'s Information Commissioner's Office lacks jurisdiction over how foreign entities use British citizens' data. The ruling potentially sets a precedent for other nations trying to hold the company accountable for unauthorized data scraping.

Clearview's rapid rise to prominence came with its massive database of 30 billion images, primarily scraped from the web without explicit permission. Although the firm halted sales to most private entities in 2020, numerous U.S. law enforcement agencies continue to tap into its resources.

Multiple countries, including Australia, Canada, and France, have confronted Clearview over data protection concerns. Yet, the company remains resilient, with its operations mostly unscathed.

Shadows Over AI Transparency

Stanford University's recent investigation into the transparency of foundation AI models delivered a sobering verdict: not a single leading AI developer, including industry giants like Meta Platforms Inc., is sufficiently transparent about its models' societal implications.

The Foundation Model Transparency Index, developed by researchers at Stanford's Institute for Human-Centered Artificial Intelligence (HAI), ranked the transparency of major AI models. Although Meta's Llama 2 topped the list with a score of 54%, the overall findings were lackluster, with even the most transparent models barely crossing the halfway mark.
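For intuition on how such percentage scores arise: the index grades each model against 100 binary transparency indicators, and a model's score is the fraction it satisfies. Here is a minimal sketch assuming that fraction-of-indicators design; the `transparency_score` helper is ours, and the indicator counts are back-filled from the percentages quoted in this article rather than taken from the index's raw data:

```python
# Sketch of fraction-of-indicators scoring: each model is graded on
# 100 binary transparency indicators, and its score is the share it
# satisfies. Counts below are back-filled from the article's quoted
# percentages, not pulled from the index's published data.

def transparency_score(satisfied: int, total: int = 100) -> float:
    """Percentage of transparency indicators a model satisfies."""
    return 100 * satisfied / total

models = {"Llama 2": 54, "GPT-4": 47}  # illustrative indicator counts

for name, satisfied in sorted(models.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {transparency_score(satisfied):.0f}%")  # e.g. "Llama 2: 54%"
```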

Open-source models like Llama 2 and BLOOMZ showed a distinct advantage in the transparency arena. Surprisingly, despite its largely closed methodology, OpenAI secured 47%, thanks to the significant amount of information about its GPT-4 model available from external sources.

Open source or not, however, the models' developers failed to provide insight into their societal repercussions. These findings underscore a troubling pattern, especially as AI integration becomes increasingly pervasive.

Rishi Bommasani, a co-author of the Stanford study, highlighted the aim of the index: crafting a tangible benchmark for regulators. With the European Union's forthcoming Artificial Intelligence Act set to impose stringent rules on AI, transparency becomes a paramount concern. The act will categorize AI tools by risk, addressing misleading information, biased language, and biometric surveillance.

While an enthusiastic open-source community thrives around generative AI, major industry players cloak their operations in mystery. OpenAI exemplifies this trend toward secrecy, choosing to withhold research details on grounds of competitive pressure and safety concerns.

Stanford HAI plans regular updates to its Transparency Index and will broaden its scope to encompass newer models, ensuring a clearer lens on the ever-evolving AI landscape.

The juxtaposition of these two narratives reflects a broader concern: the struggle for transparency and accountability in tech. As Clearview evades legal repercussions, questions arise over international jurisdiction and data protection. Simultaneously, the AI transparency study lays bare the industry's general opacity, underscoring the urgent need for clearer regulatory frameworks. As technological advancement continues at an unprecedented pace, the balance between innovation and ethical responsibility remains precarious.


Bilgesu Erdem
