

Why Is Google's AI Model Facing European Union Scrutiny

As part of the EU's wider effort to check how AI systems handle personal data, the bloc's regulators said Thursday they are looking into Google's artificial intelligence model over concerns about its compliance with the bloc's strict data privacy rules.

Google says it applies its "Responsible AI Practices" in PaLM2. Photo: ai.google

American tech giant Google has come under the European Union's (EU) scrutiny over concerns that its Pathways Language Model 2, also known as PaLM2, a next-generation language model for advanced artificial intelligence (AI) capabilities, may not comply with the bloc's data privacy rules.


With Google's European headquarters based in Dublin, Ireland, the Irish Data Protection Commission said it has opened an inquiry into Google's PaLM2.

What Is PaLM2

Google describes PaLM2 as its next-generation language model with "improved multilingual, reasoning and coding capabilities" that builds on the company's "legacy of breakthrough research in machine learning and responsible AI."

Google uses PaLM2 to power a range of generative AI services, including email summarisation.

Google says it applies its "Responsible AI Practices" in PaLM2.

Under the Privacy section of Google's responsible AI practices information on its website, the company says that even though there may be enormous benefits to building machine learning models that operate on sensitive data, "it is essential to consider the potential privacy implications in using sensitive data."

"This includes not only respecting the legal and regulatory requirements, but also considering social norms and typical individual expectations. For example, it’s crucial to put safeguards in place to ensure the privacy of individuals considering that ML [machine learning] models may remember or reveal aspects of the data they have been exposed to. It’s essential to offer users transparency and control of their data," says Google.

Google says it is "constantly developing such techniques to protect privacy in AI systems, including emerging practices for generative AI systems."

X, Meta Bend To EU Regulations

The commission said its inquiry is examining whether Google has assessed whether PaLM2's data processing would likely result in a "high risk to the rights and freedoms of individuals" in the EU.

The Irish watchdog said earlier this month that Elon Musk's social media platform X, formerly Twitter, had agreed to permanently stop processing user data for its AI chatbot Grok. The watchdog had taken the company to court the month before, filing an urgent High Court application to get it to "suspend, restrict or prohibit" the processing of personal data contained in its users' public posts.

X's agreement to stop processing its EU users' data means that Grok can no longer use Europeans' posts on the microblogging platform.

The platform had reportedly opted EU citizens in to Grok's training without their consent, meaning Grok was using X users' personal information, including posts, to build a rival to ChatGPT and Google Gemini.

Meta Platforms, formerly Facebook, paused its plans to use content posted by European users to train the latest version of its large language model after apparent pressure from the Irish regulators. The decision "followed intensive engagement" between the two, the watchdog said in June.

“We will release a multimodal Llama model over the coming months, but not in the EU due to the unpredictable nature of the European regulatory environment,” reports quoted a Meta spokesperson as saying.

Meta had updated its privacy policy to allow it to use all public and non-public user data, except chats between individuals, in current and future AI technology; the change was due to take effect on June 26 this year.

Italy's data privacy regulator last year temporarily banned ChatGPT over data privacy breaches and required the chatbot's maker, OpenAI, to meet a set of demands to resolve its concerns.