US to evaluate new AI models from Google, Microsoft, xAI for safety


The US Department of Commerce (Commerce) will now evaluate new AI tools and capabilities from Google, Microsoft and xAI before they are made available to the general public.

The tech companies have agreed to voluntarily submit their models to Commerce's Center for AI Standards and Innovation (CAISI) for testing.

The new agreements, which build on commitments made by AI firms like OpenAI and Anthropic under the Biden Administration, will assess the security and capability of AI models from each of the firms.

"These expanded industry collaborations help us scale our work in the public interest at a critical moment," stated Chris Fall, director of CAISI.

The assessments of the AI tools will encompass "testing, collaborative research and best practice development related to commercial AI systems."

Google's best-known AI tool is Gemini, a chatbot developed through its DeepMind subsidiary that is publicly accessible across Google products and is currently used by US military and defence organisations.

Microsoft's best-known AI tool is Copilot, while xAI's sole AI product is Grok, a chatbot that has drawn public criticism for allegedly undressing individuals in pictures.

CAISI announced that it has carried out 40 prior assessments of AI tools, including testing and evaluation of some "state-of-the-art models that remain unreleased."

The centre did not disclose which models have been prevented from public release.

Representatives of Google, Microsoft and SpaceX, the Elon Musk firm that now owns xAI, did not respond to requests for comment.

By bringing in more companies for research and safety testing of commercial AI products, the Trump White House, which has typically taken a hands-off approach to oversight and regulation of AI and technology companies, is changing course.

US President Donald Trump signed a series of executive orders last year that served as the foundation for his administration's "AI Action Plan," which he claimed would "remove red tape and onerous regulation" surrounding AI development and guarantee that the US would "win" through technological advancements and control.

However, the White House appears to be reconsidering its stance in light of the US military's growing use of AI and Anthropic's recent assertion that it has created a model, called Mythos, that is too potent to be made public.

Although Anthropic is embroiled in a legal battle with the US Department of Defence over its unwillingness to remove safety precautions for government use of its models, senior members of Trump's administration met with Anthropic CEO Dario Amodei last month.
