Thursday, December 19, 2024

OpenAI and Anthropic Ink Major Deals with US Government

The U.S. Artificial Intelligence Safety Institute said on Thursday that AI firms OpenAI and Anthropic have signed agreements with the U.S. government covering research, testing, and evaluation of their artificial intelligence models.

The agreements are the first of their kind and come as both companies face regulatory scrutiny over the safe and ethical deployment of their AI technology.

California lawmakers are expected to vote as early as this week on a bill that would broaden regulation of how AI is developed and deployed in the state.

“Safe, trustworthy AI is crucial for the technology’s positive impact. Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment,” said Jack Clark, Co-Founder and Head of Policy at Anthropic, backed by Amazon and Alphabet.

Under the agreements, the U.S. AI Safety Institute will receive access to major new models from both OpenAI and Anthropic before and after their public release.

The agreements also provide for collaborative research to evaluate the capabilities of the AI models and the risks associated with them.

“We believe the institute has a critical role to play in defining U.S. leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on,” said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

“These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI,” said Elizabeth Kelly, director of the U.S. AI Safety Institute.

The institute, part of the U.S. Commerce Department’s National Institute of Standards and Technology (NIST), will also collaborate with the U.K. AI Safety Institute and provide feedback to the companies on potential safety improvements.

The U.S. AI Safety Institute was launched last year under an executive order from President Joe Biden’s administration to evaluate known and emerging risks of artificial intelligence models.

Get the latest news updates and stay informed with FELA NEWS!
