OpenAI, the artificial intelligence (AI) research and deployment firm behind ChatGPT, is launching a new initiative to assess a broad range of AI-related risks.
OpenAI is building a new team dedicated to tracking, evaluating, forecasting and protecting against potentially catastrophic risks stemming from AI, the firm announced on Oct. 25.
Called “Preparedness,” OpenAI’s new division will focus on potential AI threats related to chemical, biological, radiological and nuclear weapons; individualized persuasion; cybersecurity; and autonomous replication and adaptation.
Led by Aleksander Madry, the Preparedness team will try to answer questions such as how dangerous frontier AI systems are when misused, as well as whether malicious actors could deploy stolen AI model weights.
“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity,” OpenAI wrote, while acknowledging that AI models also pose “increasingly severe risks.”
According to the blog post, OpenAI is now seeking talent with a range of technical backgrounds for its new Preparedness team. Additionally, the firm is launching an AI Preparedness Challenge focused on preventing catastrophic misuse, offering $25,000 in API credits to the top 10 submissions.
OpenAI previously said in July 2023 that it was planning to form a new team dedicated to addressing potential AI threats.
Related: CoinMarketCap launches ChatGPT plugin
The risks potentially associated with artificial intelligence have been frequently highlighted, along with fears that AI could become more intelligent than any human.