The Center for Artificial Intelligence and Digital Policy (CAIDP) has filed a complaint with the United States Federal Trade Commission (FTC) in an attempt to halt the release of powerful AI systems to consumers.
The complaint centers on OpenAI’s recently released large language model, GPT-4, which the CAIDP describes in its March 30 filing as “biased, deceptive, and a risk to privacy and public safety.”
CAIDP, an independent non-profit research organization, argued that the commercial release of GPT-4 violates Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.”
To back its case, the AI ethics organization pointed to OpenAI’s own GPT-4 System Card, which states: “AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.”
CAIDP added that OpenAI released GPT-4 to the public for commercial use with full knowledge of these risks and that no independent assessment of GPT-4 was undertaken prior to its release.
As a result, the CAIDP wants the FTC to open an investigation into the products of OpenAI and other operators of powerful AI systems.
While ChatGPT was released in November 2022, the latest model, GPT-4, is considered to be significantly more capable. Upon its release on March 14, a study found that GPT-4 could pass some of the most rigorous U.S. high school and law exams, scoring within the 90th percentile.
It can also detect smart contract vulnerabilities on Ethereum, among other things.
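As a rough illustration of how such an audit could be set up, the sketch below assembles a chat-completions request asking GPT-4 to flag issues in a deliberately vulnerable Solidity contract. Only the OpenAI chat message format and the `gpt-4` model name are taken from the real API; the contract, prompt wording, and helper function are hypothetical, and no request is actually sent.

```python
# Illustrative sketch only: no API call is made, and no API key is needed.
import json

# A toy Solidity contract with a classic flaw: the balance is zeroed only
# AFTER the external call, leaving a re-entrancy window.
SOLIDITY_SNIPPET = """
pragma solidity ^0.8.0;
contract Vault {
    mapping(address => uint256) public balances;
    function withdraw() external {
        uint256 amount = balances[msg.sender];
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        balances[msg.sender] = 0;  // state update after external call
    }
}
"""

def build_audit_request(contract_source: str) -> dict:
    """Assemble a chat-completions payload asking the model to list vulnerabilities."""
    return {
        "model": "gpt-4",
        "messages": [
            {
                "role": "system",
                "content": "You are a smart contract auditor. "
                           "List any vulnerabilities in the contract you are given.",
            },
            {"role": "user", "content": contract_source},
        ],
    }

payload = build_audit_request(SOLIDITY_SNIPPET)
print(json.dumps(payload, indent=2))
```

In practice the payload would be sent through an OpenAI client and the model's reply reviewed by a human auditor; the model's output is a starting point for review, not a substitute for a formal audit.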
Read more on cointelegraph.com