OpenAI said in a blog post on April 6, 2023 that it is ‘committed to keeping powerful AI safe and broadly beneficial’ (iStock/Getty Images)
ChatGPT maker OpenAI has defended its popular AI chatbot amid concerns about the threat it poses to society.
In a blog post published on Wednesday, the AI firm acknowledged “real risks” associated with the technology, but claimed that its artificial intelligence systems were subject to “rigorous safety evaluations”.
After completing training of its latest GPT-4 model, OpenAI said it spent six months safety testing the system before releasing it to the public.
The company also called for more regulation to ensure safety standards are followed across the industry. But even with tighter regulation and safety testing, it remains a major challenge to predict how people might misuse the technology once it is publicly available.
“Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it,” the blog post said.
“We will be increasingly cautious with the creation and deployment of more capable models, and will continue to enhance safety precautions as our AI systems evolve.”
The blog post comes a week after more than 1,000 experts called for AI development to be halted until the risks are fully understood.
“If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the letter’s authors wrote.
OpenAI said it was “eager to contribute” to such discussions and “actively engaging with governments on the best form of such regulation”.
The company also addressed recent allegations surrounding its collection of user data, after Italy last week became the first Western country to ban ChatGPT over privacy concerns.
Other European countries including Germany and Ireland are reportedly considering similar restrictions and are currently in talks with Italy’s data protection agency.
“While some of our training data includes personal information that is available on the public internet, we want our models to learn about the world, not private individuals,” OpenAI’s blog post states.
“That’s why we work to remove personal information from the training dataset where feasible, fine-tune models to reject requests for the personal information of private individuals, and respond to requests from individuals to delete their personal information from our systems.”
OpenAI did not respond to The Independent’s request for comment about the possibility of further sanctions.