Seeking to bolster the safety of its popular AI chatbot, OpenAI is turning to external cybersecurity and penetration experts, also known as "red teams," to find holes in the AI platform.
The company says it is looking for experts across various fields, including cognitive and computer science, economics, healthcare, and cybersecurity. The goal, OpenAI says, is to improve the safety and ethics of AI models.
The open invitation comes as the US Federal Trade Commission launches an investigation into OpenAI's data collection and security practices, and as policymakers and businesses question how safe using ChatGPT really is.
"[It's] crowdsourcing volunteers to jump in and do fun security stuff," Halborn Co-founder and CISO Steven Walbroehl told Decrypt. "It's a networking opportunity, and a chance to be [on] the frontline of tech."
"Hackers, the best ones, like to hack the newest emerging tech," Walbroehl added.
To sweeten the deal, OpenAI says red team members will be compensated, and no prior experience with AI is necessary, only a willingness to contribute diverse perspectives.
"We're announcing an open call for the OpenAI Red Teaming Network and invite domain experts interested in improving the safety of OpenAI's models to join our efforts," OpenAI wrote. "We're looking for experts from various fields to collaborate with us in rigorously evaluating and red-teaming our AI models."
Red teams are cybersecurity professionals who specialize in attacking systems (also known as penetration testing, or pen-testing) and exposing vulnerabilities. In contrast, blue teams are cybersecurity professionals who defend systems against attacks.
"Beyond joining the network, there are other collaborative opportunities to contribute to AI safety," OpenAI continued. "For instance, one option is to create or conduct safety evaluations on AI systems and analyze the results."
Founded in 2015, OpenAI entered the public eye late last year with the public release of ChatGPT, followed by the more advanced GPT-4 in March, taking the tech world by storm and ushering generative AI into the mainstream.
In July, OpenAI joined Google, Microsoft, and others in pledging to develop safe and secure AI tools.
While generative AI tools like ChatGPT have revolutionized how people create content and consume information, AI chatbots have not been without controversy, drawing claims of bias, racism, lying (hallucinating), and a lack of transparency regarding how and where user data is stored.
Concerns over user privacy led several countries, including Italy, Russia, China, North Korea, Cuba, Iran, and Syria, to ban the use of ChatGPT within their borders. In response, OpenAI updated ChatGPT with a delete-chat-history function to boost user privacy.
The Red Teaming Network is the latest play by OpenAI to attract top security professionals to help evaluate its technology. In June, OpenAI pledged $1 million toward cybersecurity measures and initiatives that use artificial intelligence.
While the company said researchers are not restricted from publishing their findings or pursuing other opportunities, OpenAI noted that members of the program should be aware that involvement in red teaming and other projects is often subject to Non-Disclosure Agreements (NDAs) or "must remain confidential for an indefinite period."
"We encourage creativity and experimentation in evaluating AI systems," OpenAI concluded. "Once complete, we welcome you to contribute your evaluation to the open-source Evals repo for use by the broader AI community."
OpenAI did not immediately respond to Decrypt's request for comment.