OpenAI's Top Bug Bounty Skyrockets To $100K For Security Vulnerabilities

If you are a skilled cybersecurity expert, able to discover security threats and software vulnerabilities, you may soon be eligible for a $100,000 payday. OpenAI announced that it has raised the maximum payout of its security bug bounty program to $100,000, to compensate researchers who discover exceptionally severe or unique vulnerabilities.

Initially, ethical hackers and security researchers could earn a maximum of $20,000 for uncovering severe or exceptional vulnerabilities that threaten OpenAI's products. However, yesterday, OpenAI increased this top payout to a whopping $100,000.


It is worth mentioning that this is not the only avenue for security researchers to benefit from OpenAI's bounty program. OpenAI is also marking the new payout ceiling with a bonus promotion period running from March 26 to April 30, 2025. Security researchers who submit eligible reports during this window will receive additional bonuses.

This bonus targets researchers who report access control vulnerabilities such as Insecure Direct Object Reference (IDOR), a flaw in which a web application exposes identifiers for internal objects (for example, database IDs in a URL) without verifying that the requester is authorized to access them, letting malicious actors reach other users' data, as the sketch below illustrates. The previous reward range for this bonus was $200-$6,500, while the new range is $400-$13,000.
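To make the vulnerability class concrete, here is a minimal, hypothetical sketch of an IDOR flaw and its fix in a generic Flask app. The routes, the `orders` store, and the session handling are illustrative assumptions only; they do not describe OpenAI's actual systems or API.

```python
# Hypothetical IDOR example -- not OpenAI's code.
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only"  # placeholder for demonstration

# Fake data store: each order belongs to a specific user.
orders = {
    1: {"owner": "alice", "item": "GPU credits"},
    2: {"owner": "bob", "item": "API subscription"},
}

# VULNERABLE: trusts the client-supplied order_id without checking ownership,
# so any authenticated user can read any other user's order (classic IDOR).
@app.route("/orders/<int:order_id>")
def get_order_vulnerable(order_id):
    order = orders.get(order_id)
    if order is None:
        abort(404)
    return jsonify(order)

# FIXED: verifies that the requested object belongs to the requesting user.
@app.route("/my/orders/<int:order_id>")
def get_order_safe(order_id):
    order = orders.get(order_id)
    if order is None or order["owner"] != session.get("user"):
        abort(404)  # also hides the existence of other users' objects
    return jsonify(order)

if __name__ == "__main__":
    app.run(debug=True)
```

The fix is simply an ownership check at the point of object lookup; reports in this category typically demonstrate that such a check is missing on an endpoint that accepts user-controlled identifiers.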

According to OpenAI, its security bug bounty program was borne out of the need to "invest heavily in research and engineering to ensure” its “AI systems are safe and secure." It added that the initiative reflects its "commitment to rewarding meaningful, high-impact security research that helps" to "protect users and maintain trust."


As expected, OpenAI has established guidelines under which reports from participating ethical hackers will be assessed before they become eligible for a payout. The company added that it retains sole discretion to determine the priority and impact of any vulnerability, regardless of its prior classification. However, in cases where a vulnerability is downgraded, researchers will be given a comprehensive explanation for that decision.

While OpenAI has assured researchers that it will not pursue legal action against them as long as they comply with its policy, it has declined to commit to defending or indemnifying researchers against third-party claims, even for activities carried out while participating in the bounty program. If you're a researcher and the program appeals to you, you can log in and submit a report.