Google has recently introduced a new bug bounty program geared entirely toward its artificial intelligence software. The company's goal is to identify potential security threats within its AI, and it will pay significant financial rewards to successful security researchers. The initiative is intended to demonstrate Google's dedication to improving its AI products for the good of all users.
The program offers financial incentives much higher than those of traditional bounty programs: a participant can receive a payout of up to $20,000 for identifying a critical vulnerability. This top tier of rewards applies to major products such as Google Search.
Gmail and Google Drive, along with the Gemini apps, also fall under the bounty program, and for especially high-quality reports Google will award successful researchers even more than $20,000.
In addition, Google will provide bonus payments for exceptional security research findings, which can raise the total payout for a single report to as much as $30,000. Google is especially looking for comprehensive, highly original vulnerability reports on its suite of AI tools; the bonuses are intended to incentivize outstanding work and reward creativity.
A majority of Google's popular AI products are covered under the AI Bounty Program. This includes the Gemini apps, the experimental AI assistant Jules, and the experimental note-taking tool NotebookLM. Researchers are encouraged to focus their testing efforts on these AI tools.
Google is searching for specific kinds of cybersecurity flaws. One example is an injected command that manipulates Gmail, even forcing it to send email summaries to an attacker. Another is a command that unlocks a smart door through Google Home. Any unauthorized data leak is also considered a serious qualifying bug.
Not every AI issue counts toward a bug bounty. AI hallucinations are not considered security bugs, and issues involving copyrighted content or hate speech do not qualify either; those should be reported through the in-app feedback tool. The bounty program is strictly for security vulnerabilities.
Google states that the program also serves to improve the safety of its AI models. Proactively identifying flaws makes it harder for hackers to exploit the models in the real world and helps build an overall more trustworthy and secure AI system. Previous bounty programs have already paid researchers more than $430,000 in bounties for AI-related vulnerabilities.
Security researchers can begin hunting for bugs immediately. They should familiarize themselves thoroughly with the target AI products and report all findings through Google's official channel. Knowing the program's rules is important for receiving payment. It is an excellent opportunity to help ensure AI safety.