Google has always cared for its users, whether about security or anything else.
Google has launched many features or options specific to the security of the user’s browsing data. Just like this, Google has enhanced the security level of AI-generated content. Now, its put the bug bounty programs in a critical situation as Google has added the generative AI threats.
Google has its specific program known as the vulnerability reward program, that is basically for the security purpose of Google. This program has many features in favor of Google. Still, other than that now Google has added a new threat option in vulnerability reward programs that is attack scenarios specific to generated AI.
Before the publication the announcement was made that was shred with TechCrunch, in which Google said, “we believe expanding the vulnerability reward program will incentivize research around AI safety and security and will bring potential issues to light with ultimately make AI safer for every person that uses it.”
Where companies or people hate hackers, Google vulnerability reward program pays the ethical hackers so that they can do the findings and will disclose any security flaws. So that they can correct those flaws and make their Google’s security more advanced. This new modern tech AI has highlighted new security issues like potential for unfair bias or model manipulation. Google itself cleared it that they will rethink that how bugs it receives should be categorized and will be reported.
According to Google, it is doing this by using different findings from its newly formed AI read team, which is the group of ethical hackers that will stimulate a variety of adversaries, ranging from nation states. Government backed group to hacktivist and malicious insiders to hunt down security weaknesses in technology. The team recently conducted an exercise to determine biggest threats to technology behind generative AI products like the most famous ChatGPT and Google Bard.
After more concentration, Google team find out that large language models or LLMs are more sensitive to prompt injection attacks, just like whereby a hacker crafts adversarial prompts that can influence the behavior of models. The person attacking the site could use this attack to easily generate text that will be more harmful and offensive to leak sensitive information. The Google team has also warned another type of attack called training data extraction, that allows hackers to reconstruct the verbatim training example to extract the personality identifiable information or password from the data.
All these types of attacks are mostly covered in the scope of Google’s expanded vulnerability reward program, with this we have model manipulation and model theft attacks, but at that time Google says it will not offer those rewards to researchers who only uncover the bugs related to copyright issues or data extraction that reconstruct non sensitive or public information.
Google has always thought about its users. And at this point, it has provided the more security to the people with AI generation. They have excellently added the generative AI threats to its bugs bounty, vulnerability reward program.