Significance:
The newly introduced conceptual framework, first shared with Axios, could help organizations quickly harden their AI systems against cybercriminals trying to manipulate AI models or steal the data those models were trained on.
Overview:
Companies and consumers often deprioritize cybersecurity and data privacy when a new technology trend takes off.
- Social media is a case in point: users were so eager to connect with one another on new platforms that they paid little attention to how their data was collected, shared, or protected.
- Google worries the same thing is happening with AI as companies rush to build models and fold them into their business processes.
Stated Opinion:
“We want people to recognize that many of the risks associated with AI can be managed through some of these basic elements,” said Phil Venables, CISO at Google Cloud.
“Even as people explore more sophisticated approaches, they shouldn’t forget that getting the fundamentals right is just as important.”
Elaboration:
Google’s Secure AI Framework (SAIF) urges organizations to adopt six principles:
- Assess which existing security controls, such as data encryption, can be extended to new AI systems (a minimal sketch follows this list);
- Broaden the scope of existing threat intelligence research to include threats that specifically target AI systems;
- Build automation into the company’s cyber defenses to respond quickly to any abnormal activity aimed at AI systems (see the second sketch after this list);
- Regularly review the security measures implemented around AI models;
- Continuously test the security of AI systems through penetration testing and adjust defenses based on the results;
- Lastly, assemble a team that understands AI-related risks and can determine where they belong in the organization’s overall strategy for mitigating business risk.
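To make the first principle concrete, here is a minimal sketch of reusing a standard encryption control to protect AI training data at rest. The framework does not prescribe any implementation; the Fernet cipher from Python’s `cryptography` package, the file names, and the inline key generation are all illustrative assumptions.

```python
# A minimal sketch, assuming the "cryptography" package: apply a standard
# symmetric-encryption control (Fernet) to an AI training dataset at rest.
# File names and inline key generation are illustrative; a production system
# would pull the key from a managed key service.
from cryptography.fernet import Fernet

def encrypt_training_data(plaintext_path: str, encrypted_path: str, key: bytes) -> None:
    """Encrypt a training-data file before it lands on shared storage."""
    fernet = Fernet(key)
    with open(plaintext_path, "rb") as src:
        ciphertext = fernet.encrypt(src.read())
    with open(encrypted_path, "wb") as dst:
        dst.write(ciphertext)

def decrypt_training_data(encrypted_path: str, key: bytes) -> bytes:
    """Decrypt the dataset in memory just before model training."""
    fernet = Fernet(key)
    with open(encrypted_path, "rb") as src:
        return fernet.decrypt(src.read())

if __name__ == "__main__":
    # Tiny stand-in dataset so the sketch runs end to end.
    with open("train.csv", "w") as f:
        f.write("feature,label\n0.1,0\n0.9,1\n")
    key = Fernet.generate_key()
    encrypt_training_data("train.csv", "train.csv.enc", key)
    assert decrypt_training_data("train.csv.enc", key).startswith(b"feature")
```

The point of the sketch is that nothing here is AI-specific: the same control an organization already uses for sensitive business data can cover a model’s training corpus.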
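The automation principle can be illustrated the same way: learn a baseline from known-good traffic to an AI endpoint, then automatically flag statistical outliers for response. The payload-size signal, the log values, and the z-score threshold below are assumptions for illustration, not part of Google’s framework.

```python
# A minimal sketch of automated anomaly flagging for an AI endpoint.
# Oversized requests can signal model-extraction or injection probing;
# the baseline data and the 3-sigma threshold are illustrative assumptions.
from statistics import mean, stdev

def build_baseline(normal_sizes: list[int]) -> tuple[float, float]:
    """Summarize known-good request payload sizes as (mean, stdev)."""
    return mean(normal_sizes), stdev(normal_sizes)

def is_abnormal(size: int, mu: float, sigma: float, threshold: float = 3.0) -> bool:
    """Flag a request whose payload size sits far outside the baseline."""
    return sigma > 0 and abs(size - mu) / sigma > threshold

if __name__ == "__main__":
    baseline = build_baseline([512, 498, 530, 507, 515])  # typical prompt sizes
    for size in (520, 25_000):  # the second request is an oversized probe
        if is_abnormal(size, *baseline):
            print(f"ALERT: {size}-byte request queued for automated response")
```

In practice the alert would feed an existing security-automation pipeline rather than a print statement, which is exactly the framework’s point about extending current defenses.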
Implication:
Venables notes that many of these security practices are already in use elsewhere in established organizations.
“We quickly realized that most of the ways to manage security around the use and development of AI are quite similar to how you approach managing access to data,” he added.
Noteworthy:
To encourage adoption of these principles, Google is working with its customers and with governments on how to put them into practice.
The company has also expanded its bug bounty program to accept new findings that expose security vulnerabilities related to AI safety and security, according to a blog post.
Upcoming Steps:
Venables says Google plans to solicit feedback on the framework from industry partners and government bodies.
“We believe we have made considerable progress in these areas throughout our history, but we are not so overconfident as to assume people cannot offer recommendations for further improvement,” Venables said.
#ArtificialIntelligence #Google #SecureAI #AIsystems #Cybercrime