Reader Advisory

Some articles posted in The SlickMaster's Files may contain themes, language, and content which may be neither appropriate nor appealing to certain readers. READER DISCRETION is advised.

26 March 2025

Newsletter: Tenable Research Reveals Popular AI Tools Used in Cloud Environments are Highly Vulnerable

[THIS IS A PRESS RELEASE]

Tenable®, the exposure management company, today announced the release of its Cloud AI Risk Report 2025, which found that cloud-based AI is prone to avoidable toxic combinations that leave sensitive AI data and models vulnerable to manipulation, data tampering and data leakage. 

Cloud and AI are undeniable game changers for businesses. However, both introduce complex cyber risks when combined. The Tenable Cloud AI Risk Report 2025 highlights the current state of security risks in cloud AI development tools and frameworks, and in AI services offered by the three major cloud providers—Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure. The key findings from the report include:

  • Cloud AI workloads aren’t immune to vulnerabilities: Approximately 70% of cloud AI workloads contain at least one unremediated vulnerability. In particular, Tenable Research found CVE-2023-38545—a critical curl vulnerability—in 30% of cloud AI workloads.
  • Jenga®-style¹ cloud misconfigurations exist in managed AI services: 77% of organizations have the overprivileged default Compute Engine service account configured in Google Vertex AI Notebooks. This means all services built on this default service account are at risk.
  • AI training data is susceptible to data poisoning, threatening to skew model results: 14% of organizations using Amazon Bedrock do not explicitly block public access to at least one AI training bucket and 5% have at least one overly permissive bucket.
  • Amazon SageMaker notebook instances grant root access by default: As a result, 91% of Amazon SageMaker users have at least one notebook that, if compromised, could grant unauthorized access and allow modification of every file on it (a rough check for this setting and the bucket access above is sketched after this list).
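
For readers who want to check the last two findings in their own AWS accounts, the following is a minimal Python sketch using boto3. It assumes read-only AWS credentials are already configured; the bucket names are hypothetical placeholders for whichever buckets hold Bedrock training data, and a real environment would also want pagination and per-region handling.

```python
# Minimal sketch: flag S3 buckets whose public access is not fully blocked
# and SageMaker notebook instances running with root access enabled.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3
from botocore.exceptions import ClientError

# Hypothetical placeholders for buckets that hold Bedrock training data.
TRAINING_BUCKETS = ["example-bedrock-training-data"]

s3 = boto3.client("s3")
for bucket in TRAINING_BUCKETS:
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        # All four flags must be True for public access to be fully blocked.
        if not all(cfg.values()):
            print(f"{bucket}: public access only partially blocked: {cfg}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{bucket}: no bucket-level public access block configured")
        else:
            raise

sagemaker = boto3.client("sagemaker")
for nb in sagemaker.list_notebook_instances()["NotebookInstances"]:
    name = nb["NotebookInstanceName"]
    detail = sagemaker.describe_notebook_instance(NotebookInstanceName=name)
    # RootAccess is "Enabled" by default when a notebook instance is created.
    if detail.get("RootAccess") == "Enabled":
        print(f"{name}: root access enabled")
```

Both loops only read configuration; remediating a flagged bucket or notebook is a separate, deliberate change.
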
“When we talk about AI usage in the cloud, more than sensitive data is on the line. If a threat actor manipulates the data or AI model, there can be catastrophic long-term consequences, such as compromised data integrity, compromised security of critical systems and degradation of customer trust,” said Liat Hayun, VP of Research and Product Management, Cloud Security, Tenable. “Cloud security measures must evolve to meet the new challenges of AI and find the delicate balance between protecting against complex attacks on AI data and enabling organizations to achieve responsible AI innovation.”

¹ The Jenga-style concept, coined by Tenable, identifies the tendency of cloud providers to build one service on top of the other, with “behind the scenes” building blocks inheriting risky defaults from one layer to the next. Such cloud misconfigurations, especially in AI environments, can have severe risk implications if exploited.

About Tenable
Tenable® is the exposure management company, exposing and closing the cybersecurity gaps that erode business value, reputation and trust. The company’s AI-powered exposure management platform radically unifies security visibility, insight and action across the attack surface, equipping modern organizations to protect against attacks from IT infrastructure to cloud environments to critical infrastructure and everywhere in between. By protecting enterprises from security exposure, Tenable reduces business risk for approximately 44,000 customers around the globe. Learn more at tenable.com. 

JENGA® is a registered trademark owned by Pokonobe Associates.

[END OF PRESS RELEASE]
