Survey Finds AI Adoption Outpacing Security Readiness
An F5 report finds that 96% of enterprises are deploying AI models, but only 2% are highly prepared for the security challenges; 77% are moderately prepared and 21% are at low readiness. Traditional security frameworks cannot address the new risks AI introduces, so enterprises need to strengthen data governance, deploy AI firewalls and expand AI security across applications to close the gap. 2025-07-14 17:27:18 Author: securityboulevard.com

As organizations continue to deploy AI, security professionals are confronting critical gaps in their preparedness, according to F5’s 2025 State of AI Application Strategy Report. The survey found that 96% of organizations are actively deploying AI models. Yet only 2% of those organizations qualify as what F5 defines as “highly ready” to secure and scale these systems, a gap many experts agree will come back to haunt organizations later. 

The vast majority of organizations — 77% — are only moderately ready for AI, and 21% are classified as low readiness. This gap exposes enterprises to new and evolving risks that traditional security frameworks are not equipped to address. “We set out to find if organizations are ready and found, not really. Maybe they are as ready for AI as they were ready for cloud when they jumped in,” Lori MacVittie, distinguished engineer at F5, told SecurityBoulevard. 

Understanding F5’s AI Readiness Index 

The report introduces an “AI Readiness Index,” which quantifies an organization’s ability to scale, secure and sustain AI across six dimensions, including current generative AI status; adoption of AI agents and agentic AI; breadth of AI applications; AI penetration across the application portfolio; and diversity of AI models in use. Based on the index, organizations are classified as: 


Highly ready: Wide AI deployment, model diversity, strong integration 

Moderately ready: Some AI, modest deployment 

Low readiness: Limited, siloed, or experimental AI use 
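The report does not publish the index’s weighting or thresholds, so as a minimal sketch of how such a composite classification could work, the snippet below averages per-dimension scores and maps the result to the three tiers above. The dimension names and cutoff values are illustrative assumptions, not F5’s actual methodology.

```python
# Hypothetical sketch of a readiness classifier inspired by F5's AI Readiness
# Index. Dimension names and thresholds are illustrative assumptions only.

READINESS_DIMENSIONS = (
    "genai_status",           # current generative AI status
    "agent_adoption",         # adoption of AI agents / agentic AI
    "app_breadth",            # breadth of AI applications
    "portfolio_penetration",  # AI penetration across the app portfolio
    "model_diversity",        # diversity of AI models in use
    "integration",            # strength of integration (assumed dimension)
)

def classify_readiness(scores: dict) -> str:
    """Classify an organization from per-dimension scores in [0, 1]."""
    avg = sum(scores.get(d, 0.0) for d in READINESS_DIMENSIONS) / len(READINESS_DIMENSIONS)
    if avg >= 0.8:
        return "highly ready"
    if avg >= 0.4:
        return "moderately ready"
    return "low readiness"
```

In practice a real index would likely weight dimensions unevenly, but the averaging structure is enough to show how a single tier label can summarize several maturity signals.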

“I’m not surprised that most are currently moderately ready,” said MacVittie. “That means they know how to manage data and manage a lot of the aspects needed for AI. But the real changes [such as managing numerous AI agents] are still coming. That’s where the practitioners are going to get hit the most,” she said. 

“AI is not just another incremental shift in technology,” MacVittie said. “It’s as disruptive as the move to e-commerce or cloud computing, and it requires organizations to change everything about how they approach security,” she added. 

For instance, security professionals must look at new ways of deploying an old friend: the firewall. MacVittie said they should prioritize AI firewalls as foundational infrastructure.  

The report found that 72% of organizations plan to deploy AI firewalls within 12 months, with 31% already having done so. These firewalls are critical for protecting against threats unique to AI, such as prompt injection and data misuse. However, readiness varies: While 47% of moderately ready organizations have deployed or plan to deploy an AI firewall, only 21% of low-readiness organizations have. 
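Commercial AI firewalls combine many detection signals, but one core check is screening inbound prompts for injection phrasing before they reach a model. The sketch below is a deliberately simplified, pattern-based illustration of that idea; the regexes are assumptions and would be far from sufficient in production.

```python
import re

# Simplified sketch of one check an AI firewall might run: screening inbound
# prompts for common injection phrasings. Patterns are illustrative only.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"reveal (your )?(system prompt|hidden instructions)",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be blocked before reaching the model."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

Real deployments layer semantic classifiers and output-side checks on top of this kind of filter, since attackers routinely rephrase around fixed patterns.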

AI Model Diversity is a Complex Security Challenge 

Every surveyed organization uses more than one AI model, and they’re often using a mix of commercial models (58% of those surveyed) and open-source models (42%). This diversity is reminiscent of early multi-cloud challenges, introducing new vulnerabilities and governance complexities. The most common AI model combination is GPT-4 or GPT-4 Turbo with a Mistral variant. Security teams must now understand and secure multiple models, each with distinct risks and compliance needs. 

The survey also identified that approaches to data protection are evolving. While some organizations rely on broad cloud security policies, others are adopting more sophisticated methods, such as the 27% that use inline enforcement across layers, the 19% that federate control and apply tokenization, and the 11% that isolate data within model-specific infrastructure. 
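As a concrete illustration of the tokenization approach the survey mentions, the sketch below swaps sensitive values for opaque tokens before data enters an AI pipeline, keeping the mapping in a separate store. The class and its token format are hypothetical; a real deployment would use a hardened vault service and, often, format-preserving tokens.

```python
import hashlib

# Illustrative tokenization sketch: sensitive values are replaced with opaque
# tokens, and the token-to-value mapping is held in a separate store.
class Tokenizer:
    def __init__(self, secret: bytes):
        self._secret = secret          # keyed so tokens aren't guessable
        self._vault = {}               # token -> original value

    def tokenize(self, value: str) -> str:
        digest = hashlib.sha256(self._secret + value.encode()).hexdigest()
        token = "tok_" + digest[:16]
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]
```

The key property is that the AI pipeline only ever sees tokens, so a leaked model output exposes nothing without access to the vault.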

Security professionals must ensure data is protected not just at rest or in transit, but throughout the AI lifecycle — including training, inference and AI decision-making. 

AI model governance is proving to be a weak link, as only 21% of moderately ready organizations have formal data labeling practices. This lack of governance undermines transparency, accountability, and risk mitigation, MacVittie said. Without structured data labeling, organizations cannot fully understand or monitor AI decision-making, increasing the risk of undetected vulnerabilities and compliance failures. “AI systems require more sophisticated data governance, including labeling and protecting sensitive information that could be exposed through AI outputs. This is a level of data management that many organizations are not yet prepared for,” she warned. 

The report also found disparity in AI governance and security maturity across vertical markets: 81% of financial services firms, 73% of manufacturers and 74% of healthcare organizations were classified as moderately ready, with each sector facing unique regulatory and legacy infrastructure challenges. 

Organizations in the government and education sectors tend to be overrepresented in the low-readiness category, as they are often constrained by regulation and underinvestment, making them especially vulnerable. 

Based on the report’s findings, there are five strategic actions security practitioners should take: 

  1. Diversify model use. Leverage both open-source and paid AI models, with tailored security controls for each. Flexibility and diversity are key to balancing performance, security, and cost.  
  2. Expand AI security across applications. Don’t limit security controls to chatbots—apply them to all AI-powered operations, analytics, and security tools. 
  3. Embed data labeling practices. Implement formal data labeling and governance. This is essential for transparency and risk mitigation in AI systems. 
  4. Align AI with security infrastructure. Integrate AI security with existing tools—firewalls, observability platforms, and enforcement systems. AI should enhance, not replace, established security measures. 
  5. Treat AI as platform-level security. Approach AI security as a core infrastructure component, not a point solution. Platform-level thinking is required for actual readiness and resilience. 
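The data-labeling action above can be made concrete with a small sketch: every record entering an AI pipeline carries an explicit sensitivity label that downstream controls act on, for example when deciding what may be used for training. The label names and policy set are illustrative assumptions, not a prescribed taxonomy.

```python
from dataclasses import dataclass

# Minimal sketch of formal data labeling: records carry explicit sensitivity
# labels so downstream AI controls can enforce policy. Labels are illustrative.
@dataclass(frozen=True)
class LabeledRecord:
    payload: str
    label: str  # e.g. "public", "internal", "confidential"

ALLOWED_FOR_TRAINING = {"public", "internal"}

def training_safe(records):
    """Filter out records whose label forbids use in model training."""
    return [r for r in records if r.label in ALLOWED_FOR_TRAINING]
```

Even this simple gate gives the transparency the report calls for: an auditor can see exactly which labels were admitted into a training run and why.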

Security professionals must recognize that the AI security challenge extends beyond individual tools or models — it’s about securing an entire AI-powered operational environment. “That makes observability essential,” MacVittie said. “Observability is going to take on a new level of importance in the future, because that’s how we’re going to see and understand what’s going on,” she said. 

The organizations that achieve high readiness will be more adaptive, competitive and prepared for future innovation. For the majority still working toward that goal, immediate action on AI-specific security, governance and expertise is essential to close the readiness gap and protect organizational value in the AI era. 



Source: https://securityboulevard.com/2025/07/survey-finds-ai-adoption-outpacing-security-readiness/?utm_source=rss&utm_medium=rss&utm_campaign=survey-finds-ai-adoption-outpacing-security-readiness