AI Adoption Prompts Security Advisory from NSA
2024-04-26 02:39:24 | Source: securityboulevard.com

The National Security Agency (NSA) has issued a report warning of the risks from insecurely deployed artificial intelligence (AI) systems as adoption grows across industries.

The agency emphasized the need for organizations to implement robust security measures to prevent data theft and mitigate potential misuse of AI systems.

The NSA advisory was issued jointly with the Cybersecurity and Infrastructure Security Agency (CISA), the FBI and international partners including the Australian Signals Directorate and the United Kingdom’s National Cyber Security Centre.

According to the NSA, deploying AI systems securely requires careful setup and configuration, depending on factors that include system complexity, available resources and the infrastructure used.

Key recommendations include enforcing strict access controls, conducting regular audits and penetration testing, and implementing robust logging and monitoring mechanisms. Organizations are also urged to validate AI systems before deployment, secure exposed APIs, and prepare for high availability and disaster recovery.
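To make the "secure exposed APIs" and "robust logging" recommendations concrete, here is a minimal sketch of an authenticated, logged model endpoint. It assumes a FastAPI-served model; the `/predict` route, header name, and environment variable are illustrative choices, not taken from the advisory.

```python
# Minimal sketch: API-key auth plus request logging in front of a model
# endpoint. FastAPI, the /predict route, and the key scheme are
# illustrative assumptions, not prescribed by the NSA guidance.
import hmac
import logging
import os

from fastapi import Depends, FastAPI, HTTPException, Request
from fastapi.security import APIKeyHeader

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-api")

app = FastAPI()
api_key_header = APIKeyHeader(name="X-API-Key")
EXPECTED_KEY = os.environ["MODEL_API_KEY"]  # never hard-code secrets

def require_key(key: str = Depends(api_key_header)) -> None:
    # Constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(key, EXPECTED_KEY):
        raise HTTPException(status_code=403, detail="invalid API key")

@app.post("/predict", dependencies=[Depends(require_key)])
async def predict(request: Request) -> dict:
    payload = await request.json()
    # Log the caller and approximate payload size, not the payload
    # itself, so sensitive inputs do not leak into logs.
    log.info("predict from %s, ~%d bytes",
             request.client.host, len(str(payload)))
    return {"prediction": "stubbed-model-output"}
```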


The warning comes amid concerns about potential vulnerabilities in AI technology and the need to address evolving risks.

Serious Risk of Leakage

Models developed and deployed internally, often using sensitive data, pose a serious risk of leakage through cybersecurity attacks like membership inference and data extraction, said Daniel Christman, co-founder of AI security firm Cranium. “These risks can be exploited simply by threat actors using the model and observing outputs,” he said.

Moreover, many organizations rely on models deployed in external environments, such as those hosted by vendors, where internal cybersecurity stakeholders have little to no control over the security posture, amplifying the risk.

Christman noted threat actors are actively taking advantage of AI systems’ unique capabilities, particularly the fact that AI systems’ outputs are not static but based on the underlying training data. “An attacker can extract sensitive information from an AI system without access to the data or underlying infrastructure,” Christman cautioned.
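The attack Christman describes can be illustrated with a toy confidence-threshold membership inference test: the attacker only queries the model and observes output confidences, which tend to be higher on training examples. The dataset, model, and threshold below are illustrative assumptions, not details from the article.

```python
# Toy confidence-based membership inference: the attacker never sees
# the data or infrastructure, only model outputs. Overfit models are
# more confident on examples they were trained on, so a simple
# threshold can guess training-set membership.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# A deliberately overfit model leaks more membership signal.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(
    X_train, y_train)

def attacker_guess(samples: np.ndarray, threshold: float = 0.95) -> np.ndarray:
    """Guess 'member' when the model's top-class confidence is high."""
    conf = model.predict_proba(samples).max(axis=1)
    return conf >= threshold

guesses = np.concatenate([attacker_guess(X_train), attacker_guess(X_test)])
truth = np.concatenate(
    [np.ones(len(X_train), bool), np.zeros(len(X_test), bool)])
print(f"attack accuracy: {(guesses == truth).mean():.2f}")  # > 0.5 = leakage
```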

If malicious actors can access the training data, model weights, or other key AI system details, they can launch a much wider range of attacks that have proven extremely difficult to detect, such as data poisoning.
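A backdoor-style poisoning sketch shows why such attacks are hard to detect: the model looks healthy on clean data and only misbehaves when an attacker-planted trigger appears. Everything here (dataset, trigger value, model) is an illustrative assumption.

```python
# Toy backdoor data poisoning: an attacker who can tamper with a small
# slice of training data plants a "trigger" (an extreme value in one
# feature) tied to an attacker-chosen label. Clean accuracy looks
# normal, but triggered inputs are reliably misclassified.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=1)

# Poison 5% of training rows: plant the trigger and force label 0.
rng = np.random.default_rng(1)
idx = rng.choice(len(X_train), size=len(X_train) // 20, replace=False)
X_train[idx, 0] = 6.0   # trigger: extreme value in feature 0
y_train[idx] = 0        # attacker-chosen target label

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"clean test accuracy: {model.score(X_test, y_test):.2f}")

# At inference time, the attacker adds the trigger to class-1 inputs.
X_trig = X_test[y_test == 1].copy()
X_trig[:, 0] = 6.0
print(f"accuracy on triggered class-1 inputs: "
      f"{model.score(X_trig, np.ones(len(X_trig))):.2f}")
```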

Echoing the NSA's guidance, Christman advised cybersecurity practitioners to continue applying best practices to their AI deployment environments: ensure access controls are in place, and continuously monitor system metrics for anomalous activity.

For high-risk AI systems, consider more sophisticated defensive measures, such as AI red-teaming, adversarial training and robustness testing to ensure AI systems are resilient to malicious inputs.
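One common form of the adversarial training mentioned above is to mix adversarial examples into each training batch. The sketch below uses the fast gradient sign method (FGSM); the model architecture, epsilon, and synthetic batch are illustrative placeholders, not anything specified by the article.

```python
# Minimal FGSM adversarial-training step: train on both the clean
# batch and a perturbed copy so the model resists malicious inputs.
# Model, epsilon, and data are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm(x: torch.Tensor, y: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """Craft adversarial examples with the fast gradient sign method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    # Train on clean and adversarial versions of the same batch.
    x_adv = fgsm(x, y)
    optimizer.zero_grad()  # clear gradients accumulated by fgsm()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Synthetic batch as a stand-in for real training data.
x = torch.randn(32, 20)
y = torch.randint(0, 2, (32,))
print(f"combined loss: {train_step(x, y):.3f}")
```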

Cybersecurity Pros Aware of Threat

The guidance comes at a crucial time, according to Marcus Fowler, CEO of Darktrace Federal.

“We’re already seeing the early impact of AI on the threat landscape and some of the challenges that organizations face when using these systems – both from inside their organizations and from adversaries outside of the business,” Fowler said.

Darktrace recently released research that found 74% of security professionals consider AI-powered threats a significant issue, and 89% agreed that AI-powered threats will remain a major challenge for the foreseeable future.

“As AI systems become embedded into the tools and processes organizations depend on every day, cybersecurity plays a crucial role and is foundational to AI safety,” Fowler said.

At this pivotal moment in AI adoption, security leaders must be embedded in the process from the beginning to ensure AI is deployed in ways that keep it reliable and secure.

Fowler said the NSA report reinforces the need to protect models and invest in safeguards to keep AI systems protected at all stages of the AI lifecycle, to avoid unintended behaviors or potential hijacking of the algorithms. That includes securing the environment in which the AI models are deployed, ensuring the models are continuously monitored and protected, and implementing processes and procedures to ensure they are used safely and appropriately.
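One simple way to operationalize the continuous monitoring Fowler describes is to compare the model's live output distribution against a training-time baseline and alert on drift. The KS test, score distributions, and alert threshold below are illustrative assumptions.

```python
# Drift monitoring sketch: alert when live model scores diverge from a
# training-time baseline, one signal of data shift or hijacking.
# Distributions and the p-value threshold are illustrative choices.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5000)  # stand-in for baseline model scores

def check_drift(live_scores: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Return True (alert) if live scores diverge from the baseline."""
    stat, p_value = ks_2samp(baseline_scores, live_scores)
    return p_value < p_threshold

normal = rng.beta(2, 5, size=500)
shifted = rng.beta(5, 2, size=500)           # e.g., behavior after tampering
print("alert on normal traffic:", check_drift(normal))    # expect False
print("alert on shifted traffic:", check_drift(shifted))  # expect True
```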

“Simply put, trustworthy and reliable AI cannot exist without strong cybersecurity,” Fowler added.

Deploying MFA, Continuous Risk Assessment

Defensive strategies should integrate advanced phishing threat detection systems, said Stephen Kowski, field CTO at SlashNext, as well as systems that monitor for suspicious activity. That's on top of enforcing strict access control using methods like phishing-resistant multifactor authentication (MFA).

“Continuous risk assessment and mitigation are crucial for identifying and addressing new and evolving threats to AI systems with recommendations for adopting a Zero Trust architecture and using robust logging and monitoring,” Kowski added.

Kowski cautioned that Generative AI models like ChatGPT can introduce vulnerabilities such as model inversion attacks and data poisoning, permitting the exploitation of the model’s learning processes and output manipulation. “Adversaries may exploit these vulnerabilities to inject biased data, extract sensitive information, or manipulate outputs, requiring strong encryption and continuous monitoring as part of the defense strategy,” he said.

