Protect AI Report Surfaces MLflow Security Vulnerabilities
2024-01-18 | Source: securityboulevard.com

A report published by Protect AI today identifies remote code execution (RCE) vulnerabilities in MLflow, an open source machine learning life cycle management tool, that can be used to compromise artificial intelligence (AI) models.

Specifically, the report finds that MLflow, which is widely used to build AI models, had an RCE vulnerability in the code used to pull data down from remote storage. Users could be tricked into connecting to a malicious remote data source, which could then execute commands on their systems.
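
The report does not publish exploit details, but the vulnerability class is well understood: many ML artifact formats are built on Python's pickle, and deserializing data fetched from an attacker-controlled source executes that attacker's code. The following is a minimal sketch of the class, assuming a pickle-based artifact; it illustrates the general pattern and is not MLflow's actual code.

```python
import pickle

# Hypothetical illustration of the vulnerability class, not MLflow code:
# unpickling bytes from an untrusted remote source lets that source run
# arbitrary commands on the victim's machine.

class MaliciousArtifact:
    def __reduce__(self):
        # pickle invokes __reduce__ on load; returning (callable, args)
        # tells the deserializer to call that callable with those args.
        import os
        return (os.system, ("echo attacker code runs here",))

payload = pickle.dumps(MaliciousArtifact())

# A victim "pulling down remote data" and deserializing it executes the
# command above before any model object is ever used.
pickle.loads(payload)
```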

In addition, the researchers discovered a bypass of an MLflow function that validates whether a file path is safe. The bypass allows a malicious user to remotely overwrite files on the MLflow server.
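
The report does not detail how the validation was bypassed, so the sketch below illustrates the general class with a hypothetical check of my own; the ARTIFACT_ROOT path and function names are assumptions, not MLflow internals. Naive prefix comparisons accept traversal sequences that resolve outside the intended directory, while resolving the path before comparing closes the hole.

```python
import os

ARTIFACT_ROOT = "/srv/mlflow/artifacts"  # assumed path for illustration

def is_safe_naive(user_path: str) -> bool:
    # A common but flawed check: string prefix comparison on the raw path.
    # "../" sequences survive the join and escape the root once resolved.
    return os.path.join(ARTIFACT_ROOT, user_path).startswith(ARTIFACT_ROOT)

def is_safe_robust(user_path: str) -> bool:
    # Resolve symlinks and ".." first, then compare directory ancestry.
    resolved = os.path.realpath(os.path.join(ARTIFACT_ROOT, user_path))
    return os.path.commonpath([resolved, ARTIFACT_ROOT]) == ARTIFACT_ROOT

traversal = "../../../etc/passwd"
print(is_safe_naive(traversal))   # True  -- the naive check is fooled
print(is_safe_robust(traversal))  # False -- the resolved path leaves the root
```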

Finally, MLflow hosted on certain operating systems could be manipulated into disclosing the contents of sensitive files via a file path safety bypass. There is potential for full system takeover if SSH keys or cloud credentials were stored on the server and MLflow was started with permissions to read them, the report noted.
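
One practical mitigation for that last scenario is to verify that the account running the tracking server cannot read credential material in the first place. The sketch below is a hypothetical audit script with assumed file locations, not something shipped with MLflow.

```python
import os

# Hypothetical audit sketch (paths are assumptions): confirm the account
# running the MLflow server cannot read credential files that would enable
# a takeover if a file-disclosure bug were exploited.
SENSITIVE = [
    os.path.expanduser("~/.ssh/id_rsa"),
    os.path.expanduser("~/.aws/credentials"),
    "/etc/shadow",
]

for path in SENSITIVE:
    if os.path.exists(path) and os.access(path, os.R_OK):
        print(f"WARNING: {path} is readable by this user; "
              "a file-disclosure bug in the server could leak it")
    else:
        print(f"OK: {path} is absent or unreadable by this user")
```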

Protect AI notified the MLflow maintainers of the issues, and the latest version of the platform remediates them. Cybersecurity teams, however, will need to make sure AI teams are actually running that release.
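
One way to operationalize that check is to assert a minimum package version in CI or on the server itself. The sketch below is hypothetical; the report does not name the fixed release, so MINIMUM_SAFE is a placeholder a security team would pin to the first patched MLflow version.

```python
from importlib.metadata import version

# MINIMUM_SAFE is a placeholder, not taken from the report; pin it to the
# first MLflow release your security team confirms contains the fixes.
MINIMUM_SAFE = (2, 10, 0)

installed = tuple(int(part) for part in version("mlflow").split(".")[:3])
if installed < MINIMUM_SAFE:
    raise SystemExit(
        f"mlflow {'.'.join(map(str, installed))} is below the required minimum; upgrade"
    )
print("mlflow version meets the team's minimum")
```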

Protect AI president Daryan Dehghanpisheh said these vulnerabilities are the latest in an ongoing series that have been discovered via a bug bounty program the company manages. There are already more than 13,000 researchers participating in the program.

In general, securing AI models requires organizations to adopt MLSecOps tools and best practices. In addition to attempting to take over AI models, cybercriminals are also trying to steal them, as AI models are among an organization’s most valuable intellectual property, he noted.

At the same time, cybercriminals are attempting to inject malware into the software components used to create AI models, as well as poisoning the data sources used to train them in the hope of forcing a model to hallucinate in ways that destroy its business value. Those threats are especially problematic because, unlike other software artifacts, there is no way to patch an AI model. Instead, organizations incur the expense of retraining the entire model any time an issue is discovered, said Dehghanpisheh. AI models are, by definition, among the most brittle types of software artifacts an organization can deploy, he added.

On the plus side, thanks to safety concerns, more attention is being paid to AI security much earlier than with other types of emerging technologies, he added. Many organizations are already appointing AI security chiefs with dedicated budgets to ensure the integrity of AI models, he noted. These new positions create a lucrative opportunity for cybersecurity professionals to advance their careers, said Dehghanpisheh.

It’s not clear to what degree AI models are being compromised or outright stolen, but most of them are less secure than many organizations realize. AI models, much like any other software artifact, are built using software components that have inherent vulnerabilities. In effect, they are insecure by design because of the dependencies that exist throughout the AI model supply chain, noted Dehghanpisheh.

Each organization that embraces AI will eventually need to address these cybersecurity concerns. The only thing that remains to be seen is to what degree they will be acted upon before a breach inevitably occurs.
