JFrog security researchers have discovered a pair of vulnerabilities, one of them critical, in n8n, a workflow automation platform that makes use of large language models (LLMs) to execute tasks.
The first, tracked as CVE-2026-1470 and rated 9.9, enables a malicious actor to remotely execute JavaScript code by manipulating a Statement capability in the n8n platform that is used to sanitize business logic. The second, CVE-2026-0863, rated 8.5, similarly abuses the logic sanitization capability provided by n8n to enable remote execution of Python code.
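The details of n8n's internal code are not reproduced here, but the general class of flaw is well understood: a sanitizer that inspects the text of an expression cannot constrain what the evaluated code can reach at runtime. The following TypeScript sketch is purely illustrative; every name in it is hypothetical, and it bears no relation to n8n's actual implementation.

```typescript
// Purely illustrative sketch of a blocklist-style expression sanitizer.
// All names here are hypothetical; this is NOT n8n's code.
function sanitizeExpression(expr: string): string {
  const blocked = ["require", "process", "eval", "import"];
  for (const word of blocked) {
    if (expr.includes(word)) {
      throw new Error(`blocked keyword: ${word}`);
    }
  }
  return expr;
}

// The unsafe pattern: check the string, then evaluate it. The check looks
// at the characters of the expression, not at what the running code can reach.
function runWorkflowExpression(expr: string): unknown {
  return new Function(`return (${expr});`)();
}

// A constructor-chain walk reaches the Function constructor without spelling
// out any blocked keyword, so the blocklist check passes.
const payload = `({})["constructor"]["constructor"]("return globalThis")()`;
console.log(runWorkflowExpression(sanitizeExpression(payload)));
// Prints the real global object, from which arbitrary code execution follows.
```

The durable fix for this class of bug is to isolate the evaluation itself rather than to try to enumerate dangerous strings, which is why this kind of issue is typically remediated with an upgrade rather than a configuration change.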
The n8n platform is designed to be deployed in on-premises IT environments or accessed via a cloud service provided by n8n. Both issues can be resolved by upgrading to a later version of the platform.
The platform is frequently used by internal IT and cybersecurity teams to automate tasks. It's not clear how many vulnerable instances have been deployed, but this issue is the latest in a series that highlight the risks associated with deploying artificial intelligence (AI) platforms, especially ones that enable remote code execution.
Shachar Menashe, vice president of security research for JFrog, said that in the rush to deploy powerful emerging AI technology, organizations need a better understanding of the potential risks. That doesn't mean organizations should not adopt AI, but rather that they need to understand the cybersecurity implications, he added.
Both vulnerabilities received high severity ratings because they are relatively trivial to exploit, Menashe noted.
In general, newly discovered vulnerabilities are becoming much more problematic in the age of AI. It has become much simpler for cybercriminals to discover a vulnerability and reverse engineer an exploit using AI coding tools. Cybersecurity teams now need to assume that the window between when a vulnerability is disclosed and when an exploit is created can be measured in days, if not hours.
Historically, only a small percentage of known vulnerabilities have actually been exploited, but in the age of AI, that percentage will probably increase significantly. As a result, cybersecurity teams are likely to find themselves even more challenged in the coming year.
Each organization will, as a consequence, need to make sure it is running the latest and most secure version of an application. Many will also need to revisit how comfortable they are with automatically applying patches. Organizations often prefer to test a patch before upgrading software to ensure their applications don't break. However, as the overall level of risk a cyberattack represents to the business continues to increase, more classes of patches should be applied automatically: the risk a potential cyberattack creates is simply greater than the cost of any downtime that might result from applying the patch. Hopefully, AI tools will also soon make it easier to discover and remediate vulnerabilities before they are exploited.
In the meantime, cybersecurity teams should, as always, continue to hope for the best while being ready for the worst.