10 minutes to secure your Kubernetes application without giving up on customization: Wallarm WAF as a sidecar container with plain Kubernetes manifests
2020-08-18 | Author: lab.wallarm.com

In the previous article of this series, we added the AI-powered Wallarm WAF to our Helm-chart-bundled application as a sidecar container.

As we'll see, 10 minutes is all the time we need to stop worrying about rules, lists, and attacks, and start focusing on performance, optimization, and deployment.

As you probably know, if you're developing applications in a container environment orchestrated by Kubernetes, Helm is a robust way to bundle your application, mainly because it offers a simple, consistent way to package and distribute everything. One of its main advantages, especially if you're new to the microservices approach, is that Helm gives you a solid structure and many third-party charts you can plug directly into your environment.

In some cases, the pros can turn into cons: the conventions Helm imposes may be too restrictive for highly complex solutions that need their own approach. Similarly, you may prefer not to bundle your application as a Helm chart for many reasons, from the deployment environment to the preferences of the DevOps team managing it. Or, more simply, you may not need to distribute your application in a repeatable way at all, making Helm a useless extra layer for you.

Forcing developers to use a specific technology, approach, or logic goes against the mindset we have here at Wallarm. We are developers ourselves, and we know that technologies come and go, which is why our AI-powered WAF is designed to be as flexible and portable as possible. Wallarm WAF can be integrated into your plain manifest-based application painlessly.

YAML: keeping manifests simple

Kubernetes manifests are the standard way to create, modify, and delete Kubernetes resources such as pods, deployments, services, or ingresses. The most common way to define manifests is in the form of .yaml files, sent to the Kubernetes API Server via commands such as kubectl apply -f my-file.yaml.

YAML, a recursive acronym for YAML Ain't Markup Language, is a human-readable, text-based format for specifying configuration-type information. Using YAML for Kubernetes definitions is convenient because you don't need to pass all of your parameters on the command line. It also adds maintainability, because YAML files can be committed to source control to track changes, and flexibility, since you can define much more complex structures than you could on the command line.

10 minutes to be protected

The simple, human-readable syntax of YAML manifests lets you implement your own custom logic and approach in your Kubernetes application. And Wallarm WAF follows the same "keep it simple for you" idea. It doesn't matter whether you want to apply Wallarm WAF to an existing application or to one you're writing from scratch: in the next 10 minutes you will have it protected and safe.

Of course, first of all, you need to build at least the basics of your application, and if you need help, the official Kubernetes Documentation is an excellent place to start.

Let's set up the ConfigMap. It's time to add Wallarm WAF security using a custom ConfigMap. Create a new manifest file, or add a new object to an existing manifest, for a Kubernetes ConfigMap that will hold the NGINX configuration file for the Wallarm sidecar container. If you're already familiar with this, you can jump straight to our Documentation. Otherwise, let's move on together.

ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: wallarm-sidecar-nginx-conf
data:
  default: |
    geo $remote_addr $wallarm_mode_real {
      # Please replace <WALLARM_MODE> below by the request filtering mode: 
      # off to disable request processing
      # monitoring to process but not block requests
      # block to process all requests and block the malicious ones
      default <WALLARM_MODE>;
      # IP addresses and rules for US cloud scanners
      23.239.18.250 off;104.237.155.105 off;45.56.71.221 off;45.79.194.128 off;104.237.151.202 off;45.33.15.249 off;45.33.43.225 off;45.79.10.15 off;45.33.79.18 off;45.79.75.59 off;23.239.30.236 off;50.116.11.251 off;45.56.123.144 off;45.79.143.18 off;172.104.21.210 off;74.207.237.202 off;45.79.186.159 off;45.79.216.187 off;45.33.16.32 off;96.126.127.23 off;172.104.208.113 off;192.81.135.28 off;35.235.101.133 off;34.94.16.235 off;35.236.51.79 off;35.236.87.46 off;35.236.16.246 off;35.236.110.91 off;35.236.61.185 off;35.236.14.198 off;35.236.96.31 off;35.235.124.137 off;35.236.100.176 off;34.94.13.81 off;35.236.55.214 off;35.236.127.211 off;35.236.126.84 off;35.236.3.158 off;35.235.112.188 off;35.236.118.146 off;35.236.1.4 off;35.236.20.89 off;
      # IP addresses and rules for European cloud scanners
      139.162.130.66 off;139.162.144.202 off;139.162.151.10 off;139.162.151.155 off;139.162.156.102 off;139.162.157.131 off;139.162.158.79 off;139.162.159.137 off;139.162.159.244 off;139.162.163.61 off;139.162.164.41 off;139.162.166.202 off;139.162.167.19 off;139.162.167.51 off;139.162.168.17 off;139.162.170.84 off;139.162.171.141 off;139.162.172.35 off;139.162.174.220 off;139.162.174.26 off;139.162.175.71 off;139.162.176.169 off;139.162.178.148 off;139.162.179.214 off;139.162.180.37 off;139.162.182.156 off;139.162.182.20 off;139.162.184.225 off;139.162.185.243 off;139.162.186.136 off;139.162.187.138 off;139.162.188.246 off;139.162.190.22 off;139.162.190.86 off;139.162.191.89 off;85.90.246.120 off;104.200.29.36 off;104.237.151.23 off;173.230.130.253 off;173.230.138.206 off;173.230.156.200 off;173.230.158.207 off;173.255.192.83 off;173.255.193.92 off;173.255.200.80 off;173.255.214.180 off;192.155.82.205 off;23.239.11.21 off;23.92.18.13 off;23.92.30.204 off;45.33.105.35 off;45.33.33.19 off;45.33.41.31 off;45.33.64.71 off;45.33.65.37 off;45.33.72.81 off;45.33.73.43 off;45.33.80.65 off;45.33.81.109 off;45.33.88.42 off;45.33.97.86 off;45.33.98.89 off;45.56.102.9 off;45.56.104.7 off;45.56.113.41 off;45.56.114.24 off;45.56.119.39 off;50.116.35.43 off;50.116.42.181 off;50.116.43.110 off;66.175.222.237 off;66.228.58.101 off;69.164.202.55 off;72.14.181.105 off;72.14.184.100 off;72.14.191.76 off;172.104.150.243 off;139.162.190.165 off;139.162.130.123 off;139.162.132.87 off;139.162.145.238 off;139.162.146.245 off;139.162.162.71 off;139.162.171.208 off;139.162.184.33 off;139.162.186.129 off;172.104.128.103 off;172.104.128.67 off;172.104.139.37 off;172.104.146.90 off;172.104.151.59 off;172.104.152.244 off;172.104.152.96 off;172.104.154.128 off;172.104.229.59 off;172.104.250.27 off;172.104.252.112 off;45.33.115.7 off;45.56.69.211 off;45.79.16.240 off;50.116.23.110 off;85.90.246.49 off;172.104.139.18 off;172.104.152.28 off;139.162.177.83 off;172.104.240.115 off;
      172.105.64.135 off;139.162.153.16 off;172.104.241.162 off;139.162.167.48 off;172.104.233.100 off;172.104.157.26 off;172.105.65.182 off;178.32.42.221 off;46.105.75.84 off;51.254.85.145 off;188.165.30.182 off;188.165.136.41 off;188.165.137.10 off;54.36.135.252 off;54.36.135.253 off;54.36.135.254 off;54.36.135.255 off;54.36.131.128 off;54.36.131.129 off;
    }
    server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;
        server_name localhost;
        root /usr/share/nginx/html;
        index index.html index.htm;
        wallarm_mode $wallarm_mode_real;
        # wallarm_instance 1;
        set_real_ip_from 0.0.0.0/0;
        real_ip_header X-Forwarded-For;
        location / {
                # Please replace <APP_CONTAINER_PORT> below by the port number
                # on which the container accepts incoming requests,
                # the value must be identical to ports.containerPort
                # in definition of your main app container
                proxy_pass http://localhost:<APP_CONTAINER_PORT>;
                include proxy_params;
        }
    }

To get everything working, you have to edit two parts of the ConfigMap. The first is the data.default section with the geo NGINX configuration block, where we define how Wallarm WAF filters requests.

data:
  default: |
    geo $remote_addr $wallarm_mode_real {
      default <WALLARM_MODE>;

You can always change this parameter for testing purposes: disable request processing entirely (off), or just monitor it (monitoring). Still, the typical choice is block mode, in which Wallarm WAF processes all requests and blocks the malicious ones using its AI-powered approach. 98% of our customers run Wallarm WAF in blocking mode in their production environment.
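For example, after replacing the placeholder for blocking mode, the top of the geo block would read as follows (the scanner IP entries from the full ConfigMap above are unchanged and omitted here for brevity):

```nginx
geo $remote_addr $wallarm_mode_real {
    # block: process all requests and stop the malicious ones
    default block;
    # ... scanner IP entries with "off" follow, as in the full ConfigMap ...
}
```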

The second parameter we need to set is in the NGINX proxy_pass directive.

data:
  default: |
    server {
        location / {
                proxy_pass http://localhost:<APP_CONTAINER_PORT>;
                include proxy_params;
        }
    }

<APP_CONTAINER_PORT> is the port number on which the container accepts incoming requests. Just keep in mind that this value must match the ports.containerPort you set in the definition of your main app container.
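For example, with the sample Deployment used later in this article, where the main app container declares containerPort: 8080, the directive would become:

```nginx
# Matches ports.containerPort: 8080 of the main app container
proxy_pass http://localhost:8080;
```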

Deployment

Customization often comes at the price of complexity. Not in this case: the remaining steps to enable the AI-powered security provided by Wallarm WAF are quite short.

To work with Wallarm WAF, we need to find the Deployment objects whose pods are actually exposed to the Internet. If you're unsure, look for something like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers: 
      - name: myapp 
        image: <Image>
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 8080

Let’s add to the Deployment object the elements we need to filter the requests.

First of all, you need to edit the spec.template.spec.containers section.

Container Section
spec:
  template:
    spec:
      containers:
      # Wallarm element: definition of Wallarm sidecar container
      - name: wallarm
        image: wallarm/node:2.14
        imagePullPolicy: Always
        env:
        - name: WALLARM_API_HOST
          value: "api.wallarm.com"
        - name: DEPLOY_USER
          value: "username"
        - name: DEPLOY_PASSWORD
          value: "password"
        - name: DEPLOY_FORCE
          value: "true"
        - name: WALLARM_ACL_ENABLE
          value: "true"
        # Amount of memory in GB for request analytics data, 
        # recommended value is 75% of the total server memory
        - name: TARANTOOL_MEMORY_GB
          value: "2"
        ports:
        - name: http
          # Port on which the Wallarm sidecar container accepts requests 
          # from the Service object
          containerPort: 80
        volumeMounts:   
        - mountPath: /etc/nginx/sites-enabled   
          readOnly: true    
          name: wallarm-nginx-conf

Everything is documented directly in the code and the official Documentation, but let’s take a closer look.

The first parameter to be set is the value of WALLARM_API_HOST in spec.template.spec.containers.env. You can choose between two values defining the right Wallarm API endpoint, depending on where your Wallarm account is located. If you are in the EU cloud, you should use api.wallarm.com. If your account is located in the US cloud, please set the value to us1.api.wallarm.com.

The DEPLOY_USER and DEPLOY_PASSWORD refer to the user having the Deploy role in your Wallarm account. If you haven't created one yet, it's time to do so by following our instructions. Generally speaking, there's no need to edit the DEPLOY_FORCE and WALLARM_ACL_ENABLE parameters. Likewise, the value of the TARANTOOL_MEMORY_GB parameter can be left as it is.
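If you'd rather not keep the deploy credentials in plain text in the manifest, one option is to reference a Kubernetes Secret instead. This is only a sketch: the Secret name wallarm-deploy-creds and its keys are assumptions for this example, not Wallarm defaults.

```yaml
# Sketch: read DEPLOY_USER / DEPLOY_PASSWORD from a Secret (assumed name).
# Create it first, e.g.:
#   kubectl create secret generic wallarm-deploy-creds \
#     --from-literal=username=... --from-literal=password=...
env:
- name: DEPLOY_USER
  valueFrom:
    secretKeyRef:
      name: wallarm-deploy-creds   # assumed Secret name
      key: username
- name: DEPLOY_PASSWORD
  valueFrom:
    secretKeyRef:
      name: wallarm-deploy-creds
      key: password
```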

Let’s move to the other part of the Deployment object we need to update: the actual definition of the wallarm-nginx-conf volume, located in the spec.template.spec section. You should find a volumes subsection. If not, create it.

spec:
  template:
    spec:
      volumes:
      - name: wallarm-nginx-conf
        configMap:
          name: wallarm-sidecar-nginx-conf
          items:
            - key: default
              path: default

The only value we must check is spec.template.spec.volumes.configMap.name. Does it match the name of the ConfigMap? If not, now is the time to update it.

Service and NetworkPolicy

You're almost done with the Wallarm WAF setup. There are just a few values to be updated or verified. Return to the Kubernetes manifests and open the template that defines the Service object pointing to the Deployment modified in the previous step. For example:

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    # Wallarm sidecar container port; 
    # the value must be identical to ports.containerPort
    # in definition of Wallarm sidecar container
    targetPort: 80

Make sure the ports.targetPort value is identical to the ports.containerPort value defined for the Wallarm sidecar container.

By default, Kubernetes pods are non-isolated, so they accept traffic from any source. If you need to define some rules, you can implement a NetworkPolicy object. Once any NetworkPolicy in a namespace selects a particular pod, that pod will reject any connections that are not allowed by some NetworkPolicy. Remember to update it to reflect the Wallarm sidecar container port specified in spec.ports.targetPort of your Service object.
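As an illustration only, a minimal NetworkPolicy admitting traffic to the Wallarm sidecar port could look like the sketch below; the policy name is an assumption, and the app: myapp label matches the sample manifests in this article.

```yaml
# Sketch: allow ingress only to port 80, where the Wallarm sidecar listens.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-allow-waf-http   # assumed name
spec:
  podSelector:
    matchLabels:
      app: myapp               # matches the sample Deployment labels
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80                 # Wallarm sidecar containerPort
```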

Ready to rock

When everything is set, you can update or deploy the new application manifest in the Kubernetes cluster. 

You can check that everything is correct by listing the pods with the following command.

kubectl get pods

The number of containers in the pod should increase, and the pod’s status should be “Running”.

NAME                      READY STATUS    RESTARTS      AGE
myapp-724b023acd-fe2gh    2/2 Running     0             36s

There is only one last step: a real-world simulation, made possible by Wallarm tools.

Let’s send a malicious test attack to our application, something like this:

http://<resource_URL>/?id='or+1=1--a-<script>prompt(1)</script>

In the Events section of your Wallarm account dashboard, you should find a new attack in the list, describing SQLi and XSS attacks. If you see it, your application is now protected by Wallarm WAF installed as a sidecar container. The whole setup took just 10 minutes, and your custom logic is fully preserved.

Conclusion

When your application goes out in the real world, a reliable security solution is crucial, especially if your project is complex and implements a custom logic. Wallarm WAF installed as a sidecar container is the best solution for this because it gives you total flexibility and keeps your application protected against present and emerging threats, without giving up on performance and with minimal time consumption. The setup only takes you 10 minutes, and the daily management is performed by the AI-powered solution developed by Wallarm.


Source: https://lab.wallarm.com/10-minutes-to-secure-your-kubernetes-application-without-giving-up-on-customization-wallarm-waf-as-a-sidecar-container-with-plain-kubernetes-manifests/