Child’s play: Install SAP BTP transparent proxy using Helm
2024-01-15 | Author: blogs.sap.com

It is inevitable that cloud solutions have to communicate with other remote solutions. The latter can be situated on public or private clouds, or set up on client sites. Naturally, it helps to have tools that make this as simple as possible. This is where the SAP BTP Connectivity services and components come to the rescue! In this blog post, you will learn how to install one of these components using Helm: the SAP BTP transparent proxy.

SAP BTP transparent proxy simplifies the connection between Kubernetes workloads and target systems defined as destinations in the SAP Destination service. To understand more about some of the features of the Transparent Proxy, you could check this blog.

Prerequisites

Before you start, you should have the following:

  • A Kubernetes cluster
  • Kubectl installed and configured on your local machine
  • Helm installed on your local machine
  • SAP BTP subaccount with a Destination service instance
  • Istio or cert-manager running in your Kubernetes cluster as a foundation for traffic encryption between the micro-components of the Transparent Proxy
  • Connectivity Proxy installed in your cluster (needed only for on-premise connectivity)
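Before proceeding, you can sanity-check the tooling side of these prerequisites from a terminal. This is only a sketch: the `istio-system` and `cert-manager` namespace names are the upstream defaults and may differ in your cluster.

```shell
# Sketch: check that the local tools are installed, then look for
# Istio or cert-manager in their default namespaces.
missing=""
for tool in kubectl helm; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
  kubectl version --client
  helm version --short
  # Either Istio or cert-manager must be running for internal mTLS;
  # "|| true" keeps the check non-fatal if neither namespace exists.
  kubectl get pods -n istio-system 2>/dev/null \
    || kubectl get pods -n cert-manager \
    || true
else
  echo "Missing tools:$missing" >&2
fi
```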

Installation steps

To install the Transparent Proxy using Helm, follow these steps:

  1. Create a namespace for the Transparent Proxy in your Kubernetes cluster. For example:
    kubectl create namespace transparent-proxy
  2. Create a Kubernetes secret with the credentials of your Destination service instance
    • You can obtain the credentials from the SAP BTP cockpit. Navigate to Services -> Instances and Subscriptions -> Click on the service instance row -> Service Keys -> View -> Copy JSON
    • Use the service key data to create the Kubernetes secret. For example:
      kubectl create secret generic dest-svc-key -n transparent-proxy --from-literal=secret='<credentials>'
  3. Create values.yaml according to your needs. You can find all available parameters here. For example:
    deployment:
      autoscaling:
        http:
          horizontal:
            # Enables or disables the Horizontal Pod Autoscaler mechanism.
            enabled: true
            # Upper limit for the number of HTTP Transparent Proxy replicas to which the autoscaler can scale up.
            maxReplicaCount: 3
            metrics:
              # Target value of the average CPU metric across all Transparent HTTP Proxy pods, represented as a percentage of the requested value of the CPU for the pods.
              cpuAverageUtilization: 80
              # Target value of the average memory metric across all Transparent HTTP Proxy pods, represented as a percentage of the requested value of the memory for the pods.
              memoryAverageUtilization: 80
        tcp:
          horizontal:
            # Enables or disables the Horizontal Pod Autoscaler mechanism.
            enabled: true
            # Upper limit for the number of TCP Transparent Proxy replicas to which the autoscaler can scale up.
            maxReplicaCount: 3
            metrics:
              # Target value of the average CPU metric across all Transparent TCP Proxy pods represented as a percentage of the requested value of the CPU for the pods.
              cpuAverageUtilization: 80
              # Target value of the average memory metric across all Transparent TCP Proxy pods represented as a percentage of the requested value of the memory for the pods.
              memoryAverageUtilization: 80
    config:
      # Defines the tenant mode in which the Transparent Proxy works. The "dedicated" option means the proxy runs in single-tenant mode.
      tenantMode: "dedicated"
      security:
        accessControl:
          destinations:
            # Defines the scope of Destination CRs.
            defaultScope: "clusterWide"
        communication:
          internal:
            # Enables/Disables mTLS communication between the Transparent Proxy micro-components.
            # It may be disabled only in test environments or if you want to integrate with a Service mesh like Istio.
            encryptionEnabled: true
            certManager:
              issuerRef:
                name: <cert-manager issuer name>
                kind: ClusterIssuer
              # Certificate properties used by cert-manager's Certificate controller.
              certificate:
                privateKey:
                  algorithm: ECDSA
                  encoding: PKCS8
                  size: 256
                duration: 720h
                renewBefore: 120h
      manager:
        # The interval on which the Transparent Proxy will check for updates in the Destination service instance. 
        executionIntervalMinutes: 3
      integration:
        destinationService:
          instances:
            # The local cluster name of the Destination service instance, which can later be used as a reference in the Destination CR
          - name: dest-service-instance
            serviceCredentials:
              # The key in the Destination service secret resource, which holds the value of the destination service key.
              secretKey: secret
              # The name of the existing secret, which holds the credentials for the Destination service.
              secretName: dest-svc-key
              # The namespace of the secret to be used, which holds the credentials for the Destination service.
              secretNamespace: transparent-proxy
        connectivityProxy:
          # The Kubernetes service name + namespace that are associated with the Connectivity Proxy workload.
          serviceName: <connectivity proxy service name>.<connectivity proxy namespace>
          # The port on which the HTTP interface of the Connectivity Proxy is started.
          httpPort: 20003
          # The port on which the TCP interface of the Connectivity Proxy is started.
          tcpPort: 20004
  4. Install the Transparent Proxy using the Helm values from step 3:
    helm install transparent-proxy oci://registry-1.docker.io/sapse/transparent-proxy --version <version of helm chart> --namespace transparent-proxy -f <path-to-values.yaml>

    You should receive a response similar to this:

    Successful installation of Transparent Proxy with Helm

  5. Verify that the Transparent Proxy is running by checking the status of the pods and the health check:
    kubectl get pods -n transparent-proxy

    There should be two pods running:


    Transparent Proxy components after installation

    As you can see, the Transparent Proxy has a health check pod which constantly checks the status of all Transparent Proxy components. You can see what capabilities the health check offers on the Verification and Testing page in the SAP Help Portal. Here’s how you can execute a component check:

    kubectl run perform-hc --image=curlimages/curl -it --rm --restart=Never -- curl -w "\n" 'sap-transp-proxy-int-healthcheck.transparent-proxy/status'

    And the result should be the following:

    Response from the health check
    This means that the sap-transp-proxy-manager, the heart of the Transparent Proxy, is running smoothly and you are ready to consume your first target system through the Transparent Proxy!
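When scripting the installation, it can be handy to poll the health check until the proxy responds before creating any Destination resources. Below is a minimal sketch built around the same health-check URL as above; the retry count and the idea of gating on any successful HTTP response (rather than parsing the body) are my own assumptions, so adapt them as needed. Like the `kubectl run` example, it must run from a pod inside the cluster where the service name resolves.

```shell
# Sketch: retry the health check endpoint until it answers with a
# successful HTTP status, or give up after a number of attempts.
wait_for_proxy() {
  url="${1:-sap-transp-proxy-int-healthcheck.transparent-proxy/status}"
  attempts="${2:-30}"
  i=1
  while [ "$i" -le "$attempts" ]; do
    # -f makes curl fail on HTTP error codes, -s silences progress output
    if curl -sf "$url" >/dev/null 2>&1; then
      echo "Transparent Proxy is healthy"
      return 0
    fi
    sleep 2
    i=$((i + 1))
  done
  echo "Health check did not become ready" >&2
  return 1
}
# Usage (from inside the cluster): wait_for_proxy
```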

Try it out

To use the Transparent Proxy, you should create a Destination Custom Resource (CR). Let’s create a dynamic one. “Dynamic” means a Destination CR will serve all destinations for a Destination service instance or its tenants. Follow these steps:

  1. Create a Destination CR file named dynamic-destination.yaml:
    apiVersion: destination.connectivity.api.sap/v1
    kind: Destination
    metadata:
      name: dynamic-destination
      namespace: transparent-proxy
    spec:
      destinationRef:
        name: "*"
      destinationServiceInstanceName: dest-service-instance
  2. Create the resource from step 1 in your cluster:
    kubectl create -f dynamic-destination.yaml
  3. Wait for the Destination CR to report a successful status. To check it, execute:
    kubectl get dst dynamic-destination -n transparent-proxy -o yaml

    You should observe a status similar to this one:

    status:
      conditions:
      - lastUpdateTime: "2024-01-11T11:56:33.605473101Z"
        message: Technical connectivity is configured. Kubernetes service with name
          dynamic-destination is created.
        reason: ConfigurationSuccessful
        status: "True"
        type: Available
  4. Create a curl pod, from where you can test the consumption of the target system through the Transparent Proxy:
    kubectl run curlpod -n transparent-proxy --image=curlimages/curl -i --tty -- sh
  5. Consume a target system defined as a destination in your Destination service instance:
    curl dynamic-destination -H "X-Destination-Name: <destination-name>"
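The polling in step 3 can also be automated: because the Destination CR exposes a standard `Available` condition (visible in the status output above), `kubectl wait` can block until it turns `True`. A small sketch, with the CR and namespace names from this example as the defaults:

```shell
# Sketch: block until the Destination CR reports condition Available=True.
wait_for_destination() {
  kubectl wait "dst/${1:-dynamic-destination}" \
    -n "${2:-transparent-proxy}" \
    --for=condition=Available --timeout=120s
}
# Usage: wait_for_destination dynamic-destination transparent-proxy
```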

Examples

In the context of my SAP BTP subaccount, I have created two destinations: one pointing to the SAP XSUAA API, and another pointing to a server on my local machine, exposed to the cloud via the SAP Cloud Connector.


Configured destinations in the SAP BTP Cockpit

  • Executing a request to the SAP XSUAA API. This is an example for getting the registered service instances of the current subaccount via destination, locally exposed and served by Transparent Proxy, and centrally managed via the SAP Destination service:
    ~ $ curl dynamic-destination/sap/rest/authorization/v2/apps -H "X-Destination-Name: xsuaa-api" -v
    * Host dynamic-destination:80 was resolved.
    ...
    > GET /sap/rest/authorization/v2/apps HTTP/1.1
    > Host: dynamic-destination
    > User-Agent: curl/8.5.0
    > Accept: */*
    > X-Destination-Name: xsuaa-api
    > 
    < HTTP/1.1 200 OK
    ...
    [{"appid":"auditlog!b3718","serviceinstanceid":"0889a7e7-61d8-41...
  • Executing a request to an on-premise system using principal propagation. That system is a simple server that maps the user certificate to a concrete user and responds by greeting the requester.
    ~ $ curl dynamic-destination/principal-propagation -H "X-Destination-Name: my-on-premise-system" -H "Authorization: Bearer $TOKEN" -v
    * Host dynamic-destination:80 was resolved.
    ...
    * Connected to dynamic-destination (10.104.69.106) port 80
    > GET /principal-propagation HTTP/1.1
    > Host: dynamic-destination
    > User-Agent: curl/8.5.0
    > Accept: */*
    > X-Destination-Name: my-on-premise-system
    > Authorization: Bearer eyJhbGciOiJSUzI1NiIsImprdS...
    ...
    < HTTP/1.1 200 OK
    ...
    Hello Iliyan Videnov!
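For repeated testing, the call pattern from both examples can be wrapped in a small helper. This is a hypothetical convenience function, not part of the Transparent Proxy; it assumes it runs in a pod where the `dynamic-destination` Kubernetes service resolves, and the token argument is only needed for destinations that require one (such as the principal-propagation case).

```shell
# Hypothetical helper: wraps the Transparent Proxy call pattern from the
# examples above. Runs in a pod where "dynamic-destination" resolves.
call_destination() {
  dest="$1"; path="${2:-/}"; token="${3:-}"
  if [ -n "$token" ]; then
    curl -sS "dynamic-destination${path}" \
      -H "X-Destination-Name: ${dest}" \
      -H "Authorization: Bearer ${token}"
  else
    curl -sS "dynamic-destination${path}" \
      -H "X-Destination-Name: ${dest}"
  fi
}
# Examples:
# call_destination xsuaa-api /sap/rest/authorization/v2/apps
# call_destination my-on-premise-system /principal-propagation "$TOKEN"
```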

In this blog post, you have learned how to install the SAP BTP transparent proxy using Helm and how to set it up for system consumption. I hope you find it useful. Ideas, suggestions, and comments are welcome. Thank you for reading!


Source: https://blogs.sap.com/2024/01/15/childs-play-install-sap-btp-transparent-proxy-using-helm/