This manual guides you through the fully private deployment of OneConnect on SAP BTP Kyma (AWS EKS). All access is routed through SAP Cloud Connector — no services are exposed to the public internet.
Follow the steps below in order.
| Step | Section | What you will do |
|---|---|---|
| 0 | Prerequisites | Verify tools, cluster access, and Docker credentials |
| 1 | Section 2 | Install the Strimzi Kafka Operator |
| 2 | Section 3 | Deploy the full Kafka Stack |
| 3 | Section 4 | Deploy the OneConnect Platform (Cloud Connector mode) |
| 4 | Section 5 | Deploy the Observability Stack (monitoring & logging) |
| 5 | Section 6 | Enable Istio Sidecar Injection |
| 6 | Section 7 | Activate all required Kyma Modules |
| 7 | Section 8 | Configure SAP Cloud Connector |
| ✓ | Section 9 | Verification & Health Checks |
Install kubectl and Helm v3 before you begin. For detailed instructions, see:
How to connect Kubectl to BTP Kyma environment using AWS EC2
You need the connection details of the server where SAP Cloud Connector (SCC) is installed and running.
The machine where SAP Cloud Connector (SCC) is installed must have the required connectivity to reach the published OneConnect services.
Ensure that the following ports are allowed and properly reachable from the corresponding path to the SCC host:
| Service | Port | Purpose |
|---|---|---|
| OneConnect Frontend | 5050 | Frontend application access |
| OneConnect API Gateway | 9000 | Backend/API communication |
| AKHQ / additional web service | 8080 | Administrative or service access, when applicable |
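Port reachability can be checked with a small script before continuing. This is a hedged sketch: it assumes bash is available (it relies on bash's /dev/tcp feature), and `scc-host.example.internal` is a placeholder for your actual SCC host.

```shell
# Sketch: TCP reachability check using bash's /dev/tcp feature.
# "scc-host.example.internal" is a placeholder — substitute your real SCC host.
check_port() {
  local host="$1" port="$2" label="$3"
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} (${label}) reachable"
  else
    echo "${host}:${port} (${label}) NOT reachable"
  fi
}

check_port scc-host.example.internal 5050 "OneConnect Frontend"
check_port scc-host.example.internal 9000 "OneConnect API Gateway"
check_port scc-host.example.internal 8080 "AKHQ (optional)"
```

Run it from the machine that must reach the services; any "NOT reachable" line points to a firewall or routing issue to resolve before proceeding.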
Strimzi is the Kubernetes operator that manages Kafka clusters declaratively. It must be installed and running before any Kafka resources are applied.
```shell
helm install strimzi-operator strimzi-kafka-operator \
  --repo https://strimzi.io/charts/ \
  --namespace kafka \
  --create-namespace
```
```shell
kubectl get pods -n kafka | grep strimzi-cluster-operator
# Expected result:
# strimzi-cluster-operator-xxxx   1/1   Running
```

The strimzi-cluster-operator pod must show 1/1 Running. The operator must be healthy before it can process Kafka resources.

Next, deploy the full Kafka stack: Kafka 4.2 in KRaft mode (which runs without ZooKeeper), Confluent Schema Registry 7.6.0, and Kafka Connect.
Kafka Connect is the component that runs connectors to external databases and platforms such as MySQL and Databricks.
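Once the stack is up, topics can also be managed declaratively through Strimzi's KafkaTopic custom resource rather than with command-line tools. A minimal sketch; the topic name is illustrative, and the strimzi.io/cluster label is assumed to match the cluster name used by this chart (kafka-cluster, inferred from the pod names shown in the verification step):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: demo-topic          # illustrative name
  namespace: kafka
  labels:
    # Assumed cluster name — must match your Kafka cluster resource
    strimzi.io/cluster: kafka-cluster
spec:
  partitions: 3
  replicas: 3
```

Apply it with `kubectl apply -f` and the Strimzi entity operator creates the topic on the cluster.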

Replace the placeholders below with your actual values before running the command:

| Placeholder | Description |
|---|---|
| [DOCKER-TOKEN] | Your Docker Hub personal access token (e.g. dckr_pat_xxxxxxxxxxxx) |
| [SUBACCOUNT-ID] | SAP BTP Subaccount ID — e.g. 6c7a9dbc-8590-479e-9cec-69bf38a9a822 |
| [LOCATION-ID] | Cloud Connector Location ID — e.g. onibex-1 |
```shell
cd helm/oneconnect-helmdeployment

# Deploy the Kafka platform
helm install kafka-platform ./kafka-platform \
  --namespace kafka \
  --values kafka-platform/values.yaml \
  --set dockerHub.token=[DOCKER-TOKEN] \
  --set akhq.serviceMapping.enabled=true \
  --set akhq.serviceMapping.subaccountId="[SUBACCOUNT-ID]" \
  --set akhq.serviceMapping.locationId="[LOCATION-ID]"

# Create the AKHQ service
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: akhq
  namespace: kafka
spec:
  selector:
    app: akhq
  ports:
    - port: 8080
      targetPort: 8080
  type: ClusterIP
EOF
```
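If you prefer not to pass secrets on the command line, the same settings can be kept in a small overrides file. This is a sketch: the key paths mirror the --set flags in the command above, and all values are placeholders.

```yaml
# overrides.yaml — equivalent to the --set flags above (values are placeholders)
dockerHub:
  token: dckr_pat_xxxxxxxxxxxx
akhq:
  serviceMapping:
    enabled: true
    subaccountId: "6c7a9dbc-8590-479e-9cec-69bf38a9a822"
    locationId: "onibex-1"
```

Pass it with an additional `--values overrides.yaml` flag; values files given later on the command line override earlier ones.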
Wait until all four pods are Running before moving on to the next step:
```shell
kubectl get pods -n kafka -w
# All four pods must show 1/1 Running:
# kafka-cluster-main-pool-0            1/1   Running
# kafka-cluster-main-pool-1            1/1   Running
# kafka-cluster-main-pool-2            1/1   Running
# kafka-cluster-entity-operator-xxx    1/1   Running
```
```shell
# Schema Registry check — expected result: []
kubectl exec -it \
  $(kubectl get pod -l app=schema-registry -n kafka -o name) \
  -n kafka -- curl http://localhost:8081/subjects

# Kafka Connect check — expected result: []
kubectl exec -it connect-cluster-connect-0 -n kafka -- \
  curl http://localhost:8083/connectors

# Strimzi resource status — READY must be True for both
kubectl get kafka,kafkaconnect -n kafka
```
| File | Purpose |
|---|---|
| kafka-platform/values.yaml | Helm values for the Kafka platform chart |
Deploy all OneConnect application services in Cloud Connector mode. No services are exposed to the public internet — all access flows through SAP Cloud Connector.

You will need the three values below. Replace each label with your actual value before running the command:
| Placeholder | Description |
|---|---|
[LOCATION-ID] | Cloud Connector Location ID — e.g. onibex-1 (must match the value you set when configuring Cloud Connector in Section 8) |
[DOCKER-TOKEN] | Your Docker Hub personal access token |
[SUBACCOUNT-ID] | SAP BTP Subaccount ID — e.g. 6c7a9dbc-8590-479e-9cec-69bf38a9a822 (found in BTP Cockpit → Subaccount overview) |
```shell
helm install oneconnect . \
  --namespace oneconnect \
  --create-namespace \
  --values values-aws.yaml \
  --set dockerHub.token=[DOCKER-TOKEN] \
  --set cloudConnector.enabled=true \
  --set cloudConnector.locationId="[LOCATION-ID]" \
  --set cloudConnector.subaccountId="[SUBACCOUNT-ID]"
```

The locationId and subaccountId flags are both required. If either is missing, the Helm chart displays a clear error message.

| Resource | Name | Details |
|---|---|---|
| ServiceMapping | apigateway-mapping | Maps apigateway.oneconnect.svc.cluster.local:9000 |
| ServiceMapping | frontend-mapping | Maps frontendoneconnect.oneconnect.svc.cluster.local:5050 |
| ServiceMapping | akhq-mapping | Maps akhq.kafka.svc.cluster.local:8080 |
| Frontend Env | NEXT_PUBLIC_API_BASE | Automatically set to http://localhost:9000 (no manual update) |
| LoadBalancers | — | No LoadBalancer services are created in this mode |
Service IDs default to apigateway-oneconnect and frontend-oneconnect. Change them only if your Cloud Connector setup requires different names:
```shell
--set cloudConnector.serviceIdApigateway="my-apigateway" \
--set cloudConnector.serviceIdFrontend="my-frontend"
```
After installation, retrieve the tunnel endpoints. You will need these in Section 8 when configuring the Cloud Connector service channels.
```shell
# apigateway endpoint
kubectl get servicemappings.connectivityproxy.sap.com apigateway-mapping \
  -n oneconnect -o jsonpath='{.status.endpoint}'

# frontend endpoint
kubectl get servicemappings.connectivityproxy.sap.com frontend-mapping \
  -n oneconnect -o jsonpath='{.status.endpoint}'

# Result format:
# tcp://cp.c-XXXXX.kyma.ondemand.com:443/[service-id]
# Save both values — you will need them in Section 8.
```
The Observability Stack provides monitoring, log aggregation, and distributed tracing for the OneConnect platform.

```shell
cd helm/one_source-oneconnect-observability-*/

helm install oneconnect-observability . \
  --namespace observability \
  --create-namespace \
  --values values-aws.yaml
```
```shell
kubectl get pods -n observability
```
```shell
kubectl port-forward svc/grafana 3000:3000 -n observability
# Open in your browser: http://localhost:3000
```
The Cloud Connector tunnel uses Istio mTLS to authenticate traffic between services. Without the Istio sidecar running inside each pod, the Connectivity Proxy will reject all connections.
Think of the Istio sidecar as a security co-pilot alongside each pod — it handles encrypted routing automatically. Sidecar injection requires the Istio module to be active in the Kyma Dashboard (see Section 7). Once injection is enabled, each pod shows 2/2 READY (application container + Istio sidecar).
Run for the oneconnect namespace:
| Placeholder | Description |
|---|---|
| Namespace | Kubernetes namespace — oneconnect, kafka |
```shell
kubectl label namespace oneconnect istio-injection=enabled
kubectl rollout restart deployment -n oneconnect

# Wait until all pods show 2/2 READY:
kubectl get pods -n oneconnect
```

If pods still show 1/1 after the restart, verify the label was applied:

```shell
kubectl get namespace oneconnect --show-labels | grep istio
```

The Connectivity Proxy manages the secure tunnel from Cloud Connector to your cluster. The Transparent Proxy exposes BTP destinations as internal Kubernetes services. Both components depend on Istio (Step 5).
In Kyma Dashboard → Modules → Modify Modules → Add, add all five modules listed below:
| Module | Namespace | Purpose |
|---|---|---|
| api-gateway | — | API gateway for Kyma services |
| btp-operator | kyma-system | SAP BTP service operator |
| connectivity-proxy | kyma-system | Secure tunnel between Cloud Connector and the cluster |
| istio | kyma-system | Service mesh and mTLS (activated in Step 5) |
| transparent-proxy | sap-transp-proxy-system | Exposes BTP destinations as Kubernetes services |

```shell
# Transparent Proxy namespace must exist
kubectl get ns sap-transp-proxy-system

# Transparent Proxy pods must be Running
kubectl -n sap-transp-proxy-system get deploy,pod

# Connectivity Proxy must be Running
kubectl get pods -n kyma-system | grep connectivity
```
The SAP Cloud Connector is an agent you install in your on-premise network. It opens a secure outbound tunnel to SAP BTP — no inbound firewall ports are required.
Traffic flow:
```
SAP ECC / S4HANA
  → http://[cc-server-host]:[local-port]
  → Cloud Connector (Service Channel)
  → Connectivity Proxy (kyma-system)
  → service.oneconnect.svc.cluster.local:[port]
```
The Connectivity Proxy needs BTP credentials. Run the command below to create them. Use kubectl — do NOT use the BTP Cockpit Service Marketplace for this step.
| Placeholder | Description |
|---|---|
| [NAMESPACE] | Kubernetes namespace — use: oneconnect, kafka |
```shell
kubectl apply -f - <<EOF
apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  name: connectivity-instance
  namespace: oneconnect
spec:
  serviceOfferingName: connectivity
  servicePlanName: connectivity_proxy
---
apiVersion: services.cloud.sap.com/v1
kind: ServiceBinding
metadata:
  name: connectivity-binding
  namespace: oneconnect
spec:
  serviceInstanceName: connectivity-instance
EOF

# Verify both reach Ready / Created status:
kubectl get serviceinstances,servicebindings -n oneconnect
```
If you already ran Step 3.4, skip this. Otherwise retrieve the endpoints now:
```shell
kubectl get servicemappings.connectivityproxy.sap.com apigateway-mapping \
  -n oneconnect -o jsonpath='{.status.endpoint}'

kubectl get servicemappings.connectivityproxy.sap.com frontend-mapping \
  -n oneconnect -o jsonpath='{.status.endpoint}'

kubectl get servicemappings.connectivityproxy.sap.com akhq-mapping \
  -n oneconnect -o jsonpath='{.status.endpoint}'

# Result format: tcp://cp.c-XXXXX.kyma.ondemand.com:443/[service-id]
# Save all three values — needed in Step 7.5.
```
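Cloud Connector expects each endpoint without the tcp:// prefix (as noted in Section 8). The prefix can be stripped with plain shell parameter expansion; this sketch uses an illustrative endpoint value:

```shell
# Illustrative endpoint value, in the format returned by the ServiceMapping status
endpoint="tcp://cp.c-12345.kyma.ondemand.com:443/apigateway-oneconnect"

# Strip the tcp:// prefix — this is the form Cloud Connector expects
cc_endpoint="${endpoint#tcp://}"
echo "$cc_endpoint"
```

Paste the resulting host:port/service-id value into the Endpoint field of the service channel.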
Open the Cloud Connector admin UI at https://localhost:8443 (default login: Administrator / manage — change your password immediately). Go to Subaccounts → + (Add) and fill in:
| Field | Value |
|---|---|
| Region | Your BTP subaccount region |
| Subaccount | Your BTP Subaccount ID |
| Location ID | Must exactly match the value used in Step 3.1 |
| Login | Your BTP email address |
| Password | Your BTP password |
| Placeholder | Description |
|---|---|
| [your-btp-region] | BTP subaccount region — e.g. cf-us10 |
| [SUBACCOUNT-ID] | SAP BTP Subaccount ID — found in BTP Cockpit → Subaccount overview |
| [LOCATION-ID] | Must be the exact same value you used in Step 3.1 (case-sensitive) |

Create one service channel per ServiceMapping. In the Cloud Connector admin UI, go to On-Premise to Cloud → + (Add).
Channel 1: apigateway
| Placeholder | Description |
|---|---|
| [apigateway-endpoint] | The endpoint retrieved in Step 7.2 — remove the tcp:// prefix before pasting |
| [local-port] | A free port on your CC server — e.g. 9000 |
| Field | Value |
|---|---|
| Type | K8s Cluster |
| Endpoint | apigateway endpoint from Step 7.2 — remove the tcp:// prefix |
| Local Port | 9000 (or another free port) |
| Enabled | Yes |
Channel 2: frontend
| Placeholder | Description |
|---|---|
| [frontend-endpoint] | The endpoint retrieved in Step 7.2 — remove the tcp:// prefix before pasting |
| [local-port] | A different free port — e.g. 5050 (must differ from Channel 1) |
| Field | Value |
|---|---|
| Type | K8s Cluster |
| Endpoint | frontend endpoint from Step 7.2 — remove the tcp:// prefix |
| Local Port | 5050 (must be different from Channel 1) |
| Enabled | Yes |
Channel 3: AKHQ (only if you completed Step 3.5)
| Field | Value |
|---|---|
| Type | K8s Cluster |
| Endpoint | AKHQ endpoint from Step 3.5 — remove the tcp:// prefix |
| Local Port | 8080 (must differ from Channels 1 and 2) |
| Enabled | Yes |

After saving, the subaccount connection should show 1/1 connections. If it shows 0/1, see the Troubleshooting section.

With the service channels active, create an HTTP destination in your SAP system:
| Placeholder | Description |
|---|---|
| [cc-server-host] | IP address or hostname of the Cloud Connector server |
| [local-port] | The local port you set in the service channel (e.g. 9000) |
| Field | Value |
|---|---|
| Connection Type | HTTP Connection to External Server (type G) |
| Host | [cc-server-host] |
| Port | [local-port] |
| Protocol | HTTP |
| Path Prefix | /your-endpoint-path |
Before going live, test connectivity from the Cloud Connector server. The Host header is required for the Connectivity Proxy to route requests correctly.
| Placeholder | Description |
|---|---|
| [local-port] | The local port of the apigateway service channel (e.g. 9000) |
| [local-port-frontend] | The local port of the frontend service channel (e.g. 5050) |
| [local-port-akhq] | The local port of the AKHQ service channel (e.g. 8080) |
```shell
# Test apigateway tunnel
curl -vvv http://localhost:[local-port]/ \
  -H "Host: apigateway.oneconnect.svc.cluster.local"

# Test frontend tunnel
curl -vvv http://localhost:[local-port-frontend]/ \
  -H "Host: frontendoneconnect.oneconnect.svc.cluster.local"

# Test AKHQ tunnel (if configured)
curl -vvv http://localhost:[local-port-akhq]/ \
  -H "Host: akhq.kafka.svc.cluster.local"

# Any HTTP response (including 404 or 302) = tunnel is working.
# "Empty reply from server" = tunnel is NOT working. See Troubleshooting.
```
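The pass/fail rule above can be captured in a tiny helper. This is a sketch (the function name is ours, not part of OneConnect) that classifies the status code printed by `curl -s -o /dev/null -w '%{http_code}' ...`, which reports 000 when curl received no HTTP response at all:

```shell
# Hypothetical helper: classify the HTTP status code reported by curl.
# Any real HTTP status (even 404 or 302) means the tunnel is working;
# 000 (or empty) means curl got no HTTP response — the tunnel is down.
tunnel_status() {
  case "$1" in
    000|"") echo "tunnel NOT working" ;;
    *)      echo "tunnel working (HTTP $1)" ;;
  esac
}

tunnel_status 404   # tunnel working (HTTP 404)
tunnel_status 000   # tunnel NOT working
```

For example: `tunnel_status "$(curl -s -o /dev/null -w '%{http_code}' http://localhost:9000/ -H 'Host: apigateway.oneconnect.svc.cluster.local')"`.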
Use the commands below to confirm every component is running correctly.
```shell
kubectl get pods -n oneconnect
# All pods must show 2/2 READY (app + Istio sidecar)
```
```shell
kubectl get services -n oneconnect
# You should see only ClusterIP services — no LoadBalancer entries
```
```shell
kubectl get kafka,deployment -n kafka
# READY must be True for both resources

kubectl get pods -n kafka
# 3 pool nodes + entity-operator must all be Running
```
```shell
kubectl get servicemappings.connectivityproxy.sap.com -n oneconnect
```
```shell
kubectl get pods,svc -n kafka | grep akhq
# akhq shows type=ClusterIP — access is through Cloud Connector only

# Local access via port-forward:
kubectl port-forward svc/akhq 8080:8080 -n kafka
# Open: http://localhost:8080 (default login: admin / admin123)
```
```shell
kubectl get namespace oneconnect --show-labels | grep istio

kubectl get pods -n oneconnect
# All pods must show 2/2 READY
```
```shell
kubectl get pods -n observability
```
If pods do not show 2/2 READY, restart the Connectivity Proxy:

```shell
kubectl rollout restart statefulset connectivity-proxy -n kyma-system
```
```shell
kubectl logs -n kyma-system -l app=connectivity-proxy --tail=50
```
Verify that the istio-injection=enabled label is set on the namespace, then restart the deployments:

```shell
kubectl rollout restart deployment -n oneconnect
```
The service channel endpoint must be the *.ondemand.com address without the tcp:// prefix. Re-check it with:

```shell
kubectl get servicemappings.connectivityproxy.sap.com apigateway-mapping \
  -n oneconnect -o jsonpath='{.status.endpoint}'
```
The Strimzi operator must show 1/1 Running before Kafka resources are applied (Step 1). Check its logs:

```shell
kubectl logs -n kafka -l strimzi.io/kind=cluster-operator --tail=100
```
Istio is preventing all traffic that is not configured as mTLS. Apply a PERMISSIVE mode policy:
```shell
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: PERMISSIVE
EOF
```
Re-run with all required flags:
```shell
helm upgrade oneconnect . \
  --namespace oneconnect \
  --values values-aws.yaml \
  --set dockerHub.token=[DOCKER-TOKEN] \
  --set cloudConnector.enabled=true \
  --set cloudConnector.locationId="[LOCATION-ID]" \
  --set cloudConnector.subaccountId="[SUBACCOUNT-ID]"
```
```shell
kubectl get pods -n oneconnect
kubectl get services -n oneconnect
```
```shell
kubectl get pods -n kyma-system | grep connectivity
kubectl logs -n kyma-system -l app=connectivity-proxy --tail=50
kubectl rollout restart statefulset connectivity-proxy -n kyma-system
```
```shell
kubectl get servicemappings.connectivityproxy.sap.com -n oneconnect

kubectl get servicemappings.connectivityproxy.sap.com apigateway-mapping \
  -n oneconnect -o jsonpath='{.status.endpoint}'
```
```shell
kubectl get serviceinstances,servicebindings -n oneconnect
```
```
kafka-cluster-kafka-brokers.kafka.svc.cluster.local:9092
http://schema-registry.kafka.svc.cluster.local:8081
```
```shell
kubectl get pods -n observability
kubectl port-forward svc/grafana 3000:3000 -n observability
```
| Term | What it means |
|---|---|
| Kubernetes (K8s) | System for running and managing containerized applications — the orchestrator for your app containers. |
| Kyma | SAP-managed Kubernetes environment running in SAP BTP (Business Technology Platform). |
| Namespace | A logical partition inside Kubernetes — like a separate workspace. e.g. kafka, oneconnect, observability. |
| Strimzi | A Kubernetes operator that makes it easy to deploy and manage Apache Kafka clusters. |
| Kafka | A distributed messaging system that enables data streaming between services. |
| Helm | A package manager for Kubernetes. Installs a full app with one command instead of many YAML files. |
| Pod | The smallest deployable unit in Kubernetes: one or more containers that run and are scheduled together. |
| SAP Cloud Connector | A secure tunnel agent installed in your on-premise network. Bridges local SAP systems to SAP BTP without opening inbound firewall ports. |
| Istio | A service mesh that adds encrypted traffic control between pods. Required for Cloud Connector access. |
| ServiceMapping | A Kyma resource that links a Kubernetes service to the Cloud Connector tunnel. |
| Observability | A monitoring stack providing dashboards, logs, and distributed tracing. |
| Docker Hub | Container image registry. An "app store" for Docker images. Requires an access token. |
| AKHQ | A web-based dashboard for Kafka — browse topics, inspect messages, monitor consumers. |
| Kafka Connect | A Kafka component that runs connectors to external systems such as databases. |
What if pods do not start?
Always verify that Strimzi is running (1/1) before installing Kafka. For other pods, check logs with kubectl logs [pod-name] -n [NAMESPACE]. Common causes: Docker token is wrong or expired, or a previous Helm release was not properly uninstalled.
Why am I getting ACCESS_DENIED?
The Istio sidecar is either not injected or the Location ID does not match between your Helm install and the Cloud Connector configuration. Check that the namespace has the istio-injection=enabled label and that the Location ID is identical in both places (case-sensitive).
How do I know the tunnel is working?
Run a curl command from the CC server to the local port with the appropriate Host header. Any HTTP response — even a 404 or 302 — confirms the tunnel is active. "Empty reply from server" means the tunnel is not working.
Is the Observability stack mandatory?
It is optional from a technical standpoint, but highly recommended for production deployments. It consumes additional cluster resources, so confirm with the customer whether to include it.
Where do I find the Subaccount ID?
Log in to the SAP BTP Cockpit and navigate to your Subaccount overview page. The Subaccount ID (a UUID) is displayed in the header area of the subaccount page.
How do I test if the tunnel is working end to end?
Run the curl test in Step 7.7 from the CC server. Any HTTP response (even 404) confirms the tunnel is active. Then create the SM59 destination in SAP and call a known OneConnect endpoint to confirm application-level connectivity.