HANA Sink Connector Setup
v1.0 · 2026
0 | Prerequisites |
Before starting, make sure you have the following access and information available.
Required Access
Access to AKHQ (Kafka administration UI)
HANA connection details: URL, schema, user and password
1 | Set Up the HANA Sink Connector in AKHQ |
Step 1.1 — Navigate to Connect and create a new definition
In the left panel, click on "connect". Then click the "Create a Definition" button.
Step 1.2 — Select the connector type
From the Types dropdown, select:
com.onibex.connect.hana.jdbc.OnibexHanaSinkConnector [2.2.4]
Step 1.3 — Enter the connector name
In the Name field, enter the connector name following this naming convention:
HANADB_[CLIENT]_[ENVIRONMENT]_[SUFFIX]
Note: The team will define the name, client, and environment before delivering the connector configuration. Use the exact name provided.
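As an illustration of the naming convention, a minimal helper that assembles the three parts (the client, environment, and suffix values below are hypothetical examples, not values to use; always use the exact name the team provides):

```python
def connector_name(client: str, environment: str, suffix: str) -> str:
    """Build a name following the HANADB_[CLIENT]_[ENVIRONMENT]_[SUFFIX] convention."""
    return f"HANADB_{client.upper()}_{environment.upper()}_{suffix.upper()}"

# Example with made-up parts:
print(connector_name("acme", "dev", "orders"))  # HANADB_ACME_DEV_ORDERS
```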
Step 1.4 — Enter JDBC URL and complete the JSON configuration
Enter the HANA JDBC connection URL in the JDBC URL field. Then, in the text editor area, paste the following JSON configuration. Replace the placeholder values with your actual environment information:
{
  "transforms": "ExtractTimestamp",
  "topics.regex": "YOURPREFIX_DEV_T.*",
  "transforms.ExtractTimestamp.type": "org.apache.kafka.connect.transforms.InsertField$Value",
  "transforms.ExtractTimestamp.timestamp.field": "event_ts",
  "onibex.license": "eyJhbGciOiJSUzI1NiJ9...........",
  "connection.url": "jdbc:sap://REPLACE_URL.hanacloud.ondemand.com:443/?encrypt=true&validateCertificate=true&currentSchema=REPLACE_SCHEMA",
  "connection.user": "REPLACE_USER",
  "connection.password": "REPLACE_PASSWORD",
  "insert.mode": "UPSERT",
  "delete.enabled": "true",
  "batch.size": "100",
  "table.name.format": "${topic}",
  "pk.mode": "record_key",
  "auto.create": "true",
  "auto.evolve": "true",
  "offset.flush.interval.ms": "10000",
  "auto.offset.reset": "latest"
}

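Before submitting, it can help to confirm that the pasted text is valid JSON and that no REPLACE_* placeholders were left unfilled. A minimal sketch (the config snippet below is abbreviated to the connection fields for illustration):

```python
import json
import re

config_text = """
{
  "connection.url": "jdbc:sap://REPLACE_URL.hanacloud.ondemand.com:443/?encrypt=true",
  "connection.user": "REPLACE_USER",
  "connection.password": "REPLACE_PASSWORD"
}
"""

def unfilled_placeholders(text: str) -> list:
    """Return any REPLACE_* placeholders still present in the config text."""
    json.loads(text)  # raises json.JSONDecodeError if the JSON is malformed
    return sorted(set(re.findall(r"REPLACE_[A-Z]+", text)))

print(unfilled_placeholders(config_text))
# ['REPLACE_PASSWORD', 'REPLACE_URL', 'REPLACE_USER']
```

An empty list means every placeholder has been replaced and the JSON parses cleanly.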

Step 1.5 — Create the connector
Once all fields are filled in, click the "Create" button to submit the connector configuration. AKHQ will validate and register the connector.
Note: After creation, verify that the connector status shows RUNNING in the AKHQ Connect view.
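AKHQ manages connectors through the Kafka Connect REST API, whose `GET /connectors/<name>/status` endpoint returns the connector and task states as JSON. A hedged sketch of checking such a response for RUNNING status (the sample payload below is illustrative, not a real response from your cluster):

```python
def is_healthy(status: dict) -> bool:
    """True when the connector and all of its tasks report the RUNNING state."""
    connector_running = status["connector"]["state"] == "RUNNING"
    tasks_running = all(task["state"] == "RUNNING" for task in status["tasks"])
    return connector_running and tasks_running

# Illustrative response shape from GET /connectors/<name>/status
sample = {
    "name": "HANADB_ACME_DEV_ORDERS",
    "connector": {"state": "RUNNING", "worker_id": "10.0.0.1:8083"},
    "tasks": [{"id": 0, "state": "RUNNING", "worker_id": "10.0.0.1:8083"}],
}
print(is_healthy(sample))  # True
```

If the connector shows FAILED instead, the same endpoint includes a stack trace in the task entry, which is usually the fastest way to diagnose the problem.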
2 | Quick Verification |
Confirm each step was completed successfully.
Step | Component | Expected Status |
1 | Connect panel | "connect" option visible in left panel |
2 | Connector type | com.onibex.connect.hana.jdbc.OnibexHanaSinkConnector selected |
3 | Connector name | Name entered following the naming convention |
4 | JSON configuration | All REPLACE_* fields filled with actual values |
5 | Connector status | Status RUNNING in AKHQ |
Glossary
Term | Definition |
AKHQ | Web-based administration UI for Apache Kafka. Used to manage connectors, topics, and consumers. |
SAP HANA | SAP's in-memory database. The destination where the connector writes data. |
Kafka | Distributed messaging and event-streaming platform. Source of data for the HANA Sink connector. |
HANA Sink Connector | Kafka Connect plugin that receives events from Kafka topics and inserts them into SAP HANA tables. |
JDBC URL | Connection string used to establish a link to the SAP HANA database (includes host, port, schema). |
topics.regex | Pattern matching Kafka topic names. The connector subscribes to all topics matching this regex. |
UPSERT | Insert mode that inserts new records or updates existing ones based on the primary key. |
Environment | Execution environment identifier. Defined by the team before delivering files to the client. |
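As a quick illustration of how topics.regex selects topics: the connector subscribes to every topic whose full name matches the pattern. The sketch below mimics that matching in Python (the topic names are hypothetical; Kafka matches against the entire topic name, hence fullmatch):

```python
import re

# Pattern from the sample configuration above
pattern = re.compile(r"YOURPREFIX_DEV_T.*")

# Hypothetical topic names for illustration
topics = ["YOURPREFIX_DEV_T001", "YOURPREFIX_DEV_TORDERS", "YOURPREFIX_PRD_T001", "OTHER_TOPIC"]
subscribed = [t for t in topics if pattern.fullmatch(t)]
print(subscribed)  # ['YOURPREFIX_DEV_T001', 'YOURPREFIX_DEV_TORDERS']
```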