HANA Sink Connector Setup

 v1.0 · 2026 

 

0  

Prerequisites  

 

Before starting, make sure you have the following access and information available. 


Required Access 

  • Access to AKHQ (Kafka administration UI) 

  • HANA connection details: URL, schema, user and password  


1  

Set Up the HANA Sink Connector in AKHQ 

 

Step 1.1 — Navigate to Connect and create a new definition  

In the left panel, click "connect". Then click the "Create a Definition" button.  



Step 1.2 — Select the connector type

From the Types dropdown, select:

                           com.onibex.connect.hana.jdbc.OnibexHanaSinkConnector  [2.2.4]  


 


Step 1.3 — Enter the connector name 

In the Name field, enter the connector name following this naming convention: 


HANADB_[CLIENT]_[ENVIRONMENT]_[SUFFIX] 



Note: The team will define the name, client, and environment before delivering the connector configuration. Use the exact name provided. 

 

Step 1.4 — Enter JDBC URL and complete the JSON configuration 

Enter the HANA JDBC connection URL in the JDBC URL field. Then, in the text editor area, paste the following JSON configuration. Replace the placeholder values with your actual environment information: 

    {
        "transforms": "ExtractTimestamp",
        "topics": "",
        "topics.regex": "YOURPREFIX_DEV_T.*",
        "transforms.ExtractTimestamp.type": "org.apache.kafka.connect.transforms.InsertField$Value",
        "transforms.ExtractTimestamp.timestamp.field": "event_ts",
        "onibex.license": "eyJhbGciOiJSUzI1NiJ9...........",
        "connection.url": "jdbc:sap://REPLACE_URL.hanacloud.ondemand.com:443/?encrypt=true&validateCertificate=true&currentSchema=REPLACE_SCHEMA",
        "connection.user": "REPLACE_USER",
        "connection.password": "REPLACE_PASSWORD",
        "insert.mode": "UPSERT",
        "delete.enabled": "true",
        "batch.size": "100",
        "table.name.format": "${topic}",
        "pk.mode": "record_key",
        "auto.create": "true",
        "auto.evolve": "true",
        "offset.flush.interval.ms": "10000",
        "auto.offset.reset": "latest"
    }

Note: Replace all REPLACE_* placeholder values with the actual credentials for your environment (DEV/QA/PROD). Sensitive credentials (URL, user, password) must be handled securely and never shared publicly. 


 


Step 1.5 — Create the connector 

Once all fields are filled in, click the "Create" button to submit the connector configuration. AKHQ will validate and register the connector.


Note: After creation, verify that the connector status shows RUNNING in the AKHQ Connect view.


Quick Verification 

Confirm each step was completed successfully. 

Step   Component            Expected Status 

1      Connect panel        "connect" option visible in left panel 
2      Connector type       com.onibex.connect.hana.jdbc.OnibexHanaSinkConnector selected 
3      Connector name       Name entered following the naming convention 
4      JSON configuration   All REPLACE_* fields filled with actual values 
5      Connector status     Status RUNNING in AKHQ 

 

Glossary 


Term — Definition 

AKHQ — Web-based administration UI for Apache Kafka. Used to manage connectors, topics, and consumers. 

SAP HANA — SAP's in-memory database. The destination where the connector writes data. 

Kafka — Distributed messaging and event-streaming platform. Source of data for the HANA Sink connector. 

HANA Sink Connector — Kafka Connect plugin that receives events from Kafka topics and inserts them into SAP HANA tables. 

JDBC URL — Connection string used to establish a link to the SAP HANA database (includes host, port, schema). 

topics.regex — Pattern matching Kafka topic names. The connector subscribes to all topics matching this regex. 

UPSERT — Insert mode that inserts new records or updates existing ones based on the primary key. 

Environment — Execution environment identifier. Defined by the team before delivering files to the client. 
