
Commit 4ede737

committed: links questdb for documentation
1 parent e691497 commit 4ede737

File tree

1 file changed: +8, -130 lines

readme.md

Lines changed: 8 additions & 130 deletions

# QuestDB Sink connector for Apache Kafka
The connector reads data from Kafka topics and writes it to [QuestDB](https://questdb.io/) tables.
The connector implements the Apache Kafka [Sink Connector API](https://kafka.apache.org/documentation/#connect_development).

## Pre-requisites
* QuestDB 6.5.0 or newer
* Apache Kafka 2.8.0 or newer, running on Java 11 or newer.

## Documentation
Documentation is maintained on [QuestDB.io](https://questdb.io/docs/third-party-tools/kafka/questdb-kafka/).

## Usage with Kafka Connect
This guide assumes you are already familiar with Apache Kafka and Kafka Connect. If you are not, watch this [excellent video](https://www.youtube.com/watch?v=Jkcp28ki82k) or check our [sample projects](kafka-questdb-connector-samples).

1. [Download](https://github.com/questdb/kafka-questdb-connector/releases/latest) and unpack the connector ZIP into the Apache Kafka `./libs/` directory.
2. Start Kafka Connect in distributed mode.
3. Create a connector configuration:
```json
{
  "name": "questdb-sink",
  "config": {
    "connector.class": "io.questdb.kafka.QuestDBSinkConnector",
    "host": "localhost:9009",
    "topics": "Orders",
    "table": "orders_table",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "include.key": "false"
  }
}
```
4. Submit the configuration to Kafka Connect.
5. From now on, JSON entries sent to the Kafka topic `Orders` will be written to the QuestDB table `orders_table`.

## Sample Projects
This repository contains a number of [sample projects](kafka-questdb-connector-samples) showing how to use the connector, including how to use it together with Debezium for Change Data Capture.

## Configuration
The connector supports the following options:

| Name | Type | Example | Default | Meaning |
|------|------|---------|---------|---------|
| topics | STRING | orders | N/A | Topics to read from |
| key.converter | STRING | <sub>org.apache.kafka.connect.storage.StringConverter</sub> | N/A | Converter for keys stored in Kafka |
| value.converter | STRING | <sub>org.apache.kafka.connect.json.JsonConverter</sub> | N/A | Converter for values stored in Kafka |
| host | STRING | localhost:9009 | N/A | Host and port where QuestDB server is running |
| table | STRING | my_table | Same as topic name | Target table in QuestDB |
| key.prefix | STRING | from_key | key | Prefix for key fields |
| value.prefix | STRING | from_value | N/A | Prefix for value fields |
| skip.unsupported.types | BOOLEAN | false | false | Skip unsupported types |
| timestamp.field.name | STRING | pickup_time | N/A | Designated timestamp field name |
| timestamp.units | STRING | micros | auto | Designated timestamp field units |
| include.key | BOOLEAN | false | true | Include message key in target table |
| symbols | STRING | instrument,stock | N/A | Comma-separated list of columns that should be symbol type |
| doubles | STRING | volume,price | N/A | Comma-separated list of columns that should be double type |
| username | STRING | user1 | admin | User name for QuestDB. Used only when token is non-empty |
| token | STRING | <sub>QgHCOyq35D5HocCMrUGJinEsjEscJlCp7FZQETH21Bw</sub> | N/A | Token for QuestDB authentication |
| tls | BOOLEAN | true | false | Use TLS for QuestDB connection |
| retry.backoff.ms | LONG | 1000 | 3000 | Connection retry interval in milliseconds |
| max.retries | LONG | 1 | 10 | Maximum number of connection retry attempts |
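
As a sketch of how several of these options combine, the following hypothetical configuration (host, topic, table, and column names are made up) sets a designated timestamp, marks some columns as symbol and double types, and enables TLS with token authentication:
```json
{
  "name": "questdb-sink",
  "config": {
    "connector.class": "io.questdb.kafka.QuestDBSinkConnector",
    "host": "questdb.example.com:9009",
    "topics": "trades",
    "table": "trades_table",
    "timestamp.field.name": "trade_ts",
    "symbols": "instrument,stock",
    "doubles": "volume,price",
    "tls": "true",
    "username": "user1",
    "token": "<your-token>",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "include.key": "false"
  }
}
```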

## Supported serialization formats
The connector does not deserialize data on its own. It relies on Kafka Connect converters to deserialize data. It has been tested predominantly with JSON, but it should work with any converter, including Avro. Converters can be configured using the `key.converter` and `value.converter` options; see the table above.
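
For example, to consume Avro-encoded values with a Schema Registry, the value-converter entries in the connector's `config` block could look like the sketch below. This assumes Confluent's Avro converter is available on the Kafka Connect classpath, and the registry URL is a placeholder:
```json
{
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter.schema.registry.url": "http://localhost:8081"
}
```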

## How it works
The connector reads data from Kafka topics and writes it to QuestDB tables. It converts each field in the Kafka message to a column in the QuestDB table. Structs and maps are flattened into columns.

Example: Consider the following Kafka message:
```json
{
  "firstname": "John",
  "lastname": "Doe",
  "age": 30,
  "address": {
    "street": "Main Street",
    "city": "New York"
  }
}
```

The connector will create a table with the following columns:

| firstname <sub>string</sub> | lastname <sub>string</sub> | age <sub>long</sub> | address_street <sub>string</sub> | address_city <sub>string</sub> |
|-----------------------------|----------------------------|---------------------|----------------------------------|--------------------------------|
| John                        | Doe                        | 30                  | Main Street                      | New York                       |

## Designated Timestamps
The connector supports designated timestamps. If a message contains a field with a timestamp, the connector can use it as the timestamp for the row. The field name must be configured using the `timestamp.field.name` option. The field must either be a plain integer or be labelled as a timestamp in the message schema. When it is a plain integer, the connector autodetects its units; this works for timestamps after 4/26/1970, 5:46:40 PM. The units can also be configured explicitly using the `timestamp.units` option. Supported configuration values are `nanos`, `micros`, `millis` and `auto`.
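
As a minimal sketch, assuming a hypothetical field `pickup_time` that stores microseconds since the epoch, the relevant `config` entries would be:
```json
{
  "timestamp.field.name": "pickup_time",
  "timestamp.units": "micros"
}
```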

## QuestDB Symbol Type
QuestDB supports a special type called [Symbol](https://questdb.io/docs/concept/symbol/). The connector does not create `SYMBOL` columns on its own; by default it creates `STRING` columns. If you want the `SYMBOL` type, either list the columns in the `symbols` option (see the configuration table above) or pre-create the table in QuestDB and use it as the target table.

## Target Table Considerations
When a target table does not exist in QuestDB, it is created automatically when the first row arrives. This is the recommended approach for development and testing.

In production, it is recommended to [create tables manually via SQL](https://questdb.io/docs/reference/sql/create-table/). This gives you more control over the table schema and allows per-table partitioning, creating indexes, etc.

## Distribution
Releases are published on GitHub: https://github.com/questdb/kafka-questdb-connector/releases/
It's also available on [Confluent Hub](https://www.confluent.io/hub/questdb/kafka-questdb-connector).

## Issues
If you encounter any issues, please [create an issue](https://github.com/questdb/kafka-questdb-connector/issues/new) in this repository.

## FAQ
<b>Q</b>: Does this connector work with Schema Registry?
<br/>
<b>A</b>: The connector does not care about the serialization strategy used. It relies on Kafka Connect converters to deserialize data. Converters can be configured using the `key.converter` and `value.converter` options; see the configuration section.

<b>Q</b>: I'm getting this error: `org.apache.kafka.connect.errors.DataException: JsonConverter with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. If you are trying to deserialize plain JSON data, set schemas.enable=false in your converter configuration.`
<br/>
<b>A</b>: This error means that the connector is trying to deserialize data using a converter that expects a schema. The connector does not use schemas, so you need to configure the converter not to expect one. For example, if you are using the JSON converter, set `value.converter.schemas.enable=false` or `key.converter.schemas.enable=false` in the connector configuration.
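
For example, for plain JSON values the relevant `config` entries would be:
```json
{
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "false"
}
```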

<b>Q</b>: Does this connector work with Debezium?
<br/>
<b>A</b>: Yes, it has been tested with Debezium as a source. Bear in mind that QuestDB is meant to be used as an append-only database, hence updates should be translated into new inserts. The connector supports Debezium's `ExtractNewRecordState` transformation to extract the new state of the record. The transform drops DELETE events by default, so there is no need to handle them explicitly.

<b>Q</b>: How can I select which fields to include in the target table?
<br/>
<b>A</b>: Use the ReplaceField transformation to remove unwanted fields. For example, if you want to remove the `address` field from the example above, you can use the following configuration:
```json
{
  "name": "questdb-sink",
  "config": {
    "connector.class": "io.questdb.kafka.QuestDBSinkConnector",
    "host": "localhost:9009",
    "topics": "Orders",
    "table": "orders_table",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "transforms": "unwrap,removeAddress",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "transforms.removeAddress.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
    "transforms.removeAddress.blacklist": "address",
    "include.key": "false"
  }
}
```
See the [ReplaceField documentation](https://docs.confluent.io/platform/current/connect/transforms/replacefield.html#replacefield) for more details.
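
If you would rather list the fields to keep than the fields to drop, ReplaceField also supports the inverse option (`whitelist` in older Kafka versions, `include` in newer ones). A hypothetical sketch of just the transform entries, using the field names from the flattening example above:
```json
{
  "transforms": "keepFields",
  "transforms.keepFields.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
  "transforms.keepFields.whitelist": "firstname,lastname,age"
}
```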

<b>Q</b>: I need to run Kafka Connect on Java 8, but the connector says it requires Java 11. What should I do?
<br/>
<b>A</b>: The Kafka Connect-specific part of the connector works with Java 8. The Java 11 requirement comes from the QuestDB client itself. The zip archive contains 2 JARs: `questdb-kafka-connector-<version>.jar` and `questdb-<version>.jar`. You can replace the latter with `questdb-<version>-jdk8.jar` from [Maven Central](https://mvnrepository.com/artifact/org.questdb/questdb/6.5.3-jdk8). Bear in mind this setup is not officially supported and you may encounter issues. If you do, please report them to us.

<b>Q</b>: QuestDB is a time-series database, how does it fit into Change Data Capture via Debezium?
<br/>
<b>A</b>: QuestDB works great with Debezium! This is the recommended pattern: transactional applications use a relational database to store the current state of the data, while QuestDB stores the history of changes. Example: imagine you have a Postgres table with the most recent stock prices. Whenever a stock price changes, an application updates the Postgres table. Debezium captures each UPDATE/INSERT and pushes it as an event to Kafka. The Kafka Connect QuestDB connector reads the events and inserts them into QuestDB. This way Postgres has the most recent stock prices and QuestDB has the history of changes. You can use QuestDB to build a dashboard with the most recent stock prices and a chart with the history of changes.
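
A sketch of the sink side of this pattern, assuming a hypothetical Debezium topic `dbserver1.public.stock_prices` and using the `ExtractNewRecordState` transform mentioned above (the source-side Debezium connector is configured separately, and converter settings are omitted here because they must match how the source serialized the events):
```json
{
  "name": "questdb-stock-prices-sink",
  "config": {
    "connector.class": "io.questdb.kafka.QuestDBSinkConnector",
    "host": "localhost:9009",
    "topics": "dbserver1.public.stock_prices",
    "table": "stock_prices",
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "include.key": "false"
  }
}
```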

## License
This project is licensed under the Apache License 2.0. See [LICENSE](LICENSE) for details.
