kafka-questdb-connector-samples/stocks/readme.md
### Postgres
The docker-compose file starts a Postgres container. It uses a container image provided by the Debezium project, which maintains a Postgres image preconfigured for Debezium.
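A service entry of roughly this shape would pull the Debezium-maintained image; the tag, credentials, and port mapping below are illustrative assumptions, not copied from the sample:

```yaml
services:
  postgres:
    # Debezium-maintained Postgres image, preconfigured for change data capture
    image: debezium/postgres:15
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"
```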
### Java stock price updater
It's a Spring Boot application which, during startup, creates a table in Postgres and populates it with initial data.
You can see the SQL executed in the [schema.sql](src/main/resources/schema.sql) file. The table always has exactly one row per stock symbol.
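The real DDL lives in the schema.sql file linked above; purely as a sketch, a table with one row per symbol could look like this (all column names and types here are assumptions, not the sample's actual schema):

```sql
-- hypothetical sketch; see src/main/resources/schema.sql for the real schema
CREATE TABLE IF NOT EXISTS stock (
    id          SERIAL PRIMARY KEY,
    symbol      VARCHAR(10) NOT NULL UNIQUE,  -- one row per stock symbol
    price       NUMERIC     NOT NULL,
    last_update TIMESTAMP   NOT NULL
);
```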
The application is built and packaged as a container image when executing `docker-compose build`. An excerpt of its service definition in the docker-compose file shows the dependency on Postgres:

```
      - postgres:postgres
```

### Debezium Postgres connector
Debezium is an open-source project which provides connectors for various databases. It is used to capture changes from a database and feed them to a Kafka topic. In other words: whenever there is a change in a database table, Debezium reads the change and feeds it to a Kafka topic. This way it translates operations such as INSERT or UPDATE into events which can be consumed by other systems. Debezium supports a wide range of databases; in this sample we use the Postgres connector.
The Debezium Postgres connector is implemented as a Kafka Connect source connector. Inside the [docker-compose file](docker-compose.yml) it's called `connect` and its container image is also built during `docker-compose build`. The [Dockerfile](../../Dockerfile-Samples) is based on a Debezium image, which contains the Kafka Connect runtime and the Debezium connectors. Our Dockerfile amends it with the Kafka Connect QuestDB Sink.
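A Dockerfile of this shape would achieve that; the base-image tag and the jar path are illustrative assumptions, not the sample's actual file:

```dockerfile
FROM debezium/connect:2.2
# Assumption for this sketch: copy the QuestDB sink jar into Kafka Connect's
# plugin path, which is /kafka/connect in the Debezium images.
COPY kafka-questdb-connector/*.jar /kafka/connect/questdb-connector/
```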
It uses the Kafka Connect REST interface to start a new connector with a given configuration.
Most of the fields are self-explanatory. The only non-obvious one is `database.server.name`: a unique logical name of the database server. Kafka Connect uses it to store offsets, so it must be unique per database server; if you have multiple Postgres databases, use a different `database.server.name` for each. Debezium also uses it to generate Kafka topic names as `database.server.name`.`schema`.`table`, which in our case yields `dbserver1.public.stock`.
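For illustration, a registration payload of roughly this shape could be POSTed via `curl` to the Connect REST API on port 8083; every value below is an assumption for the sketch, not the sample's actual configuration:

```json
{
  "name": "debezium_source",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "postgres",
    "database.server.name": "dbserver1",
    "table.include.list": "public.stock"
  }
}
```

With `database.server.name` set to `dbserver1`, changes to `public.stock` end up in the topic `dbserver1.public.stock`.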
### Kafka QuestDB connector
The Kafka QuestDB connector reuses the same Kafka Connect runtime as the Debezium connector. It's also started using a `curl` command. This is how we started the QuestDB connector:
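A sink configuration of roughly this shape would be POSTed to the same Connect REST API; the connector name, topic, table, and option names below are assumptions for this sketch, not the sample's actual command:

```json
{
  "name": "questdb-connect",
  "config": {
    "connector.class": "io.questdb.kafka.QuestDBSinkConnector",
    "topics": "dbserver1.public.stock",
    "host": "questdb:9009",
    "table": "stock"
  }
}
```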
We cannot feed a full change object to the Kafka Connect QuestDB Sink, because the sink would create a column for every field in it, including Debezium's metadata. We do not want to create columns in QuestDB for all this metadata; we only want columns for the actual data. This is where the `ExtractNewRecordState` transform comes to the rescue! It extracts only the actual new data from the overall change object and feeds just this small part to the QuestDB sink. The end result is that each INSERT and UPDATE in Postgres inserts a new row in QuestDB.
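In connector-configuration terms, the transform is enabled with a pair of properties like the following; the transform alias `unwrap` is an arbitrary name chosen for this sketch:

```json
{
  "transforms": "unwrap",
  "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState"
}
```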
### QuestDB
QuestDB is a fast, open-source time-series database. It uses SQL for querying and adds a bit of syntactic sugar on top of SQL to make it easier to work with time-series data. It implements the Postgres wire protocol, so many existing tools can connect to it.
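As one example of that syntax sugar, QuestDB's `LATEST ON` clause returns the most recent row per key without a self-join; the table and column names here are assumptions for the sketch:

```sql
-- most recent price per symbol; assumes a designated timestamp column
SELECT * FROM stock
LATEST ON timestamp PARTITION BY symbol;
```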
### Grafana
Grafana is a popular open-source tool for visualizing time-series data. It can be used to visualize data from QuestDB. There is no native QuestDB datasource for Grafana, but there is a Postgres datasource, and we can use it to connect to QuestDB. Grafana is provisioned with a dashboard that visualizes the data from QuestDB in a candlestick chart. The chart is configured to execute this query:
```sql
SELECT
    ...
SAMPLE BY 5s ALIGN TO CALENDAR;
```
And this is then used by the candlestick chart to visualize the data.
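A candlestick chart needs open, high, low, and close aggregates per time bucket. The query body is elided in this excerpt, but a hypothetical query of that shape, with assumed table and column names, could look like:

```sql
SELECT
    timestamp,
    first(price) AS open,
    max(price)   AS high,
    min(price)   AS low,
    last(price)  AS close
FROM stock
SAMPLE BY 5s ALIGN TO CALENDAR;
```

`SAMPLE BY 5s` groups rows into 5-second buckets, and `ALIGN TO CALENDAR` snaps the buckets to wall-clock boundaries.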
At this point you should have a good understanding of the architecture. If the explanation above is unclear, please [open a new issue](https://github.com/questdb/kafka-questdb-connector/issues/new).