EventNative ❤️ ClickHouse — and the easiest way to automate data collection on-prem
January 9, 2021
Data has become an invaluable asset that helps companies understand users, predict behavior, and identify trends. EventNative is our open-source core designed to simplify event data collection. EventNative supports a few data warehouses as storage backends, and ClickHouse is one of them.
ClickHouse is the first open source SQL data warehouse to match the performance, maturity, and scalability of proprietary databases like Vertica and Snowflake.
This article shows how to set up EventNative with ClickHouse and gives operational advice on how to achieve the best performance and reliability.
Getting data into ClickHouse is not as easy a task as it seems. Streaming millions of events from different applications, where each event has its own structure, can be very challenging. Things become much more complicated when different versions of the same application are running in production (such as different versions of an iOS app).
EventNative’s architecture is very efficient and robust. It consists of a lightweight HTTP server that accepts an incoming event stream (JSON objects) and buffers it to local disk. A separate thread takes care of processing the buffer, mapping JSON to ClickHouse tables, adjusting the schema, and storing the data.
ClickHouse and EventNative quick-start
In this section we’ll configure a single node installation of ClickHouse and EventNative using official Docker images.
Note that this is a dev setup to get things going. In production scenarios you would want to deploy multiple EventNative nodes and enable ClickHouse replication to ensure data availability as well as scale throughput.
1. Pull latest Docker images
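For example (the exact EventNative image name and tag are assumptions — check Docker Hub for the current ones; `yandex/clickhouse-server` was the official ClickHouse image at the time of writing):

```shell
# Image names are assumptions; verify the current names on Docker Hub
docker pull yandex/clickhouse-server:latest
docker pull ksense/eventnative:latest
```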
2. Start ClickHouse
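A minimal single-node invocation might look like this (exposing the HTTP and native ports is standard; the container name `clickhouse` is our choice and is reused in later commands):

```shell
# Start a single ClickHouse node, exposing the HTTP (8123) and native (9000) ports
docker run -d --name clickhouse \
    -p 8123:8123 -p 9000:9000 \
    --ulimit nofile=262144:262144 \
    yandex/clickhouse-server:latest
```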
3. Configure EventNative
Put the following content to ./eventnative.yaml
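A minimal configuration sketch is shown below. Key names follow the EventNative configuration documentation, but verify them against your version; the token value `demo_token`, the destination name `clickhouse_demo`, and the DSN are placeholders you should adjust to your setup.

```yaml
# Minimal single-destination configuration (a sketch; see the EventNative
# configuration docs for the full list of options)
server:
  auth:
    - demo_token               # placeholder API token; replace with your own

destinations:
  clickhouse_demo:
    clickhouse:
      dsns:
        - "http://default:@localhost:8123/default"
      db: default
```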
Also, create a directory for logs: mkdir ./eventnative-logs
4. Start EventNative
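The command below is a sketch: the in-container config and log paths are assumptions based on the EventNative docs, so check them against your image. `--network host` lets the container reach ClickHouse on `localhost:8123` (Linux only; on macOS or Windows point the DSN at `host.docker.internal` instead).

```shell
# Mount the config and the log directory created earlier
# (in-container paths are assumptions; verify against the EventNative docs)
docker run -d --name eventnative --network host \
    -v "$(pwd)/eventnative.yaml:/home/eventnative/app/res/eventnative.yaml" \
    -v "$(pwd)/eventnative-logs:/home/eventnative/logs/events" \
    ksense/eventnative:latest
```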
5. Send test event and check that it landed in ClickHouse
Put the following JSON to ./api.json:
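An illustrative payload (the field names here are arbitrary sample data, not a required schema):

```json
{
  "event_type": "pageview",
  "event_id": "1",
  "product_id": "abc-1",
  "price": 19.99
}
```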
Run the following command:
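For example (the `/api/v1/s2s/event` endpoint path, the default port 8001, and the token are assumptions based on the EventNative docs and the config above; the destination table name depends on your table name template — `events` is an assumption):

```shell
# Send the test event to EventNative
curl -X POST 'http://localhost:8001/api/v1/s2s/event?token=demo_token' \
    -H 'Content-Type: application/json' \
    --data @./api.json

# Check that it landed in ClickHouse
docker exec clickhouse clickhouse-client \
    --query 'SELECT count() FROM default.events'
```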
You’ll see one event in the database. The test worked!
6. Test event buffering
One of the core features of EventNative is event buffering. Events are written to an internal queue with disk persistence. If a destination (ClickHouse in our case) is down, data won’t be lost! It will be kept locally until ClickHouse is up again.
Let’s test this feature.
Put the following JSON to ./api2.json:
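A second illustrative event, identical in shape to the first but with a new id (again, sample data):

```json
{
  "event_type": "pageview",
  "event_id": "2",
  "product_id": "abc-2",
  "price": 29.99
}
```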
Now let’s test buffering.
1. Shutdown ClickHouse:
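Assuming the container name from the quick-start:

```shell
docker stop clickhouse
```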
2. Send an event:
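Using the same (assumed) endpoint, port, and token as before:

```shell
curl -X POST 'http://localhost:8001/api/v1/s2s/event?token=demo_token' \
    -H 'Content-Type: application/json' \
    --data @./api2.json
```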
3. Verify that ClickHouse is down:
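ClickHouse exposes a `/ping` endpoint over HTTP; with the container stopped, this request should fail to connect:

```shell
curl http://localhost:8123/ping
```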
4. Start ClickHouse again:
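```shell
docker start clickhouse
```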
5. Wait for 60 seconds, then verify that event hasn’t been lost:
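Re-run the count query (table name `events` is an assumption, as above); the count should now include the event sent while ClickHouse was down:

```shell
docker exec clickhouse clickhouse-client \
    --query 'SELECT count() FROM default.events'
```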
If you see the event in the last step, the test succeeded.
Schema management with EventNative and ClickHouse
EventNative is designed to be a schema-less component in your stack. This means you don’t have to create table schemas in advance and maintain them. EventNative takes care of it automatically! Each incoming JSON field will be mapped to a SQL column. If a corresponding column is missing, it will be created in ClickHouse automatically.
This is particularly useful when one engineering team is in charge of the event structure and another team operates ClickHouse. For example, a frontend developer may start by sending very simple data to track product page views (product_id and price), and add more sophisticated fields (currency, images) later.
Automatically created table structure
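As a sketch of what the auto-created table might look like for the test event above (the actual column names, types, and service fields depend on the EventNative version — this is not literal EventNative output):

```sql
-- A sketch: JSON fields become typed columns, plus service fields
-- such as the event timestamp
CREATE TABLE default.events
(
    event_type String,
    event_id   String,
    product_id String,
    price      Float64,
    _timestamp DateTime
)
ENGINE = ReplacingMergeTree
ORDER BY event_id;
```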
Mapping configuration details
EventNative can be configured to apply particular transformations to incoming JSON objects such as:
- Remove fields
- Rename fields (including moving element to another node)
- Explicitly defining the SQL type of the node
- Setting a constant
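A sketch of what such a per-destination mapping section might look like; the key names follow the EventNative mapping documentation, but verify them against your version — the field paths here are illustrative:

```yaml
# A mapping sketch (verify key names against your EventNative version)
destinations:
  clickhouse_demo:
    data_layout:
      mappings:
        fields:
          - src: /debug_info        # remove a field
            action: remove
          - src: /payload/price     # rename / move a field to another node
            dst: /price
            action: move
            type: Float64           # explicitly define the SQL type
          - dst: /source            # set a constant value
            action: constant
            value: web
```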
Mapped and flattened JSON
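Nested JSON objects are flattened into underscore-joined column names. For example, an incoming event like:

```json
{
  "event_type": "pageview",
  "user": { "id": "u1", "geo": { "country": "US" } }
}
```

would produce the columns event_type, user_id, and user_geo_country (illustrative field names).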
See a full description of this feature in the documentation.
ReplacingMergeTree (or ReplicatedReplacingMergeTree) is the best choice for data produced by EventNative. Here’s why:
- Usually, data produced by EventNative is used in aggregation queries, such as counting events per period that satisfy filtering conditions. The MergeTree engine family shows great performance for aggregation queries.
- ReplacingMergeTree (unlike ordinary MergeTree) has a nice side effect: data deduplication. Often, mistakes are found in data after it has been loaded, and sometimes a replay is required. Since EventNative can optionally keep a local copy of the data for a while, it’s possible to write a script that fixes the data and sends it to EventNative again. ReplacingMergeTree will avoid duplication, provided each event has a unique id and that id is used as the sorting key.
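The deduplication behavior can be illustrated with a generic ClickHouse example (not EventNative-generated DDL):

```sql
-- Illustration of ReplacingMergeTree deduplication
CREATE TABLE events_demo
(
    event_id   String,
    _timestamp DateTime
)
ENGINE = ReplacingMergeTree
ORDER BY event_id;

INSERT INTO events_demo VALUES ('1', now());
INSERT INTO events_demo VALUES ('1', now());   -- replayed duplicate

-- Rows with the same sorting key collapse during background merges;
-- FINAL forces a merged (deduplicated) result at query time
SELECT count() FROM events_demo FINAL;         -- returns 1
```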
If the destination table is missing, EventNative will create it with the ReplacingMergeTree engine (or ReplicatedReplacingMergeTree if the cluster size is greater than 1). However, it’s possible to configure the engine manually. Please read more about table creation in the documentation.