SigLens 54x faster than ClickHouse
SigLens beats ClickHouse by a staggering 54x in the NYC Taxi Dataset performance test. The test involved processing over 1 billion records, and SigLens ran the benchmark queries up to 54x faster than ClickHouse.
In this blog we outline the detailed steps of this benchmark. You will learn how to ingest and query a massive dataset, and how to tune system settings along the way. Let's dive in.

Overview of the Test and Dataset

The objective of this benchmark is to assess the performance of these two databases under conditions that mirror real-world scenarios. Our approach was to run both databases on identical hardware, ingest the same dataset, and run the same set of queries. To get a realistic, large-scale workload, we opted for the NYC Taxi Dataset.

The NYC Taxi Dataset is a massive dataset of over 1 billion trips taken in NYC taxis and Ubers, all originating in New York City since 2009.

The queries used were as follows:

  • Q1: Find the number of rides, grouped by airport fee.
  • Q2: Find the average ride cost grouped by the number of passengers.
  • Q3: Find the number of rides grouped by the number of passengers and the pick-up location.
  • Q4: Find the number of rides grouped by the number of passengers, pick-up location, and trip distance.

Introduction to the Databases

ClickHouse:

ClickHouse is a column-oriented database. It compresses each column, yielding significant savings on storage. Its SQL-based query engine is fast, answering many queries in milliseconds. A growing number of observability companies use it as a backend database for storing logs, metrics, and traces.

SigLens:

SigLens is a column-oriented database built from scratch for observability data. It applies dynamic compression to each column and requires zero configuration.

SigLens uses columnar micro-indices that are 1/100th the size of a traditional database index, which reduces the number of machines needed. Its AgileAggregationTree algorithm makes aggregation queries run at sub-second speeds.

Test Setup

We used an AWS im4gn.2xlarge instance for both SigLens and ClickHouse. It had the following hardware configuration:
  • 8 vCPUs
  • 32 GB of RAM
  • 3.5 TB of storage
  • 10 Gigabit networking

Common steps for both:

Set up the server, mount the storage, and configure the AWS CLI by following these steps:

Step 1: Let's check whether the NVMe storage device is present on this server. SSH into your server and run lsblk. You should see something like the following, with the nvme1n1 device showing 3.4 TB of storage.

NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0          7:0    0  21.3M  1 loop /snap/amazon-ssm-agent/7529
loop1          7:1    0  49.1M  1 loop /snap/core18/2794
loop2          7:2    0  59.3M  1 loop /snap/core20/2019
loop3          7:3    0 109.6M  1 loop /snap/lxd/24326
loop4          7:4    0  35.5M  1 loop /snap/snapd/20102
nvme0n1      259:0    0     8G  0 disk
├─nvme0n1p1  259:2    0   7.9G  0 part /
└─nvme0n1p15 259:3    0    99M  0 part /boot/efi
nvme1n1      259:1    0   3.4T  0 disk

Step 2: Let's format the storage device, create a mount point, and mount it.

sudo mkfs.xfs /dev/nvme1n1
sudo mkdir /mnt/nvme1n1
sudo mount /dev/nvme1n1 /mnt/nvme1n1
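
Note: The mount above won't persist across a reboot. If you want it to, one option is an fstab entry along these lines (a minimal sketch; it assumes the device keeps the /dev/nvme1n1 name, otherwise use its UUID from blkid):

# Optional: persist the mount across reboots (assumes the device name stays /dev/nvme1n1)
echo '/dev/nvme1n1 /mnt/nvme1n1 xfs defaults,nofail 0 2' | sudo tee -a /etc/fstab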

Step 3: Verify Storage

You can check that it's mounted by running df -h and you should see something like this:

Filesystem       Size  Used Avail Use% Mounted on
/dev/root        7.6G  1.5G  6.2G  20% /
tmpfs             16G     0   16G   0% /dev/shm
tmpfs            6.2G  948K  6.2G   1% /run
tmpfs            5.0M     0  5.0M   0% /run/lock
/dev/nvme0n1p15   98M  6.3M   92M   7% /boot/efi
tmpfs            3.1G  4.0K  3.1G   1% /run/user/1000
/dev/nvme1n1     3.5T   25G  3.4T   1% /mnt/nvme1n1

Step 4: Update Permissions

sudo chmod 777 /mnt/nvme1n1

Step 5: Configure AWS CLI

Note: This AWS config step is only needed if you are using a private S3 bucket. If you use our public S3 bucket, you can skip this step.

sudo apt-get install awscli -y
aws configure

Cleanup and Prepare Dataset:

Note: You can skip this step and use our S3 bucket, which already has the dataset in the correct format. If you want to prepare the dataset yourself, follow these steps; otherwise jump directly to the benchmark steps and use our S3 bucket.

mkdir -p /mnt/nvme1n1/data
cd /mnt/nvme1n1/data
Download the dataset from https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page into the above directory. We used the 2011-2017 yellow taxi trip Parquet files.
Next, you need to convert the parquet files to TSV.
git clone git@github.com:siglens/siglens
python -m venv taxis
source taxis/bin/activate
pip install -r siglens/tools/nyc-taxi-benchmark/requirements.txt
python siglens/tools/nyc-taxi-benchmark/parquet_to_tsv.py . .
The above command takes two parameters: input_dir and output_dir. It reads *.parquet files from input_dir, converts them to TSV files, and writes them to output_dir.
ClickHouse will use the TSV files, but for SigLens we'll use JSON, so let's convert the TSV files to JSON now. Save the following in a bash script and run it, or run the commands directly in a terminal.
for year in {2011..2017}; do
    for month in {01..12}; do
        basefile="yellow_tripdata_$year-$month"
        go run siglens/tools/sigclient/cmd/utils/converter.go --input "$basefile.tsv" --output "$basefile.json" &
    done
done
wait
Finally, compress the TSV and JSON files with gzip and upload them to your AWS S3 bucket; a sketch of this step follows. We uploaded the TSV and JSON files to separate directories to make it easier to download all files of one type.
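
Here is a rough sketch of that compress-and-upload step. The bucket name and prefixes are placeholders; adjust them to your own bucket and layout:

# Compress every converted TSV and JSON file in the current directory
for f in yellow_tripdata_*.tsv yellow_tripdata_*.json; do
    gzip "$f" &
done
wait

# Upload TSV and JSON files to separate prefixes (placeholder bucket name)
aws s3 cp . s3://your-benchmark-bucket/tsv/ --recursive --exclude "*" --include "*.tsv.gz"
aws s3 cp . s3://your-benchmark-bucket/json/ --recursive --exclude "*" --include "*.json.gz"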

Benchmark ClickHouse

Install ClickHouse

We will use the precompiled apt packages for ClickHouse and install them.
# Prepare to install ClickHouse
sudo apt-get install -y apt-transport-https ca-certificates dirmngr
GNUPGHOME=$(mktemp -d)
sudo GNUPGHOME="$GNUPGHOME" gpg --no-default-keyring \
    --keyring /usr/share/keyrings/clickhouse-keyring.gpg \
    --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 8919F6BD2B48D754
sudo rm -r "$GNUPGHOME"
sudo chmod +r /usr/share/keyrings/clickhouse-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb stable main" | sudo tee \
    /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
# Install ClickHouse server and client
sudo apt-get install -y clickhouse-server clickhouse-client
You should get the prompt Enter password for default user:. Either set a password or just press Enter for no password.

Configure the ClickHouse data folder

This is an optional step to specify where ClickHouse should store its data. We did this so that both ClickHouse and SigLens would use the 3.5 TB storage space.
Use sudo vim /etc/clickhouse-server/config.xml to change the line <path>/var/lib/clickhouse/</path> to <path>/mnt/nvme1n1/clickhouse/</path>
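
If you change the path, the new directory needs to exist and be owned by the ClickHouse service user before you start the server. Assuming the standard clickhouse user and group created by the apt packages, something like this:

# Create the new data directory and hand it to the clickhouse service user
# (assumes the clickhouse user/group created by the apt packages)
sudo mkdir -p /mnt/nvme1n1/clickhouse
sudo chown -R clickhouse:clickhouse /mnt/nvme1n1/clickhouse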

Run ClickHouse

sudo service clickhouse-server start

Make the ClickHouse Table

We will use the ClickHouse client to create a table.
clickhouse-client
CREATE TABLE trips (
    VendorID Int32,
    tpep_pickup_datetime DateTime,
    tpep_dropoff_datetime DateTime,
    passenger_count Int32,
    trip_distance Float32,
    RatecodeID Int32,
    store_and_fwd_flag FixedString(1),
    PULocationID Int32,
    DOLocationID Int32,
    payment_type FixedString(3),
    fare_amount Float32,
    extra Float32,
    mta_tax Float32,
    tip_amount Float32,
    tolls_amount Float32,
    improvement_surcharge Float32,
    total_amount Float32,
    congestion_surcharge Float32,
    airport_fee Float32)
ENGINE = MergeTree()
ORDER BY (tpep_pickup_datetime)
SETTINGS index_granularity=8192
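
If you want to double-check the schema before ingesting, an optional sanity check from a regular shell (add --password if you set one during installation) is:

# Optional: confirm the table and its column types
clickhouse-client --query "DESCRIBE TABLE trips"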

Ingest Data into ClickHouse

Let's start ingesting data into ClickHouse. We will use the compressed TSV files we created earlier and let ClickHouse download them from the S3 bucket.
INSERT INTO trips
SELECT
    VendorID,
    tpep_pickup_datetime,
    tpep_dropoff_datetime,
    passenger_count,
    trip_distance,
    RatecodeID,
    store_and_fwd_flag,
    PULocationID,
    DOLocationID,
    payment_type,
    fare_amount,
    extra,
    mta_tax,
    tip_amount,
    tolls_amount,
    improvement_surcharge,
    total_amount,
    congestion_surcharge,
    airport_fee
FROM s3(
    's3://siglens-benchmark-datasets/nyc-taxi-benchmark-data/tsv/yellow_tripdata_{2011..2017}-{01..12}.tsv.gz',
    'your_aws_access_key_id',
    'your_aws_secret_access_key',
    'TabSeparatedWithNames'
);
Note: In the above step, AWS credentials are only needed if you are using a private S3 bucket. If you use our public S3 bucket, you can remove those two lines.
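
Once the insert finishes, it's worth a quick sanity check that the rows landed before benchmarking. One way, from a regular shell:

# Should report on the order of a billion rows for the 2011-2017 yellow taxi data
clickhouse-client --query "SELECT count(*) FROM trips"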

Run Queries against ClickHouse

Now let's run the four benchmark queries described earlier.
# Query 1
SELECT airport_fee, count(*) FROM trips GROUP BY airport_fee
# Query 2
SELECT passenger_count, avg(total_amount) FROM trips GROUP BY passenger_count
# Query 3
SELECT passenger_count, PULocationID, count(*) FROM trips GROUP BY passenger_count, PULocationID
# Query 4
SELECT passenger_count, PULocationID, trip_distance, count(*)
FROM trips
GROUP BY passenger_count, PULocationID, trip_distance
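
To reproduce our timing methodology (each query run five times, lowest taken), one option is a small loop around clickhouse-client with --time, which prints each run's execution time to stderr. A sketch using Q2 as the example:

# Run Q2 five times; --time prints the elapsed time of each run to stderr
for i in {1..5}; do
    clickhouse-client --time --query "SELECT passenger_count, avg(total_amount) FROM trips GROUP BY passenger_count" > /dev/null
done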

ClickHouse Query Results

We ran each query five times and took the lowest time from those runs.
     Try 1     Try 2     Try 3     Try 4     Try 5     Lowest
Q1   357 ms    346 ms    349 ms    417 ms    358 ms    346 ms
Q2   2,642 ms  2,389 ms  2,427 ms  2,597 ms  2,349 ms  2,349 ms
Q3   2,675 ms  2,108 ms  2,112 ms  2,294 ms  2,098 ms  2,098 ms
Q4   8,240 ms  7,850 ms  8,211 ms  7,850 ms  7,807 ms  7,807 ms

Benchmark SigLens

You'll want three terminals. Terminal 1 will run SigLens, Terminal 2 will do some setup and view the logs, and Terminal 3 will send the queries. Terminal 3 can run on your local machine if you set up the server to accept HTTP traffic, but Terminals 1 and 2 should be on the server. Start with Terminal 1.

Install Go and SigLens

SigLens is written in Go. In this step we will install Go, clone SigLens, and start it.
sudo apt update
sudo apt install golang -y
git clone https://github.com/siglens/siglens.git
cd siglens

## Start SigLens
sudo go run cmd/siglens/main.go --config server.yaml
Wait until SigLens is running. You'll see these lines in the terminal once it's up:
INFO[2023-12-06 18:10:38] Extracting config from configFile: server.yaml
INFO[2023-12-06 18:10:38] Defaulting to 2160hrs (90 days) of retention...
INFO[2023-12-06 18:10:38] ----- Siglens server type SingleNode starting up -----
INFO[2023-12-06 18:10:38] ----- Siglens Ingestion server starting on 0.0.0.0:8081 -----
INFO[2023-12-06 18:10:38] ----- Siglens Query server starting on 0.0.0.0:5122 -----
INFO[2023-12-06 18:10:38] ----- Siglens UI starting on 0.0.0.0:5122 -----

Prepare Agile Aggregations

With a continuously running ingest, you don't need this step. However, since this is a one-time ingestion of a static dataset, we need to set up the aggregation columns up front. SigLens creates aggregation trees during ingestion using these columns, and those trees are then used during query processing.
Switch to Terminal 2 and run the following:
curl -X POST -d '{
    "tableName": "trips",
    "groupByColumns": ["airport_fee", "passenger_count", "PULocationID", "trip_distance"],
    "measureColumns": ["total_amount"]
}' http://localhost:5122/api/pqs/aggs
echo ""
You should get this response:
{"message":"All OK","status":200}

Ingest data into SigLens

Now let's ingest the data into SigLens. In Terminal 2, run:
cd /mnt/nvme1n1/siglens/tools/sigclient 
mkdir dataset
Run the following script to download, decompress, and ingest the data into SigLens.
for year in {2011..2017}; do
    for month in {01..12}; do
        {
            basefile="yellow_tripdata_$year-$month"
            aws s3 cp s3://siglens-benchmark-datasets/nyc-taxi-benchmark-data/json/$basefile.json.gz dataset/
            gunzip dataset/$basefile.json.gz
            python3 ../nyc-taxi-benchmark/ingester.py dataset/$basefile.json
        } &
    done
    wait
done

Run Queries against SigLens

Run the following in Terminal 3. If Terminal 3 is on your local machine, make sure to replace localhost with the IP of the server. You can remove the | python3 -m json.tool if you want; it just formats the JSON response. Check the log file siglens.log for the query times.
curl -X POST -d '{
    "searchText": "SELECT airport_fee, count(*) FROM trips GROUP BY airport_fee",
    "indexName": "trips",
    "startEpoch": "now-24h",
    "endEpoch": "now",
    "queryLanguage": "SQL"
}' http://localhost:5122/api/search | python3 -m json.tool

curl -X POST -d '{
    "searchText": "SELECT passenger_count, avg(total_amount) FROM trips GROUP BY passenger_count",
    "indexName": "trips",
    "startEpoch": "now-24h",
    "endEpoch": "now",
    "queryLanguage": "SQL"
}' http://localhost:5122/api/search | python3 -m json.tool

curl -X POST -d '{
    "searchText": "SELECT passenger_count, PULocationID, count(*) FROM trips GROUP BY passenger_count, PULocationID",
    "indexName": "trips",
    "startEpoch": "now-24h",
    "endEpoch": "now",
    "queryLanguage": "SQL"
}' http://localhost:5122/api/search | python3 -m json.tool

curl -X POST -d '{
    "searchText": "SELECT passenger_count, PULocationID, trip_distance, count(*) FROM trips GROUP BY passenger_count, PULocationID, trip_distance",
    "indexName": "trips",
    "startEpoch": "now-24h",
    "endEpoch": "now",
    "queryLanguage": "SQL"
}' http://localhost:5122/api/search | python3 -m json.tool

SigLens Query Results:

We ran each query five times and took the lowest time from those runs.
     Try 1     Try 2     Try 3      Try 4     Try 5     Lowest
Q1   60 ms     72 ms     63 ms      41 ms     42 ms     41 ms
Q2   43 ms     43 ms     44 ms      43 ms     43 ms     43 ms
Q3   116 ms    304 ms    442 ms     509 ms    471 ms    116 ms
Q4   9,815 ms  9,229 ms  10,316 ms  9,093 ms  9,092 ms  9,092 ms

Result Comparison

As you have seen, we used the exact same hardware, the exact same dataset, and the exact same queries. We took the lowest of the five runs for each query type for both databases and compiled the results in this table:
Query Type   ClickHouse   SigLens    Comparison
Q1           346 ms       41 ms      SigLens 8x faster than ClickHouse
Q2           2,349 ms     43 ms      SigLens 54x faster than ClickHouse
Q3           2,098 ms     116 ms     SigLens 18x faster than ClickHouse
Q4           7,807 ms     9,092 ms   ClickHouse 1.2x faster than SigLens
SigLens proves to be 8x-54x faster on 3 out of 4 query types. These speed advantages come from its dynamic columnar compression, dynamic columnar micro-indexing, and the AgileAggregationTree. These techniques help your filter and aggregation queries on complex datasets run at lightning-fast speeds.

Try SigLens in less than 1 minute

SigLens installs in less than a minute with Docker or a Helm chart. Give it a try and experience the difference.
Try SigLens
GitHub Repo

Benchmark Questions?

Both of these databases were given the exact same treatment; no unfair advantage was given to SigLens. You can also look at the ClickHouse numbers in this blog by Mark Litwintschik, who has run this exact same test on various ClickHouse clusters and other databases. Our ClickHouse numbers and Mark's are very close.
You can verify these test results yourself by following the detailed steps we have outlined above. If you feel something is incorrect, please submit an issue or a pull request at https://github.com/siglens/siglens.