Scales linearly with your needs, from a workload of 8 TB/day on 8 vCPUs to 1 PB/day on 800 vCPUs.
Search on any field and any sub-text of your log line, with support for wildcards and regex.
Create new fields from a log line's sub-text and use them in later stages of your pipeline query (see the sketch after this feature list).
Innovative MicroIndexing lets you index at lightning-fast speeds.
Many query languages supported, including Splunk QL, Elastic DSL, SQL, and Loki LogQL.
Wide variety of ingestion protocols supported: OpenTelemetry, Elasticsearch, Splunk HEC, Loki, Vector, FluentD/FluentBit, Logstash, S3/SQS/SNS, Promtail (an Elasticsearch-bulk example follows the feature list).
Quick one-minute install with multiple installation methods: Docker, Helm.
Single binary solution that reduces operational burden and increases reliability.
One UI. One database for logs, metrics and traces.
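To make the search and read-time field extraction features above concrete, here is a minimal, self-contained Python sketch of the idea: a regex pulls a new field out of a log line's sub-text at query time, and a later stage filters and aggregates on it. The log format and field names are invented for illustration; this is not the product's internal implementation.

```python
import re
from statistics import mean

raw_logs = [
    "payment failed for order 1234, latency=950ms",
    "payment succeeded for order 1235, latency=120ms",
    "payment succeeded for order 1236, latency=180ms",
]

# Stage 1: extract a new field ("latency_ms") from a sub-text of
# each log line at read time -- no re-ingestion or schema change.
LATENCY = re.compile(r"latency=(\d+)ms")

def extract(line):
    m = LATENCY.search(line)
    return {"raw": line, "latency_ms": int(m.group(1))} if m else None

events = [e for e in map(extract, raw_logs) if e]

# Stage 2: use the derived field in a later pipeline stage,
# e.g. filter on it and then aggregate.
slow = [e for e in events if e["latency_ms"] > 150]
print(f"{len(slow)} slow events, "
      f"avg latency {mean(e['latency_ms'] for e in slow):.0f}ms")
```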
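Similarly, since the ingestion side speaks the Elasticsearch protocol, shipping logs over the bulk API might look like the sketch below. The endpoint URL, port, and index name are assumptions for illustration only; substitute whatever your deployment actually exposes.

```python
import json
import requests  # third-party: pip install requests

# Hypothetical Elasticsearch-compatible bulk endpoint; adjust the
# host, port, and path to match your deployment.
BULK_URL = "http://localhost:8081/elastic/_bulk"

logs = [
    {"timestamp": "2024-01-01T12:00:00Z", "level": "ERROR",
     "message": "payment failed for order 1234, latency=950ms"},
    {"timestamp": "2024-01-01T12:00:01Z", "level": "INFO",
     "message": "payment succeeded for order 1235, latency=120ms"},
]

# The bulk protocol is newline-delimited JSON: an action line,
# then the document itself, terminated by a trailing newline.
lines = []
for doc in logs:
    lines.append(json.dumps({"index": {"_index": "app-logs"}}))
    lines.append(json.dumps(doc))
body = "\n".join(lines) + "\n"

resp = requests.post(BULK_URL, data=body,
                     headers={"Content-Type": "application/x-ndjson"})
resp.raise_for_status()
print(resp.json())
```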
| Aspect | Loki | This solution |
|---|---|---|
| Indexing | Indexes only metadata fields, not log lines | Indexes all fields, including log lines |
| Query Response Time | Takes several minutes | Sub-second query response times |
| Remote Storage | Uses S3 for remote storage; queries incur very high S3 round-trip costs | Micro-index is kept locally; the most recent data stays local (highly compressed), older data moves to S3, and micro-indexing keeps S3 pulls minimal (see the sketch after this table) |
| Aggregation Queries | Very slow aggregation queries | Lightning-fast aggregation queries thanks to the AgileAggsTree innovation |
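How can pulling from S3 stay minimal? The micro-index format itself is not described here, but one plausible mental model is a set of tiny local block summaries that are consulted before any remote read, so most S3 objects are never touched. The Python sketch below illustrates that pruning idea only; it is an assumption, not the actual design.

```python
from dataclasses import dataclass

# One plausible way block-level summaries can prune remote reads
# (an illustration only -- not the actual micro-index format).
@dataclass
class BlockSummary:
    block_id: str        # key of the compressed block in S3
    t_min: int           # earliest timestamp in the block
    t_max: int           # latest timestamp in the block
    tokens: frozenset    # distinct tokens seen in the block

# Tiny local summaries for blocks whose payloads live in S3.
summaries = [
    BlockSummary("blk-001", 100, 199, frozenset({"payment", "error"})),
    BlockSummary("blk-002", 200, 299, frozenset({"login", "info"})),
    BlockSummary("blk-003", 300, 399, frozenset({"payment", "info"})),
]

def blocks_to_fetch(term, t_from, t_to):
    """Consult local summaries first; only blocks that can possibly
    match the query are pulled from S3."""
    return [s.block_id for s in summaries
            if s.t_max >= t_from and s.t_min <= t_to and term in s.tokens]

# A query for "payment" in [150, 350] touches 2 of 3 blocks;
# blk-002 is skipped without any S3 round trip.
print(blocks_to_fetch("payment", 150, 350))  # ['blk-001', 'blk-003']
```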
| Aspect | Elasticsearch | This solution |
|---|---|---|
| Index Size | Inverted index that grows to 110% of incoming volume | MicroIndexing technology creates an index 1/100th the size of a conventional index |
| Hardware | Requires a cluster of machines for larger indices | 100x lower storage and compute resources |
| Query Speed | Slow query times, especially on log-line sub-texts, and very slow aggregation queries | 1025x faster search/filter/aggregation queries |
| Reliability | Cluster prone to turning yellow/red due to regular unforced restarts | Lower probability of issues thanks to a simple single-binary architecture |
| Aspect | ClickHouse | This solution |
|---|---|---|
| Storage & Compression | Requires predefined table engines (e.g., MergeTree) | Dynamic columnar compression algorithms with zero configuration |
| Aggregations | Requires materialized views to be predefined, which increases compute costs | AgileAggsTree delivers fast aggregation queries with zero configuration (sketched after this table) |
| Operations | Operational overhead from predefined engines and materialized views | Dynamic approaches reduce operational overhead |
| Ingestion | Achieves faster ingestion via bulk updates (not ideal for a constant stream of log data) | Efficient ingestion speeds |
| Field Extraction | Does not support read-time field extraction | Supports read-time field extraction |
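The AgileAggsTree data structure is proprietary and not specified here, but the general technique it names, maintaining running aggregates at ingest time so that GROUP BY-style queries never rescan raw rows, can be sketched in a few lines of Python. Everything below (class names, fields, events) is illustrative only.

```python
from collections import defaultdict

# A toy illustration of ingest-time pre-aggregation (not the actual
# AgileAggsTree structure): running (count, sum, min, max) per group
# key are maintained as events arrive, so a later GROUP BY style
# query is answered without rescanning raw rows.
class AggNode:
    __slots__ = ("count", "total", "lo", "hi")
    def __init__(self):
        self.count, self.total = 0, 0
        self.lo, self.hi = float("inf"), float("-inf")
    def add(self, v):
        self.count += 1
        self.total += v
        self.lo, self.hi = min(self.lo, v), max(self.hi, v)

tree = defaultdict(AggNode)

def ingest(event):
    # Update the pre-aggregated node for this group key at write time.
    tree[event["service"]].add(event["latency_ms"])

for ev in [{"service": "api", "latency_ms": 120},
           {"service": "api", "latency_ms": 950},
           {"service": "db", "latency_ms": 40}]:
    ingest(ev)

# "SELECT service, avg(latency_ms), max(latency_ms) GROUP BY service"
# answered directly from the pre-aggregated nodes:
for svc, n in tree.items():
    print(svc, n.total / n.count, n.hi)
```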
| Aspect | Parquet-on-S3 engines | This solution |
|---|---|---|
| Architecture | Converts incoming data to Parquet files stored in S3 and serves queries via Apache Arrow memory-mapped files | An innovative approach that balances faster ingestion, faster queries, and lower hardware requirements |
| Query Response Time | Suffers on query response times, especially when querying Parquet files that are not memory-mapped | Search/filter queries perform several orders of magnitude faster |
| Impact of Old and New Data | A repeated drop/pull loop between old and new data degrades query response times and adds repeated S3 round trips | No impact on query times, thanks to the balanced approach |
| Compute | Nodes must be allocated to handle queries, increasing infrastructure cost | Lower compute requirements: several orders of magnitude less compute usage |