Parseable is an open source log observability platform. Written in Rust, it is designed for simplicity of deployment and use. It is compatible with standard logging agents via their HTTP output. Parseable also offers a built-in GUI for log query and analysis.
</div>
Parseable is a lightweight, cloud native log observability engine. It can use either a local drive or S3 (and compatible stores) for backend data storage.

Parseable is written in Rust and uses Apache Arrow and Parquet as its underlying data structures. It uses a simple, index-free mechanism to organize and query data, enabling low-latency, high-throughput ingestion and query.

Parseable consumes up to **_~80% lower memory_** and **_~50% lower CPU_** than Elastic for similar ingestion throughput.

## :rocket: Features

- Choose your own storage backend - local drive or S3 (or compatible) object store.
- Ingestion API compatible with HTTP + JSON output of log agents - [Fluentbit ↗︎](https://fluentbit.io/), [Vector ↗︎](http://vector.dev/), [Logstash ↗︎](https://www.elastic.co/logstash/) and others.
- Query log data with PostgreSQL compatible SQL.
- [Grafana ↗︎](https://github.com/parseablehq/parseable-datasource) data source for visualization.
- Auto schema inference (schema evolution [coming soon ↗︎](https://github.com/parseablehq/parseable/issues/195)).
- [Send alerts ↗︎](https://www.parseable.io/docs/api/alerts) to webhook targets including Slack.
- [Stats API ↗︎](https://www.postman.com/parseable/workspace/parseable/request/22353706-b32abe55-f0c4-4ed2-9add-110d265888c3) to track ingestion and compressed data.
- Single binary includes all components - ingestion, store and query. Built-in UI.
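
The storage bullet above mentions an S3 (or compatible) backend. Below is a hedged sketch of running against an S3-compatible store such as MinIO; the `s3-store` subcommand and `P_S3_*` variable names are assumptions based on the Parseable documentation and may differ across releases, and the endpoint, bucket, and credentials are placeholders:

```shell
# Illustrative only: point Parseable at an S3-compatible object store.
# Replace the endpoint, bucket, and credentials with your own values.
docker run -p 8000:8000 \
-v /tmp/parseable/staging:/parseable/staging \
-e P_S3_URL=http://minio:9000 \
-e P_S3_ACCESS_KEY=minioadmin \
-e P_S3_SECRET_KEY=minioadmin \
-e P_S3_REGION=us-east-1 \
-e P_S3_BUCKET=parseable \
-e P_STAGING_DIR=/parseable/staging \
parseable/parseable:latest \
parseable s3-store
```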

## :white_check_mark: Getting Started

Run the command below to deploy Parseable in local storage mode with Docker.
```sh
mkdir -p /tmp/parseable/data
mkdir -p /tmp/parseable/staging

docker run -p 8000:8000 \
-v /tmp/parseable/data:/parseable/data \
-v /tmp/parseable/staging:/parseable/staging \
-e P_FS_DIR=/parseable/data \
-e P_STAGING_DIR=/parseable/staging \
parseable/parseable:latest \
parseable local-store
```

Once this runs successfully, you'll see the dashboard at [http://localhost:8000](http://localhost:8000). You can log in to the dashboard with the default credentials `admin`, `admin`.
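
The API calls below authenticate with HTTP Basic auth using these same credentials (an assumption based on Parseable's defaults). A quick sketch of building the header value on the CLI, with standard `base64` assumed available:

```shell
# Encode the default admin:admin credentials for the Authorization header.
# printf (not echo) avoids including a trailing newline in the encoding.
AUTH=$(printf 'admin:admin' | base64)
echo "Authorization: Basic $AUTH"   # Basic YWRtaW46YWRtaW4=
```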

### Create a stream

```sh
curl --location --request PUT 'http://localhost:8000/api/v1/logstream/<stream-name>' \
--header 'Authorization: Basic YWRtaW46YWRtaW4='
```

### Send events to the stream

```sh
curl --location --request POST 'http://localhost:8000/api/v1/logstream/<stream-name>' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--header 'Content-Type: application/json' \
--data-raw '[
    {
        "level": "info",
        "message": "This is a sample log event"
    }
]'
```
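
Events are posted as a JSON array, and Parseable infers the schema from the keys you send. A small sketch to sanity-check a payload locally before posting it (field names here are illustrative; `python3` assumed available):

```shell
# Validate that a sample event array is well-formed JSON before ingestion.
printf '[{"level": "info", "message": "sample event"}]' | python3 -m json.tool
```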

### Query the stream

You can see the events in the Parseable UI, or use the curl command below to see the query response on the CLI.

NOTE: Change the `startTime` and `endTime` values to the time range corresponding to the events you sent in the previous step.
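
To construct such a time window on the command line, one approach uses GNU `date` (BSD/macOS `date` takes different flags, e.g. `-v-10M` instead of `-d '10 minutes ago'`):

```shell
# Build an RFC 3339 time range covering the last 10 minutes, in UTC.
START_TIME=$(date -u -d '10 minutes ago' +%Y-%m-%dT%H:%M:%S+00:00)
END_TIME=$(date -u +%Y-%m-%dT%H:%M:%S+00:00)
echo "$START_TIME"
echo "$END_TIME"
```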
```sh
curl --location --request POST 'http://localhost:8000/api/v1/query' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--header 'Content-Type: application/json' \
--data-raw '{
    "query": "select * from <stream-name>",
    "startTime": "2022-09-10T08:00:00+00:00",
    "endTime": "2022-09-10T09:00:00+00:00"
}'
```

## :dart: Motivation

Traditionally, logging has been seen as a text search problem. Log volumes were not high, and data ingestion and storage were not real issues. This has led to today's landscape, where most logging platforms are primarily text search engines.

But with log data growing exponentially, today's challenges involve a whole lot more: data ingestion, storage, and observability, all at scale. We are building Parseable to address these challenges.

## :stethoscope: Support

- For questions and feedback, please feel free to reach out to us on [Slack ↗︎](https://launchpass.com/parseable).
- For bugs, please create an issue on [GitHub ↗︎](https://github.com/parseablehq/parseable/issues).
- For commercial support and consultation, please reach out to us at [`hi@parseable.io` ↗︎](mailto:hi@parseable.io).
## :trophy: Contributing

Refer to the contributing guide [here ↗︎](https://www.parseable.io/docs/contributing).