Add markdown images (#16)
cxzl25 authored Feb 1, 2024
1 parent 3bca1a0 commit 509c4c0
Showing 8 changed files with 13 additions and 13 deletions.
6 changes: 3 additions & 3 deletions specification/ORCv0.md
@@ -27,7 +27,7 @@ include the minimum and maximum values for each column in each set of
file reader can skip entire sets of rows that aren't important for
this query.
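
The context above describes how a reader uses the per-column minimum and maximum values recorded for each set of rows to skip data that cannot match a query. A minimal sketch of that idea in plain Python, with hypothetical names rather than ORC project code:

```python
# Illustrative only: decide whether a set of rows can be skipped for an
# equality predicate, using the column's recorded min/max statistics.
from dataclasses import dataclass

@dataclass
class ColumnStats:        # hypothetical stand-in for the index statistics
    minimum: int
    maximum: int

def can_skip(stats: ColumnStats, predicate_value: int) -> bool:
    """True if no row in this set can satisfy `column == predicate_value`."""
    return predicate_value < stats.minimum or predicate_value > stats.maximum

# A set of rows whose values lie in [100, 200] is irrelevant to `WHERE col = 42`.
print(can_skip(ColumnStats(100, 200), 42))   # True
print(can_skip(ColumnStats(100, 200), 150))  # False
```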

-![ORC file structure](/img/OrcFileLayout.png)
+![ORC file structure](./img/OrcFileLayout.png)

# File Tail

@@ -158,7 +158,7 @@ All of the rows in an ORC file must have the same schema. Logically
the schema is expressed as a tree as in the figure below, where
the compound types have subcolumns under them.

-![ORC column structure](/img/TreeWriters.png)
+![ORC column structure](./img/TreeWriters.png)

The equivalent Hive DDL would be:

@@ -381,7 +381,7 @@ for a chunk that compressed to 100,000 bytes would be [0x40, 0x0d,
that as long as a decompressor starts at the top of a header, it can
start decompressing without the previous bytes.
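
The hunk header above cites the header bytes for a chunk that compressed to 100,000 bytes. A small sketch, assuming the 3-byte header layout defined elsewhere in this specification (compressed length shifted left one bit, low bit flagging an uncompressed "original" chunk, stored little-endian); plain Python for illustration, not ORC project code:

```python
# Sketch of the 3-byte compression chunk header: (length << 1) | isOriginal,
# written as a little-endian integer.
def chunk_header(compressed_length: int, is_original: bool = False) -> bytes:
    value = (compressed_length << 1) | int(is_original)
    return value.to_bytes(3, byteorder="little")

# The cited example: a chunk that compressed to 100,000 bytes.
assert chunk_header(100_000) == bytes([0x40, 0x0D, 0x03])
```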

-![compression streams](/img/CompressionStream.png)
+![compression streams](./img/CompressionStream.png)

The default compression chunk size is 256K, but writers can choose
their own value. Larger chunks lead to better compression, but require
10 changes: 5 additions & 5 deletions specification/ORCv1.md
@@ -27,7 +27,7 @@ include the minimum and maximum values for each column in each set of
file reader can skip entire sets of rows that aren't important for
this query.

-![ORC file structure](/img/OrcFileLayout.png)
+![ORC file structure](./img/OrcFileLayout.png)

# File Tail

@@ -200,7 +200,7 @@ All of the rows in an ORC file must have the same schema. Logically
the schema is expressed as a tree as in the figure below, where
the compound types have subcolumns under them.

-![ORC column structure](/img/TreeWriters.png)
+![ORC column structure](./img/TreeWriters.png)

The equivalent Hive DDL would be:

@@ -619,7 +619,7 @@ for a chunk that compressed to 100,000 bytes would be [0x40, 0x0d,
that as long as a decompressor starts at the top of a header, it can
start decompressing without the previous bytes.

-![compression streams](/img/CompressionStream.png)
+![compression streams](./img/CompressionStream.png)

The default compression chunk size is 256K, but writers can choose
their own value. Larger chunks lead to better compression, but require
@@ -796,7 +796,7 @@ length of 4 (3) as [0x5e, 0x03, 0x5c, 0xa1, 0xab, 0x1e, 0xde, 0xad,
> Note: the run length (4) is off by one; we can get 4 by adding 1 to the stored 3
(See [Hive-4123](https://github.com/apache/hive/commit/69deabeaac020ba60b0f2156579f53e9fe46157a#diff-c00fea1863eaf0d6f047535e874274199020ffed3eb00deb897f513aa86f6b59R232-R236))
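
The off-by-one in the note can be checked against the first two header bytes of the example, 0x5e and 0x03. A small decoding sketch, assuming the Direct header layout described elsewhere in this specification (a 2-bit encoding tag and 5-bit width code in the first byte, followed by a 9-bit length-minus-one field); plain Python with a hypothetical helper name:

```python
# Sketch: recover the run length from the first two Direct header bytes.
# The low bit of the first byte is the high bit of a 9-bit (length - 1)
# field; the second byte holds the remaining 8 bits, hence the "+ 1".
def direct_run_length(b0: int, b1: int) -> int:
    stored = ((b0 & 0x01) << 8) | b1   # stored value is length - 1
    return stored + 1

assert direct_run_length(0x5E, 0x03) == 4   # stored 3, actual run length 4
```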

-![Direct](/img/Direct.png)
+![Direct](./img/Direct.png)

### Patched Base

@@ -1334,4 +1334,4 @@ Bloom filter streams are interlaced with row group indexes. This placement
makes it convenient to read the bloom filter stream and row index stream
together in a single read operation.
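
Because the row index and bloom filter streams for a column sit next to each other, a reader can cover both with one ranged read. A rough sketch of that idea using hypothetical stream descriptors, not the ORC reader API:

```python
# Sketch: the row index and bloom filter streams are adjacent, so a single
# byte range (and therefore a single read) covers both.
from typing import NamedTuple

class Stream(NamedTuple):     # hypothetical index-stream descriptor
    kind: str                 # e.g. "ROW_INDEX" or "BLOOM_FILTER"
    offset: int               # byte offset within the stripe
    length: int

def combined_range(row_index: Stream, bloom_filter: Stream) -> tuple[int, int]:
    """Return (offset, length) of one read spanning both streams."""
    start = min(row_index.offset, bloom_filter.offset)
    end = max(row_index.offset + row_index.length,
              bloom_filter.offset + bloom_filter.length)
    return start, end - start

print(combined_range(Stream("ROW_INDEX", 0, 120), Stream("BLOOM_FILTER", 120, 512)))
# -> (0, 632)
```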

-![bloom filter](/img/BloomFilter.png)
+![bloom filter](./img/BloomFilter.png)
10 changes: 5 additions & 5 deletions specification/ORCv2.md
@@ -47,7 +47,7 @@ include the minimum and maximum values for each column in each set of
file reader can skip entire sets of rows that aren't important for
this query.

-![ORC file structure](/img/OrcFileLayout.png)
+![ORC file structure](./img/OrcFileLayout.png)

# File Tail

@@ -220,7 +220,7 @@ All of the rows in an ORC file must have the same schema. Logically
the schema is expressed as a tree as in the figure below, where
the compound types have subcolumns under them.

-![ORC column structure](/img/TreeWriters.png)
+![ORC column structure](./img/TreeWriters.png)

The equivalent Hive DDL would be:

@@ -638,7 +638,7 @@ for a chunk that compressed to 100,000 bytes would be [0x40, 0x0d,
that as long as a decompressor starts at the top of a header, it can
start decompressing without the previous bytes.

-![compression streams](/img/CompressionStream.png)
+![compression streams](./img/CompressionStream.png)

The default compression chunk size is 256K, but writers can choose
their own value. Larger chunks lead to better compression, but require
@@ -815,7 +815,7 @@ length of 4 (3) as [0x5e, 0x03, 0x5c, 0xa1, 0xab, 0x1e, 0xde, 0xad,
> Note: the run length (4) is off by one; we can get 4 by adding 1 to the stored 3
(See [Hive-4123](https://github.com/apache/hive/commit/69deabeaac020ba60b0f2156579f53e9fe46157a#diff-c00fea1863eaf0d6f047535e874274199020ffed3eb00deb897f513aa86f6b59R232-R236))

-![Direct](/img/Direct.png)
+![Direct](./img/Direct.png)

### Patched Base

@@ -1350,4 +1350,4 @@ Bloom filter streams are interlaced with row group indexes. This placement
makes it convenient to read the bloom filter stream and row index stream
together in a single read operation.

-![bloom filter](/img/BloomFilter.png)
+![bloom filter](./img/BloomFilter.png)
Binary file added specification/img/BloomFilter.png
Binary file added specification/img/CompressionStream.png
Binary file added specification/img/Direct.png
Binary file added specification/img/OrcFileLayout.png
Binary file added specification/img/TreeWriters.png
