Commit 229f690: format&clean dist
Parent: a614e7b

19 files changed: +192 -169 lines

hugegraph-server/hugegraph-dist/docker/README.md  (+25 -14)

@@ -1,22 +1,27 @@
 # Deploy Hugegraph server with docker

 > Note:
->
-> 1. The docker image of hugegraph is a convenience release, not official distribution artifacts from ASF. You can find more details from [ASF Release Distribution Policy](https://infra.apache.org/release-distribution.html#dockerhub).
->
-> 2. Recommand to use `release tag`(like `1.2.0`) for the stable version. Use `latest` tag to experience the newest functions in development.
+>
+> 1. The docker image of hugegraph is a convenience release, not official distribution artifacts
+from ASF. You can find more details
+from [ASF Release Distribution Policy](https://infra.apache.org/release-distribution.html#dockerhub).
+>
+> 2. Recommand to use `release tag`(like `1.2.0`) for the stable version. Use `latest` tag to
+experience the newest functions in development.

 ## 1. Deploy

 We can use docker to quickly start an inner HugeGraph server with RocksDB in background.

 1. Using docker run

-Use `docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph` to start hugegraph server.
+Use `docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph` to start hugegraph server.

 2. Using docker compose

-Certainly we can only deploy server without other instance. Additionally, if we want to manage other HugeGraph-related instances with `server` in a single file, we can deploy HugeGraph-related instances via `docker-compose up -d`. The `docker-compose.yaml` is as below:
+Certainly we can only deploy server without other instance. Additionally, if we want to manage
+other HugeGraph-related instances with `server` in a single file, we can deploy HugeGraph-related
+instances via `docker-compose up -d`. The `docker-compose.yaml` is as below:

 ```yaml
 version: '3'
@@ -29,18 +34,22 @@ We can use docker to quickly start an inner HugeGraph server with RocksDB in bac

 ## 2. Create Sample Graph on Server Startup

-If you want to **pre-load** some (test) data or graphs in container(by default), you can set the env `PRELOAD=ture`
+If you want to **pre-load** some (test) data or graphs in container(by default), you can set the
+env `PRELOAD=ture`

 If you want to customize the pre-loaded data, please mount the the groovy scripts (not necessary).

 1. Using docker run

-Use `docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true -v /path/to/yourScript:/hugegraph/scripts/example.groovy hugegraph/hugegraph`
-to start hugegraph server.
+Use `docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true -v /path/to/yourScript:/hugegraph/scripts/example.groovy hugegraph/hugegraph`
+to start hugegraph server.

-2. Using docker compose
+2. Using docker compose

-We can also use `docker-compose up -d` to quickly start. The `docker-compose.yaml` is below. [example.groovy](https://github.com/apache/incubator-hugegraph/blob/master/hugegraph-dist/src/assembly/static/scripts/example.groovy) is a pre-defined script. If needed, we can mount a new `example.groovy` to preload different data:
+We can also use `docker-compose up -d` to quickly start. The `docker-compose.yaml` is
+below. [example.groovy](https://github.com/apache/incubator-hugegraph/blob/master/hugegraph-dist/src/assembly/static/scripts/example.groovy)
+is a pre-defined script. If needed, we can mount a new `example.groovy` to preload different
+data:

 ```yaml
 version: '3'
@@ -57,17 +66,19 @@ If you want to customize the pre-loaded data, please mount the the groovy script

 3. Using start-hugegraph.sh

-If you deploy HugeGraph server without docker, you can also pass arguments using `-p`, like this: `bin/start-hugegraph.sh -p true`.
+If you deploy HugeGraph server without docker, you can also pass arguments using `-p`, like
+this: `bin/start-hugegraph.sh -p true`.

 ## 3. Enable Authentication

 1. Using docker run

-Use `docker run -itd --name=graph -p 8080:8080 -e AUTH=true -e PASSWORD=123456 hugegraph/hugegraph` to enable the authentication and set the password with `-e AUTH=true -e PASSWORD=123456`.
+Use `docker run -itd --name=graph -p 8080:8080 -e AUTH=true -e PASSWORD=123456 hugegraph/hugegraph`
+to enable the authentication and set the password with `-e AUTH=true -e PASSWORD=123456`.

 2. Using docker compose

-Similarly, we can set the envionment variables in the docker-compose.yaml:
+Similarly, we can set the envionment variables in the docker-compose.yaml:

 ```yaml
 version: '3'

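The `docker-compose.yaml` excerpts quoted in the README diff above are truncated at `version: '3'`. As a point of reference only, here is a minimal sketch that combines the options the README describes (port mapping, `PRELOAD`, `AUTH`/`PASSWORD`, and an optional script mount); the service layout is an assumption for illustration, not the compose file shipped with the project:

```yaml
# Illustrative sketch only -- not the docker-compose.yaml from the repository.
version: '3'
services:
  server:                      # assumed service name
    image: hugegraph/hugegraph
    container_name: graph
    ports:
      - "8080:8080"
    environment:
      - PRELOAD=true           # pre-load the sample graph on startup
      - AUTH=true              # enable authentication
      - PASSWORD=123456        # password used together with AUTH=true
    volumes:
      # optional: mount a custom groovy script to preload different data
      - /path/to/yourScript:/hugegraph/scripts/example.groovy
```

Starting it with `docker-compose up -d` is then equivalent to the corresponding `docker run` commands shown in the README.
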
hugegraph-server/hugegraph-dist/docker/example/docker-compose-cassandra.yml  (+2 -2)

@@ -33,7 +33,7 @@ services:
 depends_on:
 - cassandra
 healthcheck:
-test: ["CMD", "bin/gremlin-console.sh", "--" ,"-e", "scripts/remote-connect.groovy"]
+test: [ "CMD", "bin/gremlin-console.sh", "--" ,"-e", "scripts/remote-connect.groovy" ]
 interval: 10s
 timeout: 30s
 retries: 3
@@ -49,7 +49,7 @@ services:
 networks:
 - ca-network
 healthcheck:
-test: ["CMD", "cqlsh", "--execute", "describe keyspaces;"]
+test: [ "CMD", "cqlsh", "--execute", "describe keyspaces;" ]
 interval: 10s
 timeout: 30s
 retries: 5

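For orientation, the two reformatted `healthcheck` entries above belong to the HugeGraph and Cassandra services of this example compose file. Assembled into one sketch (service names and all omitted keys are assumptions, not the full file):

```yaml
# Illustrative sketch only -- assembled from the hunks above, other keys omitted.
services:
  hugegraph:                   # assumed service name
    depends_on:
      - cassandra
    healthcheck:
      test: [ "CMD", "bin/gremlin-console.sh", "--", "-e", "scripts/remote-connect.groovy" ]
      interval: 10s
      timeout: 30s
      retries: 3
  cassandra:
    networks:
      - ca-network
    healthcheck:
      test: [ "CMD", "cqlsh", "--execute", "describe keyspaces;" ]
      interval: 10s
      timeout: 30s
      retries: 5
```
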
hugegraph-server/hugegraph-dist/docker/scripts/remote-connect.groovy  (+2 -2)

@@ -15,5 +15,5 @@
 * under the License.
 */

-:remote connect tinkerpop.server conf/remote.yaml
-:> hugegraph
+: remote connect tinkerpop . server conf / remote.yaml
+: > hugegraph

hugegraph-server/hugegraph-dist/src/assembly/static/conf/gremlin-driver-settings.yaml  (+2 -2)

@@ -14,12 +14,12 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-hosts: [localhost]
+hosts: [ localhost ]
 port: 8182
 serializer: {
 className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
 config: {
 serializeResultToString: false,
-ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+ioRegistries: [ org.apache.hugegraph.io.HugeGraphIoRegistry ]
 }
 }

hugegraph-server/hugegraph-dist/src/assembly/static/conf/gremlin-server.yaml  (+18 -18)

@@ -28,12 +28,12 @@ graphs: {
 scriptEngines: {
 gremlin-groovy: {
 staticImports: [
-org.opencypher.gremlin.process.traversal.CustomPredicates.*',
-org.opencypher.gremlin.traversal.CustomFunctions.*
+org.opencypher.gremlin.process.traversal.CustomPredicates.*',
+org.opencypher.gremlin.traversal.CustomFunctions.*
 ],
 plugins: {
-org.apache.hugegraph.plugin.HugeGraphGremlinPlugin: {},
-org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: {},
+org.apache.hugegraph.plugin.HugeGraphGremlinPlugin: { },
+org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: { },
 org.apache.tinkerpop.gremlin.jsr223.ImportGremlinPlugin: {
 classImports: [
 java.lang.Math,
@@ -70,13 +70,13 @@ scriptEngines: {
 org.opencypher.gremlin.traversal.CustomPredicate
 ],
 methodImports: [
-java.lang.Math#*,
-org.opencypher.gremlin.traversal.CustomPredicate#*,
-org.opencypher.gremlin.traversal.CustomFunctions#*
+java.lang.Math#*,
+org.opencypher.gremlin.traversal.CustomPredicate#*,
+org.opencypher.gremlin.traversal.CustomFunctions#*
 ]
 },
 org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: {
-files: [scripts/empty-sample.groovy]
+files: [ scripts/empty-sample.groovy ]
 }
 }
 }
@@ -85,34 +85,34 @@ serializers:
 - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1,
 config: {
 serializeResultToString: false,
-ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+ioRegistries: [ org.apache.hugegraph.io.HugeGraphIoRegistry ]
 }
 }
 - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
 config: {
 serializeResultToString: false,
-ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+ioRegistries: [ org.apache.hugegraph.io.HugeGraphIoRegistry ]
 }
 }
 - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV2d0,
 config: {
 serializeResultToString: false,
-ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+ioRegistries: [ org.apache.hugegraph.io.HugeGraphIoRegistry ]
 }
 }
 - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0,
 config: {
 serializeResultToString: false,
-ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+ioRegistries: [ org.apache.hugegraph.io.HugeGraphIoRegistry ]
 }
 }
 metrics: {
-consoleReporter: {enabled: false, interval: 180000},
-csvReporter: {enabled: false, interval: 180000, fileName: ./metrics/gremlin-server-metrics.csv},
-jmxReporter: {enabled: false},
-slf4jReporter: {enabled: false, interval: 180000},
-gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST},
-graphiteReporter: {enabled: false, interval: 180000}
+consoleReporter: { enabled: false, interval: 180000 },
+csvReporter: { enabled: false, interval: 180000, fileName: ./metrics/gremlin-server-metrics.csv },
+jmxReporter: { enabled: false },
+slf4jReporter: { enabled: false, interval: 180000 },
+gangliaReporter: { enabled: false, interval: 180000, addressingMode: MULTICAST },
+graphiteReporter: { enabled: false, interval: 180000 }
 }
 maxInitialLineLength: 4096
 maxHeaderSize: 8192

hugegraph-server/hugegraph-dist/src/assembly/static/conf/log4j2.xml  (+17 -15)

@@ -30,48 +30,48 @@

 <!-- Normal server log config -->
 <RollingRandomAccessFile name="file" fileName="${LOG_PATH}/${FILE_NAME}.log"
-filePattern="${LOG_PATH}/$${date:yyyy-MM}/${FILE_NAME}-%d{yyyy-MM-dd}-%i.log"
-immediateFlush="false">
+filePattern="${LOG_PATH}/$${date:yyyy-MM}/${FILE_NAME}-%d{yyyy-MM-dd}-%i.log"
+immediateFlush="false">
 <ThresholdFilter level="TRACE" onMatch="ACCEPT" onMismatch="DENY"/>
 <PatternLayout pattern="%-d{yyyy-MM-dd HH:mm:ss} [%t] [%p] %c{1.} - %m%n"/>
 <!-- Trigger after exceeding 1day or 50MB -->
 <Policies>
 <SizeBasedTriggeringPolicy size="50MB"/>
-<TimeBasedTriggeringPolicy interval="1" modulate="true" />
+<TimeBasedTriggeringPolicy interval="1" modulate="true"/>
 </Policies>
 <!-- Keep 5 files per day & auto delete after over 2GB or 100 files -->
 <DefaultRolloverStrategy max="5">
 <Delete basePath="${LOG_PATH}" maxDepth="2">
 <IfFileName glob="*/*.log"/>
 <!-- Limit log amount & size -->
 <IfAny>
-<IfAccumulatedFileSize exceeds="2GB" />
-<IfAccumulatedFileCount exceeds="100" />
+<IfAccumulatedFileSize exceeds="2GB"/>
+<IfAccumulatedFileCount exceeds="100"/>
 </IfAny>
 </Delete>
 </DefaultRolloverStrategy>
 </RollingRandomAccessFile>

 <!-- Separate & compress audit log, buffer size is 512KB -->
 <RollingRandomAccessFile name="audit" fileName="${LOG_PATH}/audit-${FILE_NAME}.log"
-filePattern="${LOG_PATH}/$${date:yyyy-MM}/audit-${FILE_NAME}-%d{yyyy-MM-dd-HH}-%i.gz"
-bufferSize="524288" immediateFlush="false">
+filePattern="${LOG_PATH}/$${date:yyyy-MM}/audit-${FILE_NAME}-%d{yyyy-MM-dd-HH}-%i.gz"
+bufferSize="524288" immediateFlush="false">
 <ThresholdFilter level="TRACE" onMatch="ACCEPT" onMismatch="DENY"/>
 <!-- Use simple format for audit log to speed up -->
 <PatternLayout pattern="%-d{yyyy-MM-dd HH:mm:ss} - %m%n"/>
 <!-- Trigger after exceeding 1hour or 500MB -->
 <Policies>
 <SizeBasedTriggeringPolicy size="500MB"/>
-<TimeBasedTriggeringPolicy interval="1" modulate="true" />
+<TimeBasedTriggeringPolicy interval="1" modulate="true"/>
 </Policies>
 <!-- Keep 2 files per hour & auto delete [after 60 days] or [over 5GB or 500 files] -->
 <DefaultRolloverStrategy max="2">
 <Delete basePath="${LOG_PATH}" maxDepth="2">
 <IfFileName glob="*/*.gz"/>
 <IfLastModified age="60d"/>
 <IfAny>
-<IfAccumulatedFileSize exceeds="5GB" />
-<IfAccumulatedFileCount exceeds="500" />
+<IfAccumulatedFileSize exceeds="5GB"/>
+<IfAccumulatedFileCount exceeds="500"/>
 </IfAny>
 </Delete>
 </DefaultRolloverStrategy>
@@ -86,16 +86,16 @@
 <!-- Trigger after exceeding 1day or 50MB -->
 <Policies>
 <SizeBasedTriggeringPolicy size="50MB"/>
-<TimeBasedTriggeringPolicy interval="1" modulate="true" />
+<TimeBasedTriggeringPolicy interval="1" modulate="true"/>
 </Policies>
 <!-- Keep 5 files per day & auto delete after over 2GB or 100 files -->
 <DefaultRolloverStrategy max="5">
 <Delete basePath="${LOG_PATH}" maxDepth="2">
 <IfFileName glob="*/*.log"/>
 <!-- Limit log amount & size -->
 <IfAny>
-<IfAccumulatedFileSize exceeds="2GB" />
-<IfAccumulatedFileCount exceeds="100" />
+<IfAccumulatedFileSize exceeds="2GB"/>
+<IfAccumulatedFileCount exceeds="100"/>
 </IfAny>
 </Delete>
 </DefaultRolloverStrategy>
@@ -134,10 +134,12 @@
 <AsyncLogger name="org.apache.hugegraph.auth" level="INFO" additivity="false">
 <appender-ref ref="audit"/>
 </AsyncLogger>
-<AsyncLogger name="org.apache.hugegraph.api.filter.AuthenticationFilter" level="INFO" additivity="false">
+<AsyncLogger name="org.apache.hugegraph.api.filter.AuthenticationFilter" level="INFO"
+additivity="false">
 <appender-ref ref="audit"/>
 </AsyncLogger>
-<AsyncLogger name="org.apache.hugegraph.api.filter.AccessLogFilter" level="INFO" additivity="false">
+<AsyncLogger name="org.apache.hugegraph.api.filter.AccessLogFilter" level="INFO"
+additivity="false">
 <appender-ref ref="slowQueryLog"/>
 </AsyncLogger>
 </loggers>

hugegraph-server/hugegraph-dist/src/assembly/static/conf/remote-objects.yaml  (+3 -3)

@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-hosts: [localhost]
+hosts: [ localhost ]
 port: 8182
 serializer: {
 className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
@@ -23,8 +23,8 @@ serializer: {
 # The duplication of HugeGraphIoRegistry is meant to fix a bug in the
 # 'org.apache.tinkerpop.gremlin.driver.Settings:from(Configuration)' method.
 ioRegistries: [
-org.apache.hugegraph.io.HugeGraphIoRegistry,
-org.apache.hugegraph.io.HugeGraphIoRegistry
+org.apache.hugegraph.io.HugeGraphIoRegistry,
+org.apache.hugegraph.io.HugeGraphIoRegistry
 ]
 }
 }

hugegraph-server/hugegraph-dist/src/assembly/static/conf/remote.yaml  (+2 -2)

@@ -14,12 +14,12 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-hosts: [localhost]
+hosts: [ localhost ]
 port: 8182
 serializer: {
 className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
 config: {
 serializeResultToString: false,
-ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
+ioRegistries: [ org.apache.hugegraph.io.HugeGraphIoRegistry ]
 }
 }
