@@ -52,12 +52,12 @@ through its web interface. Policies are exposed under `/validate/<policy id>`.
For example, given the configuration file from above, the following API endpoints
would be created:

- * `/validate/psp-apparmor`: this exposes the `psp-apparmor:v0.1.3`
-   policy. The Wasm module is downloaded from the OCI registry of GitHub.
- * `/validate/psp-capabilities`: this exposes the `psp-capabilities:v0.1.3`
-   policy. The Wasm module is downloaded from the OCI registry of GitHub.
- * `/validate/namespace_simple`: this exposes the `namespace-validate-policy`
-   policy. The Wasm module is loaded from a local file located under `/tmp/namespace-validate-policy.wasm`.
+ - `/validate/psp-apparmor`: this exposes the `psp-apparmor:v0.1.3`
+   policy. The Wasm module is downloaded from the OCI registry of GitHub.
+ - `/validate/psp-capabilities`: this exposes the `psp-capabilities:v0.1.3`
+   policy. The Wasm module is downloaded from the OCI registry of GitHub.
+ - `/validate/namespace_simple`: this exposes the `namespace-validate-policy`
+   policy. The Wasm module is loaded from a local file located under `/tmp/namespace-validate-policy.wasm`.

It's common for policies to allow users to tune their behaviour via ad-hoc settings.
These customization parameters are provided via the `settings` dictionary.
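To illustrate the shape of these settings, here is a hypothetical sketch: the keys under `settings` are defined by each individual policy, not by policy-server, and the values below are made up.

```yml
# policies.yml sketch: everything under `settings` is passed verbatim
# to the policy at evaluation time (keys and values are illustrative)
psp-capabilities:
  url: registry://ghcr.io/kubewarden/policies/psp-capabilities:v0.1.3
  settings:
    allowed_capabilities:
      - CHOWN
    required_drop_capabilities:
      - NET_ADMIN
```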
@@ -72,12 +72,104 @@ The Wasm file providing the Kubewarden Policy can be either loaded from
the local filesystem or fetched from a remote location. The behaviour
depends on the URL format provided by the user:

- * `file:///some/local/program.wasm`: load the policy from the local filesystem
- * `https://some-host.com/some/remote/program.wasm`: download the policy from the
+ - `file:///some/local/program.wasm`: load the policy from the local filesystem
+ - `https://some-host.com/some/remote/program.wasm`: download the policy from the
   remote http(s) server
- * `registry://localhost:5000/project/artifact:some-version`: download the policy
+ - `registry://localhost:5000/project/artifact:some-version`: download the policy
   from an OCI registry. The policy must have been pushed as an OCI artifact
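Putting the three URL schemes side by side, a hypothetical `policies.yml` fragment (illustrative policy names, reusing the example URLs above) could reference one policy per scheme:

```yml
# illustrative only: one policy entry per supported URL scheme
local-policy:
  url: file:///some/local/program.wasm
remote-policy:
  url: https://some-host.com/some/remote/program.wasm
registry-policy:
  url: registry://localhost:5000/project/artifact:some-version
```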
+## Policy Group
+
+Multiple policies can be grouped together and evaluated using a user-provided boolean expression.
+
+The motivation for this feature is to let users build complex policies by combining simpler ones,
+instead of writing custom policies from scratch. This reduces the duplication of policy logic across
+different policies, increases reusability, removes the cognitive load of managing complex policy
+logic, and enables the creation of custom policies using a DSL-like configuration.
+
+Policy groups are added to the same policy configuration file as individual policies.
+
+This is an example of the policies file with a policy group:
+
+```yml
+pod-image-signatures: # policy group
+  policies:
+    - name: sigstore_pgp
+      url: ghcr.io/kubewarden/policies/verify-image-signatures:v0.2.8
+      settings:
+        signatures:
+          - image: "*"
+            pubKeys:
+              - "-----BEGIN PUBLIC KEY-----xxxxx-----END PUBLIC KEY-----"
+              - "-----BEGIN PUBLIC KEY-----xxxxx-----END PUBLIC KEY-----"
+    - name: sigstore_gh_action
+      url: ghcr.io/kubewarden/policies/verify-image-signatures:v0.2.8
+      settings:
+        signatures:
+          - image: "*"
+            githubActions:
+              owner: "kubewarden"
+    - name: reject_latest_tag
+      url: ghcr.io/kubewarden/policies/trusted-repos-policy:v0.1.12
+      settings:
+        tags:
+          reject:
+            - latest
+  expression: "sigstore_pgp() || (sigstore_gh_action() && reject_latest_tag())"
+  message: "The group policy is rejected."
+```
+
+This exposes a validation endpoint `/validate/pod-image-signatures` that accepts the
+incoming request if the image is signed with the given public keys, or if the image is
+built by the given GitHub Actions and the image tag is not `latest`.
+
+Each policy in the group can have its own settings and its own list of Kubernetes resources
+that it is allowed to access:
+
+```yml
+strict-ingress-checks:
+  policies:
+    - name: unique_ingress
+      url: ghcr.io/kubewarden/policies/cel-policy:latest
+      contextAwareResources:
+        - apiVersion: networking.k8s.io/v1
+          kind: Ingress
+      settings:
+        variables:
+          - name: knownIngresses
+            expression: kw.k8s.apiVersion("networking.k8s.io/v1").kind("Ingress").list().items
+          - name: knownHosts
+            expression: |
+              variables.knownIngresses
+                .filter(i, (i.metadata.name != object.metadata.name) || (i.metadata.namespace != object.metadata.namespace))
+                .map(i, i.spec.rules.map(r, r.host))
+          - name: desiredHosts
+            expression: |
+              object.spec.rules.map(r, r.host)
+        validations:
+          - expression: |
+              !variables.knownHosts.exists_one(hosts, sets.intersects(hosts, variables.desiredHosts))
+            message: "Cannot reuse a host across multiple ingresses"
+    - name: https_only
+      url: ghcr.io/kubewarden/policies/ingress:latest
+      settings:
+        requireTLS: true
+        allowPorts: [443]
+        denyPorts: [80]
+    - name: http_only
+      url: ghcr.io/kubewarden/policies/ingress:latest
+      settings:
+        requireTLS: false
+        allowPorts: [80]
+        denyPorts: [443]
+
+  expression: "unique_ingress() && (https_only() || http_only())"
+  message: "The group policy is rejected."
+```
+
+For more details, please refer to the Kubewarden documentation.
+
## Logging and distributed tracing

The verbosity of policy-server can be configured via the `--log-level` flag.
@@ -103,14 +195,14 @@ Policy server can send trace events to the Open Telemetry Collector using the

Current limitations:

- * Traces can be sent to the collector only via grpc. The HTTP transport
-   layer is not supported.
- * The Open Telemetry Collector must be listening on localhost. When deployed
-   on Kubernetes, policy-server must have the Open Telemetry Collector
-   running as a sidecar.
- * Policy server doesn't expose any configuration setting for Open Telemetry
-   (e.g. endpoint URL, encryption, authentication, ...). All of the tuning
-   has to be done on the collector process that runs as a sidecar.
+ - Traces can be sent to the collector only via gRPC. The HTTP transport
+   layer is not supported.
+ - The Open Telemetry Collector must be listening on localhost. When deployed
+   on Kubernetes, policy-server must have the Open Telemetry Collector
+   running as a sidecar.
+ - Policy server doesn't expose any configuration setting for Open Telemetry
+   (e.g. endpoint URL, encryption, authentication, ...). All of the tuning
+   has to be done on the collector process that runs as a sidecar.
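Given these constraints, a minimal Open Telemetry Collector sidecar configuration might look like the sketch below. The OTLP exporter endpoint is a made-up example; any exporter supported by the collector could be used instead.

```yml
# otel-collector-config.yaml (sketch): accept policy-server traces on
# localhost via gRPC and forward them to an illustrative OTLP backend
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 127.0.0.1:4317
exporters:
  otlp:
    endpoint: tempo.example.svc.cluster.local:4317  # hypothetical backend
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```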

More details about OpenTelemetry and tracing can be found in
our [official docs](https://docs.kubewarden.io/operator-manual/tracing/01-quickstart.html).