Layer 3 Examples

The layer 3 policy establishes the base connectivity rules regarding which endpoints can talk to each other. Layer 3 policies can be specified using the following methods:

  • Labels Based: This is used to describe the relationship if both endpoints are managed by Cilium and are thus assigned labels. The big advantage of this method is that IP addresses are not encoded into the policies and the policy is completely decoupled from the addressing.
  • Services based: This is an intermediate form between Labels and CIDR and makes use of the services concept in the orchestration system. A good example of this is the Kubernetes concept of Service endpoints which are automatically maintained to contain all backend IP addresses of a service. This avoids hardcoding IP addresses into the policy even if the destination endpoint is not controlled by Cilium.
  • Entities Based: Entities are used to describe remote peers which can be categorized without knowing their IP addresses. This includes connectivity to the local host serving the endpoints or all connectivity to outside of the cluster.
  • IP/CIDR based: This is used to describe the relationship to or from external services if the remote peer is not an endpoint. This requires hardcoding either IP addresses or subnets into the policies. This construct should be used as a last resort as it requires stable IP or subnet assignments.
  • DNS based: Selects remote, non-cluster, peers using DNS names converted to IPs via DNS lookups. It shares all limitations of the IP/CIDR based rules above. DNS information is acquired by routing DNS traffic via a proxy, or polling for listed DNS targets. DNS TTLs are respected.

Labels Based

Label-based L3 policy is used to establish policy between endpoints inside the cluster managed by Cilium. Label-based L3 policies are defined by using an Endpoint Selector inside a rule to choose the kind of traffic that can be received (on ingress) or sent (on egress). An empty Endpoint Selector allows all traffic. The examples below demonstrate this in further detail.

Note

Kubernetes: See section Namespaces for details on how the Endpoint Selector applies in a Kubernetes environment with regard to namespaces.

Ingress

An endpoint is allowed to receive traffic from another endpoint if at least one ingress rule exists which selects the destination endpoint with the Endpoint Selector in the endpointSelector field. To restrict traffic upon ingress to the selected endpoint, the rule selects the source endpoint with the Endpoint Selector in the fromEndpoints field.

Simple Ingress Allow

The following example illustrates how to use a simple ingress rule to allow communication from endpoints with the label role=frontend to endpoints with the label role=backend.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l3-rule"
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: frontend

JSON

[{
    "labels": [{"key": "name", "value": "l3-rule"}],
    "endpointSelector": {"matchLabels": {"role":"backend"}},
    "ingress": [{
        "fromEndpoints": [
            {"matchLabels":{"role":"frontend"}}
        ]
    }]
}]

Ingress Allow All Endpoints

An empty Endpoint Selector will select all endpoints, so a rule that allows all ingress traffic to an endpoint may be written as follows:

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-all-to-victim"
spec:
  endpointSelector:
    matchLabels:
      role: victim
  ingress:
  - fromEndpoints:
    - {}

JSON

[{
    "labels": [{"key": "name", "value": "allow-all-to-victim"}],
    "endpointSelector": {"matchLabels": {"role":"victim"}},
    "ingress": [{
        "fromEndpoints": [
            {"matchLabels":{}}
        ]
    }]
}]

Note that while the above examples allow all ingress traffic to an endpoint, this does not mean that all endpoints are allowed to send traffic to this endpoint per their policies. In other words, policy must be configured on both sides (sender and receiver).

Egress

An endpoint is allowed to send traffic to another endpoint if at least one egress rule exists which selects the destination endpoint with the Endpoint Selector in the endpointSelector field. To restrict traffic upon egress to the selected endpoint, the rule selects the destination endpoint with the Endpoint Selector in the toEndpoints field.

Simple Egress Allow

The following example illustrates how to use a simple egress rule to allow communication to endpoints with the label role=backend from endpoints with the label role=frontend.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l3-egress-rule"
spec:
  endpointSelector:
    matchLabels:
      role: frontend
  egress:
  - toEndpoints:
    - matchLabels:
        role: backend

JSON

[{
    "labels": [{"key": "name", "value": "l3-egress-rule"}],
    "endpointSelector": {"matchLabels": {"role":"frontend"}},
    "egress": [{
        "toEndpoints": [
            {"matchLabels":{"role":"backend"}}
        ]
    }]
}]

Egress Allow All Endpoints

An empty Endpoint Selector will select all endpoints, so a rule that allows all egress traffic from an endpoint may be written as follows:

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-all-from-frontend"
spec:
  endpointSelector:
    matchLabels:
      role: frontend
  egress:
  - toEndpoints:
    - {}

JSON

[{
    "labels": [{"key": "name", "value": "allow-all-from-frontend"}],
    "endpointSelector": {"matchLabels": {"role":"frontend"}},
    "egress": [{
        "toEndpoints": [
            {"matchLabels":{}}
        ]
    }]
}]

Note that while the above examples allow all egress traffic from an endpoint, the receivers of the egress traffic may have ingress rules that deny the traffic. In other words, policy must be configured on both sides (sender and receiver).

Ingress/Egress Default Deny

An endpoint can be put into the default deny mode at ingress or egress if a rule selects the endpoint and contains the respective rule section ingress or egress.

Note

Any rule selecting the endpoint will have this effect; this example illustrates how to put an endpoint into default-deny mode without whitelisting other peers at the same time.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "deny-all-egress"
spec:
  endpointSelector:
    matchLabels:
      role: restricted
  egress:
  - {}

JSON

[{
    "labels": [{"key": "name", "value": "deny-all-egress"}],
    "endpointSelector": {"matchLabels": {"role":"restricted"}},
    "egress": [{}]
}]

Additional Label Requirements

It is often required to apply the principle of separation of concerns when defining policies. For this reason, an additional construct exists which allows establishing base requirements for any connectivity to happen.

For this purpose, the fromRequires field can be used to establish label requirements which serve as a foundation for any fromEndpoints relationship. fromRequires is a list of additional constraints which must be met in order for the selected endpoints to be reachable. These additional constraints do not grant access privileges by themselves, so to allow traffic there must also be rules which match fromEndpoints. The same applies for egress policies, with toRequires and toEndpoints.

The purpose of this rule is to allow establishing base requirements such as: any endpoint with the label env=prod can only be accessed if the source endpoint also carries the label env=prod.

This example shows how to require every endpoint with the label env=prod to be only accessible if the source endpoint also has the label env=prod.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "requires-rule"
specs:
- description: "For endpoints with env=prod, only allow if source also has label env=prod"
  endpointSelector:
    matchLabels:
      env: prod
  ingress:
  - fromRequires:
    - matchLabels:
        env: prod

JSON

[{
    "labels": [{"key": "name", "value": "requires-rule"}],
    "endpointSelector": {"matchLabels": {"env":"prod"}},
    "ingress": [{
        "fromRequires": [
            {"matchLabels":{"env":"prod"}}
        ]
    }]
}]

This fromRequires rule doesn’t allow anything on its own and needs to be combined with other rules to allow traffic. For example, when combined with the example policy below, the endpoint with label env=prod will become accessible from endpoints that have both labels env=prod and role=frontend.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l3-rule"
specs:
- description: "For endpoints with env=prod, allow if source also has label role=frontend"
  endpointSelector:
    matchLabels:
      env: prod
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: frontend

JSON

[{
    "labels": [{"key": "name", "value": "l3-rule"}],
    "endpointSelector": {"matchLabels": {"env":"prod"}},
    "ingress": [{
        "fromEndpoints": [
            {"matchLabels":{"role":"frontend"}}
        ]
    }]
}]

Services based

Services running in your cluster can be whitelisted in Egress rules. Currently Kubernetes Services without a Selector are supported when defined by their name and namespace or label selector. Future versions of Cilium will support specifying non-Kubernetes services and Kubernetes services which are backed by pods.

This example shows how to allow all endpoints with the label id=app2 to talk to all endpoints of kubernetes service myservice in kubernetes namespace default.

Note

These rules will only take effect on Kubernetes services without a selector.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "service-rule"
spec:
  endpointSelector:
    matchLabels:
      id: app2
  egress:
  - toServices:
    - k8sService:
        serviceName: myservice
        namespace: default

JSON

[{
    "labels": [{"key": "name", "value": "service-rule"}],
    "endpointSelector": {
        "matchLabels": {
            "id": "app2"
        }
    },
    "egress": [
        {
            "toServices": [
                {
                    "k8sService": {
                        "serviceName": "myservice",
                        "namespace": "default"
                    }
                }
            ]
        }
    ]
}]

This example shows how to allow all endpoints with the label id=app2 to talk to all endpoints of all Kubernetes headless services which carry the label head: none.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "service-labels-rule"
spec:
  endpointSelector:
    matchLabels:
      id: app2
  egress:
  - toServices:
    - k8sServiceSelector:
        selector:
          matchLabels:
            head: none

JSON

[{
    "labels": [{"key": "name", "value": "service-labels-rule"}],
    "endpointSelector": {
        "matchLabels": {
            "id": "app2"
        }
    },
    "egress": [
        {
            "toServices": [
                {
                    "k8sServiceSelector": {
                        "selector": {
                            "matchLabels": {
                                "head": "none"
                            }
                        }
                    }
                }
            ]
        }
    ]
}]

Entities Based

fromEntities is used to describe the entities that can access the selected endpoints. toEntities is used to describe the entities that can be accessed by the selected endpoints.

The following entities are defined:

host

The host entity includes the local host. This also includes all containers running in host networking mode on the local host.

remote-node

Any node in any of the connected clusters other than the local host. This also includes all containers running in host-networking mode on remote nodes. (Requires the option enable-remote-node-identity to be enabled)

cluster

Cluster is the logical group of all network endpoints inside of the local cluster. This includes all Cilium-managed endpoints of the local cluster, unmanaged endpoints in the local cluster, as well as the host, remote-node, and init identities.

init

The init entity contains all endpoints in bootstrap phase for which the security identity has not been resolved yet. This is typically only observed in non-Kubernetes environments. See section Endpoint Lifecycle for details.

health

The health entity represents the health endpoints, used to check cluster connectivity health. Each node managed by Cilium hosts a health endpoint. See Checking cluster connectivity health for details on health checks.

unmanaged

The unmanaged entity represents endpoints not managed by Cilium. Unmanaged endpoints are considered part of the cluster and are included in the cluster entity.

world

The world entity corresponds to all endpoints outside of the cluster. Allowing to world is identical to allowing to CIDR 0.0.0.0/0. An alternative to allowing from and to world is to define fine-grained DNS or CIDR based policies.

all

The all entity represents the combination of all known clusters as well as world and whitelists all communication.

Allowing users to define custom identities is on the roadmap but has not been implemented yet.

Access to/from local host

Allow all endpoints with the label env=dev to access the host that is serving the particular endpoint.

Note

Kubernetes will automatically allow all communication from the local host of all local endpoints. You can run the agent with the option --allow-localhost=policy to disable this behavior which will give you control over this via policy.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "dev-to-host"
spec:
  endpointSelector:
    matchLabels:
      env: dev
  egress:
  - toEntities:
    - host

JSON

[{
    "labels": [{"key": "name", "value": "dev-to-host"}],
    "endpointSelector": {"matchLabels": {"env":"dev"}},
    "egress": [{
        "toEntities": ["host"]
    }]
}]

Access to/from all nodes in the cluster

Allow all endpoints with the label env=dev to receive traffic from any host in the cluster that Cilium is running on.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "to-dev-from-nodes-in-cluster"
spec:
  endpointSelector:
    matchLabels:
      env: dev
  ingress:
  - fromEntities:
    - host
    - remote-node

JSON

[{
    "labels": [{"key": "name", "value": "to-dev-from-nodes-in-cluster"}],
    "endpointSelector": {"matchLabels": {"env":"dev"}},
    "ingress": [{
        "fromEntities": [
            "host",
            "remote-node"
        ]
    }]
}]

Access to/from outside cluster

This example shows how to enable access from outside of the cluster to all endpoints that have the label role=public.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "from-world-to-role-public"
spec:
  endpointSelector:
    matchLabels:
      role: public
  ingress:
  - fromEntities:
    - world

JSON

[{
    "labels": [{"key": "name", "value":"from-world-to-role-public"}],
    "endpointSelector": {"matchLabels": {"role":"public"}},
    "ingress": [{
        "fromEntities": ["world"]
    }]
}]

IP/CIDR based

CIDR policies are used to define policies to and from endpoints which are not managed by Cilium and thus do not have labels associated with them. These are typically external services, VMs or metal machines running in particular subnets. CIDR policy can also be used to limit access to external services, for example to limit external access to a particular IP range. CIDR policies can be applied at ingress or egress.

CIDR rules apply if Cilium cannot map the source or destination to an identity derived from endpoint labels, i.e. one of the Special Identities. For example, CIDR rules will apply to traffic where one side of the connection is:

  • A network endpoint outside the cluster
  • The host network namespace where the pod is running.
  • Within the cluster prefix, but with an IP whose networking is not provided by Cilium.

Conversely, CIDR rules do not apply to traffic where both sides of the connection are either managed by Cilium or use an IP belonging to a node in the cluster (including host-networking pods). This traffic may be allowed using labels-, services- or entities-based policies as described above.

Note

When running Cilium on Linux 4.10 or earlier, there are Restrictions on unique prefix lengths for CIDR policy rules.

Ingress

fromCIDR

List of source prefixes/CIDRs that are allowed to talk to all endpoints selected by the endpointSelector.

fromCIDRSet

List of source prefixes/CIDRs that are allowed to talk to all endpoints selected by the endpointSelector, along with an optional list of prefixes/CIDRs per source prefix/CIDR that are subnets of the source prefix/CIDR from which communication is not allowed.
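
As an illustration, a minimal sketch of an ingress fromCIDRSet rule follows; the policy name, labels and prefixes are hypothetical:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "ingress-cidrset-sketch"
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
  - fromCIDRSet:
    # Allow sources from this prefix ...
    - cidr: 192.168.0.0/16
      # ... except for this subnet of it, which remains blocked.
      except:
      - 192.168.10.0/24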

Egress

toCIDR

List of destination prefixes/CIDRs that endpoints selected by endpointSelector are allowed to talk to. Note that endpoints which are selected by a fromEndpoints are automatically allowed to reply back to the respective destination endpoints.

toCIDRSet

List of destination prefixes/CIDRs that endpoints selected by endpointSelector are allowed to talk to, along with an optional list of prefixes/CIDRs per destination prefix/CIDR that are subnets of the destination prefix/CIDR to which communication is not allowed.

Allow to external CIDR block

This example shows how to allow all endpoints with the label app=myService to talk to the external IP 20.1.1.1, as well as the CIDR prefix 10.0.0.0/8, but not the CIDR prefix 10.96.0.0/12.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "cidr-rule"
spec:
  endpointSelector:
    matchLabels:
      app: myService
  egress:
  - toCIDR:
    - 20.1.1.1/32
  - toCIDRSet:
    - cidr: 10.0.0.0/8
      except:
      - 10.96.0.0/12

JSON

[{
    "labels": [{"key": "name", "value": "cidr-rule"}],
    "endpointSelector": {"matchLabels":{"app":"myService"}},
    "egress": [{
        "toCIDR": [
            "20.1.1.1/32"
        ]
    }, {
        "toCIDRSet": [{
            "cidr": "10.0.0.0/8",
            "except": [
                "10.96.0.0/12"
            ]
        }]
    }]
}]

DNS based

DNS policies are used to define Layer 3 policies to endpoints that are not managed by Cilium, but have DNS queryable domain names. The IP addresses provided in DNS responses are allowed by Cilium in a similar manner to IPs in CIDR based policies. They are an alternative when the remote IPs may change or are not known a priori, or when DNS is more convenient. To enforce policy on DNS requests themselves, see Layer 7 Examples.

IP information is captured from DNS responses per-Endpoint via a DNS Proxy. An L3 CIDR based rule is generated for every toFQDNs rule and applies to the same endpoints. The IP information is selected for insertion by matchName or matchPattern rules, and is collected from all DNS responses seen by Cilium on the node. Multiple selectors may be included in a single egress rule. See Obtaining DNS Data for use by toFQDNs for information on collecting this IP data.

toFQDNs egress rules cannot contain any other L3 rules, such as toEndpoints (under Labels Based) and toCIDRs (under CIDR Based). They may contain L4/L7 rules, such as toPorts (see Layer 4 Examples) with, optionally, HTTP and Kafka sections (see Layer 7 Examples).

Note

DNS based rules are intended for external connections and behave similarly to CIDR based rules. See Services based and Labels based for cluster-internal traffic.

IPs to be allowed are selected via:

toFQDNs.matchName

Inserts IPs of domains that match matchName exactly. Multiple distinct names may be included in separate matchName entries and IPs for domains that match any matchName will be inserted.

toFQDNs.matchPattern

Inserts IPs of domains that match the pattern in matchPattern, accounting for wildcards. Patterns are composed of literal characters that are allowed in domain names: a-z, 0-9, . and -.

* is allowed as a wildcard with a number of convenience behaviors:

  • * within a domain allows 0 or more valid DNS characters, except for the . separator. *.cilium.io will match sub.cilium.io but not cilium.io. part*ial.com will match partial.com and part-extra-ial.com.
  • * alone matches all names, and inserts all cached DNS IPs into this rule.

Example

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "to-fqdn"
spec:
  endpointSelector:
    matchLabels:
      app: test-app
  egress:
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  - toFQDNs:
    - matchName: "my-remote-service.com"

JSON

[
    {
        "endpointSelector": {
            "matchLabels": {
                "app": "test-app"
            }
        },
        "egress": [
            {
                "toEndpoints": [
                    {
                        "matchLabels": {
                            "app-type": "dns"
                        }
                    }
                ],
                "toPorts": [
                    {
                        "ports": [
                            {
                                "port": "53",
                                "protocol": "ANY"
                            }
                        ],
                        "rules": {
                            "dns": [
                                { "matchPattern": "*" }
                            ]
                        }
                    }
                ]
            },
            {
                "toFQDNs": [
                    {
                        "matchName": "my-remote-service.com"
                    }
                ]
            }
        ]
    }
]

Managing Long-Lived Connections & Minimum DNS Cache Times

Often, an application may keep a connection open for longer than the DNS TTL. Without further DNS queries the remote IP used in the long-lived connection may expire out of the DNS cache. When this occurs, existing connections established before the TTL expires will continue to be allowed until they terminate. Unused IPs will no longer be allowed, however, even when from the same DNS lookup as an in-use IP. This tracking is per-endpoint per-IP, and DNS entries in this state will have source: connection with a single IP listed within the cilium fqdn cache list output.

A minimum TTL is used to ensure a lower time bound on DNS data expiration, and IPs allowed by a toFQDNs rule will be allowed at least this long. It can be configured with the --tofqdns-min-ttl CLI option. The value is in integer seconds and must be 1 or more; the default is 1 hour.

Some care needs to be taken when setting --tofqdns-min-ttl with DNS data that returns many distinct IPs over time. A long TTL will keep each IP cached long after the related connections have terminated. Large numbers of IPs each have corresponding Security Identities and too many may slow down Cilium policy regeneration.

Managing Short-Lived Connections & Maximum IPs per FQDN/endpoint

The minimum TTL for DNS entries in the cache is deliberately long with 1 hour as the default. This is done to accommodate long-lived persistent connections. On the other end of the spectrum are workloads that perform short-lived connections in repetition to FQDNs that are backed by a large number of IP addresses (e.g. AWS S3).

Many short-lived connections can grow the number of IPs mapping to an FQDN quickly. In order to limit the number of IP addresses that map to a particular FQDN, each FQDN has a per-endpoint maximum capacity of IPs that will be retained (default: 50). Once this limit is exceeded, the oldest IP entries are automatically expired from the cache. This capacity can be changed using the --tofqdns-max-ip-per-hostname option.

As with long-lived connections above, live connections are not expired until they terminate. It is safe to mix long- and short-lived connections from the same Pod. IPs above the limit described above will only be removed if unused by a connection.
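
Both options are agent flags. In a Kubernetes installation they are typically set through the Cilium configuration; the snippet below is a hypothetical excerpt of the cilium-config ConfigMap, assuming the keys mirror the flag names, with illustrative values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  # Assumed to correspond to --tofqdns-min-ttl (integer seconds).
  tofqdns-min-ttl: "3600"
  # Assumed to correspond to --tofqdns-max-ip-per-hostname.
  tofqdns-max-ip-per-hostname: "50"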

Layer 4 Examples

Limit ingress/egress ports

Layer 4 policy can be specified in addition to layer 3 policies or independently. It restricts the ability of an endpoint to emit and/or receive packets on a particular port using a particular protocol. If no layer 4 policy is specified for an endpoint, the endpoint is allowed to send and receive on all layer 4 ports and protocols including ICMP. If any layer 4 policy is specified, then ICMP will be blocked unless it’s related to a connection that is otherwise allowed by the policy. Layer 4 policies apply to ports after service port mapping has been applied.

Layer 4 policy can be specified at both ingress and egress using the toPorts field. The toPorts field takes a PortProtocol structure which is defined as follows:

// PortProtocol specifies an L4 port with an optional transport protocol
type PortProtocol struct {
    // Port is an L4 port number. For now the string will be strictly
    // parsed as a single uint16. In the future, this field may support
    // ranges in the form "1024-2048"
    Port string `json:"port"`

    // Protocol is the L4 protocol. If omitted or empty, any protocol
    // matches. Accepted values: "TCP", "UDP", ""/"ANY"
    //
    // Matching on ICMP is not supported.
    //
    // +optional
    Protocol string `json:"protocol,omitempty"`
}

Example (L4)

The following rule limits all endpoints with the label app=myService to only be able to emit packets using TCP on port 80, to any layer 3 destination:

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l4-rule"
spec:
  endpointSelector:
    matchLabels:
      app: myService
  egress:
  - toPorts:
    - ports:
      - port: "80"
        protocol: TCP

JSON

[{
    "labels": [{"key": "name", "value": "l4-rule"}],
    "endpointSelector": {"matchLabels":{"app":"myService"}},
    "egress": [{
        "toPorts": [
            {"ports": [{"port": "80", "protocol": "TCP"}]}
        ]
    }]
}]

Labels-dependent Layer 4 rule

This example enables all endpoints with the label role=frontend to communicate with all endpoints with the label role=backend, but they must communicate using TCP on port 80. Endpoints with other labels will not be able to communicate with the endpoints with the label role=backend, and endpoints with the label role=frontend will not be able to communicate with role=backend on ports other than 80.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l4-rule"
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: frontend
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP

JSON

[{
    "labels": [{"key": "name", "value": "l4-rule"}],
    "endpointSelector": {"matchLabels":{"role":"backend"}},
    "ingress": [{
        "fromEndpoints": [
            {"matchLabels":{"role":"frontend"}}
        ],
        "toPorts": [
            {"ports": [{"port": "80", "protocol": "TCP"}]}
        ]
    }]
}]

CIDR-dependent Layer 4 Rule

This example enables all endpoints with the label role=crawler to communicate with all remote destinations inside the CIDR 192.0.2.0/24, but they must communicate using TCP on port 80. The policy does not allow Endpoints without the label role=crawler to communicate with destinations in the CIDR 192.0.2.0/24. Furthermore, endpoints with the label role=crawler will not be able to communicate with destinations in the CIDR 192.0.2.0/24 on ports other than port 80.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "cidr-l4-rule"
spec:
  endpointSelector:
    matchLabels:
      role: crawler
  egress:
  - toCIDR:
    - 192.0.2.0/24
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP

JSON

[{
    "labels": [{"key": "name", "value": "cidr-l4-rule"}],
    "endpointSelector": {"matchLabels":{"role":"crawler"}},
    "egress": [{
        "toCIDR": [
            "192.0.2.0/24"
        ],
        "toPorts": [
            {"ports": [{"port": "80", "protocol": "TCP"}]}
        ]
    }]
}]

Layer 7 Examples

Layer 7 policy rules are embedded into Layer 4 rules (see Layer 4 Examples) and can be specified for ingress and egress. The L7Rules structure is a base type containing an enumeration of protocol-specific fields.

// L7Rules is a union of port level rule types. Mixing of different port
// level rule types is disallowed, so exactly one of the following must be set.
// If none are specified, then no additional port level rules are applied.
type L7Rules struct {
    // HTTP specific rules.
    //
    // +optional
    HTTP []PortRuleHTTP `json:"http,omitempty"`

    // Kafka-specific rules.
    //
    // +optional
    Kafka []PortRuleKafka `json:"kafka,omitempty"`

    // DNS-specific rules.
    //
    // +optional
    DNS []PortRuleDNS `json:"dns,omitempty"`
}

The structure is implemented as a union, i.e. only one member field can be used per port. If multiple toPorts rules with identical PortProtocol select an overlapping list of endpoints, then the layer 7 rules are combined together if they are of the same type. If the type differs, the policy is rejected.

Each member consists of a list of application protocol rules. A layer 7 request is permitted if at least one of the rules matches. If no rules are specified, then all traffic is permitted.

If a layer 4 rule is specified in the policy, and a similar layer 4 rule with layer 7 rules is also specified, then the layer 7 portions of the latter rule will have no effect.
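
A hedged sketch of this interaction (the policy name and label are made up for illustration): the HTTP section in the second rule below has no effect, because the first rule already allows all traffic on the same port.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l4-shadows-l7-sketch"
spec:
  endpointSelector:
    matchLabels:
      app: example-service
  ingress:
  # Plain layer 4 rule: allows everything on 80/TCP.
  - toPorts:
    - ports:
      - port: "80"
        protocol: TCP
  # Similar layer 4 rule with a layer 7 section: the HTTP restriction is
  # effectively ignored because of the rule above.
  - toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/public"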

Note

Unlike layer 3 and layer 4 policies, violation of layer 7 rules does not result in packet drops. Instead, if possible, an application protocol specific access denied message is crafted and returned, e.g. an HTTP 403 access denied is sent back for HTTP requests which violate the policy, or a DNS REFUSED response for DNS requests.

Note

There is currently a max limit of 40 ports with layer 7 policies per endpoint. This might change in the future when support for ranges is added.

Note

Layer 7 rules are not currently supported in Host Policies, i.e., policies that use Node Selector.

HTTP

The following fields can be matched on:

Path

Path is an extended POSIX regex matched against the path of a request. Currently it can contain characters disallowed from the conventional “path” part of a URL as defined by RFC 3986. Paths must begin with a /. If omitted or empty, all paths are allowed.

Method

Method is an extended POSIX regex matched against the method of a request, e.g. GET, POST, PUT, PATCH, DELETE, … If omitted or empty, all methods are allowed.

Host

Host is an extended POSIX regex matched against the host header of a request, e.g. foo.com. If omitted or empty, the value of the host header is ignored.

Headers

Headers is a list of HTTP headers which must be present in the request. If omitted or empty, requests are allowed regardless of headers present.

Allow GET /public

The following example allows GET requests to the URL /public from endpoints with the label env=prod to endpoints with the label app=service; requests to any other URL, or using another method, will be rejected. Requests on ports other than port 80 will be dropped.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "Allow HTTP GET /public from env=prod to app=service"
  endpointSelector:
    matchLabels:
      app: service
  ingress:
  - fromEndpoints:
    - matchLabels:
        env: prod
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/public"

JSON

[{
    "labels": [{"key": "name", "value": "rule1"}],
    "endpointSelector": {"matchLabels": {"app": "service"}},
    "ingress": [{
        "fromEndpoints": [
            {"matchLabels": {"env": "prod"}}
        ],
        "toPorts": [{
            "ports": [
                {"port": "80", "protocol": "TCP"}
            ],
            "rules": {
                "http": [
                    {
                        "method": "GET",
                        "path": "/public"
                    }
                ]
            }
        }]
    }]
}]

All GET /path1 and PUT /path2 when header set

The following example limits all endpoints which carry the labels app=myService to only be able to receive packets on port 80 using TCP. While communicating on this port, the only API endpoints allowed will be GET /path1, and PUT /path2 with the HTTP header X-My-Header set to true:

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l7-rule"
spec:
  endpointSelector:
    matchLabels:
      app: myService
  ingress:
  - toPorts:
    - ports:
      - port: '80'
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/path1$"
        - method: PUT
          path: "/path2$"
          headers:
          - 'X-My-Header: true'

JSON

[{
    "labels": [{"key": "name", "value": "l7-rule"}],
    "endpointSelector": {"matchLabels":{"app":"myService"}},
    "ingress": [{
        "toPorts": [{
            "ports": [
                {"port": "80", "protocol": "TCP"}
            ],
            "rules": {
                "http": [
                    {
                        "method": "GET",
                        "path": "/path1$"
                    }, {
                        "method": "PUT",
                        "path": "/path2$",
                        "headers": ["X-My-Header: true"]
                    }
                ]
            }
        }]
    }]
}]

Kafka (beta)

Note

This is a beta feature. Please provide feedback and file a GitHub issue if you experience any problems.

PortRuleKafka is a list of Kafka protocol constraints. All fields are optional, if all fields are empty or missing, the rule will match all Kafka messages. There are two ways to specify the Kafka rules. We can choose to specify a high-level “produce” or “consume” role to a topic or choose to specify more low-level Kafka protocol specific apiKeys. Writing rules based on Kafka roles is easier and covers most common use cases, however if more granularity is needed then users can alternatively write rules using specific apiKeys.

The following fields can be matched on:

Role

Role is a case-insensitive string which describes a group of API keys necessary to perform certain higher-level Kafka operations such as “produce” or “consume”. A Role automatically expands into all APIKeys required to perform the specified higher-level operation. The following roles are supported:

  • “produce”: Allow producing to the topics specified in the rule.
  • “consume”: Allow consuming from the topics specified in the rule.

This field is incompatible with the APIKey field, i.e. APIKey and Role cannot both be specified in the same rule. If omitted or empty, and if APIKey is not specified, then all keys are allowed.

APIKey

APIKey is a case-insensitive string matched against the key of a request, for example “produce”, “fetch”, “createtopic”, “deletetopic”. For a more extensive list, see the Kafka protocol reference. This field is incompatible with the Role field.

APIVersion

APIVersion is the version matched against the api version of the Kafka message. If set, it must be a string representing a positive integer. If omitted or empty, all versions are allowed.

ClientID

ClientID is the client identifier as provided in the request.

From Kafka protocol documentation: This is a user supplied identifier for the client application. The user can use any identifier they like and it will be used when logging errors, monitoring aggregates, etc. For example, one might want to monitor not just the requests per second overall, but the number coming from each client application (each of which could reside on multiple servers). This id acts as a logical grouping across all requests from a particular client.

If omitted or empty, all client identifiers are allowed.

Topic

Topic is the topic name contained in the message. If a Kafka request contains multiple topics, then all topics in the message must be allowed by the policy or the message will be rejected.

This constraint is ignored if the matched request message type does not contain any topic. The maximum length of the Topic is 249 characters, which must be either a-z, A-Z, 0-9, -, . or _.

If omitted or empty, all topics are allowed.

Allow producing to topic empire-announce using Role

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "enable empire-hq to produce to empire-announce and deathstar-plans"
  endpointSelector:
    matchLabels:
      app: kafka
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: empire-hq
    toPorts:
    - ports:
      - port: "9092"
        protocol: TCP
      rules:
        kafka:
        - role: "produce"
          topic: "deathstar-plans"
        - role: "produce"
          topic: "empire-announce"

JSON

[{
    "labels": [{"key": "name", "value": "rule1"}],
    "endpointSelector": {"matchLabels": {"app": "kafka"}},
    "ingress": [{
        "fromEndpoints": [
            {"matchLabels": {"app": "empire-hq"}}
        ],
        "toPorts": [{
            "ports": [
                {"port": "9092", "protocol": "TCP"}
            ],
            "rules": {
                "kafka": [
                    {"role": "produce", "topic": "deathstar-plans"},
                    {"role": "produce", "topic": "empire-announce"}
                ]
            }
        }]
    }]
}]

Allow producing to topic empire-announce using apiKeys

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "enable empire-hq to produce to empire-announce and deathstar-plans"
  endpointSelector:
    matchLabels:
      app: kafka
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: empire-hq
    toPorts:
    - ports:
      - port: "9092"
        protocol: TCP
      rules:
        kafka:
        - apiKey: "apiversions"
        - apiKey: "metadata"
        - apiKey: "produce"
          topic: "deathstar-plans"
        - apiKey: "produce"
          topic: "empire-announce"

JSON

[{
    "labels": [{"key": "name", "value": "rule1"}],
    "endpointSelector": {"matchLabels": {"app": "kafka"}},
    "ingress": [{
        "fromEndpoints": [
            {"matchLabels": {"app": "empire-hq"}}
        ],
        "toPorts": [{
            "ports": [
                {"port": "9092", "protocol": "TCP"}
            ],
            "rules": {
                "kafka": [
                    {"apiKey": "apiversions"},
                    {"apiKey": "metadata"},
                    {"apiKey": "produce", "topic": "deathstar-plans"},
                    {"apiKey": "produce", "topic": "empire-announce"}
                ]
            }
        }]
    }]
}]

DNS Policy and IP Discovery

Policy may be applied to DNS traffic, allowing or disallowing specific DNS query names or patterns of names (other DNS fields, such as query type, are not considered). This policy is effected via a DNS proxy, which is also used to collect IPs used to populate L3 DNS based toFQDNs rules.

Note

While Layer 7 DNS policy can be applied without any other Layer 3 rules, the presence of a Layer 7 rule (with its Layer 3 and 4 components) will block other traffic.

DNS policy may be applied via:

matchName

Allows queries for domains that match matchName exactly. Multiple distinct names may be included in separate matchName entries and queries for domains that match any matchName will be allowed.

matchPattern

Allows queries for domains that match the pattern in matchPattern, accounting for wildcards. Patterns are composed of literal characters that are allowed in domain names: a-z, 0-9, . and -.

* is allowed as a wildcard with a number of convenience behaviors:

  • * within a domain allows 0 or more valid DNS characters, except for the . separator. *.cilium.io will match sub.cilium.io but not cilium.io. part*ial.com will match partial.com and part-extra-ial.com.
  • * alone matches all names, and inserts all IPs in DNS responses into the cilium-agent DNS cache.

In this example, L7 DNS policy allows queries for cilium.io, any subdomains of cilium.io, and any subdomains of api.cilium.io. No other DNS queries will be allowed.

The separate L3 toFQDNs egress rule allows connections to any IPs returned in DNS queries for cilium.io, sub.cilium.io, service1.api.cilium.io and any matches of special*service.api.cilium.io, such as special-region1-service.api.cilium.io but not region1-service.api.cilium.io. DNS queries to anothersub.cilium.io are allowed but connections to the returned IPs are not, as there is no L3 toFQDNs rule selecting them. L4 and L7 policy may also be applied (see DNS based), restricting connections to TCP port 80 in this case.

k8s YAML

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: "tofqdn-dns-visibility"
spec:
  endpointSelector:
    matchLabels:
      any:org: alliance
  egress:
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchName: "cilium.io"
        - matchPattern: "*.cilium.io"
        - matchPattern: "*.api.cilium.io"
  - toFQDNs:
    - matchName: "cilium.io"
    - matchName: "sub.cilium.io"
    - matchName: "service1.api.cilium.io"
    - matchPattern: "special*service.api.cilium.io"
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP

JSON

[
    {
        "endpointSelector": {
            "matchLabels": {
                "app": "test-app"
            }
        },
        "egress": [
            {
                "toEndpoints": [
                    {
                        "matchLabels": {
                            "app-type": "dns"
                        }
                    }
                ],
                "toPorts": [
                    {
                        "ports": [
                            {
                                "port": "53",
                                "protocol": "ANY"
                            }
                        ],
                        "rules": {
                            "dns": [
                                { "matchName": "cilium.io" },
                                { "matchPattern": "*.cilium.io" },
                                { "matchPattern": "*.api.cilium.io" }
                            ]
                        }
                    }
                ]
            },
            {
                "toFQDNs": [
                    { "matchName": "cilium.io" },
                    { "matchName": "sub.cilium.io" },
                    { "matchName": "service1.api.cilium.io" },
                    { "matchPattern": "special*service.api.cilium.io" }
                ]
            }
        ]
    }
]

Note

When applying DNS policy in kubernetes, queries for service.namespace.svc.cluster.local. must be explicitly allowed with matchPattern: *.*.svc.cluster.local..

Similarly, queries that rely on the DNS search list to complete the FQDN must be allowed in their entirety. e.g. A query for servicename that succeeds with servicename.namespace.svc.cluster.local. must have the latter allowed with matchName or matchPattern. See Alpine/musl deployments and DNS Refused.
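
For illustration, a sketch of a DNS rule section that allows an external name while also permitting cluster-internal service lookups; the policy name and app label are placeholders:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-cluster-dns-suffixes"
spec:
  endpointSelector:
    matchLabels:
      app: example-app
  egress:
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        # External name the workload needs to resolve.
        - matchName: "cilium.io"
        # Queries expanded to the cluster suffix must be allowed explicitly.
        - matchPattern: "*.*.svc.cluster.local."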

Obtaining DNS Data for use by toFQDNs

IPs are obtained via intercepting DNS requests with a proxy or DNS polling, and matching names are inserted irrespective of how the data is obtained. These IPs can be selected with toFQDN rules. DNS responses are cached within the Cilium agent, respecting TTLs.

DNS Proxy

A DNS Proxy intercepts egress DNS traffic and records IPs seen in the responses. This interception is, itself, a separate policy rule governing the DNS requests, and must be specified separately. For details on how to enforce policy on DNS requests and configuring the DNS proxy, see Layer 7 Examples.

Only IPs in intercepted DNS responses to an application will be allowed in the Cilium policy rules. For a given domain name, IPs from responses to all pods managed by a Cilium instance are allowed by policy (respecting TTLs). This ensures that allowed IPs are consistent with those returned to applications. The DNS Proxy is the only method to allow IPs from responses allowed by wildcard L7 DNS matchPattern rules for use in toFQDNs rules.

The following example obtains DNS data by interception without blocking any DNS requests. It allows L3 connections to cilium.io, sub.cilium.io and any subdomains of sub.cilium.io.

k8s YAML

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: "tofqdn-dns-visibility"
spec:
  endpointSelector:
    matchLabels:
      any:org: alliance
  egress:
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  - toFQDNs:
    - matchName: "cilium.io"
    - matchName: "sub.cilium.io"
    - matchPattern: "*.sub.cilium.io"

JSON

[
    {
        "endpointSelector": {
            "matchLabels": {
                "app": "test-app"
            }
        },
        "egress": [
            {
                "toEndpoints": [
                    {
                        "matchLabels": {
                            "app-type": "dns"
                        }
                    }
                ],
                "toPorts": [
                    {
                        "ports": [
                            {
                                "port": "53",
                                "protocol": "ANY"
                            }
                        ],
                        "rules": {
                            "dns": [
                                { "matchPattern": "*" }
                            ]
                        }
                    }
                ]
            },
            {
                "toFQDNs": [
                    { "matchName": "cilium.io" },
                    { "matchName": "sub.cilium.io" },
                    { "matchPattern": "*.sub.cilium.io" }
                ]
            }
        ]
    }
]

Alpine/musl deployments and DNS Refused

Some common container images treat the DNS Refused response, returned when the DNS Proxy rejects a query, as a more general failure. This stops traversal of the search list defined in /etc/resolv.conf. It is common for pods to search by appending .svc.cluster.local. to DNS queries. When this occurs, a lookup for cilium.io may first be attempted as cilium.io.namespace.svc.cluster.local. and rejected by the proxy. Instead of continuing and eventually attempting cilium.io. alone, the Pod treats the DNS lookup as failed.

This can be mitigated with the --tofqdns-dns-reject-response-code option. The default is refused but nameError can be selected, causing the proxy to return an NXDOMAIN response to refused queries.

A more pod-specific solution is to configure ndots appropriately for each Pod, via dnsConfig, so that the search list is not used for DNS lookups that do not need it. See the Kubernetes documentation for instructions.
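
For example, a Pod spec along these lines (the Pod name and image are placeholders) sets ndots to 1 so that external names containing a dot are tried as absolute names before the search list is applied:

apiVersion: v1
kind: Pod
metadata:
  name: example-client
spec:
  dnsConfig:
    options:
    # With ndots set to 1, a name containing at least one dot is looked up
    # as-is before the search list in /etc/resolv.conf is traversed.
    - name: ndots
      value: "1"
  containers:
  - name: app
    image: alpine:3.12
    command: ["sleep", "infinity"]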

Deny Policies

Note

This is a beta feature. Please provide feedback and file a GitHub issue if you experience any problems.

Deny policies, available and enabled by default since Cilium 1.9, allow explicitly restricting certain traffic to and from a Pod.

Deny policies take precedence over allow policies, regardless of whether they are a Cilium Network Policy, a Clusterwide Cilium Network Policy or even a Kubernetes Network Policy.

Similarly to “allow” policies, Pods will enter default-deny mode as soon as a single policy selects them.

If multiple allow and deny policies are applied to the same pod, the following table represents the expected enforcement for that Pod:

Set of Ingress Policies Deployed to Server Pod (five scenarios, one per result column below):

  Allow policies:  Layer 7 (HTTP), Layer 4 (80/TCP), Layer 4 (81/TCP), Layer 3 (Pod: Client)
  Deny policies:   Layer 4 (80/TCP), Layer 3 (Pod: Client)

Result for Traffic Connections (Allowed / Denied), Client → Server:

  curl server:81   Allowed   Allowed   Denied   Denied   Denied
  curl server:80   Allowed   Denied    Denied   Denied   Denied
  ping server      Allowed   Allowed   Denied   Denied   Denied

If we pick the second column in the table above, the bottom section shows the forwarding behaviour for curl and ping traffic between the client and server:

  • Curl to port 81 is allowed because there is an allow policy on port 81, and no deny policy on that port;
  • Curl to port 80 is denied because there is a deny policy on that port;
  • Ping to the server is allowed because there is a Layer 3 allow policy and no deny.

The following policy will deny ingress from “world” on all namespaces on all Pods managed by Cilium. Existing inter-cluster policies will still be allowed as this policy is allowing traffic from everywhere except from “world”.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "external-lockdown"
spec:
  endpointSelector: {}
  ingressDeny:
  - fromEntities:
    - "world"
  ingress:
  - fromEntities:
    - "all"

Deny policies do not support policy enforcement at L7 (i.e., specifically denying a URL) or toFQDNs (i.e., specifically denying traffic to a specific domain name).

Limitations and known issues

The currently known limitation is a deny policy with toEntities “world” combined with a toFQDNs rule: the toFQDNs rule can cause traffic to be allowed even though such traffic is considered external to the cluster.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "deny-egress-to-world"
spec:
  endpointSelector:
    matchLabels:
      k8s-app.guestbook: web
  egressDeny:
  - toEntities:
    - "world"
  egress:
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  - toFQDNs:
    - matchName: "www.google.com"

Host Policies

Note

This is a beta feature. Please provide feedback and file a GitHub issue if you experience any problems.

Host policies take the form of a CiliumClusterwideNetworkPolicy with a Node Selector instead of an Endpoint Selector. Host policies can have layer 3 and layer 4 rules on both ingress and egress. They cannot have layer 7 rules.

Host policies apply to all the nodes selected by their Node Selector. In each selected node, they apply only to the host namespace, including host-networking pods. They therefore don’t apply to communications between non-host-networking pods and locations outside of the cluster.

Installation of Host Policies requires the addition of the following helm flags when installing Cilium (an equivalent values-file sketch follows this list):

  • --set devices='{interface}' where interface refers to the network device Cilium is configured on, such as eth0. Omitting this option leads Cilium to auto-detect the interface the host firewall applies to.
  • --set hostFirewall=true
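
For reference, a hypothetical values-file equivalent of these flags; the interface name is an example and the key names are assumed to match the --set paths above:

# values.yaml passed to helm install (assumed key names)
devices:
- eth0
hostFirewall: true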

The following policy will allow ingress traffic for any node with the label type=ingress-worker on TCP ports 22, 6443 (kube-apiserver), 2379 (etcd) and 4240 (health checks), as well as UDP port 8472 (VXLAN).

Replace the port: value with ports used in your environment.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "lock-down-ingress-worker-node"
spec:
  description: "Allow a minimum set of required ports on ingress of worker nodes"
  nodeSelector:
    matchLabels:
      type: ingress-worker
  ingress:
  - fromEntities:
    - remote-node
    - health
  - toPorts:
    - ports:
      - port: "6443"
        protocol: TCP
      - port: "22"
        protocol: TCP
      - port: "2379"
        protocol: TCP
      - port: "4240"
        protocol: TCP
      - port: "8472"
        protocol: UDP
      - port: "REMOVE_ME_AFTER_DOUBLE_CHECKING_PORTS"
        protocol: TCP

Troubleshooting Host Policies

If you’re having trouble with Host Policies, please ensure the helm options listed above were applied during installation. To verify that your policy has been applied, you can run kubectl get CiliumClusterwideNetworkPolicy -o yaml to validate the policy was accepted.

If policies don’t seem to be applied to your nodes, verify the nodeSelector is labeled correctly in your environment. In the example configuration, you can run kubectl get nodes -o wide|grep type=ingress-worker to verify labels match the policy.

You can verify the policy was applied by running kubectl exec -n $CILIUM_NAMESPACE cilium-xxxx -- cilium policy get for the Cilium agent pod. Verify that the host is selected by the policy using cilium endpoint list and look for the endpoint with reserved:host as the label and ensure that policy is enabled in the selected direction. Ensure the traffic is arriving on the device visible on the NodePort field of the cilium status list output. Use cilium monitor with --related-to and the endpoint ID of the reserved:host endpoint to view traffic.