Getting Started Securing Elasticsearch

This document serves as an introduction to using Cilium to enforce Elasticsearch-aware security policies. It is a detailed walk-through of getting a single-node Cilium environment running on your machine. It is designed to take 15-30 minutes.

If you haven’t read the Introduction to Cilium & Hubble yet, we’d encourage you to do that first.

The best way to get help if you get stuck is to ask a question on the Cilium Slack channel. With Cilium contributors across the globe, there is almost always someone available to help.

Setup Cilium

If you have not set up Cilium yet, follow the guide Quick Installation for instructions on how to quickly bootstrap a Kubernetes cluster and install Cilium. If in doubt, pick the minikube route; you will be good to go in less than 5 minutes.
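
Before deploying the demo app, it is worth confirming that the Cilium agent pods are up and healthy. A quick sanity check, assuming Cilium was installed into the kube-system namespace with the default k8s-app=cilium label (adjust both to match your installation):

  # List the Cilium agent pods, then check agent status from inside one of them
  $ kubectl -n kube-system get pods -l k8s-app=cilium
  $ kubectl -n kube-system exec <cilium-pod-name> -- cilium status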

Deploy the Demo Application

Following the Cilium tradition, we will use a Star Wars-inspired example. The Empire has a large-scale Elasticsearch cluster which is used for storing a variety of data, including:

  • Index troop_logs: Stormtrooper performance logs, collected from every outpost and used to identify and eliminate weak performers!
  • Index spaceship_diagnostics: spaceship diagnostics data, collected from every spaceship and used for R&D and improvement of the spaceships.

Every outpost has an Elasticsearch client service to upload the Stormtrooper logs, and every spaceship has a service to upload diagnostics. Similarly, the Empire headquarters has a service to search and analyze the troop logs and spaceship diagnostics data. Before we look into the security concerns, let’s first create this application scenario in minikube.

Deploy the app using the command below, which will create:

  • An elasticsearch service with the selector label component: elasticsearch and a pod running Elasticsearch.
  • Three Elasticsearch clients, one each for empire-hq, outpost, and spaceship.

  $ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.11/examples/kubernetes-es/es-sw-app.yaml
  serviceaccount "elasticsearch" created
  service "elasticsearch" created
  replicationcontroller "es" created
  role "elasticsearch" created
  rolebinding "elasticsearch" created
  pod "outpost" created
  pod "empire-hq" created
  pod "spaceship" created

  $ kubectl get svc,pods
  NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                           AGE
  svc/elasticsearch   NodePort    10.111.238.254   <none>        9200:30130/TCP,9300:31721/TCP     2d
  svc/etcd-cilium     NodePort    10.98.67.60      <none>        32379:31079/TCP,32380:31080/TCP   9d
  svc/kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP                           9d

  NAME               READY     STATUS    RESTARTS   AGE
  po/empire-hq       1/1       Running   0          2d
  po/es-g9qk2        1/1       Running   0          2d
  po/etcd-cilium-0   1/1       Running   0          9d
  po/outpost         1/1       Running   0          2d
  po/spaceship       1/1       Running   0          2d

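Each of these pods is also represented in Cilium as an endpoint with a security identity derived from its labels. If you want to confirm that, you can list the endpoints from one of the Cilium agent pods (the pod name and namespace depend on your installation):

  # Show the Cilium endpoints backing the demo pods and their label-based identities
  $ kubectl -n kube-system exec <cilium-pod-name> -- cilium endpoint list
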
Security Risks for Elasticsearch Access

For Elasticsearch clusters, the least-privilege security challenge is to give clients access only to particular indices, and to limit the operations each client is allowed to perform on each index. In this example, the outpost Elasticsearch clients only need access to upload troop logs, and the empire-hq client only needs search access to both indices. From the security perspective, the outposts are weak spots, susceptible to capture by the rebels. Once compromised, their clients can be used to search and manipulate the critical data in Elasticsearch. We can simulate this attack, but first let’s run the commands for the legitimate behavior of all the client services.

outpost client uploading troop logs

  $ kubectl exec outpost -- python upload_logs.py
  Uploading Stormtroopers Performance Logs
  created : {'_index': 'troop_logs', '_type': 'log', '_id': '1', '_version': 1, 'result': 'created', '_shards': {'total': 2, 'successful': 1, 'failed': 0}, 'created': True}

spaceship uploading diagnostics

  $ kubectl exec spaceship -- python upload_diagnostics.py
  Uploading Spaceship Diagnostics
  created : {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_version': 1, 'result': 'created', '_shards': {'total': 2, 'successful': 1, 'failed': 0}, 'created': True}

empire-hq running search queries for logs and diagnostics

  $ kubectl exec empire-hq -- python search.py
  Searching for Spaceship Diagnostics
  Got 1 Hits:
  {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_score': 1.0, \
  '_source': {'spaceshipid': '3459B78XNZTF', 'type': 'tiefighter', 'title': 'Engine Diagnostics', \
  'stats': '[CRITICAL] [ENGINE BURN @SPEED 5000 km/s] [CHANCE 80%]'}}
  Searching for Stormtroopers Performance Logs
  Got 1 Hits:
  {'_index': 'troop_logs', '_type': 'log', '_id': '1', '_score': 1.0, \
  '_source': {'outpost': 'Endor', 'datetime': '33 ABY 4AM DST', 'title': 'Endor Corps 1: Morning Drill', \
  'notes': '5100 PRESENT; 15 ABSENT; 130 CODE-RED BELOW PAR PERFORMANCE'}}

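Under the hood, each of these clients simply issues HTTP requests against the Elasticsearch REST API on port 9200, and it is exactly these HTTP methods and URL paths that the Cilium policy below will restrict. As a rough illustration (assuming a curl binary is available in the client pods; the document ID and payload here are made up, not taken from the demo scripts), the equivalent raw requests look like this:

  # Upload a troop log entry: an HTTP PUT to /troop_logs/log/<id>
  $ kubectl exec outpost -- curl -s -XPUT -H 'Content-Type: application/json' \
      http://elasticsearch:9200/troop_logs/log/2 \
      -d '{"outpost": "Hoth", "title": "Evening Drill", "notes": "ALL PRESENT"}'

  # Search an index: an HTTP GET to /<index>/_search
  $ kubectl exec empire-hq -- curl -s -XGET http://elasticsearch:9200/troop_logs/_search
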
Now imagine an outpost captured by the rebels. In the commands below, the rebels first search all the indices and then manipulate the diagnostics data from a compromised outpost.

  $ kubectl exec outpost -- python search.py
  Searching for Spaceship Diagnostics
  Got 1 Hits:
  {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_score': 1.0, \
  '_source': {'spaceshipid': '3459B78XNZTF', 'type': 'tiefighter', 'title': 'Engine Diagnostics', \
  'stats': '[CRITICAL] [ENGINE BURN @SPEED 5000 km/s] [CHANCE 80%]'}}
  Searching for Stormtroopers Performance Logs
  Got 1 Hits:
  {'_index': 'troop_logs', '_type': 'log', '_id': '1', '_score': 1.0, \
  '_source': {'outpost': 'Endor', 'datetime': '33 ABY 4AM DST', 'title': 'Endor Corps 1: Morning Drill', \
  'notes': '5100 PRESENT; 15 ABSENT; 130 CODE-RED BELOW PAR PERFORMANCE'}}

Rebels manipulate the spaceship diagnostics data so that the spaceship defects remain unknown to empire-hq! (Hint: the rebels have changed the stats for the tiefighter spaceship, a change that is hard to detect but has an adverse impact!)

  $ kubectl exec outpost -- python update.py
  Uploading Spaceship Diagnostics
  {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_score': 1.0, \
  '_source': {'spaceshipid': '3459B78XNZTF', 'type': 'tiefighter', 'title': 'Engine Diagnostics', \
  'stats': '[OK] [ENGINE OK @SPEED 5000 km/s]'}}

Securing Elasticsearch Using Cilium

[Figure: topology of the Elasticsearch demo application and its Cilium policy (cilium_es_gsg_topology.png)]

Following the least-privilege security principle, we want to allow the following legitimate actions and nothing more:

  • The outpost service has upload access only to the troop_logs index
  • The spaceship service has upload access only to the spaceship_diagnostics index
  • The empire-hq service has search-only access to both indices

Fortunately, the Empire DevOps team is using Cilium for their Kubernetes cluster. Cilium provides L7 visibility and security policies to control Elasticsearch API access. Cilium follows the white-list, least-privilege model for security: a CiliumNetworkPolicy contains a list of rules that define the allowed requests, and any request that does not match the rules is denied.

In this example, the policy rules are defined for inbound (i.e., “ingress”) connections to the elasticsearch service. The endpoints selected as backend pods for the service are identified by selector labels, the same concept Kubernetes itself uses to define a service. Here, the label component: elasticsearch identifies the pods that are part of the elasticsearch service in Kubernetes.

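If you want to see which pods this selector actually matches, you can query the same label with kubectl; this should list the es pod deployed earlier, and the service description shows the same selector:

  # Pods carrying the label used by both the Kubernetes service and the Cilium policy
  $ kubectl get pods -l component=elasticsearch
  $ kubectl describe svc elasticsearch
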
In the policy file below, you will see the following rules controlling access to the indices and the actions performed on them:

  • For fromEndpoints with the label app: spaceship, only HTTP PUT is allowed on paths matching the regex ^/spaceship_diagnostics/stats/.*$
  • For fromEndpoints with the label app: outpost, only HTTP PUT is allowed on paths matching the regex ^/troop_logs/log/.*$
  • For fromEndpoints with the label app: empire-hq, only HTTP GET is allowed on paths matching the regexes ^/spaceship_diagnostics/_search/??.*$ and ^/troop_logs/_search/??.*$

  apiVersion: cilium.io/v2
  kind: CiliumNetworkPolicy
  metadata:
    name: secure-empire-elasticsearch
    namespace: default
  specs:
    - endpointSelector:
        matchLabels:
          component: elasticsearch
      ingress:
      - fromEndpoints:
        - matchLabels:
            app: spaceship
        toPorts:
        - ports:
          - port: "9200"
            protocol: TCP
          rules:
            http:
            - method: ^PUT$
              path: ^/spaceship_diagnostics/stats/.*$
      - fromEndpoints:
        - matchLabels:
            app: empire-hq
        toPorts:
        - ports:
          - port: "9200"
            protocol: TCP
          rules:
            http:
            - method: ^GET$
              path: ^/spaceship_diagnostics/_search/??.*$
            - method: ^GET$
              path: ^/troop_logs/_search/??.*$
      - fromEndpoints:
        - matchLabels:
            app: outpost
        toPorts:
        - ports:
          - port: "9200"
            protocol: TCP
          rules:
            http:
            - method: ^PUT$
              path: ^/troop_logs/log/.*$
    - egress:
      - toEndpoints:
        - matchExpressions:
          - key: k8s:io.kubernetes.pod.namespace
            operator: Exists
      - toEntities:
        - cluster
        - host
      endpointSelector: {}
      ingress:
      - {}

Apply this Elasticsearch-aware network security policy using kubectl:

  $ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.11/examples/kubernetes-es/es-sw-policy.yaml
  ciliumnetworkpolicy "secure-empire-elasticsearch" created

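You can verify that the policy was accepted by Kubernetes and imported by the Cilium agents. cnp is the short name for the CiliumNetworkPolicy resource; the last command assumes the agent pods run in kube-system:

  # Check the policy object in Kubernetes
  $ kubectl get cnp
  $ kubectl describe cnp secure-empire-elasticsearch
  # Check the policy as imported by a Cilium agent
  $ kubectl -n kube-system exec <cilium-pod-name> -- cilium policy get
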
Let’s test the security policies. First, search access is blocked for both outpost and spaceship, so from a compromised outpost, the rebels will not be able to search and obtain knowledge about troops and spaceship diagnostics. Second, the outpost clients don’t have access to create or update the spaceship_diagnostics index.

  $ kubectl exec outpost -- python search.py
  GET http://elasticsearch:9200/spaceship_diagnostics/_search [status:403 request:0.008s]
  ...
  ...
  elasticsearch.exceptions.AuthorizationException: TransportError(403, 'Access denied\r\n')
  command terminated with exit code 1

  $ kubectl exec outpost -- python update.py
  PUT http://elasticsearch:9200/spaceship_diagnostics/stats/1 [status:403 request:0.006s]
  ...
  ...
  elasticsearch.exceptions.AuthorizationException: TransportError(403, 'Access denied\r\n')
  command terminated with exit code 1

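To watch these requests being parsed at the HTTP level, you can run the L7 monitor in one of the Cilium agent pods while repeating the commands above (again assuming the agent pods run in kube-system); both allowed and denied Elasticsearch requests will be shown with their method and path:

  # Stream L7 (HTTP) access-log events from the Cilium agent
  $ kubectl -n kube-system exec <cilium-pod-name> -- cilium monitor -v --type l7
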
We can re-run any of the commands below to show that the security policy still allows all legitimate requests (i.e., no 403 errors are returned).

  $ kubectl exec outpost -- python upload_logs.py
  ...
  $ kubectl exec spaceship -- python upload_diagnostics.py
  ...
  $ kubectl exec empire-hq -- python search.py
  ...

Clean Up

You have now installed Cilium, deployed a demo app, and finally deployed & tested Elasticsearch-aware network security policies. To clean up, run:

  $ kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.11/examples/kubernetes-es/es-sw-app.yaml
  $ kubectl delete cnp secure-empire-elasticsearch