Service Virtualization and Istio

Before Start

You should have NO virtualservice nor destinationrule in the tutorial namespace:

  1. kubectl get virtualservice
  2. kubectl get destinationrule

If any exist, run:
  1. ./scripts/clean.sh

We’ll create version 2 of the preference service. But in this case, instead of communicating with the real recommendation service, it is going to communicate with a virtualized recommendation service.

Service virtualization can be understood as something similar to mocking: instead of mocking components (classes), you are mocking remote services.

In this concrete case, the virtualized recommendation service will return the canned response "recommendation v2 from 'virtualized': 2".

Deploy Preference:v2

Move to preference directory:

  1. cd preference/java/springboot

Change PreferencesController.java as shown below, and then create a "v2" Docker image.

  1. private static final String RESPONSE_STRING_FORMAT = "PREFERENCE => %s\n";

The "v2" tag during the Docker build is significant.

There is also a second deployment file, Deployment-v2.yml, that labels the new pods correctly.
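
To give a rough idea of what that second file contains, here is a minimal sketch of a Deployment-v2.yml (names, API version, and port are illustrative assumptions; the file in the repository may differ). The important part is that the pod template carries both the app: preference and version: v2 labels, so the new pods sit behind the same preference Service as v1 while remaining distinguishable for routing:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: preference-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: preference
      version: v2
  template:
    metadata:
      labels:
        app: preference        # same app label as v1, so the preference Service still selects these pods
        version: v2            # distinguishes this Deployment from preference-v1
    spec:
      containers:
      - name: preference
        image: example/preference:v2   # the image built with the "v2" tag in the next step
        ports:
        - containerPort: 8080          # assumed port of the Spring Boot app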

Docker build (if you have access to Docker daemon)

  1. mvn clean package
  2. docker build -t example/preference:v2 .

We have a second Deployment to manage the v2 version of preference.

  1. oc apply -f <(istioctl kube-inject -f ../../kubernetes/Deployment-v2.yml) -n tutorial
  2. oc get pods -w
  3. cd ../../..

OpenShift S2I strategy (if you DON’T have access to Docker daemon)

  1. mvn clean package -f preference/java/springboot
  2. oc new-app -l app=preference,version=v2 --name=preference-v2 --context-dir=preference/java/springboot -e JAEGER_SERVICE_NAME=preference JAEGER_ENDPOINT=http://jaeger-collector.istio-system.svc:14268/api/traces JAEGER_PROPAGATION=b3 JAEGER_SAMPLER_TYPE=const JAEGER_SAMPLER_PARAM=1 JAVA_OPTIONS='-Xms128m -Xmx256m -Djava.net.preferIPv4Stack=true' fabric8/s2i-java~https://github.com/redhat-developer-demos/istio-tutorial -o yaml > preference-v2.yml
  3. oc apply -f <(istioctl kube-inject -f preference-v2.yml) -n tutorial
  4. oc cancel-build bc/preference-v2
  5. oc delete svc/preference-v2
  6. oc start-build preference-v2 --from-dir=. --follow

Wait for v2 to be deployed

Wait for those pods to show "2/2" READY; the istio-proxy (Envoy) sidecar is the second container in each pod.

  1. NAME READY STATUS RESTARTS AGE
  2. customer-3647816848-j5xd5 2/2 Running 25 14d
  3. preference-v1-406256754-8v7x5 2/2 Running 12 2h
  4. preference-v2-3602772496-wmkvl 2/2 Running 12 2h
  5. recommendation-v1-2409176097-kcjsr 2/2 Running 8 14d
  6. recommendation-v2-1275713543-2bs5k 2/2 Running 4 2d

and test the customer endpoint

  1. curl customer-tutorial.$(minishift ip).nip.io

You will likely see something like "customer => preference => recommendation v2 from '2819441432-5v22s': 1", since by default you get round-robin load-balancing when there is more than one Pod behind a Service.

Send several requests to see their responses

  1. ./scripts/run.sh

The default Kubernetes/OpenShift behavior is to round-robin load-balance across all available pods behind a single Service.

So after running it several times, you’ll see some combination of:

  1. customer => preference => recommendation v1 from '2409176097-kcjsr': 3
  2. customer => PREFERENCE => recommendation v1 from '2409176097-kcjsr': 4
  3. customer => preference => recommendation v2 from '1275713543-2bs5k': 3
  4. customer => PREFERENCE => recommendation v2 from '1275713543-2bs5k': 3

Adding Service Virtualization

We’ll create a Docker image with Hoverfly (a service virtualization tool) preloaded with some canned requests/responses for the recommendation service; a sketch of the corresponding Deployment follows the commands below.

  1. cd recommendation/virtualized
  2. docker build -t example/recommendation:virtualized .
  3. docker images | grep recommendation
  4. oc apply -f <(istioctl kube-inject -f ../../kubernetes/Deployment-virtualized.yml) -n tutorial
  5. oc get pods -w
  6. cd ../..
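
For reference, here is a minimal sketch of what a Deployment-virtualized.yml could look like (container name, port, and API version are assumptions; check the file under kubernetes/ for the real content). The detail that matters later is the version: virtualized label, which an Istio DestinationRule subset can select on:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: recommendation-virtualized
spec:
  replicas: 1
  selector:
    matchLabels:
      app: recommendation
      version: virtualized
  template:
    metadata:
      labels:
        app: recommendation      # same app label, so the recommendation Service also selects these pods
        version: virtualized     # the label an Istio subset can match on
    spec:
      containers:
      - name: recommendation
        image: example/recommendation:virtualized   # the Hoverfly-based image built above
        ports:
        - containerPort: 8080                       # assumed to match the recommendation Service port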

After this step, you should have three versions of the recommendation service (v1, v2, and virtualized).

  1. NAME READY STATUS RESTARTS AGE
  2. customer-3647816848-j5xd5 2/2 Running 25 14d
  3. preference-v1-406256754-8v7x5 2/2 Running 12 2h
  4. preference-v2-3602772496-wmkvl 2/2 Running 12 2h
  5. recommendation-v1-2409176097-kcjsr 2/2 Running 8 14d
  6. recommendation-v2-1275713543-2bs5k 2/2 Running 4 2d
  7. recommendation-virtualized-2649197284-rp9cg 2/2 Running 2 3h

Send several requests to see their responses

  1. ./scripts/run.sh

The default Kubernetes/OpenShift behavior is to round-robin load-balance across all available pods behind a single Service.

So after running it several times, you’ll see some combination of:

  1. customer => preference => recommendation v1 from '2409176097-kcjsr': 3
  2. customer => PREFERENCE => recommendation v1 from '2409176097-kcjsr': 2
  3. customer => preference => recommendation v1 from 'virtualized': 2
  4. customer => PREFERENCE => recommendation v1 from 'virtualized': 2
  5. customer => preference => recommendation v2 from '1275713543-2bs5k'
  6. customer => PREFERENCE => recommendation v2 from '1275713543-2bs5k'
  7. customer => preference => recommendation v2 from 'virtualized': 2
  8. customer => PREFERENCE => recommendation v2 from 'virtualized': 2

Notice that preference v2 now reaches all of the recommendation services (v1, v2, and virtualized). Let’s avoid this by sending the traffic that comes from the preference v2 service only to the virtualized recommendation service.

  1. istioctl create -f istiofiles/destination-rule-recommendation-v1.yml -n tutorial
  2. istioctl create -f istiofiles/virtual-service-recommendation-virtualized.yml -n tutorial
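
Conceptually, those two files pair a DestinationRule, which defines labeled subsets of the recommendation host, with a VirtualService, which matches on the calling pod’s labels (sourceLabels) and routes the matching traffic to the virtualized subset. The following is a sketch of that kind of configuration, not the exact content of the istiofiles (subset names and the fallback route are assumptions):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recommendation
spec:
  host: recommendation
  subsets:
  - name: version-v1            # assumed subset name
    labels:
      version: v1
  - name: virtualized           # selects the Hoverfly-backed pods via their version label
    labels:
      version: virtualized
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - match:
    - sourceLabels:             # only requests coming from preference v2 pods...
        app: preference
        version: v2
    route:
    - destination:
        host: recommendation
        subset: virtualized     # ...are sent to the virtualized recommendation
  - route:                      # everything else keeps the default round-robin behavior
    - destination:
        host: recommendation

Routing on sourceLabels is what keeps preference v1 talking to the real recommendation pods while preference v2 is steered to the Hoverfly-backed instance.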

Then send some requests again and you’ll get something like:

  1. customer => preference => recommendation v1 from '2409176097-kcjsr': 5
  2. customer => PREFERENCE => recommendation v1 from 'virtualized': 2
  3. customer => preference => recommendation v2 from '1275713543-2bs5k': 6
  4. customer => PREFERENCE => recommendation v2 from 'virtualized': 2

Now all requests that come from preference v2 are redirected to the virtualized recommendation service. In this way, when you deploy a new version of a service, you can let it receive real traffic without worrying about side effects on downstream services, since its outgoing requests are redirected to a virtualized instance instead of a production one.

Clean up

  1. istioctl delete -f istiofiles/destination-rule-recommendation-v1.yml -n tutorial
  2. istioctl delete -f istiofiles/virtual-service-recommendation-virtualized.yml -n tutorial
  3. oc delete all -l app=preference,version=v2
  4. oc delete all -l app=recommendation,version=virtualized