Writing APBs: Getting Started

You are viewing documentation for a release that is no longer supported. The latest supported version of version 3 is [3.11]. For the most recent version 4, see [4]

Overview

In this tutorial, you will walk through the creation of some sample Ansible Playbook Bundles (APBs), adding actions for the provision, deprovision, bind, and unbind operations. You can find more information about the design of APBs in the Design topic. More in-depth information about writing APBs is available in the Reference topic.

For the remainder of this tutorial, substitute your own information for items marked in brackets; for example, <host>:<port> might need to be replaced with 172.17.0.1.nip.io:8443.

Before You Begin

Before getting started creating your own APBs, you must set up your development environment:

  1. Ensure you have access to an OKD cluster. The cluster should be running both the service catalog and the OpenShift Ansible broker (OAB), which is supported starting with OKD 3.7.

  2. Install the APB tools as documented in the CLI Tooling topic. To verify, you can run the apb help command and check for a valid response.

  3. If you are developing against an OKD cluster that exists on a remote host or you do not have access to the docker daemon, see Working with Remote Clusters for alternative steps when using the apb push and apb run commands described in this guide.

Creating Your First APB

In this tutorial, you will create an APB for a containerized hello-world application, working through a basic APB that mirrors the hello-world-apb.

  1. Your first task is to initialize the APB using the apb CLI tool. This creates the skeleton for your APB. The command for this is simple:

    $ apb init my-test-apb

    After initialization, you will see the following file structure:

    my-test-apb/
    ├── apb.yml
    ├── Dockerfile
    ├── playbooks
    │   ├── deprovision.yml
    │   └── provision.yml
    └── roles
        ├── deprovision-my-test-apb
        │   └── tasks
        │       └── main.yml
        └── provision-my-test-apb
            └── tasks
                └── main.yml

    Two files were created at the root directory: an apb.yml (the APB spec file) and a Dockerfile. These are the minimum files required for any APB. For more information about the APB spec file, see the Reference topic. There is also an explanation of what you can do in the Dockerfile.

    apb.yml

    version: 1.0
    name: my-test-apb
    description: This is a sample application generated by apb init
    bindable: False
    async: optional
    metadata:
      displayName: my-test
    plans:
      - name: default
        description: This default plan deploys my-test-apb
        free: True
        metadata: {}
        parameters: []

    Dockerfile

    FROM ansibleplaybookbundle/apb-base
    LABEL "com.redhat.apb.spec"=\
    COPY playbooks /opt/apb/actions
    COPY roles /opt/ansible/roles
    RUN chmod -R g=u /opt/{ansible,apb}
    USER apb
  2. In the Dockerfile, there are two updates to make:

    1. Change the FROM directive to use the image from the Red Hat Container Catalog. The first line should now read:

      FROM openshift3/apb-base
    2. Update com.redhat.apb.spec in the LABEL instruction with a base64 encoded version of apb.yml. To do this, run apb prepare:

      $ cd my-test-apb
      $ apb prepare

      This updates the Dockerfile as follows:

      Dockerfile

      FROM openshift3/apb-base
      LABEL "com.redhat.apb.spec"=\
      "dmVyc2lvbjogMS4wCm5hbWU6IG15LXRlc3QtYXBiCmRlc2NyaXB0aW9uOiBUaGlzIGlzIGEgc2Ft\
      cGxlIGFwcGxpY2F0aW9uIGdlbmVyYXRlZCBieSBhcGIgaW5pdApiaW5kYWJsZTogRmFsc2UKYXN5\
      bmM6IG9wdGlvbmFsCm1ldGFkYXRhOgogIGRpc3BsYXlOYW1lOiBteS10ZXN0CnBsYW5zOgogIC0g\
      bmFtZTogZGVmYXVsdAogICAgZGVzY3JpcHRpb246IFRoaXMgZGVmYXVsdCBwbGFuIGRlcGxveXMg\
      bXktdGVzdC1hcGIKICAgIGZyZWU6IFRydWUKICAgIG1ldGFkYXRhOiB7fQogICAgcGFyYW1ldGVy\
      czogW10="
      COPY playbooks /opt/apb/actions
      COPY roles /opt/ansible/roles
      RUN chmod -R g=u /opt/{ansible,apb}
      USER apb
  3. At this point, you have a fully formed APB that you can build. If you skipped using apb prepare, the apb build command will still prepare the APB before building the image:

    $ apb build
  4. You can now push the new APB image to the local OpenShift Container Registry:

    $ apb push
  5. Querying the OAB will now show your new APB listed:

    $ apb list
    ID                               NAME             DESCRIPTION
    < ------------ ID ------------>  dh-my-test-apb   This is a sample application generated by apb init

    Similarly, visiting the OKD web console will now display the new APB named my-test-apb in the service catalog under the All and Other tabs.
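As a quick sanity check, recall that the com.redhat.apb.spec label written by apb prepare is simply the base64-encoded contents of apb.yml, so you can decode it by hand to confirm the image carries the spec you expect. A minimal sketch (the string below is just the first chunk of the encoded spec shown earlier, covering the first two lines of apb.yml):

```shell
# Decode a fragment of the com.redhat.apb.spec label value.
# The label is plain base64 of apb.yml, so decoding recovers the spec.
echo "dmVyc2lvbjogMS4wCm5hbWU6IG15LXRlc3QtYXBi" | base64 --decode
# version: 1.0
# name: my-test-apb
```

Decoding the full label value should reproduce apb.yml exactly, which is an easy way to confirm that apb prepare was re-run after your last spec change.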

Adding Actions

The brand new APB created in the last section does not do much in its current state. To make it useful, you must add some actions. The supported actions are:

  • provision

  • deprovision

  • bind

  • unbind

  • test

You will add each of these actions in the following sections. But before beginning:

  1. Ensure that you are logged in to your OKD cluster via the oc CLI. This will ensure the apb tool can interact with OKD and the OAB:

    # oc login <cluster_host>:<port> -u <user_name> -p <password>
  2. Log in to the OKD web console and verify that your APB is listed in the catalog:

    browse catalog my test

    Figure 1. OKD Web Console

  3. Create a project named getting-started where you will deploy OKD resources. You can create it using the web console or CLI:

    $ oc new-project getting-started

Provision

During the apb init process, two parts of the provision task were stubbed out: the playbook, playbooks/provision.yml, and the associated role in roles/provision-my-test-apb:

  my-test-apb
  ├── apb.yml
  ├── Dockerfile
  ├── playbooks
  │   └── provision.yml (1)
  └── roles
      └── provision-my-test-apb
          └── tasks
              └── main.yml (2)

(1) Inspect this playbook.
(2) Edit this role.

The playbooks/provision.yml file is the Ansible playbook that will be run when the provision action is called from the OAB. You can change the playbook, but for now you can just leave the code as is.

playbooks/provision.yml

  - name: my-test-apb playbook to provision the application
    hosts: localhost
    gather_facts: false
    connection: local
    roles:
    - role: ansible.kubernetes-modules
      install_python_requirements: no
    - role: ansibleplaybookbundle.asb-modules
    - role: provision-my-test-apb
      playbook_debug: false

The playbook will execute on localhost and run the provision-my-test-apb role. It works inside a local container created by the service broker. The ansible.kubernetes-modules role allows you to use the Kubernetes modules to create your OKD resources. The asb-modules provide additional functionality for use with the OAB.

Currently, there are no tasks in the role. The roles/provision-my-test-apb/tasks/main.yml file contains only comments showing common resource creation tasks. You can execute the provision task now, but because there are no tasks to perform, it would simply launch the APB container and exit without deploying anything.

You can try this now by clicking on the my-test APB and deploying it to the getting-started project using the web console:

provision my test

Figure 2. Provisioning my-test

When the provision is executing, a new namespace is created with the name dh-my-test-apb-prov-<random>. In development mode, it will persist, but usually this namespace would be deleted after successful completion. If the APB fails provisioning, the namespace will persist by default.

By looking at the pod resources, you can see the log for the execution of the APB. To view the pod’s logs:

  1. Find the namespaces by either using the web console to view all namespaces and sort by creation date, or using the following command:

    $ oc get ns
    NAME                           STATUS    AGE
    ansible-service-broker         Active    1h
    default                        Active    1h
    dh-my-test-apb-prov-<random>   Active    4m
  2. Switch to the project:

    $ oc project dh-my-test-apb-prov-<random>
    Now using project "dh-my-test-apb-prov-<random>" on server "<cluster_host>:<port>".
  3. Get the pod name:

    $ oc get pods
    NAME             READY     STATUS      RESTARTS   AGE
    <apb_pod_name>   0/1       Completed   0          3m
  4. View the logs:

    $ oc logs -f <apb_pod_name>
    ...
    + ansible-playbook /opt/apb/actions/provision.yml --extra-vars '{"_apb_plan_id":"default","namespace":"getting-started"}'

    PLAY [my-test-apb playbook to provision the application] ***********************

    TASK [ansible.kubernetes-modules : Install latest openshift client] *************
    skipping: [localhost]

    TASK [ansibleplaybookbundle.asb-modules : debug] *******************************
    skipping: [localhost]

    PLAY RECAP *********************************************************************
    localhost                  : ok=0    changed=0    unreachable=0    failed=0

Creating a Deployment Configuration

At a minimum, your APB should deploy the application pods. You can do this by specifying a deployment configuration:

  1. One of the first tasks that is commented out in the provision-my-test-apb/tasks/main.yml file is the creation of the deployment configuration. You can uncomment it or paste the following:

    Normally, you would replace the image: value with your own application image.

    - name: create deployment config
      openshift_v1_deployment_config:
        name: my-test
        namespace: '{{ namespace }}' (1)
        labels: (2)
          app: my-test
          service: my-test
        replicas: 1 (3)
        selector: (4)
          app: my-test
          service: my-test
        spec_template_metadata_labels:
          app: my-test
          service: my-test
        containers: (5)
        - env:
          image: docker.io/ansibleplaybookbundle/hello-world:latest
          name: my-test
          ports:
          - container_port: 8080
            protocol: TCP

    (1) Designates which namespace the deployment configuration should be in.
    (2) Used to help organize, group, and select objects.
    (3) Specifies that you only want one pod.
    (4) The selector section is a labels query over pods.
    (5) This containers section specifies a container with a hello-world application running on port 8080 on TCP. The image is stored at docker.io/ansibleplaybookbundle/hello-world.

    For more detail, see Writing APBs: Reference, and see the ansible-kubernetes-modules documentation for a full accounting of all fields.

  2. Build and push the APB:

    $ apb build
    $ apb push
  3. Provision the APB using the web console.

  4. After provisioning, there will be a new running pod and a new deployment configuration. Verify by checking your OKD resources:

    $ oc project getting-started
    $ oc get all
    NAME         REVISION   DESIRED   CURRENT   TRIGGERED BY
    dc/my-test   1          1         1         config

    NAME           DESIRED   CURRENT   READY     AGE
    rc/my-test-1   1         1         1         35s

    NAME                 READY     STATUS    RESTARTS   AGE
    po/my-test-1-2pw4t   1/1       Running   0          33s

    You will also be able to see the deployed application in the web console on the project’s Overview page.

The only way to use this pod in its current state is to use:

  $ oc describe pods/<pod_name>

to find its IP address and access it directly. If there were multiple pods, they would be accessed separately. To treat them like a single host, you need to create a service, described in the next section.

To clean up before moving on and allow you to provision again, you can delete the getting-started project and recreate it or create a new one.

Creating a Service

To use multiple pods and load balance across them, create a service so that users can access the pods as a single host:

  1. Modify the provision-my-test-apb/tasks/main.yml file and add the following:

    - name: create my-test service
      k8s_v1_service:
        name: my-test
        namespace: '{{ namespace }}'
        labels:
          app: my-test
          service: my-test
        selector:
          app: my-test
          service: my-test
        ports:
        - name: web
          port: 80
          target_port: 8080

    The selector section will allow the my-test service to include the correct pods. The ports will take the target port from the pods (8080) and expose them as a single port for the service (80). Notice the application was running on 8080 but has now been made available on the default HTTP port of 80.

    The name field of the port allows you to specify this port in the future with other resources. More information is available in the k8s_v1_service module.

  2. Build and push the APB:

    $ apb build
    $ apb push
  3. Provision the APB using the web console.

After provisioning, you will see a new service in the web console or CLI. In the web console, you can click the new service under Networking in the application on the Overview page, or under Applications → Services. The service's IP address will be shown, which you can use to access the load-balanced application.

To view the service information from the command line, you can do the following:

  $ oc project getting-started
  $ oc get services
  $ oc describe services/my-test

The describe command will show the IP address to access the service. However, using an IP address for users to access your application is not generally what you want. Instead, you should create a route, described in the next section.

To clean up before moving on and allow you to provision again, you can delete the getting-started project and recreate it or create a new one.

Creating a Route

You can expose external access to your application through a reliable named route:

  1. Modify the provision-my-test-apb/tasks/main.yml file and add the following:

    - name: create my-test route
      openshift_v1_route:
        name: my-test
        namespace: '{{ namespace }}'
        labels:
          app: my-test
          service: my-test
        to_name: my-test
        spec_port_target_port: web

    The to_name is the name of the target service. The spec_port_target_port refers to the name of the target service’s port. More information is available in the openshift_v1_route module.

  2. Build and push the APB:

    $ apb build
    $ apb push
  3. Provision the APB using the web console.

After provisioning, you will see the new route created. On the web console’s Overview page for the getting-started project, you will now see an active and clickable route link listed on the application. Clicking on the route or visiting the URL will bring up the hello-world application.

You can also view the route information from the CLI:

  $ oc project getting-started
  $ oc get routes
  NAME      HOST/PORT                                   PATH      SERVICES   PORT      TERMINATION   WILDCARD
  my-test   my-test-getting-started.172.17.0.1.nip.io             my-test    web       None
  $ oc describe routes/my-test
  Name:                   my-test
  Namespace:              getting-started
  ...

At this point, your my-test application is fully functional, load balanced, scalable, and accessible. You can compare your finished APB to the hello-world APB in the hello-world-apb example repository.

Deprovision

For the deprovision task, you must destroy all provisioned resources, usually in reverse order from how they were created.

To add the deprovision action, you need a deprovision.yml file under the playbooks/ directory and related tasks in roles/deprovision-my-test-apb/tasks/main.yml. Both files should already be created for you:

  my-test-apb/
  ├── apb.yml
  ├── Dockerfile
  ├── playbooks
  │   └── deprovision.yml (1)
  └── roles
      └── deprovision-my-test-apb
          └── tasks
              └── main.yml (2)

(1) Inspect this file.
(2) Edit this file.

The content of the deprovision.yml file looks the same as the provision playbook, except it calls a different role:

playbooks/deprovision.yml

  - name: my-test-apb playbook to deprovision the application
    hosts: localhost
    gather_facts: false
    connection: local
    roles:
    - role: ansible.kubernetes-modules
      install_python_requirements: no
    - role: ansibleplaybookbundle.asb-modules
    - role: deprovision-my-test-apb
      playbook_debug: false

Edit that role in the file roles/deprovision-my-test-apb/tasks/main.yml. By uncommenting the tasks, the resulting file without comments should look like the following:

  - openshift_v1_route:
      name: my-test
      namespace: '{{ namespace }}'
      state: absent

  - k8s_v1_service:
      name: my-test
      namespace: '{{ namespace }}'
      state: absent

  - openshift_v1_deployment_config:
      name: my-test
      namespace: '{{ namespace }}'
      state: absent

In the provision role created earlier, you created a deployment configuration, then a service, then a route. For the deprovision action, delete the resources in reverse order. You do so by identifying each resource by namespace and name, and then marking it as state: absent.

To run the deprovision action, click the menu on the list of Deployed Services and select Delete.

Bind

From the previous sections, you learned how to deploy a standalone application. However, in most cases applications will need to communicate with other applications, and often with a data source. In the following sections, you will create a PostgreSQL database that the hello-world application deployed from my-test-apb can use.

Preparation

For a good starting point, create the necessary files for provisioning and deprovisioning PostgreSQL.

A more in-depth example can be found at the PostgreSQL example APB.

  1. Initialize the APB using the --bindable option:

    $ apb init my-pg-apb --bindable

    This creates the normal APB file structure with a few differences:

    my-pg-apb/
    ├── apb.yml (1)
    ├── Dockerfile
    ├── playbooks
    │   ├── bind.yml (2)
    │   ├── deprovision.yml
    │   ├── provision.yml
    │   └── unbind.yml (3)
    └── roles
        ├── bind-my-pg-apb
        │   └── tasks
        │       └── main.yml (4)
        ├── deprovision-my-pg-apb
        │   └── tasks
        │       └── main.yml
        ├── provision-my-pg-apb
        │   └── tasks
        │       └── main.yml (5)
        └── unbind-my-pg-apb
            └── tasks
                └── main.yml (6)

    (1) bindable flag set to true
    (2) New file
    (3) New file
    (4) New empty file
    (5) Encoded binding credentials
    (6) New empty file

    In addition to the normal files, new playbooks bind.yml, unbind.yml, and their associated roles have been stubbed out. The bind.yml and unbind.yml files are both empty and, because you are using the default binding behavior, will remain empty.

  2. Edit the apb.yml file. Notice the setting bindable: true. In addition to those changes, you must add some parameters to the apb.yml for configuring PostgreSQL. They will be available fields in the web console when provisioning your new APB:

    version: 1.0
    name: my-pg-apb
    description: This is a sample application generated by apb init
    bindable: True
    async: optional
    metadata:
      displayName: my-pg
    plans:
      - name: default
        description: This default plan deploys my-pg-apb
        free: True
        metadata: {}
        # edit the parameters and add the ones below.
        parameters:
          - name: postgresql_database
            title: PostgreSQL Database Name
            type: string
            default: admin
          - name: postgresql_user
            title: PostgreSQL User
            type: string
            default: admin
          - name: postgresql_password
            title: PostgreSQL Password
            type: string
            default: admin

    The playbooks/provision.yml will look like the following:

    - name: my-pg-apb playbook to provision the application
      hosts: localhost
      gather_facts: false
      connection: local
      roles:
      - role: ansible.kubernetes-modules
        install_python_requirements: no
      - role: ansibleplaybookbundle.asb-modules
      - role: provision-my-pg-apb
        playbook_debug: false

    The playbooks/deprovision.yml will look like the following:

    - name: my-pg-apb playbook to deprovision the application
      hosts: localhost
      gather_facts: false
      connection: local
      roles:
      - role: ansible.kubernetes-modules
        install_python_requirements: no
      - role: deprovision-my-pg-apb
        playbook_debug: false
  3. Edit the roles/provision-my-pg-apb/tasks/main.yml file. This file mirrors your hello-world application in many respects, but adds a persistent volume (PV) to save data between restarts and various configuration options for the deployment configuration.

    In addition, a new task has been added at the very bottom after the provision tasks. To save the credentials created during the provision process, you must encode them for retrieval by the OAB. The new task, using the module asb_encode_binding, will do so for you.

    You can safely delete everything in that file and replace it with the following:

    # New persistent volume claim
    - name: create volumes
      k8s_v1_persistent_volume_claim:
        name: my-pg
        namespace: '{{ namespace }}'
        state: present
        access_modes:
          - ReadWriteOnce
        resources_requests:
          storage: 1Gi

    - name: create deployment config
      openshift_v1_deployment_config:
        name: my-pg
        namespace: '{{ namespace }}'
        labels:
          app: my-pg
          service: my-pg
        replicas: 1
        selector:
          app: my-pg
          service: my-pg
        spec_template_metadata_labels:
          app: my-pg
          service: my-pg
        containers:
        - env:
          - name: POSTGRESQL_PASSWORD
            value: '{{ postgresql_password }}'
          - name: POSTGRESQL_USER
            value: '{{ postgresql_user }}'
          - name: POSTGRESQL_DATABASE
            value: '{{ postgresql_database }}'
          image: docker.io/centos/postgresql-94-centos7
          name: my-pg
          ports:
          - container_port: 5432
            protocol: TCP
          termination_message_path: /dev/termination-log
          volume_mounts:
          - mount_path: /var/lib/pgsql/data
            name: my-pg
          working_dir: /
        volumes:
        - name: my-pg
          persistent_volume_claim:
            claim_name: my-pg
        test: false
        triggers:
        - type: ConfigChange

    - name: create service
      k8s_v1_service:
        name: my-pg
        namespace: '{{ namespace }}'
        state: present
        labels:
          app: my-pg
          service: my-pg
        selector:
          app: my-pg
          service: my-pg
        ports:
        - name: port-5432
          port: 5432
          protocol: TCP
          target_port: 5432

    # New encoding task makes credentials available to future bind operations
    - name: encode bind credentials
      asb_encode_binding:
        fields:
          DB_TYPE: postgres
          DB_HOST: my-pg
          DB_PORT: "5432"
          DB_USER: "{{ postgresql_user }}"
          DB_PASSWORD: "{{ postgresql_password }}"
          DB_NAME: "{{ postgresql_database }}"

    The encode bind credentials task will make available several fields as environment variables: DB_TYPE, DB_HOST, DB_PORT, DB_USER, DB_PASSWORD, and DB_NAME. This is the default behavior when the bind.yml file is left empty. Any application (such as hello-world) can use these environment variables to connect to the configured database after performing a bind operation.

  4. Edit the roles/deprovision-my-pg-apb/tasks/main.yml and uncomment the following lines so that the created resources will be deleted during deprovisioning:

    - k8s_v1_service:
        name: my-pg
        namespace: '{{ namespace }}'
        state: absent

    - openshift_v1_deployment_config:
        name: my-pg
        namespace: '{{ namespace }}'
        state: absent

    - k8s_v1_persistent_volume_claim:
        name: my-pg
        namespace: '{{ namespace }}'
        state: absent
  5. Finally, build and push your APB:

    $ apb build
    $ apb push

At this point, the APB can create a fully functional PostgreSQL database in your cluster. You can test it out in the next section.
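The fields saved by the encode bind credentials task surface in a bound application as ordinary environment variables. As a hedged sketch of how an application might consume them (the values here are the apb.yml defaults, and the URI format is just a common PostgreSQL convention, not something the broker mandates):

```shell
# In a real pod these variables come from the binding secret; here they
# are set by hand using the defaults from apb.yml so the sketch is runnable.
export DB_TYPE=postgres DB_HOST=my-pg DB_PORT=5432
export DB_USER=admin DB_PASSWORD=admin DB_NAME=admin

# Assemble a conventional connection URI from the bound credentials.
CONN="${DB_TYPE}://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$CONN"
# postgres://admin:admin@my-pg:5432/admin
```

Any application bound to the service can build the same string from its own environment after the bind operation completes.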

Executing From the UI

To test your application, you can bind a hello-world application to the provisioned PostgreSQL database. You can use the application previously created in the Provision section of this tutorial, or you can use the hello-world-apb:

  1. First, provision my-test-apb.

  2. Then, provision my-pg-apb and select the option to Create a secret:

    provision my pg

    provision my pg params

    provision my pg binding

    provision my pg results

  3. Now, if you have not already done so, navigate to the project. You can see both your hello-world application and your PostgreSQL database. If you did not select to create a binding at provision time, you can also do so here with the Create binding link.

  4. After the binding has been created, you must add the secret created by the binding into the application. First, navigate to the secrets on the Resources → Secrets page:

    my pg nav secrets

    my pg secrets

  5. Add the secret as environment variables:

    my pg add secret

    my pg add secret app

  6. After this addition, you can return to the Overview page. The my-test application may still be redeploying from the configuration change. If so, wait until you can click on the route to view the application:

    my pg overview

    After clicking the route, you will see the hello-world application has detected and connected to the my-pg database:

    my pg hello world

Test

Test actions are intended to check that an APB passes a basic sanity check before publishing to the service catalog. They are not meant to test a live service. OKD provides the ability to test a live service using liveness and readiness probes, which you can add when provisioning.

The actual implementation of your test is left to you as the APB author. The following sections provide guidance and best practices.

Writing a Test Action

To create a test action for your APB:

  • Include a playbooks/test.yml file.

  • Include defaults for the test in the playbooks/vars/ directory.

  my-apb/
  ├── ...
  └── playbooks/
      ├── test.yml
      └── vars/
          └── test_defaults.yml

To orchestrate the testing of an APB, you should use the include_vars and include_role modules in your test.yml file:

test.yml

  - name: test media wiki apb
    hosts: localhost
    gather_facts: false
    connection: local
    roles:
    - role: ansible.kubernetes-modules (1)
      install_python_requirements: no
    post_tasks:
    - name: Load default variables for testing (2)
      include_vars: test_defaults.yml
    - name: create project for namespace
      openshift_v1_project:
        name: '{{ namespace }}'
    - name: Run the provision role. (3)
      include_role:
        name: provision-mediawiki-apb
    - name: Run the verify role. (4)
      include_role:
        name: verify-mediawiki-apb
(1) Load the Ansible Kubernetes modules.
(2) Include the default values needed for provision from the test role.
(3) Include the provision role to run.
(4) Include the verify role to run. See Writing a Verify Role.
Writing a Verify Role

A verify role allows you to determine if the provision has failed or succeeded. The verify_<name> role should be in the roles/ directory. This should be a normal Ansible role.

  my-apb/
  ├── ...
  └── roles/
      ├── ...
      └── verify_<name>
          ├── defaults
          │   └── defaults.yml
          └── tasks
              └── main.yml

An example task in the main.yml file could look like:

  - name: url check for media wiki
    uri:
      url: "http://{{ route.route.spec.host }}"
      return_content: yes
    register: webpage
    failed_when: webpage.status != 200
Saving Test Results

The asb_save_test_result module can also be used in the verify role, allowing the APB to save test results so that the apb test command can return them. The APB pod will stay alive for the tool to retrieve the test results.

For example, adding asb_save_test_result usage to the previous main.yml example:

  - name: url check for media wiki
    uri:
      url: "http://{{ route.route.spec.host }}"
      return_content: yes
    register: webpage

  - name: Save failure for the web page
    asb_save_test_result:
      fail: true
      msg: "Could not reach route and retrieve a 200 status code. Received status - {{ webpage.status }}"
    when: webpage.status != 200

  - fail:
      msg: "Could not reach route and retrieve a 200 status code. Received status - {{ webpage.status }}"
    when: webpage.status != 200

  - name: Save test pass
    asb_save_test_result:
      fail: false
    when: webpage.status == 200
Running a Test Action

After you have defined your test action, you can use the CLI tooling to run the test:

  $ apb test

The test action will:

  • build the image,

  • start up a pod as if it was being run by the service broker, and

  • retrieve the test results if any were saved.

The status of the pod after execution finishes determines the status of the test. If the pod is in an error state, something failed and the command reports that the test was unsuccessful.