Developing PTP events consumer applications

When developing consumer applications that make use of Precision Time Protocol (PTP) events on a bare-metal cluster node, you need to deploy your consumer application and a cloud-event-proxy container in a separate application pod. The cloud-event-proxy container receives the events from the PTP Operator pod and passes them to the consumer application. The consumer application subscribes to the events posted in the cloud-event-proxy container by using a REST API.

For more information about deploying PTP events applications, see About the PTP fast event notifications framework.

The following information provides general guidance for developing consumer applications that use PTP events. A complete events consumer application example is outside the scope of this information.

PTP events consumer application reference

PTP event consumer applications require the following features:

  1. A web service running with a POST handler to receive the cloud native PTP events JSON payload

  2. A createSubscription function to subscribe to the PTP events producer

  3. A getCurrentState function to poll the current state of the PTP events producer

The following example Go snippets illustrate these requirements:

Example PTP events consumer server function in Go

  func server() {
      http.HandleFunc("/event", getEvent)
      http.ListenAndServe("localhost:8989", nil)
  }

  func getEvent(w http.ResponseWriter, req *http.Request) {
      defer req.Body.Close()
      bodyBytes, err := io.ReadAll(req.Body)
      if err != nil {
          log.Errorf("error reading event %v", err)
      }
      e := string(bodyBytes)
      if e != "" {
          processEvent(bodyBytes)
          log.Infof("received event %s", string(bodyBytes))
      } else {
          w.WriteHeader(http.StatusNoContent)
      }
  }
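
The processEvent function called by the handler is not defined in the reference snippet. The following is a minimal, hypothetical sketch of what it might do, assuming the payload has the structure shown in the example lock-state API response later in this section, that encoding/json is imported, and that log is a logrus-style logger:

  // Hypothetical processEvent helper: decodes the cloud event payload and
  // logs a warning when a notification value reports a state other than LOCKED.
  type eventValue struct {
      Resource  string `json:"resource"`
      DataType  string `json:"dataType"`
      ValueType string `json:"valueType"`
      Value     string `json:"value"`
  }

  func processEvent(bodyBytes []byte) {
      var e struct {
          Type string `json:"type"`
          Data struct {
              Version string       `json:"version"`
              Values  []eventValue `json:"values"`
          } `json:"data"`
      }
      if err := json.Unmarshal(bodyBytes, &e); err != nil {
          log.Errorf("error unmarshalling event %v", err)
          return
      }
      for _, v := range e.Data.Values {
          // "notification" entries carry the sync state; "metric" entries carry values
          if v.DataType == "notification" && v.Value != "LOCKED" {
              log.Warningf("resource %s reported state %s", v.Resource, v.Value)
          }
      }
  }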

Example PTP events createSubscription function in Go

  import (
      "github.com/redhat-cne/sdk-go/pkg/pubsub"
      "github.com/redhat-cne/sdk-go/pkg/types"
      v1pubsub "github.com/redhat-cne/sdk-go/v1/pubsub"
  )

  // Subscribe to PTP events using the REST API
  s1, _ := createSubscription("/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state") (1)
  s2, _ := createSubscription("/cluster/node/<node_name>/sync/ptp-status/ptp-clock-class-change")
  s3, _ := createSubscription("/cluster/node/<node_name>/sync/ptp-status/lock-state")

  // Create a PTP event subscription with a POST request
  func createSubscription(resourceAddress string) (sub pubsub.PubSub, err error) {
      var status int
      apiPath := "/api/ocloudNotifications/v1/"
      localAPIAddr := "localhost:8989" // vDU service API address
      apiAddr := "localhost:8089"      // event framework API address
      subURL := &types.URI{URL: url.URL{Scheme: "http",
          Host: apiAddr,
          Path: fmt.Sprintf("%s%s", apiPath, "subscriptions")}}
      endpointURL := &types.URI{URL: url.URL{Scheme: "http",
          Host: localAPIAddr,
          Path: "event"}}
      sub = v1pubsub.NewPubSub(endpointURL, resourceAddress)
      var subB []byte
      if subB, err = json.Marshal(&sub); err == nil {
          rc := restclient.New()
          if status, subB = rc.PostWithReturn(subURL, subB); status != http.StatusCreated {
              err = fmt.Errorf("error in subscription creation api at %s, returned status %d", subURL, status)
          } else {
              err = json.Unmarshal(subB, &sub)
          }
      } else {
          err = fmt.Errorf("failed to marshal subscription for %s", resourceAddress)
      }
      return
  }
(1) Replace <node_name> with the FQDN of the node that is generating the PTP events. For example, compute-1.example.com.
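
The s1, s2, and s3 assignments above discard the returned errors for brevity. A production consumer should check them; for example, a minimal sketch (assuming the pubsub.PubSub type exposes the subscription ID that the producer returns, as shown in the example responses later in this section):

  if s1, err := createSubscription("/cluster/node/<node_name>/sync/ptp-status/lock-state"); err != nil {
      log.Errorf("lock-state subscription failed: %v", err)
  } else {
      log.Infof("lock-state subscription created with ID %s", s1.ID)
  }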

Example PTP events consumer getCurrentState function in Go

  // Get the PTP event state for the resource
  func getCurrentState(resource string) {
      // Build the CurrentState REST API endpoint URL
      url := &types.URI{URL: url.URL{Scheme: "http",
          Host: "localhost:8089", // event framework API address
          Path: fmt.Sprintf("/api/ocloudNotifications/v1/%s/CurrentState", resource)}}
      rc := restclient.New()
      status, event := rc.Get(url)
      if status != http.StatusOK {
          log.Errorf("CurrentState: error %d from url %s, %s", status, url.String(), event)
      } else {
          log.Debugf("Got CurrentState: %s ", event)
      }
  }
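
For example, to poll the current lock state for a node (a usage sketch; the resource path matches the table in the next section, and compute-1.example.com is a placeholder node FQDN):

  getCurrentState("/cluster/node/compute-1.example.com/sync/ptp-status/lock-state")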

Reference cloud-event-proxy deployment and service CRs

Use the following example cloud-event-proxy deployment and subscriber service CRs as a reference when deploying your PTP events consumer application.

Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 30 November 2030. For more information, see Red Hat AMQ Interconnect support status.

Reference cloud-event-proxy deployment with HTTP transport

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: event-consumer-deployment
    namespace: <namespace>
    labels:
      app: consumer
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: consumer
    template:
      metadata:
        labels:
          app: consumer
      spec:
        serviceAccountName: sidecar-consumer-sa
        containers:
          - name: event-subscriber
            image: event-subscriber-app
          - name: cloud-event-proxy-as-sidecar
            image: openshift4/ose-cloud-event-proxy
            args:
              - "--metrics-addr=127.0.0.1:9091"
              - "--store-path=/store"
              - "--transport-host=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043"
              - "--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043"
              - "--api-port=8089"
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
              - name: NODE_IP
                valueFrom:
                  fieldRef:
                    fieldPath: status.hostIP
            volumeMounts:
              - name: pubsubstore
                mountPath: /store
            ports:
              - name: metrics-port
                containerPort: 9091
              - name: sub-port
                containerPort: 9043
        volumes:
          - name: pubsubstore
            emptyDir: {}
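
Because the event-subscriber and cloud-event-proxy containers run in the same pod, they share a network namespace. This is why the consumer application can reach the sidecar REST API on localhost:8089 (the --api-port value) and the sidecar can deliver events back to the consumer web service on localhost:8989, as in the Go examples earlier in this section.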

Reference cloud-event-proxy deployment with AMQ transport

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: cloud-event-proxy-sidecar
    namespace: cloud-events
    labels:
      app: cloud-event-proxy
  spec:
    selector:
      matchLabels:
        app: cloud-event-proxy
    template:
      metadata:
        labels:
          app: cloud-event-proxy
      spec:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        containers:
          - name: cloud-event-sidecar
            image: openshift4/ose-cloud-event-proxy
            args:
              - "--metrics-addr=127.0.0.1:9091"
              - "--store-path=/store"
              - "--transport-host=amqp://router.router.svc.cluster.local"
              - "--api-port=8089"
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
              - name: NODE_IP
                valueFrom:
                  fieldRef:
                    fieldPath: status.hostIP
            volumeMounts:
              - name: pubsubstore
                mountPath: /store
            ports:
              - name: metrics-port
                containerPort: 9091
              - name: sub-port
                containerPort: 9043
        volumes:
          - name: pubsubstore
            emptyDir: {}

Reference cloud-event-proxy subscriber service

  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      prometheus.io/scrape: "true"
      service.alpha.openshift.io/serving-cert-secret-name: sidecar-consumer-secret
    name: consumer-events-subscription-service
    namespace: cloud-events
    labels:
      app: consumer-service
  spec:
    ports:
      - name: sub-port
        port: 9043
    selector:
      app: consumer
    clusterIP: None
    sessionAffinity: None
    type: ClusterIP
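
The clusterIP: None setting makes this a headless service, so the consumer-events-subscription-service name used in the deployment's --transport-host argument resolves directly to the subscriber pod rather than to a load-balanced virtual IP.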

PTP events available from the cloud-event-proxy sidecar REST API

PTP events consumer applications can poll the PTP events producer for the following PTP timing events.

Table 1. PTP events available from the cloud-event-proxy sidecar

/cluster/node/<node_name>/sync/ptp-status/lock-state

    Describes the current status of the PTP equipment lock state. Can be in LOCKED, HOLDOVER, or FREERUN state.

/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state

    Describes the host operating system clock synchronization state. Can be in LOCKED or FREERUN state.

/cluster/node/<node_name>/sync/ptp-status/ptp-clock-class-change

    Describes the current state of the PTP clock class.

Subscribing the consumer application to PTP events

Before the PTP events consumer application can poll for events, you need to subscribe the application to the event producer.
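
The createSubscription function shown earlier uses the redhat-cne sdk-go helpers. If you prefer not to depend on them, the same subscription POST can be made with the Go standard library alone. The following is a minimal sketch, assuming the consumer event endpoint on port 8989 and the sidecar API on port 8089, as in the rest of this section:

  import (
      "bytes"
      "fmt"
      "net/http"
  )

  // subscribe posts a subscription for the given resource path to the
  // cloud-event-proxy sidecar REST API.
  func subscribe(resource string) error {
      payload := fmt.Sprintf(`{"endpointUri": "http://localhost:8989/event", "resource": %q}`, resource)
      resp, err := http.Post("http://localhost:8089/api/ocloudNotifications/v1/subscriptions",
          "application/json", bytes.NewBufferString(payload))
      if err != nil {
          return err
      }
      defer resp.Body.Close()
      // The API returns 201 Created when the subscription is accepted
      if resp.StatusCode != http.StatusCreated {
          return fmt.Errorf("subscription for %s returned status %d", resource, resp.StatusCode)
      }
      return nil
  }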

Subscribing to PTP lock-state events

To create a subscription for PTP lock-state events, send a POST action to the cloud event API at http://localhost:8089/api/ocloudNotifications/v1/subscriptions with the following payload:

  {
    "endpointUri": "http://localhost:8989/event",
    "resource": "/cluster/node/<node_name>/sync/ptp-status/lock-state"
  }

Example response

  {
    "id": "e23473d9-ba18-4f78-946e-401a0caeff90",
    "endpointUri": "http://localhost:8989/event",
    "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions/e23473d9-ba18-4f78-946e-401a0caeff90",
    "resource": "/cluster/node/<node_name>/sync/ptp-status/lock-state"
  }

Subscribing to PTP os-clock-sync-state events

To create a subscription for PTP os-clock-sync-state events, send a POST action to the cloud event API at http://localhost:8089/api/ocloudNotifications/v1/subscriptions with the following payload:

  {
    "endpointUri": "http://localhost:8989/event",
    "resource": "/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state"
  }

Example response

  {
    "id": "e23473d9-ba18-4f78-946e-401a0caeff90",
    "endpointUri": "http://localhost:8989/event",
    "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions/e23473d9-ba18-4f78-946e-401a0caeff90",
    "resource": "/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state"
  }

Subscribing to PTP ptp-clock-class-change events

To create a subscription for PTP ptp-clock-class-change events, send a POST action to the cloud event API at http://localhost:8089/api/ocloudNotifications/v1/subscriptions with the following payload:

  {
    "endpointUri": "http://localhost:8989/event",
    "resource": "/cluster/node/<node_name>/sync/ptp-status/ptp-clock-class-change"
  }

Example response

  {
    "id": "e23473d9-ba18-4f78-946e-401a0caeff90",
    "endpointUri": "http://localhost:8989/event",
    "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions/e23473d9-ba18-4f78-946e-401a0caeff90",
    "resource": "/cluster/node/<node_name>/sync/ptp-status/ptp-clock-class-change"
  }

Getting the current PTP clock status

To get the current PTP status for the node, send a GET action to one of the following event REST APIs:

  • http://localhost:8089/api/ocloudNotifications/v1/cluster/node/<node_name>/sync/ptp-status/lock-state/CurrentState

  • http://localhost:8089/api/ocloudNotifications/v1/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state/CurrentState

  • http://localhost:8089/api/ocloudNotifications/v1/cluster/node/<node_name>/sync/ptp-status/ptp-clock-class-change/CurrentState

The response is a cloud native event JSON object. For example:

Example lock-state API response

  {
    "id": "c1ac3aa5-1195-4786-84f8-da0ea4462921",
    "type": "event.sync.ptp-status.ptp-state-change",
    "source": "/cluster/node/compute-1.example.com/sync/ptp-status/lock-state",
    "dataContentType": "application/json",
    "time": "2023-01-10T02:41:57.094981478Z",
    "data": {
      "version": "v1",
      "values": [
        {
          "resource": "/cluster/node/compute-1.example.com/ens5fx/master",
          "dataType": "notification",
          "valueType": "enumeration",
          "value": "LOCKED"
        },
        {
          "resource": "/cluster/node/compute-1.example.com/ens5fx/master",
          "dataType": "metric",
          "valueType": "decimal64.3",
          "value": "29"
        }
      ]
    }
  }
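
In this example payload, the notification entry reports the lock state of the PTP interface, and the accompanying decimal64.3 metric entry carries the associated clock offset value.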

Verifying that the PTP events consumer application is receiving events

Verify that the cloud-event-proxy container in the application pod is receiving PTP events.

Prerequisites

  • You have installed the OpenShift CLI (oc).

  • You have logged in as a user with cluster-admin privileges.

  • You have installed and configured the PTP Operator.

Procedure

  1. Get the list of active linuxptp-daemon pods by running the following command:

    $ oc get pods -n openshift-ptp

    Example output

    NAME                    READY   STATUS    RESTARTS   AGE
    linuxptp-daemon-2t78p   3/3     Running   0          8h
    linuxptp-daemon-k8n88   3/3     Running   0          8h
  2. Access the metrics for the required consumer-side cloud-event-proxy container by running the following command:

    $ oc exec -it <linuxptp-daemon> -n openshift-ptp -c cloud-event-proxy -- curl 127.0.0.1:9091/metrics

    where:

    <linuxptp-daemon>

    Specifies the pod you want to query, for example, linuxptp-daemon-2t78p.

    Example output

    # HELP cne_transport_connections_resets Metric to get number of connection resets
    # TYPE cne_transport_connections_resets gauge
    cne_transport_connection_reset 1
    # HELP cne_transport_receiver Metric to get number of receiver created
    # TYPE cne_transport_receiver gauge
    cne_transport_receiver{address="/cluster/node/compute-1.example.com/ptp",status="active"} 2
    cne_transport_receiver{address="/cluster/node/compute-1.example.com/redfish/event",status="active"} 2
    # HELP cne_transport_sender Metric to get number of sender created
    # TYPE cne_transport_sender gauge
    cne_transport_sender{address="/cluster/node/compute-1.example.com/ptp",status="active"} 1
    cne_transport_sender{address="/cluster/node/compute-1.example.com/redfish/event",status="active"} 1
    # HELP cne_events_ack Metric to get number of events produced
    # TYPE cne_events_ack gauge
    cne_events_ack{status="success",type="/cluster/node/compute-1.example.com/ptp"} 18
    cne_events_ack{status="success",type="/cluster/node/compute-1.example.com/redfish/event"} 18
    # HELP cne_events_transport_published Metric to get number of events published by the transport
    # TYPE cne_events_transport_published gauge
    cne_events_transport_published{address="/cluster/node/compute-1.example.com/ptp",status="failed"} 1
    cne_events_transport_published{address="/cluster/node/compute-1.example.com/ptp",status="success"} 18
    cne_events_transport_published{address="/cluster/node/compute-1.example.com/redfish/event",status="failed"} 1
    cne_events_transport_published{address="/cluster/node/compute-1.example.com/redfish/event",status="success"} 18
    # HELP cne_events_transport_received Metric to get number of events received by the transport
    # TYPE cne_events_transport_received gauge
    cne_events_transport_received{address="/cluster/node/compute-1.example.com/ptp",status="success"} 18
    cne_events_transport_received{address="/cluster/node/compute-1.example.com/redfish/event",status="success"} 18
    # HELP cne_events_api_published Metric to get number of events published by the rest api
    # TYPE cne_events_api_published gauge
    cne_events_api_published{address="/cluster/node/compute-1.example.com/ptp",status="success"} 19
    cne_events_api_published{address="/cluster/node/compute-1.example.com/redfish/event",status="success"} 19
    # HELP cne_events_received Metric to get number of events received
    # TYPE cne_events_received gauge
    cne_events_received{status="success",type="/cluster/node/compute-1.example.com/ptp"} 18
    cne_events_received{status="success",type="/cluster/node/compute-1.example.com/redfish/event"} 18
    # HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
    # TYPE promhttp_metric_handler_requests_in_flight gauge
    promhttp_metric_handler_requests_in_flight 1
    # HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
    # TYPE promhttp_metric_handler_requests_total counter
    promhttp_metric_handler_requests_total{code="200"} 4
    promhttp_metric_handler_requests_total{code="500"} 0
    promhttp_metric_handler_requests_total{code="503"} 0