Using PTP hardware

You can configure linuxptp services and use PTP-capable hardware in OKD cluster nodes.

About PTP hardware

You can use the OKD console or OpenShift CLI (oc) to install PTP by deploying the PTP Operator. The PTP Operator creates and manages the linuxptp services and provides the following features:

  • Discovery of the PTP-capable devices in the cluster.

  • Management of the configuration of linuxptp services.

  • Notification of PTP clock events that negatively affect the performance and reliability of your application, provided by the PTP Operator cloud-event-proxy sidecar.

The PTP Operator works with PTP-capable devices on clusters provisioned only on bare-metal infrastructure.

About PTP

Precision Time Protocol (PTP) is used to synchronize clocks in a network. When used in conjunction with hardware support, PTP is capable of sub-microsecond accuracy, and is more accurate than Network Time Protocol (NTP).

Elements of a PTP domain

PTP is used to synchronize multiple nodes connected in a network, each with its own clock. The clocks synchronized by PTP are organized in a source-destination hierarchy. The hierarchy is created and updated automatically by the best master clock (BMC) algorithm, which runs on every clock. Destination clocks are synchronized to source clocks, and destination clocks can themselves be the source for other downstream clocks. The three primary types of PTP clocks are described below.

Grandmaster clock

The grandmaster clock provides standard time information to other clocks across the network and ensures accurate and stable synchronization. It writes time stamps and responds to time requests from other clocks. Grandmaster clocks synchronize to a Global Navigation Satellite System (GNSS) time source. The grandmaster clock is the authoritative source of time in the network and is responsible for providing time synchronization to all other devices.

Ordinary clock

The ordinary clock has a single port connection that can play the role of source or destination clock, depending on its position in the network. The ordinary clock can read and write time stamps.

Boundary clock

The boundary clock has ports in two or more communication paths and can be a source and a destination to other destination clocks at the same time. The boundary clock works as a destination clock upstream. The destination clock receives the timing message, adjusts for delay, and then creates a new source time signal to pass down the network. The boundary clock produces a new timing packet that is still correctly synced with the source clock and can reduce the number of connected devices reporting directly to the source clock.

Advantages of PTP over NTP

One of the main advantages that PTP has over NTP is the hardware support present in various network interface controllers (NIC) and network switches. The specialized hardware allows PTP to account for delays in message transfer and improves the accuracy of time synchronization. To achieve the best possible accuracy, it is recommended that all networking components between PTP clocks are PTP hardware enabled.

Hardware-based PTP provides optimal accuracy, since the NIC can time stamp the PTP packets at the exact moment they are sent and received. Compare this to software-based PTP, which requires additional processing of the PTP packets by the operating system.

Before enabling PTP, ensure that NTP is disabled for the required nodes. You can disable the chrony time service (chronyd) using a MachineConfig custom resource. For more information, see Disabling chrony time service.
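As a hedged illustration (the resource name and role label are placeholders; see "Disabling chrony time service" for the complete procedure), a MachineConfig that disables chronyd on worker nodes takes roughly this shape:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-disable-chronyd   # illustrative name
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - name: chronyd.service
        enabled: false   # stop and disable the chrony time service
```

Note that applying a MachineConfig triggers a rolling reboot of the nodes in the affected machine config pool.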

Using PTP with dual NIC hardware

OKD supports single and dual NIC hardware for precision PTP timing in the cluster.

For 5G telco networks that deliver mid-band spectrum coverage, each virtual distributed unit (vDU) requires connections to 6 radio units (RUs). To make these connections, each vDU host requires 2 NICs configured as boundary clocks.

Dual NIC hardware allows you to connect each NIC to the same upstream leader clock with separate ptp4l instances for each NIC feeding the downstream clocks.
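The dual-NIC pattern can be sketched as one PtpConfig profile per NIC; the interface and profile names below are illustrative, and phc2sysOpts is typically set on only one of the two profiles so that a single NIC disciplines the system clock:

```yaml
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: boundary-clock-ptp-config-nic1   # a second, similar CR covers the other NIC
  namespace: openshift-ptp
spec:
  profile:
  - name: boundary-clock-nic1
    ptp4lOpts: "-2"
    phc2sysOpts: "-a -r -n 24"   # sync the system clock from this NIC only
    ptp4lConf: |
      [ens5f0]
      masterOnly 0   # upstream port towards the leader clock
      [ens5f1]
      masterOnly 1   # downstream port feeding destination clocks
```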

Overview of linuxptp in OKD nodes

OKD uses PTP and linuxptp for high precision system timing in bare-metal infrastructure. The linuxptp package includes the ts2phc, pmc, ptp4l, and phc2sys programs for system clock synchronization.

ts2phc

ts2phc synchronizes the PTP hardware clock (PHC) across PTP devices with a high degree of precision. ts2phc is used in grandmaster clock configurations. It receives the precision timing signal from a high-precision clock source such as a Global Navigation Satellite System (GNSS). GNSS provides an accurate and reliable source of synchronized time for use in large distributed networks. GNSS clocks typically provide time information with a precision of a few nanoseconds.

The ts2phc system daemon sends timing information from the grandmaster clock to other PTP devices in the network by reading time information from the grandmaster clock and converting it to PHC format. PHC time is used by other devices in the network to synchronize their clocks with the grandmaster clock.
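For illustration, an invocation of the kind the Operator assembles from the ts2phcOpts and ts2phcConf fields might look like the following; the config file path is an example, and the flags are standard linuxptp ts2phc options:

```shell
# -f: ts2phc configuration file
# -s: time source (nmea for a GNSS NMEA feed)
# -m: print messages to stdout so the daemon logs can be scraped
$ ts2phc -f /var/run/ts2phc.0.config -s nmea -m
```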

pmc

pmc implements a PTP management client according to IEEE standard 1588. pmc provides basic management access for the ptp4l system daemon. pmc reads from standard input and sends the output over the selected transport, printing any replies it receives.
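For example, you can query ptp4l from inside a linuxptp-daemon pod; the pod name and config file path below are illustrative:

```shell
$ oc exec -it linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container -- \
    pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'
```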

ptp4l

ptp4l implements the PTP boundary clock and ordinary clock and runs as a system daemon. ptp4l does the following:

  • Synchronizes the PHC to the source clock with hardware time stamping

  • Synchronizes the system clock to the source clock with software time stamping

phc2sys

phc2sys synchronizes the system clock to the PHC on the network interface controller (NIC). The phc2sys system daemon continuously monitors the PHC for timing information. When it detects a timing error, phc2sys updates the system clock.
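For illustration, the phc2sysOpts values used later in this document correspond to an invocation like the following (flags per the linuxptp phc2sys man page):

```shell
# -a: configure clocks automatically from the running ptp4l instance
# -r: allow the system clock (CLOCK_REALTIME) to be a time sink
# -n 24: PTP domain number
# -m: print messages to stdout
$ phc2sys -a -r -n 24 -m
```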

Installing the PTP Operator using the CLI

As a cluster administrator, you can install the Operator by using the CLI.

Prerequisites

  • A cluster installed on bare-metal hardware with nodes that have hardware that supports PTP.

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create a namespace for the PTP Operator.

    1. Save the following YAML in the ptp-namespace.yaml file:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: openshift-ptp
        annotations:
          workload.openshift.io/allowed: management
        labels:
          name: openshift-ptp
          openshift.io/cluster-monitoring: "true"
    2. Create the Namespace CR:

      $ oc create -f ptp-namespace.yaml
  2. Create an Operator group for the PTP Operator.

    1. Save the following YAML in the ptp-operatorgroup.yaml file:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: ptp-operators
        namespace: openshift-ptp
      spec:
        targetNamespaces:
        - openshift-ptp
    2. Create the OperatorGroup CR:

      $ oc create -f ptp-operatorgroup.yaml
  3. Subscribe to the PTP Operator.

    1. Save the following YAML in the ptp-sub.yaml file:

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: ptp-operator-subscription
        namespace: openshift-ptp
      spec:
        channel: "stable"
        name: ptp-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
    2. Create the Subscription CR:

      $ oc create -f ptp-sub.yaml
  4. To verify that the Operator is installed, enter the following command:

    $ oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.status.phase

    Example output

    Name                  Phase
    4.13.0-202301261535   Succeeded

Installing the PTP Operator using the web console

As a cluster administrator, you can install the PTP Operator using the web console.

You must create the namespace and Operator group as described in the previous section.

Procedure

  1. Install the PTP Operator using the OKD web console:

    1. In the OKD web console, click Operators → OperatorHub.

    2. Choose PTP Operator from the list of available Operators, and then click Install.

    3. On the Install Operator page, under A specific namespace on the cluster, select openshift-ptp. Then, click Install.

  2. Optional: Verify that the PTP Operator installed successfully:

    1. Switch to the Operators → Installed Operators page.

    2. Ensure that PTP Operator is listed in the openshift-ptp project with a Status of InstallSucceeded.

      During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

      If the Operator does not appear as installed, to troubleshoot further:

      • Go to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.

      • Go to the Workloads → Pods page and check the logs for pods in the openshift-ptp project.
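The same checks can be made from the CLI; replace the pod name placeholder with an actual pod from your cluster:

```shell
$ oc get csv,subscription,installplan -n openshift-ptp
$ oc get pods -n openshift-ptp
$ oc logs <linuxptp-daemon-pod> -n openshift-ptp -c linuxptp-daemon-container
```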

Configuring PTP devices

The PTP Operator adds the NodePtpDevice.ptp.openshift.io custom resource definition (CRD) to OKD.

When installed, the PTP Operator searches your cluster for PTP-capable network devices on each node. It creates and updates a NodePtpDevice custom resource (CR) object for each node that provides a compatible PTP-capable network device.

Discovering PTP-capable network devices in your cluster

  • To return a complete list of PTP-capable network devices in your cluster, run the following command:

    $ oc get NodePtpDevice -n openshift-ptp -o yaml

    Example output

    apiVersion: v1
    items:
    - apiVersion: ptp.openshift.io/v1
      kind: NodePtpDevice
      metadata:
        creationTimestamp: "2022-01-27T15:16:28Z"
        generation: 1
        name: dev-worker-0 (1)
        namespace: openshift-ptp
        resourceVersion: "6538103"
        uid: d42fc9ad-bcbf-4590-b6d8-b676c642781a
      spec: {}
      status:
        devices: (2)
        - name: eno1
        - name: eno2
        - name: eno3
        - name: eno4
        - name: enp5s0f0
        - name: enp5s0f1
    ...
    (1) The value for the name parameter is the same as the name of the parent node.
    (2) The devices collection includes a list of the PTP-capable devices that the PTP Operator discovers for the node.
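To list only the discovered device names for a single node, a jsonpath query such as the following can be used (the node name is taken from the example above):

```shell
$ oc get nodeptpdevice dev-worker-0 -n openshift-ptp \
    -o jsonpath='{range .status.devices[*]}{.name}{"\n"}{end}'
```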

Configuring linuxptp services as a grandmaster clock

You can configure the linuxptp services (ptp4l, phc2sys, ts2phc) as grandmaster clock by creating a PtpConfig custom resource (CR) that configures the host NIC.

The ts2phc utility allows you to synchronize the system clock with the PTP grandmaster clock so that the node can stream a precision clock signal to downstream PTP ordinary clocks and boundary clocks.

Use the following example PtpConfig CR as the basis to configure linuxptp services as the grandmaster clock for your particular hardware and environment. This example CR does not configure PTP fast events. To configure PTP fast events, set appropriate values for ptp4lOpts, ptp4lConf, and ptpClockThreshold. ptpClockThreshold is used only when events are enabled. See “Configuring the PTP fast event notifications publisher” for more information.

Prerequisites

  • Install an Intel Westport Channel network interface in the bare-metal cluster host.

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

  • Install the PTP Operator.

Procedure

  1. Create the PtpConfig resource. For example:

    1. Save the following YAML in the grandmaster-clock-ptp-config.yaml file:

      Recommended PTP grandmaster clock configuration

      apiVersion: ptp.openshift.io/v1
      kind: PtpConfig
      metadata:
        name: grandmaster
        namespace: openshift-ptp
      spec:
        profile:
        - name: "grandmaster"
          ptp4lOpts: "-2 --summary_interval -4"
          phc2sysOpts: -r -u 0 -m -O -37 -N 8 -R 16 -s ens2f1 -n 24
          ptpSchedulingPolicy: SCHED_FIFO
          ptpSchedulingPriority: 10
          plugins:
            e810:
              enableDefaultConfig: true
          ts2phcOpts: " "
          ts2phcConf: |
            [nmea]
            ts2phc.master 1
            [global]
            use_syslog 0
            verbose 1
            logging_level 7
            ts2phc.pulsewidth 100000000
            #GNSS module - ls /dev/gnss* -al
            ts2phc.nmea_serialport /dev/gnss0
            leapfile /usr/share/zoneinfo/leap-seconds.list
            [ens2f1]
            ts2phc.extts_polarity rising
            ts2phc.extts_correction 0
          ptp4lConf: |
            [ens2f1]
            masterOnly 1
            [ens2f2]
            masterOnly 1
            [ens2f3]
            masterOnly 1
            [ens2f4]
            masterOnly 1
            [global]
            #
            # Default Data Set
            #
            twoStepFlag 1
            priority1 128
            priority2 128
            domainNumber 24
            #utc_offset 37
            clockClass 6
            clockAccuracy 0x27
            offsetScaledLogVariance 0xFFFF
            free_running 0
            freq_est_interval 1
            dscp_event 0
            dscp_general 0
            dataset_comparison G.8275.x
            G.8275.defaultDS.localPriority 128
            #
            # Port Data Set
            #
            logAnnounceInterval -3
            logSyncInterval -4
            logMinDelayReqInterval -4
            logMinPdelayReqInterval 0
            announceReceiptTimeout 3
            syncReceiptTimeout 0
            delayAsymmetry 0
            fault_reset_interval 4
            neighborPropDelayThresh 20000000
            masterOnly 0
            G.8275.portDS.localPriority 128
            #
            # Run time options
            #
            assume_two_step 0
            logging_level 6
            path_trace_enabled 0
            follow_up_info 0
            hybrid_e2e 0
            inhibit_multicast_service 0
            net_sync_monitor 0
            tc_spanning_tree 0
            tx_timestamp_timeout 50
            unicast_listen 0
            unicast_master_table 0
            unicast_req_duration 3600
            use_syslog 1
            verbose 0
            summary_interval -4
            kernel_leap 1
            check_fup_sync 0
            #
            # Servo Options
            #
            pi_proportional_const 0.0
            pi_integral_const 0.0
            pi_proportional_scale 0.0
            pi_proportional_exponent -0.3
            pi_proportional_norm_max 0.7
            pi_integral_scale 0.0
            pi_integral_exponent 0.4
            pi_integral_norm_max 0.3
            step_threshold 0.0
            first_step_threshold 0.00002
            clock_servo pi
            sanity_freq_limit 200000000
            ntpshm_segment 0
            #
            # Transport options
            #
            transportSpecific 0x0
            ptp_dst_mac 01:1B:19:00:00:00
            p2p_dst_mac 01:80:C2:00:00:0E
            udp_ttl 1
            udp6_scope 0x0E
            uds_address /var/run/ptp4l
            #
            # Default interface options
            #
            clock_type BC
            network_transport L2
            delay_mechanism E2E
            time_stamping hardware
            tsproc_mode filter
            delay_filter moving_median
            delay_filter_length 10
            egressLatency 0
            ingressLatency 0
            boundary_clock_jbod 0
            #
            # Clock description
            #
            productDescription ;;
            revisionData ;;
            manufacturerIdentity 00:00:00
            userDescription ;
            timeSource 0x20
        recommend:
        - profile: "grandmaster"
          priority: 4
          match:
          - nodeLabel: "node-role.kubernetes.io/worker"
    2. Create the CR by running the following command:

      $ oc create -f grandmaster-clock-ptp-config.yaml

Verification

  1. Check that the PtpConfig profile is applied to the node.

    1. Get the list of pods in the openshift-ptp namespace by running the following command:

      $ oc get pods -n openshift-ptp -o wide

      Example output

      NAME                           READY   STATUS    RESTARTS   AGE     IP             NODE
      linuxptp-daemon-74m2g          3/3     Running   3          4d15h   10.16.230.7    compute-1.example.com
      ptp-operator-5f4f48d7c-x7zkf   1/1     Running   1          4d15h   10.128.1.145   compute-1.example.com
    2. Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:

      $ oc logs linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container

      Example output

      ts2phc[94980.334]: [ts2phc.0.config] nmea delay: 98690975 ns
      ts2phc[94980.334]: [ts2phc.0.config] ens3f0 extts index 0 at 1676577329.999999999 corr 0 src 1676577330.901342528 diff -1
      ts2phc[94980.334]: [ts2phc.0.config] ens3f0 master offset -1 s2 freq -1
      ts2phc[94980.441]: [ts2phc.0.config] nmea sentence: GNRMC,195453.00,A,4233.24427,N,07126.64420,W,0.008,,160223,,,A,V
      phc2sys[94980.450]: [ptp4l.0.config] CLOCK_REALTIME phc offset 943 s2 freq -89604 delay 504
      phc2sys[94980.512]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1000 s2 freq -89264 delay 474


Grandmaster clock PtpConfig configuration reference

The following reference information describes the configuration options for the PtpConfig custom resource (CR) that configures the linuxptp services (ptp4l, phc2sys, ts2phc) as grandmaster clock.

Table 1. PtpConfig configuration options for PTP grandmaster clock
PtpConfig CR field | Description

plugins

Specify an array of .exec.cmdline options that configure the NIC for grandmaster clock operation. Grandmaster clock configuration requires certain PTP pins to be disabled.

The plugin mechanism allows the PTP Operator to do automated hardware configuration. For the Intel Westport Channel NIC, when enableDefaultConfig is true, the PTP Operator runs a hard-coded script to do the required configuration for the NIC.

ptp4lOpts

Specify system configuration options for the ptp4l service. The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended.

ptp4lConf

Specify the required configuration to start ptp4l as grandmaster clock. For example, the ens2f1 interface synchronizes downstream connected devices. For grandmaster clocks, set clockClass to 6 and set clockAccuracy to 0x27. Set timeSource to 0x20 when receiving the timing signal from a Global Navigation Satellite System (GNSS).

tx_timestamp_timeout

Specify the maximum amount of time to wait for the transmit (TX) timestamp from the sender before discarding the data.

boundary_clock_jbod

Specify the JBOD boundary clock time delay value. This value is used to correct the time values that are passed between the network time devices.

phc2sysOpts

Specify system config options for the phc2sys service. If this field is empty the PTP Operator does not start the phc2sys service.

Ensure that the network interface listed here is configured as grandmaster and is referenced as required in the ts2phcConf and ptp4lConf fields.

ptpSchedulingPolicy

Configure the scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER. Use SCHED_FIFO on systems that support FIFO scheduling.

ptpSchedulingPriority

Set an integer value from 1-65 to configure FIFO priority for ptp4l and phc2sys processes when ptpSchedulingPolicy is set to SCHED_FIFO. The ptpSchedulingPriority field is not used when ptpSchedulingPolicy is set to SCHED_OTHER.

ptpClockThreshold

Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. ptpClockThreshold configures how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME (phc2sys) or master offset (ptp4l). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN. When the offset value is within this range, the PTP clock state is set to LOCKED.
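The defaults correspond to the following stanza; the values shown are the documented PTP Operator defaults, but confirm them against your Operator version:

```yaml
ptpClockThreshold:
  holdOverTimeout: 5         # seconds before the event state changes to FREERUN
  maxOffsetThreshold: 100    # nanoseconds
  minOffsetThreshold: -100   # nanoseconds
```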

ts2phcConf

Sets the configuration for the ts2phc command.

leapfile is the default path to the current leap seconds definition file in the PTP Operator container image.

ts2phc.nmea_serialport is the serial port device that is connected to the NMEA GPS clock source. When configured, the GNSS receiver is accessible on /dev/gnss<id>. If the host has multiple GNSS receivers, you can find the correct device by enumerating either of the following devices:

  • /sys/class/net/<eth_port>/device/gnss/

  • /sys/class/gnss/gnss<id>/device/
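For example, with an illustrative interface name:

```shell
$ ls /sys/class/net/ens2f1/device/gnss/   # GNSS receiver behind a specific port
$ ls /sys/class/gnss/                     # all GNSS receivers on the host
```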

ts2phcOpts

Set options for the ts2phc command.

recommend

Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes.

.recommend.profile

Specify the .recommend.profile object name that is defined in the profile section.

.recommend.priority

Specify the priority with an integer value between 0 and 99. A larger number gets lower priority, so a priority of 99 is lower than a priority of 10. If a node can be matched with multiple profiles according to rules defined in the match field, the profile with the higher priority is applied to that node.

.recommend.match

Specify .recommend.match rules with nodeLabel or nodeName.

.recommend.match.nodeLabel

Set nodeLabel with the key of node.Labels from the node object by using the oc get nodes --show-labels command. For example: node-role.kubernetes.io/worker.

.recommend.match.nodeName

Set nodeName with value of node.Name from the node object by using the oc get nodes command. For example: compute-1.example.com.

Configuring linuxptp services as an ordinary clock

You can configure linuxptp services (ptp4l, phc2sys) as ordinary clock by creating a PtpConfig custom resource (CR) object.

Use the following example PtpConfig CR as the basis to configure linuxptp services as an ordinary clock for your particular hardware and environment. This example CR does not configure PTP fast events. To configure PTP fast events, set appropriate values for ptp4lOpts, ptp4lConf, and ptpClockThreshold. ptpClockThreshold is required only when events are enabled. See “Configuring the PTP fast event notifications publisher” for more information.

Prerequisites

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

  • Install the PTP Operator.

Procedure

  1. Create the following PtpConfig CR, and then save the YAML in the ordinary-clock-ptp-config.yaml file.

    Recommended PTP ordinary clock configuration

    apiVersion: ptp.openshift.io/v1
    kind: PtpConfig
    metadata:
      name: ordinary-clock-ptp-config
      namespace: openshift-ptp
    spec:
      profile:
      - name: ordinary-clock
        interface: "<interface_name>"
        phc2sysOpts: "-a -r -n 24"
        ptp4lOpts: "-2 -s"
        ptpSchedulingPolicy: SCHED_FIFO
        ptpSchedulingPriority: 10
        ptp4lConf: |
          [global]
          #
          # Default Data Set
          #
          twoStepFlag 1
          slaveOnly 1
          priority1 128
          priority2 128
          domainNumber 24
          clockClass 255
          clockAccuracy 0xFE
          offsetScaledLogVariance 0xFFFF
          free_running 0
          freq_est_interval 1
          dscp_event 0
          dscp_general 0
          dataset_comparison G.8275.x
          G.8275.defaultDS.localPriority 128
          #
          # Port Data Set
          #
          logAnnounceInterval -3
          logSyncInterval -4
          logMinDelayReqInterval -4
          logMinPdelayReqInterval -4
          announceReceiptTimeout 3
          syncReceiptTimeout 0
          delayAsymmetry 0
          fault_reset_interval 4
          neighborPropDelayThresh 20000000
          masterOnly 0
          G.8275.portDS.localPriority 128
          #
          # Run time options
          #
          assume_two_step 0
          logging_level 6
          path_trace_enabled 0
          follow_up_info 0
          hybrid_e2e 0
          inhibit_multicast_service 0
          net_sync_monitor 0
          tc_spanning_tree 0
          tx_timestamp_timeout 50
          unicast_listen 0
          unicast_master_table 0
          unicast_req_duration 3600
          use_syslog 1
          verbose 0
          summary_interval 0
          kernel_leap 1
          check_fup_sync 0
          #
          # Servo Options
          #
          pi_proportional_const 0.0
          pi_integral_const 0.0
          pi_proportional_scale 0.0
          pi_proportional_exponent -0.3
          pi_proportional_norm_max 0.7
          pi_integral_scale 0.0
          pi_integral_exponent 0.4
          pi_integral_norm_max 0.3
          step_threshold 2.0
          first_step_threshold 0.00002
          max_frequency 900000000
          clock_servo pi
          sanity_freq_limit 200000000
          ntpshm_segment 0
          #
          # Transport options
          #
          transportSpecific 0x0
          ptp_dst_mac 01:1B:19:00:00:00
          p2p_dst_mac 01:80:C2:00:00:0E
          udp_ttl 1
          udp6_scope 0x0E
          uds_address /var/run/ptp4l
          #
          # Default interface options
          #
          clock_type OC
          network_transport L2
          delay_mechanism E2E
          time_stamping hardware
          tsproc_mode filter
          delay_filter moving_median
          delay_filter_length 10
          egressLatency 0
          ingressLatency 0
          boundary_clock_jbod 0
          #
          # Clock description
          #
          productDescription ;;
          revisionData ;;
          manufacturerIdentity 00:00:00
          userDescription ;
          timeSource 0xA0
      recommend:
      - profile: ordinary-clock
        priority: 4
        match:
        - nodeLabel: "node-role.kubernetes.io/worker"
          nodeName: "<node_name>"
    Table 2. PTP ordinary clock CR configuration options
    Custom resource field | Description

    name

    The name of the PtpConfig CR.

    profile

    Specify an array of one or more profile objects. Each profile must be uniquely named.

    interface

    Specify the network interface to be used by the ptp4l service, for example ens787f1.

    ptp4lOpts

    Specify system config options for the ptp4l service, for example -2 to select the IEEE 802.3 network transport. The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended. Append --summary_interval -4 to use PTP fast events with this interface.

    phc2sysOpts

    Specify system config options for the phc2sys service. If this field is empty, the PTP Operator does not start the phc2sys service. For Intel Columbiaville 800 Series NICs, set phc2sysOpts options to -a -r -m -n 24 -N 8 -R 16. -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics.

    ptp4lConf

    Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty.

    tx_timestamp_timeout

    For Intel Columbiaville 800 Series NICs, set tx_timestamp_timeout to 50.

    boundary_clock_jbod

    For Intel Columbiaville 800 Series NICs, set boundary_clock_jbod to 0.

    ptpSchedulingPolicy

    Scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER. Use SCHED_FIFO on systems that support FIFO scheduling.

    ptpSchedulingPriority

    Integer value from 1-65 used to set FIFO priority for ptp4l and phc2sys processes when ptpSchedulingPolicy is set to SCHED_FIFO. The ptpSchedulingPriority field is not used when ptpSchedulingPolicy is set to SCHED_OTHER.

    ptpClockThreshold

    Optional. If ptpClockThreshold is not present, default values are used for the ptpClockThreshold fields. ptpClockThreshold configures how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME (phc2sys) or master offset (ptp4l). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN. When the offset value is within this range, the PTP clock state is set to LOCKED.

    recommend

    Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes.

    .recommend.profile

    Specify the .recommend.profile object name defined in the profile section.

    .recommend.priority

    Set .recommend.priority to 0 for ordinary clock.

    .recommend.match

    Specify .recommend.match rules with nodeLabel or nodeName.

    .recommend.match.nodeLabel

    Update nodeLabel with the key of node.Labels from the node object by using the oc get nodes --show-labels command. For example: node-role.kubernetes.io/worker.

    .recommend.match.nodeName

    Update nodeName with the value of node.Name from the node object by using the oc get nodes command. For example: compute-0.example.com.

  2. Create the PtpConfig CR by running the following command:

    $ oc create -f ordinary-clock-ptp-config.yaml

Verification

  1. Check that the PtpConfig profile is applied to the node.

    1. Get the list of pods in the openshift-ptp namespace by running the following command:

      $ oc get pods -n openshift-ptp -o wide

      Example output

      NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE
      linuxptp-daemon-4xkbb           1/1     Running   0          43m   10.1.196.24   compute-0.example.com
      linuxptp-daemon-tdspf           1/1     Running   0          43m   10.1.196.25   compute-1.example.com
      ptp-operator-657bbb64c8-2f8sj   1/1     Running   0          43m   10.129.0.61   control-plane-1.example.com
    2. Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:

      $ oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container

      Example output

      I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile
      I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to:
      I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------
      I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1
      I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1
      I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 -s
      I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24
      I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------


Configuring linuxptp services as a boundary clock

You can configure the linuxptp services (ptp4l, phc2sys) as boundary clock by creating a PtpConfig custom resource (CR) object.

Use the following example PtpConfig CR as the basis to configure linuxptp services as the boundary clock for your particular hardware and environment. This example CR does not configure PTP fast events. To configure PTP fast events, set appropriate values for ptp4lOpts, ptp4lConf, and ptpClockThreshold. ptpClockThreshold is used only when events are enabled. See “Configuring the PTP fast event notifications publisher” for more information.

Prerequisites

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

  • Install the PTP Operator.

Procedure

  1. Create the following PtpConfig CR, and then save the YAML in the boundary-clock-ptp-config.yaml file.

    Recommended PTP boundary clock configuration

---
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: boundary-clock-ptp-config
  namespace: openshift-ptp
spec:
  profile:
  - name: boundary-clock
    phc2sysOpts: "-a -r -n 24"
    ptp4lOpts: "-2"
    ptpSchedulingPolicy: SCHED_FIFO
    ptpSchedulingPriority: 10
    ptp4lConf: |
      [<interface_1>]
      masterOnly 0
      [<interface_2>]
      masterOnly 1
      [<interface_3>]
      masterOnly 1
      [<interface_4>]
      masterOnly 1
      [global]
      #
      # Default Data Set
      #
      twoStepFlag 1
      slaveOnly 0
      priority1 128
      priority2 128
      domainNumber 24
      clockClass 248
      clockAccuracy 0xFE
      offsetScaledLogVariance 0xFFFF
      free_running 0
      freq_est_interval 1
      dscp_event 0
      dscp_general 0
      dataset_comparison G.8275.x
      G.8275.defaultDS.localPriority 128
      #
      # Port Data Set
      #
      logAnnounceInterval -3
      logSyncInterval -4
      logMinDelayReqInterval -4
      logMinPdelayReqInterval -4
      announceReceiptTimeout 3
      syncReceiptTimeout 0
      delayAsymmetry 0
      fault_reset_interval 4
      neighborPropDelayThresh 20000000
      masterOnly 0
      G.8275.portDS.localPriority 128
      #
      # Run time options
      #
      assume_two_step 0
      logging_level 6
      path_trace_enabled 0
      follow_up_info 0
      hybrid_e2e 0
      inhibit_multicast_service 0
      net_sync_monitor 0
      tc_spanning_tree 0
      tx_timestamp_timeout 50
      unicast_listen 0
      unicast_master_table 0
      unicast_req_duration 3600
      use_syslog 1
      verbose 0
      summary_interval 0
      kernel_leap 1
      check_fup_sync 0
      #
      # Servo Options
      #
      pi_proportional_const 0.0
      pi_integral_const 0.0
      pi_proportional_scale 0.0
      pi_proportional_exponent -0.3
      pi_proportional_norm_max 0.7
      pi_integral_scale 0.0
      pi_integral_exponent 0.4
      pi_integral_norm_max 0.3
      step_threshold 2.0
      first_step_threshold 0.00002
      max_frequency 900000000
      clock_servo pi
      sanity_freq_limit 200000000
      ntpshm_segment 0
      #
      # Transport options
      #
      transportSpecific 0x0
      ptp_dst_mac 01:1B:19:00:00:00
      p2p_dst_mac 01:80:C2:00:00:0E
      udp_ttl 1
      udp6_scope 0x0E
      uds_address /var/run/ptp4l
      #
      # Default interface options
      #
      clock_type BC
      network_transport L2
      delay_mechanism E2E
      time_stamping hardware
      tsproc_mode filter
      delay_filter moving_median
      delay_filter_length 10
      egressLatency 0
      ingressLatency 0
      boundary_clock_jbod 0
      #
      # Clock description
      #
      productDescription ;;
      revisionData ;;
      manufacturerIdentity 00:00:00
      userDescription ;
      timeSource 0xA0
  recommend:
  - profile: boundary-clock
    priority: 4
    match:
    - nodeLabel: node-role.kubernetes.io/master
      nodeName: <nodename>
    Table 3. PTP boundary clock CR configuration options
Custom resource field | Description

    name

    The name of the PtpConfig CR.

    profile

    Specify an array of one or more profile objects.

    name

    Specify the name of a profile object which uniquely identifies a profile object.

    ptp4lOpts

    Specify system config options for the ptp4l service. The options should not include the network interface name -i <interface> and service config file -f /etc/ptp4l.conf because the network interface name and the service config file are automatically appended.

    ptp4lConf

Specify the required configuration to start ptp4l as a boundary clock. For example, ens1f0 synchronizes from a grandmaster clock and ens1f3 synchronizes connected devices.

    <interface_1>

    The interface that receives the synchronization clock.

    <interface_2>

    The interface that sends the synchronization clock.

    tx_timestamp_timeout

    For Intel Columbiaville 800 Series NICs, set tx_timestamp_timeout to 50.

    boundary_clock_jbod

    For Intel Columbiaville 800 Series NICs, ensure boundary_clock_jbod is set to 0. For Intel Fortville X710 Series NICs, ensure boundary_clock_jbod is set to 1.

    phc2sysOpts

    Specify system config options for the phc2sys service. If this field is empty, the PTP Operator does not start the phc2sys service.

    ptpSchedulingPolicy

    Scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER. Use SCHED_FIFO on systems that support FIFO scheduling.

    ptpSchedulingPriority

    Integer value from 1-65 used to set FIFO priority for ptp4l and phc2sys processes when ptpSchedulingPolicy is set to SCHED_FIFO. The ptpSchedulingPriority field is not used when ptpSchedulingPolicy is set to SCHED_OTHER.

    ptpClockThreshold

    Optional. If ptpClockThreshold is not present, default values are used for the ptpClockThreshold fields. ptpClockThreshold configures how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME (phc2sys) or master offset (ptp4l). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN. When the offset value is within this range, the PTP clock state is set to LOCKED.

    recommend

    Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes.

    .recommend.profile

    Specify the .recommend.profile object name defined in the profile section.

    .recommend.priority

    Specify the priority with an integer value between 0 and 99. A larger number gets lower priority, so a priority of 99 is lower than a priority of 10. If a node can be matched with multiple profiles according to rules defined in the match field, the profile with the higher priority is applied to that node.

    .recommend.match

    Specify .recommend.match rules with nodeLabel or nodeName.

    .recommend.match.nodeLabel

Update nodeLabel with the key of node.Labels from the node object by using the oc get nodes --show-labels command. For example: node-role.kubernetes.io/worker.

.recommend.match.nodeName

    Update nodeName with the value of node.Name from the node object by using the oc get nodes command. For example: compute-0.example.com.

  2. Create the CR by running the following command:

$ oc create -f boundary-clock-ptp-config.yaml
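If you later enable PTP fast events for this boundary clock, you add the optional ptpClockThreshold stanza to the profile entry, as noted at the start of this section. The following minimal sketch shows where the stanza fits, using the documented default threshold values; the interface configuration is omitted and the fast-event options shown for ptp4lOpts and phc2sysOpts are illustrative:

```yaml
spec:
  profile:
  - name: boundary-clock
    ptp4lOpts: "-2 --summary_interval -4"      # --summary_interval -4 is required for fast events
    phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16"   # -m prints messages to stdout for metrics
    ptpClockThreshold:                         # optional; defaults shown
      holdOverTimeout: 5                       # seconds before the clock state changes to FREERUN
      maxOffsetThreshold: 100                  # nanoseconds
      minOffsetThreshold: -100                 # nanoseconds
```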

Verification

  1. Check that the PtpConfig profile is applied to the node.

    1. Get the list of pods in the openshift-ptp namespace by running the following command:

$ oc get pods -n openshift-ptp -o wide

      Example output

NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE
linuxptp-daemon-4xkbb           1/1     Running   0          43m   10.1.196.24   compute-0.example.com
linuxptp-daemon-tdspf           1/1     Running   0          43m   10.1.196.25   compute-1.example.com
ptp-operator-657bbb64c8-2f8sj   1/1     Running   0          43m   10.129.0.61   control-plane-1.example.com
    2. Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:

$ oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container

      Example output

I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile
I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to:
I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------
I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1
I1115 09:41:17.117616 4143292 daemon.go:102] Interface:
I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2
I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24
I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------


Configuring linuxptp services as boundary clocks for dual NIC hardware

Precision Time Protocol (PTP) hardware with dual NIC configured as boundary clocks is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

You can configure the linuxptp services (ptp4l, phc2sys) as boundary clocks for dual NIC hardware by creating a PtpConfig custom resource (CR) object for each NIC.

Dual NIC hardware allows you to connect each NIC to the same upstream leader clock with separate ptp4l instances for each NIC feeding the downstream clocks.

Prerequisites

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

  • Install the PTP Operator.

Procedure

  1. Create two separate PtpConfig CRs, one for each NIC, using the reference CR in “Configuring linuxptp services as a boundary clock” as the basis for each CR. For example:

    1. Create boundary-clock-ptp-config-nic1.yaml, specifying values for phc2sysOpts:

apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: boundary-clock-ptp-config-nic1
  namespace: openshift-ptp
spec:
  profile:
  - name: "profile1"
    ptp4lOpts: "-2 --summary_interval -4"
    ptp4lConf: | (1)
      [ens5f1]
      masterOnly 1
      [ens5f0]
      masterOnly 0
      ...
    phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" (2)
(1) Specify the required interfaces to start ptp4l as a boundary clock. For example, ens5f0 synchronizes from a grandmaster clock and ens5f1 synchronizes connected devices.
(2) Required phc2sysOpts values. -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics.
    2. Create boundary-clock-ptp-config-nic2.yaml, removing the phc2sysOpts field altogether to disable the phc2sys service for the second NIC:

apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: boundary-clock-ptp-config-nic2
  namespace: openshift-ptp
spec:
  profile:
  - name: "profile2"
    ptp4lOpts: "-2 --summary_interval -4"
    ptp4lConf: | (1)
      [ens7f1]
      masterOnly 1
      [ens7f0]
      masterOnly 0
      ...
(1) Specify the required interfaces to start ptp4l as a boundary clock on the second NIC.

      You must completely remove the phc2sysOpts field from the second PtpConfig CR to disable the phc2sys service on the second NIC.

  2. Create the dual NIC PtpConfig CRs by running the following commands:

    1. Create the CR that configures PTP for the first NIC:

$ oc create -f boundary-clock-ptp-config-nic1.yaml
    2. Create the CR that configures PTP for the second NIC:

$ oc create -f boundary-clock-ptp-config-nic2.yaml

Verification

  • Check that the PTP Operator has applied the PtpConfig CRs for both NICs. Examine the logs for the linuxptp daemon corresponding to the node that has the dual NIC hardware installed. For example, run the following command:

$ oc logs linuxptp-daemon-cvgr6 -n openshift-ptp -c linuxptp-daemon-container

    Example output

ptp4l[80828.335]: [ptp4l.1.config] master offset          5 s2 freq   -5727 path delay       519
ptp4l[80828.343]: [ptp4l.0.config] master offset         -5 s2 freq  -10607 path delay       533
phc2sys[80828.390]: [ptp4l.0.config] CLOCK_REALTIME phc offset         1 s2 freq  -87239 delay   539

Intel Columbiaville E800 series NIC as PTP ordinary clock reference

The following table describes the changes that you must make to the reference PTP configuration in order to use Intel Columbiaville E800 series NICs as ordinary clocks. Make the changes in a PtpConfig custom resource (CR) that you apply to the cluster.

Table 4. Recommended PTP settings for Intel Columbiaville NIC
PTP configuration | Recommended setting

phc2sysOpts

-a -r -m -n 24 -N 8 -R 16

tx_timestamp_timeout

50

boundary_clock_jbod

0

For phc2sysOpts, -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics.
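These settings land in different parts of the PtpConfig CR: phc2sysOpts is a profile field, while tx_timestamp_timeout and boundary_clock_jbod belong in the [global] section of ptp4lConf. The following minimal sketch shows the placement; the profile name is a placeholder and the rest of the configuration is omitted:

```yaml
spec:
  profile:
  - name: "profile1"                           # placeholder profile name
    phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16"   # recommended phc2sys options
    ptp4lConf: |
      [global]
      tx_timestamp_timeout 50                  # recommended for E800 series NICs
      boundary_clock_jbod 0                    # 0 for E800 series NICs
```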


Configuring FIFO priority scheduling for PTP hardware

In telco or other deployment configurations that require low latency performance, PTP daemon threads run in a constrained CPU footprint alongside the rest of the infrastructure components. By default, PTP threads run with the SCHED_OTHER policy. Under high load, these threads might not get the scheduling latency they require for error-free operation.

To mitigate against potential scheduling latency errors, you can configure the PTP Operator linuxptp services to allow threads to run with a SCHED_FIFO policy. If SCHED_FIFO is set for a PtpConfig CR, then ptp4l and phc2sys will run in the parent container under chrt with a priority set by the ptpSchedulingPriority field of the PtpConfig CR.

Setting ptpSchedulingPolicy is optional, and is only required if you are experiencing latency errors.

Procedure

  1. Edit the PtpConfig CR profile:

$ oc edit PtpConfig -n openshift-ptp
  2. Change the ptpSchedulingPolicy and ptpSchedulingPriority fields:

apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: <ptp_config_name>
  namespace: openshift-ptp
...
spec:
  profile:
  - name: "profile1"
    ...
    ptpSchedulingPolicy: SCHED_FIFO (1)
    ptpSchedulingPriority: 10 (2)
(1) Scheduling policy for ptp4l and phc2sys processes. Use SCHED_FIFO on systems that support FIFO scheduling.
(2) Required. Sets the integer value 1-65 used to configure FIFO priority for ptp4l and phc2sys processes.
  3. Save and exit to apply the changes to the PtpConfig CR.

Verification

  1. Get the name of the linuxptp-daemon pod and corresponding node where the PtpConfig CR has been applied:

$ oc get pods -n openshift-ptp -o wide

    Example output

NAME                            READY   STATUS    RESTARTS   AGE     IP            NODE
linuxptp-daemon-gmv2n           3/3     Running   0          1d17h   10.1.196.24   compute-0.example.com
linuxptp-daemon-lgm55           3/3     Running   0          1d17h   10.1.196.25   compute-1.example.com
ptp-operator-3r4dcvf7f4-zndk7   1/1     Running   0          1d7h    10.129.0.61   control-plane-1.example.com
  2. Check that the ptp4l process is running with the updated chrt FIFO priority:

$ oc -n openshift-ptp logs linuxptp-daemon-lgm55 -c linuxptp-daemon-container | grep chrt

    Example output

I1216 19:24:57.091872 1600715 daemon.go:285] /bin/chrt -f 65 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -2 --summary_interval -4 -m

Configuring log filtering for linuxptp services

The linuxptp daemon generates logs that you can use for debugging purposes. In telco or other deployment configurations that feature a limited storage capacity, these logs can add to the storage demand.

To reduce the number of log messages, you can configure the PtpConfig custom resource (CR) to exclude log messages that report the master offset value. The master offset log message reports the difference between the current node’s clock and the master clock in nanoseconds.

Prerequisites

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

  • Install the PTP Operator.

Procedure

  1. Edit the PtpConfig CR:

$ oc edit PtpConfig -n openshift-ptp
  2. In spec.profile, add the ptpSettings.logReduce specification and set the value to true:

apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: <ptp_config_name>
  namespace: openshift-ptp
...
spec:
  profile:
  - name: "profile1"
    ...
    ptpSettings:
      logReduce: "true"

For debugging purposes, you can set logReduce back to "false" to include the master offset messages.

  3. Save and exit to apply the changes to the PtpConfig CR.

Verification

  1. Get the name of the linuxptp-daemon pod and corresponding node where the PtpConfig CR has been applied:

$ oc get pods -n openshift-ptp -o wide

    Example output

NAME                            READY   STATUS    RESTARTS   AGE     IP            NODE
linuxptp-daemon-gmv2n           3/3     Running   0          1d17h   10.1.196.24   compute-0.example.com
linuxptp-daemon-lgm55           3/3     Running   0          1d17h   10.1.196.25   compute-1.example.com
ptp-operator-3r4dcvf7f4-zndk7   1/1     Running   0          1d7h    10.129.0.61   control-plane-1.example.com
  2. Verify that master offset messages are excluded from the logs by running the following command:

$ oc -n openshift-ptp logs <linux_daemon_container> -c linuxptp-daemon-container | grep "master offset" (1)
(1) <linux_daemon_container> is the name of the linuxptp-daemon pod, for example linuxptp-daemon-gmv2n.

    When you configure the logReduce specification, this command does not report any instances of master offset in the logs of the linuxptp daemon.

Troubleshooting common PTP Operator issues

Troubleshoot common problems with the PTP Operator by performing the following steps.

Prerequisites

  • Install the OKD CLI (oc).

  • Log in as a user with cluster-admin privileges.

  • Install the PTP Operator on a bare-metal cluster with hosts that support PTP.

Procedure

  1. Check the Operator and operands are successfully deployed in the cluster for the configured nodes.

$ oc get pods -n openshift-ptp -o wide

    Example output

NAME                            READY   STATUS    RESTARTS   AGE     IP            NODE
linuxptp-daemon-lmvgn           3/3     Running   0          4d17h   10.1.196.24   compute-0.example.com
linuxptp-daemon-qhfg7           3/3     Running   0          4d17h   10.1.196.25   compute-1.example.com
ptp-operator-6b8dcbf7f4-zndk7   1/1     Running   0          5d7h    10.129.0.61   control-plane-1.example.com

    When the PTP fast event bus is enabled, the number of ready linuxptp-daemon pods is 3/3. If the PTP fast event bus is not enabled, 2/2 is displayed.

  2. Check that supported hardware is found in the cluster.

$ oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io

    Example output

NAME                          AGE
control-plane-0.example.com   10d
control-plane-1.example.com   10d
compute-0.example.com         10d
compute-1.example.com         10d
compute-2.example.com         10d
  3. Check the available PTP network interfaces for a node:

$ oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yaml

    where:

    <node_name>

    Specifies the node you want to query, for example, compute-0.example.com.

    Example output

apiVersion: ptp.openshift.io/v1
kind: NodePtpDevice
metadata:
  creationTimestamp: "2021-09-14T16:52:33Z"
  generation: 1
  name: compute-0.example.com
  namespace: openshift-ptp
  resourceVersion: "177400"
  uid: 30413db0-4d8d-46da-9bef-737bacd548fd
spec: {}
status:
  devices:
  - name: eno1
  - name: eno2
  - name: eno3
  - name: eno4
  - name: enp5s0f0
  - name: enp5s0f1
  4. Check that the PTP interface is successfully synchronized to the primary clock by accessing the linuxptp-daemon pod for the corresponding node.

    1. Get the name of the linuxptp-daemon pod and corresponding node you want to troubleshoot by running the following command:

$ oc get pods -n openshift-ptp -o wide

      Example output

NAME                            READY   STATUS    RESTARTS   AGE     IP            NODE
linuxptp-daemon-lmvgn           3/3     Running   0          4d17h   10.1.196.24   compute-0.example.com
linuxptp-daemon-qhfg7           3/3     Running   0          4d17h   10.1.196.25   compute-1.example.com
ptp-operator-6b8dcbf7f4-zndk7   1/1     Running   0          5d7h    10.129.0.61   control-plane-1.example.com
    2. Remote shell into the required linuxptp-daemon container:

$ oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container>

      where:

      <linux_daemon_container>

      is the container you want to diagnose, for example linuxptp-daemon-lmvgn.

    3. In the remote shell connection to the linuxptp-daemon container, use the PTP Management Client (pmc) tool to diagnose the network interface. Run the following pmc command to check the sync status of the PTP device, for example ptp4l.

# pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'

      Example output when the node is successfully synced to the primary clock

sending: GET PORT_DATA_SET
40a6b7.fffe.166ef0-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET
    portIdentity            40a6b7.fffe.166ef0-1
    portState               SLAVE
    logMinDelayReqInterval  -4
    peerMeanPathDelay       0
    logAnnounceInterval     -3
    announceReceiptTimeout  3
    logSyncInterval         -4
    delayMechanism          1
    logMinPdelayReqInterval -4
    versionNumber           2

PTP hardware fast event notifications framework

Cloud native applications such as virtual RAN (vRAN) require access to notifications about hardware timing events that are critical to the functioning of the overall network. PTP clock synchronization errors can negatively affect the performance and reliability of your low-latency application, for example, a vRAN application running in a distributed unit (DU).

About PTP and clock synchronization error events

Loss of PTP synchronization is a critical error for a RAN network. If synchronization is lost on a node, the radio might be shut down and the network Over the Air (OTA) traffic might be shifted to another node in the wireless network. Fast event notifications mitigate against workload errors by allowing cluster nodes to communicate PTP clock sync status to the vRAN application running in the DU.

Event notifications are available to vRAN applications running on the same DU node. A publish-subscribe REST API passes event notifications to the messaging bus. Publish-subscribe messaging, or pub-sub messaging, is an asynchronous service-to-service communication architecture where any message published to a topic is immediately received by all of the subscribers to the topic.

The PTP Operator generates fast event notifications for every PTP-capable network interface. You can access the events by using a cloud-event-proxy sidecar container over an HTTP or Advanced Message Queuing Protocol (AMQP) message bus.

PTP fast event notifications are available for network interfaces configured to use PTP ordinary clocks or PTP boundary clocks.

Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information, see Red Hat AMQ Interconnect support status.

About the PTP fast event notifications framework

Use the Precision Time Protocol (PTP) fast event notifications framework to subscribe cluster applications to PTP events that the bare-metal cluster node generates.

The fast event notifications framework uses a REST API for communication. The REST API is based on the O-RAN O-Cloud Notification API Specification for Event Consumers 3.0 that is available from O-RAN ALLIANCE Specifications.

The framework consists of a publisher, subscriber, and an AMQ or HTTP messaging protocol to handle communications between the publisher and subscriber applications. Applications run the cloud-event-proxy container in a sidecar pattern to subscribe to PTP events. The cloud-event-proxy sidecar container can access the same resources as the primary application container without using any of the resources of the primary application and with no significant latency.
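In the sidecar pattern described above, the application pod simply declares a second container next to the main application container. The following schematic sketch illustrates the shape of such a pod; all names, images, and the argument shown are placeholders, not a supported deployment manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vran-du-app                   # hypothetical application pod
spec:
  containers:
  - name: event-consumer              # the primary application container
    image: vran-du-app:latest         # placeholder image
  - name: cloud-event-proxy           # sidecar that subscribes to PTP events
    image: cloud-event-proxy:latest   # placeholder image
    args:
    - "--api-port=8089"               # REST API port the consumer application queries
```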

Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information, see Red Hat AMQ Interconnect support status.

Overview of PTP fast events

Figure 1. Overview of PTP fast events

Event is generated on the cluster host

linuxptp-daemon in the PTP Operator-managed pod runs as a Kubernetes DaemonSet and manages the various linuxptp processes (ptp4l, phc2sys, and optionally for grandmaster clocks, ts2phc). The linuxptp-daemon passes the event to the UNIX domain socket.

Event is passed to the cloud-event-proxy sidecar

The PTP plugin reads the event from the UNIX domain socket and passes it to the cloud-event-proxy sidecar in the PTP Operator-managed pod. cloud-event-proxy delivers the event from the Kubernetes infrastructure to Cloud-Native Network Functions (CNFs) with low latency.

Event is persisted

The cloud-event-proxy sidecar in the PTP Operator-managed pod processes the event and publishes the cloud-native event by using a REST API.

When you use HTTP transport for events, you must persist the events subscription in the PTP Operator-managed pod by using a Persistent Volume (PV) resource or similar persistent storage mechanism.

Message is transported

The message transporter transports the event to the cloud-event-proxy sidecar in the application pod over HTTP or AMQP 1.0 QPID.

Event is available from the REST API

The cloud-event-proxy sidecar in the Application pod processes the event and makes it available by using the REST API.

Consumer application requests a subscription and receives the subscribed event

The consumer application sends an API request to the cloud-event-proxy sidecar in the application pod to create a PTP events subscription. The cloud-event-proxy sidecar creates an AMQ or HTTP messaging listener protocol for the resource specified in the subscription.

The cloud-event-proxy sidecar in the application pod receives the event from the PTP Operator-managed pod, unwraps the cloud events object to retrieve the data, and posts the event to the consumer application. The consumer application listens to the address specified in the resource qualifier and receives and processes the PTP event.
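As an illustration, a consumer application creates a subscription by posting a small JSON payload to the cloud-event-proxy REST API on the sidecar's API port. The payload below only sketches the general shape of such a request; the field names, resource address, and endpoint URI are assumptions based on the O-RAN events consumer API, so verify them against the API specification for your release:

```json
{
  "Resource": "/cluster/node/compute-0.example.com/ptp",
  "EndpointUri": "http://localhost:9089/event"
}
```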

Configuring the PTP fast event notifications publisher

To start using PTP fast event notifications for a network interface in your cluster, you must enable the fast event publisher in the PTP Operator PtpOperatorConfig custom resource (CR) and configure ptpClockThreshold values in a PtpConfig CR that you create.

Prerequisites

  • You have installed the OKD CLI (oc).

  • You have logged in as a user with cluster-admin privileges.

  • You have installed the PTP Operator.

  • When you use HTTP events transport, configure dynamic volume provisioning in the cluster or manually create StorageClass, LocalVolume, and PersistentVolume resources to persist the events subscription.

    When you enable dynamic volume provisioning in the cluster, a PersistentVolume resource is automatically created for the PersistentVolumeClaim that the PTP Operator deploys.

    For more information about manually creating persistent storage in the cluster, see “Persistent storage using local volumes”.

Procedure

  1. Modify the default PTP Operator config to enable PTP fast events.

    1. Save the following YAML in the ptp-operatorconfig.yaml file:

apiVersion: ptp.openshift.io/v1
kind: PtpOperatorConfig
metadata:
  name: default
  namespace: openshift-ptp
spec:
  daemonNodeSelector:
    node-role.kubernetes.io/worker: ""
  ptpEventConfig:
    enableEventPublisher: true (1)
    storageType: "example-storage-class" (2)
(1) Set enableEventPublisher to true to enable PTP fast event notifications.
(2) Use the value that you set for storageType to populate the StorageClassName field for the PersistentVolumeClaim (PVC) resource that the PTP Operator automatically deploys. The PVC resource is used to persist consumer event subscriptions.

      In OKD 4.13 or later, you do not need to set the spec.ptpEventConfig.transportHost field in the PtpOperatorConfig resource when you use HTTP transport for PTP events. Set transportHost only when you use AMQP transport for PTP events.

      The value that you set for .spec.storageType in the PtpOperatorConfig CR must match the storageClassName that is set in the PersistentVolume CR. If storageType is not set and the transportHost uses HTTP, the PTP daemons are not deployed.

    2. Update the PtpOperatorConfig CR:

$ oc apply -f ptp-operatorconfig.yaml
  2. Create a PtpConfig custom resource (CR) for the PTP enabled interface, and set the required values for ptpClockThreshold and ptp4lOpts. The following YAML illustrates the required values that you must set in the PtpConfig CR:

spec:
  profile:
  - name: "profile1"
    interface: "enp5s0f0"
    ptp4lOpts: "-2 -s --summary_interval -4" (1)
    phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" (2)
    ptp4lConf: "" (3)
    ptpClockThreshold: (4)
      holdOverTimeout: 5
      maxOffsetThreshold: 100
      minOffsetThreshold: -100
(1) Append --summary_interval -4 to use PTP fast events.
(2) Required phc2sysOpts values. -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics.
(3) Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty.
(4) Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME (phc2sys) or master offset (ptp4l). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN. When the offset value is within this range, the PTP clock state is set to LOCKED.
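When you use HTTP transport without dynamic volume provisioning, the manually created PersistentVolume must carry a storageClassName that matches the storageType value set in the PtpOperatorConfig CR. The following hypothetical local-volume sketch shows that relationship; the resource name, path, and node name are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ptp-events-pv                        # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: example-storage-class    # must match spec.ptpEventConfig.storageType
  local:
    path: /mnt/local-storage/ptp-events      # hypothetical path on the node
  nodeAffinity:                              # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - compute-0.example.com
```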


Migrating consumer applications to use HTTP transport for PTP or bare-metal events

If you have previously deployed PTP or bare-metal events consumer applications, you need to update the applications to use HTTP message transport.

Prerequisites

  • You have installed the OpenShift CLI (oc).

  • You have logged in as a user with cluster-admin privileges.

  • You have updated the PTP Operator or Bare Metal Event Relay to version 4.13+ which uses HTTP transport by default.

  • Configure dynamic volume provisioning in the cluster or manually create StorageClass, LocalVolume, and PersistentVolume resources to persist the events subscription.

    When dynamic volume provisioning is enabled, a PersistentVolume resource is automatically created for the PersistentVolumeClaim that the PTP Operator or Bare Metal Event Relay deploys.

Procedure

  1. Update your events consumer application to use HTTP transport. Set the http-event-publishers variable for the cloud event sidecar deployment.

    For example, in a cluster with PTP events configured, the following YAML snippet illustrates a cloud event sidecar deployment:

    containers:
      - name: cloud-event-sidecar
        image: cloud-event-sidecar
        args:
          - "--metrics-addr=127.0.0.1:9091"
          - "--store-path=/store"
          - "--transport-host=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043"
          - "--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043" (1)
          - "--api-port=8089"

    1. The PTP Operator automatically resolves NODE_NAME to the host that is generating the PTP events. For example, compute-1.example.com.

    In a cluster with bare-metal events configured, set the http-event-publishers field to hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043 in the cloud event sidecar deployment CR.

  2. Deploy the consumer-events-subscription-service service alongside the events consumer application. For example:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        service.alpha.openshift.io/serving-cert-secret-name: sidecar-consumer-secret
      name: consumer-events-subscription-service
      namespace: cloud-events
      labels:
        app: consumer-service
    spec:
      ports:
        - name: sub-port
          port: 9043
      selector:
        app: consumer
      clusterIP: None
      sessionAffinity: None
      type: ClusterIP
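
The NODE_NAME substitution that the PTP Operator performs in step 1 can be illustrated with a small helper. The function is hypothetical and shown only to make the address shape concrete; in a real deployment the Operator resolves the placeholder for you:

```python
def ptp_publisher_address(node_name: str, port: int = 9043) -> str:
    """Build the per-node PTP event publisher address used in the
    --http-event-publishers argument of the cloud event sidecar."""
    # NODE_NAME in the deployment template is replaced with the node name.
    return (f"ptp-event-publisher-service-{node_name}"
            f".openshift-ptp.svc.cluster.local:{port}")
```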

Installing the AMQ messaging bus

To pass PTP fast event notifications between publisher and subscriber on a node, you can install and configure an AMQ messaging bus to run locally on the node. To use AMQ messaging, you must install the AMQ Interconnect Operator.

Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information, see Red Hat AMQ Interconnect support status.

Prerequisites

  • Install the OKD CLI (oc).

  • Log in as a user with cluster-admin privileges.

Procedure

Verification

  1. Check that the AMQ Interconnect Operator is available and the required pods are running:

    $ oc get pods -n amq-interconnect

    Example output

    NAME                                    READY   STATUS    RESTARTS   AGE
    amq-interconnect-645db76c76-k8ghs       1/1     Running   0          23h
    interconnect-operator-5cb5fc7cc-4v7qm   1/1     Running   0          23h
  2. Check that the required linuxptp-daemon PTP event producer pods are running in the openshift-ptp namespace.

    $ oc get pods -n openshift-ptp

    Example output

    NAME                    READY   STATUS    RESTARTS   AGE
    linuxptp-daemon-2t78p   3/3     Running   0          12h
    linuxptp-daemon-k8n88   3/3     Running   0          12h

Subscribing DU applications to PTP events REST API reference

Use the PTP event notifications REST API to subscribe a distributed unit (DU) application to the PTP events that are generated on the parent node.

Subscribe applications to PTP events by using the resource address /cluster/node/<node_name>/ptp, where <node_name> is the cluster node running the DU application.

Deploy your cloud-event-consumer DU application container and cloud-event-proxy sidecar container in a separate DU application pod. The cloud-event-consumer DU application subscribes to the cloud-event-proxy container in the application pod.

Use the following API endpoints to subscribe the cloud-event-consumer DU application to PTP events posted by the cloud-event-proxy container at http://localhost:8089/api/ocloudNotifications/v1/ in the DU application pod:

  • /api/ocloudNotifications/v1/subscriptions

    • POST: Creates a new subscription

    • GET: Retrieves a list of subscriptions

  • /api/ocloudNotifications/v1/subscriptions/<subscription_id>

    • GET: Returns details for the specified subscription ID
  • /api/ocloudNotifications/v1/subscriptions/status/<subscription_id>

    • PUT: Creates a new status ping request for the specified subscription ID
  • /api/ocloudNotifications/v1/health

    • GET: Returns the health status of the ocloudNotifications API
  • /api/ocloudNotifications/v1/publishers

    • GET: Returns an array of os-clock-sync-state, ptp-clock-class-change, and lock-state messages for the cluster node
  • /api/ocloudNotifications/v1/<resource_address>/CurrentState

    • GET: Returns the current state of one of the following event types: os-clock-sync-state, ptp-clock-class-change, or lock-state

9089 is the default port for the cloud-event-consumer container deployed in the application pod. You can configure a different port for your DU application as required.
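
Putting the endpoints above together, a DU application could create a subscription as sketched below. This is a minimal sketch using only the Python standard library; the payload fields follow the example payload shown later in this section, and the function names are illustrative:

```python
import json
from urllib import request

# Default cloud-event-proxy sidecar address in the DU application pod.
API_BASE = "http://localhost:8089/api/ocloudNotifications/v1"

def subscription_payload(node_name: str) -> dict:
    """Body for POST /subscriptions: the callback location and PTP resource."""
    return {
        "uriLocation": f"{API_BASE}/subscriptions",
        "resource": f"/cluster/node/{node_name}/ptp",
    }

def create_subscription(node_name: str) -> bytes:
    """POST a new PTP events subscription; expects a 201 Created response.
    Requires the cloud-event-proxy sidecar to be reachable."""
    body = json.dumps(subscription_payload(node_name)).encode()
    req = request.Request(
        f"{API_BASE}/subscriptions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.read()
```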

api/ocloudNotifications/v1/subscriptions

HTTP method

GET api/ocloudNotifications/v1/subscriptions

Description

Returns a list of subscriptions. If subscriptions exist, a 200 OK status code is returned along with the list of subscriptions.

Example API response

[
  {
    "id": "75b1ad8f-c807-4c23-acf5-56f4b7ee3826",
    "endpointUri": "http://localhost:9089/event",
    "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions/75b1ad8f-c807-4c23-acf5-56f4b7ee3826",
    "resource": "/cluster/node/compute-1.example.com/ptp"
  }
]
HTTP method

POST api/ocloudNotifications/v1/subscriptions

Description

Creates a new subscription. If a subscription is successfully created, or if it already exists, a 201 Created status code is returned.

Table 5. Query parameters

Parameter      Type
subscription   data

Example payload

{
  "uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions",
  "resource": "/cluster/node/compute-1.example.com/ptp"
}

api/ocloudNotifications/v1/subscriptions/<subscription_id>

HTTP method

GET api/ocloudNotifications/v1/subscriptions/<subscription_id>

Description

Returns details for the subscription with ID <subscription_id>

Table 6. Query parameters

Parameter           Type
<subscription_id>   string

Example API response

{
  "id":"48210fb3-45be-4ce0-aa9b-41a0e58730ab",
  "endpointUri": "http://localhost:9089/event",
  "uriLocation":"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/48210fb3-45be-4ce0-aa9b-41a0e58730ab",
  "resource":"/cluster/node/compute-1.example.com/ptp"
}

api/ocloudNotifications/v1/subscriptions/status/<subscription_id>

HTTP method

PUT api/ocloudNotifications/v1/subscriptions/status/<subscription_id>

Description

Creates a new status ping request for subscription with ID <subscription_id>. If a subscription is present, the status request is successful and a 202 Accepted status code is returned.

Table 7. Query parameters

Parameter           Type
<subscription_id>   string

Example API response

{"status":"ping sent"}

api/ocloudNotifications/v1/health/

HTTP method

GET api/ocloudNotifications/v1/health/

Description

Returns the health status for the ocloudNotifications REST API.

Example API response

OK

api/ocloudNotifications/v1/publishers

HTTP method

GET api/ocloudNotifications/v1/publishers

Description

Returns an array of os-clock-sync-state, ptp-clock-class-change, and lock-state details for the cluster node. The system generates notifications when the relevant equipment state changes.

  • os-clock-sync-state notifications describe the host operating system clock synchronization state. Can be in LOCKED or FREERUN state.

  • ptp-clock-class-change notifications describe the current state of the PTP clock class.

  • lock-state notifications describe the current status of the PTP equipment lock state. Can be in LOCKED, HOLDOVER, or FREERUN state.

Example API response

[
  {
    "id": "0fa415ae-a3cf-4299-876a-589438bacf75",
    "endpointUri": "http://localhost:9085/api/ocloudNotifications/v1/dummy",
    "uriLocation": "http://localhost:9085/api/ocloudNotifications/v1/publishers/0fa415ae-a3cf-4299-876a-589438bacf75",
    "resource": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state"
  },
  {
    "id": "28cd82df-8436-4f50-bbd9-7a9742828a71",
    "endpointUri": "http://localhost:9085/api/ocloudNotifications/v1/dummy",
    "uriLocation": "http://localhost:9085/api/ocloudNotifications/v1/publishers/28cd82df-8436-4f50-bbd9-7a9742828a71",
    "resource": "/cluster/node/compute-1.example.com/sync/ptp-status/ptp-clock-class-change"
  },
  {
    "id": "44aa480d-7347-48b0-a5b0-e0af01fa9677",
    "endpointUri": "http://localhost:9085/api/ocloudNotifications/v1/dummy",
    "uriLocation": "http://localhost:9085/api/ocloudNotifications/v1/publishers/44aa480d-7347-48b0-a5b0-e0af01fa9677",
    "resource": "/cluster/node/compute-1.example.com/sync/ptp-status/lock-state"
  }
]
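
A consumer that needs one specific publisher can filter the returned array on the resource suffix. `publisher_for` is a hypothetical helper written for illustration, not part of the ocloudNotifications API:

```python
import json
from typing import Optional

def publisher_for(publishers_json: str, event_type: str) -> Optional[dict]:
    """Return the publisher entry whose resource address ends with event_type,
    for example "lock-state" or "os-clock-sync-state"; None if absent."""
    for pub in json.loads(publishers_json):
        if pub["resource"].endswith(event_type):
            return pub
    return None
```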

You can find os-clock-sync-state, ptp-clock-class-change, and lock-state events in the logs for the cloud-event-proxy container. For example:

$ oc logs -f linuxptp-daemon-cvgr6 -n openshift-ptp -c cloud-event-proxy

Example os-clock-sync-state event

{
  "id":"c8a784d1-5f4a-4c16-9a81-a3b4313affe5",
  "type":"event.sync.sync-status.os-clock-sync-state-change",
  "source":"/cluster/compute-1.example.com/ptp/CLOCK_REALTIME",
  "dataContentType":"application/json",
  "time":"2022-05-06T15:31:23.906277159Z",
  "data":{
    "version":"v1",
    "values":[
      {
        "resource":"/sync/sync-status/os-clock-sync-state",
        "dataType":"notification",
        "valueType":"enumeration",
        "value":"LOCKED"
      },
      {
        "resource":"/sync/sync-status/os-clock-sync-state",
        "dataType":"metric",
        "valueType":"decimal64.3",
        "value":"-53"
      }
    ]
  }
}

Example ptp-clock-class-change event

{
  "id":"69eddb52-1650-4e56-b325-86d44688d02b",
  "type":"event.sync.ptp-status.ptp-clock-class-change",
  "source":"/cluster/compute-1.example.com/ptp/ens2fx/master",
  "dataContentType":"application/json",
  "time":"2022-05-06T15:31:23.147100033Z",
  "data":{
    "version":"v1",
    "values":[
      {
        "resource":"/sync/ptp-status/ptp-clock-class-change",
        "dataType":"metric",
        "valueType":"decimal64.3",
        "value":"135"
      }
    ]
  }
}

Example lock-state event

{
  "id":"305ec18b-1472-47b3-aadd-8f37933249a9",
  "type":"event.sync.ptp-status.ptp-state-change",
  "source":"/cluster/compute-1.example.com/ptp/ens2fx/master",
  "dataContentType":"application/json",
  "time":"2022-05-06T15:31:23.467684081Z",
  "data":{
    "version":"v1",
    "values":[
      {
        "resource":"/sync/ptp-status/lock-state",
        "dataType":"notification",
        "valueType":"enumeration",
        "value":"LOCKED"
      },
      {
        "resource":"/sync/ptp-status/lock-state",
        "dataType":"metric",
        "valueType":"decimal64.3",
        "value":"62"
      }
    ]
  }
}
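
Each event payload above carries a notification value (the clock state) and, where present, a metric value (the offset in nanoseconds) inside data.values. The following is a sketch of extracting both; the function name is illustrative:

```python
import json
from typing import Optional, Tuple

def parse_sync_event(event_json: str) -> Tuple[Optional[str], Optional[float]]:
    """Extract the state (e.g. LOCKED) and the decimal64.3 offset in
    nanoseconds from a PTP sync event payload."""
    state: Optional[str] = None
    offset: Optional[float] = None
    for v in json.loads(event_json)["data"]["values"]:
        if v["dataType"] == "notification":
            state = v["value"]
        elif v["dataType"] == "metric":
            offset = float(v["value"])
    return state, offset
```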

/api/ocloudNotifications/v1/<resource_address>/CurrentState

HTTP method

GET api/ocloudNotifications/v1/cluster/node/<node_name>/sync/ptp-status/lock-state/CurrentState

GET api/ocloudNotifications/v1/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state/CurrentState

GET api/ocloudNotifications/v1/cluster/node/<node_name>/sync/ptp-status/ptp-clock-class-change/CurrentState

Description

Use the CurrentState API endpoint to return the current state of the os-clock-sync-state, ptp-clock-class-change, or lock-state events for the cluster node.

  • os-clock-sync-state notifications describe the host operating system clock synchronization state. Can be in LOCKED or FREERUN state.

  • ptp-clock-class-change notifications describe the current state of the PTP clock class.

  • lock-state notifications describe the current status of the PTP equipment lock state. Can be in LOCKED, HOLDOVER, or FREERUN state.

Table 8. Query parameters

Parameter            Type
<resource_address>   string
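
The three GET endpoints above share one URL shape, varying only the sync resource segment. A hypothetical helper to build the polling URL (not part of the API itself):

```python
def current_state_url(node_name: str, resource: str,
                      base: str = "http://localhost:8089/api/ocloudNotifications/v1") -> str:
    """Build a CurrentState URL; resource is one of
    "sync/ptp-status/lock-state", "sync/sync-status/os-clock-sync-state",
    or "sync/ptp-status/ptp-clock-class-change"."""
    return f"{base}/cluster/node/{node_name}/{resource}/CurrentState"
```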

Example lock-state API response

{
  "id": "c1ac3aa5-1195-4786-84f8-da0ea4462921",
  "type": "event.sync.ptp-status.ptp-state-change",
  "source": "/cluster/node/compute-1.example.com/sync/ptp-status/lock-state",
  "dataContentType": "application/json",
  "time": "2023-01-10T02:41:57.094981478Z",
  "data": {
    "version": "v1",
    "values": [
      {
        "resource": "/cluster/node/compute-1.example.com/ens5fx/master",
        "dataType": "notification",
        "valueType": "enumeration",
        "value": "LOCKED"
      },
      {
        "resource": "/cluster/node/compute-1.example.com/ens5fx/master",
        "dataType": "metric",
        "valueType": "decimal64.3",
        "value": "29"
      }
    ]
  }
}

Example os-clock-sync-state API response

{
  "specversion": "0.3",
  "id": "4f51fe99-feaa-4e66-9112-66c5c9b9afcb",
  "source": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state",
  "type": "event.sync.sync-status.os-clock-sync-state-change",
  "subject": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state",
  "datacontenttype": "application/json",
  "time": "2022-11-29T17:44:22.202Z",
  "data": {
    "version": "v1",
    "values": [
      {
        "resource": "/cluster/node/compute-1.example.com/CLOCK_REALTIME",
        "dataType": "notification",
        "valueType": "enumeration",
        "value": "LOCKED"
      },
      {
        "resource": "/cluster/node/compute-1.example.com/CLOCK_REALTIME",
        "dataType": "metric",
        "valueType": "decimal64.3",
        "value": "27"
      }
    ]
  }
}

Example ptp-clock-class-change API response

{
  "id": "064c9e67-5ad4-4afb-98ff-189c6aa9c205",
  "type": "event.sync.ptp-status.ptp-clock-class-change",
  "source": "/cluster/node/compute-1.example.com/sync/ptp-status/ptp-clock-class-change",
  "dataContentType": "application/json",
  "time": "2023-01-10T02:41:56.785673989Z",
  "data": {
    "version": "v1",
    "values": [
      {
        "resource": "/cluster/node/compute-1.example.com/ens5fx/master",
        "dataType": "metric",
        "valueType": "decimal64.3",
        "value": "165"
      }
    ]
  }
}

Monitoring PTP fast event metrics

You can monitor PTP fast event metrics from cluster nodes where the linuxptp-daemon is running. You can also monitor PTP fast event metrics in the OKD web console by using the pre-configured and self-updating Prometheus monitoring stack.

Prerequisites

  • Install the OKD CLI (oc).

  • Log in as a user with cluster-admin privileges.

  • Install and configure the PTP Operator on a node with PTP-capable hardware.

Procedure

  1. Check for exposed PTP metrics on any node where the linuxptp-daemon is running. For example, run the following command:

    $ curl http://<node_name>:9091/metrics

    Example output

    # HELP openshift_ptp_clock_state 0 = FREERUN, 1 = LOCKED, 2 = HOLDOVER
    # TYPE openshift_ptp_clock_state gauge
    openshift_ptp_clock_state{iface="ens1fx",node="compute-1.example.com",process="ptp4l"} 1
    openshift_ptp_clock_state{iface="ens3fx",node="compute-1.example.com",process="ptp4l"} 1
    openshift_ptp_clock_state{iface="ens5fx",node="compute-1.example.com",process="ptp4l"} 1
    openshift_ptp_clock_state{iface="ens7fx",node="compute-1.example.com",process="ptp4l"} 1
    # HELP openshift_ptp_delay_ns
    # TYPE openshift_ptp_delay_ns gauge
    openshift_ptp_delay_ns{from="master",iface="ens1fx",node="compute-1.example.com",process="ptp4l"} 842
    openshift_ptp_delay_ns{from="master",iface="ens3fx",node="compute-1.example.com",process="ptp4l"} 480
    openshift_ptp_delay_ns{from="master",iface="ens5fx",node="compute-1.example.com",process="ptp4l"} 584
    openshift_ptp_delay_ns{from="master",iface="ens7fx",node="compute-1.example.com",process="ptp4l"} 482
    openshift_ptp_delay_ns{from="phc",iface="CLOCK_REALTIME",node="compute-1.example.com",process="phc2sys"} 547
    # HELP openshift_ptp_offset_ns
    # TYPE openshift_ptp_offset_ns gauge
    openshift_ptp_offset_ns{from="master",iface="ens1fx",node="compute-1.example.com",process="ptp4l"} -2
    openshift_ptp_offset_ns{from="master",iface="ens3fx",node="compute-1.example.com",process="ptp4l"} -44
    openshift_ptp_offset_ns{from="master",iface="ens5fx",node="compute-1.example.com",process="ptp4l"} -8
    openshift_ptp_offset_ns{from="master",iface="ens7fx",node="compute-1.example.com",process="ptp4l"} 3
    openshift_ptp_offset_ns{from="phc",iface="CLOCK_REALTIME",node="compute-1.example.com",process="phc2sys"} 12
  2. To view the PTP event in the OKD web console, copy the name of the PTP metric you want to query, for example, openshift_ptp_offset_ns.

  3. In the OKD web console, click Observe → Metrics.

  4. Paste the PTP metric name into the Expression field, and click Run queries.
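
Outside the web console, the plain-text metrics shown in step 1 can also be parsed directly. The following sketch handles only the simple `name{labels} value` lines shown above and is not a full Prometheus exposition-format parser:

```python
import re
from typing import Dict

# Matches lines of the form: metric_name{label="value",...} number
METRIC_RE = re.compile(r'^(\w+)\{([^}]*)\}\s+(-?\d+(?:\.\d+)?)$')

def parse_ptp_metrics(text: str, name: str) -> Dict[str, float]:
    """Map each iface label to its value for one openshift_ptp_* metric."""
    out: Dict[str, float] = {}
    for line in text.splitlines():
        m = METRIC_RE.match(line.strip())
        if not m or m.group(1) != name:
            continue
        labels = dict(kv.split("=", 1) for kv in m.group(2).split(","))
        out[labels.get("iface", "").strip('"')] = float(m.group(3))
    return out
```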

Additional resources