Configuring PTP hardware

Precision Time Protocol (PTP) hardware is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

About PTP hardware

OKD includes the capability to use Precision Time Protocol (PTP) hardware on your nodes. You can configure linuxptp services on nodes in your cluster that have PTP-capable hardware.

The PTP Operator works with PTP-capable devices on clusters provisioned only on bare metal infrastructure.

You can use the OKD console to install PTP by deploying the PTP Operator. The PTP Operator creates and manages the linuxptp services. The Operator provides the following features:

  • Discovery of the PTP-capable devices in a cluster.

  • Management of the configuration of linuxptp services.

Automated discovery of PTP network devices

The PTP Operator adds the NodePtpDevice.ptp.openshift.io custom resource definition (CRD) to OKD. The PTP Operator searches your cluster for PTP-capable network devices on each node. The Operator creates and updates a NodePtpDevice custom resource (CR) object for each node that provides a compatible PTP device.

One CR is created for each node, and shares the same name as the node. The .status.devices list provides information about the PTP devices on a node.

The following is an example of a NodePtpDevice CR created by the PTP Operator:

  apiVersion: ptp.openshift.io/v1
  kind: NodePtpDevice
  metadata:
    creationTimestamp: "2019-11-15T08:57:11Z"
    generation: 1
    name: dev-worker-0 (1)
    namespace: openshift-ptp (2)
    resourceVersion: "487462"
    selfLink: /apis/ptp.openshift.io/v1/namespaces/openshift-ptp/nodeptpdevices/dev-worker-0
    uid: 08d133f7-aae2-403f-84ad-1fe624e5ab3f
  spec: {}
  status:
    devices: (3)
    - name: eno1
    - name: eno2
    - name: ens787f0
    - name: ens787f1
    - name: ens801f0
    - name: ens801f1
    - name: ens802f0
    - name: ens802f1
    - name: ens803
(1) The value of the name parameter is the same as the name of the node.
(2) The CR is created in the openshift-ptp namespace by the PTP Operator.
(3) The devices collection includes a list of all the PTP-capable devices that the Operator discovers on the node.
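Because NodePtpDevice is a standard custom resource, you can inspect the discovered devices with ordinary oc commands. The following sketch assumes the openshift-ptp namespace and the dev-worker-0 node name from the example above:

```shell
# List the NodePtpDevice CRs in the PTP Operator namespace.
# One CR exists per node that has PTP-capable hardware.
oc get nodeptpdevice -n openshift-ptp

# Print only the discovered device names for a single node
# (dev-worker-0 is the example node name used above).
oc get nodeptpdevice dev-worker-0 -n openshift-ptp \
  -o jsonpath='{.status.devices[*].name}'
```

With the example CR above, the second command would print the interface names eno1 through ens803 on one line.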

Installing the PTP Operator

As a cluster administrator, you can install the PTP Operator using the OKD CLI or the web console.

CLI: Installing the PTP Operator

As a cluster administrator, you can install the Operator using the CLI.

Prerequisites

  • A cluster installed on bare-metal infrastructure with nodes that have PTP-capable hardware.

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

Procedure

  1. To create a namespace for the PTP Operator, enter the following command:

    $ cat << EOF | oc create -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-ptp
      labels:
        name: openshift-ptp
        openshift.io/cluster-monitoring: "true"
    EOF
  2. To create an Operator group for the Operator, enter the following command:

    $ cat << EOF | oc create -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: ptp-operators
      namespace: openshift-ptp
    spec:
      targetNamespaces:
      - openshift-ptp
    EOF
  3. Subscribe to the PTP Operator.

    1. Run the following command to set the OKD major and minor version as an environment variable, which is used as the channel value in the next step.

      $ OC_VERSION=$(oc version -o yaml | grep openshiftVersion | \
          grep -o '[0-9]*[.][0-9]*' | head -1)
    2. To create a subscription for the PTP Operator, enter the following command:

      $ cat << EOF | oc create -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: ptp-operator-subscription
        namespace: openshift-ptp
      spec:
        channel: "${OC_VERSION}"
        name: ptp-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
      EOF
  4. To verify that the Operator is installed, enter the following command:

    $ oc get csv -n openshift-ptp \
      -o custom-columns=Name:.metadata.name,Phase:.status.phase

    Example output

    Name                              Phase
    ptp-operator.4.4.0-202006160135   Succeeded

Web console: Installing the PTP Operator

As a cluster administrator, you can install the Operator using the web console.

You must create the namespace and Operator group as described in the previous section before you install the Operator from the web console.

Procedure

  1. Install the PTP Operator using the OKD web console:

    1. In the OKD web console, click Operators → OperatorHub.

    2. Choose PTP Operator from the list of available Operators, and then click Install.

    3. On the Install Operator page, under A specific namespace on the cluster select openshift-ptp. Then, click Install.

  2. Optional: Verify that the PTP Operator installed successfully:

    1. Switch to the Operators → Installed Operators page.

    2. Ensure that PTP Operator is listed in the openshift-ptp project with a Status of InstallSucceeded.

      During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

      If the Operator does not appear as installed, troubleshoot further:

      • Go to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failures or errors under Status.

      • Go to the Workloads → Pods page and check the logs for pods in the openshift-ptp project.

Configuring linuxptp services

The PTP Operator adds the PtpConfig.ptp.openshift.io custom resource definition (CRD) to OKD. You can configure the linuxptp services (ptp4l, phc2sys) by creating a PtpConfig custom resource (CR) object.

Prerequisites

  • Install the OpenShift CLI (oc).

  • Log in as a user with cluster-admin privileges.

  • You must have installed the PTP Operator.

Procedure

  1. Create the following PtpConfig CR, and then save the YAML in the <name>-ptp-config.yaml file. Replace <name> with the name for this configuration.

    apiVersion: ptp.openshift.io/v1
    kind: PtpConfig
    metadata:
      name: <name> (1)
      namespace: openshift-ptp (2)
    spec:
      profile: (3)
      - name: "profile1" (4)
        interface: "ens787f1" (5)
        ptp4lOpts: "-s -2" (6)
        phc2sysOpts: "-a -r" (7)
      recommend: (8)
      - profile: "profile1" (9)
        priority: 10 (10)
        match: (11)
        - nodeLabel: "node-role.kubernetes.io/worker" (12)
          nodeName: "dev-worker-0" (13)
    (1) Specify a name for the PtpConfig CR.
    (2) Specify the namespace where the PTP Operator is installed.
    (3) Specify an array of one or more profile objects.
    (4) Specify the name of the profile object, which is used to uniquely identify the profile.
    (5) Specify the network interface for the ptp4l service to use, for example ens787f1.
    (6) Specify system config options for the ptp4l service, for example -s -2. Do not include the interface name (-i <interface>) or the service config file (-f /etc/ptp4l.conf), because these are appended automatically.
    (7) Specify system config options for the phc2sys service, for example -a -r.
    (8) Specify an array of one or more recommend objects that define rules for how the profiles are applied to nodes.
    (9) Specify the profile object name that is defined in the profile section.
    (10) Specify the priority with an integer value between 0 and 99. A larger number gets lower priority, so a priority of 99 is lower than a priority of 10. If a node can be matched with multiple profiles according to rules defined in the match field, the profile with the higher priority is applied to that node.
    (11) Specify match rules with nodeLabel or nodeName.
    (12) Specify nodeLabel with the key of node.Labels from the node object.
    (13) Specify nodeName with node.Name from the node object.
  2. Create the CR by running the following command:

    $ oc create -f <filename> (1)
    (1) Replace <filename> with the name of the file you created in the previous step.
  3. Optional: Check that the PtpConfig profile is applied to nodes that match with nodeLabel or nodeName.

    $ oc get pods -n openshift-ptp -o wide

    Example output

    NAME                            READY   STATUS    RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
    linuxptp-daemon-4xkbb           1/1     Running   0          43m   192.168.111.15   dev-worker-0   <none>           <none>
    linuxptp-daemon-tdspf           1/1     Running   0          43m   192.168.111.11   dev-master-0   <none>           <none>
    ptp-operator-657bbb64c8-2f8sj   1/1     Running   0          43m   10.128.0.116     dev-master-0   <none>           <none>
    $ oc logs linuxptp-daemon-4xkbb -n openshift-ptp
    I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile
    I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to:
    I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------
    I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 (1)
    I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1 (2)
    I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -s -2 (3)
    I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r (4)
    I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------
    I1115 09:41:18.117934 4143292 daemon.go:186] Starting phc2sys...
    I1115 09:41:18.117985 4143292 daemon.go:187] phc2sys cmd: &{Path:/usr/sbin/phc2sys Args:[/usr/sbin/phc2sys -a -r] Env:[] Dir: Stdin:<nil> Stdout:<nil> Stderr:<nil> ExtraFiles:[] SysProcAttr:<nil> Process:<nil> ProcessState:<nil> ctx:<nil> lookPathErr:<nil> finished:false childFiles:[] closeAfterStart:[] closeAfterWait:[] goroutine:[] errch:<nil> waitDone:<nil>}
    I1115 09:41:19.118175 4143292 daemon.go:186] Starting ptp4l...
    I1115 09:41:19.118209 4143292 daemon.go:187] ptp4l cmd: &{Path:/usr/sbin/ptp4l Args:[/usr/sbin/ptp4l -m -f /etc/ptp4l.conf -i ens787f1 -s -2] Env:[] Dir: Stdin:<nil> Stdout:<nil> Stderr:<nil> ExtraFiles:[] SysProcAttr:<nil> Process:<nil> ProcessState:<nil> ctx:<nil> lookPathErr:<nil> finished:false childFiles:[] closeAfterStart:[] closeAfterWait:[] goroutine:[] errch:<nil> waitDone:<nil>}
    ptp4l[102189.864]: selected /dev/ptp5 as PTP clock
    ptp4l[102189.886]: port 1: INITIALIZING to LISTENING on INIT_COMPLETE
    ptp4l[102189.886]: port 0: INITIALIZING to LISTENING on INIT_COMPLETE
    (1) Profile Name is the name of the profile that is applied to node dev-worker-0.
    (2) Interface is the PTP device specified in the profile1 interface field. The ptp4l service runs on this interface.
    (3) Ptp4lOpts are the ptp4l sysconfig options specified in the profile1 ptp4lOpts field.
    (4) Phc2sysOpts are the phc2sys sysconfig options specified in the profile1 phc2sysOpts field.
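The priority mechanism described in the recommend section lets you layer a generic profile under node-specific overrides. The following sketch (the CR name, profile names, and the eno1 interface are hypothetical, not from the example above) shows one PtpConfig CR with two profiles: a cluster-wide default recommended at priority 99 for all worker nodes, and a node-specific profile recommended at priority 10, which wins on the one node that matches both rules:

```yaml
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
  name: ptp-priority-example        # hypothetical CR name
  namespace: openshift-ptp
spec:
  profile:
  - name: "default-worker"          # generic fallback profile
    interface: "eno1"               # hypothetical interface name
    ptp4lOpts: "-s -2"
    phc2sysOpts: "-a -r"
  - name: "worker-0-override"       # node-specific profile
    interface: "ens787f1"
    ptp4lOpts: "-s -2"
    phc2sysOpts: "-a -r"
  recommend:
  - profile: "default-worker"
    priority: 99                    # larger number, lower priority
    match:
    - nodeLabel: "node-role.kubernetes.io/worker"
  - profile: "worker-0-override"
    priority: 10                    # smaller number wins where both match
    match:
    - nodeName: "dev-worker-0"
```

With this configuration, dev-worker-0 matches both recommend rules, so it receives worker-0-override (priority 10); every other worker node receives default-worker.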