FlowCollector configuration parameters

FlowCollector is the Schema for the flowcollectors API, which pilots and configures netflow collection.

FlowCollector API specifications

Type

object

Property | Type | Description

apiVersion

string

APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and might reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

kind

string

Kind is a string value representing the REST resource this object represents. Servers might infer this from the endpoint the client submits requests to. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

metadata

ObjectMeta

Standard object’s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

spec

object

FlowCollectorSpec defines the desired state of FlowCollector

status

object

FlowCollectorStatus defines the observed state of FlowCollector

.spec

Description

FlowCollectorSpec defines the desired state of FlowCollector

Type

object

Required

  • agent

  • deploymentModel

Property | Type | Description

agent

object

agent for flows extraction.

consolePlugin

object

consolePlugin defines the settings related to the OKD Console plugin, when available.

deploymentModel

string

deploymentModel defines the desired type of deployment for flow processing. Possible values are “DIRECT” (default) to make the flow processor listen directly to the agents, or “KAFKA” to send flows to a Kafka pipeline before consumption by the processor. Kafka can provide better scalability, resiliency and high availability (for more details, see https://www.redhat.com/en/topics/integration/what-is-apache-kafka).

exporters

array

exporters defines additional optional exporters for custom consumption or storage. This is an experimental feature. Currently, only the KAFKA exporter is available.

exporters[]

object

FlowCollectorExporter defines an additional exporter to send enriched flows to

kafka

object

Kafka configuration, allowing the use of Kafka as a broker as part of the flow collection pipeline. Available when the “spec.deploymentModel” is “KAFKA”.

loki

object

Loki, the flow store, client settings.

namespace

string

namespace where NetObserv pods are deployed. If empty, the namespace of the operator is going to be used.

processor

object

processor defines the settings of the component that receives the flows from the agent, enriches them, and forwards them to the Loki persistence layer.
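
The following minimal manifest is a sketch of how these top-level fields fit together. The resource name, the netobserv namespace, and the Loki URL are illustrative assumptions, and the apiVersion depends on the operator version installed on your cluster.

apiVersion: flows.netobserv.io/v1alpha1   # assumption: use the version served by your installed operator
kind: FlowCollector
metadata:
  name: cluster                  # illustrative name
spec:
  namespace: netobserv           # assumption: namespace where NetObserv pods are deployed
  deploymentModel: DIRECT        # required; DIRECT (default) or KAFKA
  agent:
    type: EBPF                   # required; EBPF (default) or IPFIX
  loki:
    url: http://loki:3100/       # assumption: address of an existing Loki service
  consolePlugin:
    register: true               # required within consolePlugin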

.spec.agent

Description

agent for flows extraction.

Type

object

Required

  • type

Property | Type | Description

ebpf

object

ebpf describes the settings related to the eBPF-based flow reporter when the “agent.type” property is set to “EBPF”.

ipfix

object

ipfix describes the settings related to the IPFIX-based flow reporter when the “agent.type” property is set to “IPFIX”.

type

string

type selects the flow tracing agent. Possible values are “EBPF” (default) to use the NetObserv eBPF agent, or “IPFIX” to use the legacy IPFIX collector. “EBPF” is recommended in most cases as it offers better performance and should work regardless of the CNI installed on the cluster. “IPFIX” works with the OVN-Kubernetes CNI (other CNIs could work if they support exporting IPFIX, but they would require manual configuration).

.spec.agent.ebpf

Description

ebpf describes the settings related to the eBPF-based flow reporter when the “agent.type” property is set to “EBPF”.

Type

object

Property | Type | Description

cacheActiveTimeout

string

cacheActiveTimeout is the max period during which the reporter will aggregate flows before sending. Increasing cacheMaxFlows and cacheActiveTimeout can decrease the network traffic overhead and the CPU load, however you can expect higher memory consumption and an increased latency in the flow collection.

cacheMaxFlows

integer

cacheMaxFlows is the max number of flows in an aggregate; when reached, the reporter sends the flows. Increasing cacheMaxFlows and cacheActiveTimeout can decrease the network traffic overhead and the CPU load, however you can expect higher memory consumption and an increased latency in the flow collection.

debug

object

Debug allows setting some aspects of the internal configuration of the eBPF agent. This section is aimed exclusively at debugging and fine-grained performance optimizations, such as the GOGC and GOMAXPROCS env vars. Users setting its values do so at their own risk.

excludeInterfaces

array (string)

excludeInterfaces contains the interface names that will be excluded from flow tracing. If an entry is enclosed by slashes, such as /br-/, it is matched as a regular expression; otherwise it is matched as a case-sensitive string.

imagePullPolicy

string

imagePullPolicy is the Kubernetes pull policy for the image defined above

interfaces

array (string)

interfaces contains the interface names from where flows will be collected. If empty, the agent fetches all the interfaces in the system, except the ones listed in ExcludeInterfaces. If an entry is enclosed by slashes, for example /br-/, it is matched as a regular expression; otherwise it is matched as a case-sensitive string.

kafkaBatchSize

integer

kafkaBatchSize limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 10MB.

logLevel

string

logLevel defines the log level for the NetObserv eBPF Agent

privileged

boolean

privileged mode for the eBPF Agent container. In general this setting can be ignored or set to false: in that case, the operator will set granular capabilities (BPF, PERFMON, NET_ADMIN, SYS_RESOURCE) to the container, to enable its correct operation. If for some reason these capabilities cannot be set, such as if an old kernel version not knowing CAP_BPF is in use, then you can turn on this mode for more global privileges.

resources

object

resources are the compute resources required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

sampling

integer

sampling rate of the flow reporter. A value of 100 means one flow out of 100 is sent. 0 or 1 means all flows are sampled.
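
As a sketch, the eBPF agent fields above map onto the FlowCollector spec as follows; all values are illustrative, not recommendations.

spec:
  agent:
    type: EBPF
    ebpf:
      sampling: 50                  # one flow out of 50 is reported; 0 or 1 samples everything
      cacheActiveTimeout: 5s
      cacheMaxFlows: 100000
      interfaces: ["/br-/"]         # enclosed in slashes: matched as a regular expression
      excludeInterfaces: ["lo"]     # plain string: case-sensitive exact match
      privileged: false             # granular capabilities are set by the operator instead
      imagePullPolicy: IfNotPresent
      logLevel: info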

.spec.agent.ebpf.debug

Description

Debug allows setting some aspects of the internal configuration of the eBPF agent. This section is aimed exclusively at debugging and fine-grained performance optimizations, such as the GOGC and GOMAXPROCS env vars. Users setting its values do so at their own risk.

Type

object

Property | Type | Description

env

object (string)

env allows passing custom environment variables to the NetObserv Agent. Useful for passing specific performance-tuning options, such as GOGC and GOMAXPROCS, that shouldn’t be publicly exposed as part of the FlowCollector descriptor, as they are only useful in edge debugging or support scenarios.
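
For example, a sketch of passing the GOGC and GOMAXPROCS variables through this field; the values are illustrative and should only be used in debugging or support scenarios.

spec:
  agent:
    ebpf:
      debug:
        env:
          GOGC: "400"          # illustrative value
          GOMAXPROCS: "1"      # illustrative value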

.spec.agent.ebpf.resources

Description

resources are the compute resources required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

Type

object

Property | Type | Description

limits

integer-or-string

Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

requests

integer-or-string

Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
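
In practice, limits and requests take standard Kubernetes resource quantities. A sketch with illustrative values:

spec:
  agent:
    ebpf:
      resources:
        requests:
          cpu: 100m
          memory: 50Mi
        limits:
          memory: 800Mi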

.spec.agent.ipfix

Description

ipfix describes the settings related to the IPFIX-based flow reporter when the “agent.type” property is set to “IPFIX”.

Type

object

Property | Type | Description

cacheActiveTimeout

string

cacheActiveTimeout is the max period during which the reporter will aggregate flows before sending

cacheMaxFlows

integer

cacheMaxFlows is the max number of flows in an aggregate; when reached, the reporter sends the flows

clusterNetworkOperator

object

clusterNetworkOperator defines the settings related to the OKD Cluster Network Operator, when available.

forceSampleAll

boolean

forceSampleAll allows disabling sampling in the IPFIX-based flow reporter. It is not recommended to sample all the traffic with IPFIX, as it might generate cluster instability. If you REALLY want to do that, set this flag to true. Use at your own risk. When it is set to true, the value of “sampling” is ignored.

ovnKubernetes

object

ovnKubernetes defines the settings of the OVN-Kubernetes CNI, when available. This configuration is used when using OVN’s IPFIX exports, without OKD. When using OKD, refer to the clusterNetworkOperator property instead.

sampling

integer

sampling is the sampling rate on the reporter. A value of 100 means one flow out of 100 is sent. To ensure cluster stability, it is not possible to set a value below 2. If you really want to sample every packet, which might impact the cluster stability, refer to “forceSampleAll”. Alternatively, you can use the eBPF Agent instead of IPFIX.
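
A sketch of an IPFIX agent configuration using the fields above; the values are illustrative.

spec:
  agent:
    type: IPFIX
    ipfix:
      sampling: 400             # cannot be set below 2
      cacheActiveTimeout: 60s
      cacheMaxFlows: 400
      forceSampleAll: false     # sampling every packet with IPFIX is discouraged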

.spec.agent.ipfix.clusterNetworkOperator

Description

clusterNetworkOperator defines the settings related to the OKD Cluster Network Operator, when available.

Type

object

Property | Type | Description

namespace

string

namespace where the config map is going to be deployed.

.spec.agent.ipfix.ovnKubernetes

Description

ovnKubernetes defines the settings of the OVN-Kubernetes CNI, when available. This configuration is used when using OVN’s IPFIX exports, without OKD. When using OKD, refer to the clusterNetworkOperator property instead.

Type

object

Property | Type | Description

containerName

string

containerName defines the name of the container to configure for IPFIX.

daemonSetName

string

daemonSetName defines the name of the DaemonSet controlling the OVN-Kubernetes pods.

namespace

string

namespace where OVN-Kubernetes pods are deployed.
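
When OVN-Kubernetes is deployed outside OKD, the configuration could look like the following sketch; the namespace, DaemonSet, and container names are assumptions based on a typical upstream OVN-Kubernetes installation and must match your cluster.

spec:
  agent:
    type: IPFIX
    ipfix:
      ovnKubernetes:
        namespace: ovn-kubernetes       # assumption
        daemonSetName: ovnkube-node     # assumption
        containerName: ovnkube-node     # assumption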

.spec.consolePlugin

Description

consolePlugin defines the settings related to the OKD Console plugin, when available.

Type

object

Required

  • register

Property | Type | Description

autoscaler

object

autoscaler spec of a horizontal pod autoscaler to set up for the plugin Deployment.

imagePullPolicy

string

imagePullPolicy is the Kubernetes pull policy for the image defined above

logLevel

string

logLevel for the console plugin backend

port

integer

port is the plugin service port

portNaming

object

portNaming defines the configuration of the port-to-service name translation

quickFilters

array

quickFilters configures quick filter presets for the Console plugin

quickFilters[]

object

QuickFilter defines preset configuration for Console’s quick filters

register

boolean

register allows, when set to true, to automatically register the provided console plugin with the OKD Console operator. When set to false, you can still register it manually by editing console.operator.openshift.io/cluster with the following command: oc patch console.operator.openshift.io cluster --type='json' -p '[{"op": "add", "path": "/spec/plugins/-", "value": "netobserv-plugin"}]'

replicas

integer

replicas defines the number of replicas (pods) to start.

resources

object

resources, in terms of compute resources, required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
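
A sketch of the console plugin settings described above, with illustrative values:

spec:
  consolePlugin:
    register: true              # required
    replicas: 1
    port: 9001
    imagePullPolicy: IfNotPresent
    logLevel: info
    portNaming:
      enable: true
      portNames:
        "3100": loki            # show port 3100 as "loki" in the Console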

.spec.consolePlugin.autoscaler

Description

autoscaler spec of a horizontal pod autoscaler to set up for the plugin Deployment. Please refer to HorizontalPodAutoscaler documentation (autoscaling/v2)

.spec.consolePlugin.portNaming

Description

portNaming defines the configuration of the port-to-service name translation

Type

object

Property | Type | Description

enable

boolean

enable the console plugin port-to-service name translation

portNames

object (string)

portNames defines additional port names to use in the console, for example, portNames: {"3100": "loki"}

.spec.consolePlugin.quickFilters

Description

quickFilters configures quick filter presets for the Console plugin

Type

array

.spec.consolePlugin.quickFilters[]

Description

QuickFilter defines preset configuration for Console’s quick filters

Type

object

Required

  • filter

  • name

Property | Type | Description

default

boolean

default defines whether this filter should be active by default or not

filter

object (string)

filter is a set of keys and values to be set when this filter is selected. Each key can relate to a list of values using a comma-separated string, for example, filter: {"src_namespace": "namespace1,namespace2"}

name

string

name of the filter, as it will be displayed in the Console
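
A sketch of a quick filter preset, reusing the comma-separated filter example from the description above; the filter name is illustrative.

spec:
  consolePlugin:
    quickFilters:
    - name: My namespaces                          # illustrative name shown in the Console
      default: true                                # active by default
      filter:
        src_namespace: "namespace1,namespace2"     # comma-separated list of values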

.spec.consolePlugin.resources

Description

resources, in terms of compute resources, required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

Type

object

Property | Type | Description

limits

integer-or-string

Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

requests

integer-or-string

Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

.spec.exporters

Description

exporters defines additional optional exporters for custom consumption or storage. This is an experimental feature. Currently, only the KAFKA exporter is available.

Type

array

.spec.exporters[]

Description

FlowCollectorExporter defines an additional exporter to send enriched flows to

Type

object

Required

  • type

Property | Type | Description

kafka

object

kafka describes the Kafka configuration (address, topic, and so on) to send enriched flows to.

type

string

type selects the type of exporters. Only “KAFKA” is available at the moment.

.spec.exporters[].kafka

Description

kafka describes the Kafka configuration, such as address or topic, to send enriched flows to.

Type

object

Required

  • address

  • topic

Property | Type | Description

address

string

address of the Kafka server

tls

object

tls client configuration. When using TLS, verify the address matches the Kafka port used for TLS, generally 9093. Note that, when eBPF agents are used, the Kafka certificate needs to be copied to the agent namespace (netobserv-privileged by default).

topic

string

kafka topic to use. It must exist, NetObserv will not create it.
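
Putting the exporter fields together, a sketch of an additional Kafka exporter; the address and topic names are assumptions, and the topic must already exist.

spec:
  exporters:
  - type: KAFKA                                               # only KAFKA is available at the moment
    kafka:
      address: kafka-cluster-kafka-bootstrap.netobserv:9092   # assumption: Kafka bootstrap address
      topic: netobserv-flows-export                           # assumption: pre-existing topic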

.spec.exporters[].kafka.tls

Description

tls client configuration. When using TLS, verify the address matches the Kafka port used for TLS, generally 9093. Note that, when eBPF agents are used, the Kafka certificate needs to be copied to the agent namespace (netobserv-privileged by default).

Type

object

Property | Type | Description

caCert

object

caCert defines the reference of the certificate for the Certificate Authority

enable

boolean

enable TLS

insecureSkipVerify

boolean

insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true, the CACert field is ignored.

userCert

object

userCert defines the user certificate reference, used for mTLS (you can ignore it when using regular, one-way TLS)
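
A sketch of the TLS client settings for the exporter, assuming the CA and user certificates are stored in Secrets; the Secret names are assumptions.

spec:
  exporters:
  - type: KAFKA
    kafka:
      address: kafka-cluster-kafka-bootstrap.netobserv:9093   # TLS port, generally 9093
      topic: netobserv-flows-export
      tls:
        enable: true
        insecureSkipVerify: false
        caCert:
          type: secret                           # config map or secret
          name: kafka-cluster-cluster-ca-cert    # assumption: Secret holding the CA certificate
          certFile: ca.crt
        userCert:                                # only needed for mTLS
          type: secret
          name: flp-kafka-user                   # assumption
          certFile: user.crt
          certKey: user.key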

.spec.exporters[].kafka.tls.caCert

Description

caCert defines the reference of the certificate for the Certificate Authority

Type

object

Property | Type | Description

certFile

string

certFile defines the path to the certificate file name within the config map / Secret

certKey

string

certKey defines the path to the certificate private key file name within the config map / Secret. Omit when the key is not necessary.

name

string

name of the config map or Secret containing certificates

type

string

type for the certificate reference: config map or secret

.spec.exporters[].kafka.tls.userCert

Description

userCert defines the user certificate reference, used for mTLS (you can ignore it when using regular, one-way TLS)

Type

object

Property | Type | Description

certFile

string

certFile defines the path to the certificate file name within the config map / Secret

certKey

string

certKey defines the path to the certificate private key file name within the config map / Secret. Omit when the key is not necessary.

name

string

name of the config map or Secret containing certificates

type

string

type for the certificate reference: config map or secret

.spec.kafka

Description

kafka configuration, allowing the use of Kafka as a broker as part of the flow collection pipeline. Available when the “spec.deploymentModel” is “KAFKA”.

Type

object

Required

  • address

  • topic

Property | Type | Description

address

string

address of the Kafka server

tls

object

tls client configuration. When using TLS, verify the address matches the Kafka port used for TLS, generally 9093. Note that, when eBPF agents are used, the Kafka certificate needs to be copied to the agent namespace (netobserv-privileged by default).

topic

string

kafka topic to use. It must exist, NetObserv will not create it.
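
A sketch of enabling the Kafka deployment model together with this section; the address and topic are assumptions.

spec:
  deploymentModel: KAFKA
  kafka:
    address: kafka-cluster-kafka-bootstrap.netobserv:9092   # assumption: Kafka bootstrap address
    topic: network-flows                                    # assumption: pre-existing topic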

.spec.kafka.tls

Description

tls client configuration. When using TLS, verify the address matches the Kafka port used for TLS, generally 9093. Note that, when eBPF agents are used, the Kafka certificate needs to be copied to the agent namespace (netobserv-privileged by default).

Type

object

Property | Type | Description

caCert

object

caCert defines the reference of the certificate for the Certificate Authority

enable

boolean

enable TLS

insecureSkipVerify

boolean

insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true, the CACert field is ignored.

userCert

object

userCert defines the user certificate reference, used for mTLS (you can ignore it when using regular, one-way TLS)

.spec.kafka.tls.caCert

Description

caCert defines the reference of the certificate for the Certificate Authority

Type

object

Property | Type | Description

certFile

string

certFile defines the path to the certificate file name within the config map / Secret

certKey

string

certKey defines the path to the certificate private key file name within the config map / Secret. Omit when the key is not necessary.

name

string

name of the config map or Secret containing certificates

type

string

type for the certificate reference: config map or secret

.spec.kafka.tls.userCert

Description

userCert defines the user certificate reference, used for mTLS (you can ignore it when using regular, one-way TLS)

Type

object

Property | Type | Description

certFile

string

certFile defines the path to the certificate file name within the config map / Secret

certKey

string

certKey defines the path to the certificate private key file name within the config map / Secret. Omit when the key is not necessary.

name

string

name of the config map or Secret containing certificates

type

string

type for the certificate reference: config map or secret

.spec.loki

Description

loki, the flow store, client settings.

Type

object

Property | Type | Description

authToken

string

AuthToken describes the way to get a token to authenticate to Loki. DISABLED will not send any token with the request. HOST will use the local pod service account to authenticate to Loki. FORWARD will forward the user token; in this mode, pods that are not receiving user requests, such as the processor, will use the local pod service account, similar to the HOST mode.

batchSize

integer

batchSize is max batch size (in bytes) of logs to accumulate before sending

batchWait

string

batchWait is max time to wait before sending a batch

maxBackoff

string

maxBackoff is the maximum backoff time for client connection between retries

maxRetries

integer

maxRetries is the maximum number of retries for client connections

minBackoff

string

minBackoff is the initial backoff time for client connection between retries

querierUrl

string

querierURL specifies the address of the Loki querier service, in case it is different from the Loki ingester URL. If empty, the URL value will be used (assuming that the Loki ingester and querier are in the same server). Important: If you installed Loki using the Loki Operator, it is advised not to use querierUrl, as it can break the console access to Loki. If you installed Loki using another type of Loki installation, this does not apply.

staticLabels

object (string)

staticLabels is a map of common labels to set on each flow

statusUrl

string

statusURL specifies the address of the Loki /ready, /metrics and /config endpoints, in case it is different from the Loki querier URL. If empty, the QuerierURL value will be used. This is useful to show error messages and some context in the frontend.

tenantID

string

tenantID is the Loki X-Scope-OrgID that identifies the tenant for each request. It will be ignored if instanceSpec is specified.

timeout

string

timeout is the maximum connection / request time limit. A timeout of zero means no timeout.

tls

object

tls client configuration.

url

string

url is the address of an existing Loki service to push the flows to.
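
A sketch of the Loki client settings; the URL is an assumption for a Loki service reachable in-cluster, and the other values are illustrative.

spec:
  loki:
    url: http://loki.netobserv.svc:3100/   # assumption: address of an existing Loki service
    batchWait: 1s
    batchSize: 102400                      # bytes
    minBackoff: 1s
    maxBackoff: 5s
    maxRetries: 2
    tenantID: netobserv                    # sent as the X-Scope-OrgID header
    authToken: DISABLED                    # DISABLED, HOST, or FORWARD
    staticLabels:
      app: netobserv-flowcollector         # illustrative common label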

.spec.loki.tls

Description

tls client configuration.

Type

object

Property | Type | Description

caCert

object

caCert defines the reference of the certificate for the Certificate Authority

enable

boolean

enable TLS

insecureSkipVerify

boolean

insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true, the CACert field is ignored.

userCert

object

userCert defines the user certificate reference, used for mTLS (you can ignore it when using regular, one-way TLS)

.spec.loki.tls.caCert

Description

caCert defines the reference of the certificate for the Certificate Authority

Type

object

Property | Type | Description

certFile

string

certFile defines the path to the certificate file name within the config map / Secret

certKey

string

certKey defines the path to the certificate private key file name within the config map / Secret. Omit when the key is not necessary.

name

string

name of the config map or Secret containing certificates

type

string

type for the certificate reference: config map or secret

.spec.loki.tls.userCert

Description

userCert defines the user certificate reference, used for mTLS (you can ignore it when using regular, one-way TLS)

Type

object

Property | Type | Description

certFile

string

certFile defines the path to the certificate file name within the config map / Secret

certKey

string

certKey defines the path to the certificate private key file name within the config map / Secret. Omit when the key is not necessary.

name

string

name of the config map or Secret containing certificates

type

string

type for the certificate reference: config map or secret

.spec.processor

Description

processor defines the settings of the component that receives the flows from the agent, enriches them, and forwards them to the Loki persistence layer.

Type

object

Property | Type | Description

debug

object

Debug allows setting some aspects of the internal configuration of the flow processor. This section is aimed exclusively at debugging and fine-grained performance optimizations, such as the GOGC and GOMAXPROCS env vars. Users setting its values do so at their own risk.

dropUnusedFields

boolean

dropUnusedFields allows, when set to true, to drop fields that are known to be unused by OVS, to save storage space.

enableKubeProbes

boolean

enableKubeProbes is a flag to enable or disable Kubernetes liveness and readiness probes

healthPort

integer

healthPort is a collector HTTP port in the Pod that exposes the health check API

imagePullPolicy

string

imagePullPolicy is the Kubernetes pull policy for the image defined above

kafkaConsumerAutoscaler

object

kafkaConsumerAutoscaler spec of a horizontal pod autoscaler to set up for flowlogs-pipeline-transformer, which consumes Kafka messages. This setting is ignored when Kafka is disabled.

kafkaConsumerBatchSize

integer

kafkaConsumerBatchSize indicates to the broker the maximum batch size, in bytes, that the consumer will accept. Ignored when not using Kafka. Default: 10MB.

kafkaConsumerQueueCapacity

integer

kafkaConsumerQueueCapacity defines the capacity of the internal message queue used in the Kafka consumer client. Ignored when not using Kafka.

kafkaConsumerReplicas

integer

kafkaConsumerReplicas defines the number of replicas (pods) to start for flowlogs-pipeline-transformer, which consumes Kafka messages. This setting is ignored when Kafka is disabled.

logLevel

string

logLevel of the collector runtime

metrics

object

Metrics define the processor configuration regarding metrics

port

integer

port of the flow collector (host port). By convention, some values are not authorized: the port must not be below 1024 and must not equal the values 4789, 6081, 500, and 4500.

profilePort

integer

profilePort allows setting up a Go pprof profiler listening to this port

resources

object

resources are the compute resources required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
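
A sketch of the processor settings; values are illustrative, and the Kafka-related fields are only honored when spec.deploymentModel is KAFKA.

spec:
  processor:
    port: 2055                         # host port; must not be below 1024 or equal a reserved value
    logLevel: info
    dropUnusedFields: true
    enableKubeProbes: true
    kafkaConsumerReplicas: 3
    kafkaConsumerBatchSize: 10485760   # 10MB
    metrics:
      ignoreTags:
      - egress                         # illustrative tag
      - packets                        # illustrative tag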

.spec.processor.debug

Description

Debug allows setting some aspects of the internal configuration of the flow processor. This section is aimed exclusively at debugging and fine-grained performance optimizations, such as the GOGC and GOMAXPROCS env vars. Users setting its values do so at their own risk.

Type

object

Property | Type | Description

env

object (string)

env allows passing custom environment variables to the NetObserv Agent. Useful for passing specific performance-tuning options, such as GOGC and GOMAXPROCS, that shouldn’t be publicly exposed as part of the FlowCollector descriptor, as they are only useful in edge debugging and support scenarios.

.spec.processor.kafkaConsumerAutoscaler

Description

kafkaConsumerAutoscaler spec of a horizontal pod autoscaler to set up for flowlogs-pipeline-transformer, which consumes Kafka messages. This setting is ignored when Kafka is disabled. Please refer to HorizontalPodAutoscaler documentation (autoscaling/v2)

.spec.processor.metrics

Description

Metrics define the processor configuration regarding metrics

Type

object

Property | Type | Description

ignoreTags

array (string)

ignoreTags is a list of tags to specify which metrics to ignore

server

object

metricsServer endpoint configuration for Prometheus scraper

.spec.processor.metrics.server

Description

metricsServer endpoint configuration for Prometheus scraper

Type

object

Property | Type | Description

port

integer

the Prometheus HTTP port

tls

object

TLS configuration.

.spec.processor.metrics.server.tls

Description

TLS configuration.

Type

object

Property | Type | Description

provided

object

TLS configuration.

type

string

Select the type of TLS configuration: “DISABLED” (default) to not configure TLS for the endpoint, “PROVIDED” to manually provide a cert file and a key file, or “AUTO” to use the OKD auto-generated certificate using annotations
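
A sketch of exposing the metrics endpoint with the auto-generated certificate; the port value is illustrative.

spec:
  processor:
    metrics:
      server:
        port: 9102
        tls:
          type: AUTO        # DISABLED (default), PROVIDED, or AUTO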

.spec.processor.metrics.server.tls.provided

Description

TLS configuration.

Type

object

Property | Type | Description

certFile

string

certFile defines the path to the certificate file name within the config map / Secret

certKey

string

certKey defines the path to the certificate private key file name within the config map / Secret. Omit when the key is not necessary.

name

string

name of the config map or Secret containing certificates

type

string

type for the certificate reference: config map or secret

.spec.processor.resources

Description

resources are the compute resources required by this container. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

Type

object

Property | Type | Description

limits

integer-or-string

Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

requests

integer-or-string

Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

.status

Description

FlowCollectorStatus defines the observed state of FlowCollector

Type

object

Required

  • conditions

Property | Type | Description

conditions

array

conditions represent the latest available observations of an object’s state

conditions[]

object

Condition contains details for one aspect of the current state of this API Resource. This struct is intended for direct use as an array at the field path .status.conditions. For example: type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"` // other fields }

namespace

string

namespace where console plugin and flowlogs-pipeline have been deployed.
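
For illustration only, a reconciled FlowCollector typically reports a status shaped like the following sketch; the condition type, reason, and timestamp are assumptions that depend on the operator version.

status:
  namespace: netobserv
  conditions:
  - type: Ready                                  # illustrative condition type
    status: "True"
    reason: Ready
    message: "flow collection is up and running" # illustrative message
    lastTransitionTime: "2023-01-01T00:00:00Z"
    observedGeneration: 1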

.status.conditions

Description

conditions represent the latest available observations of an object’s state

Type

array

.status.conditions[]

Description

Condition contains details for one aspect of the current state of this API Resource. This struct is intended for direct use as an array at the field path .status.conditions. For example: type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions"` // other fields }

Type

object

Required

  • lastTransitionTime

  • message

  • reason

  • status

  • type

Property | Type | Description

lastTransitionTime

string

lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable.

message

string

message is a human readable message indicating details about the transition. This might be an empty string.

observedGeneration

integer

observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance.

reason

string

reason contains a programmatic identifier indicating the reason for the condition’s last transition. Producers of specific condition types might define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field must not be empty.

status

string

status of the condition, one of True, False, Unknown.

type

string

type of condition in CamelCase or in foo.example.com/CamelCase. Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)