Master and Node Configuration

You are viewing documentation for a release that is no longer supported. The latest supported version of version 3 is [3.11]. For the most recent version 4, see [4].

Customizing master and node configuration after installation

The openshift start command and its subcommands (master to launch a master server and node to launch a node server) take a limited set of arguments that are sufficient for launching servers in a development or experimental environment.

However, these arguments are insufficient to describe and control the full set of configuration and security options that are necessary in a production environment. You must provide those options in the master configuration file, at /etc/origin/master/master-config.yaml, and in the node configuration maps.

These files define options including overriding the default plug-ins, connecting to etcd, automatically creating service accounts, building image names, customizing project requests, configuring volume plug-ins, and much more.

This topic covers the available options for customizing your OKD master and node hosts, and shows you how to make changes to the configuration after installation.

These files are fully specified with no default values. Therefore, an empty value indicates that you want to start up with an empty value for that parameter. This makes it easy to reason about exactly what your configuration is, but it also makes it difficult to remember all of the options to specify. To make this easier, the configuration files can be created with the --write-config option and then used with the --config option.
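For example, you can write a fully specified master configuration to disk and then launch the master from it (the same commands are covered in Creating New Configuration Files later in this topic):

  $ openshift start master --write-config=/openshift.local.config/master
  $ openshift start master --config=/openshift.local.config/master/master-config.yaml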

Installation dependencies

Production environments should be installed using the standard cluster installation process. In production environments, it is a good idea to use multiple masters for the purposes of high availability (HA). A cluster architecture of three masters is recommended, and HAProxy is the recommended solution for this.

If etcd is installed on the master hosts, you must configure your cluster to use at least three masters. With only two masters, etcd cannot establish quorum to decide which one is authoritative. The only way to successfully run only two masters is if you install etcd on hosts other than the masters.

Configuring masters and nodes

The method you use to configure your master and node configuration files must match the method that was used to install your OKD cluster. If you followed the standard cluster installation process, then make your configuration changes in the Ansible inventory file.

If you followed the Manual installation method, then make your changes manually in the configuration files themselves.

To modify a node in your cluster, update the node configuration maps as needed. Do not manually edit the node-config.yaml file.

Making configuration changes using Ansible

For this section, familiarity with Ansible is assumed.

Only a portion of the available host configuration options are exposed to Ansible. After an OKD install, Ansible creates an inventory file with some substituted values. Modifying this inventory file and re-running the Ansible installer playbook is how you customize your OKD cluster.

While OKD uses Ansible for cluster installation, by way of an Ansible playbook and inventory file, you can also use other management tools, such as Puppet, Chef, or Salt.

Use Case: Configuring the cluster to use HTPasswd authentication

  • This use case assumes you have already set up SSH keys to all the nodes referenced in the playbook.

  • The htpasswd utility is in the httpd-tools package:

    # yum install httpd-tools

To modify the Ansible inventory and make configuration changes:

  1. Open the ./hosts inventory file.

  2. Add the following new variables to the [OSEv3:vars] section of the file:

    # htpasswd auth
    openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
    # Defining htpasswd users
    #openshift_master_htpasswd_users={'<name>': '<hashed-password>', '<name>': '<hashed-password>'}
    # or
    #openshift_master_htpasswd_file=/etc/origin/master/htpasswd

    For HTPasswd authentication, the openshift_master_identity_providers variable enables the authentication type. You can configure three different authentication options that use HTPasswd:

    • Specify only openshift_master_identity_providers if /etc/origin/master/htpasswd is already configured and present on the host.

    • Specify both openshift_master_identity_providers and openshift_master_htpasswd_file to copy a local htpasswd file to the host.

    • Specify both openshift_master_identity_providers and openshift_master_htpasswd_users to generate a new htpasswd file on the host.

    Because OKD requires a hashed password to configure HTPasswd authentication, you can use the htpasswd command, as shown in the following section, to generate the hashed passwords for your users or to create the flat file with the users and associated hashed passwords.

    The following example changes the authentication method from the default deny all setting to htpasswd and defines user IDs and hashed passwords for the jsmith and bloblaw users.

    # htpasswd auth
    openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
    # Defining htpasswd users
    openshift_master_htpasswd_users={'jsmith': '$apr1$wIwXkFLI$bAygtKGmPOqaJftB', 'bloblaw': '7IRJ$2ODmeLoxf4I6sUEKfiA$2aDJqLJe'}
    # or
    #openshift_master_htpasswd_file=/etc/origin/master/htpasswd
  3. Re-run the ansible playbook for these modifications to take effect:

    $ ansible-playbook -b -i ./hosts ~/src/openshift-ansible/playbooks/deploy_cluster.yml

    The playbook updates the configuration, and restarts the OKD master service to apply the changes.

You have now modified the master and node configuration files using Ansible, but this is just a simple use case. From here you can see which master and node configuration options are exposed to Ansible and customize your own Ansible inventory.

Using the htpasswd command

To configure the OKD cluster to use HTPasswd authentication, you need at least one user with a hashed password to include in the inventory file.

You can either create a user and hashed password on the command line or create a flat file that contains the user names and hashed passwords.

To create a user and hashed password:

  1. Run the following command to add the specified user:

    $ htpasswd -n <user_name>

    You can include the -b option to supply the password on the command line:

    $ htpasswd -nb <user_name> <password>
  2. Enter and confirm a clear-text password for the user.

    For example:

    $ htpasswd -n myuser
    New password:
    Re-type new password:
    myuser:$apr1$vdW.cI3j$WSKIOzUPs6Q

    The command generates a hashed version of the password.

You can then use the hashed password when configuring HTPasswd authentication. The hashed password is the string after the :. In the above example, you would enter:

  openshift_master_htpasswd_users={'myuser': '$apr1$vdW.cI3j$WSKIOzUPs6Q'}

To create a flat file with a user name and hashed password:

  1. Execute the following command:

    $ htpasswd -c /etc/origin/master/htpasswd <user_name>

    You can include the -b option to supply the password on the command line:

    $ htpasswd -c -b /etc/origin/master/htpasswd <user_name> <password>
  2. Enter and confirm a clear-text password for the user.

    For example:

    $ htpasswd -c /etc/origin/master/htpasswd user1
    New password:
    Re-type new password:
    Adding password for user user1

    The command generates a file that includes the user name and a hashed version of the user’s password.

You can then use the password file when configuring HTPasswd authentication.

For more information on the htpasswd command, see HTPasswd Identity Provider.

Making manual configuration changes

Use Case: Configuring the cluster to use HTPasswd authentication

To manually modify a configuration file:

  1. Open the configuration file you want to modify, which in this case is the /etc/origin/master/master-config.yaml file.

  2. Add the following new variables to the identityProviders stanza of the file:

    oauthConfig:
      ...
      identityProviders:
      - name: my_htpasswd_provider
        challenge: true
        login: true
        mappingMethod: claim
        provider:
          apiVersion: v1
          kind: HTPasswdPasswordIdentityProvider
          file: /etc/origin/master/htpasswd
  3. Save your changes and close the file.

  4. Restart the master for the changes to take effect:

    # master-restart api
    # master-restart controllers

You have now manually modified the master and node configuration files, but this is just a simple use case. From here you can see all the master and node configuration options, and further customize your own cluster by making further modifications.

To modify a node in your cluster, update the node configuration maps as needed. Do not manually edit the node-config.yaml file.

Master Configuration Files

This section reviews parameters mentioned in the master-config.yaml file.

You can create a new master configuration file to see the valid options for your installed version of OKD.

Whenever you modify the master-config.yaml file, you must restart the master for the changes to take effect. See Restarting OKD services.

Admission Control Configuration

Table 1. Admission Control Configuration Parameters
Parameter Name | Description

AdmissionConfig

Contains the admission control plug-in configuration. OKD has a configurable list of admission controller plug-ins that are triggered whenever API objects are created or modified. This option allows you to override the default list of plug-ins; for example, disabling some plug-ins, adding others, changing the ordering, and specifying configuration. Both the list of plug-ins and their configuration can be controlled from Ansible.

APIServerArguments

Key-value pairs that will be passed directly to the Kube API server that match the API servers’ command line arguments. These are not migrated, but if you reference a value that does not exist the server will not start. These values may override other settings in KubernetesMasterConfig, which may cause invalid configurations. Use APIServerArguments with the event-ttl value to store events in etcd. The default is 2h, but it can be set to less to prevent memory growth:

  apiServerArguments:
    event-ttl:
    - 15m

ControllerArguments

Key-value pairs that will be passed directly to the Kube controller manager that match the controller manager’s command line arguments. These are not migrated, but if you reference a value that does not exist the server will not start. These values may override other settings in KubernetesMasterConfig, which may cause invalid configurations.

DefaultAdmissionConfig

Used to enable or disable various admission plug-ins. When this type is present as the configuration object under pluginConfig and if the admission plug-in supports it, this will cause an off-by-default admission plug-in to be enabled.

PluginConfig

Allows specifying a configuration file per admission control plug-in.

PluginOrderOverride

A list of admission control plug-in names that will be installed on the master. Order is significant. If empty, a default list of plug-ins is used.

SchedulerArguments

Key-value pairs that will be passed directly to the Kube scheduler that match the scheduler’s command line arguments. These are not migrated, but if you reference a value that does not exist the server will not start. These values may override other settings in KubernetesMasterConfig, which may cause invalid configurations.

Asset Configuration

Table 2. Asset Configuration Parameters
Parameter Name | Description

AssetConfig

If present, then the asset server starts based on the defined parameters. For example:

  assetConfig:
    logoutURL: ""
    masterPublicURL: https://master.ose32.example.com:8443
    publicURL: https://master.ose32.example.com:8443/console/
    servingInfo:
      bindAddress: 0.0.0.0:8443
      bindNetwork: tcp4
      certFile: master.server.crt
      clientCA: ""
      keyFile: master.server.key
      maxRequestsInFlight: 0
      requestTimeoutSeconds: 0

corsAllowedOrigins

To access the API server from a web application using a different host name, you must whitelist that host name by specifying corsAllowedOrigins in the configuration field or by specifying the --cors-allowed-origins option on openshift start. No pinning or escaping is done to the value. See Web Console for example usage.
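For example, a minimal corsAllowedOrigins sketch in master-config.yaml; the host names are placeholders for your own origins:

  corsAllowedOrigins:
  - 127.0.0.1
  - localhost
  - master.ose32.example.com:8443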

DisabledFeatures

A list of features that should not be started. You will likely want to set this as null. It is very unlikely that anyone will want to manually disable features and that is not encouraged.

Extensions

Files to serve from the asset server file system under a subcontext.

ExtensionDevelopment

When set to true, tells the asset server to reload extension scripts and stylesheets for every request rather than only at startup. It lets you develop extensions without having to restart the server for every change.

ExtensionProperties

Key (string) and value (string) pairs that will be injected into the console under the global variable OPENSHIFT_EXTENSION_PROPERTIES.

ExtensionScripts

File paths on the asset server to load as scripts when the web console loads.

ExtensionStylesheets

File paths on the asset server to load as style sheets when the web console loads.

LoggingPublicURL

The public endpoint for logging (optional).

LogoutURL

An optional, absolute URL to redirect web browsers to after logging out of the web console. If not specified, the built-in logout page is shown.

MasterPublicURL

How the web console can access the OKD server.

MetricsPublicURL

The public endpoint for metrics (optional).

PublicURL

URL of the asset server.

Authentication and Authorization Configuration

Table 3. Authentication and Authorization Parameters
Parameter Name | Description

authConfig

Holds authentication and authorization configuration options.

AuthenticationCacheSize

Indicates how many authentication results should be cached. If 0, the default cache size is used.

AuthorizationCacheTTL

Indicates how long an authorization result should be cached. It takes a valid time duration string (e.g. “5m”). If empty, you get the default timeout. If zero (e.g. “0m”), caching is disabled.

Controller Configuration

Table 4. Controller Configuration Parameters
Parameter Name | Description

Controllers

List of the controllers that should be started. If set to none, no controllers will start automatically. The default value is *, which starts all controllers. When using *, you may exclude controllers by prepending a - in front of their name. No other values are recognized at this time.
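For example, a minimal sketch of the default setting in master-config.yaml, which starts every controller:

  controllers: '*'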

ControllerLeaseTTL

Enables controller election, instructing the master to attempt to acquire a lease before controllers start and renewing it within a number of seconds defined by this value. Setting this value non-negative forces pauseControllers=true. This value defaults off (0, or omitted) and controller election can be disabled with -1.

PauseControllers

Instructs the master to not automatically start controllers, but instead to wait until a notification to the server is received before launching them.

etcd Configuration

Table 5. etcd Configuration Parameters
Parameter Name | Description

Address

The advertised host:port for client connections to etcd.

etcdClientInfo

Contains information about how to connect to etcd. Specifies if etcd is run as embedded or non-embedded, and the hosts. The rest of the configuration is handled by the Ansible inventory. For example:

  etcdClientInfo:
    ca: ca.crt
    certFile: master.etcd-client.crt
    keyFile: master.etcd-client.key
    urls:
    - https://m1.aos.example.com:4001

etcdConfig

If present, then etcd starts based on the defined parameters. For example:

  etcdConfig:
    address: master.ose32.example.com:4001
    peerAddress: master.ose32.example.com:7001
    peerServingInfo:
      bindAddress: 0.0.0.0:7001
      certFile: etcd.server.crt
      clientCA: ca.crt
      keyFile: etcd.server.key
    servingInfo:
      bindAddress: 0.0.0.0:4001
      certFile: etcd.server.crt
      clientCA: ca.crt
      keyFile: etcd.server.key
    storageDirectory: /var/lib/origin/openshift.local.etcd

etcdStorageConfig

Contains information about how API resources are stored in etcd. These values are only relevant when etcd is the backing store for the cluster.

KubernetesStoragePrefix

The path within etcd that the Kubernetes resources will be rooted under. This value, if changed, will mean existing objects in etcd will no longer be located. The default value is kubernetes.io.

KubernetesStorageVersion

The API version that Kubernetes resources in etcd should be serialized to. This value should not be advanced until all clients in the cluster that read from etcd have code that allows them to read the new version.

OpenShiftStoragePrefix

The path within etcd that the OKD resources will be rooted under. This value, if changed, will mean existing objects in etcd will no longer be located. The default value is openshift.io.

OpenShiftStorageVersion

The API version that OKD resources in etcd should be serialized to. This value should not be advanced until all clients in the cluster that read from etcd have code that allows them to read the new version.

PeerAddress

The advertised host:port for peer connections to etcd.

PeerServingInfo

Describes how to start serving the etcd peer.

ServingInfo

Describes how to start serving. For example:

  servingInfo:
    bindAddress: 0.0.0.0:8443
    bindNetwork: tcp4
    certFile: master.server.crt
    clientCA: ca.crt
    keyFile: master.server.key
    maxRequestsInFlight: 500
    requestTimeoutSeconds: 3600

StorageDir

The path to the etcd storage directory.

Grant Configuration

Table 6. Grant Configuration Parameters
Parameter Name | Description

GrantConfig

Describes how to handle grants.

GrantHandlerAuto

Auto-approves client authorization grant requests.

GrantHandlerDeny

Auto-denies client authorization grant requests.

GrantHandlerPrompt

Prompts the user to approve new client authorization grant requests.

Method

Determines the default strategy to use when an OAuth client requests a grant. This method will be used only if the specific OAuth client does not provide a strategy of its own. Valid grant handling methods are:

  • auto: always approves grant requests, useful for trusted clients

  • prompt: prompts the end user for approval of grant requests, useful for third-party clients

  • deny: always denies grant requests, useful for black-listed clients

Image Configuration

Table 7. Image Configuration Parameters
Parameter Name | Description

Format

The format of the name to be built for the system component.

Latest

Determines if the latest tag will be pulled from the registry.
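In master-config.yaml, these parameters live under the imageConfig stanza. A minimal sketch; the format string shown is an example value:

  imageConfig:
    format: openshift/origin-${component}:${version}
    latest: false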

Image Policy Configuration

Table 8. Image Policy Configuration Parameters
Parameter Name | Description

DisableScheduledImport

Allows scheduled background import of images to be disabled.

MaxImagesBulkImportedPerRepository

Controls the number of images that are imported when a user does a bulk import of a Docker repository. This number defaults to 5 to prevent users from importing large numbers of images accidentally. Set to -1 for no limit.

MaxScheduledImageImportsPerMinute

The maximum number of scheduled image streams that will be imported in the background per minute. The default value is 60.

ScheduledImageImportMinimumIntervalSeconds

The minimum number of seconds that can elapse between checks of image streams scheduled for background import against the upstream repository. The default value is 900 (15 minutes).

AllowedRegistriesForImport

Limits the docker registries that normal users may import images from. Set this list to the registries that you trust to contain valid Docker images and that you want applications to be able to import from. Users with permission to create Images or ImageStreamMappings via the API are not affected by this policy - typically only administrators or system integrations will have those permissions.

InternalRegistryHostname

Sets the hostname for the default internal image registry. The value must be in hostname[:port] format. For backward compatibility, users can still use OPENSHIFT_DEFAULT_REGISTRY environment variable but this setting overrides the environment variable. When this is set, the internal registry must have its hostname set as well. See setting the registry hostname for more details.

ExternalRegistryHostname

Sets the hostname for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The value is used in the publicDockerImageRepository field in image streams. The value must be in hostname[:port] format.
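A sketch of an imagePolicyConfig stanza using the defaults described above; the registry host name is a placeholder:

  imagePolicyConfig:
    disableScheduledImport: false
    internalRegistryHostname: docker-registry.default.svc:5000
    maxImagesBulkImportedPerRepository: 5
    maxScheduledImageImportsPerMinute: 60
    scheduledImageImportMinimumIntervalSeconds: 900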

Kubernetes Master Configuration

Table 9. Kubernetes Master Configuration Parameters
Parameter Name | Description

APILevels

A list of API levels that should be enabled on startup; for example, v1.

DisabledAPIGroupVersions

A map of groups to the versions (or *) that should be disabled.

KubeletClientInfo

Contains information about how to connect to kubelets.

KubernetesMasterConfig

Holds the configuration for the Kubernetes master. If present, the Kubernetes master is started by this process.

MasterCount

The number of expected masters that should be running. This value defaults to 1 and may be set to a positive integer, or if set to -1, indicates this is part of a cluster.

MasterIP

The public IP address of Kubernetes resources. If empty, the first result from net.InterfaceAddrs will be used.

MasterKubeConfig

File name for the .kubeconfig file that describes how to connect this node to the master.

ServicesNodePortRange

The range to use for assigning service public ports on a host. Default 30000-32767.

ServicesSubnet

The subnet to use for assigning service IPs.

StaticNodeNames

The list of nodes that are statically known.
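A minimal kubernetesMasterConfig sketch combining several of these parameters; the IP address and subnets are placeholders:

  kubernetesMasterConfig:
    apiLevels:
    - v1
    masterCount: 1
    masterIP: 10.0.0.1
    servicesNodePortRange: 30000-32767
    servicesSubnet: 172.30.0.0/16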

Network Configuration

Choose the CIDRs in the following parameters carefully, because the IPv4 address space is shared by all users of the nodes. OKD reserves some CIDRs from the IPv4 address space for its own internal use, and other CIDRs for addresses that are shared between the external user and the cluster.

Table 10. Network Configuration Parameters
Parameter Name | Description

ClusterNetworkCIDR

The CIDR string to specify the global overlay network’s L3 space. This is reserved for the internal use of the cluster networking.

externalIPNetworkCIDRs

Controls what values are acceptable for the service external IP field. If empty, no externalIP may be set. It may contain a list of CIDRs which are checked for access. If a CIDR is prefixed with !, IPs in that CIDR will be rejected. Rejections will be applied first, then the IP checked against one of the allowed CIDRs. You must ensure this range does not overlap with your nodes, pods, or service CIDRs for security reasons.
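For example, the following sketch rejects one subnet and allows the rest of a larger range; the CIDRs are placeholders:

  networkConfig:
    externalIPNetworkCIDRs:
    - "!172.16.10.0/24"
    - 172.16.0.0/16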

HostSubnetLength

The number of bits to allocate to each host’s subnet. For example, 8 would mean a /24 network on the host.

ingressIPNetworkCIDR

Controls the range to assign ingress IPs from for services of type LoadBalancer on bare metal. It may contain a single CIDR that it will be allocated from. By default 172.46.0.0/16 is configured. For security reasons, you should ensure that this range does not overlap with the CIDRs reserved for external IPs, nodes, pods, or services.

NetworkConfig

To be passed to the compiled-in network plug-in. Many of the options here can be controlled in the Ansible inventory.

  • NetworkPluginName (string)

  • ClusterNetworkCIDR (string)

  • HostSubnetLength (unsigned integer)

  • ServiceNetworkCIDR (string)

  • externalIPNetworkCIDRs (string array): Controls which values are acceptable for the service external IP field. If empty, no external IP may be set. It can contain a list of CIDRs which are checked for access. If a CIDR is prefixed with !, then IPs in that CIDR are rejected. Rejections are applied first, then the IP is checked against one of the allowed CIDRs. For security purposes, you should ensure this range does not overlap with your nodes, pods, or service CIDRs.

For example:

  networkConfig:
    clusterNetworks:
    - cidr: 10.3.0.0/16
      hostSubnetLength: 8
    networkPluginName: example/openshift-ovs-subnet
    # serviceNetworkCIDR must match kubernetesMasterConfig.servicesSubnet
    serviceNetworkCIDR: 179.29.0.0/16

NetworkPluginName

The name of the network plug-in to use.

ServiceNetwork

The CIDR string to specify the service networks.

OAuth Authentication Configuration

Table 11. OAuth Configuration Parameters
Parameter Name | Description

AlwaysShowProviderSelection

Forces the provider selection page to render even when there is only a single provider.

AssetPublicURL

Used for building valid client redirect URLs for external access.

Error

A path to a file containing a go template used to render error pages during the authentication or grant flow. If unspecified, the default error page is used.

IdentityProviders

Ordered list of ways for a user to identify themselves.

Login

A path to a file containing a go template used to render the login page. If unspecified, the default login page is used.

MasterCA

CA for verifying the TLS connection back to the MasterURL.

MasterPublicURL

Used for building valid client redirect URLs for external access.

MasterURL

Used for making server-to-server calls to exchange authorization codes for access tokens.

OAuthConfig

If present, then the /oauth endpoint starts based on the defined parameters. For example:

  oauthConfig:
    assetPublicURL: https://master.ose32.example.com:8443/console/
    grantConfig:
      method: auto
    identityProviders:
    - challenge: true
      login: true
      mappingMethod: claim
      name: htpasswd_all
      provider:
        apiVersion: v1
        kind: HTPasswdPasswordIdentityProvider
        file: /etc/origin/openshift-passwd
    masterCA: ca.crt
    masterPublicURL: https://master.ose32.example.com:8443
    masterURL: https://master.ose32.example.com:8443
    sessionConfig:
      sessionMaxAgeSeconds: 3600
      sessionName: ssn
      sessionSecretsFile: /etc/origin/master/session-secrets.yaml
    tokenConfig:
      accessTokenMaxAgeSeconds: 86400
      authorizeTokenMaxAgeSeconds: 500

OAuthTemplates

Allows for customization of pages like the login page.

ProviderSelection

A path to a file containing a go template used to render the provider selection page. If unspecified, the default provider selection page is used.

SessionConfig

Holds information about configuring sessions.

Templates

Allows you to customize pages like the login page.

TokenConfig

Contains options for authorization and access tokens.

Project Configuration

Table 12. Project Configuration Parameters
Parameter Name | Description

DefaultNodeSelector

Holds default project node label selector.

ProjectConfig

Holds information about project creation and defaults:

  • DefaultNodeSelector (string): Holds the default project node label selector.

  • ProjectRequestMessage (string): The string presented to a user if they are unable to request a project via the projectrequest API endpoint.

  • ProjectRequestTemplate (string): The template to use for creating projects in response to projectrequest. It is in the format <namespace>/<template>. It is optional, and if it is not specified, a default template is used.

  • SecurityAllocator: Controls the automatic allocation of UIDs and MCS labels to a project. If nil, allocation is disabled:

    • mcsAllocatorRange (string): Defines the range of MCS categories that will be assigned to namespaces. The format is <prefix>/<numberOfLabels>[,<maxCategory>]. The default is s0/2 and will allocate from c0 → c1023, which means a total of 535k labels are available. If this value is changed after startup, new projects may receive labels that are already allocated to other projects. The prefix may be any valid SELinux set of terms (including user, role, and type). However, leaving the prefix at its default allows the server to set them automatically. For example, s0:/2 would allocate labels from s0:c0,c0 to s0:c511,c511 whereas s0:/2,512 would allocate labels from s0:c0,c0,c0 to s0:c511,c511,511.

    • mcsLabelsPerProject (integer): Defines the number of labels to reserve per project. The default is 5 to match the default UID and MCS ranges.

    • uidAllocatorRange (string): Defines the total set of Unix user IDs (UIDs) automatically allocated to projects, and the size of the block that each namespace gets. For example, 1000-1999/10 would allocate ten UIDs per namespace, and would be able to allocate up to 100 blocks before running out of space. The default is to allocate from 1 billion to 2 billion in 10k blocks, which is the expected size of ranges for container images when user namespaces are started.

ProjectRequestMessage

The string presented to a user if they are unable to request a project via the project request API endpoint.

ProjectRequestTemplate

The template to use for creating projects in response to a projectrequest. It is in the format namespace/template and it is optional. If it is not specified, a default template is used.
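A projectConfig sketch using the defaults described above; the template reference is a placeholder:

  projectConfig:
    defaultNodeSelector: ""
    projectRequestMessage: ""
    projectRequestTemplate: "default/project-request"
    securityAllocator:
      mcsAllocatorRange: s0:/2
      mcsLabelsPerProject: 5
      uidAllocatorRange: 1000000000-1999999999/10000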

Scheduler Configuration

Table 13. Scheduler Configuration Parameters
Parameter Name | Description

SchedulerConfigFile

Points to a file that describes how to set up the scheduler. If empty, you get the default scheduling rules.
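For example; the file path is a placeholder:

  schedulerConfigFile: /etc/origin/master/scheduler.json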

Security Allocator Configuration

Table 14. Security Allocator Parameters
Parameter Name | Description

MCSAllocatorRange

Defines the range of MCS categories that will be assigned to namespaces. The format is <prefix>/<numberOfLabels>[,<maxCategory>]. The default is s0/2 and will allocate from c0 to c1023, which means a total of 535k labels are available (1024 choose 2 ~ 535k). If this value is changed after startup, new projects may receive labels that are already allocated to other projects. Prefix may be any valid SELinux set of terms (including user, role, and type), although leaving them as the default will allow the server to set them automatically.

SecurityAllocator

Controls the automatic allocation of UIDs and MCS labels to a project. If nil, allocation is disabled.

UIDAllocatorRange

Defines the total set of Unix user IDs (UIDs) that will be allocated to projects automatically, and the size of the block that each namespace gets. For example, 1000-1999/10 will allocate ten UIDs per namespace, and will be able to allocate up to 100 blocks before running out of space. The default is to allocate from 1 billion to 2 billion in 10k blocks (which is the expected size of the ranges container images will use once user namespaces are started).

Service Account Configuration

Table 15. Service Account Configuration Parameters
Parameter Name | Description

LimitSecretReferences

Controls whether or not to allow a service account to reference any secret in a namespace without explicitly referencing them.

ManagedNames

A list of service account names that will be auto-created in every namespace. If no names are specified, the ServiceAccountsController will not be started.

MasterCA

The CA for verifying the TLS connection back to the master. The service account controller will automatically inject the contents of this file into pods so they can verify connections to the master.

PrivateKeyFile

A file containing a PEM-encoded private RSA key, used to sign service account tokens. If no private key is specified, the service account TokensController will not be started.

PublicKeyFiles

A list of files, each containing a PEM-encoded public RSA key. If any file contains a private key, the public portion of the key is used. The list of public keys is used to verify presented service account tokens. Each key is tried in order until the list is exhausted or verification succeeds. If no keys are specified, no service account authentication will be available.

ServiceAccountConfig

Holds options related to service accounts:

  • LimitSecretReferences (boolean): Controls whether or not to allow a service account to reference any secret in a namespace without explicitly referencing them.

  • ManagedNames (string): A list of service account names that will be auto-created in every namespace. If no names are specified, then the ServiceAccountsController will not be started.

  • MasterCA (string): The certificate authority for verifying the TLS connection back to the master. The service account controller will automatically inject the contents of this file into pods so that they can verify connections to the master.

  • PrivateKeyFile (string): Contains a PEM-encoded private RSA key, used to sign service account tokens. If no private key is specified, then the service account TokensController will not be started.

  • PublicKeyFiles (string): A list of files, each containing a PEM-encoded public RSA key. If any file contains a private key, then OKD uses the public portion of the key. The list of public keys is used to verify service account tokens; each key is tried in order until either the list is exhausted or verification succeeds. If no keys are specified, then service account authentication will not be available.
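A serviceAccountConfig sketch; the managed names shown are the service accounts OKD typically auto-creates, and the key file names follow the conventions used elsewhere in this file:

  serviceAccountConfig:
    limitSecretReferences: false
    managedNames:
    - default
    - builder
    - deployer
    masterCA: ca.crt
    privateKeyFile: serviceaccounts.private.key
    publicKeyFiles:
    - serviceaccounts.public.key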

Serving Information Configuration

Table 16. Serving Information Configuration Parameters
Parameter Name | Description

AllowRecursiveQueries

Allows the DNS server on the master to answer queries recursively. Note that open resolvers can be used for DNS amplification attacks and the master DNS should not be made accessible to public networks.

BindAddress

The ip:port to serve on.

BindNetwork

Controls the network to use for the BindAddress: tcp for both IPv4 and IPv6, tcp4 for IPv4 only, or tcp6 for IPv6 only.

CertFile

A file containing a PEM-encoded certificate.

CertInfo

TLS cert information for serving secure traffic.

ClientCA

The certificate bundle for all the signers that you recognize for incoming client certificates.

dnsConfig

If present, then start the DNS server based on the defined parameters. For example:

  dnsConfig:
    bindAddress: 0.0.0.0:8053
    bindNetwork: tcp4

DNSDomain

Holds the domain suffix.

DNSIP

Holds the IP address.

KeyFile

A file containing a PEM-encoded private key for the certificate specified by CertFile.

MasterClientConnectionOverrides

Provides overrides to the client connection used to connect to the master. This parameter is not supported. To set QPS and burst values, see Setting Node QPS and Burst Values.

MaxRequestsInFlight

The number of concurrent requests allowed to the server. If zero, no limit.

NamedCertificates

A list of certificates to use to secure requests to specific host names.
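A namedCertificates sketch inside the servingInfo stanza; the certificate files and host name are placeholders:

  servingInfo:
    ...
    namedCertificates:
    - certFile: custom.crt
      keyFile: custom.key
      names:
      - "customdomain.example.com"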

RequestTimeoutSeconds

The number of seconds before requests are timed out. The default is 60 minutes. If -1, there is no limit on requests.

ServingInfo

The HTTP serving information for the assets.

Volume Configuration

Table 17. Volume Configuration Parameters
Parameter Name | Description

DynamicProvisioningEnabled

A boolean to enable or disable dynamic provisioning. Default is true.

FSGroup

Enables local storage quotas on each node for each FSGroup. At present this is only implemented for emptyDir volumes, and if the underlying volumeDirectory is on an XFS filesystem.

MasterVolumeConfig

Contains options for configuring volume plug-ins in the master node.

NodeVolumeConfig

Contains options for configuring volumes on the node.

VolumeConfig

Contains options for configuring volume plug-ins in the node:

  • DynamicProvisioningEnabled (boolean): Default value is true, and toggles dynamic provisioning off when false.

VolumeDirectory

The directory that volumes are stored under.

Basic Audit

Audit provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system.

Audit works at the API server level, logging all requests coming to the server. Each audit log contains two entries:

  1. The request line containing:

    1. A unique ID that allows matching the response line (see #2)

    2. The source IP of the request

    3. The HTTP method being invoked

    4. The original user invoking the operation

    5. The impersonated user for the operation (self meaning the user acted on their own behalf)

    6. The impersonated group for the operation (lookup meaning the user's own groups)

    7. The namespace of the request or <none>

    8. The URI as requested

  2. The response line containing:

    1. The unique ID from #1

    2. The response code

Example output for user admin asking for a list of pods:

  AUDIT: id="5c3b8227-4af9-4322-8a71-542231c3887b" ip="127.0.0.1" method="GET" user="admin" as="<self>" asgroups="<lookup>" namespace="default" uri="/api/v1/namespaces/default/pods"
  AUDIT: id="5c3b8227-4af9-4322-8a71-542231c3887b" response="200"

The openshift_master_audit_config variable enables API service auditing. It takes an array of the following options:

Table 18. Audit Configuration Parameters
Parameter Name | Description

enabled

A boolean to enable or disable audit logs. Default is false.

auditFilePath

The file path where requests should be logged. If not set, entries are written to the master logs.

maximumFileRetentionDays

Specifies maximum number of days to retain old audit log files based on the time stamp encoded in their filename.

maximumRetainedFiles

Specifies the maximum number of old audit log files to retain.

maximumFileSizeMegabytes

Specifies maximum size in megabytes of the log file before it gets rotated. Defaults to 100MB.

Because the OKD master API now runs as a static pod, you must define the auditFilePath location in a directory that is mounted into the pod, such as /var/lib/origin or /etc/origin/master.

Example Audit Configuration

  auditConfig:
    auditFilePath: "/var/lib/origin/audit-ocp.log"
    enabled: true
    maximumFileRetentionDays: 10
    maximumFileSizeMegabytes: 10
    maximumRetainedFiles: 10

Advanced Setup for the Audit Log

The directory /var/lib/origin will be created if it does not exist.

You can specify advanced audit log parameters by using the following parameter value format:

  openshift_master_audit_config={"enabled": true, "auditFilePath": "/var/lib/origin/openpaas-oscp-audit.log", "maximumFileRetentionDays": 14, "maximumFileSizeMegabytes": 500, "maximumRetainedFiles": 5}

Advanced Audit

The advanced audit feature provides several improvements over the basic audit functionality, including fine-grained events filtering and multiple output back ends.

To enable the advanced audit feature, provide the following values in the openshift_master_audit_config parameter:

  openshift_master_audit_config={"enabled": true, "auditFilePath": "/var/lib/origin/oscp-audit.log", "maximumFileRetentionDays": 14, "maximumFileSizeMegabytes": 500, "maximumRetainedFiles": 5, "policyFile": "/etc/origin/master/adv-audit.yaml", "logFormat":"json"}

The policy file /etc/origin/master/adv-audit.yaml must be available on each master node.

The following table contains additional options you can use.

Table 19. Advanced Audit Configuration Parameters
Parameter Name | Description

policyFile

Path to the file that defines the audit policy configuration.

policyConfiguration

An embedded audit policy configuration.

logFormat

Specifies the format of the saved audit logs. Allowed values are legacy (the format used in basic audit), and json.

webHookKubeConfig

Path to a .kubeconfig-formatted file that defines the audit webhook configuration, where the events are sent to.

webHookMode

Specifies the strategy for sending audit events. Allowed values are block (blocks processing another event until the previous one has been fully processed) and batch (buffers events and delivers them in batches).

To enable the advanced audit feature, you must provide either policyFile or policyConfiguration describing the audit policy rules:

Sample Audit Policy Configuration

  apiVersion: audit.k8s.io/v1beta1
  kind: Policy
  rules:
  # Do not log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None (1)
    users: ["system:kube-proxy"] (2)
    verbs: ["watch"] (3)
    resources: (4)
    - group: ""
      resources: ["endpoints", "services"]
  # Do not log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"] (5)
    nonResourceURLs: (6)
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"] (7)
  # Log configmap and secret changes in all other namespaces at the metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata (1)
  # Log login failures from the web console or CLI. Review the logs and refine your policies.
  - level: Metadata
    nonResourceURLs:
    - /login* (8)
    - /oauth* (9)
(1) There are four possible levels every event can be logged at:
  • None - Do not log events that match this rule.

  • Metadata - Log request metadata (requesting user, time stamp, resource, verb, etc.), but not request or response body. This is the same level as the one used in basic audit.

  • Request - Log event metadata and request body, but not response body.

  • RequestResponse - Log event metadata, request, and response bodies.

(2) A list of users the rule applies to. An empty list implies every user.
(3) A list of verbs this rule applies to. An empty list implies every verb. These are the Kubernetes verbs associated with API requests (including get, list, watch, create, update, patch, delete, deletecollection, and proxy).
(4) A list of resources the rule applies to. An empty list implies every resource. Each resource is specified as the group it is assigned to (for example, an empty string for the Kubernetes core API, batch, build.openshift.io, etc.), and a resource list from that group.
(5) A list of groups the rule applies to. An empty list implies every group.
(6) A list of non-resource URLs the rule applies to.
(7) A list of namespaces the rule applies to. An empty list implies every namespace.
(8) Endpoint used by the web console.
(9) Endpoint used by the CLI.

For more information on advanced audit, see the Kubernetes documentation.

Specifying TLS ciphers for etcd

You can specify the supported TLS ciphers to use in communication between the master and etcd servers.

  1. On each etcd node, upgrade etcd:

    # yum update etcd iptables-services
  2. Confirm that your etcd version is 3.2.22 or later:

    # etcd --version
    etcd Version: 3.2.22
  3. On each master host, specify the ciphers to enable in the /etc/origin/master/master-config.yaml file:

    servingInfo:
      ...
      minTLSVersion: VersionTLS12
      cipherSuites:
      - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
      - TLS_RSA_WITH_AES_256_CBC_SHA
      - TLS_RSA_WITH_AES_128_CBC_SHA
      ...
  4. On each master host, restart the master service:

    # master-restart api
    # master-restart controllers
  5. Confirm that the cipher is applied. For example, for TLSv1.2 cipher ECDHE-RSA-AES128-GCM-SHA256, run the following command:

    # openssl s_client -connect etcd1.example.com:2379 (1)
    CONNECTED(00000003)
    depth=0 CN = etcd1.example.com
    verify error:num=20:unable to get local issuer certificate
    verify return:1
    depth=0 CN = etcd1.example.com
    verify error:num=21:unable to verify the first certificate
    verify return:1
    139905367488400:error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate:s3_pkt.c:1493:SSL alert number 42
    139905367488400:error:140790E5:SSL routines:ssl23_write:ssl handshake failure:s23_lib.c:177:
    ---
    Certificate chain
    0 s:/CN=etcd1.example.com
    i:/CN=etcd-signer@1529635004
    ---
    Server certificate
    -----BEGIN CERTIFICATE-----
    MIIEkjCCAnqgAwIBAgIBATANBgkqhkiG9w0BAQsFADAhMR8wHQYDVQQDDBZldGNk
    ........
    ....
    eif87qttt0Sl1vS8DG1KQO1oOBlNkg==
    -----END CERTIFICATE-----
    subject=/CN=etcd1.example.com
    issuer=/CN=etcd-signer@1529635004
    ---
    Acceptable client certificate CA names
    /CN=etcd-signer@1529635004
    Client Certificate Types: RSA sign, ECDSA sign
    Requested Signature Algorithms: RSA+SHA256:ECDSA+SHA256:RSA+SHA384:ECDSA+SHA384:RSA+SHA1:ECDSA+SHA1
    Shared Requested Signature Algorithms: RSA+SHA256:ECDSA+SHA256:RSA+SHA384:ECDSA+SHA384:RSA+SHA1:ECDSA+SHA1
    Peer signing digest: SHA384
    Server Temp Key: ECDH, P-256, 256 bits
    ---
    SSL handshake has read 1666 bytes and written 138 bytes
    ---
    New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
    Server public key is 2048 bit
    Secure Renegotiation IS supported
    Compression: NONE
    Expansion: NONE
    No ALPN negotiated
    SSL-Session:
      Protocol : TLSv1.2
      Cipher : ECDHE-RSA-AES128-GCM-SHA256
      Session-ID:
      Session-ID-ctx:
      Master-Key: 1EFA00A91EE5FC5EDDCFC67C8ECD060D44FD3EB23D834EDED929E4B74536F273C0F9299935E5504B562CD56E76ED208D
      Key-Arg : None
      Krb5 Principal: None
      PSK identity: None
      PSK identity hint: None
      Start Time: 1529651744
      Timeout : 300 (sec)
      Verify return code: 21 (unable to verify the first certificate)
    (1) etcd1.example.com is the name of an etcd host.

Node Configuration Files

During installation, OKD creates a configmap in the openshift-node project for each type of node group:

  • node-config-master

  • node-config-infra

  • node-config-compute

  • node-config-all-in-one

  • node-config-master-infra

To make configuration changes to an existing node, edit the appropriate configuration map. A sync pod on each node watches for changes in the configuration maps. During installation, the sync pods are created by using sync DaemonSets, and a /etc/origin/node/node-config.yaml file, where the node configuration parameters reside, is added to each node. When a sync pod detects a configuration map change, it updates the node-config.yaml on all nodes in that node group and restarts the appropriate nodes.

  $ oc get cm -n openshift-node
  NAME                       DATA      AGE
  node-config-all-in-one     1         1d
  node-config-compute        1         1d
  node-config-infra          1         1d
  node-config-master         1         1d
  node-config-master-infra   1         1d

Sample configuration map for the node-config-compute group

  apiVersion: v1
  authConfig: (1)
    authenticationCacheSize: 1000
    authenticationCacheTTL: 5m
    authorizationCacheSize: 1000
    authorizationCacheTTL: 5m
  dnsBindAddress: 127.0.0.1:53
  dnsDomain: cluster.local
  dnsIP: 0.0.0.0 (2)
  dnsNameservers: null
  dnsRecursiveResolvConf: /etc/origin/node/resolv.conf
  dockerConfig:
    dockerShimRootDirectory: /var/lib/dockershim
    dockerShimSocket: /var/run/dockershim.sock
    execHandlerName: native
  enableUnidling: true
  imageConfig:
    format: registry.reg-aws.openshift.com/openshift3/ose-${component}:${version}
    latest: false
  iptablesSyncPeriod: 30s
  kind: NodeConfig
  kubeletArguments: (3)
    bootstrap-kubeconfig:
    - /etc/origin/node/bootstrap.kubeconfig
    cert-dir:
    - /etc/origin/node/certificates
    cloud-config:
    - /etc/origin/cloudprovider/aws.conf
    cloud-provider:
    - aws
    enable-controller-attach-detach:
    - 'true'
    feature-gates:
    - RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true
    node-labels:
    - node-role.kubernetes.io/compute=true
    pod-manifest-path:
    - /etc/origin/node/pods (4)
    rotate-certificates:
    - 'true'
  masterClientConnectionOverrides:
    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
    burst: 40
    contentType: application/vnd.kubernetes.protobuf
    qps: 20
  masterKubeConfig: node.kubeconfig
  networkConfig: (5)
    mtu: 8951
    networkPluginName: redhat/openshift-ovs-subnet (6)
  servingInfo: (7)
    bindAddress: 0.0.0.0:10250
    bindNetwork: tcp4
    clientCA: client-ca.crt
  volumeConfig:
    localQuota:
      perFSGroup: null (8)
  volumeDirectory: /var/lib/origin/openshift.local.volumes
(1) Authentication and authorization configuration options.
(2) IP address prepended to a pod's /etc/resolv.conf.
(3) Key value pairs that are passed directly to the Kubelet that match the Kubelet's command line arguments.
(4) The path to the pod manifest file or directory. A directory must contain one or more manifest files. OKD uses the manifest files to create pods on the node.
(5) The pod network settings on the node.
(6) Software defined network (SDN) plug-in. Set to redhat/openshift-ovs-subnet for the ovs-subnet plug-in; redhat/openshift-ovs-multitenant for the ovs-multitenant plug-in; or redhat/openshift-ovs-networkpolicy for the ovs-networkpolicy plug-in.
(7) Certificate information for the node.
(8) Optional: local storage quota per FSGroup. When set to a resource quantity, emptyDir volumes for each FSGroup are limited to that size; null disables the quota. See Local Storage Configuration below.

Do not manually modify the /etc/origin/node/node-config.yaml file.

The node configuration file determines the resources of a node. See the Allocating node resources section in the Cluster Administrator guide for more information.

Pod and Node Configuration

Table 20. Pod and Node Configuration Parameters
Parameter Name | Description

NodeConfig

The fully specified configuration starting an OKD node.

NodeIP

A node may have multiple IP addresses; this specifies the IP to use for pod traffic routing. If not specified, a network parse/lookup on the nodeName is performed and the first non-loopback address is used.

NodeName

The value used to identify this particular node in the cluster. If possible, this should be your fully qualified hostname. If you are describing a set of static nodes to the master, this value must match one of the values in the list.

PodEvictionTimeout

Controls the grace period for deleting pods on failed nodes. It takes a valid time duration string. If empty, you get the default pod eviction timeout.

ProxyClientInfo

Specifies the client cert/key to use when proxying to pods.

Docker Configuration

Table 21. Docker Configuration Parameters
Parameter Name | Description

AllowDisabledDocker

If true, the kubelet will ignore errors from Docker. This means that a node can start on a machine that does not have docker started.

DockerConfig

Holds Docker-related configuration options.

ExecHandlerName

The handler to use for executing commands in Docker containers.

Local Storage Configuration

You can use the XFS quota subsystem to limit the size of emptyDir volumes and volumes based on an emptyDir volume, such as secrets and configuration maps, on each node.

To limit the size of emptyDir volumes in an XFS filesystem, configure local volume quota for each unique FSGroup using the node-config-compute configuration map in the openshift-node project.

  apiVersion: kubelet.config.openshift.io/v1
  kind: VolumeConfig
  localQuota: (1)
    perFSGroup: 1Gi (2)
(1) Contains options for controlling local volume quota on the node.
(2) Set this value to a resource quantity representing the desired quota per [FSGroup], per node, such as 1Gi, 512Mi, and so forth. Requires the volumeDirectory to be on an XFS filesystem mounted with the grpquota option. The matching security context constraint fsGroup type must be set to MustRunAs.

If no FSGroup is specified, indicating the request matched an SCC with RunAsAny, the quota application is skipped.

Do not edit the /etc/origin/node/volume-config.yaml file directly. The file is created from the node-config-compute configuration map. Use the node-config-compute configuration map to create or edit the parameters in the volume-config.yaml file.

Setting Node Queries per Second (QPS) Limits and Burst Values

The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values are good enough if only a limited number of pods are running on each node. Provided there are enough CPU and memory resources on the node, the QPS and burst values can be tweaked in the /etc/origin/node/node-config.yaml file:

  kubeletArguments:
    kube-api-qps:
    - "20"
    kube-api-burst:
    - "40"

Then restart OKD node services.

The QPS and burst values above are defaults for OKD.

Parallel Image Pulls with Docker 1.9+

If you are using Docker 1.9+, you may want to consider enabling parallel image pulling, as the default is to pull images one at a time.

There is a potential issue with data corruption prior to Docker 1.9. However, starting with 1.9, the corruption issue is resolved and it is safe to switch to parallel pulls.

  kubeletArguments:
    serialize-image-pulls:
    - "false" (1)
(1) Change to true to disable parallel pulls. (Serialized pulls are the default configuration.)

Passwords and Other Sensitive Data

For some authentication configurations, an LDAP bindPassword or OAuth clientSecret value is required. Instead of specifying these values directly in the master configuration file, these values may be provided as environment variables, external files, or in encrypted files.

Environment Variable Example

  ...
  bindPassword:
    env: BIND_PASSWORD_ENV_VAR_NAME

External File Example

  ...
  bindPassword:
    file: bindPassword.txt

Encrypted External File Example

  ...
  bindPassword:
    file: bindPassword.encrypted
    keyFile: bindPassword.key

To create the encrypted file and key file for the above example:

  $ oc adm ca encrypt --genkey=bindPassword.key --out=bindPassword.encrypted
  > Data to encrypt: B1ndPass0rd!

Run oc adm commands only from the first master listed in the Ansible host inventory file, by default /etc/ansible/hosts.

Encrypted data is only as secure as the decrypting key. Care should be taken to limit filesystem permissions and access to the key file.

Creating New Configuration Files

When defining an OKD configuration from scratch, start by creating new configuration files.

For master host configuration files, use the openshift start command with the --write-config option to write the configuration files. For node hosts, use the oc adm create-node-config command to write the configuration files.

The following commands write the relevant launch configuration file(s), certificate files, and any other necessary files to the specified --write-config or --node-dir directory.

Generated certificate files are valid for two years, while the certificate authority (CA) certificate is valid for five years. This can be altered with the --expire-days and --signer-expire-days options, but for security reasons, it is recommended not to make them greater than these values.

To create configuration files for an all-in-one server (a master and a node on the same host) in the specified directory:

  $ openshift start --write-config=/openshift.local.config

To create a master configuration file and other required files in the specified directory:

  $ openshift start master --write-config=/openshift.local.config/master

To create a node configuration file and other related files in the specified directory:

  $ oc adm create-node-config \
      --node-dir=/openshift.local.config/node-<node_hostname> \
      --node=<node_hostname> \
      --hostnames=<node_hostname>,<ip_address> \
      --certificate-authority="/path/to/ca.crt" \
      --signer-cert="/path/to/ca.crt" \
      --signer-key="/path/to/ca.key" \
      --signer-serial="/path/to/ca.serial.txt" \
      --node-client-certificate-authority="/path/to/ca.crt"

When creating node configuration files, the --hostnames option accepts a comma-delimited list of every host name or IP address you want server certificates to be valid for.
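For example, to make the server certificate valid for both a host name and an IP address (hypothetical values):

  --hostnames=node1.example.com,10.0.0.5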

Launching Servers Using Configuration Files

Once you have modified the master and node configuration files to your specifications, you can use them when launching servers by specifying them as arguments. Keep in mind that if you specify a configuration file, none of the other command-line options you pass are respected.

To modify a node in your cluster, update the node configuration maps as needed. Do not manually edit the node-config.yaml file.
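For example, the node configuration maps live in the openshift-node project; assuming one of the default maps, such as node-config-compute, you can edit it with:

  $ oc edit configmap node-config-compute -n openshift-node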

To launch an all-in-one server using a master configuration and a node configuration file:

  $ openshift start --master-config=/openshift.local.config/master/master-config.yaml --node-config=/openshift.local.config/node-<node_hostname>/node-config.yaml

To launch a master server using a master configuration file:

  $ openshift start master --config=/openshift.local.config/master/master-config.yaml

To launch a node server using a node configuration file:

  $ openshift start node --config=/openshift.local.config/node-<node_hostname>/node-config.yaml

Viewing Master and Node Logs

OKD collects log messages for debugging, using systemd-journald.service for nodes and a script called master-logs for masters.

The number of lines displayed in the web console is hard-coded at 5000 and cannot be changed. To see the entire log, use the CLI.

The logging uses five log message severities based on Kubernetes logging conventions, as follows:

Table 22. Log Level Options

  Option   Description
  0        Errors and warnings only
  2        Normal information
  4        Debugging-level information
  6        API-level debugging information (request / response)
  8        Body-level API debugging information

You can change the log levels independently for masters or nodes as needed.

View node logs

To view logs for the node system, run the following command:

  # journalctl -r -u <journal_name>

Use the -r option to show the newest entries first.
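For example, assuming the atomic-openshift-node service unit used elsewhere in this topic:

  # journalctl -r -u atomic-openshift-node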

View master logs

To view logs for the master components, run the following command:

  # /usr/local/bin/master-logs <component> <container>

For example:

  # /usr/local/bin/master-logs controllers controllers
  # /usr/local/bin/master-logs api api
  # /usr/local/bin/master-logs etcd etcd

Redirect master log to a file

To redirect the output of the master log into a file, run the following command:

  master-logs api api 2> file
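The master components write their log messages to standard error, which is why the example redirects file descriptor 2.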

Configuring Logging Levels

You can control which INFO messages are logged by setting the DEBUG_LOGLEVEL option in the node configuration files or the /etc/origin/master/master.env file. Configuring the logs to collect all messages can lead to large logs that are difficult to interpret and can take up excessive space. Only collect all messages when you need to debug your cluster.

Messages with FATAL, ERROR, WARNING, and some INFO severities appear in the logs regardless of the log configuration.

To change the logging level:

  1. Edit the /etc/origin/master/master.env file for the master or /etc/sysconfig/atomic-openshift-node file for the nodes.

  2. Enter a value from the Log Level Options table in the DEBUG_LOGLEVEL field.

    For example:

    DEBUG_LOGLEVEL=4
  3. Restart the master or node host as appropriate. See Restarting OKD services.

After the restart, all new log messages will conform to the new setting. Older messages do not change.

The default log level can be set using the standard cluster installation process. For more information, see Cluster Variables.
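For example, a minimal sketch of the relevant inventory settings, assuming the openshift_master_debug_level and openshift_node_debug_level cluster variables:

  [OSEv3:vars]
  openshift_master_debug_level=2
  openshift_node_debug_level=2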

The following examples are excerpts of redirected master log files at various log levels. System information has been removed from these examples.

Excerpt of master-logs api api 2> file output at loglevel=2

  W1022 15:08:09.787705 1 server.go:79] Unable to keep dnsmasq up to date, 0.0.0.0:8053 must point to port 53
  I1022 15:08:09.787894 1 logs.go:49] skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:8053 [rcache 0]
  I1022 15:08:09.787913 1 logs.go:49] skydns: ready for queries on cluster.local. for udp4://0.0.0.0:8053 [rcache 0]
  I1022 15:08:09.889022 1 dns_server.go:63] DNS listening at 0.0.0.0:8053
  I1022 15:08:09.893156 1 feature_gate.go:190] feature gates: map[AdvancedAuditing:true]
  I1022 15:08:09.893500 1 master.go:431] Starting OAuth2 API at /oauth
  I1022 15:08:09.914759 1 master.go:431] Starting OAuth2 API at /oauth
  I1022 15:08:09.942349 1 master.go:431] Starting OAuth2 API at /oauth
  W1022 15:08:09.977088 1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
  W1022 15:08:09.977176 1 swagger.go:38] No API exists for predefined swagger description /api/v1
  [restful] 2018/10/22 15:08:09 log.go:33: [restful/swagger] listing is available at https://openshift.com:443/swaggerapi
  [restful] 2018/10/22 15:08:09 log.go:33: [restful/swagger] https://openshift.com:443/swaggerui/ is mapped to folder /swagger-ui/
  I1022 15:08:10.231405 1 master.go:431] Starting OAuth2 API at /oauth
  W1022 15:08:10.259523 1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
  W1022 15:08:10.259555 1 swagger.go:38] No API exists for predefined swagger description /api/v1
  I1022 15:08:23.895493 1 logs.go:49] http: TLS handshake error from 10.10.94.10:46322: EOF
  I1022 15:08:24.449577 1 crdregistration_controller.go:110] Starting crd-autoregister controller
  I1022 15:08:24.449916 1 controller_utils.go:1019] Waiting for caches to sync for crd-autoregister controller
  I1022 15:08:24.496147 1 logs.go:49] http: TLS handshake error from 127.0.0.1:39140: EOF
  I1022 15:08:24.821198 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
  I1022 15:08:24.833022 1 cache.go:39] Caches are synced for AvailableConditionController controller
  I1022 15:08:24.865087 1 controller.go:537] quota admission added evaluator for: { events}
  I1022 15:08:24.865393 1 logs.go:49] http: TLS handshake error from 127.0.0.1:39162: read tcp4 127.0.0.1:443->127.0.0.1:39162: read: connection reset by peer
  I1022 15:08:24.966917 1 controller_utils.go:1026] Caches are synced for crd-autoregister controller
  I1022 15:08:24.967961 1 autoregister_controller.go:136] Starting autoregister controller
  I1022 15:08:24.967977 1 cache.go:32] Waiting for caches to sync for autoregister controller
  I1022 15:08:25.015924 1 controller.go:537] quota admission added evaluator for: { serviceaccounts}
  I1022 15:08:25.077984 1 cache.go:39] Caches are synced for autoregister controller
  W1022 15:08:25.304265 1 lease_endpoint_reconciler.go:176] Resetting endpoints for master service "kubernetes" to [10.10.94.10]
  E1022 15:08:25.472536 1 memcache.go:153] couldn't get resource list for servicecatalog.k8s.io/v1beta1: the server could not find the requested resource
  E1022 15:08:25.550888 1 memcache.go:153] couldn't get resource list for servicecatalog.k8s.io/v1beta1: the server could not find the requested resource
  I1022 15:08:29.480691 1 healthz.go:72] /healthz/log check
  I1022 15:08:30.981999 1 controller.go:105] OpenAPI AggregationController: Processing item v1beta1.servicecatalog.k8s.io
  E1022 15:08:30.990914 1 controller.go:111] loading OpenAPI spec for "v1beta1.servicecatalog.k8s.io" failed with: OpenAPI spec does not exists
  I1022 15:08:30.990965 1 controller.go:119] OpenAPI AggregationController: action for item v1beta1.servicecatalog.k8s.io: Rate Limited Requeue.
  I1022 15:08:31.530473 1 trace.go:76] Trace[1253590531]: "Get /api/v1/namespaces/openshift-infra/serviceaccounts/serviceaccount-controller" (started: 2018-10-22 15:08:30.868387562 +0000 UTC m=+24.277041043) (total time: 661.981642ms):
  Trace[1253590531]: [661.903178ms] [661.89217ms] About to write a response
  I1022 15:08:31.531366 1 trace.go:76] Trace[83808472]: "Get /api/v1/namespaces/aws-sb/secrets/aws-servicebroker" (started: 2018-10-22 15:08:30.831296749 +0000 UTC m=+24.239950203) (total time: 700.049245ms):

Excerpt of master-logs api api 2> file output at loglevel=4

  I1022 15:08:09.746980 1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: AlwaysDeny.
  I1022 15:08:09.747597 1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: ResourceQuota.
  I1022 15:08:09.748038 1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: openshift.io/ClusterResourceQuota.
  I1022 15:08:09.786771 1 start_master.go:458] Starting master on 0.0.0.0:443 (v3.10.45)
  I1022 15:08:09.786798 1 start_master.go:459] Public master address is https://openshift.com:443
  I1022 15:08:09.786844 1 start_master.go:463] Using images from "registry.access.redhat.com/openshift3/ose-<component>:v3.10.45"
  W1022 15:08:09.787046 1 dns_server.go:37] Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients
  W1022 15:08:09.787705 1 server.go:79] Unable to keep dnsmasq up to date, 0.0.0.0:8053 must point to port 53
  I1022 15:08:09.787894 1 logs.go:49] skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:8053 [rcache 0]
  I1022 15:08:09.787913 1 logs.go:49] skydns: ready for queries on cluster.local. for udp4://0.0.0.0:8053 [rcache 0]
  I1022 15:08:09.889022 1 dns_server.go:63] DNS listening at 0.0.0.0:8053
  I1022 15:08:09.893156 1 feature_gate.go:190] feature gates: map[AdvancedAuditing:true]
  I1022 15:08:09.893500 1 master.go:431] Starting OAuth2 API at /oauth
  I1022 15:08:09.914759 1 master.go:431] Starting OAuth2 API at /oauth
  I1022 15:08:09.942349 1 master.go:431] Starting OAuth2 API at /oauth
  W1022 15:08:09.977088 1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
  W1022 15:08:09.977176 1 swagger.go:38] No API exists for predefined swagger description /api/v1
  [restful] 2018/10/22 15:08:09 log.go:33: [restful/swagger] listing is available at https://openshift.com:443/swaggerapi
  [restful] 2018/10/22 15:08:09 log.go:33: [restful/swagger] https://openshift.com:443/swaggerui/ is mapped to folder /swagger-ui/
  I1022 15:08:10.231405 1 master.go:431] Starting OAuth2 API at /oauth
  W1022 15:08:10.259523 1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
  W1022 15:08:10.259555 1 swagger.go:38] No API exists for predefined swagger description /api/v1
  [restful] 2018/10/22 15:08:10 log.go:33: [restful/swagger] listing is available at https://openshift.com:443/swaggerapi
  [restful] 2018/10/22 15:08:10 log.go:33: [restful/swagger] https://openshift.com:443/swaggerui/ is mapped to folder /swagger-ui/
  I1022 15:08:10.444303 1 master.go:431] Starting OAuth2 API at /oauth
  W1022 15:08:10.492409 1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
  W1022 15:08:10.492507 1 swagger.go:38] No API exists for predefined swagger description /api/v1
  [restful] 2018/10/22 15:08:10 log.go:33: [restful/swagger] listing is available at https://openshift.com:443/swaggerapi
  [restful] 2018/10/22 15:08:10 log.go:33: [restful/swagger] https://openshift.com:443/swaggerui/ is mapped to folder /swagger-ui/
  I1022 15:08:10.774824 1 master.go:431] Starting OAuth2 API at /oauth
  I1022 15:08:23.808685 1 logs.go:49] http: TLS handshake error from 10.128.0.11:39206: EOF
  I1022 15:08:23.815311 1 logs.go:49] http: TLS handshake error from 10.128.0.14:53054: EOF
  I1022 15:08:23.822286 1 customresource_discovery_controller.go:174] Starting DiscoveryController
  I1022 15:08:23.822349 1 naming_controller.go:276] Starting NamingConditionController
  I1022 15:08:23.822705 1 logs.go:49] http: TLS handshake error from 10.128.0.14:53056: EOF
  I1022 15:08:31.530473 1 trace.go:76] Trace[1253590531]: "Get /api/v1/namespaces/openshift-infra/serviceaccounts/serviceaccount-controller" (started: 2018-10-22 15:08:30.868387562 +0000 UTC m=+24.277041043) (total time: 661.981642ms):
  Trace[1253590531]: [661.903178ms] [661.89217ms] About to write a response
  I1022 15:08:31.531366 1 trace.go:76] Trace[83808472]: "Get /api/v1/namespaces/aws-sb/secrets/aws-servicebroker" (started: 2018-10-22 15:08:30.831296749 +0000 UTC m=+24.239950203) (total time: 700.049245ms):
  Trace[83808472]: [700.049245ms] [700.04027ms] END
  I1022 15:08:31.531695 1 trace.go:76] Trace[1916801734]: "Get /api/v1/namespaces/aws-sb/secrets/aws-servicebroker" (started: 2018-10-22 15:08:31.031163449 +0000 UTC m=+24.439816907) (total time: 500.514208ms):
  Trace[1916801734]: [500.514208ms] [500.505008ms] END
  I1022 15:08:44.675371 1 healthz.go:72] /healthz/log check
  I1022 15:08:46.589759 1 controller.go:537] quota admission added evaluator for: { endpoints}
  I1022 15:08:46.621270 1 controller.go:537] quota admission added evaluator for: { endpoints}
  I1022 15:08:57.159494 1 healthz.go:72] /healthz/log check
  I1022 15:09:07.161315 1 healthz.go:72] /healthz/log check
  I1022 15:09:16.297982 1 trace.go:76] Trace[2001108522]: "GuaranteedUpdate etcd3: *core.Node" (started: 2018-10-22 15:09:15.139820419 +0000 UTC m=+68.548473981) (total time: 1.158128974s):
  Trace[2001108522]: [1.158012755s] [1.156496534s] Transaction committed
  I1022 15:09:16.298165 1 trace.go:76] Trace[1124283912]: "Patch /api/v1/nodes/master-0.com/status" (started: 2018-10-22 15:09:15.139695483 +0000 UTC m=+68.548348970) (total time: 1.158434318s):
  Trace[1124283912]: [1.158328853s] [1.15713683s] Object stored in database
  I1022 15:09:16.298761 1 trace.go:76] Trace[24963576]: "GuaranteedUpdate etcd3: *core.Node" (started: 2018-10-22 15:09:15.13159057 +0000 UTC m=+68.540244112) (total time: 1.167151224s):
  Trace[24963576]: [1.167106144s] [1.165570379s] Transaction committed
  I1022 15:09:16.298882 1 trace.go:76] Trace[222129183]: "Patch /api/v1/nodes/node-0.com/status" (started: 2018-10-22 15:09:15.131269234 +0000 UTC m=+68.539922722) (total time: 1.167595526s):
  Trace[222129183]: [1.167517296s] [1.166135605s] Object stored in database

Excerpt of master-logs api api 2> file output at loglevel=8

  I1022 15:11:58.829357 1 plugins.go:84] Registered admission plugin "NamespaceLifecycle"
  I1022 15:11:58.839967 1 plugins.go:84] Registered admission plugin "Initializers"
  I1022 15:11:58.839994 1 plugins.go:84] Registered admission plugin "ValidatingAdmissionWebhook"
  I1022 15:11:58.840012 1 plugins.go:84] Registered admission plugin "MutatingAdmissionWebhook"
  I1022 15:11:58.840025 1 plugins.go:84] Registered admission plugin "AlwaysAdmit"
  I1022 15:11:58.840082 1 plugins.go:84] Registered admission plugin "AlwaysPullImages"
  I1022 15:11:58.840105 1 plugins.go:84] Registered admission plugin "LimitPodHardAntiAffinityTopology"
  I1022 15:11:58.840126 1 plugins.go:84] Registered admission plugin "DefaultTolerationSeconds"
  I1022 15:11:58.840146 1 plugins.go:84] Registered admission plugin "AlwaysDeny"
  I1022 15:11:58.840176 1 plugins.go:84] Registered admission plugin "EventRateLimit"
  I1022 15:11:59.850825 1 feature_gate.go:190] feature gates: map[AdvancedAuditing:true]
  I1022 15:11:59.859108 1 register.go:154] Admission plugin AlwaysAdmit is not enabled. It will not be started.
  I1022 15:11:59.859284 1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: AlwaysAdmit.
  I1022 15:11:59.859809 1 register.go:154] Admission plugin NamespaceAutoProvision is not enabled. It will not be started.
  I1022 15:11:59.859939 1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: NamespaceAutoProvision.
  I1022 15:11:59.860594 1 register.go:154] Admission plugin NamespaceExists is not enabled. It will not be started.
  I1022 15:11:59.860778 1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: NamespaceExists.
  I1022 15:11:59.863999 1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: NamespaceLifecycle.
  I1022 15:11:59.864626 1 register.go:154] Admission plugin EventRateLimit is not enabled. It will not be started.
  I1022 15:11:59.864768 1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: EventRateLimit.
  I1022 15:11:59.865259 1 register.go:154] Admission plugin ProjectRequestLimit is not enabled. It will not be started.
  I1022 15:11:59.865376 1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: ProjectRequestLimit.
  I1022 15:11:59.866126 1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: OriginNamespaceLifecycle.
  I1022 15:11:59.866709 1 register.go:154] Admission plugin openshift.io/RestrictSubjectBindings is not enabled. It will not be started.
  I1022 15:11:59.866761 1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: openshift.io/RestrictSubjectBindings.
  I1022 15:11:59.867304 1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: openshift.io/JenkinsBootstrapper.
  I1022 15:11:59.867823 1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: openshift.io/BuildConfigSecretInjector.
  I1022 15:12:00.015273 1 master_config.go:476] Initializing cache sizes based on 0MB limit
  I1022 15:12:00.015896 1 master_config.go:539] Using the lease endpoint reconciler with TTL=15s and interval=10s
  I1022 15:12:00.018396 1 storage_factory.go:285] storing { apiServerIPInfo} in v1, reading as __internal from storagebackend.Config{Type:"etcd3", Prefix:"kubernetes.io", ServerList:[]string{"https://master-0.com:2379"}, KeyFile:"/etc/origin/master/master.etcd-client.key", CertFile:"/etc/origin/master/master.etcd-client.crt", CAFile:"/etc/origin/master/master.etcd-ca.crt", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
  I1022 15:12:00.037710 1 storage_factory.go:285] storing { endpoints} in v1, reading as __internal from storagebackend.Config{Type:"etcd3", Prefix:"kubernetes.io", ServerList:[]string{"https://master-0.com:2379"}, KeyFile:"/etc/origin/master/master.etcd-client.key", CertFile:"/etc/origin/master/master.etcd-client.crt", CAFile:"/etc/origin/master/master.etcd-ca.crt", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
  I1022 15:12:00.054112 1 compact.go:54] compactor already exists for endpoints [https://master-0.com:2379]
  I1022 15:12:00.054678 1 start_master.go:458] Starting master on 0.0.0.0:443 (v3.10.45)
  I1022 15:12:00.054755 1 start_master.go:459] Public master address is https://openshift.com:443
  I1022 15:12:00.054837 1 start_master.go:463] Using images from "registry.access.redhat.com/openshift3/ose-<component>:v3.10.45"
  W1022 15:12:00.056957 1 dns_server.go:37] Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients
  W1022 15:12:00.065497 1 server.go:79] Unable to keep dnsmasq up to date, 0.0.0.0:8053 must point to port 53
  I1022 15:12:00.066061 1 logs.go:49] skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:8053 [rcache 0]
  I1022 15:12:00.066265 1 logs.go:49] skydns: ready for queries on cluster.local. for udp4://0.0.0.0:8053 [rcache 0]
  I1022 15:12:00.158725 1 dns_server.go:63] DNS listening at 0.0.0.0:8053
  I1022 15:12:00.167910 1 htpasswd.go:118] Loading htpasswd file /etc/origin/master/htpasswd...
  I1022 15:12:00.168182 1 htpasswd.go:118] Loading htpasswd file /etc/origin/master/htpasswd...
  I1022 15:12:00.231233 1 storage_factory.go:285] storing {apps.openshift.io deploymentconfigs} in apps.openshift.io/v1, reading as apps.openshift.io/__internal from storagebackend.Config{Type:"etcd3", Prefix:"openshift.io", ServerList:[]string{"https://master-0.com:2379"}, KeyFile:"/etc/origin/master/master.etcd-client.key", CertFile:"/etc/origin/master/master.etcd-client.crt", CAFile:"/etc/origin/master/master.etcd-ca.crt", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
  I1022 15:12:00.248136 1 compact.go:54] compactor already exists for endpoints [https://master-0.com:2379]
  I1022 15:12:00.248697 1 store.go:1391] Monitoring deploymentconfigs.apps.openshift.io count at <storage-prefix>//deploymentconfigs
  W1022 15:12:00.256861 1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
  W1022 15:12:00.258106 1 swagger.go:38] No API exists for predefined swagger description /api/v1

Restarting master and node services

To apply master or node configuration changes, you must restart the respective services.

To reload master configuration changes, restart master services running in control plane static pods using the master-restart command:

  # master-restart api
  # master-restart controllers

To reload node configuration changes, restart the node service on the node host:

  # systemctl restart atomic-openshift-node