2 - Node Requirements


Whether you install Rancher Server on a single node or as a high-availability (HA) setup, every node must meet the following requirements.

Rancher is supported on the following operating systems and their subsequent non-major releases:

  • Ubuntu 16.04.x (64-bit)
    • Docker 17.03.x, 18.06.x, 18.09.x
  • Ubuntu 18.04.x (64-bit)
    • Docker 18.06.x, 18.09.x
  • Red Hat Enterprise Linux (RHEL)/CentOS 7.5+ (64-bit)
    • RHEL Docker 1.13
    • Docker 17.03.x, 18.06.x, 18.09.x
  • RancherOS 1.3.x+ (64-bit)
    • Docker 17.03.x, 18.06.x, 18.09.x
  • Windows Server version 1803 (64-bit)
    • Docker 17.06

Notes:

1. Ubuntu and CentOS ship in Desktop and Server editions; be sure to install the Server edition — don't make trouble for yourself!
2. If you are using RancherOS, make sure to switch to a supported Docker version: sudo ros engine switch docker-18.09.2
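As a purely illustrative aid (not a Rancher tool), the support matrix above can be expressed as a small Python helper that checks whether an installed Docker version string falls on one of the supported release lines for a given OS; the OS keys are made up for this sketch:

```python
# Hypothetical helper: supported Docker release lines per OS,
# condensed from the list above. The os_key names are illustrative.
SUPPORTED = {
    "ubuntu-16.04": ["17.03", "18.06", "18.09"],
    "ubuntu-18.04": ["18.06", "18.09"],
    "rhel-centos-7.5": ["1.13", "17.03", "18.06", "18.09"],
    "rancheros-1.3": ["17.03", "18.06", "18.09"],
    "windows-1803": ["17.06"],
}

def is_supported(os_key: str, docker_version: str) -> bool:
    """True if docker_version (e.g. '18.09.2') is on a supported line."""
    major_minor = ".".join(docker_version.split(".")[:2])
    return major_minor in SUPPORTED.get(os_key, [])
```

For example, `is_supported("ubuntu-18.04", "18.09.2")` returns True, while a 17.03.x daemon on Ubuntu 18.04 would be rejected.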

Hardware requirements scale with the size of the Kubernetes clusters that Rancher manages; provision each node according to the tables below.

HA node requirements (standard 3-node setup)

| Deployment Size | Clusters | Nodes | vCPUs | RAM |
|-----------------|----------|-------|-------|-----|
| Small | up to 5 | up to 50 | 2 | 8 GB |
| Medium | up to 15 | up to 200 | 4 | 16 GB |
| Large | up to 50 | up to 500 | 8 | 32 GB |
| X-Large | up to 100 | up to 1000 | 32 | 128 GB |
| XX-Large | 100+ | 1000+ | Contact Rancher | Contact Rancher |

Single-node requirements

| Deployment Size | Clusters | Nodes | vCPUs | RAM |
|-----------------|----------|-------|-------|-----|
| Small | up to 5 | up to 50 | 4 | 8 GB |
| Medium | up to 15 | up to 200 | 8 | 16 GB |

Node IP addresses

Every node used (for a single-node install, a high-availability (HA) install, or as a worker node in a cluster) should be configured with a static IP. If DHCP is used, configure a DHCP reservation so that each node always receives the same IP address.
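On Ubuntu 18.04, for example, a static address can be set with netplan. The fragment below is only a sketch: the interface name, addresses, gateway, and nameserver are placeholders you must replace with your own values.

```yaml
# /etc/netplan/01-static.yaml — example only; interface name and
# all addresses are placeholders for your environment.
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses:
        - 192.168.1.10/24
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```

Apply the configuration with `sudo netplan apply`.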

Port requirements

When deploying Rancher in an HA cluster, certain ports on the nodes must be open to allow communication with Rancher. Which ports must be open depends on the type of machines hosting the cluster nodes; for example, if you deploy Rancher on infrastructure-hosted nodes, port 22 must be open for SSH. The tables below describe the ports that need to be opened for each cluster type.

Basic Port Requirements

Rancher nodes: nodes running the rancher/rancher container

Rancher nodes - Inbound rules

| Protocol | Port | Source | Description |
|----------|------|--------|-------------|
| TCP | 80 | Load balancer/proxy that does external SSL termination | Rancher UI/API when external SSL termination is used |
| TCP | 443 | etcd nodes, controlplane nodes, worker nodes, hosted/imported Kubernetes, any source that needs to be able to use the UI/API | Rancher agent, Rancher UI/API, kubectl |

Rancher nodes - Outbound rules

| Protocol | Port | Destination | Description |
|----------|------|-------------|-------------|
| TCP | 22 | Any node IP of a node created using Node Driver | SSH provisioning of nodes using Node Driver |
| TCP | 443 | 35.160.43.145/32, 35.167.242.46/32, 52.33.59.17/32 | git.rancher.io (catalogs) |
| TCP | 2376 | Any node IP of a node created using Node Driver | Docker daemon TLS port used by Docker Machine |
| TCP | 6443 | Hosted/imported Kubernetes API | Kubernetes apiserver |

etcd nodes: nodes with the role etcd

etcd nodes - Inbound rules

| Protocol | Port | Source | Description |
|----------|------|--------|-------------|
| TCP | 2376 | Rancher nodes | Docker daemon TLS port used by Docker Machine (only needed when using Node Driver/Templates) |
| TCP | 2379 | etcd nodes, controlplane nodes | etcd client requests |
| TCP | 2380 | etcd nodes, controlplane nodes | etcd peer communication |
| UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | etcd node itself (local traffic, not across nodes; see Information on local node traffic) | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10250 | controlplane nodes | kubelet |

etcd nodes - Outbound rules

| Protocol | Port | Destination | Description |
|----------|------|-------------|-------------|
| TCP | 443 | Rancher nodes | Rancher agent |
| TCP | 2379 | etcd nodes | etcd client requests |
| TCP | 2380 | etcd nodes | etcd peer communication |
| TCP | 6443 | controlplane nodes | Kubernetes apiserver |
| UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | etcd node itself (local traffic, not across nodes; see Information on local node traffic) | Canal/Flannel livenessProbe/readinessProbe |

controlplane nodes: nodes with the role controlplane

controlplane nodes - Inbound rules

| Protocol | Port | Source | Description |
|----------|------|--------|-------------|
| TCP | 80 | Any source that consumes Ingress services | Ingress controller (HTTP) |
| TCP | 443 | Any source that consumes Ingress services | Ingress controller (HTTPS) |
| TCP | 2376 | Rancher nodes | Docker daemon TLS port used by Docker Machine (only needed when using Node Driver/Templates) |
| TCP | 6443 | etcd nodes, controlplane nodes, worker nodes | Kubernetes apiserver |
| UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | controlplane node itself (local traffic, not across nodes; see Information on local node traffic) | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10250 | controlplane nodes | kubelet |
| TCP | 10254 | controlplane node itself (local traffic, not across nodes; see Information on local node traffic) | Ingress controller livenessProbe/readinessProbe |
| TCP/UDP | 30000-32767 | Any source that consumes NodePort services | NodePort port range |

controlplane nodes - Outbound rules

| Protocol | Port | Destination | Description |
|----------|------|-------------|-------------|
| TCP | 443 | Rancher nodes | Rancher agent |
| TCP | 2379 | etcd nodes | etcd client requests |
| TCP | 2380 | etcd nodes | etcd peer communication |
| UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | controlplane node itself (local traffic, not across nodes; see Information on local node traffic) | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10250 | etcd nodes, controlplane nodes, worker nodes | kubelet |
| TCP | 10254 | controlplane node itself (local traffic, not across nodes; see Information on local node traffic) | Ingress controller livenessProbe/readinessProbe |

worker nodes: nodes with the role worker

worker nodes - Inbound rules

| Protocol | Port | Source | Description |
|----------|------|--------|-------------|
| TCP | 22 | Linux worker nodes only: any network from which you want to remotely access this node | Remote access over SSH |
| TCP | 3389 | Windows worker nodes only: any network from which you want to remotely access this node | Remote access over RDP |
| TCP | 80 | Any source that consumes Ingress services | Ingress controller (HTTP) |
| TCP | 443 | Any source that consumes Ingress services | Ingress controller (HTTPS) |
| TCP | 2376 | Rancher nodes | Docker daemon TLS port used by Docker Machine (only needed when using Node Driver/Templates) |
| UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | worker node itself (local traffic, not across nodes; see Information on local node traffic) | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10250 | controlplane nodes | kubelet |
| TCP | 10254 | worker node itself (local traffic, not across nodes; see Information on local node traffic) | Ingress controller livenessProbe/readinessProbe |
| TCP/UDP | 30000-32767 | Any source that consumes NodePort services | NodePort port range |

worker nodes - Outbound rules

| Protocol | Port | Destination | Description |
|----------|------|-------------|-------------|
| TCP | 443 | Rancher nodes | Rancher agent |
| TCP | 6443 | controlplane nodes | Kubernetes apiserver |
| UDP | 8472 | etcd nodes, controlplane nodes, worker nodes | Canal/Flannel VXLAN overlay networking |
| TCP | 9099 | worker node itself (local traffic, not across nodes; see Information on local node traffic) | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 10254 | worker node itself (local traffic, not across nodes; see Information on local node traffic) | Ingress controller livenessProbe/readinessProbe |
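The per-role tables above can be condensed into a small lookup, which is handy when scripting firewall rules for nodes that hold several roles. The structure below is an illustrative sketch (inbound ports only, sources and the Windows-only RDP port omitted for brevity), not a Rancher API:

```python
# Illustrative sketch: inbound (protocol, port) pairs per node role,
# condensed from the tables above. Not an official Rancher data structure.
INBOUND_PORTS = {
    "etcd": [("TCP", "2376"), ("TCP", "2379"), ("TCP", "2380"),
             ("UDP", "8472"), ("TCP", "9099"), ("TCP", "10250")],
    "controlplane": [("TCP", "80"), ("TCP", "443"), ("TCP", "2376"),
                     ("TCP", "6443"), ("UDP", "8472"), ("TCP", "9099"),
                     ("TCP", "10250"), ("TCP", "10254"),
                     ("TCP", "30000-32767"), ("UDP", "30000-32767")],
    "worker": [("TCP", "22"), ("TCP", "80"), ("TCP", "443"),
               ("TCP", "2376"), ("UDP", "8472"), ("TCP", "9099"),
               ("TCP", "10250"), ("TCP", "10254"),
               ("TCP", "30000-32767"), ("UDP", "30000-32767")],
}

def inbound_ports(roles):
    """Union of inbound (protocol, port) pairs for a node holding several roles."""
    pairs = set()
    for role in roles:
        pairs.update(INBOUND_PORTS[role])
    return sorted(pairs)
```

For a node with both the etcd and controlplane roles, `inbound_ports(["etcd", "controlplane"])` yields the union of the two tables, which is exactly the set of rules such a node needs.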

Information on local node traffic

Kubernetes health checks (livenessProbe and readinessProbe) are executed on the host itself. On most nodes this is allowed by default. If you have applied strict host firewall policies (e.g. iptables) on the node, or if your nodes have multiple interfaces (multihomed), this traffic gets blocked. In that case you have to explicitly allow it in your host firewall, or, for machines hosted in a public/private cloud (e.g. AWS or OpenStack), in your security group configuration. Keep in mind that when a security group is used as the Source or Destination of a security group rule, it matches only the private interface of the nodes/instances.
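With a strict iptables policy, for example, the health-check ports can be allowed explicitly. The rules below are only a sketch, assuming the probes arrive via the loopback interface; adapt the interface, chains, and rule order to your actual firewall layout:

```shell
# Sketch only: permit local health-check traffic that a strict INPUT
# policy would otherwise drop. Adjust interfaces/chains to your setup.
iptables -A INPUT -i lo -p tcp --dport 9099 -j ACCEPT   # Canal/Flannel probes
iptables -A INPUT -i lo -p tcp --dport 10254 -j ACCEPT  # Ingress controller probes
```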

Amazon EC2 security group when using Node Driver

If you are creating an Amazon EC2 cluster, you can choose to let Rancher create a security group called rancher-nodes. The following rules are automatically added to this security group.

Security group: rancher-nodes

Inbound rules

| Type | Protocol | Port Range | Source |
|------|----------|------------|--------|
| SSH | TCP | 22 | 0.0.0.0/0 |
| HTTP | TCP | 80 | 0.0.0.0/0 |
| Custom TCP Rule | TCP | 443 | 0.0.0.0/0 |
| Custom TCP Rule | TCP | 2376 | 0.0.0.0/0 |
| Custom TCP Rule | TCP | 2379-2380 | sg-xxx (rancher-nodes) |
| Custom UDP Rule | UDP | 4789 | sg-xxx (rancher-nodes) |
| Custom TCP Rule | TCP | 6443 | 0.0.0.0/0 |
| Custom UDP Rule | UDP | 8472 | sg-xxx (rancher-nodes) |
| Custom TCP Rule | TCP | 10250-10252 | sg-xxx (rancher-nodes) |
| Custom TCP Rule | TCP | 10256 | sg-xxx (rancher-nodes) |
| Custom TCP Rule | TCP | 30000-32767 | 0.0.0.0/0 |
| Custom UDP Rule | UDP | 30000-32767 | 0.0.0.0/0 |

Outbound rules

| Type | Protocol | Port Range | Destination |
|------|----------|------------|-------------|
| All traffic | All | All | 0.0.0.0/0 |