1 - Requirements


I. Operating System

RKE runs on almost any Linux operating system with Docker installed, but Ubuntu 16.04 is recommended, as most of RKE's development and testing happens on Ubuntu 16.04.

1. Some operating systems have restrictions and specific requirements:

  • SSH user - the SSH user used to access the nodes must be a member of the docker group:

    usermod -aG docker <user_name>

See Manage Docker as a non-root user to learn how to configure access to Docker without using the root user.

  • Swap must be disabled on any worker nodes
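A minimal sketch for disabling swap on a worker node (run as root; commenting out swap entries in /etc/fstab is the usual convention, but the pattern may need adjusting for your distribution):

```shell
# Disable swap immediately (ignore errors if no swap is active)
swapoff -a 2>/dev/null || true

# Comment out swap entries in /etc/fstab so swap stays off after a reboot
fstab_updated=no
if [ -w /etc/fstab ]; then
  sed -r -i.bak 's/(^[^#].*[[:space:]]swap[[:space:]].*)/#\1/' /etc/fstab
  fstab_updated=yes
fi
echo "fstab updated: $fstab_updated"
```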

  • The following kernel modules must be loaded. Any of these methods can be used to check whether a module is present:

    • modprobe module_name
    • lsmod | grep module_name
    • grep module_name /lib/modules/$(uname -r)/modules.builtin, if it is a built-in module

    Required modules: br_netfilter, ip6_udp_tunnel, ip_set, ip_set_hash_ip, ip_set_hash_net, iptable_filter, iptable_nat, iptable_mangle, iptable_raw, nf_conntrack_netlink, nf_conntrack, nf_conntrack_ipv4, nf_defrag_ipv4, nf_nat, nf_nat_ipv4, nf_nat_masquerade_ipv4, nfnetlink, udp_tunnel, veth, vxlan, x_tables, xt_addrtype, xt_conntrack, xt_comment, xt_mark, xt_multiport, xt_nat, xt_recent, xt_set, xt_statistic, xt_tcpudp
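The checks above can be combined into a single loop that reports every required module that is neither currently loaded (lsmod) nor built into the kernel (modules.builtin) - a sketch to run on the node being prepared:

```shell
# Report required kernel modules that are neither loaded nor built in
missing=""
for module in br_netfilter ip6_udp_tunnel ip_set ip_set_hash_ip \
    ip_set_hash_net iptable_filter iptable_nat iptable_mangle iptable_raw \
    nf_conntrack_netlink nf_conntrack nf_conntrack_ipv4 nf_defrag_ipv4 \
    nf_nat nf_nat_ipv4 nf_nat_masquerade_ipv4 nfnetlink udp_tunnel veth \
    vxlan x_tables xt_addrtype xt_conntrack xt_comment xt_mark \
    xt_multiport xt_nat xt_recent xt_set xt_statistic xt_tcpudp; do
  if ! lsmod 2>/dev/null | grep -qw "$module" && \
     ! grep -qw "$module" "/lib/modules/$(uname -r)/modules.builtin" 2>/dev/null; then
    missing="$missing $module"
  fi
done
echo "missing modules:${missing:- none}"
```

Any module reported as missing can then be loaded with modprobe module_name.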
  • The following sysctl setting must be applied:

    net.bridge.bridge-nf-call-iptables=1
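To apply and verify the setting on the running kernel (a sketch; needs root, and persisting it via a file under /etc/sysctl.d/ is the usual convention):

```shell
# Apply the setting for the running kernel (may fail if the
# br_netfilter module is not loaded yet)
sysctl -w net.bridge.bridge-nf-call-iptables=1 2>/dev/null || true

# Verify via /proc; this path only exists once br_netfilter is loaded
f=/proc/sys/net/bridge/bridge-nf-call-iptables
if [ -r "$f" ]; then
  val=$(cat "$f")
else
  val="unavailable (load br_netfilter first)"
fi
echo "net.bridge.bridge-nf-call-iptables: $val"
```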

2. Red Hat Enterprise Linux (RHEL) / Oracle Enterprise Linux (OEL) / CentOS

If you are using Red Hat Enterprise Linux, Oracle Enterprise Linux, or CentOS, you cannot use the root user as the SSH user due to Bugzilla 1527565. Follow the instructions below to set up Docker correctly, based on how you installed Docker on the node.

  • Using docker-ce

To check whether docker-ce or docker-ee is installed, query the installed packages with the following command:

  rpm -q docker-ce
  • Using RHEL/CentOS maintained Docker

If you are using the Docker package supplied by Red Hat/CentOS, the package name is docker. You can check the installed package by running:

  rpm -q docker

If you are using the Docker package supplied by Red Hat/CentOS, the dockerroot group is automatically added to the system. You need to edit (or create) /etc/docker/daemon.json to include the following:

  {
    "group": "dockerroot"
  }

Restart Docker after editing or creating the file. After the restart, check the group permission of the Docker socket (/var/run/docker.sock); the group should show as dockerroot:

  srw-rw----. 1 root dockerroot 0 Jul 4 09:57 /var/run/docker.sock

Add the SSH user you want to use to this group; this cannot be the root user:

  usermod -aG dockerroot <user_name>

To verify that the user is configured correctly, log out of the node, log back in as the SSH user, and run docker ps:

  ssh <user_name>@node
  $ docker ps
  CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

3. Red Hat Atomic

Before attempting to use RKE with Red Hat Atomic nodes, a couple of updates to the operating system are required to get RKE working.

  • OpenSSH version

By default, Atomic installs OpenSSH 6.4, which does not support SSH tunneling, a core RKE requirement; openssh must be upgraded.

  • Creating a Docker group

By default, Atomic does not ship with a Docker group. You can enable a specific user to run RKE by updating the ownership of the Docker socket:

  chown <user> /var/run/docker.sock
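After changing ownership, a quick way to confirm who owns the socket (a sketch; the path only exists where the Docker daemon is running):

```shell
# Show the owner of the Docker socket, if present
sock=/var/run/docker.sock
if [ -e "$sock" ]; then
  owner=$(stat -c '%U:%G' "$sock" 2>/dev/null || echo unknown)
else
  owner="socket not present"
fi
echo "docker.sock owner: $owner"
```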

II. Software

  • Docker - each Kubernetes version supports different Docker versions:

    Kubernetes version | Supported Docker version(s)
    v1.13.x            | RHEL Docker 1.13, 17.03.2, 18.06.2, 18.09.2
    v1.12.x            | RHEL Docker 1.13, 17.03.2, 18.06.2, 18.09.2
    v1.11.x            | RHEL Docker 1.13, 17.03.2, 18.06.2, 18.09.2

You can follow the Docker installation instructions, or install Docker using Rancher's install scripts. For RHEL, see How to install Docker on Red Hat Enterprise Linux 7.

Docker version and install script:

  • 18.09.2: curl https://releases.rancher.com/install-docker/18.09.2.sh | sh
  • 18.06.2: curl https://releases.rancher.com/install-docker/18.06.2.sh | sh
  • 17.03.2: curl https://releases.rancher.com/install-docker/17.03.2.sh | sh

Confirm the installed Docker version:

  docker version --format '{{.Server.Version}}'
  17.03.2-ce
  • OpenSSH 7.0+ - OpenSSH must be installed on each node.

III. Ports

RKE node: the node that runs the rke commands

RKE node - Outbound rules

  Protocol | Port | Source   | Destination                                       | Description
  TCP      | 22   | RKE node | Any node configured in Cluster Configuration File | SSH provisioning of node by RKE
  TCP      | 6443 | RKE node | controlplane nodes                                | Kubernetes apiserver

etcd nodes: nodes with the role etcd

etcd nodes - Inbound rules

  Protocol | Port  | Source                                                                     | Description
  TCP      | 2376  | Rancher nodes                                                              | Docker daemon TLS port used by Docker Machine (only needed when using Node Driver/Templates)
  TCP      | 2379  | etcd nodes, controlplane nodes                                             | etcd client requests
  TCP      | 2380  | etcd nodes, controlplane nodes                                             | etcd peer communication
  UDP      | 8472  | etcd nodes, controlplane nodes, worker nodes                               | Canal/Flannel VXLAN overlay networking
  TCP      | 9099  | etcd node itself (local traffic, not across nodes; see Local node traffic) | Canal/Flannel livenessProbe/readinessProbe
  TCP      | 10250 | controlplane nodes                                                         | kubelet

etcd nodes - Outbound rules

  Protocol | Port | Destination                                                                | Description
  TCP      | 443  | Rancher nodes                                                              | Rancher agent
  TCP      | 2379 | etcd nodes                                                                 | etcd client requests
  TCP      | 2380 | etcd nodes                                                                 | etcd peer communication
  TCP      | 6443 | controlplane nodes                                                         | Kubernetes apiserver
  UDP      | 8472 | etcd nodes, controlplane nodes, worker nodes                               | Canal/Flannel VXLAN overlay networking
  TCP      | 9099 | etcd node itself (local traffic, not across nodes; see Local node traffic) | Canal/Flannel livenessProbe/readinessProbe

controlplane nodes: nodes with the role controlplane

controlplane nodes - Inbound rules

  Protocol | Port        | Source                                                                             | Description
  TCP      | 80          | Any source that consumes Ingress services                                          | Ingress controller (HTTP)
  TCP      | 443         | Any source that consumes Ingress services                                          | Ingress controller (HTTPS)
  TCP      | 2376        | Rancher nodes                                                                      | Docker daemon TLS port used by Docker Machine (only needed when using Node Driver/Templates)
  TCP      | 6443        | etcd nodes, controlplane nodes, worker nodes                                       | Kubernetes apiserver
  UDP      | 8472        | etcd nodes, controlplane nodes, worker nodes                                       | Canal/Flannel VXLAN overlay networking
  TCP      | 9099        | controlplane node itself (local traffic, not across nodes; see Local node traffic) | Canal/Flannel livenessProbe/readinessProbe
  TCP      | 10250       | controlplane nodes                                                                 | kubelet
  TCP      | 10254       | controlplane node itself (local traffic, not across nodes; see Local node traffic) | Ingress controller livenessProbe/readinessProbe
  TCP/UDP  | 30000-32767 | Any source that consumes NodePort services                                         | NodePort port range

controlplane nodes - Outbound rules

  Protocol | Port  | Destination                                                                        | Description
  TCP      | 443   | Rancher nodes                                                                      | Rancher agent
  TCP      | 2379  | etcd nodes                                                                         | etcd client requests
  TCP      | 2380  | etcd nodes                                                                         | etcd peer communication
  UDP      | 8472  | etcd nodes, controlplane nodes, worker nodes                                       | Canal/Flannel VXLAN overlay networking
  TCP      | 9099  | controlplane node itself (local traffic, not across nodes; see Local node traffic) | Canal/Flannel livenessProbe/readinessProbe
  TCP      | 10250 | etcd nodes, controlplane nodes, worker nodes                                       | kubelet
  TCP      | 10254 | controlplane node itself (local traffic, not across nodes; see Local node traffic) | Ingress controller livenessProbe/readinessProbe

worker nodes: nodes with the role worker

worker nodes - Inbound rules

  Protocol | Port        | Source                                                                       | Description
  TCP      | 22          | Any network that you want to be able to remotely access this node from (Linux worker nodes only) | Remote access over SSH
  TCP      | 3389        | Any network that you want to be able to remotely access this node from (Windows worker nodes only) | Remote access over RDP
  TCP      | 80          | Any source that consumes Ingress services                                    | Ingress controller (HTTP)
  TCP      | 443         | Any source that consumes Ingress services                                    | Ingress controller (HTTPS)
  TCP      | 2376        | Rancher nodes                                                                | Docker daemon TLS port used by Docker Machine (only needed when using Node Driver/Templates)
  UDP      | 8472        | etcd nodes, controlplane nodes, worker nodes                                 | Canal/Flannel VXLAN overlay networking
  TCP      | 9099        | worker node itself (local traffic, not across nodes; see Local node traffic) | Canal/Flannel livenessProbe/readinessProbe
  TCP      | 10250       | controlplane nodes                                                           | kubelet
  TCP      | 10254       | worker node itself (local traffic, not across nodes; see Local node traffic) | Ingress controller livenessProbe/readinessProbe
  TCP/UDP  | 30000-32767 | Any source that consumes NodePort services                                   | NodePort port range

worker nodes - Outbound rules

  Protocol | Port  | Destination                                                                  | Description
  TCP      | 443   | Rancher nodes                                                                | Rancher agent
  TCP      | 6443  | controlplane nodes                                                           | Kubernetes apiserver
  UDP      | 8472  | etcd nodes, controlplane nodes, worker nodes                                 | Canal/Flannel VXLAN overlay networking
  TCP      | 9099  | worker node itself (local traffic, not across nodes; see Local node traffic) | Canal/Flannel livenessProbe/readinessProbe
  TCP      | 10254 | worker node itself (local traffic, not across nodes; see Local node traffic) | Ingress controller livenessProbe/readinessProbe

Information on local node traffic

Kubernetes healthchecks (livenessProbe and readinessProbe) are executed on the host itself. On most nodes, this is allowed by default. When you have applied strict host firewall (i.e. iptables) policies on the node, or when you are using nodes that have multiple interfaces (multihomed), this traffic gets blocked. In this case, you have to explicitly allow this traffic in your host firewall, or, in the case of public/private cloud hosted machines (i.e. AWS or OpenStack), in your security group configuration. Keep in mind that when you use a security group as Source or Destination in a security group rule, it only applies to the private interface of the nodes/instances.

If you are using an external firewall, make sure the required ports are opened between the machine you are using to run rke and the nodes that you are going to use in the cluster.
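A quick way to probe reachability of, for example, TCP/6443 from the machine running rke is bash's built-in /dev/tcp, which needs no extra tools (NODE_IP below is a placeholder to replace with a real node address):

```shell
# Probe TCP/6443 on a node; NODE_IP is a placeholder for illustration
NODE_IP=127.0.0.1
if timeout 3 bash -c "exec 3<>/dev/tcp/$NODE_IP/6443" 2>/dev/null; then
  reach=open
else
  reach="closed or filtered"
fi
echo "TCP/6443 on $NODE_IP: $reach"
```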

Opening port TCP/6443 with iptables

  # Open TCP/6443 for all
  iptables -A INPUT -p tcp --dport 6443 -j ACCEPT
  # Open TCP/6443 for one specific IP
  iptables -A INPUT -p tcp -s your_ip_here --dport 6443 -j ACCEPT
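Note that rules added with iptables -A live only in kernel memory and are lost on reboot. A guarded sketch for dumping the live rule set for inspection; the persistence packages named in the comments are the common ones but vary by distribution:

```shell
# Dump the current rules (needs iptables-save and, usually, root)
saved=no
if command -v iptables-save >/dev/null 2>&1; then
  iptables-save > /tmp/iptables.rules 2>/dev/null && saved=yes
fi
echo "rule dump written: $saved"
# Persist per distribution:
#   RHEL/CentOS:   yum install iptables-services, then `service iptables save`
#   Debian/Ubuntu: apt install iptables-persistent, then `netfilter-persistent save`
```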

Opening port TCP/6443 with firewalld

  # Open TCP/6443 for all
  firewall-cmd --zone=public --add-port=6443/tcp --permanent
  firewall-cmd --reload
  # Open TCP/6443 for one specific IP
  firewall-cmd --permanent --zone=public --add-rich-rule='
    rule family="ipv4"
    source address="your_ip_here/32"
    port protocol="tcp" port="6443" accept'
  firewall-cmd --reload