Requirements

As a hyperconverged infrastructure (HCI) solution that runs on bare-metal servers, Harvester has the minimum requirements outlined below.

Hardware Requirements

To get the Harvester server up and running, the following minimum hardware is required:

| Type | Requirements |
| --- | --- |
| CPU | x86_64 only. Hardware-assisted virtualization is required. 8-core processor minimum; 16-core or above preferred |
| Memory | 32 GB minimum, 64 GB or above preferred |
| Disk Capacity | 140 GB minimum for testing, 500 GB or above preferred for production |
| Disk Performance | 5,000+ random IOPS per disk (SSD/NVMe). Management nodes (first 3 nodes) must be fast enough for etcd |
| Network Card | 1 Gbps Ethernet minimum for testing, 10 Gbps Ethernet recommended for production |
| Network Switch | Trunking of ports required for VLAN support |

We recommend server-class hardware for best results. Laptops and nested virtualization are not officially supported.
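If you want a quick sanity check of a node against these minimums before installing, a small script can read the relevant values from Linux `/proc` files. The sketch below is illustrative only and not part of Harvester; its thresholds simply mirror the table above (8 cores, 32 GB memory, hardware-assisted virtualization).

```python
#!/usr/bin/env python3
"""Rough pre-install check against the minimum hardware table above.

Illustrative sketch only; reads Linux /proc files and mirrors the
minimums from the table (8 cores, 32 GB RAM, hardware virtualization).
"""
import os
import re


def cpu_cores() -> int:
    # Logical CPU count as seen by the OS.
    return os.cpu_count() or 0


def has_hw_virtualization() -> bool:
    # x86_64 virtualization extensions appear as vmx (Intel) or svm (AMD).
    with open("/proc/cpuinfo") as f:
        return bool(re.search(r"\b(vmx|svm)\b", f.read()))


def mem_total_gb() -> float:
    # MemTotal in /proc/meminfo is reported in kB.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / (1024 * 1024)
    return 0.0


if __name__ == "__main__":
    checks = {
        "8+ CPU cores": cpu_cores() >= 8,
        "hardware-assisted virtualization": has_hw_virtualization(),
        "32+ GB memory": mem_total_gb() >= 32,
    }
    for name, ok in checks.items():
        print(f"{'OK  ' if ok else 'FAIL'} {name}")
```

Disk IOPS cannot be derived from `/proc` and still needs a dedicated benchmark run against the actual storage devices.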

Networking

Harvester Hosts Inbound Rules

| Protocol | Port | Source | Description |
| --- | --- | --- | --- |
| TCP | 2379 | Harvester management nodes | etcd client port |
| TCP | 2381 | Harvester management nodes | etcd health checks |
| TCP | 2380 | Harvester management nodes | etcd peer port |
| TCP | 10010 | Harvester management and compute nodes | containerd |
| TCP | 6443 | Harvester management nodes | Kubernetes API |
| TCP | 9345 | Harvester management nodes | Kubernetes API |
| TCP | 10252 | Harvester management nodes | kube-controller-manager health checks |
| TCP | 10257 | Harvester management nodes | kube-controller-manager secure port |
| TCP | 10251 | Harvester management nodes | kube-scheduler health checks |
| TCP | 10259 | Harvester management nodes | kube-scheduler secure port |
| TCP | 10250 | Harvester management and compute nodes | kubelet |
| TCP | 10256 | Harvester management and compute nodes | kube-proxy health checks |
| TCP | 10258 | Harvester management nodes | cloud-controller-manager |
| TCP | 9091 | Harvester management and compute nodes | Canal calico-node felix |
| TCP | 9099 | Harvester management and compute nodes | Canal CNI health checks |
| UDP | 8472 | Harvester management and compute nodes | Canal CNI with VxLAN |
| TCP | 2112 | Harvester management nodes | kube-vip |
| TCP | 6444 | Harvester management and compute nodes | RKE2 agent |
| TCP | 6060 | Harvester management and compute nodes | node-disk-manager |
| TCP | 10246/10247/10248/10249 | Harvester management and compute nodes | Nginx worker processes |
| TCP | 8181 | Harvester management and compute nodes | nginx-ingress-controller |
| TCP | 8444 | Harvester management and compute nodes | nginx-ingress-controller |
| TCP | 10245 | Harvester management and compute nodes | nginx-ingress-controller |
| TCP | 80 | Harvester management and compute nodes | Nginx |
| TCP | 9796 | Harvester management and compute nodes | node-exporter |
| TCP | 30000-32767 | Harvester management and compute nodes | NodePort port range |
| TCP | 22 | Harvester management and compute nodes | sshd |
| UDP | 68 | Harvester management and compute nodes | Wicked |
| TCP | 3260 | Harvester management and compute nodes | iscsid |

Typically, all outbound traffic will be allowed.
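One way to confirm that the inbound rules above are actually in effect is to probe the TCP ports from a peer node. The sketch below is a hypothetical helper, not an official Harvester tool: the node address and port list are placeholders that you would replace with a real node IP and the ports relevant to its role. UDP ports such as 8472 (VxLAN) and 68 (DHCP) cannot be verified with a plain TCP connect and are omitted.

```python
#!/usr/bin/env python3
"""Probe a subset of the TCP ports listed above from a peer node.

Hypothetical example; NODE and PORTS are placeholders to replace with a
real Harvester node address and the ports relevant to its role.
"""
import socket

NODE = "192.168.1.11"  # placeholder: a Harvester node IP
PORTS = {
    6443: "Kubernetes API",
    9345: "Kubernetes API",
    2379: "etcd client port",
    2380: "etcd peer port",
    10250: "kubelet",
    22: "sshd",
}


def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    # A successful TCP handshake means the port is reachable and listening.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for port, description in PORTS.items():
    state = "open" if tcp_reachable(NODE, port) else "unreachable"
    print(f"{NODE}:{port:<5} {state:<12} ({description})")
```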

Integrating Harvester with Rancher

If you want to integrate Harvester with Rancher, you need to make sure that all Harvester nodes can connect to TCP port 443 of the Rancher load balancer.

The VMs of Kubernetes clusters that are provisioned from Rancher into Harvester also need to be able to connect to TCP port 443 of the Rancher load balancer; otherwise, the clusters cannot be managed by Rancher. For more information, see Rancher Architecture.
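The same kind of reachability test can be run from each Harvester node and from the guest-cluster VMs against the Rancher load balancer. The hostname below is a placeholder for your own Rancher endpoint; this is only an illustrative check.

```python
import socket

RANCHER_LB = "rancher.example.com"  # placeholder: your Rancher load balancer

try:
    # A successful connection confirms TCP 443 is reachable from this node or VM.
    with socket.create_connection((RANCHER_LB, 443), timeout=5):
        print(f"TCP 443 to {RANCHER_LB} is reachable")
except OSError as err:
    print(f"Cannot reach {RANCHER_LB}:443 - {err}")
```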

Guest clusters

For the port requirements of guest clusters deployed inside Harvester virtual machines, refer to the following links.