Harvester

Fail to Deploy a Multi-node Cluster Due to Incorrect HTTP Proxy Setting

ISO Installation Without a Harvester Configuration File

Configure HTTP Proxy During Harvester Installation

In some environments, you configure the HTTP proxy of the OS environment during Harvester installation.

Configure HTTP Proxy After First Node is Ready

After the first node is installed successfully, you log in to the Harvester GUI to configure the http-proxy setting in the Harvester system settings.

Then you continue to add more nodes to the cluster.

One Node Becomes Unavailable

You may encounter the following issue:

  1. The first node is installed successfully.
  2. The second node is installed successfully.
  3. The third node is installed successfully.
  4. Then the second node changes to the Unavailable state and cannot recover automatically.

Solution

When the nodes in the cluster do not use the HTTP proxy to communicate with each other, you need to configure http-proxy.noProxy with the CIDR used by those nodes after the first node is installed successfully.

For example, if your cluster assigns node IPs from the CIDR 172.26.50.128/27 via DHCP or a static setting, add this CIDR to noProxy, as sketched below.
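The value of the http-proxy setting is a JSON string. A minimal sketch of a value that keeps node-to-node traffic off the proxy, assuming a hypothetical proxy at proxy.example.com:3128:

    {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1,172.26.50.128/27"
    }

The key point is that the node CIDR appears in noProxy, so requests between cluster nodes bypass the proxy.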

After setting this, you can continue to add new nodes to the cluster.

For more details, please refer to Harvester issue 3091.

ISO Installation With a Harvester Configuration File

When a Harvester configuration file is used in the ISO installation, configure a proper http-proxy value in the Harvester system settings.
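As a sketch, assuming the system_settings section of the Harvester configuration file and the same hypothetical proxy as above, the setting can be seeded at install time:

    system_settings:
      http-proxy: '{"httpProxy":"http://proxy.example.com:3128","httpsProxy":"http://proxy.example.com:3128","noProxy":"localhost,127.0.0.1,172.26.50.128/27"}'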

PXE Boot Installation

When PXE boot installation is adopted, configure a proper http-proxy in both the OS environment and the Harvester system settings.
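In addition to the system_settings snippet shown above, the OS-level proxy variables can be set in the same configuration file. A minimal sketch, assuming the os.environment section and the same hypothetical proxy:

    os:
      environment:
        http_proxy: http://proxy.example.com:3128
        https_proxy: http://proxy.example.com:3128
        no_proxy: localhost,127.0.0.1,172.26.50.128/27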

Generate a Support Bundle

Users can generate a support bundle in the Harvester GUI with the following steps:

  • Click the Support link at the bottom-left of the Harvester Web UI.

  • Click the Generate Support Bundle button.

  • Enter a useful description for the support bundle and click Create to generate and download a support bundle.

Access Embedded Rancher and Longhorn Dashboards

Available as of v1.1.0

You can now access the embedded Rancher and Longhorn dashboards directly on the Support page, but you must first go to the Preferences page and check the Enable Extension developer features box under Advanced Features.


Note: We only support using the embedded Rancher and Longhorn dashboards for debugging and validation purposes. For Rancher's multi-cluster and multi-tenant integration, please refer to the Rancher integration docs.

I can’t access Harvester after I changed SSL/TLS enabled protocols and ciphers

If you changed the SSL/TLS enabled protocols and ciphers settings and you no longer have access to the Harvester GUI and API, it is likely that the NGINX Ingress Controller has stopped working due to the misconfigured SSL/TLS protocols and ciphers. Follow these steps to reset the setting:

  1. Follow the FAQ to SSH into a Harvester node and switch to the root user:

     $ sudo -s

  2. Edit the setting ssl-parameters manually using kubectl:

     # kubectl edit settings ssl-parameters

  3. Delete the line value: ... so that the NGINX Ingress Controller falls back to the default protocols and ciphers:

     apiVersion: harvesterhci.io/v1beta1
     default: '{}'
     kind: Setting
     metadata:
       name: ssl-parameters
     ...
     value: '{"protocols":"TLS99","ciphers":"WRONG_CIPHER"}' # <- Delete this line

  4. Save the change. After you exit the editor, you should see the following response:

     setting.harvesterhci.io/ssl-parameters edited

You can further check the logs of Pod rke2-ingress-nginx-controller to see if NGINX Ingress Controller is working correctly.
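A minimal sketch of that check, assuming the controller pods run in the kube-system namespace as in a default RKE2 deployment:

    # Find the ingress controller pod(s)
    kubectl -n kube-system get pods | grep rke2-ingress-nginx-controller
    # Tail the logs of a pod found above (the pod name is a placeholder)
    kubectl -n kube-system logs -f rke2-ingress-nginx-controller-<suffix>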

Network interfaces are not showing up

You may be unable to find the correct interface with a 10G uplink because the interface does not show up. The uplink does not show up when the ixgbe module fails to load because an unsupported SFP+ module type is detected.

How to identify the issue with the unsupported SFP?

Execute the command lspci | grep -i net to see the number of NIC ports connected to the motherboard. By running the command ip a, you can gather information about the detected interfaces. If the number of detected interfaces is less than the number of identified NIC ports, then it’s likely that the problem arises from using an unsupported SFP+ module.
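A minimal sketch of this comparison, plus a kernel-log check for the driver's SFP+ complaint (the exact message wording varies by kernel version):

    # Count NIC ports on the PCI bus
    lspci | grep -i net
    # Count the interfaces the kernel actually created
    ip a
    # Look for ixgbe messages about an unsupported SFP+ module
    dmesg | grep -i sfp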

Testing

You can perform a simple test to verify whether the unsupported SFP+ is the cause. Follow these steps on a running node:

  1. Create the file /etc/modprobe.d/ixgbe.conf manually with the following content:

     options ixgbe allow_unsupported_sfp=1

  2. Then run the following command:

     rmmod ixgbe && modprobe ixgbe

If the above steps are successful and the missing interface shows up, we can confirm that the issue is an unsupported SFP+. However, the above change is not permanent and will be flushed out once the node is rebooted.

Solution

Due to support issues, Intel restricts the types of SFPs used on their NICs. To make the above change persistent, it is recommended to add the following content to a config.yaml during installation.

    os:
      write_files:
      - content: |
          options ixgbe allow_unsupported_sfp=1
        path: /etc/modprobe.d/ixgbe.conf
      - content: |
          name: "reload ixgbe module"
          stages:
            boot:
            - commands:
              - rmmod ixgbe && modprobe ixgbe
        path: /oem/99_ixgbe.yaml