*: This version is deprecated because it is no longer supported by its creator. kURL continues to support it for now, but support will be removed in a future release.
TCP ports 2379, 2380, 6443, 10250, 10251, and 10252 must be open between cluster nodes
The following table lists the core directory requirements.
Name | Location | Requirements |
---|---|---|
etcd | /var/lib/etcd/ | This directory has a high I/O requirement. See Cloud Disk Performance. |
kURL | /var/lib/kurl/ | 5 GB. kURL installs additional dependencies in this directory, including utilities, system packages, and container images. The directory must be writeable by the kURL installer and must have sufficient disk space. Its location can be overridden with the kurl-install-directory flag. See Advanced Options. |
kubelet | /var/lib/kubelet/ | At least 30 GiB total space and less than 80% full. See Host Preflights. |
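Disk capacity and usage for these directories can be spot-checked with standard tools before installing. The following is a minimal sketch that reports total size and percent used for each core directory, mirroring the 80% threshold used by the host preflights; directories that do not exist yet report an error, in which case check the parent filesystem (for example, /var/lib):

```bash
# Report filesystem capacity and percent used for the core kURL directories.
# The 80% usage threshold mirrors the host preflight checks.
for dir in /var/lib/etcd /var/lib/kurl /var/lib/kubelet; do
  df -h --output=target,size,pcent "$dir"
done
```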
The following table lists the add-on directory locations and disk space requirements, if applicable. For any additional requirements, see the specific topic for the add-on.
Name | Location | Requirements |
---|---|---|
Containerd | /var/lib/containerd/ | N/A |
Docker | /var/lib/docker/, /var/lib/dockershim/ | /var/lib/docker/: 30 GB and less than 80% full. /var/lib/dockershim/: N/A. See Docker Add-on. |
Longhorn | /var/lib/longhorn/ | This directory should have enough space to hold a complete copy of every PersistentVolumeClaim in the cluster. For host preflights, it should have at least 50 GiB total space and be less than 80% full. See Longhorn Add-on and Host Preflights. |
OpenEBS | /var/openebs/ | N/A |
Rook | Versions earlier than 1.0.4-x: /opt/replicated/rook. Versions 1.0.4-x and later: /var/lib/rook/ | /opt/replicated/rook requires a minimum of 10 GB and less than 80% full. /var/lib/rook/ requires a 10 GB block device. See Rook Add-on. |
Weave | /var/lib/cni/, /var/lib/weave/ | N/A |
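For the Rook and Longhorn rows above, it can help to confirm the backing storage before installing. The following is a minimal sketch; interpret the output against the requirements in the table:

```bash
# Rook 1.0.4-x and later needs a raw block device of at least 10 GB;
# a suitable candidate shows an empty FSTYPE and no MOUNTPOINT.
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

# Longhorn needs enough space under /var/lib/longhorn/ for a full copy
# of every PersistentVolumeClaim in the cluster.
df -h /var/lib/longhorn
```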
The following domains need to be accessible from servers performing online kURL installs. IP addresses for these services can be found in replicatedhq/ips.
Host | Description |
---|---|
amazonaws.com | tar.gz packages are downloaded from Amazon S3 during embedded cluster installations. The IP ranges to allowlist for accessing these can be scraped dynamically from the AWS IP Address Ranges documentation. |
k8s.gcr.io | Images for the Kubernetes control plane are downloaded from the Google Container Registry repository used to publish official container images for Kubernetes. For more information on the Kubernetes control plane components, see the Kubernetes documentation. |
k8s.kurl.sh | Kubernetes cluster installation scripts and artifacts, including Bash scripts and binary executables, are served from kurl.sh. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. |
No outbound internet access is required for airgapped installations.
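For online installs, reachability of these endpoints can be verified from each server before running the installer. The sketch below uses s3.amazonaws.com as a representative Amazon S3 endpoint; a "NOT reachable" result means the domain is blocked or unreachable from that host:

```bash
# Spot-check HTTPS connectivity to the domains required for online installs.
for host in k8s.kurl.sh k8s.gcr.io s3.amazonaws.com; do
  if curl -sS --connect-timeout 5 -o /dev/null "https://$host"; then
    echo "$host reachable"
  else
    echo "$host NOT reachable"
  fi
done
```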
The kURL install script will prompt to disable firewalld. Note that firewall rules can affect communications between containers on the same machine, so it is recommended to disable these rules entirely for Kubernetes. Firewall rules can be added after or preserved during an install, but because installation parameters like pod and service CIDRs can vary based on local networking conditions, there is no general guidance available on default requirements. See Advanced Options for installer flags that can preserve these rules.
The following ports must be open between nodes for multi-node clusters. For primary nodes:
Protocol | Direction | Port Range | Purpose | Used By |
---|---|---|---|---|
TCP | Inbound | 6443 | Kubernetes API server | All |
TCP | Inbound | 2379-2380 | etcd server client API | Primary |
TCP | Inbound | 10250 | kubelet API | Primary |
UDP | Inbound | 8472 | Flannel VXLAN | All |
TCP | Inbound | 6783 | Weave Net control | All |
UDP | Inbound | 6783-6784 | Weave Net data | All |
TCP | Inbound | 9090 | Rook CSI RBD Plugin Metrics | All |
For secondary nodes:
Protocol | Direction | Port Range | Purpose | Used By |
---|---|---|---|---|
TCP | Inbound | 10250 | kubelet API | Primary |
UDP | Inbound | 8472 | Flannel VXLAN | All |
TCP | Inbound | 6783 | Weave Net control | All |
UDP | Inbound | 6783-6784 | Weave Net data | All |
TCP | Inbound | 9090 | Rook CSI RBD Plugin Metrics | All |
These ports are required by Kubernetes, Flannel, Weave Net, and Rook.
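If firewalld is preserved rather than disabled, the node-to-node ports above can be opened explicitly. The following is a sketch for a primary node only and does not replace the installer's own handling of firewall rules:

```bash
# Open the primary-node ports listed above (firewalld).
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --permanent --add-port=6783/tcp
firewall-cmd --permanent --add-port=6783-6784/udp
firewall-cmd --permanent --add-port=9090/tcp
firewall-cmd --reload
```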
In addition to the ports listed above that must be open between nodes, the following ports should be available on the host for components to start TCP servers accepting local connections.
Port | Purpose |
---|---|
2381 | etcd health and metrics server |
6781 | weave network policy controller metrics server |
6782 | weave metrics server |
10248 | kubelet health server |
10249 | kube-proxy metrics server |
9100 | prometheus node-exporter metrics server |
10257 | kube-controller-manager health server |
10259 | kube-scheduler health server |
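Whether any of these ports already have a listener can be checked before installing; a minimal sketch using ss:

```bash
# Flag required local ports that already have a TCP listener.
for port in 2381 6781 6782 9100 10248 10249 10257 10259; do
  ss -lnt "( sport = :$port )" | grep -q LISTEN \
    && echo "port $port is already in use"
done
```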
In addition to the networking requirements described in the previous section, operating a cluster with high availability imposes additional constraints.
To operate the Kubernetes control plane in HA mode, it is recommended to have a minimum of 3 primary nodes.
In the event that one of these nodes becomes unavailable, the remaining two can still maintain an etcd quorum.
As the cluster scales, consider dedicating these primary nodes to control-plane-only workloads by applying the NoSchedule taint.
This will affect the number of nodes that need to be provisioned.
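As a sketch, dedicating a primary node to the control plane looks like the following; the node name is a placeholder, and the exact taint key (master vs. control-plane) depends on the Kubernetes version:

```bash
# Prevent ordinary application pods from scheduling onto this primary.
# Control-plane components tolerate the taint and are unaffected.
kubectl taint nodes primary-1 node-role.kubernetes.io/master:NoSchedule
```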
The number of required secondary nodes is primarily a function of the desired application availability and throughput. By default, primary nodes in kURL also run application workloads. At least 2 nodes should be used for data durability for applications that use persistent storage (e.g., databases) deployed in-cluster.
Highly available cluster setups that do not leverage EKCO's internal load balancing capability require a load balancer to route requests to healthy nodes. The following requirements need to be met for load balancers used on the control plane (primary nodes):
The load balancer must support hairpinning, that is, nodes addressing each other through the load balancer IP.
The IP or DNS name and port of the load balancer should be provided as an argument to kURL during the HA setup. See Highly Available K8s for more install information.
For more information on configuring load balancers in the public cloud for kURL installs see Public Cloud Load Balancing.
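A quick way to confirm that the load balancer forwards API traffic, including hairpin requests, is to query the Kubernetes API server's health endpoint through it from one of the nodes. The address below is a placeholder:

```bash
# Run from a primary node: this should succeed even though the node is
# reaching itself back through the load balancer (hairpinning).
curl -k "https://lb.example.com:6443/healthz"
```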
Load balancer requirements for application workloads vary depending on workload.
The following example cloud VM instance/disk combinations are known to provide sufficient performance for etcd and will pass the write latency preflight.
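Etcd disk write latency can also be measured directly before committing to an instance type. The following is the widely used fio fdatasync benchmark with parameters that approximate etcd's write pattern; as a rule of thumb, the 99th percentile of fdatasync latency should stay below roughly 10 ms:

```bash
# Measure fdatasync write latency on the disk backing /var/lib/etcd.
fio --name=etcd-perf --directory=/var/lib/etcd \
    --rw=write --ioengine=sync --fdatasync=1 \
    --size=22m --bs=2300
```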