The EKCO add-on is a utility tool for performing maintenance operations on a kURL cluster.
The add-on installs the EKCO operator into the kurl namespace.
```yaml
spec:
  ekco:
    version: "latest"
    nodeUnreachableToleration: 1h
    minReadyMasterNodeCount: 2
    minReadyWorkerNodeCount: 0
    rookShouldUseAllNodes: true
    shouldDisableRebootServices: true
    shouldDisableClearNodes: false
    shouldEnablePurgeNodes: false
    autoUpgradeSchedule: Sat 17:30
    enableInternalLoadBalancer: true
```
Flag | Usage |
---|---|
version | The version of EKCO to be installed. |
nodeUnreachableToleration | How long a node must be unreachable before it is considered dead. Default is 1h. |
minReadyMasterNodeCount | Do not purge a node if doing so would leave fewer than this many ready primary nodes. Default is 2. |
minReadyWorkerNodeCount | Do not purge a node if doing so would leave fewer than this many ready worker nodes. Default is 0. |
rookShouldUseAllNodes | Disables management of nodes in the CephCluster resource. If false, EKCO adds nodes to the storage list and removes them when a node is purged. |
shouldDisableRebootServices | Do not install the systemd shutdown service that cordons a node and deletes pods with PVC and shared filesystem volumes mounted. |
shouldDisableClearNodes | Do not force-delete pods stuck in the Terminating state on unreachable nodes. |
shouldEnablePurgeNodes | Automatically delete and clean up unreachable nodes. |
autoUpgradeSchedule | Schedule for automatically checking for and installing updates, when `auto-upgrades-enabled` is used. |
enableInternalLoadBalancer | Run an internal load balancer with HAProxy listening on localhost:6444 on all nodes. |
podImageOverrides | A list of pod container image overrides in the format `[original]=[overridden]`. |
This section describes maintenance tasks that the EKCO operator performs.
The clear nodes feature ensures that pods running on an unreachable node are quickly rescheduled to healthy nodes.
When a node is unreachable for more than forty seconds, Kubernetes changes the node's ready status to Unknown.
After five minutes in the Unknown state, Kubernetes deletes all of the pods on the unreachable node so they can be rescheduled on healthy nodes.
The deleted pods typically remain in the Terminating state since kubelet is not reachable to confirm that the pods have stopped.
If a pod mounts a PVC, it maintains its lock on the PVC while stuck in the Terminating state and replacement pods are not able to start.
This can cause applications using PVCs to be unavailable longer than the five minute grace period applied by Kubernetes.
To avoid extended downtime, the EKCO operator watches for nodes in the Unknown state for more than five minutes and force deletes all pods on those nodes that have been terminating for at least 30 seconds. This 30 seconds, in addition to the 5 minute 40 second latency period before Kubernetes begins deleting pods on unreachable nodes, means that a minimum of 6 minutes 10 seconds passes before pods can begin to be rescheduled. In practice, pods take 7 to 10 minutes to be rescheduled due to a variety of factors, such as whether EKCO itself was on the lost node and the image pull times on the healthy nodes.
The clear node feature is a safer alternative to the purge node feature and is enabled by default. With the clear node feature, when a node is lost the cluster is degraded until the node is cleaned up. In a degraded state, new nodes cannot join the cluster, the cluster cannot be upgraded, and cluster components report health warnings. For more information, see the command below for manually purging a lost node.
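The force deletion that EKCO performs is roughly equivalent to the following manual command, shown here only as a sketch with placeholder pod and namespace names:

```bash
# Force-delete a pod stuck in Terminating on an unreachable node
# (<pod-name> and <namespace> are placeholders)
kubectl delete pod <pod-name> -n <namespace> --force --grace-period=0
```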
When enabled, the EKCO operator automatically purges failed nodes that have been unreachable for more than `node_unreachable_toleration` (Default: 5 minutes).
The following steps are taken during a purge:

- Purge the node's Ceph OSD by running `ceph osd purge <id>` from the Rook operator pod.
- Remove the node from the CephCluster resource's storage node list, unless Rook manages storage with `useAllNodes: true`.
- Delete the Kubernetes Node resource.
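Node purging is opt-in. A minimal spec sketch that enables it, using the fields from the flag reference above:

```yaml
spec:
  ekco:
    version: "latest"
    shouldEnablePurgeNodes: true
    minReadyMasterNodeCount: 2
    minReadyWorkerNodeCount: 0
```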
A command is made available on all primary nodes to manually purge a node. This command takes a parameter `[name]`, the name of the node that you want to purge, and inherits all of its configuration from the EKCO operator running in the cluster.
```
$ ekco-purge-node --help
Manually purge a Kurl cluster node

Usage:
  ekco purge-node [name] [flags]

Flags:
      --certificates_dir string       Kubernetes certificates directory (default "/etc/kubernetes/pki")
  -h, --help                          help for purge-node
      --maintain_rook_storage_nodes   Add and remove nodes to the ceph cluster and scale replication of pools
      --min_ready_master_nodes int    Minimum number of ready primary nodes required for auto-purge (default 2)
      --min_ready_worker_nodes int    Minimum number of ready secondary nodes required for auto-purge

Global Flags:
      --config string      Config file (default is /etc/ekco/config.yaml)
      --log_level string   Log level (default "info")
```
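For example, to manually purge a lost node named `node-3` (a hypothetical node name), run the following from any primary node:

```bash
ekco-purge-node node-3
```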
The EKCO operator is responsible for appending nodes to the CephCluster `storage.nodes` setting to include the node in the list of nodes used by Ceph for storage. This operation only appends nodes. Removing nodes is done during a purge.
EKCO is also responsible for adjusting the Ceph block pool, filesystem, and object store replication factors up and down in accordance with the size of the cluster, from `min_ceph_pool_replication` (Default: 1) to `max_ceph_pool_replication` (Default: 3).
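To inspect the result, the following commands can be used, assuming the default kURL Rook installation where the CephCluster is named rook-ceph in the rook-ceph namespace and the block pool is named replicapool:

```bash
# List the nodes EKCO has added to the CephCluster storage list
kubectl -n rook-ceph get cephcluster rook-ceph \
  -o jsonpath='{.spec.storage.nodes[*].name}'

# Show the current replication factor of the Ceph block pool
kubectl -n rook-ceph get cephblockpool replicapool \
  -o jsonpath='{.spec.replicated.size}'
```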
EKCO supports automatic certificate rotation for the registry add-on and the Kubernetes control plane as of version 0.5.0, and for the KOTS add-on as of version 0.7.0.
EKCO installs the `ekco-reboot.service` to safely unmount pod volumes before the system shuts down. When this service is stopped, which happens automatically as the system begins to shut down, it runs `/opt/ekco/shutdown.sh`. The shutdown script deletes pods on the current node that mount volumes provisioned by Rook and cordons the node. When the `ekco-reboot.service` is started, which happens automatically at system startup after Docker is running, it runs `/opt/ekco/startup.sh`. This script uncordons the node.
The shutdown script can fail to complete because it depends on services running on the node to delete pods, and those services may already be shutting down. To avoid race conditions, manually run the ekco-reboot service's shutdown script before proceeding with a system shutdown or reboot:

```bash
/opt/ekco/shutdown.sh
```
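For a planned shutdown or reboot, the full sequence looks roughly like the sketch below; the startup script runs automatically on boot, so no manual uncordon is needed:

```bash
# Run as root on the node being rebooted
/opt/ekco/shutdown.sh   # deletes pods with Rook-provisioned volumes and cordons this node
reboot                  # on boot, ekco-reboot.service runs /opt/ekco/startup.sh to uncordon the node
```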
EKCO 0.11.0+ can maintain an internal load balancer that forwards all traffic from host port 6444 to one of the Kubernetes API server pods. To do this, EKCO runs HAProxy as a static pod on all nodes. EKCO ensures that, as nodes are added to or removed from the cluster, the correct HAProxy configuration is applied on all nodes.
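The internal load balancer is enabled with the `enableInternalLoadBalancer` field from the flag reference above, for example:

```yaml
spec:
  ekco:
    version: "latest"
    enableInternalLoadBalancer: true
```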
If an auto-upgrade schedule is included in your kURL specification and the end user passes the `auto-upgrades-enabled` flag to the install script, a systemd service named `ekco-upgrade` is installed to automatically check for and install updates to Kubernetes and the add-ons.
The schedule must be a valid systemd calendar event time.
This feature is available in version 0.8.0+.
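To check that a schedule expression is valid and see when it would next trigger, standard systemd tooling can be used; this is an optional sanity check, not something the installer requires:

```bash
systemd-analyze calendar "Sat 17:30"
```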
The following specification and command enable automatic upgrades for an installation:
```yaml
spec:
  ekco:
    version: "latest"
    autoUpgradeSchedule: Sat 17:30
```

```bash
curl -sSL https://kurl.sh/<installer-id> | sudo bash -s auto-upgrades-enabled
```
Auto-upgrades are only supported on single-node online installations. This feature is only relevant for named installers that have changes to kURL specifications without changes to the URL where the specification is hosted.
To view the logs from the last attempted upgrade, use the following command:

```bash
journalctl -u ekco-upgrade.service
```
Use this command to see the previous and next scheduled upgrade times for the ekco-upgrade service:

```bash
systemctl list-timers
```
With auto-resource scaling, EKCO automatically scales some cluster resources that were installed by kURL to the specified replica count.
Auto-resource scaling is useful because a subset of cluster resources require that all nodes join the cluster before the desired replica count can be fulfilled. This can cause issues such as false positives in health check error reporting. Auto-resource scaling helps to avoid issues like this by scaling this subset of custom resources to the specified replica count without requiring that all nodes join the cluster.
Auto-resource scaling is available in v0.13.0 and later.
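For reference, the manual equivalent of what auto-resource scaling automates is scaling a resource to its desired replica count by hand, sketched here with placeholder names:

```bash
# Manually scale a deployment to the desired replica count
# (<namespace> and <deployment-name> are placeholders)
kubectl -n <namespace> scale deployment <deployment-name> --replicas=3
```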
EKCO adds an admission controller that can be configured to override container images in pods. A list can be specified in the `podImageOverrides` property as an array of strings in the format `[original]=[overridden]`.
For example:

```yaml
ekco:
  version: latest
  podImageOverrides:
    - projectcontour/contour:v1.18.0=myregistry.io/contour:v1.18.0-fips
```