# cilium_cni

- Code: kubernetes/cilium_cni
## Configuration

```yaml
# defaults
enabled: false
commands:
  install:
    script:
      - ansible-playbook k8s_cilium_cni.yml
  uninstall:
    script:
      - ansible-playbook k8s_cilium_cni.yml
namespace: kube-system
subdomain: cilium
version: 1.10.5
```
> **Note**
>
> When `enabled` is `true`, Cilium will be used as the active CNI for k3s.
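Since the module ships disabled, turning it on is a matter of overriding the default. A minimal override sketch, assuming module settings are keyed by the module name in your configuration (adapt to wherever your overrides live):

```yaml
# hypothetical override file: enable the cilium_cni module
cilium_cni:
  enabled: true
  # the pinned default version; change only if you know the playbook supports it
  version: 1.10.5
```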
## Uninstall

After uninstalling `cilium_cni` from a running cluster, it is possible that not all pods are picked up by your new CNI, or that pod networking stops working entirely. When this happens, SSH into each node (`cloudstack ssh master-0` etc.) and check whether any iptables rules from Cilium are left behind: `iptables -L -vn | grep -i cilium`. If there are rules left, you can clean them up with the following commands:
```shell
iptables -D FORWARD -m comment --comment "cilium-feeder: CILIUM_FORWARD" -j CILIUM_FORWARD
iptables -D INPUT -m comment --comment "cilium-feeder: CILIUM_INPUT" -j CILIUM_INPUT
iptables -D OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT" -j CILIUM_OUTPUT
iptables -F CILIUM_INPUT
iptables -F CILIUM_FORWARD
iptables -F CILIUM_OUTPUT
iptables -X CILIUM_OUTPUT
iptables -X CILIUM_FORWARD
iptables -X CILIUM_INPUT
```
Once all rules are gone, reboot all nodes.
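The cleanup above follows one pattern per Cilium chain: detach the feeder rule from the matching built-in chain, flush the Cilium chain, then delete it. A dry-run sketch that only prints these commands (the `cilium_cleanup_cmds` function name is our own; review the output before piping it to `sh` as root on each node):

```shell
#!/bin/sh
# Print (not run) the iptables cleanup commands for every Cilium feeder chain.
# Dry-run by design: inspect the output before executing anything.
cilium_cleanup_cmds() {
    for chain in CILIUM_INPUT CILIUM_FORWARD CILIUM_OUTPUT; do
        parent=${chain#CILIUM_}  # built-in chain: INPUT / FORWARD / OUTPUT
        # 1. detach the feeder rule from the built-in chain
        echo "iptables -D $parent -m comment --comment \"cilium-feeder: $chain\" -j $chain"
    done
    for chain in CILIUM_INPUT CILIUM_FORWARD CILIUM_OUTPUT; do
        # 2. flush all rules out of the Cilium chain
        echo "iptables -F $chain"
    done
    for chain in CILIUM_OUTPUT CILIUM_FORWARD CILIUM_INPUT; do
        # 3. delete the now-empty chain
        echo "iptables -X $chain"
    done
}

cilium_cleanup_cmds
```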
Last update: July 7, 2022