Kubernetes Cheatsheet [WIP]

kubectl get pods #get pods from the default namespace
kubectl get pods -o wide #get pods with extra columns (node, IP)
kubectl get nodes #get nodes
kubectl get pods --all-namespaces #list pods in all namespaces
kubectl drain node_name --ignore-daemonsets --force #drain a node; --ignore-daemonsets skips DaemonSet-managed pods, --force deletes pods not managed by a ReplicationController, ReplicaSet, Job, DaemonSet, or StatefulSet
kubectl uncordon node_name #allow pods to be scheduled on the node again after draining

# implement a manifest, create a pod
kubectl apply -f pod.yaml
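A minimal pod.yaml might look like this (the pod name, container name, and image are placeholders):

```yaml
# pod.yaml - a minimal Pod manifest (hypothetical names/image)
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: test-app
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
```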

# List of supported resources
kubectl api-resources

# get output in a different format or filter the results
kubectl get pods -o json
kubectl get pods -o yaml
kubectl get pods -o wide --sort-by .spec.nodeName
kubectl get pods -n kube-system --selector k8s-app=calico-node

# info-gathering

kubectl describe pod pod_name

# execute commands in pods 
# -c is the containername

kubectl exec my-pod -c busybox -- echo "test output"

# delete a pod

kubectl delete pod my-pod

# describe a deployment
kubectl describe deployment my-deployment


# get daemonsets
kubectl get daemonsets



There are two different approaches to managing resources:

Imperative Management
kubectl create is an example of Imperative Management. With this approach you tell the Kubernetes API exactly which operation to perform (create, replace, or delete an object), not what you want the cluster's end state to look like.

Declarative Management
kubectl apply is part of the Declarative Management approach: you describe the desired state in a manifest and Kubernetes works out the operations needed to reach it. Changes made to a live object outside the manifest (e.g. through kubectl scale) are "maintained" even when you apply other changes to the object, as long as the manifest does not set those fields.


# Run kubectl create to see a list of objects that can be created with imperative commands.
kubectl create

# Create a deployment imperatively.
kubectl create deployment my-deployment --image=nginx

# Do a dry run to get some sample yaml without creating the object.
kubectl create deployment my-deployment --image=nginx --dry-run=client -o yaml

# Save the yaml to a file.
kubectl create deployment my-deployment --image=nginx --dry-run=client -o yaml > deployment.yml

# Create the object using the file.
kubectl create -f deployment.yml
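The generated deployment.yml looks roughly like this (trimmed; the exact output depends on the kubectl version):

```yaml
# deployment.yml - roughly what the dry run generates (trimmed)
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-deployment
  name: my-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-deployment
  template:
    metadata:
      labels:
        app: my-deployment
    spec:
      containers:
      - image: nginx
        name: nginx
```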

# Scale a deployment and record the command.
# The recorded command is stored in the kubernetes.io/change-cause annotation, visible in kubectl describe.
kubectl scale deployment my-deployment --replicas=5 --record












--- 

# upgrade kubeadm


# Drain the control plane node.
kubectl drain <control plane node name> --ignore-daemonsets

# Upgrade kubeadm.
sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubeadm=1.20.2-00
kubeadm version

# Plan the upgrade.
sudo kubeadm upgrade plan v1.20.2

# Upgrade the control plane components.
sudo kubeadm upgrade apply v1.20.2

# Upgrade kubelet and kubectl on the control plane node.
sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubelet=1.20.2-00 kubectl=1.20.2-00

# Restart kubelet.
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# Uncordon the control plane node.
kubectl uncordon <control plane node name>

# Verify that the control plane is working.
kubectl get nodes

# Upgrade the worker nodes.
# Note: in a real-world scenario, do not upgrade all worker nodes at the same time.
# Make sure enough nodes are available at any given time to provide uninterrupted service.

# Run the following on the control plane node to drain worker node 1:
kubectl drain <worker 1 node name> --ignore-daemonsets --force

# Log in to the first worker node, then upgrade kubeadm.
sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubeadm=1.20.2-00
kubeadm version

# Upgrade the kubelet configuration on the worker node.
sudo kubeadm upgrade node

# Upgrade kubelet and kubectl on the worker node.
sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubelet=1.20.2-00 kubectl=1.20.2-00

# Restart kubelet.
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# From the control plane node, uncordon worker node 1.
kubectl uncordon <worker 1 node name>

# Repeat the upgrade process for worker node 2.
# From the control plane node, drain worker node 2.
kubectl drain <worker 2 node name> --ignore-daemonsets --force

# On the second worker node, upgrade kubeadm.
sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubeadm=1.20.2-00
kubeadm version

# Perform the upgrade on worker node 2, then upgrade kubelet/kubectl and restart kubelet.
sudo kubeadm upgrade node
sudo apt-get update && \
sudo apt-get install -y --allow-change-held-packages kubelet=1.20.2-00 kubectl=1.20.2-00
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# From the control plane node, uncordon worker node 2.
kubectl uncordon <worker 2 node name>

# Verify that the cluster is upgraded and working.
kubectl get nodes



Original documentation: Upgrading kubeadm clusters | Kubernetes



# Back up etcd
# (in a secured cluster you will also need --cacert, --cert, and --key)
etcdctl --endpoints $ENDPOINT snapshot save file_name
# Restore writes the data into a new data directory.
etcdctl snapshot restore file_name --data-dir /path/to/new/data-dir

# service accounts

# Create a basic ServiceAccount.

vi my-serviceaccount.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-serviceaccount
kubectl create -f my-serviceaccount.yml

# Create a ServiceAccount with an imperative command.

kubectl create sa my-serviceaccount2 -n default

# View your ServiceAccount.

kubectl get sa

# Attach a Role to the ServiceAccount with a RoleBinding.

vi sa-pod-reader.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sa-pod-reader
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-serviceaccount
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
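The RoleBinding above refers to a Role named pod-reader, which must already exist in the same namespace. A sketch of what that Role might look like (the rules shown are an assumption, granting read-only access to pods):

```yaml
# pod-reader-role.yml - hypothetical Role granting read access to pods
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
```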


kubectl create -f sa-pod-reader.yml
# Get additional information for the ServiceAccount.

kubectl describe sa my-serviceaccount
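To actually run a pod under the ServiceAccount, reference it in the pod spec via serviceAccountName (a hypothetical example; pod name, container name, and image are placeholders):

```yaml
# sa-pod.yml - hypothetical pod running as my-serviceaccount
apiVersion: v1
kind: Pod
metadata:
  name: sa-pod
spec:
  serviceAccountName: my-serviceaccount
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
```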

# monitoring 

kubectl top pod
kubectl top pod --sort-by cpu
kubectl top pod --selector app=test-app
kubectl top node


# label a node

kubectl label nodes node_name node_label=somelabel


Static pod = a pod managed directly by the kubelet on a node, defined by a manifest file on the node's filesystem rather than through the API server (the kubelet creates a mirror pod on the API server for each static pod so it is visible via kubectl).
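Static pod manifests typically live in the kubelet's staticPodPath (commonly /etc/kubernetes/manifests); a hypothetical example:

```yaml
# /etc/kubernetes/manifests/static-web.yaml - hypothetical static pod manifest
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
```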
