Usage of signals with kill

$ kill -l
 1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP
 6) SIGABRT 7) SIGBUS 8) SIGFPE 9) SIGKILL 10) SIGUSR1
11) SIGSEGV 12) SIGUSR2 13) SIGPIPE 14) SIGALRM 15) SIGTERM
16) SIGSTKFLT 17) SIGCHLD 18) SIGCONT 19) SIGSTOP 20) SIGTSTP
21) SIGTTIN 22) SIGTTOU 23) SIGURG 24) SIGXCPU 25) SIGXFSZ
26) SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGIO 30) SIGPWR
31) SIGSYS 34) SIGRTMIN 35) SIGRTMIN+1 36) SIGRTMIN+2 37) SIGRTMIN+3
38) SIGRTMIN+4 39) SIGRTMIN+5 40) SIGRTMIN+6 41) SIGRTMIN+7 42) SIGRTMIN+8
43) SIGRTMIN+9 44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9 56) SIGRTMAX-8 57) SIGRTMAX-7
58) SIGRTMAX-6 59) SIGRTMAX-5 60) SIGRTMAX-4 61) SIGRTMAX-3 62) SIGRTMAX-2
63) SIGRTMAX-1 64) SIGRTMAX
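
A signal can be sent either by its number or by its name; for example, the following commands all deliver SIGTERM to a process (PID 1234 is a placeholder):

$ kill -15 1234
$ kill -s SIGTERM 1234
$ kill -TERM 1234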

An example of sending the SIGINT signal from the shell

In the following example, we start strace against a given process for a given number of seconds. Once the time is up, we send the SIGINT signal (the same signal Ctrl+C sends) to stop the tracing process, and a trace summary is written to the output file strace.c.out.

$ cat strace_summary.sh
processname=$1          # name of the process to trace
duration=$2             # tracing duration in seconds

# Start strace in summary mode (-c) in the background,
# wait for the requested duration, then stop it with SIGINT.
strace -p "$(pidof "$processname")" -c > strace.c.out 2>&1 &
sleep "$duration"

kill -2 "$(pidof strace)"
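
As a simpler alternative (a sketch assuming the GNU coreutils timeout command is available), the same result can be achieved in one line by letting timeout deliver SIGINT when the duration expires:

$ timeout -s INT "$duration" strace -c -p "$(pidof "$processname")" > strace.c.out 2>&1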


Install the virtctl client tool

Basic VirtualMachineInstance operations can be performed with the stock kubectl utility. However, the virtctl binary utility is required to use advanced features such as:

  • Serial and graphical console access

It also provides convenience commands for:

  • Starting and stopping VirtualMachineInstances
  • Live migrating VirtualMachineInstances
  • Uploading virtual machine disk images

There are two ways to get it:

  • the most recent version of the tool can be retrieved from the official release page
  • it can be installed as a kubectl plugin using krew

Example:

$ export VERSION=v0.48.1
$ wget https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64
$ chmod +x virtctl-${VERSION}-linux-amd64
$ ln -s virtctl-${VERSION}-linux-amd64 virtctl
$ ./virtctl version
Client Version: version.Info{GitVersion:"v0.48.1", ...}
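
If you prefer krew, virtctl can also be installed as a kubectl plugin and then invoked through kubectl (a sketch assuming krew is already set up):

$ kubectl krew install virt
$ kubectl virt version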

Access the virtual machine console

$ ./virtctl -h
virtctl controls virtual machine related operations on your kubernetes cluster.

Available Commands:
  addvolume         add a volume to a running VM
  console           Connect to a console of a virtual machine instance.
  expose            Expose a virtual machine instance, virtual machine, or virtual machine instance replica set as a new service.
  fslist            Return full list of filesystems available on the guest machine.
  guestfs           Start a shell into the libguestfs pod
  guestosinfo       Return guest agent info about operating system.
  help              Help about any command
  image-upload      Upload a VM image to a DataVolume/PersistentVolumeClaim.
  migrate           Migrate a virtual machine.
  pause             Pause a virtual machine
  permitted-devices List the permitted devices for vmis.
  port-forward      Forward local ports to a virtualmachine or virtualmachineinstance.
  removevolume      remove a volume from a running VM
  restart           Restart a virtual machine.
  soft-reboot       Soft reboot a virtual machine instance
  ssh               Open a SSH connection to a virtual machine instance.
  start             Start a virtual machine.
  stop              Stop a virtual machine.
  unpause           Unpause a virtual machine
  usbredir          Redirect a usb device to a virtual machine instance.
  userlist          Return full list of logged in users on the guest machine.
  version           Print the client and server version information.
  vnc               Open a vnc connection to a virtual machine instance.

Use "virtctl <command> --help" for more information about a given command.
Use "virtctl options" for a list of global command-line options (applies to all commands).

$ ./virtctl console vm1
[root@vm1 output]# hostname
vm1
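
To disconnect from the console, press the escape sequence Ctrl+] (virtctl prints the escape sequence when the console is opened).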


In some circumstances, we want to control which node a virtual machine or pod is deployed to. A node selector can be used to assign the virtual machine or pod to a specific node.

Add a label to a node

A label can be added to a node from either the command line or the OpenShift web console.

$ oc get nodes
NAME      STATUS                     ROLES    AGE   VERSION
master1   Ready                      master   46h   v1.20.11+63f841a
master2   Ready                      master   46h   v1.20.11+63f841a
master3   Ready                      master   46h   v1.20.11+63f841a
worker1   Ready                      worker   45h   v1.20.11+63f841a
worker2   Ready                      worker   45h   v1.20.11+63f841a
worker3   Ready,SchedulingDisabled   worker   45h   v1.20.11+63f841a

$ oc label nodes worker1 worker-node-name=w1

$ oc describe node worker1 | grep worker-node-name
worker-node-name=w1

$ oc get nodes --show-labels
NAME      STATUS   ROLES    AGE   VERSION            LABELS
worker1   Ready    worker   45h   v1.20.11+63f841a   <omitted..>worker-node-name=w1
<omitted..>

This can also be done through the OpenShift web console by clicking the “Edit labels” option on a node.


Add a nodeSelector field to the virtual machine

A node selector can be added through the OpenShift web console, and the result can be verified with oc describe:


$ oc describe vm vm1
Spec:
  Node Selector:
    Worker - Node - Name:  w1
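
The same selector can also be set by editing the VirtualMachine manifest directly; a minimal sketch (unrelated fields omitted, using the worker-node-name=w1 label added above):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm1
spec:
  template:
    spec:
      nodeSelector:
        worker-node-name: w1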


StatefulSet is the workload API object used to manage stateful applications.

Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.

If you want to use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution. Although individual Pods in a StatefulSet are susceptible to failure, the persistent Pod identifiers make it easier to match existing volumes to the new Pods that replace any that have failed.


The example below demonstrates the components of a StatefulSet.

$ cat perfbench-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: perfbench
spec:
  serviceName: perfbench
  replicas: 1
  selector:
    matchLabels:
      app: perfbench
  template:
    metadata:
      labels:
        app: perfbench
    spec:
      containers:
      - name: perfbench
        image: noname/perfbench:latest
        volumeMounts:
        - name: perfbench-data
          mountPath: /perfdata
        - name: perfbench-log
          mountPath: /perflog
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: perfbench-data
    spec:
      storageClassName: <storage-class>
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
  - metadata:
      name: perfbench-log
    spec:
      storageClassName: <storage-class>
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

The StatefulSet can be created as follows:

$ kubectl apply -f perfbench-statefulset.yaml

If the StatefulSet fails to come up, you can check the pod events with the following command:

$ kubectl describe pod perfbench-0
Name:         perfbench-0
Namespace:    default
Priority:     0
Node:         <hostname>/<ip-address>
Start Time:   Sat, 26 Feb 2022 06:22:27 +0000
Labels:       app=perfbench
              controller-revision-hash=perfbench-7657fb8779
              statefulset.kubernetes.io/pod-name=perfbench-0
Annotations:  cni.projectcalico.org/containerID: c8569d9a3f01f546ca92fb2d0cba98d4d971933e53ab83373ed34c91040d92bc
              cni.projectcalico.org/podIP: 192.168.201.145/32
              cni.projectcalico.org/podIPs: 192.168.201.145/32
Status:       Running
IP:           192.168.201.145
IPs:
  IP:           192.168.201.145
Controlled By:  StatefulSet/perfbench
Containers:
  perfbench:
    Container ID:   docker://b6a2c838837395c1c85913d4735e3167d3cea6b5ed3ade276584461d643eaee5
    Image:          noname/perfbench:latest
    Image ID:       docker-pullable://noname/perfbench@sha256:5460e3c04ea972afcde3db092b514919867f87974d012f046c538ac816c7aaae
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 26 Feb 2022 06:22:35 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /perfdata from perfbench-data (rw)
      /perflog from perfbench-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8qhw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  perfbench-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  perfbench-data-perfbench-0
    ReadOnly:   false
  perfbench-log:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  perfbench-log-perfbench-0
    ReadOnly:   false
  kube-api-access-x8qhw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  20s (x2 over 21s)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         18s                default-scheduler  Successfully assigned default/perfbench-0 to <hostname>
  Normal   Pulling           16s                kubelet            Pulling image "noname/perfbench:latest"
  Normal   Pulled            11s                kubelet            Successfully pulled image "noname/perfbench:latest" in 4.368378217s
  Normal   Created           10s                kubelet            Created container perfbench
  Normal   Started           10s                kubelet            Started container perfbench

You can check the pod status and log in to the pod's container as shown below.

$ kubectl get pod
NAME         READY   STATUS    RESTARTS   AGE
perfbench-0   1/1     Running   0          3m44s

$ kubectl exec -it perfbench-0 -- bash


In a Kubernetes environment, we sometimes want to keep a container (pod) alive so that it does not exit immediately after starting. Two methods are shown below.

Method one

1. Use a long-running command in the Dockerfile (a minimal Dockerfile sketch is shown after these steps):

CMD ["sh", "-c", "tail -f /dev/null"]

or

CMD ["sh", "-c", "sleep infinity"]

2. Build the Docker image and push it to a Docker repository.

3. Launch the container:

$ kubectl run mycontainer -it --image=<docker-image-name>
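
A minimal Dockerfile sketch for step 1, assuming a stock Ubuntu base image (the image name <docker-image-name> is a placeholder):

$ cat Dockerfile
FROM ubuntu:22.04
# Keep the container alive so it can be exec'ed into later
CMD ["sh", "-c", "sleep infinity"]

$ docker build -t <docker-image-name> .
$ docker push <docker-image-name>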

Method two

When deploying an application with a Kubernetes StatefulSet, the keep-alive command can also be added to the StatefulSet YAML file instead of being baked into the Docker image through the Dockerfile.

$ cat myapp.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: noname/myapp:latest
        command: ["sh", "-c", "tail -f /dev/null"]
        imagePullPolicy: Always
        volumeMounts:
        - name: myapp-data
          mountPath: /data
        - name: myapp-log
          mountPath: /log
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: myapp-data
    spec:
      storageClassName: <storage-class>
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
  - metadata:
      name: myapp-log
    spec:
      storageClassName: <storage-class>
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

$ kubectl apply -f myapp.yaml          
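
After applying the manifest, you can verify that the container keeps running and open a shell into it (the pod is named myapp-0 because it is the first replica of the myapp StatefulSet):

$ kubectl get pod myapp-0
$ kubectl exec -it myapp-0 -- bash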

Install Ghost locally

GitHub Pages is a great solution for hosting a static website. If you use Ghost to manage your website, you can install Ghost locally and convert it to a static website in order to host it on GitHub.

Install an Ubuntu virtual machine on Windows (optional)

Install Ghost-CLI

Ghost-CLI is a command-line tool that helps you get Ghost installed and configured for use, quickly and easily.

$ npm install ghost-cli@latest -g

Ghost runs as a background process and remains running until you stop or restart it. The following are some useful commands:

ghost help
ghost ls
ghost log
ghost stop
ghost start

Install Ghost

$ mkdir my-ghost-website
$ cd my-ghost-website
$ ghost install local

Once Ghost is installed, you can access Ghost Admin at http://localhost:2368/ghost.

Generate the static website

To generate the static website, we'll use the Ghost static site generator. On Linux and macOS, this tool can be installed and used directly:

$ sudo npm install -g ghost-static-site-generator

We will push the generated static pages to a GitHub repository named username.github.io, so the site is generated against that URL.

To generate the static pages, run the following command; they are written to a folder called static.

$ gssg --url https://username.github.io

Push the static pages to Github

Before pushing the static pages to GitHub, a repository called username.github.io should be created on the GitHub website.

The static pages can then be pushed to the GitHub repository as follows:

$ cd static
$ git init .
$ git remote add origin https://github.com/username/username.github.io.git
$ git branch -M main
$ git add .
$ git commit -m 'Init my website'
$ git push -u origin main

After the push, GitHub builds and deploys the pages automatically, and the updated website becomes accessible within a few minutes.


Here is an example that demonstrates how to assign a StatefulSet and its pod to a target worker node.

The affinity field defines which node the pod is scheduled on by matching node labels. First, list the nodes with their labels and add a label to the target node:

$ kubectl get nodes -o wide --show-labels

$ kubectl label nodes <node-name> noname.io/fio=true

$ cat statefulset-node-select.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: fiobench
spec:
  serviceName: fiobench
  replicas: 1
  selector:
    matchLabels:
      app: fiobench
  template:
    metadata:
      labels:
        app: fiobench
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: noname.io/fio
                operator: In
                values:
                - "true"
      containers:
      - name: fiobench
        image: noname/perfbench:fio
        imagePullPolicy: Always
        volumeMounts:
        - name: fiobench-data-1
          mountPath: /fiodata1
        - name: fiobench-data-2
          mountPath: /fiodata2
        securityContext:
          privileged: true
      imagePullSecrets:
      - name: regcred
  volumeClaimTemplates:
  - metadata:
      name: fiobench-data-1
    spec:
      storageClassName: fio-repl-node-select
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1400Gi
  - metadata:
      name: fiobench-data-2
    spec:
      storageClassName: fio-repl-node-select
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1400Gi

$ kubectl apply -f statefulset-node-select.yaml
$ kubectl get statefulset
$ kubectl get pod -o wide
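
To confirm the pod landed on the labeled node, you can print the node it was scheduled on (a sketch assuming the pod is named fiobench-0):

$ kubectl get pod fiobench-0 -o jsonpath='{.spec.nodeName}'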

Log in to Docker Hub

In order to pull a private image from Docker Hub, you must authenticate with the registry. Use the docker CLI to log in to Docker Hub as shown below; a username and password are needed.

$ docker login

The login process creates or updates the config.json file which holds an authorization token.

$ cat /root/.docker/config.json
{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "xxx="
        }
    }
}

Create a Secret based on existing credentials

A Kubernetes cluster uses the Secret of kubernetes.io/dockerconfigjson type to authenticate with a container registry to pull a private image.

If you already ran docker login, you can copy that credential into Kubernetes:

$ kubectl create secret generic regcred --from-file=.dockerconfigjson=/root/.docker/config.json --type=kubernetes.io/dockerconfigjson
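
Alternatively, if you do not have a config.json on hand, the Secret can be created directly from the registry credentials (a sketch with placeholder values):

$ kubectl create secret docker-registry regcred \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=<your-username> \
    --docker-password=<your-password> \
    --docker-email=<your-email>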

You can inspect the Secret as below.

$ kubectl get secret regcred --output=yaml

apiVersion: v1
data:
  .dockerconfigjson: <base64-formatted-docker-credentials>
kind: Secret
metadata:
  creationTimestamp: "2022-02-28T22:25:43Z"
  name: regcred
  namespace: default
  resourceVersion: "1503624"
  uid: yyy
type: kubernetes.io/dockerconfigjson

The value of the .dockerconfigjson field is a base64 representation of your Docker credentials. To understand what is in the .dockerconfigjson field, convert the secret data to a readable format:

$ kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "xxx="
        }
    }
}

Create a Pod that uses the Secret to pull image

$ vi my-private-reg-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred

$ kubectl apply -f my-private-reg-pod.yaml
$ kubectl get pod private-reg  

Note that the imagePullSecrets field specifies that Kubernetes should get the credentials from a Secret named regcred in order to pull a container image from Docker Hub.

Install kubectl binary

The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.

[root@localhost ~]# curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
[root@localhost ~]# sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
[root@localhost ~]# ls -la /usr/local/bin/kubectl
[root@localhost ~]# kubectl version --output=yaml
clientVersion:
  buildDate: "2022-09-21T14:33:49Z"
  compiler: gc
  gitCommit: 5835544ca568b757a8ecae5c153f317e5736700e
  gitTreeState: clean
  gitVersion: v1.25.2
  goVersion: go1.19.1
  major: "1"
  minor: "25"
  platform: linux/amd64
kustomizeVersion: v4.5.7

Use kubeconfig file to access remote Kubernetes cluster

A kubeconfig is a YAML file containing the Kubernetes cluster details, certificates, and secret tokens needed to authenticate to the cluster; it is the file the kubectl CLI reads to configure its access to Kubernetes. You might get it directly from the cluster administrator, or from a cloud platform if you are using a managed Kubernetes cluster.

When a Kubernetes cluster is deployed, a kubeconfig is automatically generated and saved as ~/.kube/config.

[root@host1 ~]#  ls -la ~/.kube/config
-rw------- 1 root root 5577 May 9 02:25 /root/.kube/config

You can access the Kubernetes cluster remotely by copying the kubeconfig file and passing it to kubectl.

[root@localhost ~]# scp host1:/root/.kube/config ./kubeconfig-remote
[root@localhost ~]# kubectl --kubeconfig=./kubeconfig-remote get nodes
NAME    STATUS   ROLES    AGE    VERSION
host0   Ready    <none>   135d   v1.19.2
host1   Ready    <none>   135d   v1.19.2
host2   Ready    <none>   135d   v1.19.2
host3   Ready    master   135d   v1.19.2
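
To avoid passing --kubeconfig on every invocation, you can point the KUBECONFIG environment variable at the copied file instead:

[root@localhost ~]# export KUBECONFIG=./kubeconfig-remote
[root@localhost ~]# kubectl get nodes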