Virtual clusters are fully working Kubernetes clusters that run on top of other Kubernetes clusters. Compared to fully separate “real” clusters, virtual clusters reuse worker nodes and networking of the host cluster. They have their own control plane and schedule all workloads into a single namespace of the host cluster. Like virtual machines, virtual clusters partition a single physical cluster into multiple separate ones.
For more about vcluster, refer to its official website. The goal of this post is to deploy vcluster in an existing Kubernetes cluster.
Install vCluster CLI
Requirements:
kubectl (check via kubectl version)
helm v3 (check with helm version)
a working kube-context with access to a Kubernetes cluster (check with kubectl get namespaces)
```
$ export KUBECONFIG=./kubeconfig
$ kubectl get nodes -o wide
NAME     STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
node11   Ready    <none>          39h   v1.27.2   10.10.0.11    <none>        CentOS Linux 7 (Core)   5.7.12-1.el7.elrepo.x86_64   docker://23.0.6
node12   Ready    control-plane   39h   v1.27.2   10.10.0.12    <none>        CentOS Linux 7 (Core)   5.7.12-1.el7.elrepo.x86_64   docker://23.0.6
node13   Ready    <none>          39h   v1.27.2   10.10.0.13    <none>        CentOS Linux 7 (Core)   5.7.12-1.el7.elrepo.x86_64   docker://24.0.6
node14   Ready    <none>          39h   v1.27.2   10.10.0.14    <none>        CentOS Linux 7 (Core)   5.7.12-1.el7.elrepo.x86_64   docker://24.0.6
```
Create namespace for vcluster:
```
$ kubectl create namespace vcluster-my-vcluster
$ kubectl get ns
NAME                   STATUS   AGE
vcluster-my-vcluster   Active   7s
[...]
```
Mark a StorageClass as the default so the vcluster's PVC can bind:
```
$ kubectl patch storageclass px-db -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
$ kubectl get sc
NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
px-db (default)   kubernetes.io/portworx-volume   Delete          Immediate           true                   29h
[...]
```
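The patch simply sets an annotation on the StorageClass object. The equivalent manifest, matching the `kubectl get sc` output above, looks like:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-db
  annotations:
    # This annotation is what makes the class the cluster default.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/portworx-volume
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
```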
Deploy vcluster:
```
$ helm template my-vcluster vcluster --repo https://charts.loft.sh -n vcluster-my-vcluster | kubectl apply -f -
serviceaccount/vc-my-vcluster created
serviceaccount/vc-workload-my-vcluster created
configmap/my-vcluster-coredns created
configmap/my-vcluster-init-manifests created
role.rbac.authorization.k8s.io/my-vcluster created
rolebinding.rbac.authorization.k8s.io/my-vcluster created
service/my-vcluster created
service/my-vcluster-headless created
statefulset.apps/my-vcluster created
```
```
$ kubectl config get-contexts
CURRENT   NAME                             CLUSTER         AUTHINFO           NAMESPACE
*         kubernetes-admin@cluster.local   cluster.local   kubernetes-admin

$ vcluster list
      NAME      |      NAMESPACE       | STATUS  | VERSION | CONNECTED |            CREATED            |  AGE   | PRO
 ---------------+----------------------+---------+---------+-----------+-------------------------------+--------+------
   my-vcluster  | vcluster-my-vcluster | Running | 0.16.4  |           | 2023-10-31 21:40:04 +0000 UTC | 52m30s |

$ kubectl get statefulset -n vcluster-my-vcluster
NAME          READY   AGE
my-vcluster   1/1     52m

$ kubectl get pod -n vcluster-my-vcluster
NAME                                                   READY   STATUS    RESTARTS   AGE
coredns-68bdd584b4-79rpj-x-kube-system-x-my-vcluster   1/1     Running   0          27m
my-vcluster-0                                          2/2     Running   0          52m

$ kubectl get pvc -n vcluster-my-vcluster
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-my-vcluster-0   Bound    pvc-97bddb5c-47ee-4b62-916a-240c96e84676   5Gi        RWO            px-db          52m

$ kubectl get ns
NAME                   STATUS   AGE
vcluster-my-vcluster   Active   54m
[...]

$ kubectl get sc
NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
px-db (default)   kubernetes.io/portworx-volume   Delete          Immediate           true                   30h
[...]
```
Connect to vcluster:
```
$ vcluster connect my-vcluster -n vcluster-my-vcluster
22:08:33 done Switched active kube context to vcluster_my-vcluster_vcluster-my-vcluster_kubernetes-admin@cluster.local
22:08:33 warn Since you are using port-forwarding to connect, you will need to leave this terminal open
- Use CTRL+C to return to your previous kube context
- Use `kubectl get namespaces` in another terminal to access the vcluster
Forwarding from 127.0.0.1:11459 -> 8443
Forwarding from [::1]:11459 -> 8443
Handling connection for 11459
```
Open a new terminal and verify the vcluster nodes:
```
$ kubectl get nodes -o wide
NAME     STATUS   ROLES    AGE     VERSION        INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION      CONTAINER-RUNTIME
node11   Ready    <none>   4m43s   v1.28.2+k3s1   10.10.62.211   <none>        Fake Kubernetes Image   4.19.76-fakelinux   docker://19.3.12
```
Note: By default, the vcluster exposes only a single fake node rather than the host cluster's real nodes.
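To see the real host nodes instead, node syncing can be enabled through the Helm chart values — the same settings used with `vcluster create -f` later in this post:

```yaml
sync:
  nodes:
    enabled: true       # sync real host nodes into the vcluster
    syncAllNodes: true  # all nodes, not only those running vcluster pods
```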
Disconnect from vcluster:
```
$ vcluster disconnect
22:37:06 info Successfully disconnected from vcluster: my-vcluster and switched back to the original context: kubernetes-admin@cluster.local
```
Open a new bash with the vcluster KUBECONFIG defined:
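In the vcluster version used here, connecting with `vcluster connect my-vcluster -n vcluster-my-vcluster --update-current=false` writes the credentials to `./kubeconfig.yaml` instead of switching the current context; that file can then be exported in the new shell (the path below assumes that default):

```shell
# Point this shell at the kubeconfig written by
# `vcluster connect ... --update-current=false` (default: ./kubeconfig.yaml).
export KUBECONFIG=./kubeconfig.yaml
```

With the port-forward terminal still open, `kubectl` in this shell now targets the vcluster.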
```
$ kubectl get ns
NAME              STATUS   AGE
kube-system       Active   45m
kube-public       Active   45m
kube-node-lease   Active   45m
default           Active   45m

$ vcluster disconnect
22:51:03 info Successfully disconnected from vcluster: my-vcluster and switched back to the original context: kubernetes-admin@cluster.local
```
Delete vcluster:
```
$ vcluster list
      NAME      |      NAMESPACE       | STATUS  | VERSION | CONNECTED |            CREATED            |   AGE    | PRO
 ---------------+----------------------+---------+---------+-----------+-------------------------------+----------+------
   my-vcluster  | vcluster-my-vcluster | Running | 0.16.4  |           | 2023-10-31 21:40:04 +0000 UTC | 1h12m38s |

$ kubectl delete namespace vcluster-my-vcluster

$ vcluster list
   NAME | NAMESPACE | STATUS | VERSION | CONNECTED | CREATED | AGE | PRO
  ------+-----------+--------+---------+-----------+---------+-----+------
```
Deploy vcluster with custom values (note the file is the same `vcluster_values.yaml` passed to `vcluster create` below):

```
$ cat vcluster_values.yaml
```

```yaml
sync:
  services:
    enabled: true
  configmaps:
    enabled: true
  secrets:
    enabled: true
  endpoints:
    enabled: true
  pods:
    enabled: true
    ephemeralContainers: false
    status: false
  events:
    enabled: true
  persistentvolumeclaims:
    enabled: true
  ingresses:
    enabled: true
  fake-nodes:
    enabled: true # will be ignored if nodes.enabled = true
  fake-persistentvolumes:
    enabled: true # will be ignored if persistentvolumes.enabled = true
  nodes:
    enabled: true
    # If nodes sync is enabled, and syncAllNodes = true, the virtual cluster
    # will sync all nodes instead of only the ones where some pods are running.
    syncAllNodes: true
    # nodeSelector is used to limit which nodes get synced to the vcluster,
    # and which nodes are used to run vcluster pods.
    # A valid string representation of a label selector must be used.
    nodeSelector: ""
    # syncNodeChanges allows vcluster user edits of the nodes to be synced down to the host nodes.
    # Write permissions on node resource will be given to the vcluster.
    syncNodeChanges: false
  persistentvolumes:
    enabled: true
  storageclasses:
    enabled: false
  legacy-storageclasses:
    enabled: true
  priorityclasses:
    enabled: true
  networkpolicies:
    enabled: true
  volumesnapshots:
    enabled: false
  poddisruptionbudgets:
    enabled: true
  serviceaccounts:
    enabled: true
```
```
$ vcluster create my-vcluster -f vcluster_values.yaml
23:00:34 info Creating namespace vcluster-my-vcluster
23:00:34 info Create vcluster my-vcluster...
23:00:34 info execute command: helm upgrade my-vcluster /tmp/vcluster-0.16.4.tgz-1464010202 --kubeconfig /tmp/2856381900 --namespace vcluster-my-vcluster --install --repository-config='' --values /tmp/901005439 --values vcluster_values.yaml
23:00:35 done Successfully created virtual cluster my-vcluster in namespace vcluster-my-vcluster
23:00:35 info Waiting for vcluster to come up...
23:00:51 warn vcluster is waiting, because vcluster pod my-vcluster-0 has status: ContainerCreating
23:01:02 done Switched active kube context to vcluster_my-vcluster_vcluster-my-vcluster_kubernetes-admin@cluster.local
23:01:02 warn Since you are using port-forwarding to connect, you will need to leave this terminal open
- Use CTRL+C to return to your previous kube context
- Use `kubectl get namespaces` in another terminal to access the vcluster
Forwarding from 127.0.0.1:11704 -> 8443
Forwarding from [::1]:11704 -> 8443
```
Use CTRL+C to switch back to original context:
```
$ kubectl config get-contexts
CURRENT   NAME                             CLUSTER         AUTHINFO           NAMESPACE
*         kubernetes-admin@cluster.local   cluster.local   kubernetes-admin

$ kubectl get nodes -o wide
NAME     STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
node12   Ready    control-plane   9m59s   v1.27.2   10.10.50.155   <none>        CentOS Linux 7 (Core)   5.7.12-1.el7.elrepo.x86_64   docker://23.0.6
node13   Ready    <none>          9m59s   v1.27.2   10.10.9.202    <none>        CentOS Linux 7 (Core)   5.7.12-1.el7.elrepo.x86_64   docker://24.0.6
node14   Ready    <none>          9m59s   v1.27.2   10.10.7.186    <none>        CentOS Linux 7 (Core)   5.7.12-1.el7.elrepo.x86_64   docker://24.0.6
node11   Ready    <none>          9m59s   v1.27.2   10.10.53.174   <none>        CentOS Linux 7 (Core)   5.7.12-1.el7.elrepo.x86_64   docker://23.0.6
```
Check the node labels:
```
$ kubectl get nodes -o wide --show-labels
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME   LABELS
node14   Ready    <none>   14m   v1.27.2   10.10.7.186   <none>        CentOS Linux 7 (Core)   5.7.12-1.el7.elrepo.x86_64   docker://24.0.6     beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/fio=true,kubernetes.io/hostname=node14,kubernetes.io/os=linux
[...]
```
Note: Node labels created in the host cluster are also visible in the vcluster. In this example, the label is `kubernetes.io/fio=true`.
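Because the host label is synced, workloads created inside the vcluster can target it with a nodeSelector. A sketch (the Pod name and image are illustrative, not from this post):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fio-test            # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/fio: "true"  # the host-created label shown above
  containers:
    - name: main
      image: busybox           # placeholder image
      command: ["sleep", "3600"]
```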
Disconnect from the vcluster:
```
$ vcluster disconnect
$ exit
$ kubectl config get-contexts
CURRENT   NAME                             CLUSTER         AUTHINFO           NAMESPACE
*         kubernetes-admin@cluster.local   cluster.local   kubernetes-admin
```
```
$ vcluster list
      NAME      |      NAMESPACE       | STATUS  | VERSION | CONNECTED |            CREATED            | AGE  | PRO
 ---------------+----------------------+---------+---------+-----------+-------------------------------+------+------
   my-vcluster  | vcluster-my-vcluster | Running | 0.16.4  |           | 2023-10-31 23:00:35 +0000 UTC | 6m6s |
```
Note: The StorageClass created in the host cluster is also available in the vcluster, since `legacy-storageclasses` sync is enabled in `vcluster_values.yaml`.
Create namespace in vcluster:
```
$ kubectl create ns app-fio

$ kubectl get ns
NAME              STATUS   AGE
default           Active   22m
kube-system       Active   22m
kube-public       Active   22m
kube-node-lease   Active   22m
app-fio           Active   4s
```
Verify namespace in host cluster context:
```
$ kubectl config get-contexts
CURRENT   NAME                             CLUSTER         AUTHINFO           NAMESPACE
*         kubernetes-admin@cluster.local   cluster.local   kubernetes-admin

$ kubectl get ns
NAME                   STATUS   AGE
default                Active   41h
kube-node-lease        Active   41h
kube-public            Active   41h
kube-system            Active   41h
portworx               Active   31h
vcluster-my-vcluster   Active   24m
```
Note: The namespace created in the vcluster does not appear in the host cluster context.
```
$ kubectl config get-contexts
CURRENT   NAME                             CLUSTER         AUTHINFO           NAMESPACE
*         kubernetes-admin@cluster.local   cluster.local   kubernetes-admin

$ kubectl get ns
NAME                   STATUS   AGE
default                Active   41h
kube-node-lease        Active   41h
kube-public            Active   41h
kube-system            Active   41h
portworx               Active   31h
vcluster-my-vcluster   Active   51m

$ kubectl get pod
No resources found in default namespace.
$ kubectl get pvc
No resources found in default namespace.

$ kubectl get pod -n app-fio
No resources found in app-fio namespace.
$ kubectl get pvc -n app-fio
No resources found in app-fio namespace.
```
Note: The namespace app-fio created in the vcluster doesn't appear in the host cluster context, so its Pods and PVCs don't appear there either. (Workloads from the vcluster do run on the host, but inside the vcluster's own namespace under translated names, like the CoreDNS pod seen earlier.)