This document guides cloud users through creating a Kubernetes cluster and deploying a simple application.
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
Step 1. Create a Kubernetes (k8s) cluster
- Note that a separate cluster template for k8s needs to be created for each project.
- Use "Calico" as a network driver. "flannel" seem to have a issue.
- It is handy to have an ingress controller. Add the label 'ingress_controller="octavia"' to enable the Octavia ingress controller. Other types of ingress controllers have not been tested yet.
- From the Container Infra section, use the template to create a k8s cluster of the size you want (a CLI equivalent is sketched after this list).
- Only a single-master configuration has been set up and tested for now.
- You can add worker nodes later (and delete some of them), but it does not appear that the flavor of the nodes can be changed.
- Creating a k8s cluster may take tens of minutes, and it may fail. If it does, just delete the failed cluster and create a new one.
- The SSH key pair is usually not important; just use the default.
- Use the existing network and keep the cluster API private.
- Note that a cluster will also be created under the Cluster Infra → KASI cluster section.
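For reference, once the openstack cli is set up (see Step 2), the template and cluster can also be created from the command line. The sketch below is only illustrative; the image, flavor, network, and key-pair names are placeholders that must be replaced with the values available in your project.

# Create a k8s cluster template using Calico and the Octavia ingress controller label
# (image/flavor/network/key-pair names below are placeholders)
> openstack coe cluster template create k8s-calico-template \
    --coe kubernetes \
    --image fedora-coreos-32 \
    --flavor m1.medium \
    --master-flavor m1.medium \
    --external-network public-net \
    --network-driver calico \
    --labels ingress_controller=octavia
# Create a single-master cluster from the template (node count is up to you)
> openstack coe cluster create my-k8s-cluster \
    --cluster-template k8s-calico-template \
    --master-count 1 \
    --node-count 3 \
    --keypair default
# Check the provisioning status; this may take tens of minutes
> openstack coe cluster list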
Step 2. Set up a connection to the created k8s cluster
- It is best to work from the gateway node of your project, i.e., we assume that you have a direct network connection to each k8s node.
- It seems best to use the openstack cli to generate kubeconfig files.
- To use the openstack cli commands, you need to set up the environment. You can download the rc file from the openstack dashboard (API access → Download OpenStack RC file → OpenStack RC file).
- Download the rc file to the gateway node.
- Make sure you have the "kubectl" command installed on the gateway (a minimal install sketch is shown below): https://kubernetes.io/ko/docs/tasks/tools/install-kubectl-linux/
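For an x86_64 Linux gateway, one way to install kubectl (adapted from the page linked above) is:

# Download the latest stable kubectl binary and install it
> curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
> sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Verify the client works
> kubectl version --client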
Set up the OpenStack RC file
# The name of the rc file will reflect your project name. In this case, the project name is 'spherex'.
> source spherex-openrc.sh
# This will ask for a password. Use the same password that you use for the dashboard.
# The rc file needs to be loaded before you use the openstack cli tools.
Set up the kube config
# Somehow, the openstack cli packages from Ubuntu do not work. Instead, we install them via pip.
> sudo apt-get install python-dev python3-pip   # install pip
> sudo pip install python-openstackclient python-magnumclient
# Now it's time to fetch the kube config for your cluster.
> openstack coe cluster config YOUR-K8S-CLUSTER-NAME
# The above command will create a file named "config" under the current directory. This is basically
# a kubeconfig file that you can use with the kubectl command.
# You may set the environment variable "KUBECONFIG" to this file, as suggested by the above command,
# or instead copy the file under "~/.kube".
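For example, assuming the "config" file was generated in the current directory, either of the following makes kubectl pick it up:

# Option 1: point the KUBECONFIG environment variable at the generated file
> export KUBECONFIG=$(pwd)/config
# Option 2: make it the default kubeconfig for your user
> mkdir -p ~/.kube
> cp config ~/.kube/config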
Now your "kubectl" command is connected to your newly created cluster.
kubectl get nodes
> kubectl get nodes
NAME                                          STATUS   ROLES    AGE    VERSION
spherex-k8s-calico100-4giy6vd2vahl-master-0   Ready    master   7d2h   v1.18.2
spherex-k8s-calico100-4giy6vd2vahl-node-0     Ready    <none>   7d2h   v1.18.2
spherex-k8s-calico100-4giy6vd2vahl-node-1     Ready    <none>   7d2h   v1.18.2
spherex-k8s-calico100-4giy6vd2vahl-node-2     Ready    <none>   7d2h   v1.18.2

> kubectl get pods -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
calico-kube-controllers-795c4545c7-rl56t     1/1     Running   0          7d2h
calico-node-4rs5j                            1/1     Running   0          7d2h
calico-node-7bj8r                            1/1     Running   0          7d2h
calico-node-8slht                            1/1     Running   0          7d2h
calico-node-rxg5s                            1/1     Running   0          7d2h
coredns-5f98bf4db7-l6cd7                     1/1     Running   0          21h
coredns-5f98bf4db7-vzhn8                     1/1     Running   0          21h
dashboard-metrics-scraper-6b4884c9d5-p87b5   1/1     Running   0          7d2h
k8s-keystone-auth-n2dkh                      1/1     Running   0          7d2h
kube-dns-autoscaler-75859754fd-fd99t         1/1     Running   0          7d2h
kubernetes-dashboard-c98496485-wl4r4         1/1     Running   0          7d2h
npd-4bw99                                    1/1     Running   0          7d2h
npd-5sg2c                                    1/1     Running   0          7d2h
npd-cg6pc                                    1/1     Running   0          7d2h
octavia-ingress-controller-0                 1/1     Running   0          7d2h
openstack-cloud-controller-manager-796tr     1/1     Running   0          7d2h
Step 3. Set up a storage class
Kubernetes on the OpenStack platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for OpenStack Cinder. To use this, you need to create a StorageClass object.
sc-cinder.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
provisioner: kubernetes.io/cinder
- Once you have a file with the above contents (named 'sc-cinder.yaml' in this example), apply it and check the result:
> kubectl apply -f sc-cinder.yaml
> kubectl get storageclass
NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   kubernetes.io/cinder   Delete          Immediate           false                  7d1h
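To check that dynamic provisioning actually works, you can create a small PersistentVolumeClaim against the 'standard' class; the claim name 'test-pvc' and the 1Gi size below are only examples.

> cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
EOF
> kubectl get pvc test-pvc
# A "Bound" status means a Cinder volume was provisioned. Delete the claim when done:
> kubectl delete pvc test-pvc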
Step 4. Deploy your app
A simple example with Helm 3
- A step-by-step example is sketched below.
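As a minimal sketch, assuming Helm 3 is installed on the gateway node (see https://helm.sh/docs/intro/install/) and using the public Bitnami chart repository only as an illustration (the chart and release names below are arbitrary):

# Add a public chart repository and install a chart
> helm repo add bitnami https://charts.bitnami.com/bitnami
> helm repo update
> helm install my-nginx bitnami/nginx
# Check the release and the pods it created
> helm list
> kubectl get pods
# Remove the release when you are done
> helm uninstall my-nginx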