Doing CI/CD with Kubernetes

If you are getting started with containers, you will likely want to know how to automate building, testing, and deployment. By taking a Cloud Native approach to these processes, you can leverage the right infrastructure APIs to package and deploy applications in an automated way.

Two building blocks for this automation are container images and container orchestrators.

Over the last year or so, Kubernetes has become the default choice for container orchestration. In this first article of the CI/CD with Kubernetes series, you will:

  • Build container images with Docker, Buildah, and Kaniko.
  • Set up a Kubernetes cluster with Terraform, and create Deployments and Services.
  • Extend the functionality of a Kubernetes cluster with Custom Resources.

By the end of this post, you will have container images built with Docker, Buildah, and Kaniko, and a Kubernetes cluster with Deployments, Services, and Custom Resources.

Please note: The Kubernetes cluster is set up on the DigitalOcean platform using kubeadm and Terraform.



Image Build with Docker & Buildah

A container image is a self-contained entity with its own application code, runtime, and dependencies that you can use to create and run containers. You can use different tools to create container images, and in this step you will build containers with two of them: Docker and Buildah.


Build with Dockerfiles

Docker builds your container images automatically by reading instructions from a Dockerfile, a text file that includes the commands required to assemble a container image. Using the docker image build command, you can create an automated build that will execute the command-line instructions provided in the Dockerfile. When building the image, you will also pass the build context with the Dockerfile, which contains the set of files required to create an environment and run an application in the container image.

Typically, you will create a project folder for your Dockerfile and build context. Create a folder called demo to begin:

mkdir demo
cd demo

Next, create a Dockerfile inside the demo folder:

vim Dockerfile

Add the following content to the file:

# ~/demo/Dockerfile
FROM ubuntu:16.04

LABEL MAINTAINER neependra@cloudyuga.guru

RUN apt-get update \
    && apt-get install -y nginx \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
    && echo "daemon off;" >> /etc/nginx/nginx.conf

EXPOSE 80
CMD ["nginx"]

This Dockerfile consists of a set of instructions that will build an image to run Nginx. During the build process ubuntu:16.04 will function as the base image, and the nginx package will be installed. Using the CMD instruction, you’ve also configured nginx to be the default command when the container starts.

Next, you’ll build the container image with the docker image build command, using the current directory (.) as the build context. Passing the -t option to this command names the image merikanto/nginx:latest:

sudo docker image build -t merikanto/nginx:latest .

Your image is now built. You can list your Docker images using the following command:

docker image ls

You can now use the merikanto/nginx:latest image to create containers.
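
If you want to quickly verify the image, you can start a throwaway container from it and check that Nginx responds. This is an optional sanity check, not part of the main workflow; the host port 8080 and the container name nginx-test are arbitrary choices:

docker run -d --name nginx-test -p 8080:80 merikanto/nginx:latest
curl -I http://localhost:8080     # expect an HTTP/1.1 200 OK response
docker rm -f nginx-test           # clean up the test container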


Build with Project Atomic's Buildah

Buildah is a CLI tool, developed by Project Atomic, for quickly building Open Container Initiative (OCI)-compliant images. OCI provides specifications for container runtimes and images in an effort to standardize industry best practices.

Buildah can create an image either from a working container or from a Dockerfile. It can build images completely in user space without the Docker daemon, and can perform image operations like build, list, push, and tag. In this step, you’ll compile Buildah from source and then use it to create a container image.
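
For context, here is a minimal sketch of Buildah's working-container workflow, the alternative to building from a Dockerfile. These commands are illustrative, reuse the ubuntu:16.04 base image from earlier, and use an arbitrary image name; you can try them once Buildah is installed at the end of this step:

container=$(sudo buildah from ubuntu:16.04)               # start a working container from a base image
sudo buildah run "$container" -- apt-get update           # run commands inside the working container
sudo buildah run "$container" -- apt-get install -y nginx
sudo buildah config --port 80 --cmd nginx "$container"    # set image metadata
sudo buildah commit "$container" nginx-buildah-demo       # commit the working container as a new image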

To install Buildah you will need the required dependencies, including tools that will enable you to manage packages and package security, among other things. Run the following commands to install these packages:

cd

sudo apt-get install software-properties-common

sudo add-apt-repository ppa:alexlarsson/flatpak

sudo add-apt-repository ppa:gophers/archive

sudo apt-add-repository ppa:projectatomic/ppa

sudo apt-get update

sudo apt-get install bats btrfs-tools git libapparmor-dev \
  libdevmapper-dev libglib2.0-dev libgpgme11-dev libostree-dev \
  libseccomp-dev libselinux1-dev skopeo-containers go-md2man

Because you will compile the buildah source code to create its package, you’ll also need to install Go:

sudo apt-get update
sudo curl -O https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
sudo tar -xvf go1.8.linux-amd64.tar.gz
sudo mv go /usr/local
sudo echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile
source ~/.profile
go version

You will see the following output, indicating a successful installation:

# Output
go version go1.8 linux/amd64

You can now get the buildah source code to create its package, along with the runc binary. runc is the implementation of the OCI container runtime, which you will use to run your Buildah containers.

Run the following commands to install runc and buildah:

mkdir ~/buildah
cd ~/buildah
export GOPATH=`pwd`
git clone https://github.com/containers/buildah ./src/github.com/containers/buildah
cd ./src/github.com/containers/buildah
make runc all TAGS="apparmor seccomp"
sudo cp ~/buildah/src/github.com/opencontainers/runc/runc /usr/bin/.
sudo apt install buildah

Next, create the /etc/containers/registries.conf file to configure your container registries:

sudo vim /etc/containers/registries.conf

Add the following content to the file to specify your registries:

# /etc/containers/registries.conf

# This is a system-wide configuration file used to
# keep track of registries for various container backends.
# It adheres to TOML format and does not support recursive
# lists of registries.

# The default location for this configuration file is /etc/containers/registries.conf.

# The only valid categories are: 'registries.search', 'registries.insecure',
# and 'registries.block'.

[registries.search]
registries = ['docker.io', 'registry.fedoraproject.org', 'quay.io', 'registry.access.redhat.com', 'registry.centos.org']

# If you need to access insecure registries, add the registry's fully-qualified name.
# An insecure registry is one that does not have a valid SSL certificate or only does HTTP.
[registries.insecure]
registries = []

# If you need to block pull access from a registry, uncomment the section below
# and add the registries fully-qualified name.
#
# Docker only
[registries.block]
registries = []

The registries.conf configuration file specifies which registries should be consulted when completing image names that do not include a registry or domain portion.
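
In practice, this means that a short image name is tried against each registry in registries.search, in order, while a fully-qualified name skips the search list entirely. For example (illustrative only; either command works once Buildah is configured):

sudo buildah pull python:alpine                       # short name: docker.io is searched first, then the other registries
sudo buildah pull docker.io/library/python:alpine     # fully-qualified name: no search is performed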

Now run the following command to build an image, using the https://github.com/do-community/rsvpapp-webinar1 repository as the build context. This repository also contains the relevant Dockerfile:

sudo buildah build-using-dockerfile -t rsvpapp:buildah github.com/do-community/rsvpapp-webinar1

This command creates an image named rsvpapp:buildah from the Dockerfile available in the https://github.com/do-community/rsvpapp-webinar1 repository.

To list the images, use the following command:

sudo buildah images

One of these images is localhost/rsvpapp:buildah, which you just created. The other, docker.io/teamcloudyuga/python:alpine, is the base image from the Dockerfile.

Once you have built the image, you can push it to Docker Hub. This will allow you to store it for future use. You will first need to log in to your Docker Hub account from the command line:

docker login -u merikanto -p your-dockerhub-password

Once the login is successful, you will get a file, ~/.docker/config.json, that will contain your Docker Hub credentials. You can then use that file with buildah to push images to Docker Hub.
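
If you are curious about what Buildah will read, you can inspect this file. With a plain docker login and no credential helper configured, it typically holds a base64-encoded username:password pair under an auths key; the exact layout can vary with your Docker version:

cat ~/.docker/config.json
# Typical (abridged) contents:
# {
#   "auths": {
#     "https://index.docker.io/v1/": { "auth": "<base64-encoded credentials>" }
#   }
# }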

For example, if you wanted to push the image you just created, you could run the following command, citing the authfile and the image to push:

sudo buildah push --authfile ~/.docker/config.json \
  rsvpapp:buildah docker://merikanto/rsvpapp:buildah

You can also push the resulting image to the local Docker daemon using the following command:

sudo buildah push rsvpapp:buildah docker-daemon:rsvpapp:buildah

Finally, take a look at the Docker images you have created:

sudo docker image ls

As expected, you should now see a new image, rsvpapp:buildah, that has been exported using buildah.

You now have experience building container images with two different tools, Docker and Buildah. Let’s move on to discussing how to set up a cluster of containers with Kubernetes.



Set Up a K8S Cluster

We will set up a K8S cluster on DigitalOcean with kubeadm and Terraform.

There are different ways to set up Kubernetes on DigitalOcean. To learn more about how to set up Kubernetes with kubeadm, for example, you can look at How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 18.04.

Since this post discusses taking a Cloud Native approach to application development, we’ll apply this methodology when setting up our cluster. Specifically, we will automate our cluster creation using kubeadm and Terraform, a tool that simplifies creating and changing infrastructure.

Using your personal access token, you will connect to DigitalOcean with Terraform to provision 3 servers. You will run the kubeadm commands inside of these VMs to create a 3-node Kubernetes cluster containing one master node and two workers.

On your Ubuntu server, create a pair of SSH keys, which will allow password-less logins to your VMs:

ssh-keygen -t rsa

You will see the following output:

# Output
Generating public/private rsa key pair.
Enter file in which to save the key (~/.ssh/id_rsa):

Press ENTER to save the key pair in the ~/.ssh directory in your home directory, or enter another destination.

Next, you will see the following prompt:

# Output
Enter passphrase (empty for no passphrase):

In this case, press ENTER without a password to enable password-less logins to your nodes. Get your public key by running the following command, which will display it in your terminal:

cat ~/.ssh/id_rsa.pub

Add this key to your DigitalOcean account from the control panel's security settings.

Next, install Terraform:

sudo apt-get update
sudo apt-get install unzip
wget https://releases.hashicorp.com/terraform/0.11.7/terraform_0.11.7_linux_amd64.zip
unzip terraform_0.11.7_linux_amd64.zip
sudo mv terraform /usr/bin/.
terraform version

You will see output confirming your Terraform installation:

# Output
Terraform v0.11.7

Next, run the following commands to install kubectl, a CLI tool that will communicate with your Kubernetes cluster, and to create a ~/.kube directory in your user’s home directory:

sudo apt-get install apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo touch /etc/apt/sources.list.d/kubernetes.list
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install kubectl
mkdir -p ~/.kube

Creating the ~/.kube directory will enable you to copy the configuration file to this location. You’ll do that once you run the Kubernetes setup script later in this section. By default, the kubectl CLI looks for the configuration file in the ~/.kube directory to access the cluster.
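
For reference, on a kubeadm-based cluster this usually amounts to copying the admin kubeconfig from the master node, roughly as follows. This is illustrative only; the setup script performs the equivalent step for you, and master_node_ip is a placeholder:

scp root@master_node_ip:/etc/kubernetes/admin.conf ~/.kube/config
kubectl config view     # confirm that kubectl picked up the configuration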

Next, clone the sample project repository for this post, which contains the Terraform scripts for setting up the infrastructure:

git clone https://github.com/do-community/k8s-cicd-webinars.git

Go to the Terraform script directory:

cd k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/

Get a fingerprint of your SSH public key:

ssh-keygen -E md5 -lf ~/.ssh/id_rsa.pub | awk '{print $2}'

You will see output like the following, with the highlighted portion representing your key:

# Output
MD5:dd:d1:b7:0f:6d:30:c0:be:ed:ae:c7:b9:b8:4a:df:5e

Keep in mind that your key will differ from what’s shown here.

Save the fingerprint to an environment variable so Terraform can use it:

export FINGERPRINT=dd:d1:b7:0f:6d:30:c0:be:ed:ae:c7:b9:b8:4a:df:5e

Next, export your DO personal access token:

export TOKEN=your-do-access-token

Now take a look at the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ project directory:

ls

# Output
cluster.tf  destroy.sh  files  outputs.tf  provider.tf  script.sh

This folder contains the necessary scripts and configuration files for deploying your Kubernetes cluster with Terraform.
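
You do not need to run Terraform by hand here, but it helps to know what a wrapper script like this does under the hood. Broadly, it runs the standard Terraform workflow against the DigitalOcean provider using the token and fingerprint you exported; the sketch below is illustrative, and the exact variable names are defined by this repository's configuration:

terraform init       # download the provider plugins referenced in provider.tf
terraform plan       # preview the Droplets and related resources to be created
terraform apply      # create the servers defined in cluster.tf (kubeadm then bootstraps the cluster on them)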

Execute the script.sh script to trigger the Kubernetes cluster setup:

./script.sh

When the script execution is complete, kubectl will be configured to use the Kubernetes cluster you’ve created.

List the cluster nodes using kubectl get nodes:

kubectl get nodes

You now have one master and two worker nodes in the Ready state.

With a Kubernetes cluster set up, you can now explore another option for building container images: Kaniko from Google.



Build Container Images with Kaniko

Earlier in this post, you built container images with Dockerfiles and Buildah. But what if you could build container images directly on Kubernetes? There are ways to run the docker image build command inside of Kubernetes, but this isn’t native Kubernetes tooling. You would have to depend on the Docker daemon to build images, and it would need to run on one of the Pods in the cluster.

A tool called Kaniko allows you to build container images with a Dockerfile on an existing Kubernetes cluster. In this step, you will build a container image with a Dockerfile using Kaniko. You will then push this image to Docker Hub.

In order to push your image to Docker Hub, you will need to pass your Docker Hub credentials to Kaniko. In the previous step, you logged into Docker Hub and created a ~/.docker/config.json file with your login credentials. Let’s use this configuration file to create a Kubernetes ConfigMap object to store the credentials inside the Kubernetes cluster. The ConfigMap object is used to store configuration parameters, decoupling them from your application.

To create a ConfigMap called docker-config using the ~/.docker/config.json file, run the following command:

sudo kubectl create configmap docker-config --from-file=$HOME/.docker/config.json
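
You can confirm that the ConfigMap exists and carries your credentials file before moving on:

kubectl get configmap docker-config -o yaml    # the data section should contain a config.json key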

Next, you can create a Pod definition file called pod-kaniko.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory (though it can go anywhere).

First, make sure that you are in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:

cd ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/

Create the pod-kaniko.yml file:

vim pod-kaniko.yml

Add the following content to the file to specify what will happen when you deploy your Pod. Be sure to replace merikanto in the Pod’s args field with your own Docker Hub username:

# ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/pod-kaniko.yml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=./Dockerfile",
           "--context=/tmp/rsvpapp/",
           "--destination=docker.io/merikanto/rsvpapp:kaniko",
           "--force"]
    volumeMounts:
    - name: docker-config
      mountPath: /root/.docker/
    - name: demo
      mountPath: /tmp/rsvpapp
  initContainers:
  - image: python
    name: demo
    command: ["/bin/sh"]
    args: ["-c", "git clone https://github.com/do-community/rsvpapp-webinar1.git /tmp/rsvpapp"]
    volumeMounts:
    - name: demo
      mountPath: /tmp/rsvpapp
  restartPolicy: Never
  volumes:
  - name: docker-config
    configMap:
      name: docker-config
  - name: demo
    emptyDir: {}

This configuration file describes what will happen when your Pod is deployed. First, the Init container will clone the Git repository with the Dockerfile, https://github.com/do-community/rsvpapp-webinar1.git, into a shared volume called demo. Init containers run before application containers and can be used to run utilities or other tasks that are not desirable to run from your application containers. Your application container, kaniko, will then build the image using the Dockerfile and push the resulting image to Docker Hub, using the credentials you passed to the ConfigMap volume docker-config.

To deploy the kaniko pod, run the following command:

kubectl apply -f pod-kaniko.yml

You will see the following confirmation:

# Output
pod/kaniko created

Get the list of pods:

kubectl get pods

You will see the following list:

# Output
NAME      READY     STATUS     RESTARTS   AGE
kaniko    0/1       Init:0/1   0          47s
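
While the STATUS column reads Init:0/1, the Init container (named demo) is still cloning the repository. If you want to follow that phase, you can watch the Pod and read the Init container's logs directly by passing its name to kubectl logs:

kubectl get pod kaniko --watch    # press CTRL+C to stop watching
kubectl logs kaniko -c demo       # logs from the Init container that clones the repository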

Wait a few seconds, and then run kubectl get pods again for a status update:

kubectl get pods

You will see the following:

# Output
NAME      READY     STATUS    RESTARTS   AGE
kaniko    1/1       Running   0          1m

Finally, run kubectl get pods once more for a final status update:

kubectl get pods

Once the build finishes, the Pod's status will change to Completed. This sequence of output tells you that the Init container ran, cloning the GitHub repository inside of the demo volume. After that, the Kaniko build process ran and eventually finished.

Check the logs of the pod:

kubectl logs kaniko

From the logs, you can see that the kaniko container built the image from the Dockerfile and pushed it to your Docker Hub account.

You can now pull the Docker image. Be sure again to replace merikanto with your Docker Hub username:

docker pull merikanto/rsvpapp:kaniko

You have now successfully built a Kubernetes cluster and created new images from within the cluster. Let’s move on to discussing Deployments and Services.



K8S Deployments & Services

Kubernetes Deployments allow you to run your applications. Deployments specify the desired state for your Pods, ensuring consistency across your rollouts. In this step, you will create a file called deployment.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory to define an Nginx Deployment.

First, open the file:

vim deployment.yml

Add the following configuration to the file to define your Nginx Deployment:

# ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

This file defines a Deployment named nginx-deployment that creates three pods, each running an nginx container on port 80.

To deploy the Deployment, run the following command:

kubectl apply -f deployment.yml

You will see a confirmation that the Deployment was created:

# Output
deployment.apps/nginx-deployment created

List your Deployments:

kubectl get deployments

You can see that the nginx-deployment Deployment has been created and that the desired and current Pod counts are the same: 3.

To list the Pods that the Deployment created, run the following command:

kubectl get pods

You can see from this output that the desired number of Pods are running.
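
Because the Deployment owns these Pods, you can change the desired state declaratively and let Kubernetes reconcile it. For example, the following commands scale the Deployment to five replicas, wait for the rollout, and list the resulting Pods (scaling back down works the same way):

kubectl scale deployment nginx-deployment --replicas=5
kubectl rollout status deployment/nginx-deployment
kubectl get pods -l app=nginx     # five Pods should now be listed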

To expose an application deployment internally and externally, you will need to create a Kubernetes object called a Service. Each Service specifies a ServiceType, which defines how the service is exposed. In this example, we will use a NodePort ServiceType, which exposes the Service on a static port on each node.

To do this, create a file, service.yml, in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:

vim service.yml

Add the following content to define your Service:

# ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/service.yml
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30111

These settings define the Service, nginx-service, and specify that it will target port 80 on your Pod. nodePort defines the port where the application will accept external traffic.

To deploy the Service, run the following command:

kubectl apply -f service.yml

You will see a confirmation:

# Output
service/nginx-service created

List the Services:

kubectl get service

You will see the following list:

# Output
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP        5h
nginx-service   NodePort    10.100.98.213   <none>        80:30111/TCP   7s

Your Service, nginx-service, is exposed on port 30111 and you can now access it on any of the node’s public IPs. For example, navigating to http://node_1_ip:30111 or http://node_2_ip:30111 should take you to Nginx’s standard welcome page.
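
You can also verify this from the command line. Replace node_1_ip with the public IP address of one of your nodes:

curl -I http://node_1_ip:30111    # expect an HTTP/1.1 200 OK response from Nginx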

Once you have tested the Deployment, you can clean up both the Deployment and Service:

kubectl delete deployment nginx-deployment
kubectl delete service nginx-service

These commands will delete the Deployment and Service you have created.

Now that you have worked with Deployments and Services, let’s move on to creating Custom Resources.



Create Custom Resources in K8S

Kubernetes offers a limited but production-ready set of features. You can extend its offerings, however, using its Custom Resources feature. In Kubernetes, a resource is an endpoint in the Kubernetes API that stores a collection of API objects. A Pod resource contains a collection of Pod objects, for instance. With Custom Resources, you can add custom offerings for networking, storage, and more. These additions can be created or removed at any point.

In addition to creating custom objects, you can also employ sub-controllers of the Kubernetes Controller component in the control plane to make sure that the current state of your objects is equal to the desired state. The Kubernetes Controller has sub-controllers for specified objects. For example, ReplicaSet is a sub-controller that makes sure the desired Pod count remains consistent. When you combine a Custom Resource with a Controller, you get a true declarative API that allows you to specify the desired state of your resources.

In this step, you will create a Custom Resource and related objects.

To create a Custom Resource, first make a file called crd.yml in the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/ directory:

vim crd.yml

Add the following Custom Resource Definition (CRD):

# ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/crd.yml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: webinars.digitalocean.com
spec:
  group: digitalocean.com
  version: v1
  scope: Namespaced
  names:
    plural: webinars
    singular: webinar
    kind: Webinar
    shortNames:
    - wb

To deploy the CRD defined in crd.yml, run the following command:

kubectl create -f crd.yml

You will see a confirmation that the resource has been created:

# Output
customresourcedefinition.apiextensions.k8s.io/webinars.digitalocean.com created

The crd.yml file has created a new RESTful resource path: /apis/digitalocean.com/v1/namespaces/*/webinars.

You can now refer to your objects using webinars, webinar, Webinar, and wb, as you listed them in the names section of the CustomResourceDefinition. You can check the RESTful resource with the following command:

kubectl proxy & curl 127.0.0.1:8001/apis/digitalocean.com

Note: If you followed the initial server setup post, then you will need to allow traffic to port 8001 in order for this test to work. Enable traffic to this port with the following command:

sudo ufw allow 8001

You will see the following output:

# Output
HTTP/1.1 200 OK
Content-Length: 238
Content-Type: application/json
Date: Fri, 03 Aug 2018 06:10:12 GMT

{
  "apiVersion": "v1",
  "kind": "APIGroup",
  "name": "digitalocean.com",
  "preferredVersion": {
    "groupVersion": "digitalocean.com/v1",
    "version": "v1"
  },
  "serverAddressByClientCIDRs": null,
  "versions": [
    {
      "groupVersion": "digitalocean.com/v1",
      "version": "v1"
    }
  ]
}

Next, create an object that uses the new Custom Resource by opening a file called webinar.yml:

vim webinar.yml

Add the following content to create the object:

# ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform/webinar.yml
apiVersion: "digitalocean.com/v1"
kind: Webinar
metadata:
  name: webinar1
spec:
  name: webinar
  image: nginx

Run the following command to push these changes to the cluster:

kubectl apply -f webinar.yml

You will see the following output:

# Output
webinar.digitalocean.com/webinar1 created

You can now manage your webinar objects using kubectl. For example:

kubectl get webinar

You now have an object called webinar1. If there had been a Controller, it would have intercepted the object creation and performed any defined operations.
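
Thanks to the names section of the CRD, all of the following are equivalent ways to inspect the object:

kubectl get webinars
kubectl get wb                    # the short name defined in the CRD
kubectl describe webinar webinar1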


Deleting a Custom Resource Definition

To delete all of the objects for your Custom Resource, use the following command:

kubectl delete webinar --all

You will see:

# Output
webinar.digitalocean.com "webinar1" deleted

Remove the Custom Resource itself:

kubectl delete crd webinars.digitalocean.com

You will see a confirmation that it has been deleted:

# Output
customresourcedefinition.apiextensions.k8s.io "webinars.digitalocean.com" deleted

After deletion you will not have access to the API endpoint that you tested earlier with the curl command.

This sequence is an introduction to extending Kubernetes functionality without modifying the Kubernetes source code.


Delete the K8S Cluster

To destroy the Kubernetes cluster itself, you can use the destroy.sh script from the ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform folder. Make sure that you are in this directory:

cd ~/k8s-cicd-webinars/webinar1/2-kubernetes/1-Terraform

Run the script:

./destroy.sh

By running this script, you’ll allow Terraform to communicate with the DigitalOcean API and delete the servers in your cluster.