Docker Guide

By Tim Keaveny

Docker – Hands On

——————————————————————————————————————————————————

DEPLOYING A DOCKER CONTAINER

[+] Installing Docker

sudo yum -y install docker

[+] Steps to Install Docker

  • SSH into the machine
  • Install Docker via YUM

sudo yum -y install docker

  • Escalate to Root privileges:

sudo -i

  • Setup a Docker Group:

groupadd <desired-group-name-here>

groupadd docker

  • Add a [your] user to the group:

usermod -aG <group-name-here> <desired-username-here>

usermod -aG docker cloud_user

  • Enable && Start Docker:

systemctl enable --now docker

  • Check if Docker is running:

docker ps

  • Logout of the Root user account

“Ctrl + D”    OR    “^ + D”

[+] Create a Docker Image – Using “Hello World” Example Image

  • Pull the [Hello World] image down into Docker

docker pull <image-name-here>

docker pull docker.io/library/hello-world

  • Check to see if the Docker Image successfully pulled down:

docker images

  • Start up the new Docker container:

docker run <image-name-here>

docker run hello-world

  • Check that this particular image behaved as expected:
    • * started the container
    • * printed its content
    • * shut down

docker ps -a

——————————————————————————————————————————————————

DEPLOYING A STATIC WEBSITE TO THE CONTAINER

[+] Lab Scenario

  • In this lab we have the following objectives:
    • * Pull the Spacebones Docker Image
    • * Start the static website container and redirect HTTP port 80 to the host
    • * Confirm the container "treatseekers" is running.

[+] Lab Steps

  • SSH into the lab environment with the provided credentials
  • Display the currently installed images, or pull the desired image down:

docker ps

  • Run the target docker image with the following options set:
    • * Run in “Detached” mode: “-d”
    • * Name the container: "--name"
    • * Forward the docker container’s HTTP port 80 to the Host’s HTTP port 80:

docker run -d [detached] --name <desired-container-name-here> -p [port] <host-port-here>:<container-port-here> <target-docker-image-here>

docker run -d --name treatseekers -p 80:80 spacebones/doge

  • Verify the Docker container is running:

docker ps
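
  • Optionally, you can also hit the forwarded port from the host itself to confirm the site is being served (a quick check, assuming curl is installed on the host):

curl http://localhost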

——————————————————————————————————————————————————

BUILDING CONTAINER IMAGES

[+] Lab Steps

  • SSH into the lab environment
  • Check what docker images are available on the machine:

docker images

  • Pull down a CentOS-6 docker image:

docker pull <desired-docker-image-here>

docker pull centos:6

  • Confirm your docker image was successfully pulled down:

docker images

  • To start building a website on top of this docker image, run the container in "Interactive" mode:

docker run -it <docker-image-name-here> /bin/bash

docker run -it centos:6 /bin/bash

  • Once inside the running container, run a YUM update (this should be done with all docker containers when first spinning them up):

yum -y update

  • Install Apache and Git onto the Docker container:

yum -y install <target-pkg(s)-to-install>

yum -y install httpd git

  • Clone the desired git repository into the container:

git clone https://github.com/linuxacademy/content-dockerquest-spacebones

  • Copy the repository over into the Docker container’s web directory (“/var/www/html/“) …the default web directory for an apache server in this case:

cp content-dockerquest-spacebones/doge/* /var/www/html/

  • Rename the “welcome.conf” file (“/etc/httpd/conf.d/welcome.conf”) to “welcome.bak” …this is so apache won’t try to run it, and will ignore it when starting up:

mv /etc/httpd/conf.d/welcome.conf /etc/httpd/conf.d/welcome.bak

  • To make sure Apache starts up correctly when we run our container, we're going to set it to start at boot with chkconfig:

chkconfig httpd on

  • Now exit the container:

exit

  • Check for available images again:

docker ps -a

  • Commit the container to a new image, referencing it either by its UUID or by its name:

docker commit <uuid-or-name-here> <image-name-here>:<desired-image-tag-name-here>

docker commit b4cc5d3c5e9b spacebones:thewebsite

  • Confirm the commit was successful (you should see a new image with the properties you specified above):

docker images
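
  • As an optional sanity check (not part of the lab steps), you could start a throwaway shell from the new image and confirm the site files were captured in the commit:

docker run -it spacebones:thewebsite /bin/bash -c "ls /var/www/html/"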

SUCCESS!!!!!!!

——————————————————————————————————————————————————

DOCKER BUILD

[+] Command Format for Docker Build

docker build -t [tag] <desired-image-name>:<desired-image-tag> <desired-path-to-build-in>

docker build -t salt-master:deb .
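
  • For context, "docker build" looks for a Dockerfile in the build path (the "." above). A minimal sketch of such a Dockerfile, modeled on the ones used later in this guide (the details here are placeholders, not the salt-master image's actual contents):

FROM node

RUN mkdir -p /var/node

ADD . /var/node/

WORKDIR /var/node

CMD ./bin/www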

——————————————————————————————————————————————————

DATA STORAGE AND NETWORKING WITH DOCKER

[+] Creating Data Containers

  • Create the Docker Data Container

docker create -v [volume] <desired-directory-to-bind-volume-to> --name <desired-container-name> <desired-image-to-run> /bin/true

docker create -v /data --name boneyard spacebones/postgres /bin/true

  • Make & mount Data containers for desired images:

docker run -d --volumes-from <container-name> --name <desired-name-for-container-being-run> <desired-target-image-namespace>/<desired-target-image-name>

docker run -d --volumes-from boneyard --name cheese spacebones/postgres

  • Confirm changes made:

docker ps

docker volume ls

  • Inspect a container to see if the volume really mounted:

docker inspect <desired-target-container>

docker inspect cheese
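
  • If you only want the mount details rather than the full output, the Go-template filter syntax used later in this guide for links should also work here (a sketch):

docker inspect -f "{{ .Mounts }}" cheese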

[+] Container Networking with Links

  • Run the Spacebones website container in detached mode so we still have access to the command line

docker run -d [detached] -p [port] <target-host-port>:<container-port> --name spacebones <container-namespace>/<container-name>:<container-tag>

docker run -d -p 80:80 --name spacebones spacebones/spacebones:thewebsite

  • Confirm the container is running:

docker ps

  • Create Network Link:

docker run -d [detached] -P [publish all exposed ports to random ports on the host] --name <desired-container-name> --link <name-of-container-to-link-to>:<alias> <db-container-image-namespace>/<db-container-name>

docker run -d -P --name treatlist --link spacebones:spacebones spacebones/postgres

  • Check to see if the container was created:

docker ps 

  • Unfortunately this doesn’t tell you if the Link was successfully created.
  • So we are going to use the “docker inspect” command to inspect further.

docker inspect -f "{{ <pattern-to-search-for> }}" <name-of-container-we-are-linking-from>

docker inspect -f "{{ .HostConfig.Links }}" treatlist

[+] Container Networking With Networks

  • WHAT ARE WE DOING?
    • * We are going to prepare our [spacebones] servers by creating a Docker Bridge Network [named borkspace], which’ll then be used for secure communication between two clients. 
    • * So we’re going to create a new Docker Network Bridge on the 192.168.10.0/24 network range named “borkspace” (*don’t worry, this is much easier than creating Linux network bridges)
  • SSH into the target machine.
  • Afterwards, check to see Docker is running:

docker ps

  • List out the current networks:

docker network ls

  • Also take a look at the existing Bridge Network:

docker network inspect <name-or-uuid-here>

docker network inspect bridge

  • This should return output similar to that below:

[
    {
        "Name": "bridge",
        "Id": "bc9fe6d20459fee05fe535a8e0c997708d59ff667fd90205b680e200f1807ded",
        "Created": "2019-06-26T10:25:22.942647019-04:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

  • Now, go ahead and create our “borkspace” network:

docker network create --driver=<driver-name-here> --subnet=<desired-subnet-here> --gateway=<desired-gateway-ip-here> <desired-network-bridge-name-here>

docker network create --driver=bridge --subnet=192.168.10.0/24 --gateway=192.168.10.250 borkspace

  • To confirm changes, check the available Docker networks:

docker network ls

  • Now further inspect the new Docker Network Bridge we just created:

docker network inspect <docker-network-bridge-name-here>

docker network inspect borkspace

  • Now, per the lab’s instructions, we need to launch a new container named “treattransfer”

docker run -it [interactive] --name <desired-container-name-here> --network=<desired-network-to-launch-on> <desired-image-namespace>/<desired-image-name>

docker run -it --name treattransfer --network=borkspace spacebones/nyancat
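
  • To confirm the new container actually attached to "borkspace", you can re-run the network inspect from above (from a second terminal, or after exiting the container) and look for "treattransfer" under "Containers":

docker network inspect borkspace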

[+] Persistent Data Volumes

  • Create a new Docker Volume:

docker volume create <desired-volume-name>

docker volume create missionstatus

  • Confirm the new volume was created:

docker volume ls

  • To gain further details on the volume, run:

docker volume inspect <target-volume-name>

docker volume inspect missionstatus

  • The output of the above command should return the following (***be sure to note the MOUNTPOINT***):

[
    {
        "CreatedAt": "2019-06-26T13:32:07-04:00",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/missionstatus/_data",
        "Name": "missionstatus",
        "Options": {},
        "Scope": "local"
    }
]

  • Next, we want to copy the contents from the directory “content-dockerquest-spacebones/volumes” over into “/var/lib/docker/volumes/missionstatus/_data”
  • In order to do this we need to first elevate our privileges to the Root user:

sudo -i

  • Now copy over the content from “/home/cloud_user/content-dockerquest-spacebones/volumes/*” into “/var/lib/docker/volumes/missionstatus/_data”

cp -r <source-path> <destination-path>

cp -r /home/cloud_user/content-dockerquest-spacebones/volumes/* /var/lib/docker/volumes/missionstatus/_data/

  • Now check to confirm the content copied over into the “_data” directory that is the MountPoint:

ls <path-of-the-mountpoint-data-directory>

ls /var/lib/docker/volumes/missionstatus/_data

  • Next, start a new container named “fishin-mission” running the base httpd image available on DockerHub, with the “missionstatus” volume mounted

docker run -d [detached] -p [port] <host-port>:<container-port> --name <desired-container-name-here> --mount [mountpoint] source=<desired-source-volume-name>,target=<desired-path-to-mount-source-volume> <desired-container-image-name>

docker run -d -p 80:80 --name fishin-mission --mount source=missionstatus,target=/usr/local/apache2/htdocs httpd

  • Confirm the container was successfully launched:

docker ps

  • Finally, test out the launch by visiting the host's public IP in your browser
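
  • If you prefer the command line, a quick curl against that public IP should return the page HTML (assuming curl is available where you run it):

curl http://<host-public-ip-here>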

——————————————————————————————————————————————————

DOING MORE WITH DOCKER

[+] Container Logging – Learning Objectives

  • Learning Objectives
    • * Configure Syslog
    • * Configure Docker to Use Syslog
    • * Create a Container Using Syslog
    • * Create a Container Using a JSON File
    • * Verify that the "syslog-logging" Container is Sending its Logs to Syslog
    • * Verify that the “json-logging” Container is Sending its Logs to the JSON File

DETAILED STEPS

  • Configure Syslog
    • * open “rsyslog.conf” in order to make a few changes:

vim /etc/rsyslog.conf

  • * Uncomment the two UDP syslog receptions:

#$ModLoad imudp

#$UDPServerRun 514

TO

$ModLoad imudp

$UDPServerRun 514
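
  • After saving the change, start rsyslog so it takes effect (the lab step-by-step further below runs the same command):

systemctl start rsyslog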

  • Configure Docker to use Syslog
    • * Create the "daemon.json" file:

sudo mkdir /etc/docker

vim /etc/docker/daemon.json

  • * Add the following:

{
    "log-driver": "syslog",
    "log-opts": {
        "syslog-address": "udp://PRIVATE_IP:514"
    }
}

  • Create a Container Using Syslog
    • * Enable and Start the Docker Service:

sudo systemctl enable docker

sudo systemctl start docker

  • * Create a container called “syslog-logging” using the “httpd” image:

docker container run -d --name syslog-logging httpd

  • Create a Container Using a JSON File
    • * Create a container that uses the JSON file for logging:

docker container run -d --name json-logging --log-driver json-file httpd

  • Verify that the "syslog-logging" Container is Sending its Logs to Syslog
    • * Make sure that the "syslog-logging" container is logging to syslog by checking the "messages" log file:

tail /var/log/messages

  • Verify that the "json-logging" Container is Sending its Logs to the JSON File
    • * Execute "docker logs" for the "json-logging" container:

docker logs json-logging

[+] Container Logging – Lab Step-by-Step

  • SSH into the machine
  • Since we’ll be working with protected system files, we need to escalate our privileges to Root:

sudo su -

  • Now, we’ll configure “rsyslog.conf” for UDP, configure Docker to use syslog as the default logging driver, and then start up Docker.
  • We’ll spin up a container using the “httpd” Docker image (*again by default, this should be using the syslog driver).
  • Then we’ll spin up a second container using “httpd” as the image and specify that we’ll use the json file log driver:

vim /etc/rsyslog.conf

  • Uncomment the following lines within the file:

#$ModLoad imudp

#$UDPServerRun 514

TO

$ModLoad imudp

$UDPServerRun 514

  • Next, after modifying the file, go and start “rsyslog”:

systemctl start rsyslog

  • Now that we have “rsyslog” running, we need to go make some changes to Docker to make sure that it uses “syslog” as the default logging driver.
  • To do this we're going to create a new file called "daemon.json", located in a directory we'll create next called "/etc/docker":

mkdir /etc/docker

vim /etc/docker/daemon.json

  • In “daemon.json” we’re going to specify “syslog” as the log driver.
  • We also need to include some log options, which is going to be the address to the “syslog” server.
  • Add the following to the newly created “daemon.json” file:

{
    "log-driver": "syslog",
    "log-opts": {
        "syslog-address": "udp://<private-ip-of-docker-instance-here>:514"
    }
}

  • After creating the “daemon.json” file, we can go Enable && Start Docker:

systemctl enable docker 

systemctl start docker

  • Now that Docker is running, we can go and “tail” the “messages” log file located in the “/var/log” directory in order to see if we have any logs coming in from Docker:

tail /var/log/messages

  • Let’s test our setup.
  • Create a new container

docker container run -d --name syslog-logging httpd
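
  • The remaining checks mirror the learning objectives above: create the second container using the json-file log driver, then verify where each container's logs end up:

docker container run -d --name json-logging --log-driver json-file httpd

tail /var/log/messages

docker logs json-logging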

**********************************************************************************************************************************************

UPDATING CONTAINERS WITH WATCHTOWER

**********************************************************************************************************************************************

[+] Updating Containers With Watchtower – Learning Objectives

CREATING THE DOCKERFILE

  • Create a Dockerfile: 

vi Dockerfile

  • The Dockerfile should contain the following:

FROM node

RUN mkdir -p /var/node

ADD content-express-demo-app/ /var/node/

WORKDIR /var/node

RUN npm install

CMD ./bin/www

BUILD THE DOCKER IMAGE

  • Build the Docker image:

docker build -t USERNAME/express -f Dockerfile .

PUSH THE IMAGE TO DOCKER HUB

  • Log in to Docker:

docker login

  • Push the image to Docker Hub:

docker push USERNAME/express

CREATE A DEMO CONTAINER

  • Create the container that Watchtower will monitor

docker run -d --name demo-app -p 80:3000 --restart always USERNAME/express

CREATE THE WATCHTOWER CONTAINER

  • Create the Watchtower container that will monitor the “demo-app” container:

docker run -d --name watchtower --restart always -v /var/run/docker.sock:/var/run/docker.sock v2tec/watchtower -i 30

UPDATE THE DOCKER IMAGE

  • Update the Docker image:

docker build -t USERNAME/express -f Dockerfile .

  • Re-push the image to Docker Hub:

docker push USERNAME/express:latest

[+] Updating Containers With Watchtower – Lab Step-by-Step

  • SSH into the machine
  • Elevate your privileges to Root:

sudo su -

  • Create a Dockerfile:

vim Dockerfile

  • Add the following contents to the Dockerfile:

FROM node

RUN mkdir -p /var/node

ADD content-express-demo-app/ /var/node/

WORKDIR /var/node

RUN npm install

CMD ./bin/www

  • Next, log in to Docker Hub (*you must already have a Docker Hub account set up!):

docker login

  • Build the Docker image:

docker build -t [--tag] <your-docker-hub-username-here>/<desired-image-name-here> -f [--file] Dockerfile <desired-path-to-build-in-here>

docker build -t tkeaveny/express -f Dockerfile .

  • Push the newly built Docker image to Docker Hub:

docker push <your-docker-hub-username-here>/<desired-docker-image-name-here>

docker push tkeaveny/express

  • Create the “demo-app” container (* the application container itself):

docker run -d [detached] --name demo-app -p [port] <desired-host-port-here>:<desired-container-port-here> --restart [restart-policy] <desired-policy-here> [i.e. "always"] <your-docker-hub-username-here>/<desired-docker-image-name-here>

docker run -d --name demo-app -p 80:3000 --restart always tkeaveny/express

  • Verify the container was successfully run:

docker ps

  • Create the Watchtower container (*we give it a Docker socket so it has access to Docker and can start/stop containers):

docker run -d --name watchtower --restart always -v /var/run/docker.sock:/var/run/docker.sock v2tec/watchtower -i [interval, in seconds] 30

  • Verify:

docker ps

  • Make a change to the Dockerfile to test the functionality of the Watchtower container’s logic:

vim Dockerfile

  • Rebuild the docker image

docker build -t tkeaveny/express -f Dockerfile .

  • Check whether the container running "tkeaveny/express" (i.e. "demo-app") now shows a more recent "Created" timestamp than the "watchtower" container (*we originally created "demo-app" first, so it used to have the older timestamp; a newer one means Watchtower redeployed it):

docker ps
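
  • You can also peek at Watchtower's own container logs (using "docker logs", as elsewhere in this guide) to see its update activity:

docker logs watchtower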

**********************************************************************************************************************************************

ADDING METADATA AND LABELS

**********************************************************************************************************************************************

[+] Adding Metadata and Labels – Learning Objectives

CREATE A DOCKERFILE

  • Create a Dockerfile using the following instructions:

FROM node

LABEL maintainer="EMAIL_ADDRESS"

ARG BUILD_VERSION

ARG BUILD_DATE

ARG APPLICATION_NAME

LABEL org.label-schema.build-date=$BUILD_DATE

LABEL org.label-schema.application=$APPLICATION_NAME

LABEL org.label-schema.version=$BUILD_VERSION

RUN mkdir -p /var/node

ADD weather-app/ /var/node/

WORKDIR /var/node

RUN npm install

EXPOSE 3000

CMD ./bin/www

BUILD THE DOCKER IMAGE

  • Build the Docker image using the following parameters:

docker build -t rivethead42/weather-app --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \

--build-arg APPLICATION_NAME=weather-app --build-arg BUILD_VERSION=v1.0 -f Dockerfile .

PUSH THE IMAGE TO DOCKER HUB

  • Push the Weather-App image to Docker Hub:

docker image push USERNAME/weather-app

CREATE THE ‘WEATHER-APP’ CONTAINER

  • Start the “weather-app” container:

docker run -d --name demo-app -p 80:3000 --restart always USERNAME/weather-app

CHECK OUT VERSION V.1.1 OF THE WEATHER APP

  • In the “weather-app” directory, check out version v1.1 of the “weather-app”:

cd weather-app

git checkout v1.1

cd ../

REBUILD THE WEATHER APP IMAGE

  • Rebuild and push the “weather-app” image:

docker build -t rivethead42/weather-app --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \

--build-arg APPLICATION_NAME=weather-app --build-arg BUILD_VERSION=v1.1 -f Dockerfile .

docker push rivethead42/weather-app

[+] Lab Step-by-Step

  • In this lab we’ll have two servers.
    • * Docker Workstation – This is where we’ll be performing the work and creating our Dockerfile here.
    • * Docker Server – This is where we’ll spin up our container.
  • First thing we’ll want to do is access our “Docker Workstation”:

ssh <docker-workstation-username-here>@<docker-workstation-public-ip-here>

  • Elevate the user’s permissions to Root:

sudo su -

  • Next, login into the Docker Server:

ssh <docker-server-username-here>@<docker-server-public-ip-here>

  • Elevate the user’s permissions to Root in the Docker Server as well:

sudo su -

>>>

LEARNING CHECK:

  • What we’ll be doing is deploying a weather-app using Docker:
    • * We’re going to build out a Dockerfile
    • * We’re going to be using some Metadata within our Dockerfile using Labels
      • *** build_date
      • *** application_name
      • *** version_of_the_application
    • * Build the Docker image
    • * Push the Docker image to Docker Hub
    • * Deploy “weather-app” container
    • * Once the “weather-app” container is up and running, we’re going to: 
      • *** change the branch of our application
      • *** rebuild the image
      • *** push back up to Docker Hub
      • *** have Watchtower redeploy the app for us.

NOTE:

  • When it comes to working on our Dockerfile and building our Docker image, we’ll be doing this on our “Docker Workstation”, NOT on the “Docker Server”!!!!!!
  • This is because if we go out and deploy our container, and we try to make updates to our image, it’s not going to create the proper Tags that we need.
  • By using 2 servers in our live environment, we’re able to prevent this conflict.
  • When it comes to running our container, we’ll be doing this on our Docker Server

<<<

  • Create the Dockerfile:

vim Dockerfile

  • Add the following to the Dockerfile:

# FROM <desired-base-image-name-here>

FROM node

# LABEL <desired-tag-name-here>="<desired-tag-value-here>"

LABEL maintainer="EMAIL_ADDRESS"

# <argument-keyword…ARG> <argument-name-here>

ARG BUILD_VERSION

ARG BUILD_DATE

ARG APPLICATION_NAME

# <label-keyword...LABEL> <when-using-a-domain-use-the-domain-as-part-of-the-schema-name>=$<ARG-name-here>  (e.g. "test.com" -> com.test-schema.<Label-name-here>)

LABEL org.label-schema.build-date=$BUILD_DATE

LABEL org.label-schema.application=$APPLICATION_NAME

LABEL org.label-schema.version=$BUILD_VERSION

# create a directory for application code

RUN mkdir -p /var/node

# Add the code from the “weather-app”

ADD weather-app/ /var/node/

# make "/var/node/" the working directory...any actions will be performed here

WORKDIR /var/node

# execute a “npm install”

RUN npm install

# expose port 3000

EXPOSE 3000

# specify the CMD command

CMD ./bin/www

  • Next, we're going to log in to Docker Hub:

docker login

  • Build the Docker image:

docker build -t [tag] <Docker-Hub-username-here>/<image-name-here> --build-arg <ARG-name-here>=<desired-ARG-value-here> \

--build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') --build-arg APPLICATION_NAME=weather-app --build-arg BUILD_VERSION=v1.0 -f [file] Dockerfile .

docker build -t tkeaveny/weather-app --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') --build-arg APPLICATION_NAME=weather-app --build-arg BUILD_VERSION=v1.0 -f Dockerfile .

  • Confirm the image was created:

docker images

  • Inspect the Docker image and ensure that the Labels were successfully integrated:

docker inspect <docker-image-uuid-OR-docker-image-name>

docker inspect 23c3c6c08a84
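
  • To view just the labels instead of the full inspect output, the same "-f" Go-template filter used earlier can be pointed at the image's label field (a sketch):

docker inspect -f "{{ .Config.Labels }}" 23c3c6c08a84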

  • Push the new Docker image to Docker Hub:

docker push <docker-hub-username>/<docker-image-name>

docker push tkeaveny/weather-app

>>>MOVE OVER TO DOCKER SERVER NOW!!!

  • Now that we created and pushed our new Docker image to Docker Hub, along with our specified Labels, we’ll move over into our Docker Server and launch our container:

docker run -d [detached] --name weather-app -p <desired-target-host-port>:<desired-container-serving-port> --restart always <Docker-Hub-username-here>/<Docker-image-name-here>

docker run -d --name weather-app -p 80:3000 --restart always tkeaveny/weather-app

>>>MOVE BACK OVER TO DOCKER WORKSTATION – (AFTER LAUNCHING THE CONTAINER IN THE DOCKER SERVER)

  • Check to see the available directories in the Docker Workstation (there should be a directory with the name of the new application, “weather-app”):

ls

  • Change directories into the new “weather-app” directory:

cd weather-app/

  • Check out the "v1.1" branch of the [weather-app] application:

git checkout v1.1

  • Change directories out of the “weather-app” directory by one level (…back to previous working dir):

cd ../

  • Now, rebuild the Docker image, BUT this time, change the “BUILD_VERSION” Label’s value to match that of the application’s new version (BUILD_VERSION=v1.0    ——>    BUILD_VERSION=v1.1):

docker build -t tkeaveny/weather-app --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') --build-arg APPLICATION_NAME=weather-app --build-arg BUILD_VERSION=v1.1 -f Dockerfile .

  • Verify the changes you just made by running “docker inspect <docker-image-uuid-or-name-here>”:

docker inspect dda299667960

  • Next, push the newly re-built image to Docker Hub:

docker push tkeaveny/weather-app

>>>MOVE BACK OVER TO DOCKER SERVER NOW!!!

  • Move back over to the Docker Server and check to see if Watchtower updated our Docker container (the “STATUS” should be updated recently…it should have been updated more recently than the “watchtower” container):

docker ps

  • Inspect the “weather-app” container (you should see that the version has been updated within the container’s “BUILD_VERSION” Label value):

docker inspect <container-uuid-here>

docker inspect f70dacee6d57

ALL DONE!!!!!!!!!

**********************************************************************************************************************************************

LOAD BALANCING CONTAINERS

**********************************************************************************************************************************************

[+] Load Balancing Containers – Learning Objectives

CREATE A DOCKER COMPOSE FILE

  • The contents of your "docker-compose.yaml" should look like the following:

version: '3.2'
services:
  weather-app1:
      build: ./weather-app
      tty: true
      networks:
       - frontend
  weather-app2:
      build: ./weather-app
      tty: true
      networks:
       - frontend
  weather-app3:
      build: ./weather-app
      tty: true
      networks:
       - frontend
  loadbalancer:
      build: ./load-balancer
      image: nginx
      tty: true
      ports:
       - '80:80'
      networks:
       - frontend
networks:
  frontend:

UPDATE `NGINX.CONF`

  • The contents of your “nginx.conf” file should look like the following:

events { worker_connections 1024; }

http {

  upstream localhost {

    server weather-app1:3000;

    server weather-app2:3000;

    server weather-app3:3000;

  }

  server {

    listen 80;

    server_name localhost;

    location / {

      proxy_pass http://localhost;

      proxy_set_header Host $host;

    }

  }

}

EXECUTE `DOCKER-COMPOSE UP`

  • Execute a “docker-compose up”:

/usr/local/bin/docker-compose up --build -d

CREATE A DOCKER SERVICE USING DOCKER SWARM

  • Create a Docker service by executing the following:

docker service create --name nginx-app --publish published=8080,target=80 --replicas=2 nginx

[+] Lab Step-by-Step

  • In this Lab, we have our containers built and deployed.
  • However, we don't have a Load Balancing solution set up.
  • What we’ve been tasked to do is create 2 “proof-of-concepts”:
    • * Nginx Load Balancer
    • * Docker Swarm Service

NGINX LOAD BALANCER

  • Use “docker-compose” to create a new Nginx Load Balancer (as well as 3 other instances using the “weather-app” image)
  • Nginx will be using port 80 and will route traffic back to port 3000 on the “weather-app” containers

CREATE DOCKER SWARM SERVICE (*called “nginx-app”)

  •  The Docker Swarm Service will have 2 replicas.
  • The Docker Swarm Service will be using the NGINX image
  • The Publish Port will be 8080 and it’ll be targeting port 80 on the container.

WHAT ARE WE DOING?

  • We have two Cloud Servers for our environment, “Swarm-Server-1” and “Swarm-Server-2”.
  • “Swarm-Server-1” will function as the “Swarm Server Master”.
  • “Swarm-Server-2” will function as a “Worker-Node”

>>>

NOTE:

  • For this lab, we’re going to learn how to implement Load Balancing in two different ways.
  • In the first way, we’re going to be using NGINX
    • * It’s going to act as a Load Balancer, which’ll then send traffic to our “weather-app” containers
    • * For this approach, we’ll be using “docker-compose” to setup our service.
    • * It’s a lot easier to setup Load Balancing with several other containers with “docker-compose”
  • The second way will be using Docker Swarm

<<<

>>>GO TO THE FIRST DOCKER SWARM SERVER (Swarm-Server-1)

  • For this lab, if you check the content of the current directory (“/home/root”), you’ll see a folder named “lb-challenge”:

ls

  • If you change directories and inspect the contents of the “lb-challenge” directory, you’ll see two sub-directories, “load-balancer” && “weather-app”:

cd lb-challenge/

ls

  • The two above mentioned sub-directories of “lb-challenge” contain the following (…“load-balancer” && “weather-app”):
    • * load-balancer – this is where the Dockerfile and NGINX configuration file are located.
    • * weather-app – this is where we have the Dockerfile to build the “weather-app” along with the Source Code (*for “weather-app”)
  • When we execute “docker-compose up”, “docker-compose” is going to build the “Load Balancing” image as well as the “weather-app” image.
  • Now, create a “docker-compose.yaml” file in the “Swarm-Server-1”:

vim docker-compose.yml

  • We're going to create 3 "weather-app" services (e.g. "weather-app1", "weather-app2", and "weather-app3"):

version: '3.2'
services:
  # service's name
  weather-app1:
      # specify we're going to build a Docker image
      build: ./weather-app
      # set "tty" to true or false
      tty: true
      # specify your networks
      networks:
       # specified network name
       - frontend
  weather-app2:
      build: ./weather-app
      tty: true
      networks:
       - frontend
  weather-app3:
      build: ./weather-app
      tty: true
      networks:
       - frontend
  # load balancing service
  loadbalancer:
      # where to build
      build: ./load-balancer
      image: nginx
      tty: true
      # define ports to be used (i.e. <target-host-port>:<source-container-port>)
      ports:
       - '80:80'
      # define the network it uses
      networks:
       - frontend
# now that services are defined, define our network
networks:
  frontend:

  • Next, change directories into the “load-balancer” directory:

cd load-balancer

  • Check to see what contents are available in the “load-balancer” directory:

ls

  • Edit the NGINX configuration file (“nginx.conf”):

vim nginx.conf

*****************

FROM>>>

events { worker_connections 1024; }

http {

  upstream localhost {

    # Weather App config goes here

  }

  server {

    # Server config goes here

  }

}

TO>>>

events { worker_connections 1024; }

http {

  upstream localhost {

    server weather-app1:3000;

    server weather-app2:3000;

    server weather-app3:3000;

  }

  server {

    listen 80;

    server_name localhost;

    location / {

      proxy_pass http://localhost;

      proxy_set_header Host $host;

    }

  }

}

*****************

  • Navigate back up one directory (*because this is the location of the “docker-compose” file):

cd ../

  • Run “docker-compose up”:

docker-compose up --build -d [detached]

  • Check to see if the containers were successfully built:

docker ps

  • Test out the application deployment by going into your browser and visiting "Swarm-Server-1"'s Public IP:
http://34.200.221.252
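
  • From the command line, a quick curl against the same public IP should return the weather-app page served through the NGINX load balancer (assuming curl is installed where you run it):

curl http://<swarm-server-1-public-ip-here>
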
  • Back in "Swarm-Server-1" there is a file "swarm-token.txt" containing the command used to join the second Swarm Server to the swarm.
  • Print out the contents of this file, copy the command, navigate to "Swarm-Server-2", then paste && run the command:

cat swarm-token.txt

>>>NAVIGATE TO SWARM SERVER 2

docker swarm join --token SWMTKN-1-0b06z5gw5guankfm4jdbstbiq66m45vchorpkhijc95vsfz8dk-5k91zdp8ojqda05ic35j85jfv 10.0.1.195:2377

>>>Navigate back to SWARM SERVER 1:

  • Create a new Service for NGINX:

docker service create --name nginx-app --publish published=8080,target=80 --replicas=2 nginx

  • Now, if you go back to your browser, navigate to Swarm-Server-1’s Public IP BUT specify the designated NGINX port we defined (i.e. 8080)
http://34.200.221.252:8080/
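
  • You can also confirm from the command line that both replicas of the new service are running ("docker service ps" is used the same way later in this guide):

docker service ps nginx-app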

**********************************************************************************************************************************************

BUILD SERVICES WITH DOCKER COMPOSE

**********************************************************************************************************************************************

[+] Build Services with Docker Compose – Learning Objectives

CREATE A GHOST BLOG AND MYSQL SERVICES

  • Create a "docker-compose.yml" file in the root directory:

vim docker-compose.yml

  • Add the following contents to it:

version: '3'
services:
  ghost:
    image: ghost:1-alpine
    container_name: ghost-blog
    restart: always
    ports:
      - 80:2368
    environment:
      database__client: mysql
      database__connection__host: mysql
      database__connection__user: root
      database__connection__password: P4sSw0rd0!
      database__connection__database: ghost
    volumes:
      - ghost-volume:/var/lib/ghost
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7
    container_name: ghost-db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: P4sSw0rd0!
    volumes:
      - mysql-volume:/var/lib/mysql
volumes:
  ghost-volume:
  mysql-volume:

BRING UP THE GHOST BLOG SERVICE

  • Start up the Docker Compose service:

docker-compose up -d

[+] Lab Step-by-Step

  • SSH into the machine
  • Elevate privileges to Root:

sudo su -

  • Create a “docker-compose.yml” file:

vim docker-compose.yml

  • Add the following content to the newly created “docker-compose.yml” file:

# version of docker-compose to use
version: '3'

# define our services
services:
  # service name
  ghost:
    # base image for service
    image: ghost:1-alpine
    # container name for service
    container_name: ghost-blog
    # restart policy
    restart: always
    # port mapping for service (i.e. <target-host-port>:<source-container-port>)
    ports:
      - 80:2368
    # our defined environment variables
    environment:
      # database type
      database__client: mysql
      # specified database container "Ghost Blog" will talk to
      database__connection__host: mysql
      # database user to use
      database__connection__user: root
      # database password
      database__connection__password: P4sSw0rd0!
      # database name
      database__connection__database: ghost
    # define volumes to use with containers
    volumes:
      # volume mapping (*the contents of the Ghost Blog application are stored here)
      - ghost-volume:/var/lib/ghost
    # define dependencies here (*we want the "MySQL" container to come up first before the "Ghost-Blog" container)
    depends_on:
      # container name
      - mysql
  # define the MySQL service
  mysql:
    image: mysql:5.7
    container_name: ghost-db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: P4sSw0rd0!
    # volume mapping for MySQL (*MySQL data is stored here)
    volumes:
      - mysql-volume:/var/lib/mysql

# define our volumes
volumes:
  ghost-volume:
  mysql-volume:

  • Next, spin up the containers:

docker-compose up -d

  • Check to see if the containers were successfully started:

docker ps

NOTE:

  • The “Ghost-Blog” container will restart a few times
  • It does this because it needs to communicate with the MySQL container and set the Tables up.
  • Once complete, the instances will stop restarting and everything should work as intended up to this point.
  • This can be verified by executing Docker Logs.
  • Retrieve the Docker Logs for the “Ghost-Blog” container:

docker logs <desired-target-container-uuid-here>

docker logs 4f0de8ef4c5b
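
  • Once the restarts settle, you can also confirm the blog answers on port 80 from the host (assuming curl is installed):

curl -I http://localhost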

——————————————————————————————————————————————————

MONITORING WITH PROMETHEUS

**********************************************************************************************************************************************

Section Introduction Lecture

**********************************************************************************************************************************************

[+] Lecture: Prometheus with Containers

  • There are two things to focus on:
    • * How to use “docker stats” (how to get info on containers)
    • * Using Prometheus to monitor your containers using cAdvisor

**********************************************************************************************************************************************

MONITORING CONTAINERS WITH PROMETHEUS

**********************************************************************************************************************************************

[+] Monitoring Containers with Prometheus – Learning Objectives

CREATE A “PROMETHEUS.YML” FILE

  • In the root directory, create “prometheus.yml”:

vim prometheus.yml

  • Add the following content to the new “prometheus.yml” file:

scrape_configs:
- job_name: cadvisor
  scrape_interval: 5s
  static_configs:
  - targets:
    - cadvisor:8080

CREATE A PROMETHEUS SERVICE

  • Create a “docker-compose.yml” file:

vim docker-compose.yml

  • Add the following content to the newly created “docker-compose.yml” file:

version: '3'
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - 9090:9090
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    depends_on:
      - cadvisor
  cadvisor:
    image: google/cadvisor:latest
    container_name: cadvisor
    ports:
      - 8080:8080
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro

  • Stand up the environment:

docker-compose up -d

CREATE "STATS.SH"

  • Create "stats.sh" in "/root":

vim stats.sh

  • Add the following content to the newly created "stats.sh" file:

docker stats --format "table {{.Name}} {{.ID}} {{.MemUsage}} {{.CPUPerc}}"

  • Make sure the file can be executed:

sudo chmod a+x stats.sh

  • Execute the script:

./stats.sh

  • Once finished, exit by pressing “Ctrl+C”:

Ctrl + C

[+] Lab Step-by-Step

  • SSH into the machine
  • Once logged in, elevate user privileges to Root:

sudo su -

  • Create a file called “prometheus.yml”

vim prometheus.yml

  • Add the following content to the “prometheus.yml” file:
    • * There are several things that we can setup in the “prometheus.yml” file.
    • * However, what we are primarily concerned about at this point is setting up “scrape_config”
    • * Everything else will just use the Prometheus defaults.

# configuration name
scrape_configs:
# specified job name
- job_name: cadvisor
  # scrape intervals
  scrape_interval: 5s
  static_configs:
  # specify targets
  - targets:
    # target naming format: <container-name-here>:<container-port-here>
    - cadvisor:8080

  • Next, create a “docker-compose.yml” file:

vim docker-compose.yml

  • Add the following content to the newly created “docker-compose.yml” file:

# docker-compose version
version: '3'
services:
  # define the "prometheus" service
  prometheus:
    # specify the image and tag: <repository-name-here>/<image-name-here>:<image-tag-here>
    image: prom/prometheus:latest
    # desired container name
    container_name: prometheus
    ports:
      # map ports: <target-host-port>:<source-container-port>
      - 9090:9090
    command:
      # desired command to run inside the container
      - --config.file=/etc/prometheus/prometheus.yml
    # volume mappings
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    # dependencies
    depends_on:
      - cadvisor
  # define the "cAdvisor" service
  cadvisor:
    image: google/cadvisor:latest
    container_name: cadvisor
    ports:
      - 8080:8080
    volumes:
      # "ro" stands for READ ONLY
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro

  • Now, after creating the above files, spin up the containers:

docker-compose up -d

  • Confirm the containers were successfully spun up:

docker ps

  • Check the containers by navigating to the Public IP in your browser, with port 9090=“prometheus” and port 8080=“cAdvisor”

FORMAT:

http://<server-public-ip-here>:<target-service-port-here>

PROMETHEUS:

http://3.80.77.109:9090

CADVISOR

http://3.80.77.109:8080
  • Let’s check the output of the “docker stats” command so we can get a bit of information:

docker stats

  • To exit, press “Ctrl+C”:

 Ctrl + C

  • To make the handling of metrics a bit easier, we're going to create an executable file called "stats.sh":

vim stats.sh

  • Add the following content to the “stats.sh” file:

#!/bin/bash

docker stats --format "table {{.Name}} {{.ID}} {{.MemUsage}} {{.CPUPerc}}"

  • After saving the file, make it executable and run it (same commands as in the learning objectives above):
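
chmod a+x stats.sh

./stats.sh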

**********************************************************************************************************************************************

USING GRAFANA WITH PROMETHEUS FOR ALERTING AND MONITORING

**********************************************************************************************************************************************

[+] Using Grafana with Prometheus for Alerting and Monitoring – Learning Objectives

CONFIGURE DOCKER

  • Open “/etc/docker/daemon.json” and add the following:

{
  "metrics-addr" : "0.0.0.0:9323",
  "experimental" : true
}

UPDATE “PROMETHEUS.YML”

  • Edit the “prometheus.yml” file in the “/root” directory:

vim ~/prometheus.yml

  • Change the contents to reflect the following:

scrape_configs:
  - job_name: prometheus
    scrape_interval: 5s
    static_configs:
    - targets:
      - prometheus:9090
      - node-exporter:9100
      - pushgateway:9091
      - cadvisor:8080
  - job_name: docker
    scrape_interval: 5s
    static_configs:
    - targets:
      - PRIVATE_IP_ADDRESS:9323

UPDATE “DOCKER-COMPOSE.YML”

  • Edit “docker-compose.yml” in the “/root” directory:

vim ~/docker-compose.yml

  • Change the contents to reflect the following:

version: '3'
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - 9090:9090
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    depends_on:
      - cadvisor
  cadvisor:
    image: google/cadvisor:latest
    container_name: cadvisor
    ports:
      - 8080:8080
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
  pushgateway:
    image: prom/pushgateway
    container_name: pushgateway
    ports:
      - 9091:9091
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    expose:
      - 9100
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - 3000:3000
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=password
    depends_on:
      - prometheus
      - cadvisor

INSTALL THE DOCKER AND SYSTEM MONITORING DASHBOARD

  • Click the “+” sign on the left side of the Grafana interface.
  • Click “Import”
  • Copy the contents of the JSON file included in the lab instructions.
  • Paste the contents of the file into the import screen of the Grafana interface, and click “Load”.
  • In the upper right-hand corner, click on “Refresh every 5m” and select “Last 5 minutes”.

[+] Lab Step-by-Step

  • SSH into the machine
  • Elevate user privileges to Root:

sudo su -

  • Create a new file called “daemon.json” in the “/etc/docker/“ folder:

vim /etc/docker/daemon.json

  • Add the following content to the “/etc/docker/daemon.json” file:

{
  "metrics-addr" : "0.0.0.0:9323",
  "experimental" : true
}

  • After adding the content to the file and saving it, restart the “docker” service using the “systemctl” command:

systemctl restart docker

  • Now, let’s make some changes to the Prometheus configuration:

vim ~/prometheus.yml

  • Make the following changes to the contents of the “prometheus.yml” file:

FROM

scrape_configs:
  - job_name: cadvisor
    scrape_interval: 5s
    static_configs:
    - targets:
      - cadvisor:8080

TO

scrape_configs:
  - job_name: prometheus
    scrape_interval: 5s
    static_configs:
    - targets:
      - prometheus:9090
      - node-exporter:9100
      - pushgateway:9091
      - cadvisor:8080
  - job_name: docker
    scrape_interval: 5s
    static_configs:
    - targets:
      - PRIVATE_IP_ADDRESS:9323

  • After updating and saving the "prometheus.yml" file, open the "docker-compose.yml" file and update it to contain the following:

version: '3'
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - 9090:9090
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    depends_on:
      - cadvisor
  cadvisor:
    image: google/cadvisor:latest
    container_name: cadvisor
    ports:
      - 8080:8080
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
  pushgateway:
    image: prom/pushgateway
    container_name: pushgateway
    ports:
      - 9091:9091
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    expose:
      - 9100
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - 3000:3000
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=password
    depends_on:
      - prometheus
      - cadvisor

  • Next, in order to apply the changes we’ve just made, run the “docker-compose up” command:

docker-compose up -d

  • Check to see if the containers were spun up successfully via Prometheus.
    • * you can do this by opening your web browser and navigating to the container’s Public IP, port 9090:
http://<container-public-ip-here>:9090

3.80.75.229:9090

  • Here you will be presented with the Prometheus dashboard.
  • From within the top toolbar, expand the “Status” drop-down menu and select “targets”.
  • This should present you with a Graphical Interface displaying the various service’s status.
  • Next, while still in the web browser, we want to take a look at the Grafana service to ensure everything is up and running.
  • You can access the Grafana dashboard by navigating to the container's Public IP, port 3000
http://<container-public-ip-here>:3000

3.80.75.229:3000

  • Log in to Grafana using the default credentials we set (i.e. via Environment Variables)
    • * For the lab, within the "docker-compose.yml" file, we set the Environment Variable called "GF_SECURITY_ADMIN_PASSWORD", and gave it the value "password".
    • * With this set, our credentials will be:
      • username: "admin"
      • password: "password"

NOTE

  • If you forgot to set this environment variable, or it just didn't set properly, the default credentials for Grafana are:
    • username: “admin”
    • password: “admin”
  • Now that we’re successfully logged into Grafana, we’re going to create a new “Data Source”.
  • Click “Add Data Source” from within the Grafana dashboard.
  • On the next screen you’ll be prompted to “Choose a Source Type”.
    • * For this option, choose “Prometheus”
  • You will also notice that the input field labeled "URL", within the "HTTP" section, has a placeholder with a value of "http://localhost:9090".
  • What we're going to do is replace "localhost" with the container's Private IP:

URL=> http://<container-private-ip-here>:9090

URL=> http://10.0.1.206:9090

  • Afterwards, scroll to the bottom of the screen and click “Save & Test”
  • The next step is to import our Dashboard.
  • Select and expand the “+” symbol located in the upper-left corner of the screen.
  • Click the option “Import”
  • Here, we want to paste in the JSON data that is our desired dashboard.
    • * For the purpose of this demo, we’re provided a link with JSON data for a dashboard:
https://raw.githubusercontent.com/linuxacademy/content-intermediate-docker-quest-prometheus/master/dashboards/docker_and_system.json
  • Copy the JSON data from the provided link above (*or whichever dashboard JSON code you desire) and paste it into the text area on the "Import" screen.
  • Click the “Load” button.
  • We will be redirected to another screen where we can modify some of the Data sources configs.
  • On this page, under the “Options” section, there is a Field labeled “Prometheus” with a drop-down menu as its input method.
  • From this drop-down menu, select the option “Prometheus”.
  • Click the “Import” button
  • We will be redirected to a new Dashboard screen that displays the metrics for the Prometheus service.
  • For this demo, change the time window for the metrics from “Last 24hrs” to “Last 5 minutes” (*the option is accessible via a button located in the top-right corner of the screen…there’s a small “clock” icon within the button)
  • Now that we got the dashboard setup, let’s explore a bit further.
  • From within the Aside menu located on the left-side of the Dashboard’s screen, hover over the Bell icon and select the option “Notification Channels”
  • Click “Add Channel”
  • Provide the following values for this screen:
    • Name: “Email Alerts”
    • Type: “Email”
    • Default: False
    • Include Image: True
    • Disable Resolve Message: False
    • Send Reminders: False

——————————————————————————————————————————————————

WORKING WITH DOCKER SWARM

**********************************************************************************************************************************************

SETTING UP DOCKER SWARM

**********************************************************************************************************************************************

[+] Setting Up Docker Swarm – Learning Objectives

  • For this lab we're going to set up a docker swarm that consists of 3 nodes.
  • We’ll verify everything works by creating a Docker Service.

INITIALIZE THE DOCKER SWARM

  • Swarm Server 1 will be the “Swarm Master”
  • Initialize the Docker swarm:

docker swarm init

ADD ADDITIONAL NODES TO THE SWARM

  • Add your worker nodes to the swarm:

docker swarm join --token TOKEN IP_ADDRESS:2377

CREATE A SWARM SERVICE

  • From the “Master Node”, create a service to test your swarm configuration:

docker service create --name weather-app --publish published=80,target=3000 --replicas=3 weather-app

[+] Lab Step-by-Step

  • SSH into “Swarm Server 1”
  • Elevate user permissions to Root:

sudo su -

  • ***** Open TWO new tabs in the Terminal and SSH into “Swarm Server 2” & “Swarm Server 3” (*follow the same instructions as in the prior step) ******
  • Now, go back to “Swarm Server 1” which’ll serve as the “Swarm Master” and is where our Docker images are located.
  • Verify that the images are available in “Swarm Server 1” by running “docker images”:

docker images

  • After verifying the Docker images are present, initialize the Docker Swarm:

docker swarm init

  • The output of this command will return a token:
    • * This token will be used to add additional nodes to the swarm by simply pasting the join command into the desired Docker Server.

EXAMPLE

docker swarm join --token SWMTKN-1-4z38ikyycswrd1aujw81agxqqdmplscmt8wnglddxna3t80nbo-6l0cuw01y31ph096gspuwnz89 10.0.1.35:2377

>>>MOVE OVER TO SWARM SERVER 2!!!

  • Paste the Token in to “Swarm Server 2” to add this server as a node to the Docker Swarm (*of which “Swarm Server 1” is the Swarm Master)

>>>MOVE OVER TO SWARM SERVER 3!!!

  • Paste the Token in to “Swarm Server 3” to add this server as a node to the Docker Swarm (*of which “Swarm Server 1” is the Swarm Master)

>>>MOVE BACK OVER TO SWARM SERVER 1…the “Swarm Master”!!!

  • Confirm that "Swarm Server 2" and "Swarm Server 3" were successfully added to the Docker Swarm by running the "docker node ls" command:

docker node ls

  • Next, let’s create the Docker Service

docker service create --name <desired-service-name-here> --publish published=<host-port-here>,target=<container-port-here> --replicas=<desired-number-of-replicas-here> <desired-base-image-for-docker-service-here>

docker service create --name weather-app --publish published=80,target=3000 --replicas=3 weather-app

  • Confirm that the Docker Service was successfully created by running the “docker service ls” command:

docker service ls
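
  • To see how the 3 replicas were distributed across the swarm nodes, you can also run ("docker service ps" is used the same way in the next lab):

docker service ps weather-app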

**********************************************************************************************************************************************

BACKING UP AND RESTORING DOCKER SWARM

**********************************************************************************************************************************************

[+] What Are We Doing?

  • Create a demo on how to back up and restore Docker Swarm
  • Set up a Docker Swarm with 3 nodes.
  • Scale the Backup Service up to 3 replicas
  • Back up your Master Node
  • Go restore the Swarm on the Backup instance.
  • ***** In this lab there are 4 servers (1=Swarm Master Node, 2=Worker Nodes, 1=Backup Swarm Server) *****

[+] Backing Up and Restoring Docker Swarm – Learning Objectives

BACK UP THE SWARM MASTER

  • Stop the Docker Service on the Master Node:

systemctl stop docker

  • Backup the “/var/lib/docker/swarm/“ directory:

tar czvf swarm.tgz /var/lib/docker/swarm/

RESTORE THE SWARM ON THE BACKUP MASTER

  • Copy the swarm backup from the Master Node to the Backup Master:

scp swarm.tgz <username>@<backup-server-ip-address>:/home/<username>

scp swarm.tgz cloud_user@BACKUP_IP_ADDRESS:/home/cloud_user/

  • From the Backup Master, extract the backup file:

tar xzvf swarm.tgz

  • Copy the extracted swarm directory into "/var/lib/docker":

cd var/lib/docker

cp -rf swarm/ /var/lib/docker/

  • Reinitialize the swarm:

docker swarm init --force-new-cluster

ADD THE WORKER NODES TO THE RESTORED CLUSTER

  • Remove each node from the old swarm:

docker swarm leave

  • Add each node to the Backup swarm:

docker swarm join --token TOKEN IP_ADDRESS:2377

DISTRIBUTE THE REPLICAS ACROSS THE SWARM

  • Scale the replicas down to 1:

docker service scale backup=1

  • Next, scale the replicas up to 3 to distribute the replicas across the swarm:

docker service scale backup=3

[+] Lab Step-by-Step

  • SSH into all of your servers (i.e. “Swarm Server 1”, “Swarm Server 2”, “Swarm Server 3”, “Backup Swarm Server”)
  • Elevate the user permissions within each server to Root:

sudo su -

>>>MOVE BACK INTO THE “SWARM MASTER” (“Swarm Server 1”):

  • Perform a quick “ls” to view what’s available on the “Swarm Master” (i.e. “Swarm Server 1”):

ls

  • Here, you’d see there’s a file named “swarm-token.txt” that contains a Token we’ll use for adding additional Nodes to the Docker Swarm.
  • The contents of “swarm-token.txt” is the following:

“””

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-04h039ojysxbgh4wkiudj94t73l0zhsanwulkh3l00oeqjqf6n-6qsfkc8ej4lne403ckhpjka7u 10.0.1.132:2377

To add a manager to this swarm, run ‘docker swarm join-token manager’ and follow the instructions.

“””

  • What we want to do is copy the single join command line from the "swarm-token.txt" file:

docker swarm join --token SWMTKN-1-4z38ikyycswrd1aujw81agxqqdmplscmt8wnglddxna3t80nbo-6l0cuw01y31ph096gspuwnz89 10.0.1.35:2377

>>>MOVE OVER AND PERFORM THE FOLLOWING ACTIONS IN SWARM SERVER 2 AND SWARM SERVER 3 EACH:

  • Now, paste this line into the Terminal shell of "Swarm Server 2" and "Swarm Server 3" in order to add them to the Docker Swarm
    • * The output returned from this action should read:

“This node joined the swarm as a worker”

  • Upon completing this action, you should have the “Swarm Server 2” and “Swarm Server 3” added to the Docker Swarm.

>>>MOVE BACK OVER TO SWARM SERVER 1 (the “Swarm Master”):

  • Now, let’s check to see if a Docker Service is running by executing the “docker service ls” command:

docker service ls

  • Executing this command will reveal that we currently have a single service running named “backup”
  • What we want to do is scale this up using the “docker service scale” command:

docker service scale backup=3 

  • Make sure that the command executed successfully by running a “docker service ps” command, BUT filter the results by specifying the name “backup”:

docker service ps backup

  • The returned output should display 3 numerically ordered tasks whose names begin with the prefix “backup.” followed by their iteration (e.g. “backup.1”, “backup.2”, and “backup.3”)
  • The next thing to do is to backup the Docker Swarm.
  • To do this, we first have to stop Docker:

systemctl stop docker

  • Now that Docker was successfully stopped, we can begin to backup the Swarm directory (i.e. “/var/lib/docker/swarm/”).
  • To do this, we’re going to create a “Tarball”:

tar czvf swarm.tgz /var/lib/docker/swarm

  • This will create a new gzipped tarball named “swarm.tgz”
  • The next step is to copy the newly created Tarball file over to the Backup Node.
  • We can accomplish this by using the “scp” command (secure copy over SSH):

scp <path-to-desired-file-to-copy-here> <target-backup-server-username>@<target-backup-server-private-ip>:<desired-path-on-target-backup-server-to-copy-the-file-to>

scp swarm.tgz cloud_user@<backup-server-private-ip-here>:/home/cloud_user

  • Afterwards, let’s check the Backup Server to see if the file was successfully copied over…

>>>MOVE OVER TO THE BACKUP SERVER NOW!!!

  • Change directories to where we copied the “swarm.tgz” file from the Swarm Master:

cd /home/cloud_user

  • Next, extract the “swarm.tgz” file:

tar zxvf swarm.tgz

  • This will extract the “swarm.tgz” file, and a new folder named “var” will appear in the current working directory (“/home/cloud_user”); this holds the backed-up Swarm Directory.
  • Navigate down into the extracted “var/lib/docker” directory (…FROM YOUR CURRENT WORKING DIR OF “/home/cloud_user”):

cd var/lib/docker

  • Once in the Swarm Directory, run a “ls” command to view the available files and directories:

ls

  • Next, copy the entire backed-up “swarm” directory and its contents into the Backup Server’s “/var/lib/docker” directory:

cp -rf swarm/ /var/lib/docker

  • Now restart docker:

systemctl restart docker

  • Reinitialize Docker Swarm:

docker swarm init --force-new-cluster

  • The output of this command will be another Swarm Token…BUT it’ll still have the original Swarm Master’s Private IP within the Token Command:

docker swarm join --token SWMTKN-1-04h039ojysxbgh4wkiudj94t73l0zhsanwulkh3l00oeqjqf6n-6qsfkc8ej4lne403ckhpjka7u <OLD-SWARM-MASTER-PRIVATE-IP-IS-STILL-HERE>:2377

docker swarm join --token SWMTKN-1-04h039ojysxbgh4wkiudj94t73l0zhsanwulkh3l00oeqjqf6n-6qsfkc8ej4lne403ckhpjka7u 10.0.1.132:2377

  • BUT change the PRIVATE IP of the TOKEN:

docker swarm join --token SWMTKN-1-04h039ojysxbgh4wkiudj94t73l0zhsanwulkh3l00oeqjqf6n-6qsfkc8ej4lne403ckhpjka7u <BACKUP-SERVER-PRIVATE-IP-HERE>:2377

docker swarm join --token SWMTKN-1-04h039ojysxbgh4wkiudj94t73l0zhsanwulkh3l00oeqjqf6n-6qsfkc8ej4lne403ckhpjka7u 10.0.1.171:2377

  • ***** We now have to remove the Worker Nodes from the OLD Docker Swarm && Join them to the NEW Docker Swarm *****
  • To do this, we have to go into each Worker Node server and run the following commands (*…BE SURE TO COPY THE SWARM TOKEN FROM ABOVE WITH THE CORRECT PRIVATE IP OF THE BACKUP SERVER!!!!!)

>>>MOVE OVER TO “SWARM SERVER 2”

  • Run the “docker swarm leave” command to detach this Worker Node (“Swarm Server 2”) from the previous (“OLD”) swarm:

docker swarm leave

  • Paste in the NEW Swarm Token from Backup Server (***** with the correct Private IP *****)

docker swarm join --token SWMTKN-1-04h039ojysxbgh4wkiudj94t73l0zhsanwulkh3l00oeqjqf6n-6qsfkc8ej4lne403ckhpjka7u <BACKUP-SERVER-PRIVATE-IP-HERE>:2377

docker swarm join --token SWMTKN-1-04h039ojysxbgh4wkiudj94t73l0zhsanwulkh3l00oeqjqf6n-6qsfkc8ej4lne403ckhpjka7u 10.0.1.171:2377

>>>MOVE OVER TO “SWARM SERVER 3”

  • Run the “docker swarm leave” command to detach this Worker Node (“Swarm Server 3”) from the previous (“OLD”) swarm:

docker swarm leave

  • Paste in the NEW Swarm Token from Backup Server (***** with the correct Private IP *****)

docker swarm join --token SWMTKN-1-04h039ojysxbgh4wkiudj94t73l0zhsanwulkh3l00oeqjqf6n-6qsfkc8ej4lne403ckhpjka7u <BACKUP-SERVER-PRIVATE-IP-HERE>:2377

docker swarm join --token SWMTKN-1-04h039ojysxbgh4wkiudj94t73l0zhsanwulkh3l00oeqjqf6n-6qsfkc8ej4lne403ckhpjka7u 10.0.1.171:2377

>>>MOVE OVER TO “BACKUP SERVER”

  • Check to see if the “Backup” service is running:

docker service ls

  • Now, list out all the available “backup” services on the Backup Server by running the “docker service ps”, BUT filter for the specific Service name “backup”:

docker service ps backup

  • ***** After executing the above command, you’ll notice we have more than 3 replicas listed (*More than we want!) *****
  • This is because we essentially created the replicas on the original Swarm Master and re-executed the same commands when performing the backup.
  • This can be resolved by running the “docker service scale” command again, BUT to reduce the replicas FIRST, then increase the replicas back to the desired 3.
  • This will reduce replicas with incorrect info to 1, THEN increase the replicas to 3 with the correct info.
    • * So, to adjust the amount of replicas we’ll run the below command in the following format: 

docker service scale <desired-target-service-name-here>=<desired-number-of-replicas-here>

  • Scale down service to 1 replica:

docker service scale backup=1

  •  Now, scale the replicas back up to 3:

docker service scale backup=3

  • DONE!!!!!!

**********************************************************************************************************************************************

SCALING A DOCKER SWARM SERVICE

**********************************************************************************************************************************************

[+] What Are We Doing?

  • We are going to set up a Docker Swarm with 2 Masters and 3 Worker Nodes
  • Ensure none of our services are running on our Manager Nodes, but JUST on the Worker Nodes
  • After setting up the Swarm, we are going to create a Service
  • We’ll then scale the Service UP to 5 replicas
  • Afterwards, we’ll scale the Service back DOWN to 2 replicas

[+] Scaling a Docker Swarm Service – Learning Objectives

CREATE A SWARM

  • Create a swarm with 2 Masters and 3 Worker Nodes
  • Initialize the swarm:

docker swarm init

  • Use the “join” command to add the 3 Worker Nodes

docker swarm join --token TOKEN IP_ADDRESS:2377

  • Generate the Master Token:

docker swarm join-token manager

DRAIN THE MASTERS

  • Set the availability to “drain” for “Master1”:

docker node update --availability drain Master1

  • Set the availability to “drain” for “Master2”:

docker node update --availability drain Master2

CREATE A SERVICE

  • Create a Service with 3 replicas :

docker service create --name httpd -p 80:80 --replicas 3 httpd

SCALE THE SERVICE UP TO 5 REPLICAS

  • Scale the “httpd” service up to 5 replicas:

docker service scale httpd=5

SCALE THE SERVICE DOWN TO 2 REPLICAS

  • Scale the “httpd” service down to 2 replicas:

docker service scale httpd=2

[+] Lab Step-by-Step

  • SSH into “Swarm Master 1”
  • Initialize the Docker Swarm:

docker swarm init

  • This command will return a Docker Swarm Token as its output:
    • * We will only run the below “docker swarm join” command on the “Worker Nodes”

Swarm initialized: current node (kji70o8hmcma28zr5n740jgys) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5cuyat6oin7yo209jaaey6r27rej9t2sx9qqwdaa2b1hckr46e-cyrh1bf04o1ly4kkbyjze5btr 10.0.1.157:2377

To add a manager to this swarm, run ‘docker swarm join-token manager’ and follow the instructions.

>>>MOVE OVER TO && RUN THE BELOW COMMAND ON “SWARM WORKER 1, 2, AND 3” EACH:

  • Now, copy the Docker Swarm Join Token and paste the command into each of the “Worker Nodes” (* “Swarm Worker 1”, “Swarm Worker 2”, and “Swarm Worker 3”):

docker swarm join --token SWMTKN-1-5cuyat6oin7yo209jaaey6r27rej9t2sx9qqwdaa2b1hckr46e-cyrh1bf04o1ly4kkbyjze5btr 10.0.1.157:2377

>>>MOVE BACK OVER TO “SWARM MASTER 1”:

  • Now back within “Swarm Master 1”, run the following command:

docker swarm join-token <desired-docker-swarm-join-token-type-here>

docker swarm join-token manager

  • This will return a new Docker Swarm “Manager Token” as its output:
    • * If you take a close look, you’ll see that this Token is different from the one we generated for the “Worker Nodes”
    • * This “Manager Token” will be used for “Swarm Manager 2” in order to add it to our Docker Swarm

To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5cuyat6oin7yo209jaaey6r27rej9t2sx9qqwdaa2b1hckr46e-9f3h58346pey2v3i1w04an96b 10.0.1.157:2377

  • Copy the above Manager Token into your clipboard so we can paste and run it in “Swarm Master 2″

>>>MOVE OVER TO “SWARM MASTER 2”:

  • Now that we’ve just moved over to “Swarm Master 2”, let’s paste the Manager Token from the step prior into the Terminal and execute the command:
    • * After running this command, we’ll want to move back over to “Swarm Master 1”

docker swarm join --token SWMTKN-1-5cuyat6oin7yo209jaaey6r27rej9t2sx9qqwdaa2b1hckr46e-9f3h58346pey2v3i1w04an96b 10.0.1.157:2377

>>>MOVE OVER TO SWARM MASTER 1:

  • Back in “Swarm Master 1”, let’s list out all the available nodes associated with this “Swarm Master 1”’s Docker Swarm:

docker node ls

  • The returned output of the above command should look like the following:

ID                                                   HOSTNAME                         STATUS            AVAILABILITY        MANAGER STATUS      ENGINE VERSION

kwkzqddk51hnkp7hiyl5bfs76        ip-10-0-1-14.ec2.internal     Ready               Active                                                          18.06.0-ce

s6q828cu94dc4cr0d8jhfeviq         ip-10-0-1-53.ec2.internal     Ready               Active                                                          18.06.0-ce

swfhlivf9xj8r5tg9myhdlx1c            ip-10-0-1-66.ec2.internal     Ready               Active                                                          18.06.0-ce

n7cbmdb4j0kzguel59knkhbud      ip-10-0-1-88.ec2.internal     Ready               Active                    Reachable                     18.06.0-ce

kji70o8hmcma28zr5n740jgys *     ip-10-0-1-157.ec2.internal   Ready               Active                    Leader                           18.06.0-ce

  • Reviewing the output of the above command, we’ll notice a few things:
    • * There are 5 nodes in total
    • * There is a node with the “Manager Status” of “Leader” (*which is our primary Swarm Master…i.e. “Swarm Master 1”)
    • * There is a node with the “Manager Status” of “Reachable” (*which is our secondary Swarm Master…i.e. “Swarm Master 2″)
  • What we want to do is to ensure that ONLY our Worker Nodes are used when we create our Service
    • * We can do this by setting “Availability” to “drain” on BOTH of the Swarm Masters
  • As explained in the bullet point prior, change the “Availability” of the Swarm Masters to “drain” by running the following command for each one (“Swarm Master 1” && “Swarm Master 2”):
    • * BOTH commands can be run within “Swarm Master 1’s” Terminal

docker node update --availability drain <desired-target-swarm-master-node-ID-here>

docker node update --availability drain kji70o8hmcma28zr5n740jgys

docker node update --availability drain n7cbmdb4j0kzguel59knkhbud
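  • To confirm the drain took effect, list the nodes again; both managers should now show “Drain” in the “AVAILABILITY” column (optional check):

docker node ls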

  • By executing the above commands, this configures the swarm so that ONLY the Worker Nodes will run the Service
  • Next, let’s stand up our desired Service:

docker service create --name <desired-service-name-here> -p [--publish] <desired-target-host-port>:<desired-source-container-port> --replicas <desired-number-of-replicas-here> <desired-base-docker-image-here>

docker service create --name httpd -p 80:80 --replicas 3 httpd

  • Confirm the Docker Service was successfully launched with the configurations we specified:

docker service ps httpd

  • Next, let’s scale UP the newly spun-up Service:

docker service scale httpd=5

  • Confirm the desired Service scaled UP as intended:

docker service ps httpd

  • Now, let’s scale back DOWN the Service:

docker service scale httpd=2
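  • As a final check, list the service’s tasks again and confirm only 2 replicas remain, all scheduled on Worker Nodes (optional):

docker service ps httpd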

——————————————————————————————————————————————————

CONTAINER ORCHESTRATION WITH KUBERNETES

**********************************************************************************************************************************************

Section Introduction Lecture

**********************************************************************************************************************************************

 [+] Lecture: Orchestration With Kubernetes

**********************************************************************************************************************************************

Setting Up a Kubernetes Cluster With Docker

**********************************************************************************************************************************************

[+] What Are We Doing?

  • Create a Kubernetes Cluster that consists of three nodes
  • Afterwards, create a Pod as well as a Service
  • Test the setup

[+] Setting Up a Kubernetes Cluster With Docker – Learning Objectives

CONFIGURE THE KUBERNETES CLUSTER

  • *** Install Kubernetes on ALL 3 Nodes ***
  • Add the Kubernetes Repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

  • After pressing Enter to execute the above command, you’ll be prompted in the Terminal shell for input via the “>” symbol
  • Add the following content at the Terminal’s shell prompt in order to add it to the newly created “kubernetes.repo” file:

[kubernetes]

name=Kubernetes

baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

exclude=kube*

  • ****** In order to commit the changes to the new repo file and close the Terminal’s shell prompt, enter the text “EOF” on the last line of the prompt and press “Enter”:

EOF
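  • If you prefer a single copy-paste block instead of typing at the “>” prompt, the same repo file can be written in one go (this is just the heredoc above, collapsed into one block):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF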

  • Disable SELinux:

setenforce 0

  • Install Kubernetes:

yum install -y kubelet-1.12.7 kubeadm-1.12.7 kubectl-1.12.7 kubernetes-cni-0.6.0 --disableexcludes=kubernetes

systemctl enable kubelet && systemctl start kubelet

  • Set “net.bridge.bridge-nf-call-iptables” to “1” in your “sysctl”:

vim /etc/sysctl.d/k8s.conf

  • Add the following content to the newly created “k8s.conf”:

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

  • Reload:

sysctl --system
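  • If you would rather not open an editor, the same file can be written with a heredoc, exactly as the lab walkthrough below does (equivalent to the “vim” step above):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF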

  • On ******ONLY***** the second and third nodes:
    • * Execute the “join” command that is generated when the Master Node is initialized via “kubeadm”

INITIALIZE THE MASTER NODE

  • Initialize the Master Node:

kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.11.3

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • Copy the Join Token that was output from the “kubeadm init” command, or execute this command to create a new one:

kubeadm token create --print-join-command

  • You will need this token to add the second and third servers to the cluster

INSTALL FLANNEL ON THE MASTER NODE

  • Install “Flannel”:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
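  • Once Flannel is applied and the other two Nodes have joined, you can watch the cluster come up; all three nodes should eventually report “Ready” (optional check):

kubectl get nodes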

CREATE A POD

  • Create “pod.yml”:

vim pod.yml

  • Add the following content to the newly created “pod.yml” file:

apiVersion: v1

kind: Pod

metadata:

  name: nginx-pod-demo

  labels:

    app: nginx-demo

spec:

  containers:

  - image: nginx:latest

    name: nginx-demo

    ports:

    - containerPort: 80

    imagePullPolicy: Always

  • Create the pod:

kubectl create -f pod.yml
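  • To confirm the Pod reached the “Running” state, list the pods (the same check used in the lab walkthrough further below):

kubectl get pods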

CREATE A SERVICE

  • Create “service.yml”:

vim service.yml

  • Add the following content to “service.yml”:

kind: Service

apiVersion: v1

metadata:

  name: service-demo

spec:

  selector:

    app: nginx-demo

  ports:

  - protocol: TCP

    port: 80

    targetPort: 80

  type: NodePort

  • Create the Service:

kubectl create -f service.yml
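  • Because the Service is of type “NodePort”, Kubernetes assigns it a high port on every node. You can look that port up and then test it with “curl” (illustrative; substitute a real node IP for the placeholder):

kubectl get services

curl http://<any-node-ip-here>:<assigned-node-port-here>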

[+] Lab Step-by-Step

  • SSH into all three of the machines
  • Elevate user permissions to Root.

>>>MOVE OVER TO KUBERNETES NODE 1:

  • Add the Kubernetes repository to “Kubernetes Node 1”:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo 

  • This will trigger the Terminal to populate a shell prompt via the “>” symbol
  • Enter the following as input for the prompt:

[kubernetes]

name=Kubernetes

baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

exclude=kube*

  • ***** In order to commit the changes to the new repo file and close the Terminal’s shell prompt, enter the text “EOF” on the last line of the prompt and press “Enter”:

EOF

  • Now that the Kubernetes repository is installed, we need to DISABLE “SELinux” by running the following command:

setenforce 0

  • After disabling SELinux, we can begin installing Kubernetes.
  • ***** In this demo, we will be pinning this to a specific version of Kubernetes (i.e. “v1.12.7” for “Kubelet”, “Kubeadm”, and “Kubectl”) *****
    • * If we DON’T do this and Kubernetes releases an update, there’s a good possibility that it’ll break THIS lab.

yum install -y kubelet-1.12.7 kubeadm-1.12.7 kubectl-1.12.7 --disableexcludes=kubernetes

  • Another behavior we want to define is:
    • * Whenever we REBOOT our system, we want to ensure that “Kubelet” RESTARTS as well:

systemctl enable kubelet && systemctl start kubelet

  • Next, set up the Bridge Network:
    • * This process of writing to a file via the “EOF” command is the same as before (*”a few steps prior”), by entering “EOF” as the final line in the file and pressing “Enter” to commit the changes:

cat <<EOF > /etc/sysctl.d/k8s.conf

  • Once the above command is executed, you’ll be presented with a prompt from the Terminal via the “>” symbol.
  • Enter the following content into this prompt:

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

  • Commit the changes and close the prompt by entering && executing the following as the last line in the file/prompt:

EOF

  • Ensure the changes we’ve just made take effect by running the following command:

sysctl --system

  • Now that that’s taken care of, we can initialize a Kubernetes Cluster:

kubeadm init --pod-network-cidr=<desired-ip-address-range>/<desired-cidr-here> --kubernetes-version=v<desired-version-number-here>

kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.12.7

  • After successfully running the above command in the step prior, you should be returned the following output, which is instructions on how to begin using your Cluster  (*including the necessary commands to do so):

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node

as root:

  kubeadm join 10.0.1.19:6443 --token ljse2a.lac0h00ocsqxitdk --discovery-token-ca-cert-hash sha256:aef542c1cc9ee6baef07911369681ec3bf1a91393ecef57656075388941b3d46

  • From the above returned output, copy, paste, and execute the following commands:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • One final thing we need to do is to install && setup “Flannel” to ensure we have networking setup properly

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

  • The next step is to set up Nodes 2 and 3
  • To do this, we’re essentially going to repeat the steps we followed when setting up the Primary server we’re currently on

>>>MOVE OVER TO KUBERNETES NODE 2:

  • Install the Kubernetes repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

  • After running the above command, you’ll be presented with a prompt via a “>” symbol.
  • Paste and execute the following at the prompt:

[kubernetes]

name=Kubernetes

baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

exclude=kube*

  • Commit the changes and close the file by entering and executing the following on the last line of the prompt:

EOF

  • Disable SELinux:

setenforce 0

  • Install Kubernetes:

yum install -y kubelet-1.12.7 kubeadm-1.12.7 kubectl-1.12.7 --disableexcludes=kubernetes

  • Next, Enable && Start the “kubelet” Service:

systemctl enable kubelet && systemctl start kubelet

  • Configure a Network Bridge:
    • * This process of writing to a file via the “EOF” command is the same as before (*”a few steps prior”), by entering “EOF” as the final line in the file and pressing “Enter” to commit the changes:

cat <<EOF > /etc/sysctl.d/k8s.conf

  • Once the above command is executed, you’ll be presented with a prompt from the Terminal via the “>” symbol.
  • Enter the following content into this prompt:

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

  • Commit the changes and close the prompt by entering && executing the following as the last line in the file/prompt:

EOF

  • Ensure the changes we’ve just made take effect by running the following command:

sysctl --system

>>>MOVE OVER TO “KUBERNETES NODE 3”:

  • Repeat the exact same steps we just went through for “Kubernetes Node 1” and “Kubernetes Node 2”…….

!!!!!!!!!!!!!!!!!!!!

IF YOU LOSE YOUR ADMIN TOKEN

YOU CAN GENERATE A NEW ONE BY:

!!!!!!!!!!!!!!!!!!!!

  • Run the following command to generate a new Join Token:

kubeadm token create --print-join-command

>>>MOVE BACK OVER TO “KUBERNETES NODE 1”:

  • Check to see what nodes are available in the Kubernetes Cluster:

kubectl get nodes

  • Now that we have the Kubernetes Cluster established, we can go and create our Pod as well as our Service.
  • When we declare the Pod and the Service, we want to do it in a way that’s “DECLARATIVE” rather than “IMPERATIVE”
    • * IMPERATIVE – means that we do it via the command line, using all sorts of flags
    • * DECLARATIVE – means that we’ll spell everything out in a YML file, and it’ll tell Kubernetes what the End-State of the Pod is going to look like.
  • Create a new file called “pod.yml”:

vim pod.yml

  • Add the following content to the newly created “pod.yml” file:

# desired api version

apiVersion: v1

# this tells kubernetes what we’re creating

kind: Pod

metadata:

  name: nginx-pod-demo

  labels:

    # labels are “key: value” pairs && this is how we’ll expose this pod to make it Public

    app: nginx-demo

spec:

  containers:

  - image: nginx:latest

    name: nginx-demo

    ports:

    - containerPort: 80

    imagePullPolicy: Always

  • Now, create the Pod:

kubectl create -f <desired-pod-file-path-here>

kubectl create -f pod.yml

  • Confirm the Pod was successfully created and is running:

kubectl get pods

  • Now, create the service

vim service.yml

  • Add the following content to the newly created “service.yml” file:

kind: Service

apiVersion: v1

metadata:

  name: service-demo

spec:

  selector:

    app: nginx-demo

  ports:

  - protocol: TCP

    port: 80

    targetPort: 80

  type: NodePort
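  • With “service.yml” saved, create the Service exactly as in the learning objectives above, and optionally confirm it exists:

kubectl create -f service.yml

kubectl get services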

**********************************************************************************************************************************************

Scaling Pods in Kubernetes

**********************************************************************************************************************************************

[+] Scaling Pods in Kubernetes – Lecture

COMPLETE THE KUBERNETES CLUSTER

  • Initialize the cluster:

kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.11.3

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • Install “Flannel” on the Master:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

CREATE THE DEPLOYMENT

  •  Create the deployment:

vim deployment.yml

  • Add the following to “deployment.yml”:

apiVersion: apps/v1

kind: Deployment

metadata:

  name: httpd-deployment

  labels:

    app: httpd

spec:

  replicas: 3

  selector:

    matchLabels:

      app: httpd

  template:

    metadata:

      labels:

        app: httpd

    spec:

      containers:

      - name: httpd

        image: httpd:latest

        ports:

        - containerPort: 80

  • Spin up the deployment:

kubectl create -f deployment.yml

CREATE THE SERVICE

  • Create the service:

vim service.yml

  • Add the following to “service.yml”:

kind: Service

apiVersion: v1

metadata:

  name: service-deployment

spec:

  selector:

    app: httpd

  ports:

  - protocol: TCP

    port: 80

    targetPort: 80

  type: NodePort

  • Create the service:

kubectl create -f service.yml

SCALE THE DEPLOYMENT UP TO 5 REPLICAS

  • Scale the deployment up to 5 replicas:

vim deployment.yml

  • Change the number of replicas to 5:

spec:

  replicas: 5

  • Apply the changes:

kubectl apply -f deployment.yml
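  • For reference, the same scale-up can also be done imperatively with a one-liner instead of editing the YAML (the declarative edit above is what this lab uses; the one-liner is only shown for comparison):

kubectl scale deployment httpd-deployment --replicas=5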

SCALE THE DEPLOYMENT DOWN TO 2 REPLICAS

  • Reduce the number of replicas down to 2:

vim deployment.yml

  • Change the number of replicas to 2:

spec:

  replicas: 2

  • Apply the changes:

kubectl apply -f deployment.yml

[+] Lab Step-by-Step

  • SSH into the machine
  • For this demo, everything has already been pre-installed.
  • All we have to do is initialize it:

kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v<desired-kubernetes-version-here>

  • After successfully running the above command in the step prior, you should be returned the following output, which is instructions on how to begin using your Cluster  (*including the necessary commands to do so):
    • * There are two sets of commands provided.
      • *** To start a Kubernetes Cluster
      • *** To add Nodes to the Kubernetes Cluster

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node

as root:

  kubeadm join 10.0.1.198:6443 --token mo6qjo.qm1uibfl8nztjs70 --discovery-token-ca-cert-hash sha256:06132345470c5b36a5bf164d6f1360b5648d573ca22f3def90d72fda0288982e

  • Run the following commands locally (in the current server):

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • Now copy the second set of command from the output returned a few steps prior.
  • Add the 2 remaining Nodes to the newly created cluster:

>>>MOVE OVER TO KUBERNETES NODE 2

  • Paste and run the following command in “Kubernetes Node 2”s Terminal:

kubeadm join 10.0.1.198:6443 --token mo6qjo.qm1uibfl8nztjs70 --discovery-token-ca-cert-hash sha256:06132345470c5b36a5bf164d6f1360b5648d573ca22f3def90d72fda0288982e

>>>MOVE OVER TO KUBERNETES NODE 3

  • Paste and run the following command in “Kubernetes Node 3”s Terminal:

kubeadm join 10.0.1.198:6443 --token mo6qjo.qm1uibfl8nztjs70 --discovery-token-ca-cert-hash sha256:06132345470c5b36a5bf164d6f1360b5648d573ca22f3def90d72fda0288982e

>>>MOVE OVER TO THE KUBERNETES MASTER (“KUBERNETES NODE 1”):

  • Install “Flannel”

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

  • Now that Kubernetes is up-and-running, let’s create our deployment:

vim deployment.yml

  • Add the following content to the newly created “deployment.yml” file:

apiVersion: apps/v1

# the “D” in “Deployment” MUST be capitalized!!!

kind: Deployment

metadata:

  name: httpd-deployment

  labels:

    app: httpd

spec:

  replicas: 3

  selector:

    # the “L” in “matchLabels” MUST be capitalized!!!

    matchLabels:

      app: httpd

  template:

    metadata:

      labels:

        app: httpd

    spec:

      containers:

      - name: httpd

        image: httpd:latest

        ports:

        - containerPort: 80

  • Spin up the deployment:

kubectl create -f deployment.yml

  • Next, create a Service:

vim service.yml
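  • Populate “service.yml” with the same NodePort Service content shown in the learning objectives above (the selector must match “app: httpd”), then create it:

kubectl create -f service.yml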

  • Check to see if the Deployment and Service were successfully created:

kubectl get pods

kubectl get services

  • ****** We can further test this by opening a web browser, navigating to the Kubernetes Master Node’s Public IP, AND specifying the TCP port our service is running on ******
    • * We can determine what port the service is running by executing the “kubectl get services” command (It’ll be listed under the “PORTS” column in the following format:     <target-host-port>:<source-container-port>)
  • The next thing we want to do is Scale-up the number of replicas to 5:
  • To do this, we’re going to modify the “deployment.yml” file:

vim deployment.yml

  • Edit the contents of “deployment.yml” to the following:

apiVersion: apps/v1

kind: Deployment

metadata:

  name: httpd-deployment

  labels:

    app: httpd

spec:

  replicas: 5

  selector:

    matchLabels:

      app: httpd

  template:

    metadata:

      labels:

        app: httpd

    spec:

      containers:

      - name: httpd

        image: httpd:latest

        ports:

        - containerPort: 80

  • ***** Now apply the changes to “deployment.yml” by running the “kubectl apply” command *****:

kubectl apply -f deployment.yml
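  • Confirm the new replica count took effect; you should now see 5 “httpd” pods (optional check):

kubectl get pods

kubectl get deployment httpd-deployment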

**********************************************************************************************************************************************

Creating a HELM Chart

**********************************************************************************************************************************************

[+] Creating a HELM Chart – Learning Objectives

INSTALL HELM

  • Use “cURL” to create a local copy of the Helm install script

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > /tmp/get_helm.sh

  • Use “chmod” to modify access permissions for the install script:

chmod 700 /tmp/get_helm.sh

  • Set the version to v2.8.2:

DESIRED_VERSION=v2.8.2 /tmp/get_helm.sh

  • Initialize Helm:

helm init --wait
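  • A quick way to confirm the install worked is to print the client and server versions; both should report v2.8.2 once Tiller is up (optional check):

helm version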

CREATE THE HELM CHART

  • Create the “charts” directory and change directory into “charts/“:

mkdir charts

cd charts

  • Create the chart for “httpd”:

helm create httpd

  • Edit “httpd/values.yaml”:

replicaCount: 1

image:

  repository: httpd

  tag: latest

  pullPolicy: IfNotPresent

service:

  type: NodePort

  port: 80

ingress:

  enabled: false

  annotations: {}

  path: /

  hosts:

    - chart-example.local

  tls: []

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}

CREATE YOUR APPLICATION USING HELM

  • Create your application:

helm install --name my-httpd ./httpd/
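  • To verify the release was created, list the Helm releases and the pods the chart spun up (optional check):

helm ls

kubectl get pods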

[+] Lab Step-by-Step

  • SSH into the machine.
  • For the purpose of this demo, everything has been pre-configured.
  • So there’s no need to add the “Worker Nodes” to the Kubernetes Cluster.
  • Use “cURL” to create a local copy of the Helm install script:

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > /tmp/get_helm.sh

  • Ensure a copy of the script was successfully created locally:
    • * Simply print out the content of the file that was created when performing the “cURL” command (i.e. “/tmp/get_helm.sh”)

cat /tmp/get_helm.sh

  • Use “chmod” to modify the access permissions for the helm install script:

chmod 700 /tmp/get_helm.sh

  • When we install Helm, we’ll want to specify a version.
    • * For the purpose of this demo, we’re going to specify version 2.8.2
    • * In order to specify the version of Helm, we are going to create an Environment Variable (*this will create the env variable && execute the install script):

<DESIRED-ENVIRONMENT-VARIABLE-NAME-HERE>=v<desired-version-number-here> <path-to-desired-target-script>

DESIRED_VERSION=v2.8.2 /tmp/get_helm.sh

  • Next, initialize Helm:

helm init --wait

  • Give Helm the permissions it needs to work with Kubernetes:

kubectl --namespace=kube-system create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default

  • Now, just to make sure things work correctly, run a “helm ls” command:
    • * If everything is OK, it will return NOTHING.
    • * Only an error would produce output from this command.

helm ls

  • Next, we can begin creating the charts.
  • Create a new directory called “charts” && change directories into the newly created folder:

mkdir charts

cd charts

  • Now, create the chart:

helm create <desired-chart-name-here>

helm create httpd

  • Now, if we run the “helm ls” command again, we should see the new chart called “httpd”:

helm ls

  • Inside of the new “httpd” chart directory, we should see several files.
  • The only file that is relevant and would potentially require changes is the “values.yaml” file (* “httpd/values.yaml”)
  • Modify “httpd/values.yaml”

replicaCount: 1

image:

  repository: httpd

  tag: latest

  pullPolicy: IfNotPresent

service:

  type: NodePort

  port: 80

ingress:

  enabled: false

  annotations: {}

  path: /

  hosts:

    - chart-example.local

  tls: []

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}

  • Back out of the directory:

cd ../

  • Now, install the application:

helm install --name <desired-application-name-here> <path-of-desired-helm-chart-here>

helm install --name my-httpd ./httpd/

  • After the application is successfully created, it should return the following output:
    • * Under the “NOTES” section of the returned output, there is a set of commands for us to execute in order for us to obtain the application’s URL:

NAME:   httpd

LAST DEPLOYED: Fri Jul  5 18:15:12 2019

NAMESPACE: default

STATUS: DEPLOYED

RESOURCES:

==> v1/Service

NAME   TYPE      CLUSTER-IP      EXTERNAL-IP  PORT(S)       AGE

httpd  NodePort  10.107.175.203  <none>       80:32490/TCP  0s

==> v1beta2/Deployment

NAME   DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE

httpd  1        1        1           0          0s

==> v1/Pod(related)

NAME                    READY  STATUS             RESTARTS  AGE

httpd-5c84cfd78f-8n94q  0/1    ContainerCreating  0         0s

NOTES:

1. Get the application URL by running these commands:

  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services httpd)

  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")

  echo http://$NODE_IP:$NODE_PORT

  • Copy and Paste the above mentioned set of commands:

export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services httpd)

export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")

echo http://$NODE_IP:$NODE_PORT
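  • To confirm the application actually responds at that URL, you can curl it straight from the terminal (optional; this reuses the two environment variables exported above):

curl http://$NODE_IP:$NODE_PORT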
