My road to Docker Certified Associate

This is the continuation of my earlier post, My road to AWS Certified Solution Architect and AWS Certified Security - Specialty Certification:

https://medium.com/@devopslearning/my-road-to-aws-certified-solution-architect-394676f15680

YAY, I cleared the exam!

WARNING: Some housekeeping tasks before reading this blog

1: As everyone needs to sign an NDA with Docker, I can't tell you the exact questions asked during the exam (nor do I have gigabytes of memory), but I can give you pointers on what to expect in the exam.

2: As we all know, the Docker world updates quite frequently, so some of this content might not be relevant after a few days/weeks/months.

3: Please don't ask for any exam dumps or questions; that defeats the whole purpose of the exam.

Exam Preparation

  • My own effort :-); thank you to everyone who joined me on this journey
  • I highly recommend the Linux Academy courses to everyone

https://linuxacademy.com/cp/modules/view/id/347?redirect_uri=https://app.linuxacademy.com/search?query=docker

https://linuxacademy.com/cp/modules/view/id/270?redirect_uri=https://app.linuxacademy.com/search?query=docker

https://linuxacademy.com/cp/modules/view/id/314?redirect_uri=https://app.linuxacademy.com/search?query=docker

  • If you are just looking to clear the exam, the Zeal Vora course is pretty good, but it lacks in-depth coverage of Docker concepts.

https://www.udemy.com/course/docker-certified-associate/

  • Dockercon Videos: I highly recommend going through these videos, as they will give you in-depth knowledge about each service.
  • Docker Documentation: the Docker documentation is pretty good; please go through it before sitting the exam.

Once you are done with the above preparation, it's a good time to gauge your knowledge: check the Docker-provided sample questions.

https://docker.cdn.prismic.io/docker%2Fa2d454ff-b2eb-4e9f-af0e-533759119eee_dca+study+guide+v1.0.1.pdf

Now coming back to the exam, the entire exam is divided into six main topics.

Based on my experience, you absolutely need to know

  • Docker Enterprise & Registry
  • Docker Swarm

Together they cover 40% of the exam and will make the difference between passing and failing.

Exam Pattern

  • 55 multiple choice questions in 90 minutes
  • Designed to validate professionals with a minimum of 6 to 12 months of Docker experience
  • Remotely proctored on your Windows or Mac computer
  • Available globally in English
  • USD $195 or Euro €175 purchased online
  • Results delivered immediately

https://success.docker.com/certification

Domain 1: Orchestration 25%

  • You must know how to create a swarm service
docker service create --name myservice --replicas 3 nginx
  • Difference between global vs replicated mode
In global mode, one container is created on every node in the cluster
  • Scaling a swarm service, and the difference between scale vs update (in my exam I saw 2-3 questions on this topic)
docker service scale <service name>=5

-> With scale, we can specify multiple services

docker service update --replicas 5 <service name>

-> With update, we can only specify one service

  • Draining Swarm node
docker node update --availability drain <node id>
docker node update --availability active <node id>
  • Docker Stack 
docker stack deploy --compose-file docker-compose.yml <stack name>
  • Placement Constraints
docker service create --name myservice --constraint node.labels.region==us-west-2 nginx
docker service create --name myservice --constraint node.labels.region!=us-west-2 nginx
  • Adding label to the node
docker node update --label-add region=us-west-2 <swarm worker node id>
  • Quorum (remember the formula (N-1)/2, where N is the number of manager nodes; that is how many managers you can lose while still keeping a quorum)
  • For example, in a swarm with 5 manager nodes, if you lose 3 nodes, you don't have a quorum (please go through this at least once)
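To make the quorum math concrete, here is a minimal shell sketch (my own illustration, not exam material) that computes how many manager failures a swarm of N managers tolerates:

```shell
# A swarm keeps quorum while a majority of managers is reachable,
# so N managers tolerate the loss of (N - 1) / 2 of them
# (integer division).
for n in 1 3 5 7; do
  echo "$n managers -> tolerates $(( (n - 1) / 2 )) failure(s)"
done
```

With 5 managers the result is 2, which is why losing 3 of them breaks the quorum.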
  • Distribute manager nodes

In addition to maintaining an odd number of manager nodes, pay attention to datacenter topology when placing managers. For optimal fault tolerance, distribute manager nodes across a minimum of three availability zones to support failures of an entire set of machines or common maintenance scenarios. If you suffer a failure in any of those zones, the swarm should maintain a quorum of manager nodes available to process requests and rebalance workloads (please go through this at least once).

Domain 2: Image Creation, Management, and Registry 20%

  • Dockerfile directives
  • Difference between ADD (supports remote URLs and automatic tar extraction) and COPY (only copies files/directories from source to destination)
  • Difference between CMD (overridden simply by passing arguments to docker run) and ENTRYPOINT (only overridden with the --entrypoint flag)
  • Understand how the HEALTHCHECK directive works.
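As an illustrative sketch (the probe command, interval, and image are my own choices, not from the exam), a HEALTHCHECK for a web-server image might look like this; writing the Dockerfile from the shell keeps the example self-contained:

```shell
# Illustrative Dockerfile: probe the local web server every 30s;
# after 3 consecutive failures the container's status turns unhealthy.
cat > Dockerfile.health <<'EOF'
FROM nginx
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
EOF
grep -q 'HEALTHCHECK' Dockerfile.health && echo "directive present"
```

On a running container, docker inspect then reports the health status under State.Health.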
  • How to login to a private registry
docker login <private registry url>
  • How to add Insecure Registry to Docker

To add an insecure docker registry, create the file /etc/docker/daemon.json with the following content:

{
    "insecure-registries" : [ "hostname.example.com:5000" ]
}
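Since a malformed daemon.json prevents the daemon from starting, it is worth validating the file before restarting Docker. A sketch (writing to the current directory rather than /etc/docker; the registry hostname is the placeholder from above):

```shell
# Write the insecure-registries config and confirm it parses as JSON
# before moving it to /etc/docker/daemon.json and restarting Docker.
cat > daemon.json <<'EOF'
{
    "insecure-registries" : [ "hostname.example.com:5000" ]
}
EOF
python3 -m json.tool daemon.json > /dev/null && echo "daemon.json is valid JSON"
```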
  • How to search an image
docker search <image name>
  • How to search for an official docker image
$ docker search nginx -f "is-official=true"
NAME                DESCRIPTION                STARS               OFFICIAL            AUTOMATED
nginx               Official build of Nginx.   12094               [OK]   
  • Commit/Save/Exporting an image
docker commit <container id> <image name>
docker save <image name> > image_name.tar
docker load < image_name.tar
docker export <container id> > container.tar
docker import container.tar <image name>

NOTE: save/load preserve an image with its layers and history, while export/import work on a container's filesystem; the resulting imported image is flattened.
  • Understand the difference between filter vs format with respect to docker images
  -f, --filter filter   Filter output based on conditions provided

docker images --filter "dangling=true"

      --format string   Pretty-print images using a Go template

docker images --format "{{.ID}}: {{.Repository}}"
  • Advantage of using multi-stage build
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image
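A minimal two-stage sketch (image tags and paths are illustrative): the Go toolchain lives only in the builder stage, and the final image receives just the compiled binary.

```shell
# Stage 1 compiles the program; stage 2 copies only the binary out of
# it, so none of the build toolchain ends up in the final image.
cat > Dockerfile.multistage <<'EOF'
FROM golang:1.13 AS builder
WORKDIR /src
COPY . .
RUN go build -o /bin/app

FROM alpine:3.10
COPY --from=builder /bin/app /usr/local/bin/app
ENTRYPOINT ["app"]
EOF
grep -c '^FROM' Dockerfile.multistage    # one line per stage
```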
  • Prevent tags from being overwritten

Make tags immutable

You can enable tag immutability on a repository when you create it, or at any time after.

Domain 3: Installation and Configuration 15%

DTR backup process

  • When we back up DTR, images are not backed up
  • Users/organizations are not backed up; they should be backed up using UCP

https://docs.docker.com/ee/admin/backup/back-up-dtr/

Swarm Routing Mesh

  • All nodes within the cluster participate in the ingress routing mesh
docker service create --name myservice --publish published=8080,target=80 nginx

Namespaces

  • Make sure you remember which namespaces are enabled by default and which are not
  • User namespace is not enabled by default

Domain 4: Networking 15%

  • Should be aware of the network drivers
    Bridge (the default)
    Host (removes network isolation)
    None (disables networking)
    Understand the Overlay network (the default in swarm mode) and how it works
  • Why do we need to use the overlay networks?
  • How to encrypt the overlay network?
docker network create --opt encrypted --driver overlay my-overlay-network
  • Difference between -p and -P, and how to use them with standalone containers as well as with Swarm.
  -p, --publish list                   Publish a container's port(s) to the host
  -P, --publish-all                    Publish all exposed ports to random ports
  • You can use the -p flag with a /udp suffix on the port number
docker run -p 53160:53160/udp -t -i busybox

Domain 5: Security 15%

  • This is critically important: make sure you understand the usage of Docker Content Trust and how to enable it
Docker Content Trust (DCT) provides the ability to use digital signatures for data sent to and received from remote Docker registries. These signatures allow client-side or runtime verification of the integrity and publisher of specific image tags.
  • Enabling Docker Content Trust
export DOCKER_CONTENT_TRUST=1
  • Use of UCP client bundles
  • Docker Secrets (we cannot update or rename a secret, but we can revoke it and grant or remove a service's access to it). Also, check the process to create a secret and use it with a service
Step 1: Create a file

 $ cat mysecret 
username: admin
pass: admin123

Step 2: Create a secret from the file, or we can even do it from STDIN.

$ docker secret create mysupersecret mysecret 
uzvrfy96205o541pql1xgym4s

Step 3: Now let's create a service using this secret
$ docker service create --name mynginx1 --secret mysupersecret nginx
ueugjjkuhbbvrrszya1zb5gxs
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service converged
  • Lock the swarm cluster
docker swarm update --autolock=true
  • Use of Docker Group
The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.

If you don’t want to preface the docker command with sudo, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.
sudo usermod -aG docker $USER
  • Docker with cgroups

Docker also makes use of kernel control groups for resource allocation and isolation. A cgroup limits an application to a specific set of resources. Control groups allow Docker Engine to share available hardware resources to containers and optionally enforce limits and constraints.

Docker Engine uses the following cgroups:

  • Memory cgroup for managing accounting, limits, and notifications.
  • CPU cgroup for managing user/system CPU time and usage.
  • CPUSet cgroup for binding a group to specific CPUs. Useful for real-time applications and NUMA systems with localized memory per CPU.
  • BlkIO cgroup for measuring and limiting the amount of block I/O by group.
  • net_cls and net_prio cgroups for tagging traffic for traffic control.
  • Devices cgroup for controlling read/write access to devices.
  -m, --memory bytes                   Memory limit
      --memory-reservation bytes       Memory soft limit
  -c, --cpu-shares int                 CPU shares (relative weight)
      --cpus decimal                   Number of CPUs
      --cpuset-cpus string             CPUs in which to allow execution (0-3, 0,1)
  • Understand the difference between docker limits vs reservation

The limit is a hard limit and the reservation is a soft limit.
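For example (a sketch with illustrative values; it needs a running Docker daemon), a hard memory limit, a soft reservation, and a CPU cap can be combined on one docker run:

```shell
# --memory is the hard limit (the container is killed if it exceeds it);
# --memory-reservation is the soft limit Docker tries to enforce only
# when the host runs low on memory; --cpus caps total CPU time.
docker run -d --name limited \
  --memory 512m --memory-reservation 256m \
  --cpus 0.5 nginx
```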

Domain 6: Storage and Volumes 10%

  • Make sure, you understand the difference between bind mount vs volume
Bind mounts: A bind mount is a file or folder stored anywhere on the container host filesystem, mounted into a running container. The main difference a bind mount has from a volume is that since it can exist anywhere on the host filesystem, processes outside of Docker can also modify it.
  • Also, please pay special attention to spelling mistakes, e.g.:

-v or --volume -> correct

--volumes -> incorrect (extra s in volumes)

 -v, --volume list                    Bind mount a volume
  • Just skim through the output of docker volume inspect (the field to pay special attention to is Mountpoint)
docker volume inspect <volume name>

docker volume inspect jenkins_home
[
    {
        "CreatedAt": "2019-07-17T21:08:33Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/jenkins_home/_data", <----
        "Name": "jenkins_home",
        "Options": null,
        "Scope": "local"
    }
]
  • We can mount the same volume to multiple containers
  • Also, remember that we can use --volumes-from to mount volumes from the specified container(s)
--volumes-from list              Mount volumes from the specified container(s)
  • Make sure you understand the difference between device-mapper(loop-lvm(testing) vs direct-lvm(production))
loop-lvm: This configuration is only appropriate for testing. The loop-lvm mode makes use of a ‘loopback’ mechanism that allows files on the local disk to be read from and written to as if they were an actual physical disk or block device

direct-lvm: Production hosts using the devicemapper storage driver must use direct-lvm mode. This mode uses block devices to create the thin pool. This is faster than using loopback devices, uses system resources more efficiently, and block devices can grow as needed
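On engines that support it, direct-lvm can be configured through daemon.json and Docker will set up the thin pool itself; /dev/xvdf below is an assumed spare block device on the host. A sketch, written locally and validated before use:

```shell
# devicemapper in direct-lvm mode: Docker creates the thin pool on a
# dedicated block device (/dev/xvdf is an assumption; use a spare disk).
cat > daemon.json <<'EOF'
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/xvdf",
    "dm.thinp_percent=95"
  ]
}
EOF
python3 -m json.tool daemon.json > /dev/null && echo "config parses"
```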
  • Running docker run with --rm also removes the anonymous volumes associated with the container
docker run --rm -v /mytestvol busybox

Miscellaneous

  • Take a look at all the commands available under docker system
docker system --help

Usage:	docker system COMMAND

Manage Docker

Commands:
  df          Show docker disk usage
  events      Get real time events from the server
  info        Display system-wide information
  prune       Remove unused data
  • How to set up docker in debug mode?

Edit /etc/docker/daemon.json, setting "debug": true. If the file does not exist, create it; it should look like this when complete:

{
    "debug": true
}
  • Restart Policy

To configure the restart policy for a container, use the --restart flag when using the docker run command. The value of the --restart flag can be any of the following: no (the default), on-failure[:max-retries], always, or unless-stopped.

  • How to copy Docker images from one host to another without using a repository
sudo docker save -o <path for generated tar file> <image name>

sudo docker save -o /home/matrix/matrix-data.tar matrix-data

Once you think you are fully prepared, please try to solve the 9 questions at the end of the Docker Certified Associate study guide:

https://docker.cdn.prismic.io/docker%2Fa2d454ff-b2eb-4e9f-af0e-533759119eee_dca+study+guide+v1.0.1.pdf

Final Words

  • The key takeaway from this exam: you can easily clear it if you know Docker Enterprise Server, Docker Registry, and Docker Swarm in depth.
  • The last exam I wrote was the AWS Security Specialty exam, where the questions were scenario-based and some were almost a page long; here, most of the questions are to the point.
  • So keep calm, write this exam, and let me know if you have any questions.

21 Days of Docker – Day 21

Welcome to Day 21 for 21 Days of Docker

Thanks, everyone, for joining 21 Days of Docker. I learned a lot, and I believe you also got a chance to learn something from my blogs.

In the next few weeks, I am coming up with 21 Days of AWS using Terraform, so stay tuned:-).

Thanks, everyone, and Happy Learning!

Day 1: Introduction to Docker

Day 2: First Docker Container

Day 3: Building Container Continue

Day 4: Docker Container Under The Hood

Day 5: Introduction to Dockerfile

Day 6: Introduction to Dockerfile – Part 2

Day 7: Use multi-stage builds with Dockerfile

Day 8: Docker Images, Layers & Containers

Day 9: Docker Networking – Part 1

Day 10: Docker Networking – Part 2

Day 11: Docker Networking – Part 3

Day 12: Docker Storage – Part 1

Day 13: Docker Storage – Part 2

Day 14: Introduction to Docker Swarm – Part 1

Day 15: Introduction to Docker Swarm – Part 2

Day 16: Introduction to Docker Swarm – Part 3

Day 17: Building Effective Docker Image

Day 18: Docker Security

Day 19: Docker Networking Deep Dive

Day 20: Introduction and Installation of Docker Enterprise Edition

Please follow me on my journey.

This time, to make learning more interactive, I am adding:

  • Slack
  • Meetup

Please feel free to join this group.

Slack: 

https://100daysofdevops.slack.com/join/shared_invite/enQtNzg1MjUzMzQzMzgxLWM4Yjk0ZWJiMjY4ZWE3ODBjZjgyYTllZmUxNzFkNTgxZjQ4NDlmZjkzODAwNDczOTYwOTM2MzlhZDNkM2FkMDA

Meetup Group

If you are in the bay area, please join this meetup group https://www.meetup.com/100daysofdevops/

21 Days of Docker-Day 20 – Introduction and Installation of Docker Enterprise Edition

Welcome to Day 20 of 21 Days of Docker. The topic for today is Introduction and Installation of Docker Enterprise Edition.

What is Docker Enterprise Edition?

Docker Enterprise Edition (Docker EE) is designed for enterprise development and IT teams who build, ship, and run business-critical applications in production and at scale.

Docker Enterprise Features

  • Role-based access control
  • LDAP/AD integration
  • Image scanning and signing enforcement policies
  • Security policies

Universal Control Plane

  • Docker Universal Control Plane (UCP) is the enterprise-grade cluster management solution from Docker. You install it on-premises or in your virtual private cloud, and it helps you manage your Docker cluster and applications through a single interface.

Docker Trusted Registry

  • Docker Trusted Registry (DTR) is the enterprise-grade image storage solution from Docker. You install it behind your firewall so that you can securely store and manage the Docker images you use in your applications.

Installing Docker Enterprise Edition

Setup

  • 3 Ubuntu Hosts
  • 1 Universal Control Plane(UCP) manager
  • 1 Docker Trusted Registry(DTR)
  • 1 Worker Node
  • Let's take a look at how we can install and configure the Docker EE engine, UCP, and DTR.
  • Perform the following steps on all three servers:
  1. Start a free trial for Docker EE: If you don’t have a Docker EE trial already started, then launch one here: https://hub.docker.com/editions/enterprise/docker-ee-trial. This free trial lasts up to a month, but another one can be started right after it expires.
  2. Go to https://hub.docker.com/my-content and retrieve a unique URL for Docker EE.
  3. Click Setup.
  4. Copy the URL generated for Docker EE.
  5. Set a few environment variables. Ensure that the unique URL generated for Docker EE is also used here:
DOCKER_EE_URL=<YOUR_DOCKER_EE_URL> 
DOCKER_EE_VERSION=18.09
  • Verify that the required packages install successfully:
$ sudo apt-get install -y \
>     apt-transport-https \
>     ca-certificates \
>     curl \
>     software-properties-common
[sudo] password for cloud_user:
Reading package lists... Done
Building dependency tree
Reading state information... Done
ca-certificates is already the newest version (20180409).
curl is already the newest version (7.58.0-2ubuntu3.8).
The following additional packages will be installed:
  python3-software-properties
The following NEW packages will be installed:
  apt-transport-https
The following packages will be upgraded:
  python3-software-properties software-properties-common
2 upgraded, 1 newly installed, 0 to remove and 101 not upgraded.
Need to get 35.3 kB of archives.
After this operation, 166 kB of additional disk space will be used.
Get:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 apt-transport-https all 1.6.12 [1692 B]
Get:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-updates/main amd64 software-properties-common all 0.96.24.32.11 [9996 B]
Get:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-updates/main amd64 python3-software-properties all 0.96.24.32.11 [23.6 kB]
Fetched 35.3 kB in 0s (1949 kB/s)
Selecting previously unselected package apt-transport-https.
(Reading database ... 112422 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_1.6.12_all.deb ...
Unpacking apt-transport-https (1.6.12) ...
Preparing to unpack .../software-properties-common_0.96.24.32.11_all.deb ...
Unpacking software-properties-common (0.96.24.32.11) over (0.96.24.32.7) ...
Preparing to unpack .../python3-software-properties_0.96.24.32.11_all.deb ...
Unpacking python3-software-properties (0.96.24.32.11) over (0.96.24.32.7) ...
Setting up apt-transport-https (1.6.12) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up python3-software-properties (0.96.24.32.11) ...
Processing triggers for dbus (1.12.2-1ubuntu1.1) ...
Setting up software-properties-common (0.96.24.32.11) ...
  • Add the GPG key and repository using the unique URL for Docker EE:
 curl -fsSL "${DOCKER_EE_URL}/ubuntu/gpg" | sudo apt-key add -
OK

$ sudo add-apt-repository \
>    "deb [arch=$(dpkg --print-architecture)] $DOCKER_EE_URL/ubuntu \
>    $(lsb_release -cs) \
>    stable-$DOCKER_EE_VERSION"
Hit:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-backports InRelease
Get:4 https://storebits.docker.com/ee/trial/sub-8158a9b3-de4e-4753-b73e-d386fca163ff/ubuntu bionic InRelease [116 kB]
Get:5 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:6 https://storebits.docker.com/ee/trial/sub-8158a9b3-de4e-4753-b73e-d386fca163ff/ubuntu bionic/stable-18.09 amd64 Packages [6386 B]
Fetched 211 kB in 0s (454 kB/s)
Reading package lists... Done
  • Install Docker EE:
$ sudo apt-get update
Hit:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:4 https://storebits.docker.com/ee/trial/sub-8158a9b3-de4e-4753-b73e-d386fca163ff/ubuntu bionic InRelease
Get:5 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Fetched 88.7 kB in 1s (166 kB/s)
Reading package lists... Done

$ sudo apt-get install -y docker-ee=5:18.09.4~3-0~ubuntu-bionic
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  aufs-tools cgroupfs-mount containerd.io docker-ee-cli libltdl7 pigz
The following NEW packages will be installed:
  aufs-tools cgroupfs-mount containerd.io docker-ee docker-ee-cli libltdl7 pigz
0 upgraded, 7 newly installed, 0 to remove and 101 not upgraded.
Need to get 56.5 MB of archives.
After this operation, 266 MB of additional disk space will be used.
Get:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic/universe amd64 pigz amd64 2.4-1 [57.4 kB]
Get:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic/universe amd64 aufs-tools amd64 1:4.9+20170918-1ubuntu1 [104 kB]
Get:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic/universe amd64 cgroupfs-mount all 1.4 [6320 B]
Get:4 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic/main amd64 libltdl7 amd64 2.4.6-2 [38.8 kB]
Get:5 https://storebits.docker.com/ee/trial/sub-8158a9b3-de4e-4753-b73e-d386fca163ff/ubuntu bionic/stable-18.09 amd64 containerd.io amd64 1.2.5-1 [19.9 MB]
Get:6 https://storebits.docker.com/ee/trial/sub-8158a9b3-de4e-4753-b73e-d386fca163ff/ubuntu bionic/stable-18.09 amd64 docker-ee-cli amd64 5:18.09.10~3-0~ubuntu-bionic [17.1 MB]
Get:7 https://storebits.docker.com/ee/trial/sub-8158a9b3-de4e-4753-b73e-d386fca163ff/ubuntu bionic/stable-18.09 amd64 docker-ee amd64 5:18.09.4~3-0~ubuntu-bionic [19.3 MB]
Fetched 56.5 MB in 2s (30.9 MB/s)
Selecting previously unselected package pigz.
(Reading database ... 112428 files and directories currently installed.)
Preparing to unpack .../0-pigz_2.4-1_amd64.deb ...
Unpacking pigz (2.4-1) ...
Selecting previously unselected package aufs-tools.
Preparing to unpack .../1-aufs-tools_1%3a4.9+20170918-1ubuntu1_amd64.deb ...
Unpacking aufs-tools (1:4.9+20170918-1ubuntu1) ...
Selecting previously unselected package cgroupfs-mount.
Preparing to unpack .../2-cgroupfs-mount_1.4_all.deb ...
Unpacking cgroupfs-mount (1.4) ...
Selecting previously unselected package containerd.io.
Preparing to unpack .../3-containerd.io_1.2.5-1_amd64.deb ...
Unpacking containerd.io (1.2.5-1) ...
Selecting previously unselected package docker-ee-cli.
Preparing to unpack .../4-docker-ee-cli_5%3a18.09.10~3-0~ubuntu-bionic_amd64.deb ...
Unpacking docker-ee-cli (5:18.09.10~3-0~ubuntu-bionic) ...
Selecting previously unselected package docker-ee.
Preparing to unpack .../5-docker-ee_5%3a18.09.4~3-0~ubuntu-bionic_amd64.deb ...
Unpacking docker-ee (5:18.09.4~3-0~ubuntu-bionic) ...
Selecting previously unselected package libltdl7:amd64.
Preparing to unpack .../6-libltdl7_2.4.6-2_amd64.deb ...
Unpacking libltdl7:amd64 (2.4.6-2) ...
Setting up aufs-tools (1:4.9+20170918-1ubuntu1) ...
Setting up containerd.io (1.2.5-1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
Processing triggers for ureadahead (0.100.0-20) ...
Setting up cgroupfs-mount (1.4) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Setting up docker-ee-cli (5:18.09.10~3-0~ubuntu-bionic) ...
Processing triggers for systemd (237-3ubuntu10.29) ...
Setting up libltdl7:amd64 (2.4.6-2) ...
Setting up docker-ee (5:18.09.4~3-0~ubuntu-bionic) ...
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up pigz (2.4-1) ...
Processing triggers for ureadahead (0.100.0-20) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
  • Add user access to run Docker commands
$ sudo usermod -a -G docker centos

NOTE: You need to re-source your environment, or just log out and log back in

  • Test the Docker EE installation to verify that it’s working:
$ docker version
Client:
 Version:           18.09.10
 API version:       1.39
 Go version:        go1.12.10
 Git commit:        2408617bbf
 Built:             Fri Oct  4 20:56:49 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Enterprise
 Engine:
  Version:          18.09.4
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       c3516c4
  Built:            Wed Mar 27 18:02:16 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Set Up the UCP Manager

  • With the Docker installation out of the way, let's install the UCP manager on the UCP manager host
  • Pull the UCP image:
$ docker image pull docker/ucp:3.1.5
3.1.5: Pulling from docker/ucp
cd784148e348: Pull complete
3871e7d70c20: Pull complete
b7a92b3565bc: Pull complete
Digest: sha256:334d7b21e30d7b3caeea6dd0fb4e79633b7035a4fd63ea479dfac19470206012
Status: Downloaded newer image for docker/ucp:3.1.5
  • Use the UCP image for the installation:
$ docker container run --rm -it --name ucp \
>   -v /var/run/docker.sock:/var/run/docker.sock \
>   docker/ucp:3.1.5 install \
>   --host-address 10.0.1.101 \
>   --interactive
INFO[0000] Your engine version 18.09.4, build c3516c4 (4.15.0-1034-aws) is compatible with UCP 3.1.5 (e3b1ac1)
Admin Username: admin <--- Provide a username
Admin Password:  <-- and a password
Confirm Admin Password:
INFO[0014] Pulling required images... (this may take a while)
INFO[0014] Pulling docker/ucp-dsinfo:3.1.5
INFO[0022] Pulling docker/ucp-cfssl:3.1.5
INFO[0022] Pulling docker/ucp-interlock-extension:3.1.5
INFO[0023] Pulling docker/ucp-agent:3.1.5
INFO[0024] Pulling docker/ucp-calico-node:3.1.5
INFO[0027] Pulling docker/ucp-compose:3.1.5
INFO[0028] Pulling docker/ucp-kube-compose-api:3.1.5
INFO[0029] Pulling docker/ucp-kube-dns:3.1.5
INFO[0030] Pulling docker/ucp-etcd:3.1.5
INFO[0031] Pulling docker/ucp-kube-dns-dnsmasq-nanny:3.1.5
INFO[0033] Pulling docker/ucp-interlock:3.1.5
INFO[0033] Pulling docker/ucp-auth:3.1.5
INFO[0034] Pulling docker/ucp-hyperkube:3.1.5
INFO[0042] Pulling docker/ucp-controller:3.1.5
INFO[0045] Pulling docker/ucp-auth-store:3.1.5
INFO[0046] Pulling docker/ucp-metrics:3.1.5
INFO[0049] Pulling docker/ucp-interlock-proxy:3.1.5
INFO[0050] Pulling docker/ucp-pause:3.1.5
INFO[0050] Pulling docker/ucp-swarm:3.1.5
INFO[0051] Pulling docker/ucp-kube-compose:3.1.5
INFO[0052] Pulling docker/ucp-kube-dns-sidecar:3.1.5
INFO[0053] Pulling docker/ucp-calico-cni:3.1.5
INFO[0055] Pulling docker/ucp-calico-kube-controllers:3.1.5
INFO[0060] Pulling docker/ucp-azure-ip-allocator:3.1.5
WARN[0061] None of the hostnames we'll be using in the UCP certificates [ip-10-0-1-101 127.0.0.1 172.17.0.1 10.0.1.101] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases: <-- Hit enter and select default aliases
INFO[0000] Initializing a new swarm at 10.0.1.101
INFO[0013] Installing UCP with host address 10.0.1.101 - If this is incorrect, please specify an alternative address with the '--host-address' flag
INFO[0013] Deploying UCP Service...
INFO[0089] Installation completed on ip-10-0-1-101 (node kwropkch0bblk0ua5ld7gezco)
INFO[0089] UCP Instance ID: 9wbsl1mze4aiqy1k9h6s7h98q
INFO[0089] UCP Server SSL: SHA-256 Fingerprint=40:72:CC:57:89:C0:83:1E:42:28:7B:B8:23:AE:A5:A5:7D:AA:FF:2A:EC:F7:36:DD:08:79:7E:70:29:17:2B:A6
INFO[0089] Login to UCP at https://10.0.1.101:443
INFO[0089] Username: admin
INFO[0089] Password: (your admin password)
  • In a web browser, go to https://[UCP manager Public IP] to access the UCP manager.
  • Use the admin credentials that were created during the initial setup process to log in.
  • Note: A warning about the self-signed certificate's validity may appear. This notification can be safely disregarded.

Add Both UCP Workers to the Cluster

  1. Navigate back to the UCP manager interface in a web browser to retrieve the worker join command; this generates a docker swarm join command that can be copied.
  2. Click Shared Resources
  3. Click Nodes
  4. Click Add Node.
  5. Apply the following values on the Add Node page:
    • Node type: Linux
    • Node role: Worker
$ docker swarm join --token SWMTKN-1-3uz01qrsbwsk4nj8p5u22e8k9f1sc1adu850y0hazn9hn15bai-aefa5kpqvy0lqj1a5s0u9t9ez 10.0.1.101:2377
This node joined a swarm as a worker.

$ docker swarm join --token SWMTKN-1-3uz01qrsbwsk4nj8p5u22e8k9f1sc1adu850y0hazn9hn15bai-aefa5kpqvy0lqj1a5s0u9t9ez 10.0.1.101:2377
This node joined a swarm as a worker.

Set Up Docker Trusted Registry

Get the DTR setup command from the UCP manager by performing the following steps:

  1. Access the UCP manager from a web browser.
  2. Click Admin > Admin Settings.
  3. Click Docker Trusted Registry.
  4. On the Admin Settings page locate the UCP Node section.
  5. Click ip-10-0-1-102.
  6. Click the checkbox labeled Disable TLS verification for UCP.
$ docker run -it --rm docker/dtr install  --ucp-node ip-10-0-1-102  --ucp-username admin  --ucp-url https://10.0.1.101  --ucp-insecure-tls
Unable to find image 'docker/dtr:latest' locally
latest: Pulling from docker/dtr
9d48c3bd43c5: Pull complete
dcfa06138f1d: Pull complete
3a8b460c24c5: Pull complete
4bb8be37e77e: Pull complete
ba41549fd9f6: Pull complete
Digest: sha256:e1eae7579a6a1793d653dd97df9297464600e2644565fa2ed33c351f942facf6
Status: Downloaded newer image for docker/dtr:latest
INFO[0000] Beginning Docker Trusted Registry installation
ucp-password:
INFO[0004] Validating UCP cert
INFO[0004] Connecting to UCP
INFO[0004] health checking ucp
INFO[0004] The UCP cluster contains the following nodes without port conflicts: ip-10-0-1-102, ip-10-0-1-103
INFO[0004] Searching containers in UCP for DTR replicas
INFO[0004] Searching containers in UCP for DTR replicas
INFO[0004] verifying [80 443] ports on ip-10-0-1-102
INFO[0008] Waiting for running dtr-phase2 container to finish
INFO[0008] starting phase 2
INFO[0000] Validating UCP cert
INFO[0000] Connecting to UCP
INFO[0000] health checking ucp
INFO[0000] Verifying your system is compatible with DTR
INFO[0000] Checking if the node is okay to install on
INFO[0000] Using default overlay subnet: 10.1.0.0/24
INFO[0000] Creating network: dtr-ol
INFO[0000] Connecting to network: dtr-ol
INFO[0000] Waiting for phase2 container to be known to the Docker daemon
INFO[0001] Setting up replica volumes...
INFO[0002] Creating initial CA certificates
INFO[0002] Bootstrapping rethink...
INFO[0002] Creating dtr-rethinkdb-729902891973...
INFO[0008] Establishing connection with Rethinkdb
INFO[0009] Waiting for database dtr2 to exist
INFO[0009] Waiting for database dtr2 to exist
INFO[0009] Waiting for database dtr2 to exist
INFO[0010] Generated TLS certificate.                    dnsNames="[*.com *.*.com example.com *.dtr *.*.dtr]" domains="[*.com *.*.com 172.17.0.1 example.com *.dtr *.*.dtr]" ipAddresses="[172.17.0.1]"
INFO[0011] License config copied from UCP.
INFO[0011] Migrating db...
INFO[0000] Establishing connection with Rethinkdb
INFO[0000] Migrating database schema                     fromVersion=0 toVersion=10
INFO[0003] Waiting for database notaryserver to exist
INFO[0003] Waiting for database notaryserver to exist
INFO[0003] Waiting for database notaryserver to exist
INFO[0005] Waiting for database notarysigner to exist
INFO[0005] Waiting for database notarysigner to exist
INFO[0006] Waiting for database notarysigner to exist
INFO[0006] Waiting for database jobrunner to exist
INFO[0007] Waiting for database jobrunner to exist
INFO[0007] Waiting for database jobrunner to exist
INFO[0010] Migrated database from version 0 to 10
INFO[0021] Starting all containers...
INFO[0021] Getting container configuration and starting containers...
INFO[0021] Automatically configuring rethinkdb cache size to 2000 mb
INFO[0022] Recreating dtr-rethinkdb-729902891973...
INFO[0028] Creating dtr-registry-729902891973...
INFO[0032] Creating dtr-garant-729902891973...
INFO[0037] Creating dtr-api-729902891973...
INFO[0060] Creating dtr-notary-server-729902891973...
INFO[0064] Recreating dtr-nginx-729902891973...
INFO[0071] Creating dtr-jobrunner-729902891973...
INFO[0077] Creating dtr-notary-signer-729902891973...
INFO[0082] Creating dtr-scanningstore-729902891973...
INFO[0087] Trying to get the kv store connection back after reconfigure
INFO[0087] Establishing connection with Rethinkdb
INFO[0088] Verifying auth settings...
INFO[0088] Successfully registered dtr with UCP
INFO[0088] Installation is complete
INFO[0088] Replica ID is set to: 729902891973
INFO[0088] You can use flag '--existing-replica-id 729902891973' when joining other replicas to your Docker Trusted Registry Cluster

NOTE: Don’t forget to change the --ucp-url to the private IP of the UCP server

  • Once the installation is complete, browse to the DTR URL
  • Note: Once again, you may see a warning about the self-signed certificate’s validity. This warning can safely be disregarded

Please follow me on my journey

This time, to make learning more interactive, I am adding

  • Slack
  • Meetup

Please feel free to join this group.

Slack: 

https://100daysofdevops.slack.com/join/shared_invite/enQtNzg1MjUzMzQzMzgxLWM4Yjk0ZWJiMjY4ZWE3ODBjZjgyYTllZmUxNzFkNTgxZjQ4NDlmZjkzODAwNDczOTYwOTM2MzlhZDNkM2FkMDA

Meetup Group

If you are in the bay area, please join this meetup group https://www.meetup.com/100daysofdevops/

21 Days of Docker-Day 19 -Docker Networking Deep Dive

On Day 9, Day 10, and Day 11 I discussed Docker networking; let’s go one level down and dig deeper into it.

What is Container Networking Model(CNM) and Libnetwork?

  • The CNM is an open-source container networking specification contributed to the community by Docker, Inc.
  • Docker’s libnetwork is a library that provides an implementation for CNM.
  • However, third-party plugins can be used to replace the built-in Docker driver.
  • Libnetwork is cross-platform and open-source.
  • CNM has interfaces for both IPAM plugins and network plugins. The IPAM plugin APIs can be used to create/delete address pools and allocate/deallocate container IP addresses. The network plugin APIs are used to create/delete networks and add/remove containers from networks.
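To see where the IPAM side of CNM surfaces in day-to-day use, here is a rough sketch (the network name and addresses are illustrative, and it assumes a running Docker daemon): the --subnet, --ip-range, and --gateway flags feed the built-in IPAM driver, and --ipam-driver is where a third-party plugin would be substituted.

```shell
# Create a bridge network with an explicit IPAM configuration.
# The built-in "default" IPAM driver manages this address pool;
# a third-party IPAM plugin could be swapped in via --ipam-driver.
docker network create \
  --driver bridge \
  --ipam-driver default \
  --subnet 192.168.100.0/24 \
  --ip-range 192.168.100.128/25 \
  --gateway 192.168.100.1 \
  my-ipam-net

# Containers attached to this network receive addresses from the pool
docker network inspect my-ipam-net --format '{{json .IPAM.Config}}'

# Clean up
docker network rm my-ipam-net
```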

Docker Networking on Linux

  • Docker networking uses the Linux kernel’s extensive networking capabilities (e.g. TCP/IP stack, VXLAN, DNS) and utilizes many of its networking features (network namespaces, bridges, iptables, veth pairs, …)
  • Linux Bridges: L2 virtual switches implemented in the kernel
  • Network namespaces: Used for isolating container network stacks
  • veth pairs: Connecting containers to container networks
  • iptables: Used for port mapping, load balancing, network isolation
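On a Linux Docker host you can observe these primitives directly; a quick sketch (assumes a Linux host with Docker installed and root privileges, and exact output will vary):

```shell
# The default "bridge" network is backed by the docker0 Linux bridge
ip link show type bridge

# Each container connects to the bridge through one end of a veth pair
ip link show type veth

# Port mappings (-p 8080:80, etc.) are implemented as iptables DNAT rules
sudo iptables -t nat -L DOCKER -n

# Container network namespaces live here (Docker does not register
# them with `ip netns` by default)
sudo ls /var/run/docker/netns
```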

Docker Overlay Driver

  • The overlay network driver creates a distributed network among multiple Docker daemon hosts. This network sits on top of (overlays) the host-specific networks, allowing containers connected to it (including swarm service containers) to communicate securely. Docker transparently handles routing of each packet to and from the correct Docker daemon host and the correct destination container.

How does it work?

  • The overlay driver uses VXLAN technology to build the network
  • A VXLAN tunnel is created through the underlay network(s)
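You can peek at the VXLAN side from a swarm manager; a hedged sketch (assumes an initialized swarm, and the network name is illustrative):

```shell
# Create an attachable overlay network
docker network create -d overlay --attachable my-overlay-demo

# Each overlay network is assigned a VXLAN network identifier (VNI),
# visible in the driver options
docker network inspect my-overlay-demo \
  --format '{{index .Options "com.docker.network.driver.overlay.vxlanid_list"}}'

# VXLAN data traffic between hosts uses UDP port 4789, so that port
# (plus TCP/UDP 7946 for control-plane gossip) must be open in the
# underlay network
```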

Network Troubleshooting

  • The first step in any troubleshooting is to check the container logs
docker container logs <container id>
  • If you want to check docker daemon logs
sudo journalctl -u docker
  • Let’s discuss one handy tool called netshoot
  • Docker network troubleshooting can become complex. With a proper understanding of how Docker networking works and the right set of tools, you can troubleshoot and resolve these networking issues. The netshoot container has a set of powerful networking troubleshooting tools that can be used to troubleshoot Docker networking issues.
  • If you’re having networking issues with your application’s container, you can launch netshoot with that container’s network namespace.
  • netshoot ships with a set of powerful tools, including:
apache2-utils
bash
bind-tools
bird
bridge-utils
busybox-extras
calicoctl
conntrack-tools
ctop
curl
dhcping
drill
ethtool
file
fping
iftop
iperf
iproute2
ipset
  • To illustrate that, let’s take a simple example
  • Let’s create a custom bridge network
$ docker network create my-custom-net
b56567c0609dda35f8312c08b6c974875217447e6d3618ea9d63256e3013d2cf
  • Run and attach a container to this network
$ docker container run -dt --name my-nginx --network my-custom-net -p 80:80 nginx
0564f1f854c004911ea42edd0d7492a8c5f32cafba72993022ec99d36e0c8c33
  • Let’s say we are facing an issue connecting to the nginx container; to check that:
$ docker container run --rm --network my-custom-net nicolaka/netshoot curl my-nginx:80
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
100   612  100   612    0     0  76500      0 --:--:-- --:--:-- --:--:-- 87428
  • One of the most powerful features of netshoot: if you’re having networking issues with your application’s container, you can launch netshoot with that container’s network namespace like this:

$ docker run -it --net container:<container_name> nicolaka/netshoot

  • For the case we discussed above, we can do:
$ docker container run --rm -it --net container:my-nginx nicolaka/netshoot curl localhost:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

For more info

https://github.com/nicolaka/netshoot


21 Days of Docker-Day 18 -Docker Security

Welcome to Day 18 of 21 Days of Docker. The topic for today is Docker Security. Today my focus is on four topics

  • Security scanning using Docker Trusted Registry
  • Managing secrets in Swarm
  • Docker Content Trust
  • Encrypting Overlay Network

Security scanning using Docker Trusted Registry

Docker Trusted Registry comes with Docker Enterprise Edition, and I really like its security scanning feature. Before going there, what is Docker Trusted Registry?

Docker Trusted Registry is an on-premises registry that allows
enterprises to store and manage their Docker images.

DTR runs a security scan on your images, and you can view the results.

Managing secrets in Swarm

  • In terms of Docker Swarm services, a secret is a blob of data, such as a password, SSH private key, SSL certificate, or another piece of data that should not be transmitted over a network or stored unencrypted in a Dockerfile or in your application’s source code.
  • In Docker 1.13 and higher, you can use Docker secrets to centrally manage this data and securely transmit it to only those containers that need access to it.
  • Secrets are encrypted during transit and at rest in a Docker swarm.
  • A given secret is only accessible to those services which have been granted explicit access to it, and only while those service tasks are running.
  • Step 1: Create a file
 $ cat mysecret 
username: admin
pass: admin123
  • Step 2: Create a secret from the file (we can even do it from STDIN).
$ docker secret create mysupersecret mysecret 
uzvrfy96205o541pql1xgym4s
  • Step 3: List the secrets
$ docker secret ls
ID                          NAME                DRIVER              CREATED             UPDATED
uzvrfy96205o541pql1xgym4s   mysupersecret                           13 seconds ago      13 seconds ago
  • Step 4: The secret is encrypted, so even if we try to inspect it, we can’t see its contents
$ docker secret inspect uzvrfy96205o541pql1xgym4s
[
    {
        "ID": "uzvrfy96205o541pql1xgym4s",
        "Version": {
            "Index": 41
        },
        "CreatedAt": "2019-10-25T17:22:01.841559963Z",
        "UpdatedAt": "2019-10-25T17:22:01.841559963Z",
        "Spec": {
            "Name": "mysupersecret",
            "Labels": {}
        }
    }
]
  • Now let’s create a service using this secret
$ docker service create --name mynginx1 --secret mysupersecret nginx
ueugjjkuhbbvrrszya1zb5gxs
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service converged 
  • Let’s log in to the container
$ docker exec -it 48a4c7e74a8e bash
root@48a4c7e74a8e:/# cd /run
root@48a4c7e74a8e:/run# ls
lock  nginx.pid  secrets  utmp
# cd secrets/
  • As you can see, the container has access to the secret file
# cat mysupersecret 
username: admin
pass: admin123
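The same secret can also be consumed declaratively from a stack file; a minimal sketch (the service name is illustrative, and the secret is assumed to already exist from the docker secret create step above):

```yaml
version: "3.1"          # secrets require Compose file format 3.1 or later
services:
  web:
    image: nginx
    secrets:
      - mysupersecret   # mounted read-only at /run/secrets/mysupersecret
secrets:
  mysupersecret:
    external: true      # created earlier with `docker secret create`
```

Deploy it with docker stack deploy -c <file> <stack-name>.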

Docker Content Trust

  • When transferring data among networked systems, trust is a central concern. In particular, when communicating over an untrusted medium such as the internet, it is critical to ensure the integrity and the publisher of all the data a system operates on. You use the Docker Engine to push and pull images (data) to a public or private registry. Content trust gives you the ability to verify both the integrity and the publisher of all the data received from a registry over any channel.
  • Docker Content Trust (DCT) provides the ability to use digital signatures for data sent to and received from remote Docker registries. These signatures allow client-side or runtime verification of the integrity and publisher of specific image tags.
  • Through DCT, image publishers can sign their images and image consumers can ensure that the images they use are signed.
  • First, log in to Docker Hub. Enter your Docker Hub credentials when prompted.
$ docker login
  • Generate a delegation key pair. 
$ docker trust key generate lakhera2014
Generating key for lakhera2014...
Enter passphrase for new lakhera2014 key with ID 8480add: 
Repeat passphrase for new lakhera2014 key with ID 8480add: 
Successfully generated and loaded private key. Corresponding public key available: /Users/plakhera/lakhera2014.pub
  • Then we’ll add ourselves as a signer to an image repository.
plakhera-ltm:~ plakhera$ docker trust signer add --key lakhera2014.pub lakhera2014  lakhera2014/mydct-test
Adding signer "lakhera2014" to lakhera2014/mydct-test...
Initializing signed repository for lakhera2014/mydct-test...
You are about to create a new root signing key passphrase. This passphrase
will be used to protect the most sensitive key in your signing system. Please
choose a long, complex passphrase and be careful to keep the password and the
key file itself secure and backed up. It is highly recommended that you use a
password manager to generate the passphrase and keep it safe. There will be no
way to recover this key. You can find the key in your config directory.
Enter passphrase for new root key with ID e638abc: 
Repeat passphrase for new root key with ID e638abc: 
Enter passphrase for new repository key with ID 5d6a8b3: 
Repeat passphrase for new repository key with ID 5d6a8b3: 
Successfully initialized "lakhera2014/mydct-test"
Successfully added signer: lakhera2014 to lakhera2014/mydct-test

NOTE: Once again, be sure to make note of the passphrases used.

  • Let’s create a Dockerfile
$ cat Dockerfile 
FROM busybox
CMD echo coming from dct
  • Build the image
$ docker build -t lakhera2014/mydct:unsigned .
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e
 ---> 19485c79a9bb
Step 2/2 : CMD echo coming from dct
 ---> Using cache
 ---> df88a4715edc
Successfully built df88a4715edc
Successfully tagged lakhera2014/mydct:unsigned
Tagging busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e as busybox:latest
  • Run it
$ docker container run lakhera2014/mydct:unsigned
coming from dct
  • Let’s enable Docker Content Trust now and see the result
$ export DOCKER_CONTENT_TRUST=1
$ docker container run lakhera2014/mydct:unsigned
docker: Error: remote trust data does not exist for docker.io/lakhera2014/mydct: notary.docker.io does not have trust data for docker.io/lakhera2014/mydct.
See 'docker run --help'.
  • Let’s build the image but this time giving signed tag
$ docker build -t lakhera2014/mydct:signed .
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e
 ---> 19485c79a9bb
Step 2/2 : CMD echo coming from dct
 ---> Using cache
 ---> df88a4715edc
Successfully built df88a4715edc
Successfully tagged lakhera2014/mydct:signed
Tagging busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e as busybox:latest
  • Push a signed tag to the repo. Enter the passphrase; this will be the one that was chosen earlier when running the docker trust key generate command
$ docker trust sign lakhera2014/mydct:signed
Enter passphrase for root key with ID e638abc: 
Enter passphrase for new repository key with ID f626d57: 
Repeat passphrase for new repository key with ID f626d57: 
Enter passphrase for lakhera2014 key with ID 8480add: 
Created signer: lakhera2014
Finished initializing signed repository for lakhera2014/mydct:signed
Signing and pushing trust data for local image lakhera2014/mydct:signed, may overwrite remote trust data
The push refers to repository [docker.io/lakhera2014/mydct]
6c0ea40aef9d: Mounted from library/busybox 
signed: digest: sha256:13bc8974eb312c18d28ea46a356ab0af01565f7259eb6042803a2eda4f56b52f size: 527
Signing and pushing trust metadata
Enter passphrase for lakhera2014 key with ID 8480add: 
Successfully signed docker.io/lakhera2014/mydct:signed
  • NOTE: docker trust sign also pushes the image to Docker Hub.
  • Run the container again
$ docker container run lakhera2014/mydct:signed
coming from dct
  • Turn off Docker Content Trust and attempt to run the unsigned image again
$ export DOCKER_CONTENT_TRUST=0
$ docker container run lakhera2014/mydct:unsigned
coming from dct

Encrypting Overlay Network

  • We can encrypt communication between containers on overlay networks in order to provide greater security within our swarm cluster.
  • Use the --opt encrypted flag when creating an overlay network to encrypt it
docker network create --opt encrypted --driver overlay <network name>
  • Create a service using the overlay network
docker service create --name my-encrypted-overlay --network <overlay network name> --replicas 3 nginx
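Encryption can also be requested declaratively when the overlay network is defined in a stack file; a minimal sketch (the service and network names are illustrative):

```yaml
version: "3"
services:
  web:
    image: nginx
    deploy:
      replicas: 3
    networks:
      - my-encrypted-net
networks:
  my-encrypted-net:
    driver: overlay
    driver_opts:
      encrypted: ""     # same effect as --opt encrypted on the CLI
```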


21 Days of Docker-Day 17 -Building Effective Docker Image

Welcome to Day 17 of 21 Days of Docker. The topic for today is building effective Docker images.

What are container layers?

On Day 8, I talked about Docker Images and layers.

An image is built from a series of image layers, and when we boot a container from that image, the container adds a read-write layer on top of them.

Now, at this stage, the question you need to ask yourself is: why do I care how many layers I have?

  • More layers mean a larger image. The larger the image, the longer it takes to build, push, and pull from the registry.
  • Smaller images mean faster builds and deploys.

How can I reduce my layers?

  • Use shared base images where possible.
  • Limit the data written to the container layer.
  • Chain RUN statements
  • Prevent cache misses at build for as long as possible.
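The "chain RUN statements" and cache-miss points can be sketched in a Dockerfile (package and file names here are examples only): each RUN, COPY, and ADD instruction creates a layer, so chaining related commands and copying slow-changing files first keeps images small and cache-friendly.

```dockerfile
# Illustrative Dockerfile; package and file names are examples only.
FROM ubuntu:18.04

# Chain related commands into one RUN: this creates a single layer,
# and the apt cache is deleted inside that same layer, so it never
# ships in the image. (Separate RUN lines would each add a layer,
# and files deleted in a later layer still ship in earlier ones.)
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3-pip && \
    rm -rf /var/lib/apt/lists/*

# To avoid cache misses, copy slow-changing files (dependency lists)
# before fast-changing ones (source code): the install layer is
# rebuilt only when requirements.txt itself changes.
WORKDIR /app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
```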

Other steps we can take to build an optimized image

Choosing the right base image

centos    latest         9f38484d220f   7 months ago   202MB

vs

python    3.7.3-alpine   2caaa0e9feab   3 months ago   87.2MB
  • In this case, if I am building a Python application, I don’t need the entire CentOS operating system; I just need a minimal base OS (which Alpine provides) with the Python binaries installed on top of it.
FROM ubuntu:latest  --> FROM python:3.7.3-alpine

LABEL maintainer="[email protected]"

RUN apt-get update -y && \            --> we don't need this, as the Python image already includes it
    apt-get install -y python-pip python-dev

COPY ./requirements.txt /app/requirements.txt

WORKDIR /app

RUN pip install -r requirements.txt

COPY . /app

ENTRYPOINT [ "python" ]

CMD [ "app.py" ]

NOTE: There might be cases where you need a full base OS

  • Security
  • Compliance
  • Ease of development

Use of Scratch

  • It’s a special, empty base image.
  • Use this to build your own base images.
  • Or build a minimal image that runs a binary and nothing else.
FROM scratch
COPY hello /
CMD ["/hello"]

More info: https://docs.docker.com/develop/develop-images/baseimages/

Use multi-stage builds, as I mentioned on Day 7
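A minimal multi-stage sketch that ties the scratch idea together (the Go program name is illustrative): the first stage carries the full toolchain, and only the static binary is copied into the final, empty image.

```dockerfile
# Stage 1: build a static binary using the full Go toolchain
FROM golang:1.13 AS build
WORKDIR /src
COPY hello.go .
RUN CGO_ENABLED=0 go build -o /hello hello.go

# Stage 2: start from the empty scratch image and copy in
# only the binary; the final image contains nothing else
FROM scratch
COPY --from=build /hello /hello
CMD ["/hello"]
```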


21 Days of Docker-Day 16 -Introduction to Docker Compose

Welcome to Day 16 of 21 Days of Docker. The topic for today is Docker Compose.

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Install Compose

  • Run this command to download the current stable release of Docker Compose:
$ sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  • Set the execute permission
sudo chmod +x /usr/local/bin/docker-compose
  • To verify it
$ docker-compose --version
docker-compose version 1.24.1, build 4667896b
  • Define the services that make up your app in docker-compose.yml
version: '3'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  redis:
    image: "redis:alpine"
  • This Compose file defines two services: web and redis.

Web service

  • The web service uses a public nginx image. It binds host port 8080 to the container’s exposed port 80, the default port for the nginx web server.

Redis service

  • The redis service uses a public Redis image pulled from the Docker Hub registry.
  • To validate your configuration
$ docker-compose config
services:
  redis:
    image: redis:alpine
  web:
    image: nginx
    ports:
    - 8080:80/tcp
version: '3.0'
  • Start up your application by running docker-compose up
$ docker-compose up -d

Creating network "cloud_user_default" with the default driver
Creating cloud_user_redis_1 ... done
Creating cloud_user_web_1   ... done
  • To bring down the services
$ docker-compose down
Removing cloud_user_web_1   ... done
Removing cloud_user_redis_1 ... done
Removing network cloud_user_default

The famous WordPress example I will leave up to you to test

https://docs.docker.com/compose/wordpress/

version: '3.3'

services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: somewordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress
       WORDPRESS_DB_NAME: wordpress
volumes:
    db_data: {}

Deploy a stack to a swarm

When running Docker Engine in swarm mode, you can use docker stack deploy to deploy a complete application stack to the swarm. The deploy command accepts a stack description in the form of a Compose file.

$ docker stack deploy -c docker-compose.yml mystack
Creating network mystack_default
Creating service mystack_web
Creating service mystack_redis
  • List stacks
$ docker stack ls
NAME                SERVICES            ORCHESTRATOR
mystack             2                   Swarm
  • List the tasks in the stack
$ docker stack ps  mystack
ID                  NAME                IMAGE               NODE                          DESIRED STATE       CURRENT STATE            ERROR               PORTS
gd34he1ej70p        mystack_redis.1     redis:alpine        plakhera14c.example.com   Running             Running 14 seconds ago                       
r3d8fmrigxwx        mystack_web.1       nginx:latest        plakhera12c.example.com   Running             Running 18 seconds ago                       
  • As you can see, redis is deployed on plakhera14c.example.com and web on plakhera12c.example.com
  • List the services in the stack
$ docker stack services mystack
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
3mcj15hyaq6a        mystack_web         replicated          1/1                 nginx:latest        *:8080->80/tcp
j1h8c1ygn796        mystack_redis       replicated          1/1                 redis:alpine        
  • Remove one or more stacks
$ docker stack rm  mystack
Removing service mystack_redis
Removing service mystack_web
Removing network mystack_default


21 Days of Docker-Day 15 -Introduction to Docker Swarm- Part 2

On Day 14, I gave you a basic introduction to Docker Swarm; let’s explore Swarm in more depth.

Adding Network and Publishing Ports to Swarm Tasks

  • Publishing ports for Swarm tasks is similar to what we did with standalone Docker
$ docker service create --name mypublishportservice --replicas 2 -p 8080:80 nginx
khwcntzjltfmxu8rwxy208wb2
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged 
  • Verify it
$ docker service ls
ID                  NAME                   MODE                REPLICAS            IMAGE               PORTS
juqr8xuetjh1        myglobal               global              3/3                 nginx:latest        
khwcntzjltfm        mypublishportservice   replicated          2/2                 nginx:latest        *:8080->80/tcp

$ docker service ps mypublishportservice
ID                  NAME                     IMAGE               NODE                          DESIRED STATE       CURRENT STATE            ERROR               PORTS
mtgrus1pk5m0        mypublishportservice.1   nginx:latest        plakhera12c.mylabserver.com   Running             Running 20 seconds ago                       
ul3vtb79mu8r        mypublishportservice.2   nginx:latest        plakhera14c.mylabserver.com   Running             Running 19 seconds ago                       
  • Go to one of the nodes and try to access it on port 8080
$ curl 172.31.21.46:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Lock your swarm to protect its encryption key

  • When Docker restarts, both the TLS key used to encrypt communication among swarm nodes and the key used to encrypt and decrypt Raft logs on disk, are loaded into each manager node’s memory. 
  • Docker 1.13 introduces the ability to protect the mutual TLS encryption key and the key used to encrypt and decrypt Raft logs at rest, by allowing you to take ownership of these keys and to require manual unlocking of your managers. This feature is called autolock.

Enable or disable autolock on an existing swarm

  • To enable autolock on an existing swarm, set the autolock flag to true.
# docker swarm update --autolock=true
Swarm updated.
To unlock a swarm manager after it restarts, run the `docker swarm unlock`
command and provide the following key:

    SWMKEY-1-XXXXXXXXXXXXXXXXXXXXXX

Please remember to store this key in a password manager, since without it you
will not be able to restart the manager.
  • Store the key in a safe place, such as in a password manager.
  • When Docker restarts, you need to unlock the swarm; a locked swarm returns an error when you try to start or restart a service. To see this, restart Docker:
$ sudo systemctl restart docker
  • To unlock a locked swarm, use docker swarm unlock.
$ docker swarm unlock
Please enter unlock key: 

View the current unlock key for a running swarm

$ docker swarm unlock-key
To unlock a swarm manager after it restarts, run the `docker swarm unlock`
command and provide the following key:

    SWMKEY-1-XXXXXXXXXXXXXXXXXXXXXXX

Please remember to store this key in a password manager, since without it you
will not be able to restart the manager.

Rotate the unlock key

$ docker swarm unlock-key --rotate
Successfully rotated manager unlock key.

To unlock a swarm manager after it restarts, run the `docker swarm unlock`
command and provide the following key:

    SWMKEY-1-XXXXXXXXXXXXXXXXXXXXXXX

Please remember to store this key in a password manager, since without it you
will not be able to restart the manager.

Disable autolock

docker swarm update --autolock=false

Mount Volumes with Swarm

  • To mount/create the volume using Swarm
$ docker service create --name mytestvolservice --mount type=volume,source=mytestvol,target=/mytestvol nginx
uh2t9a60p11f7ylehr6jrv0uo
overall progress: 1 out of 1 tasks 
1/1: running   
verify: Service converged 
  • To verify it
$ docker service ps mytestvolservice
ID                  NAME                 IMAGE               NODE                          DESIRED STATE       CURRENT STATE            ERROR               PORTS
yzrwdg5vwgbl        mytestvolservice.1   nginx:latest        plakhera13c.mylabserver.com   Running             Running 13 seconds ago                       
  • As you can see, the task has been scheduled on the plakhera13c machine; let’s log in to that machine
$ docker volume ls
DRIVER              VOLUME NAME
local               mytestvol
  • Get more information about the volume
$ docker volume inspect mytestvol
[
    {
        "CreatedAt": "2019-10-21T01:43:03Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/mytestvol/_data", <---
        "Name": "mytestvol",
        "Options": null,
        "Scope": "local"
    }
]
$ sudo ls -l /var/lib/docker/volumes/mytestvol/_data
total 0
  • Let’s log in to the container
$ docker exec -it c98ffe8fe655 bash
# cd mytestvol/
# touch mytestfile
  • Log out of the container and check whether the file exists on the Docker host
$ sudo ls -l /var/lib/docker/volumes/mytestvol/_data
total 0
-rw-r--r--. 1 root root 0 Oct 21 01:45 mytestfile
  • Now the question is: what will happen if we remove the service? Does the volume still exist?
$ docker service rm mytestvolservice
mytestvolservice
  • Let’s verify it: yes, it still exists 🙂
$ docker volume ls
DRIVER              VOLUME NAME
local               mytestvol

Add or remove label metadata

Node labels provide a flexible method of node organization. You can also use node labels in service constraints. Apply constraints when you create a service to limit the nodes where the scheduler assigns tasks for the service.

  • First, let’s get the node ID
$ docker node ls
ID                            HOSTNAME                      STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ws27unxgekvajgwtnj43tywsy *   plakhera12c.mylabserver.com   Ready               Active              Leader              19.03.4
ce05mu24b2p600ecejmy5gj8x     plakhera13c.mylabserver.com   Ready               Active                                  19.03.4
s0tcp7y6sw5l4bc0d622mtv21     plakhera14c.mylabserver.com   Ready               Active                                  19.03.4
  • Apply the label
$ docker node update --label-add region=us-west-2 s0tcp7y6sw5l4bc0d622mtv21
s0tcp7y6sw5l4bc0d622mtv21
  • Verify it
$ docker node inspect s0tcp7y6sw5l4bc0d622mtv21
[
    {
        "ID": "s0tcp7y6sw5l4bc0d622mtv21",
        "Version": {
            "Index": 145
        },
        "CreatedAt": "2019-10-23T03:08:41.624202115Z",
        "UpdatedAt": "2019-10-23T05:29:34.502968372Z",
        "Spec": {
            "Labels": { <----------
                "region": "us-west-2"
            },
  • Create the service using the constraint
$ docker service create --name myserviceconstraint1 --constraint node.labels.region==us-west-2 --replicas 1 nginx
s1cq2avedvt1xeetbjv353j0a
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service converged 
  • Verify it
$ docker service ps myserviceconstraint1
ID                  NAME                     IMAGE               NODE                          DESIRED STATE       CURRENT STATE            ERROR               PORTS
shjn105n69hi        myserviceconstraint1.1   nginx:latest        plakhera14c.mylabserver.com   Running             Running 24 seconds ago                       
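To cover the "remove" half of this topic's title: a label is dropped with --label-rm, which takes just the key. A sketch using the node ID from above:

```shell
# Remove the region label from the node
docker node update --label-rm region s0tcp7y6sw5l4bc0d622mtv21

# Print only the labels to confirm it is gone
docker node inspect --format '{{ json .Spec.Labels }}' s0tcp7y6sw5l4bc0d622mtv21
```

Already-running tasks are not rescheduled when a label changes; constraints are evaluated only when the scheduler places new tasks.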

Please follow me on my journey

This time to make learning more interactive, I am adding

  • Slack
  • Meetup

Please feel free to join this group.

Slack: 

https://100daysofdevops.slack.com/join/shared_invite/enQtNzg1MjUzMzQzMzgxLWM4Yjk0ZWJiMjY4ZWE3ODBjZjgyYTllZmUxNzFkNTgxZjQ4NDlmZjkzODAwNDczOTYwOTM2MzlhZDNkM2FkMDA

Meetup Group

If you are in the bay area, please join this meetup group https://www.meetup.com/100daysofdevops/

21 Days of Docker-Day 14 -Introduction to Docker Swarm- Part 1

What is Docker Swarm?

Docker Swarm is a clustering and scheduling tool for Docker containers. With Swarm, you can establish and manage a cluster of Docker nodes as a single virtual system.

Swarm Manager: The swarm manager receives commands on behalf of the cluster and assigns containers to swarm nodes.

Worker Node: Responsible for running container workloads.

Service: To deploy your application to a swarm, you submit a service definition to a manager node. The manager node then dispatches units of work called tasks to worker nodes.
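To make the service/task terminology concrete, here is a hedged sketch (assumes a swarm is already initialized; web is an illustrative service name):

```shell
# A service declares desired state: 3 replicas of nginx
docker service create --name web --replicas 3 nginx

# Each replica becomes a task, scheduled onto a node in the swarm
docker service ps web
```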

Installing and Configuring Docker Swarm

Setup

One Swarm Manager
Two Worker Nodes
  • On all three servers, install Docker Community Edition.
# Update all the package listing
sudo apt-get update

# Install dependent packages
sudo apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

# Download and Install GPG Key for Docker Repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Add the Docker Repository
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Update all the package listing based on new repository
sudo apt-get update

# Install Docker Package
sudo apt-get install -y docker-ce=5:18.09.5~3-0~ubuntu-bionic docker-ce-cli=5:18.09.5~3-0~ubuntu-bionic containerd.io

NOTE: Docker Swarm comes bundled with Docker, so we just need to install the Docker package.

  • Add non-root users to the Docker group so that you can run docker commands as non-root/normal users.
sudo usermod -a -G docker <username>

NOTE: Log out of each server, then log back in, for the group change to take effect.

  • Verify the Docker version
$ docker version
 Client:
 Version:           18.09.5
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        e8ff056
 Built:             Thu Apr 11 04:43:57 2019
 OS/Arch:           linux/amd64
 Experimental:      false
 Server: Docker Engine - Community
 Engine:
 Version:          18.09.5
 API version:      1.39 (minimum version 1.12)
 Go version:       go1.10.8
 Git commit:       e8ff056
 Built:            Thu Apr 11 04:10:53 2019
 OS/Arch:          linux/amd64
 Experimental:     false 

Configuring the Swarm Manager

  • On the swarm manager server, initialize the swarm
docker swarm init --advertise-addr <swarm manager private IP>
$ docker swarm init --advertise-addr 10.0.1.101
 Swarm initialized: current node (orkjv9q2gitaypqlnp93lq5dd) is now a manager.
 To add a worker to this swarm, run the following command:
 docker swarm join --token SWMTKN-1-2azpcn2q6gblghcrp1jmiqp5f4oi49jfr7g2yveezvginsodul-cykhommuu9w8vnsi6tcxe063b 10.0.1.101:2377
 To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions. 
  • If you missed this command, you can retrieve it later
$ docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-2azpcn2q6gblghcrp1jmiqp5f4oi49jfr7g2yveezvginsodul-cykhommuu9w8vnsi6tcxe063b 10.0.1.101:2377
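Two related join-token commands are worth knowing: printing the manager token, and rotating a token if it has leaked (rotation does not affect nodes that already joined):

```shell
# Print the command for joining as a manager instead of a worker
docker swarm join-token manager

# Invalidate the old worker token and generate a new one
docker swarm join-token --rotate worker
```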

Adding the Worker Node to the Cluster

  • On both worker nodes, execute this command
$  docker swarm join --token SWMTKN-1-2azpcn2q6gblghcrp1jmiqp5f4oi49jfr7g2yveezvginsodul-cykhommuu9w8vnsi6tcxe063b 10.0.1.101:2377
This node joined a swarm as a worker.
  • Go back to the swarm manager and list all the nodes
$ docker node ls
 ID                            HOSTNAME                      STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
 w45sgx9gjpijtrq445avrd9wt *   plakhera12c.mylabserver.com   Ready               Active              Leader              19.03.4
 b5lyurguajincw1l9see0etr7     plakhera13c.mylabserver.com   Ready               Active                                  19.03.4
 x1v3x9ejsnpzgateb9xhbi431     plakhera14c.mylabserver.com   Ready               Active                                  19.03.4 
  • The * in front of plakhera12c.mylabserver.com shows it is the manager/Leader node
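Node roles can also be changed after joining; a sketch using the hostnames above (for high availability you generally want an odd number of managers):

```shell
# Promote a worker to a manager
docker node promote plakhera13c.mylabserver.com

# Demote it back to a worker
docker node demote plakhera13c.mylabserver.com
```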

21 Days of Docker-Day 13 -Docker Storage – Part 2

Welcome to Day 13 of 21 Days of Docker. In the first part of the series, we saw the issues you face if you store data inside the container, so you need a reliable place to store your data.

Docker has two options for containers to store files on the host machine so that they persist even after the container stops (volumes and bind mounts), plus an in-memory option (tmpfs):

  • volumes
  • bind mounts
  • tmpfs
  • Volumes are stored in a part of the host filesystem which is managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem. Volumes are the best way to persist data in Docker.
  • Bind mounts may be stored anywhere on the host system. They may even be important system files or directories. Non-Docker processes on the Docker host or a Docker container can modify them at any time.
  • tmpfs mounts are stored in the host system’s memory only, and are never written to the host system’s filesystem.
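All three options can be expressed with the --mount flag; a hedged sketch (the container names, volume name, and host path are illustrative, and the bind-mount source must already exist on the host):

```shell
# volume: managed by Docker under /var/lib/docker/volumes
docker run -dt --name c1 --mount type=volume,source=myvol,target=/app alpine sh

# bind mount: any path on the host
docker run -dt --name c2 --mount type=bind,source=/srv/appdata,target=/app alpine sh

# tmpfs: kept in memory only, never written to disk
docker run -dt --name c3 --mount type=tmpfs,target=/app alpine sh
```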

One more issue worth highlighting: writing into the container’s writable layer goes through the storage driver, and this extra abstraction reduces performance compared to data volumes, which write directly to the host filesystem.

Before digging deeper into volumes, let’s start with a simple example. Up to this point we know that overlay2 is our default storage driver and that the Docker storage directory is /var/lib/docker.

  • Let’s go one level down
# cd /var/lib/docker/overlay2
# ls -ltr
brw------- 1 root root 202, 1 Oct 18 00:00 backingFsBlockDev
drwx------ 3 root root     47 Oct 18 00:15 7d3b75987d3723a085cb7c7482386f9a75bf2f4b40f7c8f0ba6f3450e9da14a2
drwx------ 2 root root     40 Oct 18 00:17 l
  • Let’s spin up one container
# docker container run -dt alpine sh
6a315533797a336f1b5ee51140eb5dbc2b5d7bc28146ca0ad212ff23146cf5fa
  • Again run ls -ltr in /var/lib/docker/overlay2
# ls -ltr
total 0
brw------- 1 root root 202, 1 Oct 18 00:00 backingFsBlockDev
drwx------ 3 root root     47 Oct 18 00:15 7d3b75987d3723a085cb7c7482386f9a75bf2f4b40f7c8f0ba6f3450e9da14a2
drwx------ 2 root root    108 Oct 18 00:20 l
drwx------ 4 root root     72 Oct 18 00:20 cbcaa1e4c60c97533105b5858ff220d72cc31258dea7456d9f789e369e721af2-init <---
drwx------ 5 root root     69 Oct 18 00:20 cbcaa1e4c60c97533105b5858ff220d72cc31258dea7456d9f789e369e721af2 <---
  • You will see two additional layers, which form the container’s writable layer (the -init layer plus the read-write layer)
  • Now let’s stop and delete the container
# docker stop 6a315533797a
6a315533797a
# docker rm 6a315533797a
6a315533797a
  • Again run ls -ltr in /var/lib/docker/overlay2
# ls -ltr
total 0
brw------- 1 root root 202, 1 Oct 18 00:00 backingFsBlockDev
drwx------ 3 root root     47 Oct 18 00:15 7d3b75987d3723a085cb7c7482386f9a75bf2f4b40f7c8f0ba6f3450e9da14a2
drwx------ 2 root root     40 Oct 18 00:22 l
  • Those two layers are gone now, and with them any data stored in the writable layer, so we need a way to store our data persistently
  • To check whether any volume is present on your system
# docker volume ls
DRIVER              VOLUME NAME
  • To create a volume
# docker volume create mytestvol
mytestvol
  • Verify it again
# docker volume ls
DRIVER              VOLUME NAME
local               mytestvol
  • Now run a container using this Volume
# docker container run -dt --name voltest -v mytestvol:/etc alpine sh
0dbffcc4beb99787ee45d6dbfd2f87ab0ba819614ce9bd6403d6b2e9c5edb8
  • To get more information about the Volume
# docker volume inspect mytestvol
[
    {
        "CreatedAt": "2019-10-18T00:27:51Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/mytestvol/_data", <------
        "Name": "mytestvol",
        "Options": {},
        "Scope": "local"
    }
]
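One behavior worth verifying yourself: because the container mounted the empty named volume over /etc, which already contains files in the alpine image, Docker copied the image’s /etc into the volume on first mount, so the volume is no longer empty:

```shell
# The volume should now contain the alpine image's /etc files
sudo ls /var/lib/docker/volumes/mytestvol/_data | head
```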