21 Days of AWS using Terraform – Day 1 – Introduction to Terraform

GitHub Link

https://github.com/100daysofdevops/21_days_of_aws_using_terraform

Hello everyone, and welcome to Day 1 of 21 Days of AWS using Terraform. The topic for today is Introduction to Terraform. Let's begin our AWS automation journey using Terraform.

What is Terraform?

Terraform is a tool for provisioning infrastructure (i.e., managing Infrastructure as Code). It supports multiple providers (e.g., AWS, Google Cloud, Azure, OpenStack).

https://www.terraform.io/docs/providers/index.html

Installing Terraform

Installing Terraform is pretty straightforward: download it from the Terraform downloads page and select the appropriate package for your operating system.

https://learn.hashicorp.com/terraform/getting-started/install.html

https://www.terraform.io/downloads.html

I am using CentOS 7, so these are the steps I need to follow to install Terraform on CentOS 7.

Step 1: Download the package
$ wget https://releases.hashicorp.com/terraform/0.12.13/terraform_0.12.13_linux_amd64.zip
--2019-11-08 18:47:29--  https://releases.hashicorp.com/terraform/0.12.13/terraform_0.12.13_linux_amd64.zip
Resolving releases.hashicorp.com (releases.hashicorp.com)... 2a04:4e42:2f::439, 151.101.201.183
Connecting to releases.hashicorp.com (releases.hashicorp.com)|2a04:4e42:2f::439|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16341595 (16M) [application/zip]
Saving to: ‘terraform_0.12.13_linux_amd64.zip’

100%[==================================================================================================================================================================>] 16,341,595  --.-K/s   in 0.09s   

2019-11-08 18:47:29 (172 MB/s) - ‘terraform_0.12.13_linux_amd64.zip’ saved [16341595/16341595]

Step 2: Unzip it
$ unzip terraform_0.12.13_linux_amd64.zip 
Archive:  terraform_0.12.13_linux_amd64.zip
  inflating: terraform      

Step 3: Add the binary to a directory in your PATH
sudo cp terraform /usr/local/bin/
sudo chmod +x /usr/local/bin/terraform

Step 4: Log out and log back in
  • To verify Terraform is installed properly:
$ terraform version
Terraform v0.12.13

As mentioned above, Terraform supports many providers; for my use case I am using AWS.

Prerequisites

1: An existing AWS account (or set up a new account)
2: IAM full access (or at least AmazonEC2FullAccess)
3: AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY)

Once you have prerequisites 1 and 2 done, the first step is to export the keys AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

export AWS_ACCESS_KEY_ID="your access key id here"
export AWS_SECRET_ACCESS_KEY="your secret access key here"

NOTE: These two variables are bound to your current shell; if you reboot or open a new shell window, these settings will be lost.
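If you want these variables to persist across shell sessions, a minimal sketch (assuming a Bash shell, and keeping in mind that storing credentials in plain text has security implications) is to append the exports to your ~/.bash_profile:

echo 'export AWS_ACCESS_KEY_ID="your access key id here"' >> ~/.bash_profile
echo 'export AWS_SECRET_ACCESS_KEY="your secret access key here"' >> ~/.bash_profile
source ~/.bash_profile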

With all prerequisites in place, it's time to write your first Terraform code, but before that, here is a brief overview of the Terraform language:

  • Terraform code is written in the HashiCorp Configuration Language (HCL)
  • All Terraform files end with the .tf extension
  • It's a declarative language (we define what infrastructure we want and Terraform figures out how to create it)

In this first example I am going to build an EC2 instance, but before creating it, go to the AWS console and think about the different things we need to build an EC2 instance:

  • Amazon Machine Image(AMI)
  • Instance Type
  • Network Information(VPC/Subnet)
  • Tags
  • Security Group
  • Key Pair

Let's break these down step by step.

  • Amazon Machine Image (AMI): It's an operating system image used to run an EC2 instance. For this example, I am using CentOS 7 (ami-01ed306a12b7d1c96). We can create our own AMI using the AWS console or Packer.

https://www.packer.io/intro/

Another way to find the AMI ID for the CentOS 7 image is:

$ aws ec2 describe-images \
>       --owners aws-marketplace \
>       --filters Name=product-code,Values=aw0evgkw8e5c1q413zgy5pjce \
>                 "Name=name,Values=CentOS Linux 7*" \
>       --query 'Images[*].[CreationDate,Name,ImageId]' \
>       --region us-west-2 \
>       --output table \
>   | sort -r
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
|                                                                        DescribeImages                                                                         |
+--------------------------+----------------------------------------------------------------------------------------------------------+-------------------------+
|  2019-01-30T23:43:37.000Z|  CentOS Linux 7 x86_64 HVM EBS ENA 1901_01-b7ee8a69-ee97-4a49-9e68-afaee216db2e-ami-05713873c6794f575.4  |  ami-01ed306a12b7d1c96  |
|  2018-06-13T15:58:14.000Z|  CentOS Linux 7 x86_64 HVM EBS ENA 1805_01-b7ee8a69-ee97-4a49-9e68-afaee216db2e-ami-77ec9308.4           |  ami-3ecc8f46           |
|  2018-05-17T09:30:44.000Z|  CentOS Linux 7 x86_64 HVM EBS ENA 1804_2-b7ee8a69-ee97-4a49-9e68-afaee216db2e-ami-55a2322a.4            |  ami-5490ed2c           |
|  2018-04-04T00:11:39.000Z|  CentOS Linux 7 x86_64 HVM EBS ENA 1803_01-b7ee8a69-ee97-4a49-9e68-afaee216db2e-ami-8274d6ff.4           |  ami-0ebdd976           |
|  2017-12-05T14:49:18.000Z|  CentOS Linux 7 x86_64 HVM EBS 1708_11.01-b7ee8a69-ee97-4a49-9e68-afaee216db2e-ami-95096eef.4            |  ami-b63ae0ce           |
+--------------------------+----------------------------------------------------------------------------------------------------------+-------------------------+

For more info please refer to AWS official doc

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html

  • Instance Type: The type of EC2 instance to run; every instance type provides different capabilities (CPU, memory, I/O). For this example, I am using t2.micro (1 virtual CPU, 1 GB memory).

For more info please refer to Amazon EC2 instance type

https://aws.amazon.com/ec2/instance-types/

  • Network Information (VPC/Subnet ID): A Virtual Private Cloud (VPC) is an isolated area of an AWS account that has its own virtual network and IP address space. For this example, I am using the default VPC that comes with a new AWS account. If you want to set up your instance in a custom VPC, you need to add two additional parameters (vpc_security_group_ids and subnet_id) to your Terraform code.

For more information about VPC, please refer

https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html

Tags: A tag can be thought of as a label that helps categorize AWS resources.

For more information about tags, please refer

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html

Security Group: A security group can be thought of as a virtual firewall that controls the traffic to and from your instance.

For more information about Security Group, please refer

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html

Key Pair: A key pair is used to SSH into an EC2 instance.

For more info about Key Pair, please refer to

https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#key-pairs

Let's review all these parameters and see what we already have and what we need to create to spin up our first EC2 instance:

  • Amazon Machine Image(AMI) → ami-01ed306a12b7d1c96
  • Instance Type → t2.micro
  • Network Information(VPC/Subnet) → Default VPC
  • Tags
  • Security Group
  • Key Pair

So we already have the AMI, instance type, and network information; we need to write Terraform code for the rest of the parameters to spin up our first EC2 instance.

Let's start with the key pair.

Go to the Terraform documentation and search for aws key pair:

https://www.terraform.io/docs/providers/aws/r/key_pair.html

resource "aws_key_pair" "example" {
  key_name = "example-key"
  public_key = ""
}

Let's try to dissect this code:

  • Under resource, we specify the type of resource to create; in this case we are using aws_key_pair as the resource
  • example is an identifier which we can use throughout the code to refer back to this resource
  • key_name: the name of the key
  • public_key: the public portion of the generated SSH key (see the sketch below if you need to generate one)
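If you don't have an SSH key pair yet, a minimal sketch to generate one (the file name example-key is just an illustration):

# Generate a 4096-bit RSA key pair; the .pub file is the public portion
ssh-keygen -t rsa -b 4096 -f ~/.ssh/example-key
# Print the public key, then paste it into the public_key argument
cat ~/.ssh/example-key.pub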

We need to do the same thing for the security group: go back to the Terraform documentation and search for security group.

https://www.terraform.io/docs/providers/aws/r/security_group.html

resource "aws_security_group" "examplesg" {
  
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
  • Let's do the same with this code; the first two parameters are similar to the key pair
  • ingress: refers to inbound traffic on port 22, using protocol tcp
  • cidr_blocks: the list of CIDR blocks from which to allow this traffic (this is just an example; please never use 0.0.0.0/0 in real environments; see the sketch below for a tighter rule)
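For example, a slightly safer sketch (203.0.113.10 is a placeholder documentation address; substitute your own public IP) restricts SSH to a single address:

resource "aws_security_group" "examplesg" {
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.10/32"] # only this IP can reach port 22
  }
}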

With the key pair and security group in place, it's time to create the first EC2 instance.

https://www.terraform.io/docs/providers/aws/r/instance.html

resource "aws_instance" "ec2_instance" {
  ami = "ami-01ed306a12b7d1c96"
  instance_type = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.examplesg.id}"]
  key_name = "${aws_key_pair.example.id}"
  tags = {
    Name = "my-first-ec2-instance"
  }
}
  • In this case we just need to check the doc and fill out all the remaining values
  • To refer back to the security group, we need to use Terraform's interpolation syntax

https://www.terraform.io/docs/configuration-0-11/interpolation.html

This looks like:

"${var_to_interpolate}"

Whenever you see a $ sign and curly braces inside double quotes, it means Terraform is going to interpolate that expression. To get the ID of the security group:

"${aws_security_group.examplesg.id}"

The same applies to the key pair:

"${aws_key_pair.example.id}"

Our code is almost ready, but we are missing one thing: the provider. Before running any code we need to tell Terraform which provider we are using (AWS in this case).

provider "aws" {
region = "us-west-2"
}
  • This tells Terraform that you are going to use AWS as the provider and you want to deploy your infrastructure in the us-west-2 region
  • AWS has data centers all over the world, which are grouped into regions and availability zones. A region is a separate geographic area (Oregon, Virginia, Sydney), and each region has multiple isolated data centers (us-west-2a, us-west-2b, ...)

For more info about regions and availability zones, please refer to the below doc

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html

provider "aws"{
  region = "us-west-2"
}

resource "aws_key_pair" "example" {
  key_name = "example-key"
  public_key = "ssh public key"
}


resource "aws_security_group" "examplesg" {
  ingress {
    from_port = 22
    protocol = "tcp"
    to_port = 22
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "ec2_instance" {
  ami = "ami-01ed306a12b7d1c96"
  instance_type = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.examplesg.id}"]
  key_name = "${aws_key_pair.example.id}"
  tags = {
    Name = "my-first-ec2-instance"
  }
}
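Optionally, we can add an output block so Terraform prints the instance's public IP at the end of apply. This is a small sketch using Terraform 0.12 syntax, where plain references no longer need interpolation quotes:

output "instance_public_ip" {
  description = "Public IP of the EC2 instance"
  value       = aws_instance.ec2_instance.public_ip
}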

Before running any Terraform commands to spin up our first EC2 instance, run the terraform fmt command, which rewrites Terraform configuration files to a canonical format and style.

$ terraform  fmt
main.tf
  • The first command we are going to run to set up our instance is terraform init, which downloads the code for the provider (AWS) that we are going to use.
  • The Terraform binary contains the basic functionality for Terraform, but it doesn't come with the code for any of the providers (e.g., AWS, Azure, and GCP), so when we first start using Terraform we need to run terraform init to tell Terraform to scan the code, figure out which providers we are using, and download the code for them.
  • By default, the provider code will be downloaded into a .terraform directory, which is a scratch directory (we may want to add it to .gitignore).

NOTE: It’s safe to run terraform init command multiple times as it’s idempotent.

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.35.0...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 2.35"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
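As the output suggests, it's good practice to pin the provider version so a future terraform init doesn't pull a new major release with breaking changes; a minimal sketch:

provider "aws" {
  region  = "us-west-2"
  version = "~> 2.35" # any 2.x release from 2.35 onward, but not 3.0
}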
  • The next command we are going to run is terraform plan, which tells us what Terraform will actually do before making any changes
  • This is a good way to sanity-check changes before applying them to the environment
  • The output of the terraform plan command looks similar to the output of the Linux diff command:

1: (+ sign): resource going to be created

2: (- sign): resource going to be deleted

3: (~ sign): resource going to be modified

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.ec2_instance will be created
  + resource "aws_instance" "ec2_instance" {
      + ami                          = "ami-01ed306a12b7d1c96"
      + arn                          = (known after apply)
      + associate_public_ip_address  = (known after apply)
      + availability_zone            = (known after apply)
      + cpu_core_count               = (known after apply)
      + cpu_threads_per_core         = (known after apply)
      + get_password_data            = false
      + host_id                      = (known after apply)
      + id                           = (known after apply)
      + instance_state               = (known after apply)
      + instance_type                = "t2.micro"
      + ipv6_address_count           = (known after apply)
      + ipv6_addresses               = (known after apply)
      + key_name                     = (known after apply)
      + network_interface_id         = (known after apply)
      + password_data                = (known after apply)
      + placement_group              = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns                  = (known after apply)
      + private_ip                   = (known after apply)
      + public_dns                   = (known after apply)
      + public_ip                    = (known after apply)
      + security_groups              = (known after apply)
      + source_dest_check            = true
      + subnet_id                    = (known after apply)
      + tags                         = {
          + "Name" = "my-first-ec2-instance"
        }
      + tenancy                      = (known after apply)
      + volume_tags                  = (known after apply)
      + vpc_security_group_ids       = (known after apply)

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + snapshot_id           = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      + ephemeral_block_device {
          + device_name  = (known after apply)
          + no_device    = (known after apply)
          + virtual_name = (known after apply)
        }

      + network_interface {
          + delete_on_termination = (known after apply)
          + device_index          = (known after apply)
          + network_interface_id  = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }
    }

  # aws_key_pair.example will be created
  + resource "aws_key_pair" "example" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + key_name    = "example-key"
      + public_key  = ""
    }

  # aws_security_group.examplesg will be created
  + resource "aws_security_group" "examplesg" {
      + arn                    = (known after apply)
      + description            = "Managed by Terraform"
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = [
          + {
              + cidr_blocks      = [
                  + "0.0.0.0/0",
                ]
              + description      = ""
              + from_port        = 22
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "tcp"
              + security_groups  = []
              + self             = false
              + to_port          = 22
            },
        ]
      + name                   = (known after apply)
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + vpc_id                 = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
  • To apply these changes, run terraform apply
$ terraform  apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.ec2_instance will be created
  + resource "aws_instance" "ec2_instance" {
      + ami                          = "ami-01ed306a12b7d1c96"
      + arn                          = (known after apply)
      + associate_public_ip_address  = (known after apply)
      + availability_zone            = (known after apply)
      + cpu_core_count               = (known after apply)
      + cpu_threads_per_core         = (known after apply)
      + get_password_data            = false
      + host_id                      = (known after apply)
      + id                           = (known after apply)
      + instance_state               = (known after apply)
      + instance_type                = "t2.micro"
      + ipv6_address_count           = (known after apply)
      + ipv6_addresses               = (known after apply)
      + key_name                     = (known after apply)
      + network_interface_id         = (known after apply)
      + password_data                = (known after apply)
      + placement_group              = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns                  = (known after apply)
      + private_ip                   = (known after apply)
      + public_dns                   = (known after apply)
      + public_ip                    = (known after apply)
      + security_groups              = (known after apply)
      + source_dest_check            = true
      + subnet_id                    = (known after apply)
      + tags                         = {
          + "Name" = "my-first-ec2-instance"
        }
      + tenancy                      = (known after apply)
      + volume_tags                  = (known after apply)
      + vpc_security_group_ids       = (known after apply)

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + snapshot_id           = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      + ephemeral_block_device {
          + device_name  = (known after apply)
          + no_device    = (known after apply)
          + virtual_name = (known after apply)
        }

      + network_interface {
          + delete_on_termination = (known after apply)
          + device_index          = (known after apply)
          + network_interface_id  = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }
    }

  # aws_key_pair.example will be created
  + resource "aws_key_pair" "example" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + key_name    = "example-key"
      + public_key  = ""
    }

  # aws_security_group.examplesg will be created
  + resource "aws_security_group" "examplesg" {
      + arn                    = (known after apply)
      + description            = "Managed by Terraform"
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = [
          + {
              + cidr_blocks      = [
                  + "0.0.0.0/0",
                ]
              + description      = ""
              + from_port        = 22
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "tcp"
              + security_groups  = []
              + self             = false
              + to_port          = 22
            },
        ]
      + name                   = (known after apply)
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + vpc_id                 = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes <------------------------

aws_key_pair.example: Creating...
aws_security_group.examplesg: Creating...
aws_key_pair.example: Creation complete after 0s [id=example-key]
aws_security_group.examplesg: Creation complete after 2s [id=sg-0e1a943b062aa2315]
aws_instance.ec2_instance: Creating...
aws_instance.ec2_instance: Still creating... [10s elapsed]
aws_instance.ec2_instance: Still creating... [20s elapsed]
aws_instance.ec2_instance: Still creating... [30s elapsed]
aws_instance.ec2_instance: Creation complete after 33s [id=i-0d1ab0d8c1fc57f3d]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
  • What Terraform is doing here is reading the code and translating it into API calls to the provider (AWS in this case)

W00t! You have deployed your first EC2 server using Terraform.

Go back to the EC2 console to verify your first deployed server

Let's say that after verification you realize you need to give this server a more meaningful tag; the rest of the code remains the same and you modify only the tags parameter:

  tags = {
    Name = "my-webserver-instance"
  }
  • Run the terraform plan command again; as you can see, only the tag changes:
  ~ tags                         = {
      ~ "Name" = "my-first-ec2-instance" -> "my-webserver-instance"
    }
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

aws_key_pair.example: Refreshing state... [id=example-key]
aws_security_group.examplesg: Refreshing state... [id=sg-0e1a943b062aa2315]
aws_instance.ec2_instance: Refreshing state... [id=i-0d1ab0d8c1fc57f3d]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # aws_instance.ec2_instance will be updated in-place
  ~ resource "aws_instance" "ec2_instance" {
        ami                          = "ami-01ed306a12b7d1c96"
        arn                          = "arn:aws:ec2:us-west-2:355622012945:instance/i-0d1ab0d8c1fc57f3d"
        associate_public_ip_address  = true
        availability_zone            = "us-west-2a"
        cpu_core_count               = 1
        cpu_threads_per_core         = 1
        disable_api_termination      = false
        ebs_optimized                = false
        get_password_data            = false
        id                           = "i-0d1ab0d8c1fc57f3d"
        instance_state               = "running"
        instance_type                = "t2.micro"
        ipv6_address_count           = 0
        ipv6_addresses               = []
        key_name                     = "example-key"
        monitoring                   = false
        primary_network_interface_id = "eni-0e5ddbe136b7d599d"
        private_dns                  = "ip-172-31-28-131.us-west-2.compute.internal"
        private_ip                   = "172.31.28.131"
        public_dns                   = "ec2-54-185-56-146.us-west-2.compute.amazonaws.com"
        public_ip                    = "54.185.56.146"
        security_groups              = [
            "terraform-20191110052042521000000001",
        ]
        source_dest_check            = true
        subnet_id                    = "subnet-3b929b42"
      ~ tags                         = {
          ~ "Name" = "my-first-ec2-instance" -> "my-webserver-instance"
        }
        tenancy                      = "default"
        volume_tags                  = {}
        vpc_security_group_ids       = [
            "sg-0e1a943b062aa2315",
        ]

        credit_specification {
            cpu_credits = "standard"
        }

        root_block_device {
            delete_on_termination = false
            encrypted             = false
            iops                  = 100
            volume_id             = "vol-0f48e9a8f42ac6dc4"
            volume_size           = 8
            volume_type           = "gp2"
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

  • Now, if we think about it, how does Terraform know that only the tag parameter changed and nothing else?
  • Terraform keeps track of all the resources it has already created in .tfstate files, so it's aware of the resources that already exist.
$ ls -la
total 28
drwxr-xr-x  4 prashant prashant 4096 Nov  9 21:25 .
drwxr-xr-x 31 prashant prashant 4096 Nov  9 21:14 ..
-rw-r--r--  1 prashant prashant 1009 Nov  9 21:24 main.tf
drwxr-xr-x  3 prashant prashant 4096 Nov  9 21:15 .terraform
-rw-rw-r--  1 prashant prashant 5348 Nov  9 21:21 terraform.tfstate
  • If you notice, at the top it says “Refreshing Terraform state in-memory prior to plan…”
  • Run terraform apply again to apply the tag change, then refresh the EC2 console in your web browser to see it.
$ terraform apply
aws_key_pair.example: Refreshing state... [id=example-key]
aws_security_group.examplesg: Refreshing state... [id=sg-0e1a943b062aa2315]
aws_instance.ec2_instance: Refreshing state... [id=i-0d1ab0d8c1fc57f3d]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # aws_instance.ec2_instance will be updated in-place
  ~ resource "aws_instance" "ec2_instance" {
        ami                          = "ami-01ed306a12b7d1c96"
        arn                          = "arn:aws:ec2:us-west-2:355622012945:instance/i-0d1ab0d8c1fc57f3d"
        associate_public_ip_address  = true
        availability_zone            = "us-west-2a"
        cpu_core_count               = 1
        cpu_threads_per_core         = 1
        disable_api_termination      = false
        ebs_optimized                = false
        get_password_data            = false
        id                           = "i-0d1ab0d8c1fc57f3d"
        instance_state               = "running"
        instance_type                = "t2.micro"
        ipv6_address_count           = 0
        ipv6_addresses               = []
        key_name                     = "example-key"
        monitoring                   = false
        primary_network_interface_id = "eni-0e5ddbe136b7d599d"
        private_dns                  = "ip-172-31-28-131.us-west-2.compute.internal"
        private_ip                   = "172.31.28.131"
        public_dns                   = "ec2-54-185-56-146.us-west-2.compute.amazonaws.com"
        public_ip                    = "54.185.56.146"
        security_groups              = [
            "terraform-20191110052042521000000001",
        ]
        source_dest_check            = true
        subnet_id                    = "subnet-3b929b42"
      ~ tags                         = {
          ~ "Name" = "my-first-ec2-instance" -> "my-webserver-instance"
        }
        tenancy                      = "default"
        volume_tags                  = {}
        vpc_security_group_ids       = [
            "sg-0e1a943b062aa2315",
        ]

        credit_specification {
            cpu_credits = "standard"
        }

        root_block_device {
            delete_on_termination = false
            encrypted             = false
            iops                  = 100
            volume_id             = "vol-0f48e9a8f42ac6dc4"
            volume_size           = 8
            volume_type           = "gp2"
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_instance.ec2_instance: Modifying... [id=i-0d1ab0d8c1fc57f3d]
aws_instance.ec2_instance: Modifications complete after 2s [id=i-0d1ab0d8c1fc57f3d]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

In most cases we work in a team where we want to share this code with the rest of the team members, and the best way to share code is by using Git.

git add main.tf
git commit -m "first terraform EC2 instance"
vim .gitignore
git add .gitignore
git commit -m "Adding gitignore file for terraform repository"
  • Via .gitignore we are telling Git to ignore the .terraform folder (Terraform's scratch directory) and all *.tfstate files (as these files may contain secrets)
$ cat .gitignore
.terraform
*.tfstate
*.tfstate.backup
  • Create a shared git repository
git remote add origin https://github.com/<user name>/terraform.git
  • Push the code
$ git push -u origin master
  • To clean up everything we have created so far, run terraform destroy
$ terraform destroy
aws_key_pair.example: Refreshing state... [id=example-key]
aws_security_group.examplesg: Refreshing state... [id=sg-0e1a943b062aa2315]
aws_instance.ec2_instance: Refreshing state... [id=i-0d1ab0d8c1fc57f3d]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_instance.ec2_instance will be destroyed
  - resource "aws_instance" "ec2_instance" {
      - ami                          = "ami-01ed306a12b7d1c96" -> null
      - arn                          = "arn:aws:ec2:us-west-2:355622012945:instance/i-0d1ab0d8c1fc57f3d" -> null
      - associate_public_ip_address  = true -> null
      - availability_zone            = "us-west-2a" -> null
      - cpu_core_count               = 1 -> null
      - cpu_threads_per_core         = 1 -> null
      - disable_api_termination      = false -> null
      - ebs_optimized                = false -> null
      - get_password_data            = false -> null
      - id                           = "i-0d1ab0d8c1fc57f3d" -> null
      - instance_state               = "running" -> null
      - instance_type                = "t2.micro" -> null
      - ipv6_address_count           = 0 -> null
      - ipv6_addresses               = [] -> null
      - key_name                     = "example-key" -> null
      - monitoring                   = false -> null
      - primary_network_interface_id = "eni-0e5ddbe136b7d599d" -> null
      - private_dns                  = "ip-172-31-28-131.us-west-2.compute.internal" -> null
      - private_ip                   = "172.31.28.131" -> null
      - public_dns                   = "ec2-54-185-56-146.us-west-2.compute.amazonaws.com" -> null
      - public_ip                    = "54.185.56.146" -> null
      - security_groups              = [
          - "terraform-20191110052042521000000001",
        ] -> null
      - source_dest_check            = true -> null
      - subnet_id                    = "subnet-3b929b42" -> null
      - tags                         = {
          - "Name" = "my-webserver-instance"
        } -> null
      - tenancy                      = "default" -> null
      - volume_tags                  = {} -> null
      - vpc_security_group_ids       = [
          - "sg-0e1a943b062aa2315",
        ] -> null

      - credit_specification {
          - cpu_credits = "standard" -> null
        }

      - root_block_device {
          - delete_on_termination = false -> null
          - encrypted             = false -> null
          - iops                  = 100 -> null
          - volume_id             = "vol-0f48e9a8f42ac6dc4" -> null
          - volume_size           = 8 -> null
          - volume_type           = "gp2" -> null
        }
    }

  # aws_key_pair.example will be destroyed
  - resource "aws_key_pair" "example" {
      - fingerprint = "45:dd:55:74:bf:a3:07:3f:c7:02:9a:c1:9a:37:bf:40" -> null
      - id          = "example-key" -> null
      - key_name    = "example-key" -> null
      - public_key  = ""
    }

  # aws_security_group.examplesg will be destroyed
  - resource "aws_security_group" "examplesg" {
      - arn                    = "arn:aws:ec2:us-west-2:355622012945:security-group/sg-0e1a943b062aa2315" -> null
      - description            = "Managed by Terraform" -> null
      - egress                 = [] -> null
      - id                     = "sg-0e1a943b062aa2315" -> null
      - ingress                = [
          - {
              - cidr_blocks      = [
                  - "0.0.0.0/0",
                ]
              - description      = ""
              - from_port        = 22
              - ipv6_cidr_blocks = []
              - prefix_list_ids  = []
              - protocol         = "tcp"
              - security_groups  = []
              - self             = false
              - to_port          = 22
            },
        ] -> null
      - name                   = "terraform-20191110052042521000000001" -> null
      - owner_id               = "355622012945" -> null
      - revoke_rules_on_delete = false -> null
      - tags                   = {} -> null
      - vpc_id                 = "vpc-79e55a01" -> null
    }

Plan: 0 to add, 0 to change, 3 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_instance.ec2_instance: Destroying... [id=i-0d1ab0d8c1fc57f3d]
aws_instance.ec2_instance: Still destroying... [id=i-0d1ab0d8c1fc57f3d, 10s elapsed]
aws_instance.ec2_instance: Still destroying... [id=i-0d1ab0d8c1fc57f3d, 20s elapsed]
aws_instance.ec2_instance: Still destroying... [id=i-0d1ab0d8c1fc57f3d, 30s elapsed]
aws_instance.ec2_instance: Destruction complete after 30s
aws_key_pair.example: Destroying... [id=example-key]
aws_security_group.examplesg: Destroying... [id=sg-0e1a943b062aa2315]
aws_key_pair.example: Destruction complete after 0s
aws_security_group.examplesg: Destruction complete after 0s

Destroy complete! Resources: 3 destroyed.
  • Looking forward to you guys joining this journey

In addition to that, I am going to host 5 meetups whose aim is to build the below architecture.

  • Meetup: https://www.meetup.com/100daysofdevops
  • Day1(Nov 10): Introduction to Terraform https://www.meetup.com/100daysofdevops/events/266192294/
  • Day 2(Nov 16): Building VPC using Terraform
  • Day 3(Nov 17): Creating EC2 Instance inside this VPC using Terraform
  • Day 4(Nov 23): Adding Application Load Balancer and Auto-Scaling to the EC2 instance created on Day 3
  • Day5(Nov 24): Add Backend MySQL Database and CloudWatch Alarm using Terraform

21 Days of AWS using Terraform

Thanks to everyone who was part of my earlier journeys:

100 Days of DevOps

21 Days of Docker

Starting November 10, I am starting a program, 21 Days of AWS using Terraform. The main idea behind this is to spend at least one hour every day for the next 21 days sharing AWS knowledge using Terraform, and then share my progress.


My road to Docker Certified Associate

This is the continuation of my earlier posts, My road to AWS Certified Solutions Architect and AWS Certified Security - Specialty Certification.

https://medium.com/@devopslearning/my-road-to-aws-certified-solution-architect-394676f15680

YAY, I cleared the exam!

WARNING: Some housekeeping tasks before reading this blog:

1: As everyone needs to sign an NDA with Docker, I can't tell you the exact questions asked during the exam (nor do I have gigabytes of memory), but I can give you pointers on what to expect in the exam.

2: As we all know, the Docker world updates quite frequently, so some of this might not be relevant after a few days/weeks/months.

3: Please don't ask for any exam dumps or questions; that defeats the whole purpose of the exam.

Exam Preparation

  • My own effort :-), thank you to everyone who joined me in this journey
  • I highly recommend the Linux Academy courses to everyone

https://linuxacademy.com/cp/modules/view/id/347?redirect_uri=https://app.linuxacademy.com/search?query=docker

https://linuxacademy.com/cp/modules/view/id/270?redirect_uri=https://app.linuxacademy.com/search?query=docker

https://linuxacademy.com/cp/modules/view/id/314?redirect_uri=https://app.linuxacademy.com/search?query=docker

  • If you are just looking to clear the exam, the Zeal Vora course is pretty good, but it lacks depth on Docker concepts.

https://www.udemy.com/course/docker-certified-associate/

  • Dockercon Videos: I highly recommend going through these videos, as they will give you enough in-depth knowledge about each service.
  • Docker Documentation: The Docker documentation is pretty good; please go through it before sitting the exam

Once you are done with the above preparation, it's a good time to gauge your knowledge: check the Docker-provided sample questions.

https://docker.cdn.prismic.io/docker%2Fa2d454ff-b2eb-4e9f-af0e-533759119eee_dca+study+guide+v1.0.1.pdf

Now coming back to the exam, the entire exam is divided into six main topics.

Based on my experience, you absolutely need to know:

  • Docker Enterprise & Registry
  • Docker Swarm

Together they cover 40% of the exam and will make the difference between passing and failing.

Exam Pattern

  • 55 multiple choice questions in 90 minutes
  • Designed to validate professionals with a minimum of 6 to 12 months of Docker experience
  • Remotely proctored on your Windows or Mac computer
  • Available globally in English
  • USD $195 or Euro €175 purchased online
  • Results delivered immediately

https://success.docker.com/certification

Domain 1: Orchestration 25%

  • You must know how to create a swarm service
docker service create --name myservice --replicas 3 nginx
  • Difference between global vs replicated mode
In global mode, one container is created on every node in the cluster; in replicated mode, the scheduler places the requested number of replicas across the cluster (see the sketch below)
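A quick sketch of the two modes (the service names here are arbitrary):

# Replicated (default): run exactly 3 tasks, placement decided by the scheduler
docker service create --name web-replicated --replicas 3 nginx
# Global: run exactly one task on every node in the cluster
docker service create --name web-global --mode global nginx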
  • Scaling a Swarm service, and the difference between scale vs update (in my exam I saw 2-3 questions related to this topic)
docker service scale <service name>=5

-> In case of scale, we can specify multiple services

docker service update --replicas 5 <service name>

-> In case of update, we can only specify one service

  • Draining Swarm node
docker node update --availability drain <node id>
docker node update --availability active <node id>
  • Docker Stack
docker stack deploy --compose-file docker-compose.yml <stack name>
  • Placement Constraints
docker service create --name myservice --constraint node.label.region==us-west-2 nginx
docker service create --name myservice --constraint node.label.region!=us-west-2 nginx
  • Adding label to the node
docker node update --label-add region=us-west-2 <swarm worker node id>
  • Quorum: remember the formula (n-1)/2, where n is the number of manager nodes; this is how many manager failures the swarm can tolerate while still keeping a quorum (majority) of managers
  • For example, in a swarm with 5 nodes, if you lose 3 nodes, you don’t have a quorum(Please go through this at least once)
  • Distribute manager nodes

In addition to maintaining an odd number of manager nodes, pay attention to datacenter topology when placing managers. For optimal fault-tolerance, distribute manager nodes across a minimum of 3 availability zones to support failures of an entire set of machines or common maintenance scenarios. If you suffer a failure in any of those zones, the swarm should maintain the quorum of manager nodes available to process requests and rebalance workloads (please go through this at least once).

Domain 2: Image Creation, Management, and Registry 20%

  • Dockerfile directives
  • Difference between ADD (supports URLs and tar extraction) and COPY (only copies files/directories from source to destination)
  • Difference between CMD (easily overridden at docker run) and ENTRYPOINT (can only be overridden with the --entrypoint flag)
  • Understand how the HEALTHCHECK directive works (a minimal sketch follows)
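A minimal Dockerfile sketch (the endpoint and timing values are arbitrary; nginx:alpine ships with busybox wget):

FROM nginx:alpine
# Mark the container unhealthy if nginx stops answering on localhost
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -q --spider http://localhost/ || exit 1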
  • How to login to a private registry
docker login <private registry url>
  • How to add Insecure Registry to Docker

To add an insecure docker registry, add the file /etc/docker/daemon.json with the following content:

{
    "insecure-registries" : [ "hostname.example.com:5000" ]
}
  • How to search for an image
docker search <image name>
  • How to search for an official docker image
$ docker search nginx -f "is-official=true"
NAME                DESCRIPTION                STARS               OFFICIAL            AUTOMATED
nginx               Official build of Nginx.   12094               [OK]   
  • Commit/Save/Exporting an image
docker commit <container id> <image name>
docker save image_name > image_name.tar
docker export <container_id> >> container.tar
docker load < container.tar

NOTE: When we export a container like this, the resulting imported image will be flattened.
  • Understand the difference between filter vs format with respect to docker images
  -f, --filter filter   Filter output based on conditions provided

docker images --filter "dangling=true"

      --format string   Pretty-print images using a Go template

docker images --format "{{.ID}}: {{.Repository}}"
  • Advantages of using multi-stage builds (see the sketch below)
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don’t want in the final image
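A minimal multi-stage Dockerfile sketch (a Go binary built in the first stage and copied into a small final image; the names and versions are illustrative):

# Stage 1: build the binary with the full Go toolchain
FROM golang:1.13 AS builder
WORKDIR /src
COPY main.go .
RUN CGO_ENABLED=0 go build -o /app main.go

# Stage 2: copy only the artifact; the toolchain is left behind
FROM alpine:3.10
COPY --from=builder /app /app
ENTRYPOINT ["/app"]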
  • Prevent tags from being overwritten

Make tags immutable

You can enable tag immutability on a repository when you create it, or at any time after.

Domain 3: Installation and Configuration 15%

DTR backup process

  • When we back up DTR, images are not backed up
  • Users/organizations are not backed up by DTR; they should be backed up using UCP

https://docs.docker.com/ee/admin/backup/back-up-dtr/

Swarm Routing Mesh

  • All nodes within the cluster participate in the ingress routing mesh
docker service create --name myservice --publish published=8080,target=80 nginx

Namespaces

  • Make sure you remember which namespaces are enabled by default and which are not
  • User namespace is not enabled by default

Domain 4: Networking 15%

  • You should be aware of the network drivers:
    Bridge (default)
    Host (removes the network isolation)
    None (disables networking)
    Overlay (default in the swarm); understand how it works
  • Why do we need to use the overlay networks?
  • How to encrypt the overlay network?
docker network create --opt encrypted --driver overlay my-overlay-network
  • Difference between -p and -P, and how to use them with containers as well as with Swarm.
  -p, --publish list                   Publish a container's port(s) to the host
  -P, --publish-all                    Publish all exposed ports to random ports
  • You can use the -p flag and add the /udp suffix to the port number
docker run -p 53160:53160/udp -t -i busybox

Domain 5: Security 15%

  • This is critically important; make sure you understand the usage of Docker Content Trust and how to enable it
Docker Content Trust (DCT) provides the ability to use digital signatures for data sent to and received from remote Docker registries. These signatures allow client-side or runtime verification of the integrity and publisher of specific image tags.
  • Enabling Docker Content Trust
export DOCKER_CONTENT_TRUST=1
  • Use of UCP client bundles
  • Docker Secrets (we cannot update or rename a secret, but we can revoke a secret and grant access to it). Also, check the process to create a secret and use it with a service:
Step 1: Create a file

 $ cat mysecret 
username: admin
pass: admin123

Step 2: Create a secret from a file (we can even do it from STDIN).

$ docker secret create mysupersecret mysecret 
uzvrfy96205o541pql1xgym4s

Step 3: Now let's create a service using this secret
$ docker service create --name mynginx1 --secret mysupersecret nginx
ueugjjkuhbbvrrszya1zb5gxs
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service converged
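Inside the service's containers the secret is mounted as a plain file under /run/secrets/. A quick way to verify (assuming a single task of mynginx1 is running on this node):

$ docker exec $(docker ps -q -f name=mynginx1) cat /run/secrets/mysupersecret
username: admin
pass: admin123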
  • Lock the swarm cluster
docker swarm update --autolock=true
  • Use of Docker Group
The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.

If you don’t want to preface the docker command with sudo, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.
sudo usermod -aG docker $USER
  • Docker with cgroups

Docker also makes use of kernel control groups for resource allocation and isolation. A cgroup limits an application to a specific set of resources. Control groups allow Docker Engine to share available hardware resources to containers and optionally enforce limits and constraints.

Docker Engine uses the following cgroups:

  • Memory cgroup for managing accounting, limits and notifications.
  • CPU group for managing user / system CPU time and usage.
  • CPUSet cgroup for binding a group to specific CPU. Useful for real time applications and NUMA systems with localized memory per CPU.
  • BlkIO cgroup for measuring and limiting the amount of block I/O by group.
  • net_cls and net_prio cgroup for tagging the traffic control.
  • Devices cgroup for reading / writing access devices.
  -m, --memory bytes                   Memory limit
      --memory-reservation bytes       Memory soft limit
  -c, --cpu-shares int                 CPU shares (relative weight)
      --cpus decimal                   Number of CPUs
      --cpuset-cpus string             CPUs in which to allow execution (0-3, 0,1)
  • Understand the difference between Docker limits vs reservations

The limit is a hard limit and the reservation is a soft limit; a sketch follows.
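A sketch showing both flags on docker run (the values are arbitrary):

# Hard limit: the container is killed if it exceeds 512 MB
# Soft limit: under memory pressure the kernel tries to keep it near 256 MB
docker run -d -m 512m --memory-reservation 256m nginx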

Domain 6: Storage and Volumes 10%

  • Make sure you understand the difference between bind mounts vs volumes
Bind mounts: A bind mount is a file or folder stored anywhere on the container host filesystem, mounted into a running container. The main difference a bind mount has from a volume is that since it can exist anywhere on the host filesystem, processes outside of Docker can also modify it.
  • Also please pay special attention to spelling mistakes, e.g.:

-v or --volume -> correct

--volumes -> incorrect (extra s in volumes)

 -v, --volume list                    Bind mount a volume
  • Just skim through the output of docker volume inspect (pay special attention to the Mountpoint field)
docker volume inspect <volume name>

docker volume inspect jenkins_home
[
    {
        "CreatedAt": "2019-07-17T21:08:33Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/jenkins_home/_data", <----
        "Name": "jenkins_home",
        "Options": null,
        "Scope": "local"
    }
]
  • We can mount the same volume to multiple containers
  • Also, remember that we can use --volumes-from to mount volumes from the specified container(s)
--volumes-from list              Mount volumes from the specified container(s)
  • Make sure you understand the difference between device-mapper(loop-lvm(testing) vs direct-lvm(production))
loop-lvm: This configuration is only appropriate for testing. The loop-lvm mode makes use of a ‘loopback’ mechanism that allows files on the local disk to be read from and written to as if they were an actual physical disk or block device

direct-lvm: Production hosts using the devicemapper storage driver must use direct-lvm mode. This mode uses block devices to create the thin pool. This is faster than using loopback devices, uses system resources more efficiently, and block devices can grow as needed
  • Running --rm with docker run also removes the anonymous volumes associated with the container
docker run --rm -v /mytestvol busybox

Miscellaneous

  • Take a look at all the commands available under docker system
docker system --help

Usage:	docker system COMMAND

Manage Docker

Commands:
  df          Show docker disk usage
  events      Get real time events from the server
  info        Display system-wide information
  prune       Remove unused data
  • How to set up Docker in debug mode?

Edit /etc/docker/daemon.json, setting "debug": true (create this file if it does not exist). It should look like this when complete:

{
    "debug": true
}
  • Restart Policy

To configure the restart policy for a container, use the --restart flag when using the docker run command. The value of the --restart flag can be any of the following: no (the default), on-failure[:max-retries], always, or unless-stopped.

  • How to copy Docker images from one host to another without using a repository
sudo docker save -o <path for generated tar file> <image name>

sudo docker save -o /home/matrix/matrix-data.tar matrix-data
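Then copy the tar file to the other host (e.g., with scp) and load it there:

sudo docker load -i /home/matrix/matrix-data.tar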

Once you think you are fully prepared, please try to solve the 9 questions at the end of Docker Certified Associate Guide

https://docker.cdn.prismic.io/docker%2Fa2d454ff-b2eb-4e9f-af0e-533759119eee_dca+study+guide+v1.0.1.pdf

Final Words

  • The key takeaway from this exam: you can easily clear it if you know Docker Enterprise Server, Docker Registry, and Docker Swarm in depth.
  • The last exam I wrote was the AWS Security Specialty exam, where questions were scenario-based and some were almost a page long; here most of the questions are to the point.
  • So keep calm, write this exam, and let me know if you have any questions.

21 Days of Docker – Day 21

Welcome to Day 21 for 21 Days of Docker

Thanks, everyone, for joining 21 Days of Docker. I learned a lot, and I believe you guys also got a chance to learn something from my blogs.

In the next few weeks, I am coming up with 21 Days of AWS using Terraform, so stay tuned:-).

Thanks, everyone, and Happy Learning!

Day 1: Introduction to Docker

Day 2: First Docker Container

Day 3: Building Container Continue

Day 4: Docker Container Under The Hood

Day 5: Introduction to Dockerfile

Day 6: Introduction to Dockerfile – Part 2

Day 7: Use multi-stage builds with Dockerfile

Day 8: Docker Images, Layers & Containers

Day 9 : Docker Networking – Part 1

Day 10: Docker Networking – Part 2

Day 11: Docker Networking – Part 3

Day 12: Docker Storage – Part 1

Day 13: Docker Storage – Part 2

Day 14: Introduction to Docker Swarm – Part 1

Day 15 : Introduction to Docker Swarm – Part 2

Day 16: Introduction to Docker Swarm – Part 3

Day 17: Building Effective Docker Image

Day 18: Docker Security

Day 19: Docker Networking Deep Dive

Day 20 – Introduction and Installation of Docker Enterprise Edition

Day 1 – Introduction to Docker

Day 2 – Introduction to Docker Images and Dockerfile

Day 3 – Docker Networking

Please follow me on my journey.

This time, to make learning more interactive, I am adding:

  • Slack
  • Meetup

Please feel free to join this group.

Slack: 

https://100daysofdevops.slack.com/join/shared_invite/enQtNzg1MjUzMzQzMzgxLWM4Yjk0ZWJiMjY4ZWE3ODBjZjgyYTllZmUxNzFkNTgxZjQ4NDlmZjkzODAwNDczOTYwOTM2MzlhZDNkM2FkMDA

Meetup Group

If you are in the bay area, please join this meetup group https://www.meetup.com/100daysofdevops/

21 Days of Docker-Day 20 – Introduction and Installation of Docker Enterprise Edition

Welcome to Day 20 of 21 Days of Docker. The topic for today is Introduction and Installation of Docker Enterprise Edition.

What is Docker Enterprise Edition?

Docker Enterprise Edition (Docker EE) is designed for enterprise development and IT teams who build, ship, and run business-critical applications in production and at scale.

Docker Enterprise Features

  • Role-based access control
  • LDAP/AD integration
  • Image scanning and signing enforcement policies
  • Security policies

Universal Control Plane

  • Docker Universal Control Plane (UCP) is the enterprise-grade cluster management solution from Docker. You install it on-premises or in your virtual private cloud, and it helps you manage your Docker cluster and applications through a single interface.

Docker Trusted Registry

  • Docker Trusted Registry (DTR) is the enterprise-grade image storage solution from Docker. You install it behind your firewall so that you can securely store and manage the Docker images you use in your applications.

Installing Docker Enterprise Edition

Setup

  • 3 Ubuntu hosts
  • 1 Universal Control Plane (UCP) manager
  • 1 Docker Trusted Registry (DTR)
  • 1 worker node
  • Let's take a look at how we can install and configure the Docker EE engine, UCP, and DTR.
  • Perform the following steps on all three servers:
  1. Start a free trial for Docker EE: If you don’t have a Docker EE trial already started, then launch one here: https://hub.docker.com/editions/enterprise/docker-ee-trial. This free trial lasts up to a month, but another one can be started right after it expires.
  2. Go to https://hub.docker.com/my-content and retrieve a unique URL for Docker EE.
  3. Click Setup.
  4. Copy the URL generated for Docker EE.
  5. Set a few environment variables. Ensure that the unique URL generated for Docker EE is also used here:
DOCKER_EE_URL=<YOUR_DOCKER_EE_URL> 
DOCKER_EE_VERSION=18.09
  • Install the required prerequisite packages and verify that they install successfully:
$ sudo apt-get install -y \
>     apt-transport-https \
>     ca-certificates \
>     curl \
>     software-properties-common
[sudo] password for cloud_user:
Reading package lists... Done
Building dependency tree
Reading state information... Done
ca-certificates is already the newest version (20180409).
curl is already the newest version (7.58.0-2ubuntu3.8).
The following additional packages will be installed:
  python3-software-properties
The following NEW packages will be installed:
  apt-transport-https
The following packages will be upgraded:
  python3-software-properties software-properties-common
2 upgraded, 1 newly installed, 0 to remove and 101 not upgraded.
Need to get 35.3 kB of archives.
After this operation, 166 kB of additional disk space will be used.
Get:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 apt-transport-https all 1.6.12 [1692 B]
Get:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-updates/main amd64 software-properties-common all 0.96.24.32.11 [9996 B]
Get:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-updates/main amd64 python3-software-properties all 0.96.24.32.11 [23.6 kB]
Fetched 35.3 kB in 0s (1949 kB/s)
Selecting previously unselected package apt-transport-https.
(Reading database ... 112422 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_1.6.12_all.deb ...
Unpacking apt-transport-https (1.6.12) ...
Preparing to unpack .../software-properties-common_0.96.24.32.11_all.deb ...
Unpacking software-properties-common (0.96.24.32.11) over (0.96.24.32.7) ...
Preparing to unpack .../python3-software-properties_0.96.24.32.11_all.deb ...
Unpacking python3-software-properties (0.96.24.32.11) over (0.96.24.32.7) ...
Setting up apt-transport-https (1.6.12) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up python3-software-properties (0.96.24.32.11) ...
Processing triggers for dbus (1.12.2-1ubuntu1.1) ...
Setting up software-properties-common (0.96.24.32.11) ...
  • Add the GPG key and repository using the unique URL for Docker EE:
$ curl -fsSL "${DOCKER_EE_URL}/ubuntu/gpg" | sudo apt-key add -
OK

$ sudo add-apt-repository \
>    "deb [arch=$(dpkg --print-architecture)] $DOCKER_EE_URL/ubuntu \
>    $(lsb_release -cs) \
>    stable-$DOCKER_EE_VERSION"
Hit:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-backports InRelease
Get:4 https://storebits.docker.com/ee/trial/sub-8158a9b3-de4e-4753-b73e-d386fca163ff/ubuntu bionic InRelease [116 kB]
Get:5 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:6 https://storebits.docker.com/ee/trial/sub-8158a9b3-de4e-4753-b73e-d386fca163ff/ubuntu bionic/stable-18.09 amd64 Packages [6386 B]
Fetched 211 kB in 0s (454 kB/s)
Reading package lists... Done
  • Install Docker EE:
$ sudo apt-get update
Hit:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:4 https://storebits.docker.com/ee/trial/sub-8158a9b3-de4e-4753-b73e-d386fca163ff/ubuntu bionic InRelease
Get:5 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Fetched 88.7 kB in 1s (166 kB/s)
Reading package lists... Done

$ sudo apt-get install -y docker-ee=5:18.09.4~3-0~ubuntu-bionic
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  aufs-tools cgroupfs-mount containerd.io docker-ee-cli libltdl7 pigz
The following NEW packages will be installed:
  aufs-tools cgroupfs-mount containerd.io docker-ee docker-ee-cli libltdl7 pigz
0 upgraded, 7 newly installed, 0 to remove and 101 not upgraded.
Need to get 56.5 MB of archives.
After this operation, 266 MB of additional disk space will be used.
Get:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic/universe amd64 pigz amd64 2.4-1 [57.4 kB]
Get:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic/universe amd64 aufs-tools amd64 1:4.9+20170918-1ubuntu1 [104 kB]
Get:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic/universe amd64 cgroupfs-mount all 1.4 [6320 B]
Get:4 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic/main amd64 libltdl7 amd64 2.4.6-2 [38.8 kB]
Get:5 https://storebits.docker.com/ee/trial/sub-8158a9b3-de4e-4753-b73e-d386fca163ff/ubuntu bionic/stable-18.09 amd64 containerd.io amd64 1.2.5-1 [19.9 MB]
Get:6 https://storebits.docker.com/ee/trial/sub-8158a9b3-de4e-4753-b73e-d386fca163ff/ubuntu bionic/stable-18.09 amd64 docker-ee-cli amd64 5:18.09.10~3-0~ubuntu-bionic [17.1 MB]
Get:7 https://storebits.docker.com/ee/trial/sub-8158a9b3-de4e-4753-b73e-d386fca163ff/ubuntu bionic/stable-18.09 amd64 docker-ee amd64 5:18.09.4~3-0~ubuntu-bionic [19.3 MB]
Fetched 56.5 MB in 2s (30.9 MB/s)
Selecting previously unselected package pigz.
(Reading database ... 112428 files and directories currently installed.)
Preparing to unpack .../0-pigz_2.4-1_amd64.deb ...
Unpacking pigz (2.4-1) ...
Selecting previously unselected package aufs-tools.
Preparing to unpack .../1-aufs-tools_1%3a4.9+20170918-1ubuntu1_amd64.deb ...
Unpacking aufs-tools (1:4.9+20170918-1ubuntu1) ...
Selecting previously unselected package cgroupfs-mount.
Preparing to unpack .../2-cgroupfs-mount_1.4_all.deb ...
Unpacking cgroupfs-mount (1.4) ...
Selecting previously unselected package containerd.io.
Preparing to unpack .../3-containerd.io_1.2.5-1_amd64.deb ...
Unpacking containerd.io (1.2.5-1) ...
Selecting previously unselected package docker-ee-cli.
Preparing to unpack .../4-docker-ee-cli_5%3a18.09.10~3-0~ubuntu-bionic_amd64.deb ...
Unpacking docker-ee-cli (5:18.09.10~3-0~ubuntu-bionic) ...
Selecting previously unselected package docker-ee.
Preparing to unpack .../5-docker-ee_5%3a18.09.4~3-0~ubuntu-bionic_amd64.deb ...
Unpacking docker-ee (5:18.09.4~3-0~ubuntu-bionic) ...
Selecting previously unselected package libltdl7:amd64.
Preparing to unpack .../6-libltdl7_2.4.6-2_amd64.deb ...
Unpacking libltdl7:amd64 (2.4.6-2) ...
Setting up aufs-tools (1:4.9+20170918-1ubuntu1) ...
Setting up containerd.io (1.2.5-1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
Processing triggers for ureadahead (0.100.0-20) ...
Setting up cgroupfs-mount (1.4) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Setting up docker-ee-cli (5:18.09.10~3-0~ubuntu-bionic) ...
Processing triggers for systemd (237-3ubuntu10.29) ...
Setting up libltdl7:amd64 (2.4.6-2) ...
Setting up docker-ee (5:18.09.4~3-0~ubuntu-bionic) ...
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up pigz (2.4-1) ...
Processing triggers for ureadahead (0.100.0-20) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
  • Add your user to the docker group so it can run Docker commands (replace centos with your own username, e.g. cloud_user):
$ sudo usermod -a -G docker centos

NOTE: You need to re-source your environment, or just log out and log back in, for the group change to take effect.

  • Test the Docker EE installation to verify that it’s working:
$ docker version
Client:
 Version:           18.09.10
 API version:       1.39
 Go version:        go1.12.10
 Git commit:        2408617bbf
 Built:             Fri Oct  4 20:56:49 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Enterprise
 Engine:
  Version:          18.09.4
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       c3516c4
  Built:            Wed Mar 27 18:02:16 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Set Up the UCP Manager

  • With the Docker installation out of the way, let's install the UCP manager on the UCP manager host.
  • Pull the UCP image:
$ docker image pull docker/ucp:3.1.5
3.1.5: Pulling from docker/ucp
cd784148e348: Pull complete
3871e7d70c20: Pull complete
b7a92b3565bc: Pull complete
Digest: sha256:334d7b21e30d7b3caeea6dd0fb4e79633b7035a4fd63ea479dfac19470206012
Status: Downloaded newer image for docker/ucp:3.1.5
  • Use the UCP image for the installation:
$ docker container run --rm -it --name ucp \
>   -v /var/run/docker.sock:/var/run/docker.sock \
>   docker/ucp:3.1.5 install \
>   --host-address 10.0.1.101 \
>   --interactive
INFO[0000] Your engine version 18.09.4, build c3516c4 (4.15.0-1034-aws) is compatible with UCP 3.1.5 (e3b1ac1)
Admin Username: admin <--- Provide a username
Admin Password:  <-- and a password
Confirm Admin Password:
INFO[0014] Pulling required images... (this may take a while)
INFO[0014] Pulling docker/ucp-dsinfo:3.1.5
INFO[0022] Pulling docker/ucp-cfssl:3.1.5
INFO[0022] Pulling docker/ucp-interlock-extension:3.1.5
INFO[0023] Pulling docker/ucp-agent:3.1.5
INFO[0024] Pulling docker/ucp-calico-node:3.1.5
INFO[0027] Pulling docker/ucp-compose:3.1.5
INFO[0028] Pulling docker/ucp-kube-compose-api:3.1.5
INFO[0029] Pulling docker/ucp-kube-dns:3.1.5
INFO[0030] Pulling docker/ucp-etcd:3.1.5
INFO[0031] Pulling docker/ucp-kube-dns-dnsmasq-nanny:3.1.5
INFO[0033] Pulling docker/ucp-interlock:3.1.5
INFO[0033] Pulling docker/ucp-auth:3.1.5
INFO[0034] Pulling docker/ucp-hyperkube:3.1.5
INFO[0042] Pulling docker/ucp-controller:3.1.5
INFO[0045] Pulling docker/ucp-auth-store:3.1.5
INFO[0046] Pulling docker/ucp-metrics:3.1.5
INFO[0049] Pulling docker/ucp-interlock-proxy:3.1.5
INFO[0050] Pulling docker/ucp-pause:3.1.5
INFO[0050] Pulling docker/ucp-swarm:3.1.5
INFO[0051] Pulling docker/ucp-kube-compose:3.1.5
INFO[0052] Pulling docker/ucp-kube-dns-sidecar:3.1.5
INFO[0053] Pulling docker/ucp-calico-cni:3.1.5
INFO[0055] Pulling docker/ucp-calico-kube-controllers:3.1.5
INFO[0060] Pulling docker/ucp-azure-ip-allocator:3.1.5
WARN[0061] None of the hostnames we'll be using in the UCP certificates [ip-10-0-1-101 127.0.0.1 172.17.0.1 10.0.1.101] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases: <-- Hit enter and select default aliases
INFO[0000] Initializing a new swarm at 10.0.1.101
INFO[0013] Installing UCP with host address 10.0.1.101 - If this is incorrect, please specify an alternative address with the '--host-address' flag
INFO[0013] Deploying UCP Service...
INFO[0089] Installation completed on ip-10-0-1-101 (node kwropkch0bblk0ua5ld7gezco)
INFO[0089] UCP Instance ID: 9wbsl1mze4aiqy1k9h6s7h98q
INFO[0089] UCP Server SSL: SHA-256 Fingerprint=40:72:CC:57:89:C0:83:1E:42:28:7B:B8:23:AE:A5:A5:7D:AA:FF:2A:EC:F7:36:DD:08:79:7E:70:29:17:2B:A6
INFO[0089] Login to UCP at https://10.0.1.101:443
INFO[0089] Username: admin
INFO[0089] Password: (your admin password)
  • In a web browser, go to https://[UCP manager Public IP] to access the UCP manager.
  • Use the admin credentials that were created during the initial setup process to log in.
  • Note: A warning about the self-signed certificate's validity may appear. This warning can safely be disregarded.

Add Both UCP Workers to the Cluster

  1. Navigate back to the UCP manager interface in a web browser to retrieve the worker join command. This will generate a docker swarm join command that can be copied.
  2. Click Shared Resources
  3. Click Nodes
  4. Click Add Node.
  5. Apply the following values on the Add Node page:
    • Node type: Linux
    • Node role: Worker
  6. Copy the generated docker swarm join command and run it on each of the two worker nodes:
$ docker swarm join --token SWMTKN-1-3uz01qrsbwsk4nj8p5u22e8k9f1sc1adu850y0hazn9hn15bai-aefa5kpqvy0lqj1a5s0u9t9ez 10.0.1.101:2377
This node joined a swarm as a worker.

$ docker swarm join --token SWMTKN-1-3uz01qrsbwsk4nj8p5u22e8k9f1sc1adu850y0hazn9hn15bai-aefa5kpqvy0lqj1a5s0u9t9ez 10.0.1.101:2377
This node joined a swarm as a worker.

Set Up Docker Trusted Registry

Get the DTR setup command from the UCP manager by performing the following steps:

  1. Access the UCP manager from a web browser.
  2. Click Admin > Admin Settings.
  3. Click Docker Trusted Registry.
  4. On the Admin Settings page locate the UCP Node section.
  5. Click ip-10-0-1-102.
  6. Click the checkbox labeled Disable TLS verification for UCP.
  7. Copy the generated install command and run it on the DTR node (here, ip-10-0-1-102):
$ docker run -it --rm docker/dtr install  --ucp-node ip-10-0-1-102  --ucp-username admin  --ucp-url https://10.0.1.101  --ucp-insecure-tls
Unable to find image 'docker/dtr:latest' locally
latest: Pulling from docker/dtr
9d48c3bd43c5: Pull complete
dcfa06138f1d: Pull complete
3a8b460c24c5: Pull complete
4bb8be37e77e: Pull complete
ba41549fd9f6: Pull complete
Digest: sha256:e1eae7579a6a1793d653dd97df9297464600e2644565fa2ed33c351f942facf6
Status: Downloaded newer image for docker/dtr:latest
INFO[0000] Beginning Docker Trusted Registry installation
ucp-password:
INFO[0004] Validating UCP cert
INFO[0004] Connecting to UCP
INFO[0004] health checking ucp
INFO[0004] The UCP cluster contains the following nodes without port conflicts: ip-10-0-1-102, ip-10-0-1-103
INFO[0004] Searching containers in UCP for DTR replicas
INFO[0004] Searching containers in UCP for DTR replicas
INFO[0004] verifying [80 443] ports on ip-10-0-1-102
INFO[0008] Waiting for running dtr-phase2 container to finish
INFO[0008] starting phase 2
INFO[0000] Validating UCP cert
INFO[0000] Connecting to UCP
INFO[0000] health checking ucp
INFO[0000] Verifying your system is compatible with DTR
INFO[0000] Checking if the node is okay to install on
INFO[0000] Using default overlay subnet: 10.1.0.0/24
INFO[0000] Creating network: dtr-ol
INFO[0000] Connecting to network: dtr-ol
INFO[0000] Waiting for phase2 container to be known to the Docker daemon
INFO[0001] Setting up replica volumes...
INFO[0002] Creating initial CA certificates
INFO[0002] Bootstrapping rethink...
INFO[0002] Creating dtr-rethinkdb-729902891973...
INFO[0008] Establishing connection with Rethinkdb
INFO[0009] Waiting for database dtr2 to exist
INFO[0009] Waiting for database dtr2 to exist
INFO[0009] Waiting for database dtr2 to exist
INFO[0010] Generated TLS certificate.                    dnsNames="[*.com *.*.com example.com *.dtr *.*.dtr]" domains="[*.com *.*.com 172.17.0.1 example.com *.dtr *.*.dtr]" ipAddresses="[172.17.0.1]"
INFO[0011] License config copied from UCP.
INFO[0011] Migrating db...
INFO[0000] Establishing connection with Rethinkdb
INFO[0000] Migrating database schema                     fromVersion=0 toVersion=10
INFO[0003] Waiting for database notaryserver to exist
INFO[0003] Waiting for database notaryserver to exist
INFO[0003] Waiting for database notaryserver to exist
INFO[0005] Waiting for database notarysigner to exist
INFO[0005] Waiting for database notarysigner to exist
INFO[0006] Waiting for database notarysigner to exist
INFO[0006] Waiting for database jobrunner to exist
INFO[0007] Waiting for database jobrunner to exist
INFO[0007] Waiting for database jobrunner to exist
INFO[0010] Migrated database from version 0 to 10
INFO[0021] Starting all containers...
INFO[0021] Getting container configuration and starting containers...
INFO[0021] Automatically configuring rethinkdb cache size to 2000 mb
INFO[0022] Recreating dtr-rethinkdb-729902891973...
INFO[0028] Creating dtr-registry-729902891973...
INFO[0032] Creating dtr-garant-729902891973...
INFO[0037] Creating dtr-api-729902891973...
INFO[0060] Creating dtr-notary-server-729902891973...
INFO[0064] Recreating dtr-nginx-729902891973...
INFO[0071] Creating dtr-jobrunner-729902891973...
INFO[0077] Creating dtr-notary-signer-729902891973...
INFO[0082] Creating dtr-scanningstore-729902891973...
INFO[0087] Trying to get the kv store connection back after reconfigure
INFO[0087] Establishing connection with Rethinkdb
INFO[0088] Verifying auth settings...
INFO[0088] Successfully registered dtr with UCP
INFO[0088] Installation is complete
INFO[0088] Replica ID is set to: 729902891973
INFO[0088] You can use flag '--existing-replica-id 729902891973' when joining other replicas to your Docker Trusted Registry Cluster

NOTE: Don't forget to change the --ucp-url to the private IP of the UCP server.

  • Once the installation is complete, browse to the DTR URL.
  • Note: Once again, a warning about the self-signed certificate's validity may appear. This warning can safely be disregarded.

21 Days of Docker – Day 19 – Docker Networking Deep Dive

On Day 9, Day 10, and Day 11, I discussed Docker networking; let's go one level down and dig deeper into it.

What is Container Networking Model(CNM) and Libnetwork?

  • The CNM is an open-source container networking specification that was contributed to the community by Docker, Inc.
  • Docker’s libnetwork is a library that provides an implementation for CNM.
  • However, third-party plugins can be used to replace the built-in Docker driver.
  • Libnetwork is cross-platform and open-source.
  • CNM has interfaces for both IPAM plugins and network plugins. The IPAM plugin APIs can be used to create/delete address pools and allocate/deallocate container IP addresses, while the network plugin APIs are used to create/delete networks and add/remove containers from networks.

Docker Networking on Linux

  • Docker networking uses the Linux kernel's extensive networking capabilities (e.g. the TCP/IP stack, VXLAN, DNS)
  • Docker networking utilizes many Linux kernel networking features (network namespaces, bridges, iptables, veth pairs, …); see the sketch after this list for a few ways to inspect these on a live host
  • Linux Bridges: L2 virtual switches implemented in the kernel
  • Network namespaces: Used for isolating container network stacks
  • veth pairs: Connecting containers to container networks
  • iptables: Used for port mapping, load balancing, network isolation
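A minimal sketch using standard iproute2 and iptables commands (assuming Docker is running with its default settings and at least one container is attached to the default bridge network):

# List Linux bridges; docker0 is the default bridge created by Docker
$ ip link show type bridge

# List veth interfaces; each container on a bridge network has the host
# end of its veth pair listed here
$ ip link show type veth

# Show the NAT rules Docker programs for published ports
$ sudo iptables -t nat -L DOCKER -n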

Docker Overlay Driver

  • The overlay network driver creates a distributed network among multiple Docker daemon hosts. This network sits on top of (overlays) the host-specific networks, allowing containers connected to it (including swarm service containers) to communicate securely. Docker transparently handles routing of each packet to and from the correct Docker daemon host and the correct destination container.

How does it work?

  • The overlay driver uses VXLAN technology to build the network
  • A VXLAN tunnel is created through the underlay network(s)
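To see this in action, here is a minimal sketch, assuming you are on a swarm manager node (my-overlay and web are hypothetical names):

# Create an overlay network spanning the swarm
$ docker network create --driver overlay my-overlay

# Attach a service to it; its tasks can reach each other across hosts
# through the VXLAN tunnel
$ docker service create --name web --network my-overlay nginx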

Network Troubleshooting

  • The first step in any troubleshooting is to check the container logs
docker container logs <container id>
  • If you want to check docker daemon logs
sudo journalctl -u docker
  • Let's discuss one handy tool called netshoot.
  • Docker network troubleshooting can become complex. With a proper understanding of how Docker networking works and the right set of tools, you can troubleshoot and resolve these networking issues. The netshoot container has a set of powerful networking troubleshooting tools that can be used to troubleshoot Docker networking issues.
  • If you’re having networking issues with your application’s container, you can launch netshoot with that container’s network namespace.
  • netshoot includes a set of powerful tools
apache2-utils
bash
bind-tools
bird
bridge-utils
busybox-extras
calicoctl
conntrack-tools
ctop
curl
dhcping
drill
ethtool
file
fping
iftop
iperf
iproute2
ipset
  • To illustrate this, let's take a simple example.
  • Let's create a custom bridge network:
$ docker network create my-custom-net
b56567c0609dda35f8312c08b6c974875217447e6d3618ea9d63256e3013d2cf
  • Run and attach a container to this network
$ docker container run -dt --name my-nginx --network my-custom-net -p 80:80 nginx
0564f1f854c004911ea42edd0d7492a8c5f32cafba72993022ec99d36e0c8c33
  • Let's say we are facing some issue connecting to the nginx container; to check that:
$ docker container run --rm --network my-custom-net nicolaka/netshoot curl my-nginx:80
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
100   612  100   612    0     0  76500      0 --:--:-- --:--:-- --:--:-- 87428
  • One of the most powerful features of netshoot: if you're having networking issues with your application's container, you can launch netshoot with that container's network namespace, like this:

$ docker run -it --net container:<container_name> nicolaka/netshoot

  • For the case we discussed above, we can do it like this:
$ docker container run --rm -it --net container:my-nginx nicolaka/netshoot curl localhost:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

For more info

https://github.com/nicolaka/netshoot

21 Days of Docker – Day 18 – Docker Security

Welcome to Day 18 of 21 Days of Docker. The topic for today is Docker Security. Today my focus is on four topics:

  • Security scanning using Docker Trusted Registry
  • Managing secret in Swarm
  • Docker Content Trust
  • Encrypting Overlay Network

Security scanning using Docker Trusted Registry

Docker Trusted Registry comes along with Docker Enterprise Edition, and I really like its security scanning feature. But before going there, what is Docker Trusted Registry?

Docker Trusted Registry is an on-premises registry that allows enterprises to store and manage their Docker images behind their own firewall.

Once DTR runs a security scan on your image, you can view the results.

Managing secret in Swarm

  • In terms of Docker Swarm services, a secret is a blob of data, such as a password, SSH private key, SSL certificate, or another piece of data that should not be transmitted over a network or stored unencrypted in a Dockerfile or in your application’s source code.
  • In Docker 1.13 and higher, you can use Docker secrets to centrally manage this data and securely transmit it to only those containers that need access to it.
  • Secrets are encrypted during transit and at rest in a Docker swarm.
  • A given secret is only accessible to those services which have been granted explicit access to it, and only while those service tasks are running.
  • Step1: Create a file
 $ cat mysecret 
username: admin
pass: admin123
  • Step2: Create a secret from a file (we can even do it from STDIN):
$ docker secret create mysupersecret mysecret 
uzvrfy96205o541pql1xgym4s
  • Step3: List the secrets
$ docker secret ls
ID                          NAME                DRIVER              CREATED             UPDATED
uzvrfy96205o541pql1xgym4s   mysupersecret                           13 seconds ago      13 seconds ago
  • Step4: The secret is encrypted, so even if we try to inspect it, we can't see the contents:
$ docker secret inspect uzvrfy96205o541pql1xgym4s
[
    {
        "ID": "uzvrfy96205o541pql1xgym4s",
        "Version": {
            "Index": 41
        },
        "CreatedAt": "2019-10-25T17:22:01.841559963Z",
        "UpdatedAt": "2019-10-25T17:22:01.841559963Z",
        "Spec": {
            "Name": "mysupersecret",
            "Labels": {}
        }
    }
]
  • Now let's create a service using this secret:
$ docker service create --name mynginx1 --secret mysupersecret nginx
ueugjjkuhbbvrrszya1zb5gxs
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service converged 
  • Let's log in to the container:
$ docker exec -it 48a4c7e74a8e bash
root@48a4c7e74a8e:/# cd /run
root@48a4c7e74a8e:/run# ls
lock  nginx.pid  secrets  utmp
# cd secrets/
  • As you can see, the container has access to the secret file:
# cat mysupersecret 
username: admin
pass: admin123
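A secret can also be mounted under a custom filename in /run/secrets, and removed once no service uses it. A minimal sketch (mynginx2 and app-secret are hypothetical names):

# Mount the secret at /run/secrets/app-secret inside the container
$ docker service create --name mynginx2 --secret source=mysupersecret,target=app-secret nginx

# Clean up: a secret can only be removed when no running service references it
$ docker service rm mynginx2
$ docker secret rm mysupersecret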

Docker Content Trust

  • When transferring data among networked systems, trust is a central concern. In particular, when communicating over an untrusted medium such as the internet, it is critical to ensure the integrity and the publisher of all the data a system operates on. You use the Docker Engine to push and pull images (data) to a public or private registry. Content trust gives you the ability to verify both the integrity and the publisher of all the data received from a registry over any channel.
  • Docker Content Trust (DCT) provides the ability to use digital signatures for data sent to and received from remote Docker registries. These signatures allow client-side or runtime verification of the integrity and publisher of specific image tags.
  • Through DCT, image publishers can sign their images and image consumers can ensure that the images they use are signed.
  • First, log in to Docker Hub. Enter your Docker Hub credentials when prompted.
$ docker login
  • Generate a delegation key pair. 
$ docker trust key generate lakhera2014
Generating key for lakhera2014...
Enter passphrase for new lakhera2014 key with ID 8480add: 
Repeat passphrase for new lakhera2014 key with ID 8480add: 
Successfully generated and loaded private key. Corresponding public key available: /Users/plakhera/lakhera2014.pub
  • Then we’ll add ourselves as a signer to an image repository.
$ docker trust signer add --key lakhera2014.pub lakhera2014 lakhera2014/mydct-test
Adding signer "lakhera2014" to lakhera2014/mydct-test...
Initializing signed repository for lakhera2014/mydct-test...
You are about to create a new root signing key passphrase. This passphrase
will be used to protect the most sensitive key in your signing system. Please
choose a long, complex passphrase and be careful to keep the password and the
key file itself secure and backed up. It is highly recommended that you use a
password manager to generate the passphrase and keep it safe. There will be no
way to recover this key. You can find the key in your config directory.
Enter passphrase for new root key with ID e638abc: 
Repeat passphrase for new root key with ID e638abc: 
Enter passphrase for new repository key with ID 5d6a8b3: 
Repeat passphrase for new repository key with ID 5d6a8b3: 
Successfully initialized "lakhera2014/mydct-test"
Successfully added signer: lakhera2014 to lakhera2014/mydct-test

NOTE: Once again, be sure to make note of the passphrases used.

  • Let’s create a Dockerfile
$ cat Dockerfile 
FROM busybox
CMD echo coming from dct
  • Build the image
$ docker build -t lakhera2014/mydct:unsigned .
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e
 ---> 19485c79a9bb
Step 2/2 : CMD echo coming from dct
 ---> Using cache
 ---> df88a4715edc
Successfully built df88a4715edc
Successfully tagged lakhera2014/mydct:unsigned
Tagging busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e as busybox:latest
  • Run it
$ docker container run lakhera2014/mydct:unsigned
coming from dct
  • Let’s enable Docker Content Trust now and see the result
$ export DOCKER_CONTENT_TRUST=1
$ docker container run lakhera2014/mydct:unsigned
docker: Error: remote trust data does not exist for docker.io/lakhera2014/mydct: notary.docker.io does not have trust data for docker.io/lakhera2014/mydct.
See 'docker run --help'.
  • Let's build the image, but this time giving it a signed tag:
$ docker build -t lakhera2014/mydct:signed .
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e
 ---> 19485c79a9bb
Step 2/2 : CMD echo coming from dct
 ---> Using cache
 ---> df88a4715edc
Successfully built df88a4715edc
Successfully tagged lakhera2014/mydct:signed
Tagging busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e as busybox:latest
  • Push a signed tag to the repo. Enter the passphrase; this will be the one that was chosen earlier when running the docker trust key generate command:
$ docker trust sign lakhera2014/mydct:signed
Enter passphrase for root key with ID e638abc: 
Enter passphrase for new repository key with ID f626d57: 
Repeat passphrase for new repository key with ID f626d57: 
Enter passphrase for lakhera2014 key with ID 8480add: 
Created signer: lakhera2014
Finished initializing signed repository for lakhera2014/mydct:signed
Signing and pushing trust data for local image lakhera2014/mydct:signed, may overwrite remote trust data
The push refers to repository [docker.io/lakhera2014/mydct]
6c0ea40aef9d: Mounted from library/busybox 
signed: digest: sha256:13bc8974eb312c18d28ea46a356ab0af01565f7259eb6042803a2eda4f56b52f size: 527
Signing and pushing trust metadata
Enter passphrase for lakhera2014 key with ID 8480add: 
Successfully signed docker.io/lakhera2014/mydct:signed
  • NOTE: docker trust sign also pushes the image to Docker Hub.
  • Run the container again
$ docker container run lakhera2014/mydct:signed
coming from dct
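To confirm which tags are signed and by whom, you can inspect the repository's trust data. A quick check, assuming the image was signed and pushed as above:

# Show signers and signed tags for the repository
$ docker trust inspect --pretty lakhera2014/mydct:signed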
  • Turn off Docker Content Trust and attempt to run the unsigned image again
$ export DOCKER_CONTENT_TRUST=0
$ docker container run lakhera2014/mydct:unsigned
coming from dct

Encrypting Overlay Network

  • We can encrypt communication between containers on overlay networks in order to provide greater security within our swarm cluster.
  • Use the --opt encrypted flag when creating the overlay network to encrypt it:
docker network create --opt encrypted --driver overlay <network name>
  • Create a service using the overlay network
docker service create --name my-encrypted-overlay --network <overlay network name> --replicas 3 nginx
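To double-check that encryption was actually enabled, you can inspect the network's options. A minimal sketch (my-encrypted-overlay is a hypothetical network name):

# Create the encrypted overlay network on a swarm manager
$ docker network create --opt encrypted --driver overlay my-encrypted-overlay

# Verify that the encrypted option is set
$ docker network inspect my-encrypted-overlay --format '{{ .Options }}'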

21 Days of Docker – Day 17 – Building Effective Docker Image

Welcome to Day 17 of 21 Days of Docker. The topic for today is Building an Effective Docker Image.

What are container layers?

On Day 8, I talked about Docker Images and layers.

An image is built of a bunch of image layers, and when we boot a container using this image, the container adds a read-write layer on top of them.

Now, at this stage, the question you need to ask yourself is: why do I care how many layers I have?

  • More layers mean a larger image. The larger the image, the longer it takes to build, and to push to and pull from the registry.
  • Smaller images mean faster builds and deploys.

How can I reduce my layers?

  • Use shared base images where possible.
  • Limit the data written to the container layer.
  • Chain RUN statements (see the sketch after this list).
  • Prevent cache misses at build for as long as possible.
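Chaining RUN statements is the easiest of these to demonstrate. A minimal Dockerfile sketch (curl here is just an example package):

# Bad: three RUN instructions create three layers, and the apt cache
# removed in the last step still lives on in the earlier layers
# RUN apt-get update
# RUN apt-get install -y curl
# RUN rm -rf /var/lib/apt/lists/*

# Good: one chained RUN creates a single layer, and the apt cache
# never makes it into the image at all
RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*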

Other steps we can take to build an optimized image

Choosing the right base image

centos     latest         9f38484d220f   7 months ago   202MB

vs

python     3.7.3-alpine   2caaa0e9feab   3 months ago   87.2MB
  • In this case, if I am building a Python application, I don't need the entire CentOS operating system; I just need a minimal base OS (which is what Alpine provides) with the Python binaries installed on top of it.
FROM ubuntu:latest  --> FROM python:3.7.3-alpine

LABEL maintainer="[email protected]"

RUN apt-get update -y && \        --> we don't need this, as the Python image already includes pip
    apt-get install -y python-pip python-dev

COPY ./requirements.txt /app/requirements.txt

WORKDIR /app

RUN pip install -r requirements.txt

COPY . /app

ENTRYPOINT [ "python" ]

CMD [ "app.py" ]

NOTE: There might be cases where you need a full base OS

  • Security
  • Compliance
  • Ease of development

Use of Scratch

  • scratch is a special, empty base image.
  • Use it to build your own base images.
  • OR build a minimal image that runs a binary and nothing else:
FROM scratch
COPY hello /
CMD ["/hello"]
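Note that the hello binary must be statically linked, since scratch contains no shared libraries at all. A minimal sketch using Go (assuming Go is installed and a hello.go sits next to the Dockerfile above):

# Build a statically linked binary; disabling CGO removes the libc dependency
$ CGO_ENABLED=0 go build -o hello hello.go

# Build and run the scratch-based image
$ docker build -t hello-scratch .
$ docker run --rm hello-scratch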

More info: https://docs.docker.com/develop/develop-images/baseimages/

Use multi-stage builds, as I mentioned on Day 7

21 Days of Docker – Day 16 – Introduction to Docker Compose

Welcome to Day 16 of 21 Days of Docker. The topic for today is Docker Compose.

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.

Install Compose

  • Run this command to download the current stable release of Docker Compose:
$ sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  • Set the execute permission
sudo chmod +x /usr/local/bin/docker-compose
  • To verify it
$ docker-compose --version
docker-compose version 1.24.1, build 4667896b
  • Define the services that make up your app in docker-compose.yml
version: '3'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  redis:
    image: "redis:alpine"
  • This Compose file defines two services: web and redis.

Web service

  • The web service uses an nginx image. It maps port 8080 on the host machine to the container's exposed port 80, the default port for the nginx web server.

Redis service

  • The redis service uses a public Redis image pulled from the Docker Hub registry.
  • To validate your configuration
$ docker-compose config
services:
  redis:
    image: redis:alpine
  web:
    image: nginx
    ports:
    - 8080:80/tcp
version: '3.0'
  • Start up your application by running docker-compose up (here with -d to run in detached mode):
$ docker-compose up -d

Creating network "cloud_user_default" with the default driver
Creating cloud_user_redis_1 ... done
Creating cloud_user_web_1   ... done
  • To bring down the service
$ docker-compose down
Removing cloud_user_web_1   ... done
Removing cloud_user_redis_1 ... done
Removing network cloud_user_default

The famous WordPress example I will leave up to you to test:

https://docs.docker.com/compose/wordpress/

version: '3.3'

services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: somewordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress
       WORDPRESS_DB_NAME: wordpress
volumes:
    db_data: {}

Deploy a stack to a swarm

When running Docker Engine in swarm mode, you can use docker stack deploy to deploy a complete application stack to the swarm. The deploy command accepts a stack description in the form of a Compose file.

$ docker stack deploy -c docker-compose.yml mystack
Creating network mystack_default
Creating service mystack_web
Creating service mystack_redis
  • List stacks
$ docker stack ls
NAME                SERVICES            ORCHESTRATOR
mystack             2                   Swarm
  • List the tasks in the stack
$ docker stack ps  mystack
ID                  NAME                IMAGE               NODE                          DESIRED STATE       CURRENT STATE            ERROR               PORTS
gd34he1ej70p        mystack_redis.1     redis:alpine        plakhera14c.example.com   Running             Running 14 seconds ago                       
r3d8fmrigxwx        mystack_web.1       nginx:latest        plakhera12c.example.com   Running             Running 18 seconds ago                       
  • As you can see, redis is deployed on plakhera14c.example.com and web on plakhera12c.example.com.
  • List the services in the stack
$ docker stack services mystack
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
3mcj15hyaq6a        mystack_web         replicated          1/1                 nginx:latest        *:8080->80/tcp
j1h8c1ygn796        mystack_redis       replicated          1/1                 redis:alpine        
  • Remove one or more stacks
$ docker stack rm  mystack
Removing service mystack_redis
Removing service mystack_web
Removing network mystack_default

21 Days of Docker – Day 15 – Introduction to Docker Swarm – Part 2

On Day 14, I gave you a basic introduction to Docker Swarm; let's explore Swarm more in depth.

Adding Network and Publishing Ports to Swarm Tasks

  • Publishing ports for Swarm tasks is similar to what we did for standalone Docker containers:
$ docker service create --name mypublishportservice --replicas 2 -p 8080:80 nginx
khwcntzjltfmxu8rwxy208wb2
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged 
  • Verify it
$ docker service ls
ID                  NAME                   MODE                REPLICAS            IMAGE               PORTS
juqr8xuetjh1        myglobal               global              3/3                 nginx:latest        
khwcntzjltfm        mypublishportservice   replicated          2/2                 nginx:latest        *:8080->80/tcp

$ docker service ps mypublishportservice
ID                  NAME                     IMAGE               NODE                          DESIRED STATE       CURRENT STATE            ERROR               PORTS
mtgrus1pk5m0        mypublishportservice.1   nginx:latest        plakhera12c.mylabserver.com   Running             Running 20 seconds ago                       
ul3vtb79mu8r        mypublishportservice.2   nginx:latest        plakhera14c.mylabserver.com   Running             Running 19 seconds ago                       
  • Go to one of the nodes and try to access it on port 8080
$ curl 172.31.21.46:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Lock your swarm to protect its encryption key

  • When Docker restarts, both the TLS key used to encrypt communication among swarm nodes and the key used to encrypt and decrypt Raft logs on disk are loaded into each manager node's memory.
  • Docker 1.13 introduces the ability to protect the mutual TLS encryption key and the key used to encrypt and decrypt Raft logs at rest, by allowing you to take ownership of these keys and to require manual unlocking of your managers. This feature is called autolock.

Enable or disable autolock on an existing swarm

  • To enable autolock on an existing swarm, set the autolock flag to true.
# docker swarm update --autolock=true
Swarm updated.
To unlock a swarm manager after it restarts, run the `docker swarm unlock`
command and provide the following key:

    SWMKEY-1-XXXXXXXXXXXXXXXXXXXXXX

Please remember to store this key in a password manager, since without it you
will not be able to restart the manager.
  • Store the key in a safe place, such as in a password manager.
  • When Docker restarts, you need to unlock the swarm; while it is locked, service-related commands fail with an error saying the swarm is locked. To simulate a restart:
$ sudo systemctl restart docker
  • To unlock a locked swarm, use docker swarm unlock.
$ docker swarm unlock
Please enter unlock key: 

View the current unlock key for a running swarm

$ docker swarm unlock-key
To unlock a swarm manager after it restarts, run the `docker swarm unlock`
command and provide the following key:

    SWMKEY-1-XXXXXXXXXXXXXXXXXXXXXXX

Please remember to store this key in a password manager, since without it you
will not be able to restart the manager.

Rotate the unlock key

$ docker swarm unlock-key --rotate
Successfully rotated manager unlock key.

To unlock a swarm manager after it restarts, run the `docker swarm unlock`
command and provide the following key:

    SWMKEY-1-XXXXXXXXXXXXXXXXXXXXXXX

Please remember to store this key in a password manager, since without it you
will not be able to restart the manager.

Disable autolock

docker swarm update --autolock=false

Mount Volumes with Swarm

  • To create/mount a volume for a service using Swarm:
$ docker service create --name mytestvolservice --mount type=volume,source=mytestvol,target=/mytestvol nginx
uh2t9a60p11f7ylehr6jrv0uo
overall progress: 1 out of 1 tasks 
1/1: running   
verify: Service converged 
  • To verify it
$ docker service ps mytestvolservice
ID                  NAME                 IMAGE               NODE                          DESIRED STATE       CURRENT STATE            ERROR               PORTS
yzrwdg5vwgbl        mytestvolservice.1   nginx:latest        plakhera13c.mylabserver.com   Running             Running 13 seconds ago                       
  • As you can see, this has been created on the plakhera13c machine; let's log in to that machine:
$ docker volume ls
DRIVER              VOLUME NAME
local               mytestvol
  • Get more information about the volume
$ docker volume inspect mytestvol
[
    {
        "CreatedAt": "2019-10-21T01:43:03Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/mytestvol/_data", <---
        "Name": "mytestvol",
        "Options": null,
        "Scope": "local"
    }
]
$ sudo ls -l /var/lib/docker/volumes/mytestvol/_data
total 0
  • Let's log in to the container:
$ docker exec -it c98ffe8fe655 bash
# cd mytestvol/
# touch mytestfile
  • Log out from the container and see if the file exists on the Docker host:
$ sudo ls -l /var/lib/docker/volumes/mytestvol/_data
total 0
-rw-r--r--. 1 root root 0 Oct 21 01:45 mytestfile
  • Now the question is: if you remove the service, does the volume still exist?
$ docker service rm mytestvolservice
mytestvolservice
  • Let's verify it. Yes, it still exists 🙂
$ docker volume ls
DRIVER              VOLUME NAME
local               mytestvol

Add or remove label metadata

Node labels provide a flexible method of node organization. You can also use node labels in service constraints. Apply constraints when you create a service to limit the nodes where the scheduler assigns tasks for the service.

  • First, let's get the node ID:
$ docker node ls
ID                            HOSTNAME                      STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
ws27unxgekvajgwtnj43tywsy *   plakhera12c.mylabserver.com   Ready               Active              Leader              19.03.4
ce05mu24b2p600ecejmy5gj8x     plakhera13c.mylabserver.com   Ready               Active                                  19.03.4
s0tcp7y6sw5l4bc0d622mtv21     plakhera14c.mylabserver.com   Ready               Active                                  19.03.4
  • Apply the label
$ docker node update --label-add region=us-west-2 s0tcp7y6sw5l4bc0d622mtv21
s0tcp7y6sw5l4bc0d622mtv21
  • Verify it
$ docker node inspect s0tcp7y6sw5l4bc0d622mtv21
[
    {
        "ID": "s0tcp7y6sw5l4bc0d622mtv21",
        "Version": {
            "Index": 145
        },
        "CreatedAt": "2019-10-23T03:08:41.624202115Z",
        "UpdatedAt": "2019-10-23T05:29:34.502968372Z",
        "Spec": {
            "Labels": { <----------
                "region": "us-west-2"
            },
  • Create the service using the constraint
$ docker service create --name myserviceconstraint1 --constraint node.labels.region==us-west-2 --replicas 1 nginx
s1cq2avedvt1xeetbjv353j0a
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service converged 
  • Verify it
$ docker service ps myserviceconstraint1
ID                  NAME                     IMAGE               NODE                          DESIRED STATE       CURRENT STATE            ERROR               PORTS
shjn105n69hi        myserviceconstraint1.1   nginx:latest        plakhera14c.mylabserver.com   Running             Running 24 seconds ago                       
