21 Days of Docker - Day 4 - Docker Containers Under the Hood

Welcome to Day 4. So far we have covered the basics of Docker; now let’s move our discussion from a 10,000 ft overview down to 1,000 ft.

But before we go there, based on the last three days’ knowledge, can you answer a few of my questions?

Q1: What is the Kernel version of your Docker container?

Q2: Why does the first process inside a Docker container run as PID 1?

Q3: How much memory is allocated to a Docker container by default?

Q4: Is there any way to restrict how much memory a container can use?

Q5: How does a container get its IP address and communicate with the external world?

These are the questions that popped up in my mind when I first started exploring Docker. You may not have answers to them if you are new to Docker, but if you have already started thinking in this direction, you are heading the right way 🙂

21 Days of Docker - Day 3 - Building Containers, Continued

On Day 2, we created our first container, in detached mode.

But we haven’t logged into that container yet; now it’s time to do so. Last time, the issue we faced was that once we logged out of the container, it shut down. Let’s see how we can deal with this problem.

  • We have this container up and running
$ docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
3afb4a8cfeb7        nginx               "nginx -g 'daemon of…"   37 hours ago        Up 3 seconds        80/tcp              mytestserver
  • It’s time to log into this container, but this time using docker exec. After running the command below, I am inside my Docker container.
$ docker container exec -it 3afb4a8cfeb7 bash
  • What exec and its flags do:
exec                       Run a command in a running container
-i, --interactive          Keep STDIN open even if not attached
-t, --tty                  Allocate a pseudo-TTY
  • Let’s dig deeper and see the difference -i and -t make
  • This time, let’s start with the -i flag only
$ docker container exec -i 3afb4a8cfeb7 bash
  • As you can see, with -i alone I get an interactive session but no terminal
  • Let’s try the same command, but this time with -t only
$ docker container exec -t 3afb4a8cfeb7 bash
root@3afb4a8cfeb7:/# ls
  • As you can see, this time we get a terminal, but I am not able to interact with it
  • So make this part of your muscle memory: use -i and -t in tandem whenever you want to log in to a container.

21 Days of Docker - Day 2 - First Docker Container

On Day 1, I gave you a basic introduction to Docker and showed how to install it. Now it’s time to create your first Docker container.

Type the below command to run your first docker container

$ docker container run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:

For more examples and ideas, visit:

Let’s see what happened behind the scenes

  • As this is my first container, the Docker engine tries to find an image named hello-world.
  • But since we just got started, there is no such image stored locally:
Unable to find image 'hello-world:latest' locally
  • The Docker engine goes to Docker Hub (for the time being, think of Docker Hub as GitHub for Docker images), looking for an image named hello-world


  • It finds the image, pulls it down and then runs it in a container.
  • The hello-world image’s only function is to output the text you see in the terminal, after which the container exits

21 Days of Docker - Day 1 - Introduction to Docker

What is Docker?

Docker is an open platform for developing, shipping, and running applications. Its main benefit is packaging applications into containers, making them portable to any system running a Linux, Mac, or Windows operating system (OS).

It follows the build once, run anywhere approach.

Docker Engine

Docker Engine is a client-server application with these major components:

  • A server which is a type of long-running program called a daemon process (the dockerd command).
  • A REST API that specifies interfaces that programs can use to talk to the daemon and instruct it what to do.
  • A command line interface (CLI) client (the docker command).
  • The CLI uses the Docker REST API to control or interact with the Docker daemon through scripting or direct CLI commands. Many other Docker applications use the underlying API and CLI.
  • The daemon creates and manages Docker objects, such as images, containers, networks, and volumes.
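To make the client-server split concrete, here is a minimal sketch that talks to the daemon’s REST API directly over its Unix socket, the same API the docker CLI uses under the hood. It assumes the default Linux socket path /var/run/docker.sock, and the function names are my own:

```python
import socket

DOCKER_SOCK = "/var/run/docker.sock"  # default daemon socket on Linux

def build_request(path):
    # Minimal HTTP/1.0 GET for the Docker Engine REST API; HTTP/1.0 so the
    # daemon closes the connection when the response is complete.
    return ("GET %s HTTP/1.0\r\nHost: docker\r\n\r\n" % path).encode()

def docker_api_get(path, sock_path=DOCKER_SOCK):
    # The docker CLI is ultimately just a client of this same REST API.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(build_request(path))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

# Example (requires a running daemon): docker_api_get("/version")
```

On a host with Docker running, docker_api_get("/version") returns the same version information that docker version prints.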

21 Days of Docker

Thanks to everyone who was part of my earlier journey, 100 Days of DevOps: https://100daysofdevops.com/day-100-100-days-of-devops/

As I promised earlier, I am coming up with something better in the next few months: not a full-fledged 100 days, but something broken down into smaller components. This time it is 21 Days of Docker.

Starting Oct 7, I am running a program called “21 Days of Docker”. The main idea is to spend at least one hour every day for the next 21 days sharing Docker knowledge, and then share my progress.

This time, to make learning more interactive, I am adding:

  • Slack 
  • Meetup

Please feel free to join this group.

Slack: https://join.slack.com/t/100daysofdevops/shared_invite/enQtNzg1MjUzMzQzMzgxLWM4Yjk0ZWJiMjY4ZWE3ODBjZjgyYTllZmUxNzFkNTgxZjQ4NDlmZjkzODAwNDczOTYwOTM2MzlhZDNkM2FkMDA

Meetup Group

If you are in the Bay Area, please join this meetup group: 100daysofdevops (Newark, CA) on www.meetup.com

Here are some of my Docker recommendations; please feel free to add anything I am missing.


Play with Docker Classroom (training.play-with-docker.com)
The Play with Docker classroom brings you labs and tutorials that help you get hands-on experience using Docker.

Linux Academy (Linux Academy offers a 7-day trial: https://linuxacademy.com/join/pricing )

Learn Docker by Doing | Linux Academy (linuxacademy.com)

Docker — Deep Dive | Linux Academy (linuxacademy.com)

Docker Certified Associate (DCA) | Linux Academy (linuxacademy.com)

Safari Books Online (Safari offers a 10-day free trial: https://learning.oreilly.com/register/ )
Docker Containers, Third Edition: 4+ hours of video instruction (learning.oreilly.com)

Udemy (Udemy has a 30-day refund policy: https://support.udemy.com/hc/en-us/sections/206457407-Refunds )
Docker Certified Associate 2019 (www.udemy.com): designed for aspirants who intend to take the “Docker Certified Associate” exam.

Day 100 – 100 Days of DevOps

Welcome to Day 100 of 100 Days of DevOps

Finally, limping and crawling, we reached Day 100 of 100 Days of DevOps. I apologize for not being consistent in the latter half, especially after Day 97, but I learned a lot, and I believe you also got a chance to learn something from my blogs.

I promise that I will come up with something better in the next few months: not a full-fledged 100 days, but something broken down into smaller components, e.g. 30 Days of DevOps.

Once again, thank you to everyone who followed me. I will continue to post on my blog.

Thanks, everyone, and Happy Learning!

100 Days Journey


Day 1-Introduction to CloudWatch Metrics


Day 2-Introduction to Simple Notification Service(SNS)


Day 3-Introduction to CloudTrail


Day 4-CloudWatch log agent Installation — Centos7


Day 5-CloudWatch to Slack Notification


Day 6-CloudWatch Logs(Metric Filters)


Day 7-AWS S3 Event


Day 8-Introduction to AWS Security Token Service(STS)


Day 9-Delegate Access Across AWS Accounts Using IAM Roles


Day 10- Restricting User to Launch only T2 Instance


Day 11- Restricting S3 Bucket Access to Specific IP Addresses


Day 12- How to ensure that users can’t turn off CloudTrail


Day 13- How to stop/start EC2 instance on schedule basis to save cost


Day 14- How to automate the process of EBS Snapshot Creation


Day 22-Introduction to Key Management System(KMS)


Day 23- How to encrypt EBS Volume using KMS


Day 24- How to encrypt S3 Bucket using KMS


Day 25-AWS S3 Bucket using Terraform


Day 26-Introduction to IAM


Day 28- Introduction to VPC Flow Logs


Day 29- Introduction to RDS — MySQL


Day 30-Introduction to AWS CLI


Day 31-Introduction to VPC Peering


Day 32-Introduction to NAT Gateway


Day 33- On Demand Hibernate


Day 35-AWS S3 Intelligent-Tiering (S3 INT)


Day 36-Introduction to AWS System Manager


Day 37- Automate the Process of AMI Creation Using System Manager Maintenance Windows


Day 38-Introduction to Transit Gateway


Day 39-Introduction to VPC EndPoint


Day 40-Introduction to AWS Config


Day 41-Real-Time Apache Log Analysis using Amazon Kinesis and Amazon Elasticsearch Service


Day 42-Audit your AWS Environment


Day 43- Introduction to EC2


Day 44-S3 Cross Region Replication(CRR)


Day 45-Simple Backup Solution using S3, Glacier and VPC Endpoint


Day 46-Introduction to Amazon Glacier


Day 47-Introduction to Amazon Elastic File System (EFS)


Day 48- Threat detection and mitigation at AWS


Day 49-Introduction to Route53


Day 50-Introduction to Route53 Failover


Day 69-Introduction to AWS Lambda


Day 70-Introduction to Boto3


Day 71-EC2 Instance creation using Lambda


Day 92-Choosing Right EC2 Instance Type


Day 98- AWS Lambda with Terraform Code

Day 99- AWS Boto3


Day 15- Introduction to Terraform


Day 16- Building VPC using Terraform


Day 17- Creating EC2 Instance using Terraform


Day 18-Add monitoring to these instances using Terraform(CloudWatch and SNS)


Day 19 – Application Load Balancer using Terraform


Day 20— Auto-Scaling Group using Terraform


Day 21- MySQL RDS Database Creation using Terraform



Day 27- Introduction to Packer


Day 34- Terraform Pipeline using Jenkins



Day 51-Introduction to Bash Scripting


Day 52-Conditional Testing in Shell


Day 53-Introduction to Regular Expression — Part 1


Day 65-Bash Script to Monitor Service


Day 85- Shell Script to find the failed login


Day 91-How to check if the file exists (Bash/Python)



Day 54-And You Thought You Knew RPM


Day 55-Introduction to YUM


Day 56-Debugging Performance Issue using SAR


Day 57-Debugging I/O Performance Issue


Day 62-Useful Linux Command for Network Troubleshooting


Day 63- Wireshark for HTTP/HTTPS Analysis


Day 66-Linux Boot Process


Day 67-Introduction to Chrony


Day 68-Introduction to Systemd


Day 76-How Linux Kernel is organized


Day 77-Process Management in Linux



Day 73- Introduction to Ansible



Day 74- Introduction to GIT


Docker & Kubernetes

Day 58-Docker Basics


Day 59- Introduction to DockerFile


Day 72-Introduction to Kubernetes



Day 60-Introduction to Jenkins


Day 61-Jenkins Agent Node



Day 64- Regular Expression using Python


Day 78- Python OS/Subprocess Module


Day 79-Apache Log Parser Using Python


Day 80-Python Unit Testing(Pytest)


Day 81-Debugging Python Code


Day 82- Python Object Oriented Programming(OOP)


Day 86-Python Flow Control(if-else statement)


Day 87-While/For Loop Python


Day 88-Lists in Python


Day 89-Python Files I/O


Day 90- Try and Except Statement Python


Day 93-Python Functions


Day 94-Introduction to Numpy for Data Analysis


Day 95-Introduction to Django



Day 75- Introduction to Fabric


Day 83-Introduction to Splunk


Day 84-Introduction to ElasticSearch


Day 96-Document Object Model(DOM)


Day 97-Introduction to JQuery


100 Days of DevOps — Day 99 - AWS Boto3

What is Boto3?

Boto3 is the Amazon Web Services (AWS) SDK for Python. It enables Python developers to create, configure, and manage AWS services, such as EC2 and S3. Boto3 provides an easy to use, object-oriented API, as well as low-level access to AWS services.

Boto3 is built on top of a library called Botocore, which it shares with the AWS CLI. Botocore provides the low-level clients, sessions, credentials, and configuration data; Boto3 builds on top of it by adding its own sessions, resources, collections, waiters, and paginators.




100 Days of DevOps — Day 98- AWS Lambda with Terraform Code

What is AWS Lambda?

With AWS Lambda, you can run code without provisioning or managing servers. You pay only for the compute time that you consume — there’s no charge when your code isn’t running. You can run code for virtually any type of application or backend service — all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

  • To start with Lambda, go to https://us-west-2.console.aws.amazon.com/lambda → Create a function

  • When you create a function, you have three options:
* Author from scratch: self-explanatory, i.e. you write your own function
* Use a blueprint: build a Lambda application from sample code and configuration presets for common use cases (provided by AWS)
* Browse serverless app repository: deploy a sample Lambda application from the AWS Serverless Application Repository (published by other developers and AWS Partners)
  • Function name: HelloWorld
  • Runtime: Choose Python3.7 from the dropdown
  • Permission: For the time being choose the default permission
  • Click Create Function

Invoking Lambda Function

  • When building applications on AWS Lambda, the core components are Lambda functions and event sources. An event source is the AWS service or custom application that publishes events, and a Lambda function is the custom code that processes the events. Typical combinations include:
* Amazon S3 Pushes Events
* AWS Lambda Pulls Events from a Kinesis Stream
* HTTP API requests through API Gateway
* CloudWatch Schedule Events
  • From the list select CloudWatch Events
Reference: https://www.youtube.com/watch?v=WbHw14hF7lU
NOTE: It’s an old slide, GO is already supported
  • As you can see under CloudWatch Events it says configuration required
  • Rule: Create a new rule
  • Rule name: Everyday
  • Rule description: Give your Rule some description
  • Rule type: choose Schedule expression and set the rate to rate(1 day) (i.e. it will trigger every day)

Reference: Schedule Expressions Using Rate or Cron – AWS Lambda (docs.aws.amazon.com). AWS Lambda supports standard rate and cron expressions for frequencies of up to once per minute.

  • Click on Add and Save
  • Now go back to your Lambda Code(HelloWorld)
import json

def lambda_handler(event, context):
    # TODO implement
    print(event)  # print the incoming event
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
  • Add the print(event) line, which simply prints the incoming event
  • Again save it
  • Let’s set up a simple test event: click on Test
  • Under Event template, search for Amazon CloudWatch
  • Event Name: Give your event some name and test it
  • Go back and this time Click on Monitoring
  • Click on View logs in CloudWatch
  • Click on the log stream and you will see the same logs you see in Lambda console

Lambda Programming Model

  • Lambda supports a bunch of programming languages
  • You write code for your Lambda function in one of the languages AWS Lambda supports. Regardless of the language you choose, there is a common pattern to writing code for a Lambda function that includes the following core concepts.
* Handler: the function AWS Lambda calls to start execution of your Lambda function; it acts as the entry point
  • As you can see, the handler starts with lambda_function, which is the Python script name, followed by lambda_handler, the function that acts as the entry point and receives event and context
  • Event: We already saw in the previous example where we passed the CloudWatch Event to our code
  • Context — AWS Lambda also passes a context object to the handler function, as the second parameter. Via this context object, your code can interact with AWS Lambda. For example, your code can find the execution time remaining before AWS Lambda terminates your Lambda function.
  • Logging — Your Lambda function can contain logging statements. AWS Lambda writes these logs to CloudWatch Logs.
  • Exceptions — Your Lambda function needs to communicate the result of the function execution to AWS Lambda. Depending on the language you author your Lambda function code, there are different ways to end a request successfully or to notify AWS Lambda an error occurred during the execution.
  • One more thing I want to highlight is the timeout
  • You can now set the timeout value for a function to any value up to 15 minutes. When the specified timeout is reached, AWS Lambda terminates execution of your Lambda function. As a best practice, you should set the timeout value based on your expected execution time to prevent your function from running longer than intended.
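The handler/event/context pattern described above is easy to exercise locally before deploying. A small sketch, where the FakeContext class and the sample event are my own stand-ins for what AWS actually passes in:

```python
import json

def lambda_handler(event, context):
    # Handler: the entry point Lambda calls; it receives the event payload
    # and a context object describing the invocation.
    print("Remaining time (ms):", context.get_remaining_time_in_millis())
    return {
        'statusCode': 200,
        'body': json.dumps('Hello, %s!' % event.get('name', 'Lambda'))
    }

class FakeContext:
    # Minimal stand-in for the real Lambda context object
    function_name = "HelloWorld"

    def get_remaining_time_in_millis(self):
        return 3000  # pretend 3 seconds remain before the timeout

# Invoke the handler locally with a hand-made event, as Lambda would
result = lambda_handler({'name': 'world'}, FakeContext())
print(result)
```

This kind of local invocation is also a convenient seam for unit-testing your handler logic without touching AWS.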

Common Use Cases of Lambda

Terraform Code

We performed all of those steps manually; let’s try to automate them using Terraform.

  • Step 1: Create your test Python function
def lambda_handler(event, context):
    print ("Hello from terraform world")
    return "hello from terraform world"
  • Now let’s zip it up
$ zip lambda.zip lambda.py 
  adding: lambda.py (deflated 27%)
  • Step 2: Define your Lambda resource
resource "aws_lambda_function" "test_lambda" {
  filename      = "lambda.zip"
  function_name = "lambda_handler"
  role          = "${aws_iam_role.iam_for_lambda.arn}"
  handler       = "lambda.lambda_handler"

  # The filebase64sha256() function is available in Terraform 0.11.12 and later
  # For Terraform 0.11.11 and earlier, use the base64sha256() and file() functions:
  # source_code_hash = "${base64sha256(file("lambda_function_payload.zip"))}"
  source_code_hash = "${base64sha256("lambda.zip")}"

  runtime = "python2.7"
}
  • filename: the name of the zip file you created in the previous step
  • function_name: the name the Lambda function will have in AWS
  • role: the IAM role attached to the Lambda function. This governs both who/what can invoke your Lambda function and which resources the function has access to
  • handler: the function entry point in your code, in the form filename.method_name (the filename is lambda.py, written without the extension, and the method is lambda_handler from def lambda_handler(event, context))
  • source_code_hash: used to trigger updates. Must be set to a base64-encoded SHA-256 hash of the package file specified with either filename or s3_key. The usual way to set this is filebase64sha256("file.zip") (Terraform 0.11.12 and later) or base64sha256(file("file.zip")) (Terraform 0.11.11 and earlier), where “file.zip” is the local filename of the Lambda function source archive
  • runtime: the identifier of the function’s runtime
Valid values: nodejs8.10 | nodejs10.x | java8 | python2.7 | python3.6 | python3.7 | dotnetcore1.0 | dotnetcore2.1 | go1.x | ruby2.5 | provided
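If you want to check what value Terraform derives for source_code_hash, the recipe is simple to reproduce by hand: SHA-256 over the raw bytes of the archive, base64-encoded. A quick sketch (the helper name is mine, chosen to mirror Terraform’s function):

```python
import base64
import hashlib

def filebase64sha256(path):
    # Same recipe as Terraform's filebase64sha256(): SHA-256 digest of the
    # raw file bytes, then base64-encode the digest.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return base64.b64encode(digest).decode()

# e.g. filebase64sha256("lambda.zip") yields the value Terraform stores in
# source_code_hash; if the zip's contents change, the hash (and the plan) changes.
```

Comparing this against the hash shown in terraform plan output is a handy way to confirm which archive Terraform is actually looking at.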
  • Step 3: Create an IAM role and logging policy
resource "aws_iam_role" "iam_for_lambda" {
  name = "iam_for_lambda"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

# See also the following AWS managed policy: AWSLambdaBasicExecutionRole
resource "aws_iam_policy" "lambda_logging" {
  name        = "lambda_logging"
  path        = "/"
  description = "IAM policy for logging from a lambda"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*",
      "Effect": "Allow"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = "${aws_iam_role.iam_for_lambda.name}"
  policy_arn = "${aws_iam_policy.lambda_logging.arn}"
}

Reference https://www.terraform.io/docs/providers/aws/r/lambda_function.html

  • Step 4: terraform init: initializes a Terraform working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control.
$ terraform init

Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "aws" (2.23.0)...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 2.23"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
  • Step 5: terraform plan: creates an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_cloudwatch_log_group.example
      id:                             <computed>
      arn:                            <computed>
      name:                           "/aws/lambda/lambda_handler"
      retention_in_days:              "14"

  + aws_iam_policy.lambda_logging
      id:                             <computed>
      arn:                            <computed>
      description:                    "IAM policy for logging from a lambda"
      name:                           "lambda_logging"
      path:                           "/"
      policy:                         "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": [\n        \"logs:CreateLogStream\",\n        \"logs:PutLogEvents\"\n      ],\n      \"Resource\": \"arn:aws:logs:*:*:*\",\n      \"Effect\": \"Allow\"\n    }\n  ]\n}\n"

  + aws_iam_role.iam_for_lambda
      id:                             <computed>
      arn:                            <computed>
      assume_role_policy:             "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": \"sts:AssumeRole\",\n      \"Principal\": {\n        \"Service\": \"lambda.amazonaws.com\"\n      },\n      \"Effect\": \"Allow\",\n      \"Sid\": \"\"\n    }\n  ]\n}\n"
      create_date:                    <computed>
      force_detach_policies:          "false"
      max_session_duration:           "3600"
      name:                           "iam_for_lambda"
      path:                           "/"
      unique_id:                      <computed>

  + aws_iam_role_policy_attachment.lambda_logs
      id:                             <computed>
      policy_arn:                     "${aws_iam_policy.lambda_logging.arn}"
      role:                           "iam_for_lambda"

  + aws_lambda_function.test_lambda
      id:                             <computed>
      arn:                            <computed>
      filename:                       "lambda.zip"
      function_name:                  "lambda_handler"
      handler:                        "lambda.lambda_handler"
      invoke_arn:                     <computed>
      last_modified:                  <computed>
      memory_size:                    "128"
      publish:                        "false"
      qualified_arn:                  <computed>
      reserved_concurrent_executions: "-1"
      role:                           "${aws_iam_role.iam_for_lambda.arn}"
      runtime:                        "python2.7"
      source_code_hash:               "Gpu07NPcj26NrKv0Ne6BbZkfDRuM3ozHHqCFUWH9Sqg="
      source_code_size:               <computed>
      timeout:                        "3"
      tracing_config.#:               <computed>
      version:                        <computed>

Plan: 5 to add, 0 to change, 0 to destroy.


Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Step 6: terraform apply: applies the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan.

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_cloudwatch_log_group.example
      id:                             <computed>
      arn:                            <computed>
      name:                           "/aws/lambda/lambda_handler"
      retention_in_days:              "14"

  + aws_iam_policy.lambda_logging
      id:                             <computed>
      arn:                            <computed>
      description:                    "IAM policy for logging from a lambda"
      name:                           "lambda_logging"
      path:                           "/"
      policy:                         "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": [\n        \"logs:CreateLogStream\",\n        \"logs:PutLogEvents\"\n      ],\n      \"Resource\": \"arn:aws:logs:*:*:*\",\n      \"Effect\": \"Allow\"\n    }\n  ]\n}\n"

  + aws_iam_role.iam_for_lambda
      id:                             <computed>
      arn:                            <computed>
      assume_role_policy:             "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": \"sts:AssumeRole\",\n      \"Principal\": {\n        \"Service\": \"lambda.amazonaws.com\"\n      },\n      \"Effect\": \"Allow\",\n      \"Sid\": \"\"\n    }\n  ]\n}\n"
      create_date:                    <computed>
      force_detach_policies:          "false"
      max_session_duration:           "3600"
      name:                           "iam_for_lambda"
      path:                           "/"
      unique_id:                      <computed>

  + aws_iam_role_policy_attachment.lambda_logs
      id:                             <computed>
      policy_arn:                     "${aws_iam_policy.lambda_logging.arn}"
      role:                           "iam_for_lambda"

  + aws_lambda_function.test_lambda
      id:                             <computed>
      arn:                            <computed>
      filename:                       "lambda.zip"
      function_name:                  "lambda_handler"
      handler:                        "lambda.lambda_handler"
      invoke_arn:                     <computed>
      last_modified:                  <computed>
      memory_size:                    "128"
      publish:                        "false"
      qualified_arn:                  <computed>
      reserved_concurrent_executions: "-1"
      role:                           "${aws_iam_role.iam_for_lambda.arn}"
      runtime:                        "python2.7"
      source_code_hash:               "Gpu07NPcj26NrKv0Ne6BbZkfDRuM3ozHHqCFUWH9Sqg="
      source_code_size:               <computed>
      timeout:                        "3"
      tracing_config.#:               <computed>
      version:                        <computed>

Plan: 5 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_iam_policy.lambda_logging: Creating...
  arn:         "" => "<computed>"
  description: "" => "IAM policy for logging from a lambda"
  name:        "" => "lambda_logging"
  path:        "" => "/"
  policy:      "" => "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": [\n        \"logs:CreateLogStream\",\n        \"logs:PutLogEvents\"\n      ],\n      \"Resource\": \"arn:aws:logs:*:*:*\",\n      \"Effect\": \"Allow\"\n    }\n  ]\n}\n"
aws_iam_role.iam_for_lambda: Creating...
  arn:                   "" => "<computed>"
  assume_role_policy:    "" => "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": \"sts:AssumeRole\",\n      \"Principal\": {\n        \"Service\": \"lambda.amazonaws.com\"\n      },\n      \"Effect\": \"Allow\",\n      \"Sid\": \"\"\n    }\n  ]\n}\n"
  create_date:           "" => "<computed>"
  force_detach_policies: "" => "false"
  max_session_duration:  "" => "3600"
  name:                  "" => "iam_for_lambda"
  path:                  "" => "/"
  unique_id:             "" => "<computed>"
aws_iam_policy.lambda_logging: Still creating... (10s elapsed)
aws_iam_role.iam_for_lambda: Still creating... (10s elapsed)
aws_iam_role.iam_for_lambda: Creation complete after 10s (ID: iam_for_lambda)
aws_lambda_function.test_lambda: Creating...
  arn:                            "" => "<computed>"
  filename:                       "" => "lambda.zip"
  function_name:                  "" => "lambda_handler"
  handler:                        "" => "lambda.lambda_handler"
  invoke_arn:                     "" => "<computed>"
  last_modified:                  "" => "<computed>"
  memory_size:                    "" => "128"
  publish:                        "" => "false"
  qualified_arn:                  "" => "<computed>"
  reserved_concurrent_executions: "" => "-1"
  role:                           "" => "arn:aws:iam::XXXXXX:role/iam_for_lambda"
  runtime:                        "" => "python2.7"
  source_code_hash:               "" => "Gpu07NPcj26NrKv0Ne6BbZkfDRuM3ozHHqCFUWH9Sqg="
  source_code_size:               "" => "<computed>"
  timeout:                        "" => "3"
  tracing_config.#:               "" => "<computed>"
  version:                        "" => "<computed>"
aws_iam_policy.lambda_logging: Creation complete after 11s (ID: arn:aws:iam::XXXXXX:policy/lambda_logging)
aws_iam_role_policy_attachment.lambda_logs: Creating...
  policy_arn: "" => "arn:aws:iam::XXXXXX:policy/lambda_logging"
  role:       "" => "iam_for_lambda"
aws_iam_role_policy_attachment.lambda_logs: Creation complete after 0s (ID: iam_for_lambda-20190814010350932300000001)
aws_lambda_function.test_lambda: Still creating... (10s elapsed)
aws_lambda_function.test_lambda: Still creating... (20s elapsed)
aws_lambda_function.test_lambda: Still creating... (30s elapsed)
aws_lambda_function.test_lambda: Still creating... (40s elapsed)
aws_lambda_function.test_lambda: Creation complete after 41s (ID: lambda_handler)
aws_cloudwatch_log_group.example: Creating...
  arn:               "" => "<computed>"
  name:              "" => "/aws/lambda/lambda_handler"
  retention_in_days: "" => "14"
aws_cloudwatch_log_group.example: Still creating... (10s elapsed)
aws_cloudwatch_log_group.example: Creation complete after 11s (ID: /aws/lambda/lambda_handler)

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

GitHub Source Code Link https://github.com/100daysofdevops/100daysofdevops/tree/master/aws-lambda

My road to AWS Certified Security - Specialty Certification

This is a continuation of my earlier post, My road to AWS Certified Solution Architect.


I wrote the AWS Certified Solution Architect exam almost 8 months back, and after clearing it I decided to write my second AWS exam within the next three months, but those three months became six. A couple of weeks back I watched the YouTube video “Inside the mind of a master procrastinator | Tim Urban” and could relate to this guy. That was when the Panic Monster hit my brain and told me this was the right time to write the next exam.

Panic Monster

YAY I cleared the exam!

WARNING: Some House Keeping task, before reading this blog 🙂 🙂

1: As everyone needs to sign an NDA with AWS, I can’t tell you the exact questions asked during the exam (nor do I have gigabytes of memory), but I can give you pointers on what to expect in the exam.

2: As we all know, AWS updates its infrastructure every day, so some of this material might no longer be relevant after a few days/weeks/months.

3: Please don’t ask for any exam dumps or questions; that defeats the whole purpose of the exam.

Exam Preparation

  • I highly recommend the Linux Academy course to everyone; Adrian Cantrill does an excellent job explaining all the concepts and covering every topic in depth.


  • My second recommendation is A Cloud Guru, especially their “Updates For 2019” section.


  • AWS re:Invent videos: I highly recommend going through these, as they will give you in-depth knowledge about each service.
  • AWS Documentation: the best documentation provided by any service provider. Don’t miss the FAQs for each service (especially KMS, IAM, and VPC).

Once you are done with the above preparation, it’s a good time to gauge your knowledge: check the AWS-provided sample questions.


Now coming back to the exam, the entire exam is divided into five main topics.

Based on my experience, you need to know these four services to clear this exam.

  • KMS
  • VPC
  • IAM
  • Identity Federation (this was a surprise package for me; I saw almost 5–6 questions related to it).

Domain 1: Incident Response

  • What steps will you perform if your ACCESS_KEY and SECRET_ACCESS_KEY are accidentally leaked on GitHub? (Tip: rotate the key immediately, update any application that uses this key (a good idea is to use a Role instead), and then disable/delete the leaked key.)
  • What steps to follow if your EC2 instance is compromised? (Tip: take a snapshot of the EBS volume, then rebuild the instance in your forensic subnet or isolate the instance.)
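The leaked-key rotation steps above can be sketched with the AWS CLI; this is an illustrative sketch only, and the user name is a hypothetical placeholder (the key ID is AWS's documented example value):

```shell
# 1. Create a new access key for the affected user (hypothetical user name)
aws iam create-access-key --user-name app-user

# 2. Update every application that uses the old key (better: switch to an IAM Role)

# 3. Deactivate the leaked key and verify nothing breaks
aws iam update-access-key --user-name app-user \
    --access-key-id AKIAIOSFODNN7EXAMPLE --status Inactive

# 4. Finally, delete the leaked key
aws iam delete-access-key --user-name app-user \
    --access-key-id AKIAIOSFODNN7EXAMPLE
```

Deactivating before deleting gives you a safe rollback window if some forgotten application was still using the old key.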


Domain 2: Logging and Monitoring


  • Make sure you understand that the same CloudTrail trail can be applied to all regions. The questions will try to trick you: do you create one trail per region, or can the same trail be applied to multiple regions? And what happens for future regions: will the same trail apply automatically, or do you need to create a new one?
  • Must try the CloudTrail multi-account scenario (where you create one central S3 bucket and push trails from different accounts). (Common issues: not able to push logs from a particular account? Does the S3 bucket policy look correct? Do we have an IAM Resource defined for that particular account?)


  • How to troubleshoot when the CloudWatch agent is not sending logs to a CloudWatch log group (some tips: is the CloudWatch agent running? Does the EC2 instance role have sufficient permissions to push logs to CloudWatch Logs?)
  • Use CloudWatch metric filters to filter events and create alerts (e.g. failed logins, or someone trying to break in with root credentials).
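As a sketch of the metric-filter idea, a filter for failed console logins might look like this (the log group, filter, and metric names are hypothetical labels; the pattern matches fields of the CloudTrail JSON event):

```shell
aws logs put-metric-filter \
    --log-group-name CloudTrail/DefaultLogGroup \
    --filter-name ConsoleSigninFailures \
    --filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }' \
    --metric-transformations \
        metricName=ConsoleSigninFailureCount,metricNamespace=CloudTrailMetrics,metricValue=1
```

A CloudWatch alarm on ConsoleSigninFailureCount then alerts you to repeated failed logins.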


VPC Flow Logs

  • Must remember this point: VPC Flow Logs are not for deep packet inspection or analysis (they only hold metadata); for deep packet inspection you need a third-party tool (e.g. Wireshark).
  • Understand the format of a VPC flow log record and check some sample flow logs (pay special attention to the ACCEPT vs REJECT field: are packets being REJECTED at the security group level or at the NACL?)
2 123456789010 eni-abc123de 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK
2 123456789010 eni-abc123de 49761 3389 6 20 4249 1418530010 1418530070 REJECT OK
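To make the default 14-field record format concrete, here is a small shell sketch that labels the interesting columns (the record is hypothetical; the two IP addresses are made-up placeholders for the srcaddr/dstaddr fields the samples above omit):

```shell
# Fields: version account-id interface-id srcaddr dstaddr srcport dstport
#         protocol packets bytes start end action log-status
record="2 123456789010 eni-abc123de 10.0.0.5 10.0.0.9 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK"

# Destination port is field 7, action is field 13
echo "$record" | awk '{print "dstport=" $7, "action=" $13}'
```

Here protocol 6 is TCP and destination port 22 is SSH, so this record describes an accepted SSH connection.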


S3 Events

  • The different types of S3 event notifications


AWS Config

  • In which cases are you going to use AWS Config? Some use cases:
* Ensure that EC2 instances launched in a particular VPC are properly tagged.
* Make sure that every instance is associated with at least one security group.
* Check to make sure that port 22 is not open in any production security group.
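As a sketch, the port-22 use case above maps to the AWS managed Config rule INCOMING_SSH_DISABLED; the rule name below is a hypothetical label:

```shell
# Flags security groups that allow unrestricted incoming SSH (0.0.0.0/0 on port 22)
aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "no-unrestricted-ssh",
  "Source": {
    "Owner": "AWS",
    "SourceIdentifier": "INCOMING_SSH_DISABLED"
  }
}'
```

Managed rules like this save you from writing a custom Lambda-backed rule for common compliance checks.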


AWS Inspector

  • Understand what the Inspector is used for



  • What is a rule package?

A rules package is a collection of security checks that can be configured as part of an assessment template and assessment run.

Amazon Inspector has two types of rules packages: the network reachability rules package, which checks the network accessibility of your Amazon EC2 instances, and host assessment rules packages, which check for vulnerabilities and insecure configurations on the Amazon EC2 instance. Host assessment rules packages include Common Vulnerabilities and Exposures (CVE), Center for Internet Security (CIS) operating system configuration benchmarks, and security best practices.

Domain 3: Infrastructure Security


  • Try to create a CloudFront distribution and make a note of each step.
  • What is the difference when you use your own SSL certificate vs a CloudFront-provided certificate?


  • Use of AWS WAF


  • Remember this: you can only use WAF with Amazon CloudFront and the Application Load Balancer (ALB).
  • Whenever a question asks about SQL injection or Cross-Site Scripting (XSS), think of WAF as the security solution.


  • Understand the difference between Security Groups and Network Access Control Lists (NACLs)
  • VPC endpoints and their policies
  • Example: Restricting Access to a Specific Endpoint
{
  "Version": "2012-10-17",
  "Id": "Policy1415115909152",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Principal": "*",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": ["arn:aws:s3:::my_secure_bucket",
                   "arn:aws:s3:::my_secure_bucket/*"],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}


Systems Manager

  • Use of Systems Manager Parameter Store (e.g. which service will you use to store a secret in AWS? If the question is related to a DB, prefer Secrets Manager)
  • How to use Systems Manager for patching (a question could be: to meet compliance requirements you need to regularly patch your servers; which AWS service can you use?)

Domain 4: Identity and Access Management

  • You will see a bunch of questions about IAM policies and what a particular policy does.
  • Make sure you are comfortable with and understand the difference between IAM policies vs resource policies (especially S3 bucket policies and KMS key policies).
  • Use of AWS Organizations (remember: Service Control Policies (SCPs) can only deny access; they cannot allow it)
  • Understand how AWS Security Token Service (STS) works; this is important not only for the exam but also as part of your daily job.

Active Directory

  • Expect 5–6 questions related to Active Directory.
  • Please brush up your concepts of Web Identity Federation and SAML.

Domain 5: Data Protection

  • Must try this scenario: KMS Bring Your Own Key (BYOK).


  • In which cases do you prefer CloudHSM over KMS? (Look for key terms like satisfying compliance requirements.)
  • Understand how key rotation works for AWS managed keys (automatically rotated every 3 years) vs customer managed keys (automatic rotation every 365 days, disabled by default) vs customer managed keys with imported key material (no automatic rotation)
  • KMS Grant: With grants, you can programmatically delegate the use of KMS customer master keys (CMKs) to other AWS principals. You can use them to allow access, but not deny it. Grants are typically used to provide temporary permissions or more granular permissions.
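A minimal grant sketch with the AWS CLI; the key ID and role ARN below are hypothetical placeholders:

```shell
# Delegate use of a CMK for Decrypt only, without editing the key policy
aws kms create-grant \
    --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --grantee-principal arn:aws:iam::XXXXXX:role/app-role \
    --operations Decrypt
```

Because a grant can later be retired or revoked (aws kms retire-grant / revoke-grant), grants fit the temporary-permission scenarios the exam likes to describe.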

Other Key Topics


  • Whenever a question asks about PII (personally identifiable information), your best bet is Macie.

Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved.



  • When a question asks about analyzing S3 logs, the most probable answer is Athena.

Amazon S3 stores server access logs as objects in an S3 bucket. You can use Athena to quickly analyze and query S3 access logs.


DDoS Attack

  • Whenever a question asks about a DDoS attack, Shield might be the solution.

AWS provides two levels of protection against DDoS attacks: AWS Shield Standard and AWS Shield Advanced.


AWS Secrets Manager

  • Remember this point: when you enable rotation in Secrets Manager, it rotates the credentials immediately. Make sure all your application instances are configured to use Secrets Manager before enabling credential rotation.



  • Remember the port numbers
  • Port 25 is the default SMTP port, but EC2 throttles email traffic on port 25
  • To avoid timeouts, use port 587 or 2587 instead

AWS Lambda

  • Understand the difference between the Function Policy (a resource-based policy; helpful in troubleshooting when, for example, CloudWatch cannot invoke your function) vs the Lambda Execution Role (used when the Lambda function itself needs to perform an action, e.g. stopping an EC2 instance)
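The function-policy side can be sketched with aws lambda add-permission, which is what allows a service such as CloudWatch Events to invoke your function (the function name, statement ID, and rule ARN below are hypothetical placeholders):

```shell
aws lambda add-permission \
    --function-name my-function \
    --statement-id allow-cloudwatch-events \
    --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn arn:aws:events:us-east-1:XXXXXX:rule/my-rule
```

If this permission is missing, the rule fires but the function never runs, which is exactly the troubleshooting scenario described above.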

AWS Glacier Vault Lock

  • Initiate the lock by attaching a vault lock policy to your vault, which sets the lock to an in-progress state and returns a lock ID. While in the in-progress state, you have 24 hours to validate your vault lock policy before the lock ID expires.



  • Make sure you understand this part

Does Amazon wipe EBS drive data upon deletion?

Your data will remain in the storage system for an indefinite period of time after you delete the volume, but it will be wiped before being made available to another user.


Final Words

  • As this is a Specialty exam, you should expect it to be much more difficult than the other exams. On top of that, you need to know many AWS services, not just by skimming through them but in depth; in the end, though, you will learn a lot from it. So keep calm, write this exam, and let me know if you have any questions.