21 Days of Docker-Day 4- Docker Container Under The Hood

Welcome to Day 4. So far we have covered the basics of Docker; now let's move our discussion from a 10,000 ft overview down to 1,000 ft.

But before we go there, based on the last three days' knowledge, can you please answer a few of my questions?

Q1: What is the kernel version of your Docker container?

Q2: Why does the first process inside a Docker container run as PID 1?

Q3: How much memory is allocated to a Docker container by default?

Q4: Is there any way to restrict how much memory a container can use?

Q5: How does a container get its IP address, and how is it able to communicate with the external world?

These are the questions that popped up in my mind when I first started exploring Docker. You may not have answers to them yet if you are new to Docker, but if you are already thinking in this direction, you are heading the right way 🙂

21 Days of Docker-Day 3 - Building Containers Continued

On Day 2, we created our first container in detached mode.

But we haven't logged into the container yet; now it's time to do that. Last time, the issue we faced was that once we logged out of the container, it shut down. Let's see how we can deal with this problem.

  • We have this container up and running
$ docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
3afb4a8cfeb7        nginx               "nginx -g 'daemon of…"   37 hours ago        Up 3 seconds        80/tcp              mytestserver
  • It's time to log into this container, but this time using docker exec. After running the command below, I am inside my Docker container.
$ docker container exec -it 3afb4a8cfeb7 bash
root@3afb4a8cfeb7:/#
  • What exec will do
exec                       Run a command in a running container
-i, --interactive          Keep STDIN open even if not attached
-t, --tty                  Allocate a pseudo-TTY
  • Let's dig into it more and see the difference -i and -t make
  • Let's start with the -i flag only
$ docker container exec -i 3afb4a8cfeb7 bash
ls
bin
boot
dev
etc
home
lib
mnt
  • As you can see, with -i alone I get an interactive session (my input is passed to the container), but no terminal prompt
  • Let’s try out the same command but this time only with -t
$ docker container exec -t 3afb4a8cfeb7 bash
root@3afb4a8cfeb7:/# ls
  • As you can see here, this time we get a terminal prompt, but I am not able to interact with it (input is not forwarded)
  • So make this part of your muscle memory: use -i and -t in tandem whenever you are trying to log in to a container.

21 Days of Docker-Day 2 — First Docker Container

On Day 1, I gave you a basic introduction to Docker and showed how to install it. Now it's time to create your first Docker container.

Type the below command to run your first docker container

docker container run hello-world
$ docker container run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Let's see what happened behind the scenes

  • As this is my first container, the Docker engine tries to find an image named hello-world.
  • But as we have just gotten started, there is no such image stored locally:
Unable to find image 'hello-world:latest' locally
  • The Docker engine then goes to Docker Hub (for the time being, think of Docker Hub as GitHub for Docker images), looking for this image named hello-world

https://hub.docker.com/

  • It finds the image, pulls it down and then runs it in a container.
  • The hello-world image's only function is to output the text you see in the terminal, after which the container exits

21 Days of Docker-Day 1 — Introduction to Docker

What is Docker?

Docker is an open platform for developing, shipping, and running applications. Its main benefit is packaging applications into containers, making them portable to any system running a Linux, Mac, or Windows operating system (OS).

It follows the "build once, run anywhere" approach.

Docker Engine

Docker Engine is a client-server application with these major components:

  • A server which is a type of long-running program called a daemon process (the dockerd command).
  • A REST API that specifies interfaces that programs can use to talk to the daemon and instruct it what to do.
  • A command line interface (CLI) client (the docker command).
  • The CLI uses the Docker REST API to control or interact with the Docker daemon through scripting or direct CLI commands. Many other Docker applications use the underlying API and CLI.
  • The daemon creates and manages Docker objects, such as images, containers, networks, and volumes.

21 Days of Docker

Thanks to everyone who was part of my earlier journey, 100 Days of DevOps: http://100daysofdevops.com/day-100-100-days-of-devops/

As I promised earlier, I am coming up with something better in the next few months: not a full-fledged 100 days, but broken down into smaller components. This time it is 21 Days of Docker.

Starting Oct 7, I am running a program called "21 Days of Docker". The main idea behind it is to spend at least one hour every day for the next 21 days sharing Docker knowledge, and then share my progress.

This time, to make learning more interactive, I am adding:

  • Slack 
  • Meetup

Please feel free to join this group.

Slack: https://join.slack.com/t/100daysofdevops/shared_invite/enQtNzg1MjUzMzQzMzgxLWM4Yjk0ZWJiMjY4ZWE3ODBjZjgyYTllZmUxNzFkNTgxZjQ4NDlmZjkzODAwNDczOTYwOTM2MzlhZDNkM2FkMDA

Meetup Group

If you are in the Bay Area, please join this meetup group: 100daysofdevops (Newark, CA) on www.meetup.com

Here are some of my Docker learning recommendations; please feel free to add anything I am missing.

YouTube

Play with Docker Classroom: The Play with Docker classroom brings you labs and tutorials that help you get hands-on experience using Docker. (training.play-with-docker.com)

Linux Academy (Linux Academy gives a 7-day trial: https://linuxacademy.com/join/pricing? )

Learn Docker by Doing | Linux Academy (linuxacademy.com)

Docker — Deep Dive | Linux Academy (linuxacademy.com)

Docker Certified Associate (DCA) | Linux Academy (linuxacademy.com)

Safari Books Online (Safari Books gives a 10-day free trial: https://learning.oreilly.com/register/ )

Docker Containers, Third Edition: 4+ hours of video instruction. Docker Containers LiveLessons takes you through your first experiences understanding… (learning.oreilly.com)

Udemy (Udemy gives a 30-day refund policy: https://support.udemy.com/hc/en-us/sections/206457407-Refunds )

Docker Certified Associate 2019: This course is specifically designed for aspirants who intend to take the "Docker Certified Associate"… (www.udemy.com)

Day 100 – 100 Days of DevOps

Welcome to Day 100 of 100 Days of DevOps

Finally, limping and crawling, we have reached Day 100 of 100 Days of DevOps. I apologize for not being consistent in the latter half, especially after Day 97, but I learned a lot, and I believe you also got a chance to learn something from my blogs.

I promise that I will come up with something better in the next few months: not a full-fledged 100 days, but broken down into smaller components, e.g., 30 Days of DevOps.

Once again, thank you to everyone who followed me; I will continue to post on my blog.

Thanks, everyone, and Happy Learning!

100 Days Journey

AWS

Day 1-Introduction to CloudWatch Metrics

https://medium.com/faun/100-days-of-devops-day-1-introduction-to-cloudwatch-metrics-b04be36307a8

Day 2-Introduction to Simple Notification Service(SNS)

https://medium.com/@devopslearning/100-days-of-devops-day-2-introduction-to-simple-notification-service-sns-97137b2f1f1e

Day 3-Introduction to CloudTrail

https://medium.com/@devopslearning/100-days-of-devops-day-3-introduction-to-cloudtrail-5ce923f44584

Day 4-CloudWatch log agent Installation — Centos7

https://medium.com/@devopslearning/100-days-of-devops-day-4-cloudwatch-log-agent-installation-centos7-d11054fffdf4

Day 5-CloudWatch to Slack Notification

https://medium.com/@devopslearning/100-days-of-devops-day-5-cloudwatch-to-slack-notification-d2d84a192bf2

Day 6-CloudWatch Logs(Metric Filters)

https://medium.com/@devopslearning/100-days-of-devops-day-6-cloudwatch-logs-metric-filters-94c572cc241

Day 7-AWS S3 Event

https://medium.com/@devopslearning/100-days-of-devops-day-7-aws-s3-event-cf64c6699ca1

Day 8-Introduction to AWS Security Token Service(STS)

https://medium.com/faun/100-days-of-devops-day-8-introduction-to-aws-security-token-service-sts-b0f164e5d6a3

Day 9-Delegate Access Across AWS Accounts Using IAM Roles

https://medium.com/@devopslearning/100-days-of-devops-day-9-delegate-access-across-aws-accounts-using-iam-roles-b7898b15ed3d

Day 10- Restricting User to Launch only T2 Instance

https://medium.com/faun/100-days-of-devops-day-10-restricting-user-to-launch-only-t2-instance-509aaaec5aa2

Day 11- Restricting S3 Bucket Access to Specific IP Addresses

https://medium.com/@devopslearning/100-days-of-devops-day-11-restricting-s3-bucket-access-to-specific-ip-addresses-a46c659b30e2

Day 12- How to ensure that users can’t turn off CloudTrail

https://medium.com/faun/100-days-of-devops-day-12-how-to-ensure-that-users-cant-turn-off-cloudtrail-ecdfce605894

Day 13- How to stop/start EC2 instance on schedule basis to save cost

https://medium.com/faun/100-days-of-devops-day-13-how-to-stop-start-ec2-instance-on-schedule-basis-to-save-cost-ed224b80a2e8

Day 14- How to automate the process of EBS Snapshot Creation

https://medium.com/@devopslearning/100-days-of-devops-day-14-how-to-automate-the-process-of-ebs-snapshot-creation-86418f2d7f09

Day 22-Introduction to Key Management System(KMS)

https://medium.com/@devopslearning/100-days-of-devops-day-22-introduction-to-key-management-system-kms-4c73ff555169

Day 23- How to encrypt EBS Volume using KMS

https://medium.com/@devopslearning/100-days-of-devops-day-23-how-to-encrypt-ebs-volume-using-kms-3706f7990f3

Day 24- How to encrypt S3 Bucket using KMS

https://medium.com/@devopslearning/100-days-of-devops-day-24-how-to-encrypt-s3-bucket-using-kms-fc3b3bcf4c1b

Day 25-AWS S3 Bucket using Terraform

https://medium.com/@devopslearning/100-days-of-devops-day-25-aws-s3-bucket-using-terraform-caccaa6b9c81

Day 26-Introduction to IAM

https://medium.com/@devopslearning/100-days-of-devops-day-26-introduction-to-iam-b69315623b01

Day 28- Introduction to VPC Flow Logs

https://medium.com/@devopslearning/100-days-of-devops-day-28-introduction-to-vpc-flow-logs-d11a99cd18ca

Day 29- Introduction to RDS — MySQL

https://medium.com/@devopslearning/100-days-of-devops-day-29-introduction-to-rds-mysql-14a6c0fa827b

Day 30-Introduction to AWS CLI

https://medium.com/@devopslearning/100-days-of-devops-day-30-introduction-to-aws-cli-6e1227986ebb

Day 31-Introduction to VPC Peering

https://medium.com/@devopslearning/100-days-of-devops-day-31-introduction-to-vpc-peering-662184e7559e

Day 32-Introduction to NAT Gateway

https://medium.com/@devopslearning/100-days-of-devops-day-32-introduction-to-nat-gateways-7482da86e5f8

Day 33- On Demand Hibernate

https://medium.com/@devopslearning/100-days-of-devops-day-33-on-demand-hibernate-6de5997481e4

Day 35-AWS S3 Intelligent-Tiering (S3 INT)

https://medium.com/@devopslearning/100-days-of-devops-day-35-aws-s3-intelligent-tiering-s3-int-3b0c30c4bfeb

Day 36-Introduction to AWS System Manager

https://medium.com/@devopslearning/100-days-of-devops-day-36-introduction-to-aws-system-manager-21ffb5d634d0

Day 37- Automate the Process of AMI Creation Using System Manager Maintenance Windows

https://medium.com/@devopslearning/100-days-of-devops-day-37-automate-the-process-of-ami-creation-using-system-manager-maintenance-c81218004c55

Day 38-Introduction to Transit Gateway

https://medium.com/@devopslearning/100-days-of-devops-day-38-introduction-to-transit-gateway-1d2f6ca1e4a0

Day 39-Introduction to VPC EndPoint

https://medium.com/@devopslearning/100-days-of-devops-day-39-introduction-to-vpc-endpoint-7d949f61bed6

Day 40-Introduction to AWS Config

https://medium.com/@devopslearning/100-days-of-devops-day-40-introduction-to-aws-config-e5f4ad41b194

Day 41-Real-Time Apache Log Analysis using Amazon Kinesis and Amazon Elasticsearch Service

https://medium.com/@devopslearning/100-days-of-devops-day-41-real-time-apache-log-analysis-using-amazon-kinesis-and-amazon-f3b506626681

Day 42-Audit your AWS Environment

https://medium.com/@devopslearning/100-days-of-devops-day-42-audit-your-aws-environment-50237fc3b3

Day 43- Introduction to EC2

https://medium.com/@devopslearning/100-days-of-devops-day-43-introduction-to-ec2-7004a603a67f

Day 44-S3 Cross Region Replication(CRR)

https://medium.com/@devopslearning/100-days-of-devops-day-44-s3-cross-region-replication-crr-8c58ae8c68d4

Day 45-Simple Backup Solution using S3, Glacier and VPC Endpoint

https://medium.com/@devopslearning/100-days-of-devops-day-45-simple-backup-solution-using-s3-glacier-and-vpc-endpoint-26c51ddba04

Day 46-Introduction to Amazon Glacier

https://medium.com/@devopslearning/100-days-of-devops-day-46-introduction-to-amazon-glacier-e6587432e1a1

Day 47-Introduction to Amazon Elastic File System (EFS)

https://medium.com/@devopslearning/100-days-of-devops-day-47-introduction-to-amazon-elastic-file-system-efs-d81598439fcd

Day 48- Threat detection and mitigation at AWS

https://medium.com/the-crossover-cast/100-days-of-devops-day-48-threat-detection-and-mitigation-at-aws-b29611707f67

Day 49-Introduction to Route53

https://medium.com/@devopslearning/100-days-of-devops-day-49-introduction-to-route53-d6b01195aaef

Day 50-Introduction to Route53 Failover

https://medium.com/@devopslearning/100-days-of-devops-day-50-introduction-to-route53-failover-9466cfb3c5d4

Day 69-Introduction to AWS Lambda

https://medium.com/@devopslearning/100-days-of-devops-day-69-introduction-to-aws-lambda-6ac6dfbd6fb8

Day 70-Introduction to Boto3

https://medium.com/@devopslearning/100-days-of-devops-day-70-introduction-to-boto3-98a257749dd0

Day 71-EC2 Instance creation using Lambda

https://medium.com/@devopslearning/100-days-of-devops-day-71-ec2-instance-creation-using-lambda-e45dd5129364

Day 92-Choosing Right EC2 Instance Type

https://medium.com/@devopslearning/100-days-of-devops-day-92-choosing-right-ec2-instance-type-2f5d52bd6c85

Day 98- AWS Lambda with Terraform Code

Day 99- AWS Boto3

Terraform

Day 15- Introduction to Terraform

https://medium.com/@devopslearning/100-days-of-devops-day-15-introduction-to-terraform-7a168dec8d38

Day 16- Building VPC using Terraform

https://medium.com/@devopslearning/100-days-of-devops-day-16-building-vpc-using-terraform-7c507ce07413

Day 17- Creating EC2 Instance using Terraform

https://medium.com/@devopslearning/100-days-of-devops-day-17-creating-ec2-instance-using-terraform-c876a09d9d66

Day 18-Add monitoring to these instances using Terraform(CloudWatch and SNS)

https://medium.com/@devopslearning/100-days-of-devops-day-18-add-monitoring-to-these-instances-using-terraform-cloudwatch-and-sns-530520239fb6

Day 19 – Application Load Balancer using Terraform

https://medium.com/@devopslearning/100-days-of-devops-day-19-application-load-balancer-using-terraform-58794aeaf31f

Day 20— Auto-Scaling Group using Terraform

https://medium.com/@devopslearning/100-days-of-devops-day-20-auto-scaling-group-using-terraform-3000a834fa35

Day 21- MySQL RDS Database Creation using Terraform

https://medium.com/@devopslearning/100-days-of-devops-day-21-mysql-rds-database-creation-using-terraform-278eeaff339f

CI-CD

Day 27- Introduction to Packer

https://medium.com/@devopslearning/100-days-of-devops-day-27-introduction-to-packer-d77089ecac01

Day 34- Terraform Pipeline using Jenkins

https://medium.com/@devopslearning/100-days-of-devops-day-34-terraform-pipeline-using-jenkins-a3d81975730f

BASH SCRIPTING

Day 51-Introduction to Bash Scripting

https://medium.com/@devopslearning/100-days-of-devops-day-51-introduction-to-bash-scripting-9501ce7a32a4

Day 52-Conditional Testing in Shell

https://medium.com/@devopslearning/100-days-of-devops-day-52-conditional-testing-in-shell-6d4eb4a1f010

Day 53-Introduction to Regular Expression — Part 1

https://medium.com/@devopslearning/100-days-of-devops-day-53-introduction-to-regular-expression-part-1-c6218f1670b7

Day 65-Bash Script to Monitor Service

https://medium.com/@devopslearning/100-days-of-devops-day-65-bash-script-to-monitor-service-b7d75a5b2b0d

Day 85- Shell Script to find the failed login

https://medium.com/@devopslearning/100-days-of-devops-day-85-shell-script-to-find-the-failed-login-a87975b9e21f

Day 91-How to check if the file exists (Bash/Python)

https://medium.com/@devopslearning/100-days-of-devops-day-91-how-to-check-if-the-file-exists-bash-python-ddc8087a3cbf

Linux

Day 54-And You Thought You Knew RPM

https://medium.com/@devopslearning/100-days-of-devops-day-54-and-you-thought-you-knew-rpm-18e63e8aa4bc

Day 55-Introduction to YUM

https://medium.com/@devopslearning/100-days-of-devops-day-55-introduction-to-yum-5c5f0db91787

Day 56-Debugging Performance Issue using SAR

https://medium.com/@devopslearning/100-days-of-devops-day-56-debugging-performance-issue-using-sar-fcb61d6dc641

Day 57-Debugging I/O Performance Issue

https://medium.com/@devopslearning/100-days-of-devops-day-57-debugging-i-o-performance-issue-d6dd05dd2dea

Day 62-Useful Linux Command for Network Troubleshooting

https://medium.com/@devopslearning/100-days-of-devops-day-62-useful-linux-command-for-network-troubleshooting-920430a2f75f

Day 63- Wireshark for HTTP/HTTPS Analysis

https://medium.com/@devopslearning/100-days-of-devops-day-63-wireshark-for-http-https-analysis-550857e2da6c

Day 66-Linux Boot Process

https://medium.com/@devopslearning/100-days-of-devops-day-66-linux-boot-process-a8dbddcc508e

Day 67-Introduction to Chrony

https://medium.com/@devopslearning/100-days-of-devops-day-67-introduction-to-chrony-680b3d016260

Day 68-Introduction to Systemd

https://medium.com/@devopslearning/100-days-of-devops-day-68-introduction-to-systemd-b54fb4ca006d

Day 76-How Linux Kernel is organized

https://medium.com/@devopslearning/100-days-of-devops-day-76-how-linux-kernel-is-organized-257bafbc31fc

Day 77-Process Management in Linux

https://medium.com/@devopslearning/100-days-of-devops-day-77-process-management-in-linux-21aabae5b124

Ansible

Day 73- Introduction to Ansible

https://medium.com/@devopslearning/100-days-of-devops-day-73-introduction-to-ansible-723ad630fcee

GIT

Day 74- Introduction to GIT

https://medium.com/@devopslearning/100-days-of-devops-day-74-introduction-to-git-9374bafb08b6

Docker & Kubernetes

Day 58-Docker Basics

https://medium.com/@devopslearning/100-days-of-devops-day-58-docker-basics-d1c75cb84dc4

Day 59- Introduction to DockerFile

https://medium.com/@devopslearning/100-days-of-devops-day-59-introduction-to-dockerfile-e854ba90669a

Day 72-Introduction to Kubernetes

https://medium.com/@devopslearning/100-days-of-devops-day-72-introduction-to-kubernetes-9dda4009a0ab

Jenkins

Day 60-Introduction to Jenkins

https://medium.com/@devopslearning/100-days-of-devops-day-60-introduction-to-jenkins-5afc0f700335

Day 61-Jenkins Agent Node

https://medium.com/@devopslearning/100-days-of-devops-day-61-jenkins-agent-node-4b3779366767

Python

Day 64- Regular Expression using Python

https://medium.com/@devopslearning/100-days-of-devops-day-64-regular-expression-using-python-edf5a776fa74

Day 78- Python OS/Subprocess Module

https://medium.com/@devopslearning/100-days-of-devops-day-78-python-os-subprocess-module-95ae25bc686d

Day 79-Apache Log Parser Using Python

https://medium.com/@devopslearning/100-days-of-devops-day-79-apache-log-parser-using-python-849135ed1a08

Day 80-Python Unit Testing(Pytest)

https://medium.com/@devopslearning/100-days-of-devops-day-80-python-unit-testing-pytest-67168a91ea06

Day 81-Debugging Python Code

https://medium.com/@devopslearning/100-days-of-devops-day-81-debugging-python-code-a1e19b4011a8

Day 82- Python Object Oriented Programming(OOP)

https://medium.com/@devopslearning/100-days-of-devops-day-82-python-object-oriented-programming-oop-44786b0184f6

Day 86-Python Flow Control(if-else statement)

https://medium.com/@devopslearning/100-days-of-devops-day-86-python-flow-control-if-else-statement-a20cf04b4fbe

Day 87-While/For Loop Python

https://medium.com/@devopslearning/100-days-of-devops-day-87-while-for-loop-python-cf405b6e868f

Day 88-Lists in Python

https://medium.com/@devopslearning/100-days-of-devops-day-88-lists-in-python-a6eb7fdb6cee

Day 89-Python Files I/O

https://medium.com/@devopslearning/100-days-of-devops-day-89-python-files-i-o-c8b771b43fb7

Day 90- Try and Except Statement Python

https://medium.com/@devopslearning/100-days-of-devops-day-90-try-and-except-statement-python-48d5c140bcc7

Day 93-Python Functions

https://medium.com/@devopslearning/100-days-of-devops-day-93-python-functions-f7a8f92fb563

Day 94-Introduction to Numpy for Data Analysis

https://medium.com/@devopslearning/100-days-of-devops-day-94-introduction-to-numpy-for-data-analysis-127561af9e1d

Day 95-Introduction to Django

https://medium.com/@devopslearning/100-days-of-devops-day-95-introduction-to-django-37942477d6c

Miscellaneous

Day 75- Introduction to Fabric

https://medium.com/@devopslearning/100-days-of-devops-day-75-introduction-to-fabric-2e80f5c3148f

Day 83-Introduction to Splunk

https://medium.com/@devopslearning/100-days-of-devops-day-83-introduction-to-splunk-9c1caf04f253

Day 84-Introduction to ElasticSearch

https://medium.com/@devopslearning/100-days-of-devops-day-84-introduction-to-elasticsearch-d4927603b99c

Day 96-Document Object Model(DOM)

https://medium.com/@devopslearning/100-days-of-devops-day-96-document-object-model-dom-8860ea8018f7

Day 97-Introduction to JQuery

https://medium.com/@devopslearning/100-days-of-devops-day-97-introduction-to-jquery-f63288571e8d

100 Days of DevOps — Day 99 - AWS Boto3

What is Boto3?

Boto3 is the Amazon Web Services (AWS) SDK for Python. It enables Python developers to create, configure, and manage AWS services, such as EC2 and S3. Boto3 provides an easy to use, object-oriented API, as well as low-level access to AWS services.

Boto3 is built on top of a library called Botocore, which it shares with the AWS CLI. Botocore provides the low-level clients, sessions, credentials, and configuration data. Boto3 builds on top of Botocore by providing its own sessions, resources, collections, waiters, and paginators.

Botocore is the basis for the aws-cli.

https://github.com/boto/boto3

https://github.com/boto/botocore


100 Days of DevOps — Day 98- AWS Lambda with Terraform Code

What is AWS Lambda?

With AWS Lambda, you can run code without provisioning or managing servers. You pay only for the compute time that you consume — there’s no charge when your code isn’t running. You can run code for virtually any type of application or backend service — all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

  • To start with Lambda

Go to https://us-west-2.console.aws.amazon.com/lambda → Create a function

  • When you create a function, you have three options:
* Author from scratch: self-explanatory, i.e., you write your own function
* Use a blueprint: build a Lambda application from sample code and configuration presets for common use cases (provided by AWS)
* Browse serverless app repository: deploy a sample Lambda application from the AWS Serverless Application Repository (published by other developers and AWS Partners)
  • Function name: HelloWorld
  • Runtime: Choose Python3.7 from the dropdown
  • Permission: For the time being choose the default permission
  • Click Create Function

Invoking Lambda Function

  • When building applications on AWS Lambda, the core components are Lambda functions and event sources. An event source is the AWS service or custom application that publishes events, and a Lambda function is the custom code that processes the events. For example:
* Amazon S3 Pushes Events
* AWS Lambda Pulls Events from a Kinesis Stream
* HTTP API requests through API Gateway
* CloudWatch Schedule Events
  • From the list select CloudWatch Events
Reference: https://www.youtube.com/watch?v=WbHw14hF7lU
NOTE: It's an old slide; Go is already supported
  • As you can see under CloudWatch Events it says configuration required
  • Rule: Create a new rule
  • Rule name: Everyday
  • Rule description: Give your Rule some description
  • Rule type: Choose Schedule expression and set the rate to rate(1 day) (i.e., it is going to trigger every day)

Schedule Expressions Using Rate or Cron – AWS Lambda: AWS Lambda supports standard rate and cron expressions for frequencies of up to once per minute. (docs.aws.amazon.com)

  • Click on Add and Save
  • Now go back to your Lambda Code(HelloWorld)
import json

def lambda_handler(event, context):
    # TODO implement
    print(event)  # <--------
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
  • Add this entry, which simply means we are trying to print the event
  • Again save it
  • Let’s try to set a simple test event, Click on Test
  • Under Event template, search for Amazon CloudWatch
  • Event Name: Give your event some name and test it
  • Go back and this time Click on Monitoring
  • Click on View logs in CloudWatch
  • Click on the log stream and you will see the same logs you see in Lambda console
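Before relying on CloudWatch, the handler above can also be exercised locally by calling it with a hand-made event dict. This is a minimal sketch: the sample event below is a trimmed stand-in I made up for illustration, not the full CloudWatch scheduled-event payload.

```python
import json

def lambda_handler(event, context):
    # Same handler as above: print the incoming event, return a response.
    print(event)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

# Simulate an invocation locally with a minimal CloudWatch-style event.
sample_event = {"source": "aws.events", "detail-type": "Scheduled Event"}
response = lambda_handler(sample_event, None)
print(response["statusCode"])  # 200
```

Running this prints the event dict followed by the status code, which is exactly what you will later see in the CloudWatch log stream.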

Lambda Programming Model

  • Lambda supports a bunch of programming languages
  • You write code for your Lambda function in one of the languages AWS Lambda supports. Regardless of the language you choose, there is a common pattern to writing code for a Lambda function that includes the following core concepts.
* Handler: The handler is the function AWS Lambda calls to start execution of your Lambda function; it acts as the entry point.
  • As you can see, the handler is specified as lambda_function.lambda_handler, where lambda_function is the Python script name and lambda_handler is the function that acts as the entry point, receiving event and context
  • Event: We already saw in the previous example where we passed the CloudWatch Event to our code
  • Context — AWS Lambda also passes a context object to the handler function, as the second parameter. Via this context object, your code can interact with AWS Lambda. For example, your code can find the execution time remaining before AWS Lambda terminates your Lambda function.
  • Logging — Your Lambda function can contain logging statements. AWS Lambda writes these logs to CloudWatch Logs.
  • Exceptions — Your Lambda function needs to communicate the result of the function execution to AWS Lambda. Depending on the language you author your Lambda function code, there are different ways to end a request successfully or to notify AWS Lambda an error occurred during the execution.
  • One more thing I want to highlight is the timeout
  • You can now set the timeout value for a function to any value up to 15 minutes. When the specified timeout is reached, AWS Lambda terminates execution of your Lambda function. As a best practice, you should set the timeout value based on your expected execution time to prevent your function from running longer than intended.

Common Use case of Lambda

Terraform Code

We have performed all of these steps manually; let's try to automate them using Terraform.

  • Step1: Create your test Python function
def lambda_handler(event, context):
    print ("Hello from terraform world")
    return "hello from terraform world"
  • Now let’s zip it up
$ zip lambda.zip lambda.py 
  adding: lambda.py (deflated 27%)
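If you would rather script the packaging step than shell out to the zip command, the same archive can be produced with Python's standard-library zipfile module. This is a sketch that mirrors the `zip lambda.zip lambda.py` step above; it writes both files into the current directory.

```python
import zipfile

# Write the test function to lambda.py, then package it as lambda.zip,
# mirroring `zip lambda.zip lambda.py` from the step above.
code = (
    'def lambda_handler(event, context):\n'
    '    print("Hello from terraform world")\n'
    '    return "hello from terraform world"\n'
)
with open("lambda.py", "w") as f:
    f.write(code)

with zipfile.ZipFile("lambda.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("lambda.py")

print(zipfile.ZipFile("lambda.zip").namelist())  # ['lambda.py']
```

Either way, the result is a lambda.zip that the Terraform resource in the next step can upload.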
  • Step2: Define your Lambda resource
resource "aws_lambda_function" "test_lambda" {
  filename      = "lambda.zip"
  function_name = "lambda_handler"
  role          = "${aws_iam_role.iam_for_lambda.arn}"
  handler       = "lambda.lambda_handler"

  # The filebase64sha256() function is available in Terraform 0.11.12 and later
  # For Terraform 0.11.11 and earlier, use the base64sha256() and file() functions:
  # source_code_hash = "${base64sha256(file("lambda_function_payload.zip"))}"
  source_code_hash = "${filebase64sha256("lambda.zip")}"
  runtime       = "python2.7"
}
  • filename: the name of the zip file you created in the previous step
  • function_name: the name the Lambda function will get in AWS (here, lambda_handler)
  • role: the IAM role attached to the Lambda function. This governs both who/what can invoke your Lambda function and what resources the function has access to.
  • handler: the function entry point in your code, in the form filename.function_name: the file is lambda.py (we don't include the file extension, so just lambda), and the function is lambda_handler (from def lambda_handler(event, context))
  • source_code_hash: used to trigger updates. It must be set to a base64-encoded SHA-256 hash of the package file specified with either filename or s3_key. The usual way to set this is filebase64sha256("file.zip") (Terraform 0.11.12 and later) or base64sha256(file("file.zip")) (Terraform 0.11.11 and earlier), where "file.zip" is the local filename of the Lambda function source archive.
  • runtime: the identifier of the function's runtime
Valid Values: nodejs8.10 | nodejs10.x | java8 | python2.7 | python3.6 | python3.7 | dotnetcore1.0 | dotnetcore2.1 | go1.x | ruby2.5 | provided
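For reference, the value source_code_hash expects is just the base64-encoded SHA-256 digest of the archive, so Terraform's filebase64sha256() can be reproduced in a few lines of Python. This is an illustrative sketch; demo.zip is a throwaway file created here just so the function has something to hash.

```python
import base64
import hashlib

def filebase64sha256(path):
    # Equivalent of Terraform's filebase64sha256(): base64(sha256(file bytes)).
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return base64.b64encode(digest).decode("ascii")

# Demo on a throwaway file; in practice you would point this at lambda.zip.
with open("demo.zip", "wb") as f:
    f.write(b"example-bytes")

print(filebase64sha256("demo.zip"))
```

This is also a handy way to check, outside Terraform, whether a redeploy will be triggered: if the printed hash changes, the package contents changed.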
  • Step3: Create an IAM Role
resource "aws_iam_role" "iam_for_lambda" {
  name = "iam_for_lambda"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

# See also the following AWS managed policy: AWSLambdaBasicExecutionRole
resource "aws_iam_policy" "lambda_logging" {
  name        = "lambda_logging"
  path        = "/"
  description = "IAM policy for logging from a lambda"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*",
      "Effect": "Allow"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = "${aws_iam_role.iam_for_lambda.name}"
  policy_arn = "${aws_iam_policy.lambda_logging.arn}"
}

Reference https://www.terraform.io/docs/providers/aws/r/lambda_function.html

  • Step4: Terraform Init: Initialize a Terraform working directory, containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control.
$ terraform init

Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "aws" (2.23.0)...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 2.23"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
  • Step5: Terraform plan: The terraform plan command is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_cloudwatch_log_group.example
      id:                             <computed>
      arn:                            <computed>
      name:                           "/aws/lambda/lambda_handler"
      retention_in_days:              "14"

  + aws_iam_policy.lambda_logging
      id:                             <computed>
      arn:                            <computed>
      description:                    "IAM policy for logging from a lambda"
      name:                           "lambda_logging"
      path:                           "/"
      policy:                         "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": [\n        \"logs:CreateLogStream\",\n        \"logs:PutLogEvents\"\n      ],\n      \"Resource\": \"arn:aws:logs:*:*:*\",\n      \"Effect\": \"Allow\"\n    }\n  ]\n}\n"

  + aws_iam_role.iam_for_lambda
      id:                             <computed>
      arn:                            <computed>
      assume_role_policy:             "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": \"sts:AssumeRole\",\n      \"Principal\": {\n        \"Service\": \"lambda.amazonaws.com\"\n      },\n      \"Effect\": \"Allow\",\n      \"Sid\": \"\"\n    }\n  ]\n}\n"
      create_date:                    <computed>
      force_detach_policies:          "false"
      max_session_duration:           "3600"
      name:                           "iam_for_lambda"
      path:                           "/"
      unique_id:                      <computed>

  + aws_iam_role_policy_attachment.lambda_logs
      id:                             <computed>
      policy_arn:                     "${aws_iam_policy.lambda_logging.arn}"
      role:                           "iam_for_lambda"

  + aws_lambda_function.test_lambda
      id:                             <computed>
      arn:                            <computed>
      filename:                       "lambda.zip"
      function_name:                  "lambda_handler"
      handler:                        "lambda.lambda_handler"
      invoke_arn:                     <computed>
      last_modified:                  <computed>
      memory_size:                    "128"
      publish:                        "false"
      qualified_arn:                  <computed>
      reserved_concurrent_executions: "-1"
      role:                           "${aws_iam_role.iam_for_lambda.arn}"
      runtime:                        "python2.7"
      source_code_hash:               "Gpu07NPcj26NrKv0Ne6BbZkfDRuM3ozHHqCFUWH9Sqg="
      source_code_size:               <computed>
      timeout:                        "3"
      tracing_config.#:               <computed>
      version:                        <computed>


Plan: 5 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

  • Step6: terraform apply: The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan.

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_cloudwatch_log_group.example
      id:                             <computed>
      arn:                            <computed>
      name:                           "/aws/lambda/lambda_handler"
      retention_in_days:              "14"

  + aws_iam_policy.lambda_logging
      id:                             <computed>
      arn:                            <computed>
      description:                    "IAM policy for logging from a lambda"
      name:                           "lambda_logging"
      path:                           "/"
      policy:                         "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": [\n        \"logs:CreateLogStream\",\n        \"logs:PutLogEvents\"\n      ],\n      \"Resource\": \"arn:aws:logs:*:*:*\",\n      \"Effect\": \"Allow\"\n    }\n  ]\n}\n"

  + aws_iam_role.iam_for_lambda
      id:                             <computed>
      arn:                            <computed>
      assume_role_policy:             "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": \"sts:AssumeRole\",\n      \"Principal\": {\n        \"Service\": \"lambda.amazonaws.com\"\n      },\n      \"Effect\": \"Allow\",\n      \"Sid\": \"\"\n    }\n  ]\n}\n"
      create_date:                    <computed>
      force_detach_policies:          "false"
      max_session_duration:           "3600"
      name:                           "iam_for_lambda"
      path:                           "/"
      unique_id:                      <computed>

  + aws_iam_role_policy_attachment.lambda_logs
      id:                             <computed>
      policy_arn:                     "${aws_iam_policy.lambda_logging.arn}"
      role:                           "iam_for_lambda"

  + aws_lambda_function.test_lambda
      id:                             <computed>
      arn:                            <computed>
      filename:                       "lambda.zip"
      function_name:                  "lambda_handler"
      handler:                        "lambda.lambda_handler"
      invoke_arn:                     <computed>
      last_modified:                  <computed>
      memory_size:                    "128"
      publish:                        "false"
      qualified_arn:                  <computed>
      reserved_concurrent_executions: "-1"
      role:                           "${aws_iam_role.iam_for_lambda.arn}"
      runtime:                        "python2.7"
      source_code_hash:               "Gpu07NPcj26NrKv0Ne6BbZkfDRuM3ozHHqCFUWH9Sqg="
      source_code_size:               <computed>
      timeout:                        "3"
      tracing_config.#:               <computed>
      version:                        <computed>


Plan: 5 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_iam_policy.lambda_logging: Creating...
  arn:         "" => "<computed>"
  description: "" => "IAM policy for logging from a lambda"
  name:        "" => "lambda_logging"
  path:        "" => "/"
  policy:      "" => "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": [\n        \"logs:CreateLogStream\",\n        \"logs:PutLogEvents\"\n      ],\n      \"Resource\": \"arn:aws:logs:*:*:*\",\n      \"Effect\": \"Allow\"\n    }\n  ]\n}\n"
aws_iam_role.iam_for_lambda: Creating...
  arn:                   "" => "<computed>"
  assume_role_policy:    "" => "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": \"sts:AssumeRole\",\n      \"Principal\": {\n        \"Service\": \"lambda.amazonaws.com\"\n      },\n      \"Effect\": \"Allow\",\n      \"Sid\": \"\"\n    }\n  ]\n}\n"
  create_date:           "" => "<computed>"
  force_detach_policies: "" => "false"
  max_session_duration:  "" => "3600"
  name:                  "" => "iam_for_lambda"
  path:                  "" => "/"
  unique_id:             "" => "<computed>"
aws_iam_policy.lambda_logging: Still creating... (10s elapsed)
aws_iam_role.iam_for_lambda: Still creating... (10s elapsed)
aws_iam_role.iam_for_lambda: Creation complete after 10s (ID: iam_for_lambda)
aws_lambda_function.test_lambda: Creating...
  arn:                            "" => "<computed>"
  filename:                       "" => "lambda.zip"
  function_name:                  "" => "lambda_handler"
  handler:                        "" => "lambda.lambda_handler"
  invoke_arn:                     "" => "<computed>"
  last_modified:                  "" => "<computed>"
  memory_size:                    "" => "128"
  publish:                        "" => "false"
  qualified_arn:                  "" => "<computed>"
  reserved_concurrent_executions: "" => "-1"
  role:                           "" => "arn:aws:iam::XXXXXX:role/iam_for_lambda"
  runtime:                        "" => "python2.7"
  source_code_hash:               "" => "Gpu07NPcj26NrKv0Ne6BbZkfDRuM3ozHHqCFUWH9Sqg="
  source_code_size:               "" => "<computed>"
  timeout:                        "" => "3"
  tracing_config.#:               "" => "<computed>"
  version:                        "" => "<computed>"
aws_iam_policy.lambda_logging: Creation complete after 11s (ID: arn:aws:iam::XXXXXX:policy/lambda_logging)
aws_iam_role_policy_attachment.lambda_logs: Creating...
  policy_arn: "" => "arn:aws:iam::XXXXXX:policy/lambda_logging"
  role:       "" => "iam_for_lambda"
aws_iam_role_policy_attachment.lambda_logs: Creation complete after 0s (ID: iam_for_lambda-20190814010350932300000001)
aws_lambda_function.test_lambda: Still creating... (10s elapsed)
aws_lambda_function.test_lambda: Still creating... (20s elapsed)
aws_lambda_function.test_lambda: Still creating... (30s elapsed)
aws_lambda_function.test_lambda: Still creating... (40s elapsed)
aws_lambda_function.test_lambda: Creation complete after 41s (ID: lambda_handler)
aws_cloudwatch_log_group.example: Creating...
  arn:               "" => "<computed>"
  name:              "" => "/aws/lambda/lambda_handler"
  retention_in_days: "" => "14"
aws_cloudwatch_log_group.example: Still creating... (10s elapsed)
aws_cloudwatch_log_group.example: Creation complete after 11s (ID: /aws/lambda/lambda_handler)

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

GitHub Source Code Link https://github.com/100daysofdevops/100daysofdevops/tree/master/aws-lambda

My road to AWS Certified Security - Specialty Certification

This is the continuation of my earlier post My road to AWS Certified Solution Architect.

https://medium.com/@devopslearning/my-road-to-aws-certified-solution-architect-394676f15680

I wrote the AWS Certified Solutions Architect exam almost 8 months ago, and after clearing it I decided to write my second AWS exam within the next three months, but those three months became six. A couple of weeks back I watched the YouTube video below, “Inside the mind of a master procrastinator | Tim Urban”, and was able to relate to this guy. That was when the Panic Monster kicked in and told me this was the right time to write the next exam.


YAY I cleared the exam!

WARNING: Some housekeeping tasks before reading this blog 🙂 🙂

1: As everyone needs to sign an NDA with AWS, I can’t tell you the exact questions asked during the exam (nor do I have gigabytes of memory), but I can give you pointers on what to expect in the exam.

2: As we all know, AWS updates its infrastructure every day, so some of this might no longer be relevant after a few days/weeks/months.

3: Please don’t ask for exam dumps or questions; that defeats the whole purpose of the exam.

Exam Preparation

  • I highly recommend the Linux Academy course to everyone; Adrian Cantrill did an excellent job explaining all the concepts and going into depth on every topic.

https://linuxacademy.com/course/aws-certified-security-specialty/

  • My second recommendation is A Cloud Guru, especially their “Updates For 2019” section.

https://acloud.guru/learn/aws-certified-security-specialty

  • AWS re:Invent videos: I highly recommend going through these, as they will give you enough in-depth knowledge about each service.
  • AWS Documentation: Best documentation ever provided by any service provider. Don’t miss the FAQ regarding each service (especially for KMS, IAM, VPC).

Once you are done with the above preparation, it’s a good time to gauge your knowledge: check the AWS-provided sample questions.

https://d1.awsstatic.com/training-and-certification/docs-security-spec/AWS%20Certified%20Security%20-%20Specialty_Sample%20Questions.pdf

Now coming back to the exam: the entire exam is divided into five main domains.

Based on my experience, you need to know these four services to clear this exam.

  • KMS
  • VPC
  • IAM
  • Identity Federation (this was a surprise package for me; I saw almost 5–6 questions related to Identity Federation).

Domain 1: Incident Response

  • What steps will you perform if your ACCESS_KEY and SECRET_ACCESS_KEY are accidentally leaked on GitHub? (Tips: rotate the key immediately, update any application that uses the key (a good idea is to use a Role instead), and then disable/delete the leaked key.)
  • What steps to follow if your EC2 instance is compromised? (Tips: take a snapshot of the EBS volume, rebuild the instance in your forensic subnet, or isolate the instance.)

https://aws.amazon.com/premiumsupport/knowledge-center/potential-account-compromise/

Domain2: Logging and Monitoring

CloudTrail

  • Make sure you understand that the same CloudTrail trail can be applied to all regions. The questions will try to trick you: do you create one trail per region, or can the same trail be applied to multiple regions? And what about future regions: will the same trail apply automatically, or do you need to create a new one?
  • Must try the CloudTrail multi-account scenario (create one central S3 bucket and push trails from different accounts into it). Common issues: not able to push logs from a particular account? Does the S3 bucket policy look correct? Is a Resource defined in the policy for that particular account?

CloudWatch

  • How to troubleshoot when the CloudWatch agent is not sending logs to a CloudWatch Log Group (some tips: is the CloudWatch agent running? Does the EC2 instance role have sufficient permission to push logs to CloudWatch Logs?)
  • CloudWatch metric filters to filter events and create alerts (e.g., failed logins, or someone trying to break in with root credentials)

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/ExtractBytesExample.html

VPC Flow Logs

  • Must remember this point: VPC Flow Logs are not for deep packet inspection or analysis (they only hold metadata); for deep packet inspection you need a third-party tool (e.g., Wireshark)
  • Understand the format of a VPC flow log and check some sample flow logs (pay special attention to the ACCEPT vs. REJECT field, and whether packets are getting REJECTED at the Security Group level or at the NACL)
2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK
2 123456789010 eni-abc123de 172.31.9.69 172.31.9.12 49761 3389 6 20 4249 1418530010 1418530070 REJECT OK
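The 14 space-separated fields of a version-2 flow log record can be parsed mechanically. A small illustrative Python sketch (field names follow the AWS flow log documentation; `parse_flow_log` is a hypothetical helper, not part of any AWS SDK):

```python
# Field names in order, per the version-2 VPC Flow Log record format.
FIELDS = [
    "version", "account-id", "interface-id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log-status",
]

def parse_flow_log(record):
    # Map the 14 whitespace-separated values onto their field names.
    values = record.split()
    if len(values) != len(FIELDS):
        raise ValueError("unexpected number of fields: %d" % len(values))
    return dict(zip(FIELDS, values))

rec = parse_flow_log(
    "2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
    "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK"
)
print(rec["dstport"], rec["action"])  # 22 ACCEPT
```

Here protocol 6 is TCP, and destination port 22 tells you the ACCEPTed traffic was SSH; the REJECT record above is an attempt on RDP (port 3389).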

https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html

S3 Events

  • Different types of S3 events

https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html

AWS Config

  • In which cases would you use AWS Config? Some use cases:
* Ensure that EC2 instances launched in a particular VPC are properly tagged.
* Make sure that every instance is associated with at least one security group.
* Check to make sure that port 22 is not open in any production security group.
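The third check above can be expressed as a tiny compliance predicate. This is a hedged, purely illustrative Python sketch (the rule-dictionary shape is hypothetical, not the AWS Config or EC2 API):

```python
def port_22_open(security_group_rules):
    # True if any ingress rule opens TCP port 22 to the world,
    # i.e. the condition the Config rule above would flag.
    for rule in security_group_rules:
        if (rule.get("protocol") == "tcp"
                and rule.get("from_port", 0) <= 22 <= rule.get("to_port", 0)
                and rule.get("cidr") == "0.0.0.0/0"):
            return True
    return False

prod_rules = [{"protocol": "tcp", "from_port": 22, "to_port": 22, "cidr": "0.0.0.0/0"}]
print(port_22_open(prod_rules))  # True: this group would be non-compliant
```

A real AWS Config rule would run a check like this as a Lambda function against the recorded configuration of each security group.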

https://aws.amazon.com/blogs/aws/aws-config-rules-dynamic-compliance-checking-for-cloud-resources/

AWS Inspector

  • Understand what Inspector is used for

https://aws.amazon.com/premiumsupport/knowledge-center/set-up-amazon-inspector/

https://docs.aws.amazon.com/inspector/latest/userguide/inspector_rule-packages.html

  • What is a rule package?

A rules package is a collection of security checks that can be configured as part of an assessment template and assessment run.

Amazon Inspector has two types of rules packages: the network reachability rules package, which checks for network accessibility of your Amazon EC2 instances, and host assessment rules packages, which check for vulnerabilities and insecure configurations on the Amazon EC2 instance. Host assessment rules packages include Common Vulnerabilities and Exposures (CVE), Center for Internet Security (CIS) operating system configuration benchmarks, and security best practices.

Domain 3: Infrastructure Security

CloudFront

  • Try to create CloudFront Distribution and make a note of each step.
  • What is the difference when you use your own SSL certificate vs. a CloudFront-provided certificate?

AWS WAF

  • Use of AWS WAF

https://aws.amazon.com/waf/getting-started/

  • Remember this: you can only use WAF with Amazon CloudFront and the Application Load Balancer (ALB)
  • Whenever a question asks about SQL injection or Cross-Site Scripting (XSS), think of WAF as the security solution

VPC

  • Understand the difference between Security Groups and Network Access Control Lists (NACLs)
  • VPC endpoints and their policies
  • Example: Restricting Access to a Specific Endpoint
{
  "Version": "2012-10-17",
  "Id": "Policy1415115909152",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Principal": "*",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": ["arn:aws:s3:::my_secure_bucket",
                   "arn:aws:s3:::my_secure_bucket/*"],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}
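To see how the StringNotEquals condition behaves, here is a hedged, simplified Python sketch of the deny logic in the policy above (not a real IAM policy evaluator; it only models this one condition, using the endpoint ID from the sample):

```python
ALLOWED_VPCE = "vpce-1a2b3c4d"  # the endpoint ID from the sample policy

def is_denied(request_vpce):
    # The Deny statement matches when aws:sourceVpce is NOT equal
    # to the allowed endpoint (StringNotEquals), so any request
    # arriving from another source is denied.
    return request_vpce != ALLOWED_VPCE

print(is_denied("vpce-1a2b3c4d"))  # False: request via the allowed endpoint
print(is_denied("vpce-9z9z9z9z"))  # True: any other source is denied
```

This is why such a bucket policy locks the bucket to one VPC endpoint: even requests with valid IAM credentials are denied if they don't arrive through vpce-1a2b3c4d.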

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html#vpc-endpoints-policies-s3

System Manager

  • Use of Systems Manager Parameter Store (e.g., which service would you use to store a secret in AWS? If the question is about database credentials, prefer Secrets Manager)
  • How to use Systems Manager for patching (a question could be: to meet compliance requirements you need to regularly patch your servers; which AWS service can you use?)

Domain 4: Identity and Access Management

  • You will see a bunch of questions about IAM policies and what a particular policy does.
  • Make sure you are comfortable with and understand the difference between IAM policies and resource policies (especially S3 bucket policies and KMS key policies).
  • Use of AWS Organizations (remember, Service Control Policies (SCPs) can only deny access; they cannot allow it)
  • Understand how AWS Security Token Service (STS) works; this is important not only for the exam but also for your daily job.

Active Directory

  • Expect 5–6 questions related to Active Directory.
  • Please brush up on your concepts related to Web Identity Federation and SAML.

Domain 5: Data Protection

  • Must try this scenario: KMS Bring Your Own Key.

https://aws.amazon.com/blogs/aws/new-bring-your-own-keys-with-aws-key-management-service/

  • In which cases would you prefer CloudHSM over KMS (look for key terms like satisfying compliance requirements)?
  • Understand how key rotation works for AWS managed keys (automatically rotated every 3 years) vs. customer managed keys (automatically rotated every 365 days, disabled by default) vs. customer managed keys with imported key material (no automatic rotation)
  • KMS Grants: With grants, you can programmatically delegate the use of KMS customer master keys (CMKs) to other AWS principals. You can use them to allow access, but not deny it. Grants are typically used to provide temporary or more granular permissions.

Other Key Topics

Macie

  • Whenever a question asks about PII (personally identifiable information), your best bet is Macie

Amazon Macie recognizes sensitive data such as personally identifiable information (PII) or intellectual property and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved. For more info

https://docs.aws.amazon.com/macie/latest/userguide/what-is-macie.html

Athena

  • When a question asks about analyzing S3 logs, the most probable answer is Athena

Amazon S3 stores server access logs as objects in an S3 bucket. You can use Athena to quickly analyze and query S3 access logs.

https://aws.amazon.com/blogs/big-data/analyzing-data-in-s3-using-amazon-athena/

DDOS attack

  • Whenever a question asks about a DDoS attack, Shield might be the solution.

AWS provides two levels of protection against DDoS attacks: AWS Shield Standard and AWS Shield Advanced.

https://docs.aws.amazon.com/waf/latest/developerguide/ddos-overview.html

AWS Secret Manager

  • Remember this point: when you enable rotation in Secrets Manager, it rotates the credentials immediately. Make sure all your application instances are configured to use Secrets Manager before enabling credential rotation.

https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html

AWS SES

  • Remember the port numbers
  • Port 25 is the default, but EC2 throttles email traffic on port 25
  • To avoid timeouts, use port 587 or 2587 instead

AWS Lambda

  • Understand the difference between the Function Policy (helpful in troubleshooting when another service, e.g. CloudWatch Events, fails to invoke your Lambda) and the Lambda Execution Role (used where Lambda needs to perform some action, e.g. stopping an EC2 instance)

AWS Glacier Vault Lock

  • Initiate the lock by attaching a vault lock policy to your vault, which sets the lock to an in-progress state and returns a lock ID. While in the in-progress state, you have 24 hours to validate your vault lock policy before the lock ID expires.

https://docs.aws.amazon.com/amazonglacier/latest/dev/vault-lock.html

AWS EBS

  • Make sure you understand this part

Does Amazon wipe EBS drive data upon deletion?

Your data will live in the storage system for an indefinite period of time after you terminate the volume but will be wiped prior to being available to another user.

https://forums.aws.amazon.com/thread.jspa?threadID=111692

Final Words

  • As this is a Specialty exam, you should expect it to be much more difficult than the other exams, and on top of that you need to know many AWS services in depth, not just by skimming through them. But in the end you will learn a lot from it. So keep calm, write this exam, and let me know if you have any questions.