21 Days of AWS using Terraform – Day 21

Welcome to Day 21 of 21 Days of AWS using Terraform

Thanks, everyone, for joining 21 Days of AWS using Terraform. I learned a lot, and I believe you also got a chance to learn something from these blogs.

Thanks, everyone, and Happy Learning!

Please join me on this journey by following any of the links below.

21 Days of AWS using Terraform – Day 20 – Importing Existing AWS Resources to Terraform

This is a very common question asked by Terraform beginners:
I have a bunch of existing cloud resources (AWS in this case) created either

  • Manually(AWS GUI)
  • Boto3
  • CloudFormation

How do I make those resources part of my Terraform configuration (tf and state files)?

Unfortunately, there is no single good way to do that. I have listed a few options to make it possible, but none of them is straightforward and all require a fair amount of manual effort. If you are aware of any other way to achieve this, please let me know :-).

Solution 1: Terraform import command

The terraform import command is able to import existing infrastructure. This allows you to take resources you've created by some other means and bring them under Terraform management.

This is a great way to slowly transition infrastructure to Terraform.

To import a resource (e.g., instance i-082efb265ed372cc5) that was created manually using the AWS Console, first write a resource block for it in your own Terraform configuration, establishing the name by which it will be known to Terraform.


main.tf

provider "aws" {
region = "us-west-2"
}

resource "aws_instance" "import_test" {
ami = "${var.aws_ami}"
instance_type = "${var.instance_type}"
key_name = "${var.key_name}"
tags {
name = "test-instance"
}
}

variables.tf

variable "aws_ami" {
  default = "ami-032509850cf9ee54e"
}

variable "instance_type" {
  default = "t2.micro"
}

variable "key_name" {
  default = "my-test-key"
}

terraform import [options] ADDR ID

Where:

  • ADDR is the resource address in your Terraform configuration to import into.
  • ID is your AWS object ID

Now terraform import can be run to attach the existing instance to this resource configuration:

$ terraform import aws_instance.import_test i-082efb265ed372cc5
aws_instance.import_test: Importing from ID "i-082efb265ed372cc5"...
aws_instance.import_test: Import complete!
Imported aws_instance (ID: i-082efb265ed372cc5)
aws_instance.import_test: Refreshing state... (ID: i-082efb265ed372cc5)

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

This command locates the AWS instance with instance ID i-082efb265ed372cc5 (which was created outside Terraform) and attaches its existing settings, as described by the EC2 API, to the name aws_instance.import_test in the Terraform state.

As a result of the above command, the resource is recorded in the state file. We can now run a terraform plan to see how the configuration compares to the imported resource, and make any adjustments to the configuration to align with the current (or desired) state of the imported object.


$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

aws_instance.import_test: Refreshing state... (ID: i-082efb265ed372cc5)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place

Terraform will perform the following actions:

~ aws_instance.import_test
tags.Name: "test-instance" => ""
tags.name: "" => "test-instance"


Plan: 0 to add, 1 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

State – terraform.tfstate

Terraform must store state about your managed infrastructure and configuration. This state is used by Terraform to map real world resources to our configuration, keep track of metadata, and to improve performance for large infrastructures.

This state is stored by default in a local file named terraform.tfstate.

Terraform uses this local state to create plans and make changes to our infrastructure. Before any terraform operation, Terraform does a refresh to update the state with the real infrastructure.

Reference
https://www.terraform.io/docs/import/
https://www.terraform.io/docs/state/

Solution 2: Terraforming

One major drawback of the above solution is that Terraform does not generate the resource code for us automatically. We have to define each resource manually before running the import command. That is time-consuming and prone to human error, such as writing invalid syntax or supplying the wrong object ID.

To make our life easier there is a tool called Terraforming. It's a free and open-source tool written in Ruby that helps you export existing AWS resources into Terraform style (tf, tfstate).

Currently, Terraforming requires Ruby 2.3 and supports Terraform v0.9.3 or higher. You can install Terraforming with the gem command.

Installation

gem install terraforming

Terraforming command usage

Terraforming can export many types of AWS resources, for example EC2 instances, Security Groups, Route53, VPC, IAM, etc. You can see the full list of supported subcommands by running:

$ terraforming 
Commands:
terraforming alb # ALB
terraforming asg # AutoScaling Group
terraforming cwa # CloudWatch Alarm
terraforming dbpg # Database Parameter Group
terraforming dbsg # Database Security Group
terraforming dbsn # Database Subnet Group
terraforming ddb # DynamoDB
terraforming ec2 # EC2
terraforming ecc # ElastiCache Cluster
terraforming ecsn # ElastiCache Subnet Group
terraforming efs # EFS File System
terraforming eip # EIP
terraforming elb # ELB
terraforming help [COMMAND] # Describe available commands or one specific command
terraforming iamg # IAM Group
terraforming iamgm # IAM Group Membership
terraforming iamgp # IAM Group Policy
terraforming iamip # IAM Instance Profile
terraforming iamp # IAM Policy
terraforming iampa # IAM Policy Attachment
terraforming iamr # IAM Role
terraforming iamrp # IAM Role Policy
terraforming iamu # IAM User
terraforming iamup # IAM User Policy
terraforming igw # Internet Gateway
terraforming kmsa # KMS Key Alias
terraforming kmsk # KMS Key
terraforming lc # Launch Configuration
terraforming nacl # Network ACL
terraforming nat # NAT Gateway
terraforming nif # Network Interface
terraforming r53r # Route53 Record
terraforming r53z # Route53 Hosted Zone
terraforming rds # RDS
terraforming rs # Redshift
terraforming rt # Route Table
terraforming rta # Route Table Association
terraforming s3 # S3
terraforming sg # Security Group
terraforming sn # Subnet
terraforming snss # SNS Subscription
terraforming snst # SNS Topic
terraforming sqs # SQS
terraforming vgw # VPN Gateway
terraforming vpc # VPC

Options:
[--merge=MERGE] # tfstate file to merge
[--overwrite], [--no-overwrite] # Overwrite existing tfstate
[--tfstate], [--no-tfstate] # Generate tfstate
[--profile=PROFILE] # AWS credentials profile
[--region=REGION] # AWS region
[--assume=ASSUME] # Role ARN to assume
[--use-bundled-cert], [--no-use-bundled-cert] # Use the bundled CA certificate from AWS SDK

Export AWS resource into tf

Following is an example of exporting existing EC2 instances

$ terraforming ec2
resource "aws_instance" "test-instance" {
    ami                         = "ami-032509850cf9ee54e"
    availability_zone           = "us-west-2a"
    ebs_optimized               = false
    instance_type               = "t2.micro"
    monitoring                  = false
    key_name                    = "my-test-key"
    subnet_id                   = "subnet-f3e4988a"
    vpc_security_group_ids      = ["sg-052d06581ff7fd432"]
    associate_public_ip_address = false
    private_ip                  = "172.31.29.96"
    source_dest_check           = true

    root_block_device {
        volume_type           = "gp2"
        volume_size           = 8
        delete_on_termination = true
    }

    tags {
        "Name" = "test-instance"
    }
}

Once we have the tf code definitions above, we can copy and paste them into our Terraform code.

Create a main.tf file and copy the above code:

provider "aws" {
 region = "us-west-2"
}
resource "aws_instance" "test-instance" {
    ami                         = "ami-032509850cf9ee54e"
    availability_zone           = "us-west-2a"
    ebs_optimized               = false
    instance_type               = "t2.micro"
    monitoring                  = false
    key_name                    = "my-test-key"
    subnet_id                   = "subnet-f3e4988a"
    vpc_security_group_ids      = ["sg-052d06581ff7fd432"]
    associate_public_ip_address = false
    private_ip                  = "172.31.29.96"
    source_dest_check           = true

    root_block_device {
        volume_type           = "gp2"
        volume_size           = 8
        delete_on_termination = true
    }

    tags {
        "Name" = "test-instance"
    }
}

Similar to Solution 1, we need to attach the existing instance to this resource configuration:

$ terraform import aws_instance.test-instance i-082efb265ed372cc5
aws_instance.test-instance: Importing from ID "i-082efb265ed372cc5"...
aws_instance.test-instance: Import complete!
Imported aws_instance (ID: i-082efb265ed372cc5)
aws_instance.test-instance: Refreshing state... (ID: i-082efb265ed372cc5)

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

Now we can confirm the new resource definition by running terraform plan.

$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

aws_instance.test-instance: Refreshing state... (ID: i-082efb265ed372cc5)

------------------------------------------------------------------------

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

NOTE: If Terraform doesn't show any changes to the AWS infrastructure, it means the import was successful.

The same steps need to be performed for any other resources, for example security groups:

$ terraforming sg
resource "aws_security_group" "vpc-061fe159b3de2c515-agatrainWebSG" {
    name        = "agatrainWebSG"
    description = "webserver sec group"
    vpc_id      = "vpc-061fe159b3de2c515"

    ingress {
        from_port       = 80
        to_port         = 80
        protocol        = "tcp"
        cidr_blocks     = ["0.0.0.0/0"]
        ipv6_cidr_blocks     = ["::/0"]
    }

    ingress {
        from_port       = 22
        to_port         = 22
        protocol        = "tcp"
        cidr_blocks     = ["0.0.0.0/0"]
        ipv6_cidr_blocks     = ["::/0"]
    }

    ingress {
        from_port       = 3306
        to_port         = 3306
        protocol        = "tcp"
        security_groups = []
        self            = true
    }


    egress {
        from_port       = 0
        to_port         = 0
        protocol        = "-1"
        cidr_blocks     = ["0.0.0.0/0"]
    }

    tags {
        "Name" = "mytestsg"
    }
}

Reference
https://github.com/dtan4/terraforming

Solution 3: Converting CloudFormation Code to Terraform

Let's say you have existing CloudFormation code that you want to convert into Terraform.

Sample CloudFormation Code
https://s3-us-west-2.amazonaws.com/cloudformation-templates-us-west-2/EC2InstanceWithSecurityGroupSample.template

wget https://s3-us-west-2.amazonaws.com/cloudformation-templates-us-west-2/EC2InstanceWithSecurityGroupSample.template
mv EC2InstanceWithSecurityGroupSample.template example-stack.json

To create a CloudFormation stack:

aws cloudformation create-stack --template-body file://example-stack.json --stack-name Cloudformation-to-terraform

Get the CloudFormation Stack response

$ aws cloudformation describe-stacks --stack-name Cloudformation-to-terraform
{
    "Stacks": [
        {
            "StackId": "arn:aws:cloudformation:us-west-2:188140964824:stack/Cloudformation-to-terraform/1c3b5ad0-9eb2-11e9-b9f3-0242fa89e25c",
            "DriftInformation": {
                "StackDriftStatus": "NOT_CHECKED"
            },
            "Description": "AWS CloudFormation Sample Template EC2InstanceWithSecurityGroupSample: Create an Amazon EC2 instance running the Amazon Linux AMI. The AMI is chosen based on the region in which the stack is run. This example creates an EC2 security group for the instance to give you SSH access. **WARNING** This template creates an Amazon EC2 instance. You will be billed for the AWS resources used if you create a stack from this template.",
            "Parameters": [
                {
                    "ParameterValue": "terraform-plakhera",
                    "ParameterKey": "KeyName"
                },
                {
                    "ParameterValue": "0.0.0.0/0",
                    "ParameterKey": "SSHLocation"
                },
                {
                    "ParameterValue": "t2.small",
                    "ParameterKey": "InstanceType"
                }
            ],
            "Tags": [],
            "Outputs": [
                {
                    "Description": "InstanceId of the newly created EC2 instance",
                    "OutputKey": "InstanceId",
                    "OutputValue": "i-0723c8aa2ca4385eb"
                },
                {
                    "Description": "Public IP address of the newly created EC2 instance",
                    "OutputKey": "PublicIP",
                    "OutputValue": "54.184.166.37"
                },
                {
                    "Description": "Availability Zone of the newly created EC2 instance",
                    "OutputKey": "AZ",
                    "OutputValue": "us-west-2c"
                },
                {
                    "Description": "Public DNSName of the newly created EC2 instance",
                    "OutputKey": "PublicDNS",
                    "OutputValue": "ec2-54-184-166-37.us-west-2.compute.amazonaws.com"
                }
            ],
            "EnableTerminationProtection": false,
            "CreationTime": "2019-07-04T23:19:02.785Z",
            "StackName": "Cloudformation-to-terraform",
            "NotificationARNs": [],
            "StackStatus": "CREATE_COMPLETE",
            "DisableRollback": false,
            "RollbackConfiguration": {
                "RollbackTriggers": []
            }
        }
    ]
}
  • Save the output to a file Cloudformation-to-terraform

Install a tool called cf-to-tf https://github.com/humanmade/cf-to-tf

npm i -g @humanmade/cf-to-tf
brew install json2hcl

  • Run this command
$ cf-to-tf --stack - config < Cloudformation-to-terraform | json2hcl | cf-to-tf clean-hcl | terraform fmt -
resource "aws_cloudformation_stack" "main" {
name = "Cloudformation-to-terraform"
disable_rollback = false

parameters = {
KeyName = "terraform-plakhera"
SSHLocation = "0.0.0.0/0"
InstanceType = "t2.small"
}

tags = {}
}

NOTE: It does not generate Terraform configuration for the individual resources the CloudFormation stack provisions; it generates an aws_cloudformation_stack resource so that you can manage your existing CloudFormation stacks with Terraform instead of, or in conjunction with, the AWS Console and CLI.
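
If you want Terraform to actually manage the stack going forward, the generated resource also needs the template body itself. A minimal sketch, assuming the example-stack.json downloaded earlier sits next to your configuration (this is an illustration, not what cf-to-tf emits):

resource "aws_cloudformation_stack" "main" {
  name             = "Cloudformation-to-terraform"
  disable_rollback = false
  template_body    = "${file("example-stack.json")}"

  parameters = {
    KeyName      = "terraform-plakhera"
    SSHLocation  = "0.0.0.0/0"
    InstanceType = "t2.small"
  }
}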

In case you get the error "unable to parse JSON: 1:13: illegal char", see:

https://github.com/humanmade/cf-to-tf/issues/8

NOTE: I found this tool somewhat buggy, as it didn't import all of my resources.


CloudFormer is not a fourth solution; it's just one more tool I want to mention for anyone who is already familiar with CloudFormation.

Cloudformer

What is AWS CloudFormer?

AWS CloudFormer is a template creation tool: it creates an AWS CloudFormation template from the existing resources in your AWS account. We can select any supported AWS resources that are running in our account, and CloudFormer creates a template in an Amazon S3 bucket.

01. Go to the AWS Management Console for the account in which these resources reside.
02. Go to the CloudFormation service in the AWS Management Console and click on Launch CloudFormer.
03. Choose Select a sample template and pick CloudFormer from the drop-down. Click Next.


04. Give a Name, Username and Password, and select CreateNewVPC from the VPC Selection drop-down list.


05. Keep the remaining values at their defaults in the wizard and complete the review.

Now you can see a new EC2 instance being created under the EC2 service dashboard. The CloudFormer tool runs inside this EC2 instance. Once the EC2 creation is complete, you can access the tool using the public DNS of the instance.

06. Select the CloudFormer EC2 instance and copy the public DNS. Paste it into your browser as https://<publicDNS>. You will be redirected to this page. Then select your AWS region from the drop-down and click on Create Template.


07. In the next window, Template Information, you have to enter template details such as the template description. Enter those details and click Continue.


08. In the next window, select DNS Names if you have them in your VPC.


09. Select the VPC you need to include in the CloudFormation template. From here onwards you will see only resources belonging to the selected VPCs.


10. Select the Subnets and Internet Gateways you need to add to the CloudFormation template.


11. Select the Access Control Lists (ACLs) and Route Tables you need to include in the CloudFormation template.


12. Select the Elastic IPs and other managed services you need to add to the CloudFormation template.


13. Select the EC2 instances you need to add to the CloudFormation template.


14. Select storage-related services and application services in the next steps.


16. Select the Security Groups and Operational Resources you need to include in the CloudFormation template from the next two windows.


17. Now you have selected all the resources you need to include in the CloudFormer template. Provide a name for the template, create an S3 bucket in the S3 service and select it from the drop-down list. Your CloudFormer template will be saved in this S3 bucket. Save the CloudFormer template.


18. You can access the CloudFormer template from the S3 location whenever you need to deploy it. You can also change the details in the template according to your requirements.


Reference https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-cloudformer.html

Please join me on this journey by following any of the links below.


21 Days of AWS using Terraform – Day 19 – Introduction to AWS Lambda using Terraform

What is AWS Lambda?

With AWS Lambda, you can run code without provisioning or managing servers. You pay only for the compute time that you consume — there’s no charge when your code isn’t running. You can run code for virtually any type of application or backend service — all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

  • To start with Lambda

Go to https://us-west-2.console.aws.amazon.com/lambda → Create a function

  • Create function: you have three options
* Author from scratch: Which is self-explanatory, i.e., you are writing your own function
* Use a blueprint: Build a Lambda application from sample code and configuration presets for common use cases (provided by AWS)
* Browse serverless app repository: Deploy a sample Lambda application from the AWS Serverless Application Repository (published by other developers and AWS Partners)
  • Function name: HelloWorld
  • Runtime: Choose Python 3.7 from the dropdown
  • Permission: For the time being, choose the default permission
  • Click Create Function

Invoking Lambda Function

  • When building applications on AWS Lambda the core components are Lambda functions and event sources. An event source is the AWS service or custom application that publishes events, and a Lambda function is the custom code that processes the events
* Amazon S3 Pushes Events
* AWS Lambda Pulls Events from a Kinesis Stream
* HTTP API requests through API Gateway
* CloudWatch Schedule Events
  • From the list select CloudWatch Events
Reference: https://www.youtube.com/watch?v=WbHw14hF7lU
NOTE: It's an old slide; Go is already supported now.
  • As you can see under CloudWatch Events it says configuration required
  • Rule: Create a new rule
  • Rule name: Everyday
  • Rule description: Give your Rule some description
  • Rule type: Choose Schedule expression and set its rate to rate(1 day) (i.e., it is going to trigger every day)

Reference: Schedule Expressions Using Rate or Cron – AWS Lambda (docs.aws.amazon.com). AWS Lambda supports standard rate and cron expressions for frequencies of up to once per minute.

  • Click on Add and Save
  • Now go back to your Lambda code (HelloWorld)

import json

def lambda_handler(event, context):
    # TODO implement
    print(event)  # <-------- add this line
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
  • Add this entry, which simply means we are trying to print the event
  • Again save it
  • Let’s try to set a simple test event, Click on Test
  • Under Event template, search for Amazon CloudWatch
  • Event Name: Give your event some name and test it
  • Go back and this time Click on Monitoring
  • Click on View logs in CloudWatch
  • Click on the log stream and you will see the same logs you see in Lambda console

Lambda Programming Model

  • Lambda supports a bunch of programming languages
  • You write code for your Lambda function in one of the languages AWS Lambda supports. Regardless of the language you choose, there is a common pattern to writing code for a Lambda function that includes the following core concepts.
* Handler: The handler is the function AWS Lambda calls to start execution of your Lambda function; it acts as the entry point.
  • As you can see, the handler starts with lambda_function, which is the Python script name, followed by lambda_handler, which is the function that acts as the entry point and receives event and context
  • Event: We already saw in the previous example where we passed the CloudWatch Event to our code
  • Context — AWS Lambda also passes a context object to the handler function, as the second parameter. Via this context object, your code can interact with AWS Lambda. For example, your code can find the execution time remaining before AWS Lambda terminates your Lambda function.
  • Logging — Your Lambda function can contain logging statements. AWS Lambda writes these logs to CloudWatch Logs.
  • Exceptions — Your Lambda function needs to communicate the result of the function execution to AWS Lambda. Depending on the language you author your Lambda function code, there are different ways to end a request successfully or to notify AWS Lambda an error occurred during the execution.
  • One more thing I want to highlight is the timeout (see the sketch after this list)
  • You can now set the timeout value for a function to any value up to 15 minutes. When the specified timeout is reached, AWS Lambda terminates execution of your Lambda function. As a best practice, you should set the timeout value based on your expected execution time to prevent your function from running longer than intended.
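
In Terraform this maps to the timeout argument on aws_lambda_function. A minimal sketch (the resource name and value are illustrative, and it reuses the IAM role defined in Step3 of the Terraform code below):

resource "aws_lambda_function" "long_running" {
  filename      = "lambda.zip"
  function_name = "long_running_handler"
  role          = "${aws_iam_role.iam_for_lambda.arn}"
  handler       = "lambda.lambda_handler"
  runtime       = "python2.7"

  # When this timeout is reached, Lambda terminates the invocation
  timeout = 900 # seconds; 15 minutes is the current maximum
}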

Common Use case of Lambda

Terraform Code

So far we have performed all the steps manually; let's try to automate them using Terraform.

  • Step1: Create your test Python function
def lambda_handler(event, context):
    print ("Hello from terraform world")
    return "hello from terraform world"
  • Now let’s zip it up
$ zip lambda.zip lambda.py 
  adding: lambda.py (deflated 27%)
  • Step2: Define your Lambda resource

resource "aws_lambda_function" "test_lambda" {
  filename      = "lambda.zip"
  function_name = "lambda_handler"
  role          = "${aws_iam_role.iam_for_lambda.arn}"
  handler       = "lambda.lambda_handler"

  # The filebase64sha256() function is available in Terraform 0.11.12 and later
  # For Terraform 0.11.11 and earlier, use the base64sha256() function and the file() function:
  # source_code_hash = "${base64sha256(file("lambda_function_payload.zip"))}"
  source_code_hash = "${base64sha256("lambda.zip")}"
  runtime          = "python2.7"
}
  • filename: The name of the zip file you created in the previous step
  • function_name: The name the Lambda function will have in AWS (here we reuse the handler name, lambda_handler)
  • role: IAM role attached to the Lambda Function. This governs both who / what can invoke your Lambda Function, as well as what resources our Lambda Function has access to
  • handler: The function entry point in our code, in the form filename.method_name (the filename is lambda.py, without the file extension, and the function is lambda_handler, i.e., def lambda_handler(event, context))
  • source_code_hash: Used to trigger updates. Must be set to a base64-encoded SHA256 hash of the package file specified with either filename or s3_key. The usual way to set this is filebase64sha256("file.zip") (Terraform 0.11.12 and later) or base64sha256(file("file.zip")) (Terraform 0.11.11 and earlier), where “file.zip” is the local filename of the lambda function source archive.
  • Runtime: The identifier of the function’s  runtime
Valid Values: nodejs8.10 | nodejs10.x | java8 | python2.7 | python3.6 | python3.7 | dotnetcore1.0 | dotnetcore2.1 | go1.x | ruby2.5 | provided
  • Step3: Create an IAM Role

resource "aws_iam_role" "iam_for_lambda" {
  name = "iam_for_lambda"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

# See also the following AWS managed policy: AWSLambdaBasicExecutionRole
resource "aws_iam_policy" "lambda_logging" {
  name        = "lambda_logging"
  path        = "/"
  description = "IAM policy for logging from a lambda"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*",
      "Effect": "Allow"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = "${aws_iam_role.iam_for_lambda.name}"
  policy_arn = "${aws_iam_policy.lambda_logging.arn}"
}

Reference https://www.terraform.io/docs/providers/aws/r/lambda_function.html

  • Step4: Terraform Init: Initialize a Terraform working directory, containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control.
$ terraform init

Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "aws" (2.23.0)...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 2.23"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
  • Step5: Terraform plan: The terraform plan command is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_cloudwatch_log_group.example
      id:                             <computed>
      arn:                            <computed>
      name:                           "/aws/lambda/lambda_handler"
      retention_in_days:              "14"

  + aws_iam_policy.lambda_logging
      id:                             <computed>
      arn:                            <computed>
      description:                    "IAM policy for logging from a lambda"
      name:                           "lambda_logging"
      path:                           "/"
      policy:                         "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": [\n        \"logs:CreateLogStream\",\n        \"logs:PutLogEvents\"\n      ],\n      \"Resource\": \"arn:aws:logs:*:*:*\",\n      \"Effect\": \"Allow\"\n    }\n  ]\n}\n"

  + aws_iam_role.iam_for_lambda
      id:                             <computed>
      arn:                            <computed>
      assume_role_policy:             "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": \"sts:AssumeRole\",\n      \"Principal\": {\n        \"Service\": \"lambda.amazonaws.com\"\n      },\n      \"Effect\": \"Allow\",\n      \"Sid\": \"\"\n    }\n  ]\n}\n"
      create_date:                    <computed>
      force_detach_policies:          "false"
      max_session_duration:           "3600"
      name:                           "iam_for_lambda"
      path:                           "/"
      unique_id:                      <computed>

  + aws_iam_role_policy_attachment.lambda_logs
      id:                             <computed>
      policy_arn:                     "${aws_iam_policy.lambda_logging.arn}"
      role:                           "iam_for_lambda"

  + aws_lambda_function.test_lambda
      id:                             <computed>
      arn:                            <computed>
      filename:                       "lambda.zip"
      function_name:                  "lambda_handler"
      handler:                        "lambda.lambda_handler"
      invoke_arn:                     <computed>
      last_modified:                  <computed>
      memory_size:                    "128"
      publish:                        "false"
      qualified_arn:                  <computed>
      reserved_concurrent_executions: "-1"
      role:                           "${aws_iam_role.iam_for_lambda.arn}"
      runtime:                        "python2.7"
      source_code_hash:               "Gpu07NPcj26NrKv0Ne6BbZkfDRuM3ozHHqCFUWH9Sqg="
      source_code_size:               <computed>
      timeout:                        "3"
      tracing_config.#:               <computed>
      version:                        <computed>


Plan: 5 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Step6: terraform apply: The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan.

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_cloudwatch_log_group.example
      id:                             <computed>
      arn:                            <computed>
      name:                           "/aws/lambda/lambda_handler"
      retention_in_days:              "14"

  + aws_iam_policy.lambda_logging
      id:                             <computed>
      arn:                            <computed>
      description:                    "IAM policy for logging from a lambda"
      name:                           "lambda_logging"
      path:                           "/"
      policy:                         "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": [\n        \"logs:CreateLogStream\",\n        \"logs:PutLogEvents\"\n      ],\n      \"Resource\": \"arn:aws:logs:*:*:*\",\n      \"Effect\": \"Allow\"\n    }\n  ]\n}\n"

  + aws_iam_role.iam_for_lambda
      id:                             <computed>
      arn:                            <computed>
      assume_role_policy:             "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": \"sts:AssumeRole\",\n      \"Principal\": {\n        \"Service\": \"lambda.amazonaws.com\"\n      },\n      \"Effect\": \"Allow\",\n      \"Sid\": \"\"\n    }\n  ]\n}\n"
      create_date:                    <computed>
      force_detach_policies:          "false"
      max_session_duration:           "3600"
      name:                           "iam_for_lambda"
      path:                           "/"
      unique_id:                      <computed>

  + aws_iam_role_policy_attachment.lambda_logs
      id:                             <computed>
      policy_arn:                     "${aws_iam_policy.lambda_logging.arn}"
      role:                           "iam_for_lambda"

  + aws_lambda_function.test_lambda
      id:                             <computed>
      arn:                            <computed>
      filename:                       "lambda.zip"
      function_name:                  "lambda_handler"
      handler:                        "lambda.lambda_handler"
      invoke_arn:                     <computed>
      last_modified:                  <computed>
      memory_size:                    "128"
      publish:                        "false"
      qualified_arn:                  <computed>
      reserved_concurrent_executions: "-1"
      role:                           "${aws_iam_role.iam_for_lambda.arn}"
      runtime:                        "python2.7"
      source_code_hash:               "Gpu07NPcj26NrKv0Ne6BbZkfDRuM3ozHHqCFUWH9Sqg="
      source_code_size:               <computed>
      timeout:                        "3"
      tracing_config.#:               <computed>
      version:                        <computed>


Plan: 5 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_iam_policy.lambda_logging: Creating...
  arn:         "" => "<computed>"
  description: "" => "IAM policy for logging from a lambda"
  name:        "" => "lambda_logging"
  path:        "" => "/"
  policy:      "" => "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": [\n        \"logs:CreateLogStream\",\n        \"logs:PutLogEvents\"\n      ],\n      \"Resource\": \"arn:aws:logs:*:*:*\",\n      \"Effect\": \"Allow\"\n    }\n  ]\n}\n"
aws_iam_role.iam_for_lambda: Creating...
  arn:                   "" => "<computed>"
  assume_role_policy:    "" => "{\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Action\": \"sts:AssumeRole\",\n      \"Principal\": {\n        \"Service\": \"lambda.amazonaws.com\"\n      },\n      \"Effect\": \"Allow\",\n      \"Sid\": \"\"\n    }\n  ]\n}\n"
  create_date:           "" => "<computed>"
  force_detach_policies: "" => "false"
  max_session_duration:  "" => "3600"
  name:                  "" => "iam_for_lambda"
  path:                  "" => "/"
  unique_id:             "" => "<computed>"
aws_iam_policy.lambda_logging: Still creating... (10s elapsed)
aws_iam_role.iam_for_lambda: Still creating... (10s elapsed)
aws_iam_role.iam_for_lambda: Creation complete after 10s (ID: iam_for_lambda)
aws_lambda_function.test_lambda: Creating...
  arn:                            "" => "<computed>"
  filename:                       "" => "lambda.zip"
  function_name:                  "" => "lambda_handler"
  handler:                        "" => "lambda.lambda_handler"
  invoke_arn:                     "" => "<computed>"
  last_modified:                  "" => "<computed>"
  memory_size:                    "" => "128"
  publish:                        "" => "false"
  qualified_arn:                  "" => "<computed>"
  reserved_concurrent_executions: "" => "-1"
  role:                           "" => "arn:aws:iam::XXXXXX:role/iam_for_lambda"
  runtime:                        "" => "python2.7"
  source_code_hash:               "" => "Gpu07NPcj26NrKv0Ne6BbZkfDRuM3ozHHqCFUWH9Sqg="
  source_code_size:               "" => "<computed>"
  timeout:                        "" => "3"
  tracing_config.#:               "" => "<computed>"
  version:                        "" => "<computed>"
aws_iam_policy.lambda_logging: Creation complete after 11s (ID: arn:aws:iam::355622012945:policy/lambda_logging)
aws_iam_role_policy_attachment.lambda_logs: Creating...
  policy_arn: "" => "arn:aws:iam::XXXXXX:policy/lambda_logging"
  role:       "" => "iam_for_lambda"
aws_iam_role_policy_attachment.lambda_logs: Creation complete after 0s (ID: iam_for_lambda-20190814010350932300000001)
aws_lambda_function.test_lambda: Still creating... (10s elapsed)
aws_lambda_function.test_lambda: Still creating... (20s elapsed)
aws_lambda_function.test_lambda: Still creating... (30s elapsed)
aws_lambda_function.test_lambda: Still creating... (40s elapsed)
aws_lambda_function.test_lambda: Creation complete after 41s (ID: lambda_handler)
aws_cloudwatch_log_group.example: Creating...
  arn:               "" => "<computed>"
  name:              "" => "/aws/lambda/lambda_handler"
  retention_in_days: "" => "14"
aws_cloudwatch_log_group.example: Still creating... (10s elapsed)
aws_cloudwatch_log_group.example: Creation complete after 11s (ID: /aws/lambda/lambda_handler)

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
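
The console walkthrough earlier in this post also wired a CloudWatch Events schedule to the function. If you want that piece in Terraform as well, a minimal sketch (the resource names are illustrative; it reuses the test_lambda function defined above) could look like this:

resource "aws_cloudwatch_event_rule" "everyday" {
  name                = "Everyday"
  schedule_expression = "rate(1 day)"
}

resource "aws_cloudwatch_event_target" "everyday_lambda" {
  rule = "${aws_cloudwatch_event_rule.everyday.name}"
  arn  = "${aws_lambda_function.test_lambda.arn}"
}

# Allow CloudWatch Events to invoke the function
resource "aws_lambda_permission" "allow_cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatchEvents"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.test_lambda.function_name}"
  principal     = "events.amazonaws.com"
  source_arn    = "${aws_cloudwatch_event_rule.everyday.arn}"
}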

Please join me on this journey by following any of the links below.

21 Days of AWS using Terraform – Day 18 – Introduction to AWS VPC Peering using Terraform

Welcome to Day 18 of 21 Days of AWS using Terraform. The topic for today is Introduction to AWS VPC Peering using Terraform.

What is VPC Peering?

  • Let's say two VPCs want to communicate with each other or share services between them; the best way to do that is with the help of VPC Peering
  • A VPC Peering connection is a networking connection between two VPCs that allows us to route traffic between them using private IPv4 addresses.
  • Instances in either VPC can communicate with each other as if they are part of the same network
  • AWS uses the existing infrastructure of a VPC to create a VPC peering connection
  • It’s neither a gateway nor a VPN connection and doesn’t rely on a separate piece of physical hardware
  • There is no single point of failure or a bandwidth bottleneck i.e bandwidth between instances in peered VPC is no different than bandwidth between instances in the same VPC.
  • VPC Peering doesn't support transitive peering, i.e., VPC1 can talk to VPC2 and VPC2 can talk to VPC3, but VPC1 can't talk to VPC3. This is for security reasons, so if VPC1 wants to communicate with VPC3 we need to establish one more peering connection, between VPC1 and VPC3.
  • Once VPC Peering is established, instances in the two VPCs can communicate with each other using private IPs (no need to go via an Internet Gateway)
  • Inter-region VPC Peering is supported
  • VPC Peering is even supported between two different accounts
  • Make sure there is no overlapping IP range between the two VPCs

Go to your VPC Dashboard and look for Peering Connections → Create Peering Connection

  • Give some meaningful name to Peering connection name tag(eg: vpc-peering-test)
  • Select Requester VPC
  • As mentioned in the first part of the series, we can create VPC Peering between different accounts as well as between different regions
  • Select the Accepter VPC (as you can see, the Accepter VPC has a completely different CIDR range, as overlapping CIDRs are not supported)

Even though I am creating VPC Peering within the same account, I still need to accept the peering connection.

  • The final step is to update the individual VPC route table with the peering connection

Terraform Code
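
A minimal sketch of the setup described above (the two VPCs, their CIDR blocks, and the resource names are illustrative; in a real module they would come from variables or existing resources):

resource "aws_vpc" "vpc1" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_vpc" "vpc2" {
  cidr_block = "192.168.0.0/16" # non-overlapping CIDR, as required for peering
}

resource "aws_vpc_peering_connection" "vpc_peering_test" {
  vpc_id      = "${aws_vpc.vpc1.id}" # requester VPC
  peer_vpc_id = "${aws_vpc.vpc2.id}" # accepter VPC
  auto_accept = true                 # same account and region, so Terraform can accept it directly

  tags {
    Name = "vpc-peering-test"
  }
}

# Update each VPC's route table so traffic for the other CIDR goes over the peering connection
resource "aws_route" "vpc1_to_vpc2" {
  route_table_id            = "${aws_vpc.vpc1.main_route_table_id}"
  destination_cidr_block    = "${aws_vpc.vpc2.cidr_block}"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.vpc_peering_test.id}"
}

resource "aws_route" "vpc2_to_vpc1" {
  route_table_id            = "${aws_vpc.vpc2.main_route_table_id}"
  destination_cidr_block    = "${aws_vpc.vpc1.cidr_block}"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.vpc_peering_test.id}"
}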

Please join me on this journey by following any of the links below.

21 Days of AWS using Terraform – Day 17 – Introduction to AWS VPC Endpoint using Terraform

Welcome to Day 17 of 21 Days of AWS using Terraform. The topic for today is Introduction to AWS VPC Endpoint using Terraform.

What is VPC EndPoint?

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.

There are two types of VPC endpoints:

  • Interface endpoints (using PrivateLink): An interface endpoint is an elastic network interface with a private IP address that serves as an entry point for traffic destined to a supported service
# Supported Services
 Amazon API Gateway
 AWS CloudFormation
 Amazon CloudWatch
 Amazon CloudWatch Events
 Amazon CloudWatch Logs
 AWS CodeBuild
 AWS Config
 Amazon EC2 API
 Elastic Load Balancing API
 Amazon Elastic Container Registry
 Amazon Elastic Container Service
 AWS Key Management Service
 Amazon Kinesis Data Streams
 Amazon SageMaker and Amazon SageMaker Runtime
 Amazon SageMaker Notebook Instance
 AWS Secrets Manager
 AWS Security Token Service
 AWS Service Catalog
 Amazon SNS
 Amazon SQS
 AWS Systems Manager
 Endpoint services hosted by other AWS accounts
 Supported AWS Marketplace partner services 
  • Gateway endpoints: A gateway endpoint is a gateway that is a target for a specified route in your route table, used for traffic destined to a supported AWS service
# Supported Services
* Amazon S3
* DynamoDB

Scenario 1: I want to push logs from an EC2 private instance (running on a private IP) to CloudWatch Logs.

  • To set up a VPC Endpoint:
Go to https://us-west-2.console.aws.amazon.com/vpc --> EndPoints
  • Once the endpoint is created you will see an elastic network interface with a private IP address which acts as an entry point for traffic destined to a supported service

Terraform Code

resource "aws_vpc_endpoint" "ec2logs" {
  vpc_id            = "${var.vpc_id}"
  service_name      = "com.amazonaws.us-west-2.logs"
  subnet_ids        = ["${var.private_subnet1}", "${var.private_subnet2}"]
  vpc_endpoint_type = "Interface"

  security_group_ids = [
    "${var.security_group}",
  ]

  policy = <<POLICY
{
    "Statement": [
        {
            "Action": "*",
            "Effect": "Allow",
            "Resource": "*",
            "Principal": "*"
        }
    ]
}
POLICY

  private_dns_enabled = true
}

variables.tf

variable "vpc_id" {}
variable "private_subnet1" {}
variable "private_subnet2" {}
variable "security_group" {}

VPC Endpoint Log Module

module "vpc_endpoint_logs" {
  source          = "./vpc_endpoint_logs"
  vpc_id          = "${module.vpc.vpc_id}"
  private_subnet1 = "${module.vpc.private_subnet1}"
  private_subnet2 = "${module.vpc.private_subnet2}"
  security_group  = "${module.vpc.security_group}"
}

Scenario 2: I want to push logs from an EC2 private instance (running on a private IP) to AWS S3.

  • In the case of a gateway endpoint, you will see an entry in the route table, used for traffic destined to the supported AWS service:
resource "aws_vpc_endpoint" "s3" {
  vpc_id       = "${var.vpc_id}"
  service_name = "com.amazonaws.us-west-2.s3"
  route_table_ids = ["${var.route_table}"]
      policy = <<POLICY
{
    "Statement": [
        {
            "Action": "*",
            "Effect": "Allow",
            "Resource": "*",
            "Principal": "*"
        }
    ]
}
POLICY
}

variables.tf

variable "vpc_id" {}
variable "route_table" {}

VPC Endpoint S3 Module

module "vpc_endpoint_s3" {
  source = "./vpc_endpoint_s3"
  vpc_id = "${module.vpc.vpc_id}"
  route_table = "${module.vpc.route_table}"
}

Here we need a little modification to our VPC module: the route table output acts as an input to the VPC endpoint S3 module.

output "route_table" {
  value = "${aws_default_route_table.private_route.id}"
}

Limitations

  • Only supports IPv4
  • Supported only within the same region
  • Interface endpoints cannot be accessed via VPC Peering or a VPN connection, only via Direct Connect.
  • You cannot use an IAM policy or bucket policy to allow access from a VPC IPv4 CIDR range (the private IPv4 address range). VPC CIDR blocks can be overlapping or identical, which may lead to unexpected results. Therefore, you cannot use the aws:SourceIp condition in your IAM policies for requests to Amazon S3 through a VPC endpoint. This applies to IAM policies for users and roles, and any bucket policies. If a statement includes the aws:SourceIp condition, the value fails to match any provided IP address or range (see the sketch below for the endpoint-based alternative).
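
If you do need to lock a bucket down to traffic arriving through the endpoint, the usual alternative is to key the bucket policy on the endpoint itself rather than on IP addresses, via the aws:sourceVpce condition. A minimal sketch (the bucket name is illustrative, and it assumes the policy lives in the same module as the aws_vpc_endpoint.s3 resource above):

resource "aws_s3_bucket_policy" "endpoint_only" {
  bucket = "my-private-logs-bucket" # illustrative existing bucket

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessOnlyViaVPCEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-private-logs-bucket",
        "arn:aws:s3:::my-private-logs-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "${aws_vpc_endpoint.s3.id}"
        }
      }
    }
  ]
}
POLICY
}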

Please join me on this journey by following any of the links below.

21 Days of AWS using Terraform – Day 16 – Introduction to AWS Config using Terraform

Welcome to Day 16 of 21 Days of AWS using Terraform. The topic for today is Introduction to AWS Config using Terraform.

What Is AWS Config?

AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time.

Features

  • Track the state of all resources (OS level too: Windows/Linux)
  • Meet your compliance needs (PCI-DSS, HIPAA)
  • Validate resources against AWS Config Rules

Setting up AWS Config

* All resources: You can check Record all resources supported in this region, OR global resources like IAM, OR we can even check specific resources, e.g., EC2
* Amazon S3 bucket: This bucket will receive the configuration history and configuration snapshot files
* Amazon SNS topic (Optional): AWS Config can stream configuration changes and notifications to this topic
* AWS Config role: An IAM role that gives AWS Config read-only access to AWS resources
* Rules: Skip this section for the time being
  • Confirm, and AWS Config is set up for us.
  • Check the status of AWS Config by clicking on the status icon at the top of the page
  • Now click on Resource and then Instance
  • Click on the Configuration timeline
  • Scroll down and click on changes

Scenario: Last time we skipped the rules section. This time let's add the Config rules; our task for today is to make sure the account is compliant with the following:

  • CloudTrail must be enabled
  • S3 bucket versioning must be enabled
  • EC2 instance must be a part of VPC
  • We are only using instance type as t2.micro

Search for CloudTrail and select cloudtrail-enabled

  • You don't need to change any of the default values; just click on Save

In the same way, search for s3-bucket-versioning-enabled

Search for ec2-instances-in-vpc

  • This requires some changes as you need to specify your VPC id

Search for desired-instance-type

  • Add the instanceType Value to t2.micro
  • Finally, you will see something like this
  • If you drill down further, you can see this instance is using t2.medium, while in the desired-instance-type Config rule we chose t2.micro
  • One more example: as you can see, in this case the S3 bucket is non-compliant
  • If we go to the S3 bucket and enable versioning
  • As we have remediated the issue, we can see the effect immediately
  • We are back in business

Terraform Code

  • Now we need to automate the entire process, and there is no better tool than Terraform to do the job for us.

Terraform Example

provider "aws" {
  region = "us-west-2"
}

resource "aws_iam_role" "my-config" {
  name = "config-example"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "config.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "my-config" {
  role       = "${aws_iam_role.my-config.name}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSConfigRole"
}

resource "aws_s3_bucket" "my-config" {
  bucket = "config-bucket-for-my-test-project"
  acl    = "private"

  versioning {
    enabled = true
  }

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_config_configuration_recorder" "my-config" {
  name     = "config-example"
  role_arn = "${aws_iam_role.my-config.arn}"

  recording_group {
    all_supported                 = true
    include_global_resource_types = true
  }
}

resource "aws_config_delivery_channel" "my-config" {
  name           = "config-example"
  s3_bucket_name = "${aws_s3_bucket.my-config.bucket}"

  depends_on = ["aws_config_configuration_recorder.my-config"]
}

resource "aws_config_configuration_recorder_status" "config" {
  name       = "${aws_config_configuration_recorder.my-config.name}"
  is_enabled = true

  depends_on = ["aws_config_delivery_channel.my-config"]
}

resource "aws_config_config_rule" "instances_in_vpc" {
  name = "instances_in_vpc"

  source {
    owner             = "AWS"
    source_identifier = "INSTANCES_IN_VPC"
  }

  depends_on = ["aws_config_configuration_recorder.my-config"]
}

resource "aws_config_config_rule" "cloud_trail_enabled" {
  name = "cloud_trail_enabled"

  source {
    owner             = "AWS"
    source_identifier = "CLOUD_TRAIL_ENABLED"
  }

  input_parameters = <<EOF
{
  "s3BucketName": "cloudwatch-to-s3-logs"
}
EOF

  depends_on = ["aws_config_configuration_recorder.my-config"]
}

resource "aws_config_config_rule" "s3_bucket_versioning_enabled" {
  name = "s3_bucket_versioning_enabled"

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_VERSIONING_ENABLED"
  }

  depends_on = ["aws_config_configuration_recorder.my-config"]
}

resource "aws_config_config_rule" "desired_instance_type" {
  name = "desired_instance_type"

  "source" {
    owner             = "AWS"
    source_identifier = "DESIRED_INSTANCE_TYPE"
  }

  input_parameters = <<EOF
{
  "alarmActionRequired" : "t2.micro"
}
EOF

  depends_on = ["aws_config_configuration_recorder.my-config"]
}

GitHub Link

https://github.com/100daysofdevops/21_days_of_aws_using_terraform/tree/master/config

Please join me on this journey by following any of the links below.

21 Days of AWS using Terraform – Day 15 – Introduction to KMS using Terraform

Welcome to Day 15 of 21 Days of AWS using Terraform. The topic for today is Introduction to KMS using Terraform.

What is KMS?

AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data.

Advantage

Concepts

  • KMS stores Customer Master Keys(CMK) which is a logical representation of a key.
  • Key can be generated by KMS or imported.
  • The encrypted data keys are stored with the data
  • CMK never leaves KMS and never leaves a region
  • A CMK can encrypt or decrypt data up to 4 KB in size (see the sketch below).
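
As a quick illustration of that 4 KB limit in Terraform, the aws_kms_ciphertext data source encrypts a small piece of plaintext directly under a CMK (a sketch; it assumes the my-kms-key resource defined in Step 2 below, and the plaintext value is purely illustrative):

data "aws_kms_ciphertext" "db_password" {
  key_id    = "${aws_kms_key.my-kms-key.key_id}"
  plaintext = "super-secret-password" # must be under 4 KB
}

output "encrypted_db_password" {
  value = "${data.aws_kms_ciphertext.db_password.ciphertext_blob}"
}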

How KMS Encrypt Data

Reference: https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#enveloping

  • We start with the plaintext, then use a data key along with an encryption algorithm to produce the encrypted data.
  • The encrypted data is finally stored somewhere, which can be anything (e.g., EBS, EFS, S3...).
  • KMS then takes the data key, encrypts it with the master key (again using an encryption algorithm), and the result is an encrypted data key that is stored alongside the data.

KMS in action

# To access KMS
Go to AWS Console --> Security, Identity, & Compliance --> Key Management Service --> Create a key

NOTE: YAY!!! Key Management Service now has its own home, but you can still access it the old way, i.e.

AWS Console --> IAM --> Encryption keys

Step1:

* Alias: Enter an alias and description for the key (e.g., Alias: mydemotestkey, and similarly a Description)
* Key material origin: Choose KMS (External: you can bring your own key (BYOK); CloudHSM: more about it later)

Step2:

* Adding a tag is optional but it's a good practice

resource "aws_kms_key" "my-kms-key" {
  description         = "My KMS Keys for Data Encryption"
  enable_key_rotation = true

  tags = {
    Name = "my-kms-keys"
  }
}

resource "aws_kms_alias" "smc-kms-alias" {
  target_key_id = "${aws_kms_key.my-kms-key.key_id}"
  name          = "alias/my-terraform-final-encryption-key"
}

Step3:

  • Choose the users and roles who can administer this key.
  • This is critical, as an administrator has the right to delete these keys, and after that your data will become unusable

Step4: Define key usage permissions; select the IAM users and roles that can use the CMK to encrypt and decrypt data with the AWS KMS API

Step5: Review and edit key policy

Key Deletion

  • You can't delete a key immediately; instead, you need to schedule its deletion (see the sketch below)
  • The waiting period is 7-30 days; this is to make sure you understand that deleting a key makes all data encrypted under that key unrecoverable
  • AWS managed CMKs: you cannot manage key rotation for AWS managed CMKs. AWS KMS automatically rotates AWS managed keys every three years (1095 days).
  • When you enable automatic key rotation, AWS KMS rotates the CMK 365 days after the enable date and every 365 days thereafter.
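
In Terraform, that scheduling window maps to the deletion_window_in_days argument on aws_kms_key. A small sketch (the resource name and the 7-day window are illustrative):

resource "aws_kms_key" "short_lived_key" {
  description             = "Key that can be scheduled for deletion after 7 days"
  deletion_window_in_days = 7    # allowed range is 7-30 days
  enable_key_rotation     = true # customer-managed CMK rotation: every 365 days
}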

The key policy below (the policy argument on the aws_kms_key resource) grants the IAM user passed in via var.user_arn administrative and usage permissions on the key:

  policy = <<EOF
{
    "Id": "key-consolepolicy-3",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Enable IAM User Permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "${var.user_arn}"
            },
            "Action": "kms:*",
            "Resource": "*"
        },
        {
            "Sid": "Allow access for Key Administrators",
            "Effect": "Allow",
            "Principal": {
                "AWS": "${var.user_arn}"
            },
            "Action": [
                "kms:Create*",
                "kms:Describe*",
                "kms:Enable*",
                "kms:List*",
                "kms:Put*",
                "kms:Update*",
                "kms:Revoke*",
                "kms:Disable*",
                "kms:Get*",
                "kms:Delete*",
                "kms:TagResource",
                "kms:UntagResource",
                "kms:ScheduleKeyDeletion",
                "kms:CancelKeyDeletion"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow use of the key",
            "Effect": "Allow",
            "Principal": {
                "AWS": "${var.user_arn}"
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow attachment of persistent resources",
            "Effect": "Allow",
            "Principal": {
                "AWS": "${var.user_arn}"
            },
            "Action": [
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:RevokeGrant"
            ],
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": "true"
                }
            }
        }
    ]
}
EOF
}
variables.tf

variable "user_arn" {}

Final KMS Module will look like this

module "kms" {
  source   = "./kms"
  user_arn = "${module.iam.aws_iam_user}"
}

KMS Limits

Reference: Limits – AWS Key Management Service (docs.aws.amazon.com). Learn the default maximums that apply to AWS Key Management Service (AWS KMS) resources such as keys, aliases, and grants.

NOTE: When a request is throttled, AWS KMS returns a ThrottlingException error.

GitHub Link

https://github.com/100daysofdevops/21_days_of_aws_using_terraform/tree/master/kms

Please join me on this journey by following any of the links below.

21 Days of AWS using Terraform – Day 14 – Introduction to Transit Gateway using Terraform

Welcome to Day 14 of 21 Days of AWS using Terraform. The topic for today is Introduction to Transit Gateway using Terraform

During re:Invent 2018, AWS released a bunch of products

https://aws.amazon.com/about-aws/whats-new/2018/11/introducing-aws-transit-gateway/

But if you ask me, the one product that stands out among these is Transit Gateway.

What is Transit Gateway?

AWS Transit Gateway is a service that enables customers to connect their Amazon Virtual Private Clouds (VPCs) and their on-premises networks to a single gateway

Features of Transit Gateway

  • Connect thousands of Amazon Virtual Private Clouds (VPCs) and on-premises networks using a single gateway
  • Hub and Spoke Network Topology
  • Scales up to 5000 VPCs
  • Spread traffic over many VPN connections and scale horizontally (e.g. two VPN connections combined give 2.5 Gbps (1.25 Gbps + 1.25 Gbps))
  • Max throughput AWS has tested so far is 50 Gbps
  • Direct Connect is not yet supported (it is on the AWS 2019 roadmap)
  • Under the hood, AWS uses a technology called Hyperplane to make this happen (https://twitter.com/awsreinvent/status/935740155499040768?lang=en)
  • Each Transit Gateway route table supports 10,000 routes (the VPC route table default limit is still 100)
  • Difference between Transit VPC vs Transit Gateway
  • Transit Gateway is available under the VPC console

Step1: Build TGW

Go to https://us-west-2.console.aws.amazon.com/vpc → Transit Gateways → Transit Gateways → Create Transit Gateway
* Name tag and Description: Give your Transit Gateway a meaningful name and description
* Amazon side ASN: Autonomous System Number (ASN) of your Transit Gateway. You can use an existing ASN assigned to your network. If you don't have one, you can use a private ASN in the 64512-65534 or 4200000000-4294967294 range.
* DNS Support: Enable Domain Name System resolution for VPCs attached to this Transit Gateway (if you have multiple VPCs, this enables hostname resolution between them)
* VPN ECMP support: Equal Cost Multipath (ECMP) routing support between VPN connections attached to this Transit Gateway. If connections advertise the same CIDRs, the traffic is distributed equally between them.
* Default route table association: Automatically associate Transit Gateway attachments with this Transit Gateway's default route table.
* Default route table propagation: Automatically propagate Transit Gateway attachments to this Transit Gateway's default route table.
* Auto accept shared attachments: Automatically accept cross-account attachments that are attached to this Transit Gateway. Useful if you are planning to spread your TGW across multiple accounts.

Terraform Code

resource "aws_ec2_transit_gateway" "my-test-tgw" {
  description                     = "my-test-transit-gateway"
  amazon_side_asn                 = 64512
  auto_accept_shared_attachments  = "disable"
  default_route_table_association = "enable"
  default_route_table_propagation = "enable"
  dns_support                     = "enable"
  vpn_ecmp_support                = "enable"

  tags = {
    Name = "my-test-transit-gateway"
  }
}

Step2: Attach your VPC

Go to Transit Gateways → Transit Gateway Attachments → Create Transit Gateway Attachment
* Select your TGW created in Step1
* Give your VPC attachment a name
* Enable DNS support
* Select your first VPC
* Perform the same steps for VPC2

NOTE: When you attach a VPC or create a VPN connection on a transit gateway, the attachment is associated with the default route table of the transit gateway.

resource "aws_ec2_transit_gateway_vpc_attachment" "my-test-transit-gateway-attachment" {
  transit_gateway_id = "${aws_ec2_transit_gateway.my-test-tgw.id}"
  vpc_id             = "${var.vpc_id}"
  dns_support        = "enable"

  subnet_ids = [
    "${var.public_subnet1}",
    "${var.public_subnet2}",
  ]

  tags = {
    Name = "my-test-tgw-vpc-attachment"
  }
}

Step3: Update Route Table

  • If you click on the Transit Gateway Route Table, you will see we have the path from the Transit Gateway to our VPC
  • We need a return path (i.e. from our VPC to the TGW): the VPC1 route table needs to be updated to point to the TGW in order to reach the second VPC, and vice-versa (i.e. 10.0.0.0/16 to the TGW on the second VPC)
# Adding Route for Transit Gateway

resource "aws_route" "my-tgw-route" {
  route_table_id         = "${aws_route_table.public_route.id}"
  destination_cidr_block = "0.0.0.0/0"
  transit_gateway_id     = "${var.transit_gateway}"
}

NOTE: In the AWS Console I am using 172.16.0.0/16, but in the Terraform code I am using 0.0.0.0/0
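
If you would rather have the Terraform code match the console example, you can route just the peer VPC's CIDR to the transit gateway instead of the default route; a hedged variant of the resource above (the resource name is illustrative):

resource "aws_route" "my-tgw-route-vpc2" {
  route_table_id         = "${aws_route_table.public_route.id}"
  destination_cidr_block = "172.16.0.0/16"
  transit_gateway_id     = "${var.transit_gateway}"
}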

Some Key Terms

  • Associations — each attachment is associated with exactly one route table, while each route table can be associated with zero or more attachments.
  • Route propagation — a VPC or VPN connection can dynamically propagate routes to a transit gateway route table (a sketch of managing association and propagation explicitly follows below).
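
If you disable the default route table association/propagation on the transit gateway, you can manage both explicitly with dedicated resources. A minimal sketch (the custom route table and resource names below are illustrative, not part of the GitHub code):

resource "aws_ec2_transit_gateway_route_table" "custom" {
  transit_gateway_id = "${aws_ec2_transit_gateway.my-test-tgw.id}"
}

# Associate the VPC attachment with the custom route table
resource "aws_ec2_transit_gateway_route_table_association" "vpc1" {
  transit_gateway_attachment_id  = "${aws_ec2_transit_gateway_vpc_attachment.my-test-transit-gateway-attachment.id}"
  transit_gateway_route_table_id = "${aws_ec2_transit_gateway_route_table.custom.id}"
}

# Propagate the attachment's routes into the same route table
resource "aws_ec2_transit_gateway_route_table_propagation" "vpc1" {
  transit_gateway_attachment_id  = "${aws_ec2_transit_gateway_vpc_attachment.my-test-transit-gateway-attachment.id}"
  transit_gateway_route_table_id = "${aws_ec2_transit_gateway_route_table.custom.id}"
}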

GitHub Link: https://github.com/100daysofdevops/21_days_of_aws_using_terraform/tree/master/transit_gateway

21 Days of AWS using Terraform – Day 13- Introduction to NAT Gateway using Terraform

Welcome to Day 13 of 21 Days of AWS using Terraform. The topic for today is Introduction to NAT Gateway using Terraform

What is a NAT Gateway?

A NAT gateway enables instances in a private subnet to connect to the internet or other AWS services, but prevents the internet from initiating a connection with those instances.

How NAT works

  • A NAT device has an Elastic IP address and is connected to the internet through an internet gateway.
  • An instance in a private subnet sends its traffic to the NAT device, which routes the traffic to the internet gateway and routes any response back to the instance
  • NAT maps multiple private IPv4 addresses to a single public IPv4 address.

A NAT gateway doesn’t support IPv6 traffic; for that, you need to use an egress-only internet gateway.

NOTE: IPv6 traffic is separate from IPv4 traffic; your route tables must include separate routes for IPv6 traffic.
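
For completeness, a rough sketch of the egress-only internet gateway and its IPv6 default route, assuming the Day 2 VPC resource is aws_vpc.main (as used later in this post) and has IPv6 enabled; the resource names are illustrative:

resource "aws_egress_only_internet_gateway" "my-test-egress-gw" {
  vpc_id = "${aws_vpc.main.id}"
}

# IPv6 default route for the VPC's default (private) route table via the egress-only gateway
resource "aws_route" "private-ipv6-default" {
  route_table_id              = "${aws_vpc.main.default_route_table_id}"
  destination_ipv6_cidr_block = "::/0"
  egress_only_gateway_id      = "${aws_egress_only_internet_gateway.my-test-egress-gw.id}"
}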

For more info

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html

To create a NAT gateway

Go to VPC Dashboard → NAT Gateways → Create NAT gateways

  • Make sure you select the public subnet in your custom VPC
  • For the NAT gateway to work, it needs an Elastic IP

NOTE: NAT Gateway creation will take 10–15 min

  • Once the NAT gateway is available, add it to your default Route table

Advantages of NAT Gateway

  • A NAT gateway is highly available within its Availability Zone, but you need one per Availability Zone (see the sketch after this list)
  • Can scale up to 45 Gbps
  • Managed by AWS
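
A common pattern for the per-AZ point above is to create one Elastic IP and one NAT gateway per public subnet with count; a rough sketch, assuming the public_subnet resource from Day 2 was created with count = 2 (the names and count value are illustrative):

resource "aws_eip" "nat" {
  count = 2
  vpc   = true
}

# One NAT gateway per public subnet / Availability Zone
resource "aws_nat_gateway" "per_az" {
  count         = 2
  allocation_id = "${element(aws_eip.nat.*.id, count.index)}"
  subnet_id     = "${element(aws_subnet.public_subnet.*.id, count.index)}"
}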

Limitations of NAT Gateway

  • You can associate exactly one Elastic IP address with a NAT gateway. You cannot disassociate an Elastic IP address from a NAT gateway after it’s created. To use a different Elastic IP address for your NAT gateway, you must create a new NAT gateway with the required address, update your route tables, and then delete the existing NAT gateway if it’s no longer required.
  • You cannot associate a security group with a NAT gateway. You can use security groups for your instances in the private subnets to control the traffic to and from those instances.
  • You can use a network ACL to control the traffic to and from the subnet in which the NAT gateway is located. The network ACL applies to the NAT gateway’s traffic

On Day 2 we wrote the AWS VPC code; now it's time to modify that code

  • As we did via the AWS UI, the first step is to create an Elastic IP
resource "aws_eip" "my-test-eip" {
  vpc = true
}

vpc – boolean indicating whether the EIP is in a VPC or not.

  • Let’s create the NAT gateway
resource "aws_nat_gateway" "my-test-nat-gateway" {
  allocation_id = "${aws_eip.my-test-eip.id}"
  subnet_id = "${aws_subnet.public_subnet.1.id}"
}
  • allocation_id – The Allocation ID of the Elastic IP address for the gateway.
  • subnet_id – The Subnet ID of the subnet in which to place the gateway.
  • Once the NAT gateway is available, add it to your default Route table
resource "aws_default_route_table" "private_route" {
  default_route_table_id = "${aws_vpc.main.default_route_table_id}"
  route {
    nat_gateway_id = "${aws_nat_gateway.my-test-nat-gateway.id}"
    cidr_block = "0.0.0.0/0"
  }

  tags = {
    Name = "my-private-route-table"
  }
}

GitHub Link

https://github.com/100daysofdevops/21_days_of_aws_using_terraform/tree/master/vpc

21 Days of AWS using Terraform – Day 12- Introduction to CloudTrail using Terraform

Welcome to Day 12 of 21 Days of AWS using Terraform. The topic for today is Introduction to CloudTrail using Terraform

What Is AWS CloudTrail?

AWS CloudTrail is an AWS service that helps you enable governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs.

  • It’s enabled when the account is created (for 7 days)
  • When activity occurs in your AWS account, that activity is recorded in a CloudTrail event.
  • Entries can be viewed in Event History (for 90 days)
  • Event logs can be aggregated across accounts and regions.

NOTE

  • Historically, CloudTrail was not enabled by default
  • It won’t log events like SSH/RDP, only API calls.

CloudTrail Dashboard (Most Recent Events)

Reference: https://aws.amazon.com/blogs/aws/new-aws-api-activity-lookup-in-cloudtrail/

For an ongoing record of activity and events in your AWS account, create a trail. A trail allows us to send logs to an S3 bucket and can be single or multi-region.

To create a trail

Go to AWS Console → Management & Governance → CloudTrail → Trails → Create trail
* Trail name: Give your trail a name
* Apply trail to all regions: You have the option to choose all regions or a specific region.
* Read/Write events: You have the option to filter the events
* Data events: Data events provide insights into the resource operations performed on or within a resource (see the event_selector sketch after the Terraform code below)
  S3: You can record S3 object-level API activity (for example, GetObject and PutObject) for individual buckets, or for all current and future buckets in your AWS account
  Lambda: You can record Invoke API operations for individual functions, or for all current and future functions in your AWS account.
* Storage location: Where to store the logs; we can create a new bucket or use an existing bucket
* Log file prefix: We have the option to provide a prefix, which makes it easier to browse the log files
* Encrypt log files with SSE-KMS: Default is SSE-S3 server-side encryption (AES-256), or we can use KMS
* Enable log file validation: To determine whether a log file was modified, deleted, or unchanged after CloudTrail delivered it, you can use CloudTrail log file integrity validation
* Send SNS notification for every log file delivery: SNS notification of log file delivery allows us to take action immediately

NOTE:

  • There is always a delay between when an event occurs and when it is displayed on the CloudTrail dashboard. On top of that, there is an additional delay before the log is transferred to the S3 bucket.
  • Log files are delivered about every 5 minutes (when there is activity), with up to a 15-minute delay
  • All CloudTrail events are JSON structures; when you view any event you will see fields such as eventTime, eventSource, eventName, and userIdentity

Terraform Code

resource "aws_cloudtrail" "my-demo-cloudtrail" {
  name                          = "${var.cloudtrail_name}"
  s3_bucket_name                = "${aws_s3_bucket.s3_bucket_name.id}"
  include_global_service_events = true
  is_multi_region_trail         = true
  enable_log_file_validation    = true
}
  • name – Specifies the name of the trail.
  • s3_bucket_name – Specifies the name of the S3 bucket designated for publishing log files.
  • include_global_service_events – Specifies whether the trail is publishing events from global services such as IAM to the log files. Defaults to true.
  • is_multi_region_trail – Specifies whether the trail is created in the current region or in all regions. Defaults to false.
  • enable_log_file_validation – Specifies whether log file integrity validation is enabled. Defaults to false.
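
The Data events option from the trail-creation steps above maps to the event_selector block of the same resource. A hedged sketch of the trail extended to record S3 object-level activity for all buckets (this would replace the definition above rather than sit alongside it; the values are illustrative):

resource "aws_cloudtrail" "my-demo-cloudtrail" {
  name                          = "${var.cloudtrail_name}"
  s3_bucket_name                = "${aws_s3_bucket.s3_bucket_name.id}"
  include_global_service_events = true
  is_multi_region_trail         = true
  enable_log_file_validation    = true

  # Record S3 object-level (data) events in addition to management events
  event_selector {
    read_write_type           = "All"
    include_management_events = true

    data_resource {
      type   = "AWS::S3::Object"
      values = ["arn:aws:s3"] # per the AWS data-event docs, logs all current and future S3 buckets
    }
  }
}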

Then we have the S3 bucket and its bucket policy, which allow CloudTrail to deliver logs to the bucket.

resource "aws_s3_bucket" "s3_bucket_name" {
  bucket = "${var.s3_bucket_name}"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
{
   "Sid": "AWSCloudTrailAclCheck",
   "Effect": "Allow",
   "Principal": {
      "Service": "cloudtrail.amazonaws.com"
},
 "Action": "s3:GetBucketAcl",
 "Resource": "arn:aws:s3:::s3-cloudtrail-bucket-with-terraform-code"
},
{
"Sid": "AWSCloudTrailWrite",
"Effect": "Allow",
"Principal": {
  "Service": "cloudtrail.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::s3-cloudtrail-bucket-with-terraform-code/*",
"Condition": {
  "StringEquals": {
     "s3:x-amz-acl": "bucket-owner-full-control"
  }
}
  }
  ]
  }
  POLICY
}

GitHub Link

https://github.com/100daysofdevops/21_days_of_aws_using_terraform/tree/master/cloudtrail

Looking forward to you guys joining this journey

In addition to that, I am going to host 5 meetups whose aim is to build the below architecture.

  • Meetup: https://www.meetup.com/100daysofdevops
  • Day 1 (Nov 10): Introduction to Terraform https://www.meetup.com/100daysofdevops/events/266192294/
  • Day 2 (Nov 16): Building VPC using Terraform
  • Day 3 (Nov 17): Creating EC2 Instance inside this VPC using Terraform
  • Day 4 (Nov 23): Adding Application Load Balancer and Auto-Scaling to the EC2 instance created on Day 3
  • Day 5 (Nov 24): Add Backend MySQL Database and CloudWatch Alarm using Terraform