Python OS Module

  • The Python OS module provides an interface to the underlying operating system that Python is running on, whether that is Windows, macOS, or Linux.
  • It helps us automate tasks like creating or removing directories.

To get the current working directory

>>> import os
>>> os.getcwd()
'/home/cloud_user'

To change directory

>>> os.chdir("/tmp")

To verify it

>>> os.getcwd()
'/tmp'

To list directories

>>> os.listdir()
['systemd-private-929d26066ab64d9892496c0b5e05dbb6-systemd-timesyncd.service-JrC3MC', '.X1-lock', '.Test-unix', 'pulse-PKdhtXMmr18n', 'systemd-private-929d26066ab64d9892496c0b5e05dbb6-systemd-resolved.service-DXvPLp', '.X11-unix', '.XIM-unix', '.ICE-unix', 'ssh-54k5Trn8h27F', '.font-unix', 'systemd-private-929d26066ab64d9892496c0b5e05dbb6-rtkit-daemon.service-FethN5', '.xfsm-ICE-MJ54F0']

Or you can pass the path as an argument

>>> os.listdir("/home/cloud_user")
['.profile', 'rest.yml', 'openssl', '.bash_history', '.kube', 'alpine.yml', '.ICEauthority', 'Documents', 'kubernetes-metrics-server', '.gvfs', 'Downloads', 'Music', 'kubernetes', '.bash_logout', 'regex', 'Public', '.viminfo', '.gnupg', 'Desktop', 'Videos', '.Xauthority', '.vnc', '.python_history', '.sudo_as_admin_successful', 'metrics-server', '.local', '.dbus', 'Pictures', '.mozilla', 'Templates', '.cache', '.config', 'multicontainer.yml', '.ssh', 'pod.yml', '.lesshst', '.bashrc']

To make directories

>>> os.mkdir("/tmp/testdir")

To create directories recursively

>>> os.makedirs("/tmp/a/b")

If you try to create nested directories using os.mkdir, you will get this error

>>> os.mkdir("/tmp/c/d")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/c/d'
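As a side note, os.makedirs also accepts an exist_ok flag (Python 3.2+) that suppresses the error when the target directory already exists. A minimal sketch, using a temporary scratch directory rather than /tmp/c/d:

```python
import os
import tempfile

base = tempfile.mkdtemp()              # throwaway directory for the demo
nested = os.path.join(base, "c", "d")

# makedirs creates every missing intermediate directory;
# exist_ok=True makes a repeat call a harmless no-op.
os.makedirs(nested, exist_ok=True)
os.makedirs(nested, exist_ok=True)     # no FileNotFoundError, no FileExistsError

print(os.path.isdir(nested))           # True
```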

To remove a file

>>> os.remove("pod.yml")

To remove a directory

>>> os.rmdir("/tmp/e")

To remove empty directories recursively (it prunes empty parent directories as well)

>>> os.removedirs("/tmp/c/d")

To rename the file or directory

>>> os.rename("/tmp/xyz","/tmp/abc")

To get the environment variable information

>>> os.environ
environ({'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'LANG': 'C.UTF-8', 'XDG_SESSION_ID': '5', 'HUSHLOGIN': 'FALSE', 'USER': 'cloud_user', 'PWD': '/home/cloud_user', 'HOME': '/home/cloud_user', 'XDG_DATA_DIRS': '/usr/local/share:/usr/share:/var/lib/snapd/desktop', 'MAIL': '/var/mail/cloud_user', 'REMOTEHOST': 'localhost', 'SHELL': '/bin/bash', 'TERM': 'xterm-256color', 'SHLVL': '1', 'LOGNAME': 'cloud_user', 'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1001/bus', 
'XDG_RUNTIME_DIR': '/run/user/1001', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'LESSOPEN': '| /usr/bin/lesspipe %s', '_': '/usr/bin/python3'})
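To read a single variable, os.environ behaves like a dictionary: indexing raises KeyError for a missing key, while .get lets you supply a fallback. A small sketch (the variable names here are just for illustration):

```python
import os

# HOME is set in most Unix login sessions; fall back to /tmp if it isn't.
home = os.environ.get("HOME", "/tmp")
print(home)

# A variable that is (almost certainly) unset returns the supplied default.
missing = os.environ.get("NO_SUCH_VARIABLE", "unset")
print(missing)
```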

To get the userid

>>> os.getuid()
1001

To get the groupid

>>> os.getgid()
1001

To get the current Python process ID

>>> os.getpid()
11377

To execute the shell command

>>> os.system("clear")

NOTE: If you try to save the output of the command in a variable, it will not work with the os module; it only stores the return code, e.g. 0 for a successfully executed command (as in this case) or non-zero for an unsuccessful command. The output itself goes straight to the terminal.

>>> rt=os.system("ls")
Desktop  Documents  Downloads  Music  Pictures	Public	Templates  Videos  alpine.yml  kubernetes  kubernetes-metrics-server  metrics-server  multicontainer.yml  openssl  ospath.py  regex  rest.yml
>>> print(rt)
0
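If you actually need the command's output in a variable, the standard-library subprocess module is the usual tool. A minimal sketch, using echo as a stand-in command:

```python
import subprocess

# subprocess.run captures stdout when capture_output=True (Python 3.7+);
# text=True decodes the bytes so result.stdout is a regular string.
result = subprocess.run(["echo", "hello"], capture_output=True, text=True)

print(result.returncode)        # exit status, like os.system's return value
print(result.stdout.strip())    # the command's actual output
```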

OS Path

  • os.path is a submodule of the os module
  • The os.path module is used to work with pathnames

Returns the final component of a pathname

>>> os.path.basename("/etc/ldap/ldap.conf")
'ldap.conf'

Returns the directory component of a pathname

>>> os.path.dirname("/etc/ldap/ldap.conf")
'/etc/ldap'

Join two or more pathname components

>>> os.path.join("/home","abc")
'/home/abc'

Let’s try to do the same without os.path.join,

>>> "home" + "abc"
'homeabc'

As you can see, os.path.join intelligently adds the separator between the two paths based on the operating system.

Split a pathname. Returns tuple “(head, tail)” where “tail” is everything after the final slash.

>>> os.path.split("/home/abc")
('/home', 'abc')

Return the size of a file

>>> os.path.getsize("/etc/ldap/ldap.conf")
332

NOTE: It returns the size in bytes.

Test whether a path exists. Returns False for broken symbolic links

>>> os.path.exists("/etc/resolv.conf")
True

Test whether a path is a regular file

>>> os.path.isfile("/etc/resolv.conf")
True

Return true if the pathname refers to an existing directory

>>> os.path.isdir("/etc")
True

This is especially helpful for checking whether a file exists before performing any operation on it.

import os
path="/etc/resolv.conf"

if os.path.exists(path):
    print("File exists")
else:
    print("File doesn't exist")

NOTE: I showed the above example using os.path.exists, but if we are specifically looking for a file, we can use os.path.isfile.

Test whether a path is a symbolic link

>>> os.path.islink("/etc/localtime")
True

OS Walk

  • It generates the file names in a directory tree by walking the tree either top-down or bottom-up.
>>> import os
>>> os.walk("/home/prashant")
<generator object walk at 0x7f87f75545c8>

NOTE: It returns a generator object. To convert it into a list

>>> list(os.walk("/home/prashant"))
[('/home/prashant', [], [])]
  • From the given path, Python generates a list of tuples, and each tuple consists of three values
  • The first value is the path we passed in: '/home/prashant'
  • The second value is the list of directories in that path
  • The third value is the list of files in that path

Let's say in the given path I create some files and directories

 sudo tree
.
├── ashish
│   └── abhya
├── pankaj
│   └── newtestfile
└── test1

You will see the output like this

>>> list(os.walk("/home/prashant"))
[('/home/prashant', ['ashish', 'pankaj'], ['test1']), ('/home/prashant/ashish', [], ['abhya']), ('/home/prashant/pankaj', [], ['newtestfile'])]
  • If we run a for loop over it, we will get the same output
>>> for path in os.walk("/home/prashant"):
...     print(path)
... 
('/home/prashant', ['ashish', 'pankaj'], ['test1'])
('/home/prashant/ashish', [], ['abhya'])
('/home/prashant/pankaj', [], ['newtestfile'])

As you can see in the above output, we are getting tuples back. As we know, we can unpack a tuple

>>> for rootpath, dirpath, filepath in os.walk("/home/prashant"):
...     print(rootpath)
... 
/home/prashant
/home/prashant/ashish
/home/prashant/pankaj

In the above example, I am only getting the root path back. If we want the files in each path

>>> for rootpath, dirpath, filepath in os.walk("/home/prashant"):
...     print(rootpath, filepath)
... 
/home/prashant ['test1']
/home/prashant/ashish ['abhya']
/home/prashant/pankaj ['newtestfile']
  • An important scenario is where we want to join the top-level directory with each file
import os
path="/home/prashant"

for rootfile, dirname, filename in os.walk(path):
    for file in filename:
        print(os.path.join(rootfile, file))

and the output will simply be

$ python3 listfile.py 
/home/prashant/listfile.py
/home/prashant/test1
/home/prashant/ashish/abhya
/home/prashant/pankaj/newtestfile

We can also extend this concept to search for a particular file

import os
path="/etc"
file_search=input("Please enter the filename to search: ")
for rootfile, dirname, filename in os.walk(path):
    for file in filename:
        if file == file_search:  
            print(os.path.join(rootfile, file))

Please join me on my journey by following any of the links below

Certified Kubernetes Administrator (CKA) – Day 1

Pre-requisites

  • Docker
  • Basics of Kubernetes (Pods, Services, Deployments)
  • YAML
  • Linux Command line and setting up Linux Machine

Certified Kubernetes Administrator

  • The Kubernetes certification is hands-on
  • Exam Details
  • Exam cost: $300 (one free retake within the next 12 months)
  • Online exam
  • Exam duration: 3 hours (~24 questions)
  • The version of Kubernetes running in the exam environment: v1.16
  • Passing score: 74%
  • Resources allowed during the exam:

https://kubernetes.io/docs/

https://github.com/kubernetes/

https://kubernetes.io/blog/

Useful Resources

Certified Kubernetes Administrator: https://www.cncf.io/certification/cka/

Exam Curriculum (Topics): https://github.com/cncf/curriculum

Candidate Handbook: https://www.cncf.io/certification/candidate-handbook

Exam Tips: http://training.linuxfoundation.org/go//Important-Tips-CKA-CKAD

FAQ: https://training.linuxfoundation.org/wp-content/uploads/2019/11/CKA-CKAD-FAQ-11.22.19.pdf

Kubernetes Cluster Architecture

What is Kubernetes?

Official Definition

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation.

Kubernetes comes from the Greek word for helmsman, the person who steers a ship. This theme is reflected in the logo.

https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

https://kubernetes.io/docs/concepts/overview/components/

At a high level, we have two major components

  • Kubernetes Master: Manage, Plan, Schedule, and monitor nodes
  • Kubernetes Nodes: Host application as containers

Other Components:

ETCD

  • Think of it as a database that stores cluster information (nodes, Pods, configs, secrets, accounts, roles, bindings, ...) in key-value format.
  • Listens on port 2379 by default.
  • Every piece of information we see when we run a kubectl get command comes from the etcd server.
  • Every change we make to our cluster, e.g. deploying additional nodes, pods, or replica sets, is updated in the etcd server.
  • Only when it is updated in etcd is the change considered complete.
  • Deployed via kubeadm (kubectl get pods -n kube-system --> etcd-master).
  • kubectl exec etcd-master -n kube-system etcdctl get / --prefix --keys-only.
  • Uses the Raft consensus protocol.
$ kubectl get pods -n kube-system |grep -i etcd
etcd-plakhera11.example.com                      1/1     Running   4          9d

kube-scheduler

Identifies the right node on which to place a container, depending on the worker node's capacity, taints, and tolerations.

  • The scheduler is only responsible for deciding which Pod goes on which node; it doesn't actually place the Pod on the node. That is the job of the kubelet.
  • The way the scheduler places pods on nodes:

Filter Nodes: based on the requirements raised by the Pod

Rank Nodes: depending upon the CPU and memory available

$ kubectl get pods -n kube-system
NAME                                                  READY   STATUS    RESTARTS   AGE

kube-scheduler-plakhera11c.mylabserver.com            1/1     Running   5          11d
  • Or you can check the running process
$ ps aux|grep -i kube-scheduler
root      2019  0.7  0.4 139596 34428 ?        Ssl  01:42   0:22 kube-scheduler --address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true

Controller Manager

  • Manages the various controllers in Kubernetes

1: Replication Controller: monitors the status of ReplicaSets to make sure the desired number of pods is always running in the set. If a pod dies, it creates another one.

2: Node Controller:

  • It monitors the status of nodes every 5s
  • Node monitor grace period = 40s
  • Pod eviction timeout = 5m
$ kubectl get pods -n kube-system
NAME                                                  READY   STATUS    RESTARTS   AGE
kube-controller-manager-plakhera11c.mylabserver.com   1/1     Running   5          11d
  • To see the running process
$ ps aux|grep -i kube-controller-manager
root      2095  2.3  1.1 205516 93028 ?        Ssl  01:42   0:56 kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --node-cidr-mask-size=24 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --use-service-account-credentials=true
cloud_u+  2263  0.0  0.0  14988  2652 pts/1    R+   02:22   0:00 grep --color=auto -i kube-controller-manager

Kube ApiServer

  • Responsible for how these components talk to each other. It exposes the Kubernetes API, which external users use to perform cluster operations.
  • When we run any kubectl command(eg: kubectl get nodes) behind the scene
kubectl --> kube-apiserver(authenticate and validate the request) --> ETCD Cluster(retrieve data) --> Response back with the requested data
  • The kube-apiserver is the only component that interacts directly with the etcd datastore.
  • If installed via kubeadm(kubectl get pods -n kube-system)
$ kubectl get pods -n kube-system
NAME                                                  READY   STATUS    RESTARTS   AGE
kube-apiserver-plakhera11.example.com            1/1     Running   5          11d
  • OR you can check the process
$ ps aux|grep -i kube-api
root      2125  2.9  3.2 446884 261124 ?       Ssl  01:42   0:58 kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.31.99.206 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
  • Container Runtime Engine: e.g. Docker or rkt (Rocket)

Kubelet

  • An agent that runs on each node in the cluster. It listens for instructions from the kube-apiserver and deploys or destroys pods on the node. The kube-apiserver periodically fetches status reports from the kubelet to monitor the state of the node and its containers. It also registers the node with the Kubernetes cluster.

NOTE: kubeadm doesn't deploy the kubelet; it must be installed separately on each node.

To check the status of kubelet agent

$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2020-01-27 01:41:58 UTC; 54min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 785 (kubelet)
    Tasks: 17 (limit: 2318)
   CGroup: /system.slice/kubelet.service
           └─785 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --n

OR

$ ps aux|grep -i kubelet
 root       785  2.3  4.2 1345380 85736 ?       Ssl  01:41   1:17 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/libkubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --resolv-conf=/run/systemd/resolve/resolv.conf

Kube-proxy

  • Think of it as the Pod network: how Pods in a cluster communicate with each other.
  • It runs on each node and implements local iptables or IPVS rules to handle routing and load balancing of traffic on the Pod network.
$ kubectl get pods -n kube-system
NAME                                                  READY   STATUS    RESTARTS   AGE
kube-proxy-4lvxx                                      1/1     Running   4          11d
kube-proxy-7w6p4                                      1/1     Running   4          11d
kube-proxy-tfrwv                                      1/1     Running   5          11d

POD

  • Containers are encapsulated in a Kubernetes object known as a Pod. The Pod itself doesn't actually run anything; it's just a sandbox for hosting containers.
  • In other words, it provides a shared execution environment: a set of resources shared by every container that is part of the Pod (e.g. IP address, ports, hostname, sockets, memory, volumes, etc.)
  • A Pod is a single instance of an application; it's the smallest object we can create in Kubernetes. You cannot run a container directly on a Kubernetes cluster: containers must always run inside Pods.
  • To see the list of pods
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7cdbd8cdc9-tzbk6   1/1     Running   1          3d22h

Creating a POD using YAML file

  • Kubernetes uses a YAML file as input to create objects like Pods, ReplicaSets, Deployments, etc.
  • This YAML file is then POSTed to the API server.
  • The API server examines the file and writes it to the etcd store, and then the scheduler deploys it to a healthy node with enough available resources.
  • A YAML file always contains these four top-level fields
apiVersion:
kind:
metadata:
spec:
  • apiVersion: the version of the Kubernetes API we use to create the object, which depends on the type of object we are trying to create.
  • Think of the version field as defining the schema; newer is usually better.
Kind                    Version
Pod                     v1
Service                 v1
ReplicaSet/Deployment   apps/v1
  • kind: refers to the type of object we are trying to create, in this case Pod. Other possible values are Service, ReplicaSet, and Deployment.
  • metadata: data about the object
metadata:
  name: mytest-pod
  labels:
    app: mytestapp

In the above example, the name of our pod is mytest-pod (a string), and then we assign labels to it, which is a dictionary and can hold any key-value pairs.

  • spec: here we specify the containers or images we need in the Pod.
spec:
  containers:
    - name: mynginx-container
      image: nginx 

In the above example we give our container the name mynginx-container and ask it to pull the nginx image from Docker Hub.

Note the dash (-) in front of name, which indicates it's a list; we can specify multiple containers here.
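As a sketch, a two-container spec would look like this (the second container's name and image are illustrative, not part of the original example):

```yaml
spec:
  containers:
    - name: mynginx-container
      image: nginx
    # a hypothetical second container running in the same Pod
    - name: helper-container
      image: busybox
```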

Once we have the file ready we can deploy pod using

kubectl create -f <filename>.yml
  • Once the pod is created, you can verify it using
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
my-test-pod              1/1     Running   0          3m10s
  • You can add the --watch flag to the kubectl get pods command so that you can monitor it.
  • The -o wide flag gives a couple of extra columns (NOTE: I am showing a different example here)
$ kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP             NODE                          NOMINATED NODE   READINESS GATES
frontend-6cww6                      1/1     Running   8          10d   10.244.2.99    plakhera13c.mylabserver.com   <none>           <none>
frontend-dw5s4                      1/1     Running   8          10d   10.244.1.65    plakhera12c.mylabserver.com   <none>           <none>

  • The -o yaml flag returns a full copy of the Pod manifest from the cluster store. The output is divided into two parts
  1. The desired state (.spec section)
  2. The current observed state (.status section)
$ kubectl get pods my-test-pod -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-01-27T04:51:59Z"
  name: my-test-pod
  namespace: default
  resourceVersion: "302557"
  selfLink: /api/v1/namespaces/default/pods/my-test-pod
  uid: c0a48e63-40c0-11ea-8152-06e53f8e1eee
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: my-test-pod
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-s9cz4
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: plakhera13c.mylabserver.com
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-s9cz4
    secret:
      defaultMode: 420
      secretName: default-token-s9cz4
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-01-27T04:51:59Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-02-07T03:19:36Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-02-07T03:19:36Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-01-27T04:51:59Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://8b82b56654223e400c8f3f6709dd2636e4e0f6602eeb699e793ca57ee65942d8
    image: nginx:latest
    imageID: docker-pullable://nginx@sha256:ad5552c786f128e389a0263104ae39f3d3c7895579d45ae716f528185b36bc6f
    lastState:
      terminated:
        containerID: docker://74dd6acef0394c71285cd8cd0eb0afbe1fe0fd5b2c6c51c56b4c8ef3eae5bdcd
        exitCode: 0
        finishedAt: "2020-02-04T18:16:08Z"
        reason: Completed
        startedAt: "2020-02-04T14:17:31Z"
    name: my-test-pod
    ready: true
    restartCount: 9
    state:
      running:
        startedAt: "2020-02-07T03:19:35Z"
  hostIP: 172.31.98.89
  phase: Running
  podIP: 10.244.2.101
  qosClass: BestEffort
  startTime: "2020-01-27T04:51:59Z"
  • But my pod manifest is just 8 lines long, and the output is much longer; where does this extra information come from?
  • Two main sources
  1. The Kubernetes Pod object has far more properties than what we defined in the manifest. What we don't set explicitly is automatically expanded with default values by Kubernetes.
  2. As mentioned above, here we are getting the Pod's current observed state as well as its desired state.
  • Another great command to get detailed information about a Pod
$ kubectl describe pod my-test-pod 
Name:               my-test-pod
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               plakhera13c.mylabserver.com/172.31.98.89
Start Time:         Mon, 27 Jan 2020 04:51:59 +0000
Labels:             <none>
Annotations:        <none>
Status:             Running
IP:                 10.244.2.12
Containers:
  my-test-pod:
    Container ID:   docker://e83d9d4dc1a1f01de04a1ea4eae834c6978b1a607eb47950e2862c353dc6a22e
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:70821e443be75ea38bdf52a974fd2271babd5875b2b1964f05025981c75a6717
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 27 Jan 2020 04:52:01 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s9cz4 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-s9cz4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-s9cz4
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From                                  Message
  ----    ------     ----   ----                                  -------
  Normal  Scheduled  7m10s  default-scheduler                     Successfully assigned default/my-test-pod to plakhera13c.mylabserver.com
  Normal  Pulling    7m9s   kubelet, plakhera13c.mylabserver.com  pulling image "nginx"
  Normal  Pulled     7m8s   kubelet, plakhera13c.mylabserver.com  Successfully pulled image "nginx"
  Normal  Created    7m8s   kubelet, plakhera13c.mylabserver.com  Created container
  Normal  Started    7m8s   kubelet, plakhera13c.mylabserver.com  Started container
  • Complete pod.yml file
$ cat pod.yml 
apiVersion: v1
kind: Pod
metadata:
  name: my-test-pod


spec:
  containers:
  - name: my-test-pod
    image: nginx
  • To log in to a container running in a Pod, use kubectl exec
$ kubectl exec -it my-test-pod sh
# 
  • To delete a Pod
$ kubectl delete pod my-test-pod
pod "my-test-pod" deleted


21 Days of Happiness

We all know the type of stressful life we are living, and we always look for things that reduce our stress. The main idea behind the 21 Days of Happiness program is to share the key to quick stress relief; in my case, it's my dog (Prince).

Two main rules:
1: Try to share ways or post pictures that keep your stressors in check, every day for the next 21 days.
2: Post Picture on Instagram and tag #21DaysOfHappinessWithPrince

Please join me with my journey by following the below link

  • Instagram: #21DaysOfHappinessWithPrince

21 Days of Reading

"Books are a man's best friends." Books give us knowledge and also tell us valuable things about life, time, money, success, etc. This year I am coming up with a challenge where the idea is to read one book per month. The book can be of any category: tech, science fiction, novel, story, etc.

Two main rules
1: Try to read one chapter per day 
2: Tweet your progress every day with the #21daysofreading

Every day comes with its own challenges, so there are two cheat days for me (Monday and Wednesday). On these two days, I will try to meet the goal, but if I am not able to achieve it, I will not regret it.

Please join me with my journey by following the below link

  • Twitter: @21DaysOfReading

21 Days of Fitness

The main idea behind 21 Days of Fitness is to reach a goal of 10,000 steps per day and get you stronger, fitter, and healthier. 

We have hectic schedules filled with stress and to cope with the pressure we need to be fit. 10,000 steps a day is an easy activity for all fitness levels and ages.

Two main rules
1: Reach a goal of 10,000 steps per day
2: Tweet your progress every day with the #21daysoffitness

We all know that every day comes with its own challenges, so there are two cheat days for me (Monday and Wednesday). On these two days, I will try to meet the goal, but if I am not able to achieve it, I will not regret it.

Please join me with my journey by following the below link

  • Twitter: @21daysoffitness

The 2020 Year of DevOps Automation

Happy New Year, everyone. Last year, in 2019, I started three programs

100 Days of DevOps

21 Days of Docker – Day 21

21 Days of AWS using Terraform 

Thanks to everyone who participated in these programs; I learned a lot, and I believe you also got a chance to learn something from my blogs.

Starting from Feb15, I am starting two new programs

1: DevOps Automation

2: DevOps Certification

1: DevOps Automation: The main idea behind this is that we will meet every Saturday at 7 am PST (via meetup) and try to automate a DevOps-related process using the below tools

  • Python
  • Shell Scripting
  • Awk
  • Sed
  • Regular Expression

I put together a few of the topics here, but I am looking for your input. Please post in the comments section what other technologies you want to automate in 2020, and I will definitely add them to the spreadsheet

https://docs.google.com/spreadsheets/d/1E9UgzcC8RgBgWVzCHCtivbRRPp6bev-LeBy8OpvvTY8/edit?usp=sharing

The way this program differs from the earlier ones is that so far you have only heard me talking during meetups, but this time I need more input from your end. Please share your ideas, and if you want to speak about a particular topic, let me know and I will add you as a speaker for that week.

2: DevOps Certification: This is one of my most ambitious dream projects: to earn one or more certifications per month. The way this program works is that we will meet every weekend (Saturday/Sunday) for 90 minutes (8:15 am PST to 9:45 am PST). I will assign a topic to everyone for that week (or we can discuss how to distribute the topics depending on your expertise), and then you will speak about that particular topic. This way, you get more exposure, since you are explaining the topic to a number of people. Here is the list of certifications

https://docs.google.com/spreadsheets/d/1hQYRh5VLWXERfCF3TbFmskjfMTGVZVRxlRJh7im89Bc/edit?usp=sharing

NOTE: Nothing is set in stone and this list is always modifiable, so I am looking forward to your input

I know after reading all this you may think I am crazy, but I know it's doable; the only things required are

  • Discipline
  • Motivation


My road to AWS Certified Solution Architect

WARNING: Before reading this doc 🙂 🙂

1: As everyone needs to sign an NDA with AWS, I can't tell you the exact questions asked during the exam, nor do I have gigabytes of memory, but I can give you pointers on what to expect in the exam

2: As we all know, AWS infrastructure updates every day, so some of this material might not be relevant after a few days/weeks/months...

3: Please don't ask for any exam dumps or questions; that defeats the whole purpose of the exam.

Finally, after waiting for 5 years (the journey started way back in 2014 when I first logged into the AWS Console), yesterday I cleared my AWS Certified Solutions Architect exam.

Why did it take me so long to write my first AWS exam?

  • Let me first introduce myself: I am an ex-Red Hat Certified Architect (yes, that’s true, I cleared all 5 Red Hat exams). Red Hat exams are mostly hands-on and scenario-based: you need to deploy or create some server/application.
  • My initial assumption about the AWS exam was that it is mostly theoretical, i.e. they give you a series of questions (single/multiple choice) and you need to select one or more options, so you are not actually implementing anything. On top of that, to answer these questions you need to memorize a bunch of stuff.

So how did everything change?

  • Initially, when I started preparing, I realized there was a lot of stuff I had completely missed or not paid attention to, and later on I paid special attention to those areas. For example, S3 seems to be a pretty straightforward concept, but when I started exploring it (some of its features are mentioned below), I came to realize it is one of the most amazing services.

Exam Preparation

  • If you don’t have any experience with AWS services, I recommend starting with acloudguru. Please don’t miss AWS — 10,000 Foot Overview, which will give you a good overview of all the AWS services.

https://acloud.guru/learn/aws-certified-solutions-architect-associate

  • The second, and one of the most useful, resources is Linux Academy. One of the advantages of Linux Academy is its hands-on labs, which will give you enough hands-on experience for the certification.

https://linuxacademy.com/course/aws-certified-solutions-architect-2019-associate-level/

  • AWS re:Invent videos: I highly recommend going through these videos, as they will give you enough in-depth knowledge about each service.
  • AWS Documentation: The best documentation provided by any service provider. Don’t miss the FAQ for each service (especially EC2, S3, and VPC).
  • Exam Readiness

https://www.aws.training/Details/Curriculum?id=20685

https://docs.aws.amazon.com/index.html
  • Last but not least, hands-on experience: there is no substitute for it. As per the certification prerequisites:

https://aws.amazon.com/certification/certified-solutions-architect-associate/

Services

You must know these three services in order to clear this exam:

  • EC2
  • VPC
  • S3

Some services I underestimated, and I saw at least 2–4 questions related to each of them:

  • DynamoDB
  • Kinesis Firehose
  • CloudFront
  • SQS

I am not using any of these services in my day-to-day operations, which is why I didn’t pay much attention to them. Also, it’s time for AcloudGuru and Linux Academy to add some more in-depth content on these services.

Some surprise packages

  • AWS Athena
  • AWS Inspector

My Idea about the exam

  • As this is an associate-level exam, my initial perception was that I didn’t need to go in depth on all of the services, but the exam surprised me with some in-depth questions. So please make sure to read about and implement as much as possible of EC2/VPC/S3.

Let’s talk about the different services and the concepts you should know in order to clear this exam.

S3

  • This table is the key to understanding the different S3 storage classes. Make sure you understand:
* Durability and Availability of each class
* In which situations you would use a specific class

Cloud Storage Classes — Amazon Simple Storage Service (S3) — AWS
Explore S3 cloud storage offerings for different durability and availability levels, including Amazon S3 Standard, S3… (aws.amazon.com)
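As a quick cheat sheet for that table, the headline numbers can be jotted down like this (the figures are the commonly cited design targets for these classes, and the `pick_class` helper with its inputs is made up purely for illustration, so always verify against the current AWS docs):

```python
# Durability/availability cheat sheet for the main S3 storage classes.
# All classes are designed for 11 nines of durability; availability differs.
S3_CLASSES = {
    "STANDARD":    {"durability": "99.999999999%", "availability": "99.99%"},
    "STANDARD_IA": {"durability": "99.999999999%", "availability": "99.9%"},
    "ONEZONE_IA":  {"durability": "99.999999999%", "availability": "99.5%"},
    "GLACIER":     {"durability": "99.999999999%", "availability": "N/A (restore first)"},
}

def pick_class(needs_ms_access: bool, single_az_ok: bool) -> str:
    """Toy decision helper for the 'which class in which situation' question."""
    if not needs_ms_access:
        return "GLACIER"          # archival; retrieval takes minutes/hours
    return "ONEZONE_IA" if single_az_ok else "STANDARD_IA"

for name, targets in S3_CLASSES.items():
    print(name, targets["availability"])
```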

  • Understand S3 Object Lifecycle Management and when to move an object to S3 Standard-IA/S3 One Zone-IA vs Glacier

Object Lifecycle Management — Amazon Simple Storage Service
Use Amazon S3 to manage your objects so that they are stored cost effectively throughout their lifecycle. (docs.aws.amazon.com)
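A lifecycle rule of the kind described above can be sketched as the structure that boto3's `put_bucket_lifecycle_configuration` expects (the rule ID, prefix, and day thresholds are hypothetical; the actual API call is shown commented out so the snippet runs without AWS credentials):

```python
# Transition objects to cheaper classes as they age, then expire them.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-old-logs",        # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},   # hypothetical key prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# With AWS credentials configured, you would apply it like this:
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle_config)

print(lifecycle_config["Rules"][0]["Transitions"][1]["StorageClass"])  # GLACIER
```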

  • Difference between Server access logging vs Object Access logging

Server access logging vs Object-level logging
Currently after creating my S3 buckets under properties, I see Server access logging and object-level logging. What is… (acloud.guru)

  • Understand how encryption (both server-side and client-side) works for S3

Protecting Data Using Encryption — Amazon Simple Storage Service
Use data encryption to provide added security for your data objects stored in your buckets. (docs.aws.amazon.com)

  • Cross region replication in S3

Cross-Region Replication — Amazon Simple Storage Service
Set up and configure cross-region replication to allow automatic, asynchronous copying of objects across Amazon S3… (docs.aws.amazon.com)

  • Surprise package: Amazon S3 Inventory

Amazon S3 Inventory — Amazon Simple Storage Service
Describes Amazon S3 inventory and how to use it. (docs.aws.amazon.com)

Key takeaways:
* It provides CSV and Apache Optimized Row Columnar (ORC) output files that list objects and their corresponding metadata
* You can query S3 Inventory using standard SQL with Amazon Athena or Amazon Redshift Spectrum

AWS Storage Gateway

  • Difference between the different storage gateway types and which one to use in which situation (especially when they ask about migrating services from an on-premises data center to the AWS cloud and how to keep data in sync)

What Is AWS Storage Gateway? — AWS Storage Gateway
Find an introduction to AWS Storage Gateway, which connects your on-premises environment with cloud-based storage. (docs.aws.amazon.com)

AWS Snowball

  • Whenever they ask about petabytes (or even terabytes) of data, this is the best bet (again, migrating an on-premises data center to AWS)

EC2

  • Understand the difference between the different purchasing options (On-Demand, Reserved, Spot, and Dedicated)

Instance Purchasing Options — Amazon Elastic Compute Cloud
Amazon EC2 provides different purchasing options that enable you to optimize your costs. (docs.aws.amazon.com)

  • Pay special attention to Dedicated Hosts (look for keywords like compliance requirements / server-bound software licenses)
  • Understand the difference between Instance Store volumes vs EBS (look for the keyword shutdown: in the case of Instance Store volumes, your data will be wiped)

Understand the Instance Store and EBS
For data you want to retain longer, or if you want to encrypt the data, use Amazon Elastic Block Store (Amazon EBS)… (aws.amazon.com)

  • Security Groups (they are not going to ask about these directly; expect scenario-based questions, e.g. a multi-tier environment with a web frontend and MySQL as the database: which port do you open on the backend DB (MySQL)? As you only need a connection from the web frontend, you allow the web frontend’s security group as the source in the MySQL DB security group)
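That scenario can be sketched as the ingress rule you would attach to the MySQL tier's security group (the security-group IDs are hypothetical, and the boto3 call is commented out so the snippet runs offline):

```python
# Allow MySQL (port 3306) only from the web tier's security group.
# sg-web111 / sg-db222 are hypothetical IDs for illustration.
ingress_rule = {
    "GroupId": "sg-db222",  # the MySQL tier's security group
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            # Source is the web tier's security group, not an IP range
            "UserIdGroupPairs": [{"GroupId": "sg-web111"}],
        }
    ],
}

# With AWS credentials configured, you would apply it like this:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(**ingress_rule)

print(ingress_rule["IpPermissions"][0]["FromPort"])  # 3306
```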

LoadBalancer

  • Difference between the Application vs Network Load Balancer, and in which scenario you would use each

Elastic Load Balancing Features
Elastic Load Balancing provides integrated certificate management and SSL/TLS decryption, allowing you the flexibility… (aws.amazon.com)

VPC

  • Create a VPC from scratch (at least 2 private subnets and 2 public subnets)
  • What an Internet Gateway is used for, and what changes you need to make in your route table to route traffic to the internet (0.0.0.0/0 to the IGW)
  • How a private instance talks to the internet (NAT Gateway); again, create it from scratch
  • VPC Endpoints (understand the difference between a Gateway Endpoint vs an Interface Endpoint)
  • Difference between NACLs vs Security Groups
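The subnet layout from the first bullet can be sketched with the standard library's `ipaddress` module (the 10.0.0.0/16 VPC CIDR and the two-public/two-private split are assumptions for illustration):

```python
import ipaddress

# Hypothetical VPC CIDR; carve out four /24 subnets:
# two public (one per AZ) and two private.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))[:4]

public = subnets[:2]   # route 0.0.0.0/0 -> Internet Gateway
private = subnets[2:]  # route 0.0.0.0/0 -> NAT Gateway

for net in public:
    print("public:", net)
for net in private:
    print("private:", net)
```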

CloudWatch

  • Remember, CloudWatch is now not only for displaying metrics; you can also push application logs via the CloudWatch agent

What is Amazon CloudWatch Logs? — Amazon CloudWatch Logs
Describes the fundamentals, concepts, and terminology you need to know for using CloudWatch Logs to monitor, store, and… (docs.aws.amazon.com)

  • Placement Groups: Just a brief idea about EC2 placement groups and their purpose (keyword: low latency between EC2 instances)

Placement Groups — Amazon Elastic Compute Cloud
Launch instances in a placement group to cluster them logically into a low-latency group, or to spread them across… (docs.aws.amazon.com)

  • Elastic File System (EFS): Look for key terms like a file system that needs to be mounted simultaneously on a bunch of EC2 instances (choice between S3/EBS/EFS)

Amazon Elastic File System (Amazon EFS) — Amazon Elastic Compute Cloud
Use Amazon EFS to create an EFS file system and mount it to one or more of your Linux instances. (docs.aws.amazon.com)

  • Lambda: Whenever they talk about cost optimization, Lambda is your go-to choice (but please read the scenario carefully)

Route53

  • Understand the differences between the routing policies

Choosing a Routing Policy — Amazon Route 53
Choose a routing policy before you create records in Amazon Route 53. (docs.aws.amazon.com)

  • Pay special attention to the latency-based routing policy (keyword: users in a specific region facing latency, so the key choice is between Route 53 vs CloudFront) and the failover routing policy

Autoscaling: Just a brief idea about how auto-scaling works

Databases

  • For RDS MySQL, understand the difference between read replicas (performance gain) vs HA/Multi-AZ (failover)
  • AWS is putting special emphasis on Aurora, so if they ask about migrating on-premises MySQL/Postgres to the AWS cloud, Aurora is the best bet

IAM

  • Make sure you understand the purpose of roles, and use roles for communication between different AWS services rather than routing over the public internet

BONUS: AWS goodies from re:Invent 2018 🙂

My AWS re:Invent 2019 Experience

Last week, I had the opportunity to attend the Amazon Web Services re:Invent conference in Las Vegas.

I would like to start with the good news: I am now AWS SysOps certified.

This is my 3rd AWS certification and my 15th overall.

Date: Dec 2 – Dec 6 (5 days)

Venue: Venetian, Aria, MGM, Bellagio, Mirage, Encore

Sessions: 1700+ (Bootcamps, Builders Sessions, Chalk Talks, Hackathons, Sessions, Workshops, Keynotes) & Exhibitions

AWS launched a bunch of products during re:Invent

https://aws.amazon.com/blogs/aws/aws-launches-previews-at-reinvent-2019-sunday-december-1st/

https://aws.amazon.com/blogs/aws/aws-launches-previews-at-reinvent-2019-tuesday-december-3rd/

https://aws.amazon.com/blogs/aws/aws-launches-previews-at-reinvent-2019-wednesday-december-4th/

What happens in Vegas doesn’t have to stay in Vegas. We all know demos somehow always work magically on stage, yet there is still a hiccup when we try to implement the same thing on our own laptop/desktop. So, before I start forgetting what I learned during the conference, I tried to re-create a few things; a detailed explanation is in the docs mentioned below.

Recommendation

  • Workshops: Try to attend as many workshops as possible, as all the session videos are available on YouTube after 24-48 hours anyway (so sessions can be watched later).
  • Chalk Talks: They are pretty good, and you can interrupt the speaker to ask questions, as compared to sessions where you need to wait until the end.
  • If you want to know more about newly launched products, try to attend the Leadership Sessions.
  • To get information about all the products launched, attend the Keynote sessions.
  • Finally, session talks; attend depending on your personal preference.

My Personal Order of preference

Workshops —> Chalk Talk —> Leadership Sessions —> Keynotes —> Sessions

Workshops

As I mentioned above, please try to attend as many workshops as possible.

1: Threat Detection Workshop

https://automating-threat-detection.awssecworkshops.com/01-environment-setup/
https://aws.amazon.com/blogs/security/how-get-started-security-response-automation-aws/

2: IAM Workshop

https://identity-round-robin.awssecworkshops.com/permission-boundaries-advanced/

3: EBS Workshop

https://github.com/aws-samples/maximizing-storage-throughput-and-performance

4: EC2 AutoScaling Workshop

https://ec2spotworkshops.com/running-amazon-ec2-workloads-at-scale.html

5: AWS Well-Architected Labs

https://www.wellarchitectedlabs.com/Cost/ent206.html

NOTE: 

1: These workshops are already publicly available.

2: This is implied, but don’t try these workshops in a PRD env; some of them, e.g. Workshop 2, simulate a brute-force attack.

3: During the workshop, AWS gave us a demo account to run these labs, so I believe it will cost you some $$$ if you want to try them on your own.

Top three announcements for me

1: AWS Transit Gateway Adds Multicast and Inter-Regional Peering

https://aws.amazon.com/blogs/aws/aws-transit-gateway-adds-multicast-and-inter-regional-peering/

2: AWS Identity and Access Management (IAM) Access Analyzer

https://aws.amazon.com/blogs/aws/identify-unintended-resource-access-with-aws-identity-and-access-management-iam-access-analyzer/



3: EC2 Image Builder

https://aws.amazon.com/image-builder/

Compute Announcements

AWS Compute Optimizer

https://aws.amazon.com/blogs/aws/aws-compute-optimizer-your-customized-resource-optimization-service/

Introducing EC2 Image Builder

https://aws.amazon.com/about-aws/whats-new/2019/12/introducing-ec2-image-builder/

AWS Savings Plans

https://aws.amazon.com/blogs/aws/new-savings-plans-for-aws-compute-services/

AWS Outposts are now generally available

https://www.businessinsider.com/amazon-aws-outposts-hybrid-cloud-generally-available-2019-12#:~:targetText=On%20Tuesday%20at%20the%20AWS,Outposts%20is%20now%20generally%20available.&targetText=With%20AWS%20Outposts%2C%20customers%20can,on%20their%20private%20data%20centers.

AWS Local Zones

https://aws.amazon.com/blogs/aws/aws-now-available-from-a-local-zone-in-los-angeles/

Networking

AWS Transit Gateway now supports Inter-Region Peering

https://aws.amazon.com/about-aws/whats-new/2019/12/aws-transit-gateway-supports-inter-region-peering/
https://aws.amazon.com/blogs/aws/aws-transit-gateway-adds-multicast-and-inter-regional-peering/

Inter-region peering is available in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and EU (Frankfurt) and Multicast is available in US East (N. Virginia)

Serverless Transit Network Orchestrator

https://aws.amazon.com/solutions/serverless-transit-network-orchestrator/


Announcing Accelerated Site-to-Site VPN for Improved VPN Performance

https://aws.amazon.com/about-aws/whats-new/2019/12/announcing-accelerated-site-to-site-vpn-for-improved-vpn-performance/

Transit Gateway Network Manager

Transit Gateway Network Manager (Network Manager) enables you to centrally manage your networks that are built around transit gateways. You can visualize and monitor your global network across AWS Regions and on-premises locations.

https://aws.amazon.com/transit-gateway/network-manager/

Some other noticeable announcements

AWS S3 Access Points: S3 Access Points are unique hostnames with dedicated access policies that describe how data can be accessed using that endpoint.

https://aws.amazon.com/blogs/aws/easily-manage-shared-data-sets-with-amazon-s3-access-points/

AWS Fargate for Amazon EKS: Starting today, you can use Amazon Elastic Kubernetes Service to run Kubernetes pods on AWS Fargate.

https://aws.amazon.com/blogs/aws/amazon-eks-on-aws-fargate-now-generally-available/

Amazon Elasticsearch Service UltraWarm: UltraWarm is a fully managed, low-cost, warm storage tier for Amazon Elasticsearch Service.

https://aws.amazon.com/blogs/aws/announcing-ultrawarm-preview-for-amazon-elasticsearch-service/

AWS Managed Cassandra Service: Amazon Managed Apache Cassandra Service (MCS) is a scalable, highly available, managed Apache Cassandra-compatible database service. Amazon MCS is serverless.

https://aws.amazon.com/blogs/aws/new-amazon-managed-apache-cassandra-service-mcs/

Then a bunch of machine learning announcements:

SageMaker Studio: Amazon SageMaker Studio is the first fully integrated development environment (IDE) for machine learning (ML).

https://aws.amazon.com/blogs/aws/amazon-sagemaker-studio-the-first-fully-integrated-development-environment-for-machine-learning/

SageMaker Notebooks: SageMaker Notebooks lets you quickly spin up a notebook for machine learning projects. The underlying compute is managed by AWS, and you can quickly transfer content between notebooks.

SageMaker Experiments: SageMaker Experiments is for training and tuning models automatically, capturing parameters when testing models. Older experiments can be searched by name, data set used, or parameters, making it easier to share and search models.

SageMaker Debugger: With Amazon SageMaker Debugger, you can debug and analyze complex training issues and receive alerts. It automatically introspects your models, collects debugging data, and analyzes it to provide real-time alerts and advice on ways to optimize your training times and improve model quality. All information is visible while your models are training.

https://aws.amazon.com/blogs/aws/amazon-sagemaker-studio-the-first-fully-integrated-development-environment-for-machine-learning/

SageMaker Model Monitor: Amazon SageMaker Model Monitor continuously monitors the quality of Amazon SageMaker machine learning models in production. It enables developers to set alerts for deviations in model quality, such as data drift and anomalies.

https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor.html

SageMaker Autopilot: Amazon SageMaker Autopilot automatically creates the best classification and regression machine learning models, while allowing full control and visibility.

https://aws.amazon.com/blogs/aws/amazon-sagemaker-autopilot-fully-managed-automatic-machine-learning/

Amazon Fraud Detector: Amazon Fraud Detector is a fully managed service that makes it easy to identify potentially fraudulent online activities such as online payment fraud and the creation of fake accounts.

https://aws.amazon.com/fraud-detector/

Amazon CodeGuru: Amazon CodeGuru is a machine learning service for automated code reviews and application performance recommendations.

https://aws.amazon.com/codeguru/

Contact Lens for Amazon Connect: Contact Lens for Amazon Connect is a set of capabilities for Amazon Connect, enabled by machine learning (ML), that gives contact center supervisors and analysts the ability to understand the content, sentiment, and trends of customer conversations in order to identify crucial customer feedback and improve the customer experience.

https://aws.amazon.com/blogs/contact-center/announcing-contact-lens-for-amazon-connect-preview/

Amazon Kendra : Amazon Kendra is a highly accurate and easy to use enterprise search service that’s powered by machine learning. Kendra delivers powerful natural language search capabilities to your websites and applications so your end users can more easily find the information they need within the vast amount of content spread across your company.

https://aws.amazon.com/kendra/

YouTube/Video Links for some sessions

AWS Transit Gateway reference architectures for many VPCs
https://www.youtube.com/watch?v=9Nikqn_02Oc

Power of eBPF by Brendan Gregg (short and funny clip; a must-watch)
https://twitter.com/AWSOpen/status/1202379357131431936

Some lessons learned based on past experience

  • Try to schedule all your sessions in one hotel. Jumping from one hotel to another is time-consuming; AWS provides a shuttle service, but sometimes it takes more than an hour to reach a different hotel.
  • Please reserve sessions in advance (you can reserve your seat when AWS opens the window, ~1 month ahead); otherwise you will need to be in the walk-in queue at least 1 hour prior, and even then you are not guaranteed a seat. This was the biggest lesson I learned.
  • For Keynotes, arrive at least two hours in advance if you want a better seat/view.

Final Word

The re:Invent conference was, without a doubt, one of the best experiences and adventures I have had in my IT career. I have been to a few conferences before, but to see first-hand and be part of the Amazon community, surrounded by and engaging with like-minded people for the 5-day duration, was truly inspiring and something that will stay with me for a very long time.

If you need any other info, please feel free to contact via any of the below links

  • Website: https://100daysofdevops.com/
  • Twitter: @100daysofdevops OR @lakhera2015
  • Facebook: https://www.facebook.com/groups/795382630808645/
  • Medium: https://medium.com/@devopslearning
  • GitHub: https://github.com/100daysofdevops/100daysofdevops
  • Slack:  https://join.slack.com/t/100daysofdevops/shared_invite/enQtODQ4OTUxMTYxMzc5LTYxZjBkNGE3ZjE0OTE3OGFjMDUxZTBjNDZlMDVhNmIyZWNiZDhjMTM1YmI4MTkxZTQwNzcyMDE0YmYxYjMyMDM
  • YouTube Channel: https://www.youtube.com/user/laprashant/videos

My road to AWS Certified SysOps Administrator – Associate

This is the continuation of my earlier posts, My road to AWS Certified Solution Architect and AWS Certified Security - Specialty Certification; now the AWS SysOps exam.

https://medium.com/@devopslearning/my-road-to-aws-certified-solution-architect-394676f15680

YAY I cleared the exam! 🙂

WARNING: Some housekeeping tasks before reading this blog

1: As everyone needs to sign NDA with AWS, I can’t tell you the exact question asked during the exam neither I have GB of memory, but I can give you the pointers what to expect in the exam.

2: As we all know AWS infrastructure updates everyday, so some of the stuff might not be relevant after a few days/weeks/months.

3: Please don’t ask for exam dumps or questions; that defeats the whole purpose of the exam.

Exam Preparation

  • I highly recommend the acloudguru course to everyone; it is specific to the exam and covers most of the topics

https://acloud.guru/learn/aws-certified-sysops-administrator-associate

  • My second recommendation is Linux Academy, which goes into the depth of each topic.

https://linuxacademy.com/course/aws-certified-sys-ops-administrator-associate-soa-c-01/

  • AWS re:Invent videos: I highly recommend going through these videos, as they will give you enough in-depth knowledge about each service.
  • AWS Documentation: the best documentation provided by any service provider. Don’t miss the FAQs for each service (especially CloudWatch, CloudFormation and Route53).
  • My own blog 🙂

Once you are done with the above preparation, it’s a good time to gauge your knowledge; check the AWS-provided sample questions

https://d1.awsstatic.com/training-and-certification/docs-sysops-associate/AWS-Certified-SysOps-Administrator-Associate-Sample-Questions-v1.5_FINAL.pdf

Now coming back to the exam, the entire exam is divided into seven main topics.

Based on my experience, you absolutely need to know these three services to clear this exam.

  • CloudWatch
  • CloudFormation
  • ALB

Surprise package: not many questions related to RDS

Domain 1: Monitoring and Reporting

  • Which metrics CloudWatch monitors by default
  • At least have a rough idea of the CloudWatch monitoring dashboard; the way I memorize the default metrics is CDNS --> Content Delivery Network Status:
* C --> CPU
* D --> Disk
* N --> Network
* S --> Status Check
  • Learn by heart that memory and disk utilization are custom metrics (don’t confuse these with the disk read/write metrics above; here AWS is asking about how much disk space is consumed inside the VM), and know how to configure the CloudWatch agent to push these custom metrics to CloudWatch.

https://medium.com/@devopslearning/100-days-of-devops-day-4-cloudwatch-log-agent-installation-centos7-d11054fffdf4
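To make the custom-metric idea concrete, here is a minimal sketch of the payload shape you would hand to CloudWatch for memory utilization. In practice the CloudWatch agent builds and ships this for you; with boto3 the call would be `cloudwatch.put_metric_data(Namespace=..., MetricData=[...])`. The instance ID and byte counts below are made-up illustrative values.

```python
# Sketch: building a put_metric_data-style entry for a custom memory metric.
# The instance id and numbers are made up for illustration.

def memory_metric(instance_id, used_bytes, total_bytes):
    """Return a MetricData entry reporting memory utilization as a percentage."""
    return {
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Value": round(100.0 * used_bytes / total_bytes, 2),
        "Unit": "Percent",
    }

metric = memory_metric("i-0123456789abcdef0",
                       used_bytes=6 * 2**30,   # 6 GiB used
                       total_bytes=8 * 2**30)  # 8 GiB total
print(metric["Value"])  # 75.0
```

The key exam point is that this percentage is something only the guest OS can see, which is exactly why it has to be pushed as a custom metric rather than collected by the hypervisor.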

  • How to create a billing alarm using CloudWatch

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/gs_monitor_estimated_charges_with_cloudwatch.html

  • Make sure you understand the difference between CloudTrail (API calls) vs CloudWatch (metrics) vs AWS Config (audit).
  • CloudTrail log file validation: please check this and make sure you know how to enable it. You will probably see a bunch of questions related to this topic

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html
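The mechanism behind log file validation is a hash chain: each hourly digest records the SHA-256 hash of every delivered log file plus a link to the previous digest, so tampering with any log file (or removing a digest) breaks the chain. The real digest files are JSON documents signed with an RSA key; the sketch below only illustrates the hashing idea, with made-up file names and contents.

```python
import hashlib

# Simplified illustration of the CloudTrail log-validation hash chain.
# Not the real digest file format -- just the core idea.

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_digest(log_files, previous_digest_hash):
    """Record a hash per delivered log file, plus a link to the prior digest."""
    return {
        "logFiles": [{"name": n, "hashValue": sha256(c)} for n, c in log_files],
        "previousDigestHash": previous_digest_hash,
    }

def verify(digest, log_files, expected_previous_hash):
    """Recompute every hash and check the chain link; any tampering fails."""
    hashes_ok = all(
        entry["hashValue"] == sha256(content)
        for entry, (_, content) in zip(digest["logFiles"], log_files)
    )
    return hashes_ok and digest["previousDigestHash"] == expected_previous_hash

logs = [("trail-log-1.json", b'{"eventName": "RunInstances"}')]
digest = build_digest(logs, previous_digest_hash="abc123")
print(verify(digest, logs, "abc123"))      # True
tampered = [("trail-log-1.json", b'{"eventName": "DeleteTrail"}')]
print(verify(digest, tampered, "abc123"))  # False
```

As far as enabling it goes, the AWS CLI exposes an `--enable-log-file-validation` flag on `aws cloudtrail update-trail` (or you can tick the option when creating the trail in the console).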

  • AWS Organizations (expect 2-3 questions related to this topic)

Domain 2: High Availability

  • How to encrypt an existing RDS instance (make sure you understand there is no way to encrypt an existing DB, e.g. MySQL, in place; you need to take a snapshot, create a copy of it, and that copy can be encrypted)

https://aws.amazon.com/premiumsupport/knowledge-center/encrypt-rds-snapshots/
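The snapshot-copy-restore workflow can be written out as the parameters for the corresponding boto3 RDS calls (`create_db_snapshot`, `copy_db_snapshot`, `restore_db_instance_from_db_snapshot`). All identifiers and the KMS key alias below are made up for illustration; the exam point is that encryption happens at the copy step, via the KMS key.

```python
# Sketch of the encrypt-by-copy workflow for an existing unencrypted
# RDS instance, expressed as (boto3 call name, parameters) pairs.
# Identifiers and key alias are made up for illustration.

steps = [
    ("create_db_snapshot", {
        "DBInstanceIdentifier": "mydb",            # the unencrypted instance
        "DBSnapshotIdentifier": "mydb-snap",
    }),
    ("copy_db_snapshot", {
        "SourceDBSnapshotIdentifier": "mydb-snap",
        "TargetDBSnapshotIdentifier": "mydb-snap-encrypted",
        "KmsKeyId": "alias/aws/rds",               # supplying a KMS key encrypts the copy
    }),
    ("restore_db_instance_from_db_snapshot", {
        "DBInstanceIdentifier": "mydb-encrypted",  # new, encrypted instance
        "DBSnapshotIdentifier": "mydb-snap-encrypted",
    }),
]

for name, _params in steps:
    print(name)
```

After the restore you repoint your application at the new endpoint and retire the old, unencrypted instance.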

  • Understand for which services AWS takes care of maintenance vs which services you need to maintain yourself (e.g. EC2). AWS handles maintenance for:
* RDS
* ElastiCache
* RedShift
* DynamoDB DAX
* Neptune
* Amazon DocumentDB
  • How to troubleshoot AutoScaling Issues

https://docs.aws.amazon.com/autoscaling/ec2/userguide/CHAP_Troubleshooting.html

  • How to improve CloudFront Cache hit ratio

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ConfiguringCaching.html

Domain 3: Deployment and Provisioning

  • Understand the different EC2 pricing models (in which cases you use Spot vs Reserved vs On-Demand)

https://aws.amazon.com/ec2/pricing/
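Since the exam mostly tests intuition about when each model is cheaper, a back-of-the-envelope comparison helps. The hourly rates below are made-up illustrative numbers, not real AWS prices; the point is the relative ordering for a steady 24x7 workload.

```python
# Toy cost comparison for one instance running 24x7 for a 30-day month.
# Hourly rates are invented for illustration, not real AWS prices.

HOURS = 24 * 30  # 720 hours in the month

on_demand_rate = 0.10  # pay per hour, no commitment
reserved_rate  = 0.06  # 1-3 year commitment, steady-state workloads
spot_rate      = 0.03  # spare capacity, can be interrupted at short notice

for name, rate in [("on-demand", on_demand_rate),
                   ("reserved", reserved_rate),
                   ("spot", spot_rate)]:
    print(f"{name}: ${rate * HOURS:.2f}/month")
```

The rule of thumb: Reserved for predictable baseline load, Spot for interruption-tolerant batch work, On-Demand for everything spiky or short-lived.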

  • Understand the difference between stopping/starting an instance (it boots up on a different hypervisor) vs rebooting it (same hypervisor)

https://alestic.com/2011/09/ec2-reboot-stop-start/

  • ELB error messages (I got confused between multiple choices)

https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/ts-elb-error-message.html

  • Various use cases of AWS Systems Manager, especially for patching (I always refer to this video to review the Systems Manager concepts)

Domain 4: Storage and Data Management

  • How S3 lifecycle policies work
  • S3 MFA Delete
  • S3 delete markers
  • S3 resource policies

https://docs.aws.amazon.com/AmazonS3/latest/dev/DeletingObjectVersions.html
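For the lifecycle-policy questions it helps to have seen what a rule actually looks like. Below is a sketch of a lifecycle configuration in the shape accepted by boto3's `put_bucket_lifecycle_configuration` / the S3 REST API: transition objects to cheaper storage classes as they age, then expire them. The prefix and day counts are made up for illustration.

```python
import json

# Sketch of an S3 lifecycle configuration: age logs into Infrequent
# Access, then Glacier, then delete them after a year. Prefix and day
# counts are illustrative, not a recommendation.

lifecycle = {
    "Rules": [
        {
            "ID": "archive-old-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

print(json.dumps(lifecycle, indent=2))
```

Note that on a versioned bucket, `Expiration` on the current version just adds a delete marker; cleaning up old versions needs the separate noncurrent-version lifecycle actions, which ties directly into the delete-marker bullet above.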

  • Understand how KMS works, at least at a basic level
  • Use of Snowball (whenever a question asks about moving terabytes/petabytes of data to AWS and you have a 100-150 Mbps link, Snowball is your best bet)
  • Different types of Storage Gateway (File vs Volume vs Tape, and the use case for each)
    https://aws.amazon.com/storagegateway/faqs/

Domain 5: Security and Compliance

  • Understand the AWS Shared Responsibility Model

https://aws.amazon.com/compliance/shared-responsibility-model/

  • Understand how AWS WAF works
  • Difference between AWS Shield vs GuardDuty

https://medium.com/the-crossover-cast/100-days-of-devops-day-48-threat-detection-and-mitigation-at-aws-b29611707f67

  • Usage of Trusted Advisor

https://medium.com/@devopslearning/100-days-of-devops-day-42-audit-your-aws-environment-50237fc3b3

  • Various AWS limits

https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html

  • How AWS Inspector works

Domain 6: Networking

  • AWS VPC (make sure you know this by heart, not just for this exam but for all the associate exams)
  • Difference between Security Group and NACL
  • Usage of NAT gateways and in which cases a blackhole route is created:
If you delete a NAT gateway, the NAT gateway routes remain in a blackhole status until you delete or update the routes
  • Usage of VPC Flow Logs and how they work

https://medium.com/@devopslearning/100-days-of-devops-day-28-introduction-to-vpc-flow-logs-d11a99cd18ca

  • Some use cases of Route53 (e.g. how to use it with CloudFront and a load balancer; the hint is the Alias record)
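To internalize the Security Group vs NACL difference from the list above, here is a toy Python model (not an AWS API): a security group is a stateful allow-list, anything not explicitly allowed is denied and return traffic is automatically permitted, while a NACL is a stateless, numbered rule list evaluated in order, first match wins, with an implicit deny at the end. All rule numbers and ports are made up.

```python
# Toy model of Security Group vs NACL evaluation. Not an AWS API --
# just the evaluation logic the exam expects you to reason about.

def security_group_allows(port, allowed_ports):
    # Stateful allow-list: a rule is needed only for the inbound request;
    # replies flow back automatically. No deny rules exist.
    return port in allowed_ports

def nacl_allows(port, rules):
    # Stateless: rules checked in ascending rule-number order, first
    # match wins; if nothing matches, the implicit deny applies.
    for _rule_number, rule_port, action in sorted(rules):
        if rule_port == port or rule_port == "*":
            return action == "allow"
    return False

sg_ports = {22, 443}
nacl_rules = [
    (100, 443, "allow"),
    (200, 22, "deny"),     # explicit deny is possible in a NACL...
    (32767, "*", "deny"),  # ...and the catch-all deny ends the list
]

print(security_group_allows(443, sg_ports))  # True
print(nacl_allows(443, nacl_rules))          # True  (rule 100 matches first)
print(nacl_allows(22, nacl_rules))           # False (explicit deny at rule 200)
```

The takeaway for the exam: security groups cannot express "deny", and NACLs need separate inbound and outbound rules (including ephemeral ports for return traffic) precisely because they are stateless.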

Domain 7: Automation and Optimization

  • This is the one domain where I lagged, because I am not using many of the tools needed for it, e.g. CloudFormation, OpsWorks and Elastic Beanstalk
  • CloudFormation Delete Stack
  • One question related to OpsWorks
  • One question related to Elastic Beanstalk

Final Words

  • The key takeaway from this exam: you can easily clear it if you know CloudWatch, CloudFormation and load balancers
  • The last exam I wrote was the AWS Security Specialty exam, where the questions were scenario-based and some were almost a page long; here most of the questions are to the point.
  • So keep calm, write this exam, and let me know if you have any questions.

Please join me with my journey by following any of the below links

  • Website: https://100daysofdevops.com/
  • Twitter: @100daysofdevops OR @lakhera2015
  • Facebook: https://www.facebook.com/groups/795382630808645/
  • Medium: https://medium.com/@devopslearning
  • GitHub: https://github.com/100daysofdevops/100daysofdevops
  • Slack:  https://join.slack.com/t/100daysofdevops/shared_invite/enQtODQ4OTUxMTYxMzc5LTYxZjBkNGE3ZjE0OTE3OGFjMDUxZTBjNDZlMDVhNmIyZWNiZDhjMTM1YmI4MTkxZTQwNzcyMDE0YmYxYjMyMDM
  • YouTube Channel: https://www.youtube.com/user/laprashant