
Kubernetes Fundamentals


Open-source platform for running cloud-native apps.

Layered view:
                Cloud-native apps  ->  containers
                Kubernetes         ->  platform (Linux nodes: VMs, cloud instances)
                Infrastructure     ->  IaaS

Kubernetes
                1. Control Plane (Brain)
                                - API server, scheduler, controllers, and a persistence store
                                - Store: etcd (stateful)
                                - API server: kubectl -> requests (POST, YAML) -> API
                2. Worker Nodes (run the applications)

Kubernetes API:
                RESTful CRUD: Create, read, update, delete
                The kubectl command is used to make API requests
                                kubectl -> YAML -> Kubernetes Cluster (Desired State -> Current State)
                API groups:
                                - Core API
                                - apps API
                                - authorization API
                                - storage API
                (SIGs look after API development)
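
                For example, desired state is declared in a YAML manifest and sent to the API server with kubectl. A minimal sketch (hypothetical nginx Pod; all names illustrative):

                # pod.yml -- desired state; "kubectl apply -f pod.yml" POSTs it to the API server,
                # and controllers then drive current state toward this desired state
                apiVersion: v1
                kind: Pod
                metadata:
                  name: web-pod
                  labels:
                    app: web
                spec:
                  containers:
                    - name: web-ctr
                      image: nginx:1.25
                      ports:
                        - containerPort: 80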
               
Kubernetes Objects:
                Pod
                                - contains one or more containers
                                - atomic unit of scheduling
                                - object on the cluster
                                - defined in the core (v1) API group
                Deployment
                                - object on the cluster
                                - defined in the apps/v1 API group
                                - scaling
                                - rolling updates
                DaemonSet:
                                - one Pod per node
                StatefulSet (sts):
                                - for stateful app components
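
                A minimal DaemonSet sketch (hypothetical per-node log agent; names illustrative):

                apiVersion: apps/v1
                kind: DaemonSet
                metadata:
                  name: log-agent
                spec:
                  selector:
                    matchLabels:
                      app: log-agent
                  template:
                    metadata:
                      labels:
                        app: log-agent
                    spec:
                      containers:
                        - name: agent
                          image: fluentd:latest   # assumption: any per-node agent image works here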

App Architecture:

[architecture diagrams]

K8s Networking:
                Old world: [diagram]
                New World: [diagram]

K8s deployed on IaaS launches IaaS-specific load balancers when we request a Service of type LoadBalancer in K8s.

               
 Kubernetes Networking:
                Rules:
  1. All nodes in a cluster can talk to each other
  2. All Pods on the network can talk to each other w/o NAT
  3. Every Pod gets its own IP address

Kubernetes Services:


The Service name and IP address are stable (they do not change) and are registered with the cluster's native DNS service.



Services and Pods are connected via the label selector defined on the Service.
Whenever a Service object is created, K8s creates another object called an Endpoints object, which tracks Pods coming alive or shutting down based on the label selector.
The Endpoints object is the list of IPs of the Pods that are alive.
The Service is always watching the API server for new Pods matching the label selector and updates the Endpoints object with their IPs.
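
A sketch of a Service tied to Pods by a label selector (names illustrative):

                apiVersion: v1
                kind: Service
                metadata:
                  name: web-svc
                spec:
                  type: ClusterIP          # stable name + IP, registered in the cluster DNS
                  selector:
                    app: web               # the Endpoints object tracks live Pods carrying this label
                  ports:
                    - port: 80             # Service port
                      targetPort: 80       # container port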

3 major types of Services:
  1. LoadBalancer
  2. NodePort
  3. ClusterIP


A node's IP address plus the NodePort gets you access to the Pods from outside the cluster, via any node.
AWS and Azure use NodePorts behind the load balancer to connect into the K8s cluster.
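
A NodePort Service sketch (the 30001 value is illustrative; K8s picks one from 30000-32767 if omitted). Setting type: LoadBalancer instead would additionally provision the IaaS load balancer mentioned above:

                apiVersion: v1
                kind: Service
                metadata:
                  name: web-nodeport
                spec:
                  type: NodePort
                  selector:
                    app: web
                  ports:
                    - port: 80            # ClusterIP port inside the cluster
                      targetPort: 80      # container port
                      nodePort: 30001     # reachable at <any-node-ip>:30001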




Service Networks:
3 Networks:
  1. Node Network
  2. Pod Network
  3. Service Network (virtual; it is not a real network like the Node and Pod networks)



Every node runs 3 components: the container runtime, the kubelet, and kube-proxy. A node is basically a VM or physical machine on the K8s node network.

Kubernetes architecture with cloud controller manager.



Kube-proxy: it writes the IPVS/iptables rules on each node; any request addressed to the service network has its headers rewritten and is sent to an appropriate Pod on the Pod network.
               
 The flow: a request hits the Service IP, kube-proxy's rules catch it and forward it to a Pod. [diagram]

Kube-proxy's IPVS mode is preferred as it is more scalable than iptables mode.

All containers in a Pod share the Pod's network stack and can talk to each other via localhost.
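
A multi-container Pod sketch (names illustrative); both containers share one IP and reach each other on localhost:

                apiVersion: v1
                kind: Pod
                metadata:
                  name: two-ctr-pod
                spec:
                  containers:
                    - name: app
                      image: nginx:1.25                # serves on :80
                    - name: sidecar
                      image: curlimages/curl:latest
                      # talks to the app container over the shared network stack
                      command: ["sh", "-c", "while true; do curl -s localhost:80 > /dev/null; sleep 30; done"]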



Storage in Kubernetes:

In Kubernetes, storage is a first-class citizen.

K8s has the Persistent Volume subsystem for consuming storage; it uses the Container Storage Interface (CSI) as the connector to the real storage system.

               
  PVs are storage resources, e.g. 20 GB of fast SSD.
  PVCs are tickets to use a PV.
  SCs (StorageClasses) are the way to make PVs and PVCs dynamic.

   Container Storage Interface: [diagram]

PV Subsystem: [diagrams]
               
PVs are created on the cluster, but the storage itself exists on an external system or cloud (e.g. a persistent disk on the Google cloud platform).
To access a PV, one needs a PVC, i.e. a claim ticket, which is defined in a YAML file.


Sample PV YAML:
Note: pdName uber-disk is a volume present on your cloud platform / on-premises, outside of the cluster.
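
A sketch reconstructing such a PV, assuming a GCE persistent disk and an illustrative "ssd" storage class:

                apiVersion: v1
                kind: PersistentVolume
                metadata:
                  name: pv1
                spec:
                  storageClassName: ssd              # assumption: illustrative class name
                  capacity:
                    storage: 20Gi                    # the "20 GB fast SSD" resource
                  accessModes:
                    - ReadWriteOnce
                  persistentVolumeReclaimPolicy: Retain
                  gcePersistentDisk:                 # legacy in-tree GCE volume plugin
                    pdName: uber-disk                # pre-existing disk outside the cluster
                    fsType: ext4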

PV access modes:

A PV can have only one active claim (PVC) at a time.
RWO (ReadWriteOnce) – only one Pod/node can read-write
RWX (ReadWriteMany) – multiple Pods can read-write
ROX (ReadOnlyMany) – multiple Pods can read-only

Only file-based volumes (e.g. NFS) support RWX; block volumes do not.

The reclaim policy can be Delete or Retain.

 PVC YAML:
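
A matching PVC sketch (class, access mode, and size line up with the PV above, so the claim binds):

                apiVersion: v1
                kind: PersistentVolumeClaim
                metadata:
                  name: pvc1
                spec:
                  storageClassName: ssd        # must match the PV
                  accessModes:
                    - ReadWriteOnce            # must match the PV
                  resources:
                    requests:
                      storage: 20Gi            # PV must offer at least this much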

               


The PV and PVC specs have to match (storage class, access mode, and capacity) for the claim to bind.

 Pod using PV and PVC:
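A sketch of a Pod mounting that claim (paths and names illustrative):

                apiVersion: v1
                kind: Pod
                metadata:
                  name: volpod
                spec:
                  volumes:
                    - name: data
                      persistentVolumeClaim:
                        claimName: pvc1        # the claim ticket
                  containers:
                    - name: ubuntu-ctr
                      image: ubuntu:latest
                      command: ["/bin/bash", "-c", "sleep 60m"]
                      volumeMounts:
                        - name: data
                          mountPath: /data     # the PV appears here inside the container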
               
   Dynamic provisioning with StorageClasses enables on-demand provisioning of volumes.
    The PV subsystem's control loop watches for PVCs that reference a StorageClass and creates the associated PV automatically.
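
A StorageClass sketch, assuming the in-tree GCE PD provisioner (the name "fast" is illustrative):

                apiVersion: storage.k8s.io/v1
                kind: StorageClass
                metadata:
                  name: fast
                provisioner: kubernetes.io/gce-pd    # a CSI driver name also works here
                parameters:
                  type: pd-ssd                       # provider-specific parameter

A PVC that sets storageClassName: fast then gets a matching PV created for it automatically.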

   Code to Kubernetes: [diagrams]

Kubernetes Deployments:
         A Deployment manages a single Pod template, i.e. one Deployment runs only one type of Pod.
         Deployments in K8s are completely declarative; never change them imperatively.


   The ReplicaSet creates the requested number of identical Pods (e.g. 3) per the spec defined.


The strategy defines how to update a live app in K8s. K8s creates another ReplicaSet and keeps surging Pods into the new ReplicaSet while removing them from the old one until the rollout is over.
minReadySeconds defines the number of seconds K8s waits before launching the next step of the rolling update.
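
Putting these pieces together, a Deployment sketch (names and numbers illustrative):

                apiVersion: apps/v1
                kind: Deployment
                metadata:
                  name: web-deploy
                spec:
                  replicas: 3                  # ReplicaSet keeps 3 identical Pods
                  minReadySeconds: 10          # wait 10s between rolling-update steps
                  strategy:
                    type: RollingUpdate
                    rollingUpdate:
                      maxSurge: 1              # surge Pods into the new ReplicaSet
                      maxUnavailable: 1        # while draining the old one
                  selector:
                    matchLabels:
                      app: web
                  template:
                    metadata:
                      labels:
                        app: web
                    spec:
                      containers:
                        - name: web-ctr
                          image: nginx:1.25
                          ports:
                            - containerPort: 80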


 Labels are the most important thing in K8s; label selectors tie Deployments, ReplicaSets, and Services to their Pods.
               

Auto Scaling in K8s:
               
Horizontal Pod Autoscaler: to add more Pods
Cluster Autoscaler: to add more nodes
               
If the Pods' CPU and memory are full:


The Horizontal Pod Autoscaler will start scaling by adding more Pods.


If the cluster's capacity is also full, then Pod scaling leaves the newly created Pods in the Pending state.


Once Pods start going into the Pending state, the Cluster Autoscaler kicks in and adds more nodes to the cluster, and the Pending Pods get deployed.


The Cluster Autoscaler triggers every 10 seconds and scales based on resource requests, while the Horizontal Pod Autoscaler scales on actual usage values.
Pods should always declare resource requests for scaling to work; cluster scaling works off what is being requested.
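
A sketch of both halves (numbers illustrative): the Deployment's containers declare requests, and an HPA targets the Deployment:

                # in the Deployment's container spec -- requests are what
                # cluster scaling reasons about:
                #   resources:
                #     requests:
                #       cpu: 100m
                #       memory: 128Mi

                apiVersion: autoscaling/v2
                kind: HorizontalPodAutoscaler
                metadata:
                  name: web-hpa
                spec:
                  scaleTargetRef:
                    apiVersion: apps/v1
                    kind: Deployment
                    name: web-deploy
                  minReplicas: 3
                  maxReplicas: 10
                  metrics:
                    - type: Resource
                      resource:
                        name: cpu
                        target:
                          type: Utilization          # HPA acts on actual measured values
                          averageUtilization: 70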


RBAC and Admission Control:

Clients: kubectl and the K8s control plane components
               
               
RBAC has been enabled since v1.6


Authorization modes: Node, RBAC.
We can use kubectl to add RBAC Roles and RoleBindings.
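
A Role and RoleBinding sketch (namespace, user, and names illustrative; "kubectl create role" / "kubectl create rolebinding" do the same imperatively):

                apiVersion: rbac.authorization.k8s.io/v1
                kind: Role
                metadata:
                  namespace: dev
                  name: pod-reader
                rules:
                  - apiGroups: [""]                  # "" = core API group
                    resources: ["pods"]
                    verbs: ["get", "list", "watch"]
                ---
                apiVersion: rbac.authorization.k8s.io/v1
                kind: RoleBinding
                metadata:
                  name: read-pods
                  namespace: dev
                subjects:
                  - kind: User
                    name: jane                       # illustrative user
                    apiGroup: rbac.authorization.k8s.io
                roleRef:
                  kind: Role
                  name: pod-reader
                  apiGroup: rbac.authorization.k8s.io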


Admission control is policy enforcement and can be externalized using webhooks.


Admission control has two phases, mutating and validating. Mutating admission can modify the request, and both phases must pass; otherwise the request is rejected.
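
A sketch of externalizing the validating phase via a webhook (service name, namespace, and path illustrative; CA bundle elided):

                apiVersion: admissionregistration.k8s.io/v1
                kind: ValidatingWebhookConfiguration
                metadata:
                  name: policy-check
                webhooks:
                  - name: policy.example.com
                    admissionReviewVersions: ["v1"]
                    sideEffects: None
                    failurePolicy: Fail              # reject requests if the webhook is down
                    rules:
                      - apiGroups: ["apps"]
                        apiVersions: ["v1"]
                        operations: ["CREATE", "UPDATE"]
                        resources: ["deployments"]
                    clientConfig:
                      service:
                        namespace: policy
                        name: policy-svc             # in-cluster webhook server
                        path: /validate
                      # caBundle: <base64 CA cert for the webhook's TLS endpoint>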


