
Docker Fundamentals

Docker Training Notes

Docker commands: check that Docker is installed correctly.
1. docker info
2. docker-compose --version

Hello world with Docker
1. docker run hello-world

Docker took the following steps:
1. The docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from Docker Hub.
3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it to the terminal.


CLIENT                 DOCKER_HOST               REGISTRY
docker build    -->    Docker daemon     <-->    Docker Hub
docker pull            - Containers
docker run             - Images

Docker images vs container:
1. Image is combination of filesystem and parameters
2. Image has no state and doesn't change
3. Image is downloaded, built and run
4. Container == running image
5. Image : Class
   Container : instance
6. Image : result of running a series of steps i.e. build process
7. One image, many containers (see the example after this list)
8. Images are immutable; containers add a writable layer on top
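
For example, running the hello-world image twice creates two separate containers from one image (container names below are made up for illustration):
docker container run --name hello_a hello-world // first container from the hello-world image
docker container run --name hello_b hello-world // second container from the same image
docker container ls -a // both containers exist, created from one image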

Downloading and storing Docker images: Docker Hub is a Docker REGISTRY

Docker Registry
  -> Docker Repository
       -> Docker Image (Tag: v1)
       -> Docker Image (Tag: v2)
  -> Docker Repository
       -> Docker Image (Tag: v1)
       -> Docker Image (Tag: v2)

https://hub.docker.com/explore/

How to specify your own registry/hub:
docker run docker.io/library/hello-world
docker run library/hello-world

Automated Deployments steps for docker images:

1. Push Code
2. Kick Off CI Tasks
3. Build image
4. Send webhook
5. Receive webhook
6. Pull new image
7. Restart containers

Docker Hub vs Docker Store
Docker Hub                    | Docker Store
- Your images                 | - Official images
- Free community images       | - Paid trusted images
- Unlimited public repos      | - Keeps tabs on paid content


Docker build process:
2 ways:
1. docker commit - snapshot a running container as an image (see the sketch after this list)
2. Dockerfile (superior) - has the build steps for your application
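
A rough sketch of the docker commit flow (container and image names are made up for illustration; python:2.7-alpine is reused from these notes as the base):
docker container run -it --name temp python:2.7-alpine sh // start a container, make changes inside, then exit
docker container commit temp myimage:v1 // snapshot the container's filesystem as a new image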

Dockerize a simple web application:
Dockerfile:
 --------------------------------------------------------------------------------
FROM <baseimage_name>:<baseimage_version> // To load base image 

FROM usage example:
FROM python:2.7-alpine
FROM ruby:2.4-alpine

RUN mkdir /app // RUN executes commands at build time

WORKDIR /app // sets the working directory for the next set of instructions
// there can be multiple WORKDIR instructions in a file

COPY requirements.txt requirements.txt // copies the file from the build context into the working directory (/app)

RUN pip install -r requirements.txt // -r is to install packages listed in a file

COPY . . // copies everything from the build context into the current /app folder

// Docker caches each instruction as a layer and reuses the cache as long as the copied files have not changed; that is why requirements.txt is copied and installed before the rest of the code.

LABEL maintainer="Bipin Joshi <bipin.joshi1@verizon.com>" \
      version="1.0"
// LABEL is used to set metadata for docker image

CMD flask run --host=0.0.0.0 --port=5000 // the application can be accessed at localhost:5000 or docker_ipaddress:5000

---------------------------------------------------------------------------------------
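
Putting the instructions above together, a minimal Dockerfile sketch for the Flask app might look like this (nothing new beyond the pieces already shown):
--------------------------------------------------------------------------------
FROM python:2.7-alpine

LABEL maintainer="Bipin Joshi <bipin.joshi1@verizon.com>" \
      version="1.0"

RUN mkdir /app
WORKDIR /app

COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

COPY . .

CMD flask run --host=0.0.0.0 --port=5000
--------------------------------------------------------------------------------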

Docker CLI:
docker --help // management commands and commands

docker image build -t web1 . // builds the image from the Dockerfile in the current directory and tags it web1

docker image inspect web1 // json of docker image details
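
A single field can be pulled out with --format, e.g. the labels set via LABEL in the Dockerfile:
docker image inspect --format '{{ .Config.Labels }}' web1 // prints only the image labels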

docker image ls // list down all the images
// <none> is dangling image

docker image rm web1:1.0 // remove docker image

//push image to docker hub
// 1. login to docker hub
// log in to Docker Hub from the command line
docker login

// tag the docker image with docker username
docker image tag web1 bipinjoshi/web1:latest // bipinjoshi is dockerhub user name

//push the image to dockerhub
docker image push bipinjoshi/web1:latest

// pull image from dockerhub
docker pull bipinjoshi/web1:latest


Running docker images:
docker container ls // list all running containers

docker container ls -a // list all containers, including stopped ones

docker container rm sharp_pike // remove an unused (stopped) container by name

// run docker container with the image

docker container run -it -p 5000:5000 -e FLASK_APP=app.py web1

// open localhost
localhost:5000
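
To quickly confirm the app is responding (assuming curl is available on the host):
curl http://localhost:5000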

// run the container with the --rm option so it is removed automatically when it stops

docker container run -it --rm --name web1 -p 5000:5000 -e FLASK_APP=app.py web1

// to run container in background use -d

docker container run -it --rm --name web1 -p 5000:5000 -e FLASK_APP=app.py -d web1

// to get docker logs for web1 container
docker container logs web1

// to tail docker logs for web1 container use -f
docker container logs -f web1

// to get docker stats
docker container stats

// docker run help - list of all the parameter to run command
docker container run --help

docker container exec -it web1 bash // to debug inside the container bash session

docker container exec -it web1 flask --version // to see the version of flask and python

docker container exec -it --user "$(id -u):$(id -g)" web1 touch hi.txt // create a new file as the current host user (useful with mounted volumes)

// how to launch a python prompt (or similar for other images, e.g. nodejs)
docker container run -it --rm --name testingpython python:2.7-alpine python
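
The same pattern works for other language images; a Node.js sketch (the image tag is an assumption):
docker container run -it --rm --name testingnode node:8-alpine node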

Linking containers with docker networks:
// build a new web container
docker image build -t web2 .

// build a redis container
docker pull redis:3.2-alpine

// list docker networks
docker network ls // lists 3 default networks: bridge, host and none. bridge corresponds to the docker0 interface on the host; more details can be found using ifconfig.

docker network inspect bridge // inspect a particular network. The Containers section will be empty if no containers are running.

// create your own bridge network to have custom domain name
docker network create --driver bridge mynetwork

// inspect your network
docker network inspect mynetwork

// run container commands on mynetwork with the --net attribute
docker container run --rm -itd -p 5000:5000 --name redis --net mynetwork redis:3.2-alpine

docker container run --rm -itd -p 5001:5001 -e FLASK_APP=app.py -e FLASK_DEBUG=1 --name web2 -v $PWD:/app --net mynetwork web2

// inspect your network; you should now see the 2 running containers listed by name in your bridge network
docker network inspect mynetwork

// a bridge network only connects containers running on the same Docker host; for containers on different hosts, use the overlay network driver (sketch below).
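
As a rough sketch (not part of these notes' exercises), an overlay network would be created like this; it requires swarm mode to be initialized first:
docker swarm init // turn this host into a single-node swarm (assumption: standalone host)
docker network create --driver overlay --attachable my_overlay // --attachable lets standalone containers join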

Persisting data to docker host:

// Use data volumes to persist the data 

// Named volumes (e.g. web2_redis:/data) let us use a name instead of a filepath. The data is persisted in a special data volume directory, separate from source code bind mounts.

// create a docker volume
docker volume create web2_redis

// list of all docker volumes
docker volume ls

// get more details of the volume i.e. mountpoint etc...
docker volume inspect web2_redis

// once a volume is created, it can be used in run commands to specify where data needs to be stored
docker container run --rm -itd -p 5001:5001 --name redis --net mynetwork -v web2_redis:/data redis:3.2-alpine

// save redis data to the volume manually
docker exec redis redis-cli SAVE
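
To confirm persistence, one sketch is to set a key, stop the container and start a fresh one against the same volume (names reused from above, key name is made up):
docker exec redis redis-cli SET greeting hello // write a key
docker exec redis redis-cli SAVE // persist to /data on the web2_redis volume
docker container stop redis // container is removed (--rm), but the volume survives
docker container run --rm -itd --name redis --net mynetwork -v web2_redis:/data redis:3.2-alpine
docker exec redis redis-cli GET greeting // should still return "hello"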

Optimizing docker images:
1. Use a .dockerignore file to keep unneeded files out of the build context (sketch after this list)
2. Remove unnecessary dependencies
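
A .dockerignore sketch for a Python project (entries are assumptions, adjust per project):
.git
__pycache__/
*.pyc
.env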

Running scripts when a container starts:
- One image can serve multiple projects/environments
- ENV variables let us configure it
- Complicated logic lives in a baked-in shell script (the entrypoint)

Eg. the Dockerfile instructions will look like below:
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
------------------------------
Eg. bash script will look like:
#!/bin/sh

set -e
echo "The Dockerfile ENTRYPOINT has been executed"
export WEB2_COUNTER_MSG="${WEB2_COUNTER_MSG:- this is the message}"

exec "$@"

-----------------------------
ENTRYPOINT lets us run custom scripts before the main command (a run-time example follows the list below).
Eg. where it can be used:
1. db config
2. nginx config
3. etc...
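
For example, assuming the entrypoint above has been baked into the web2 image, the WEB2_COUNTER_MSG default can be overridden at run time:
docker container run --rm -e WEB2_COUNTER_MSG="custom message" web2 env // env prints the variables the entrypoint exported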

Docker Clean up:
docker container ls // find out what containers are running
docker container ls -a // list all containers, including stopped ones
docker system df // show disk space used by images, containers and volumes
//<none> are dangling images
docker system df -v // verbose command
docker system info // info about docker installation.. Useful for bug reports
docker system prune // docker cleanup command
docker system prune -f // run w/o prompt for confirmation
docker system prune -a // also remove all unused images, not just dangling ones
docker container stop container1 container2 // stop multiple containers
docker container stop $(docker container ls -a -q) //stop all running containers.
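
Similarly, all stopped containers can be removed in one go:
docker container rm $(docker container ls -a -q) // remove all containers (running ones must be stopped first)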

Docker Compose:
docker-compose.yml // Docker Compose is a tool to run multiple commands and manage multiple containers

Sample docker compose:
----------------------
version: '3'

services:
  redis:
    image: 'redis:3.2-alpine'
    ports:
      - '5001:5001'
    volumes:
      - 'redis:/data'

  web:
    build: '.'
    depends_on:
      - 'redis'
    # environment:
    #   KEY: 'value'
    env_file:
      - '.env'
    #  - '.env.production'
    # image: 'bipinjoshi/web:1.0'  # tag the built image with a version
    ports:
      - '5000:5000'
    volumes:
      - '.:/app'

volumes:
  redis: {}
---------------------
.env file:
COMPOSE_PROJECT_NAME=web2

PYTHONUNBUFFERED=true
FLASK_APP=app.py
FLASK_DEBUG=1
---------------------

docker-compose --help // compose help command
docker-compose build // build from the docker-compose.yml file
docker image ls // list down docker images
docker-compose pull // pull down all the images required for compose
docker-compose up // start the containers
docker-compose stop // stop all running containers
docker-compose up --build -d // build and run the compose file in the background
docker-compose ps // list out docker containers
docker-compose logs -f // tail the logs using compose... shows log output for all the containers
docker-compose restart // restart containers
docker-compose exec web ls -la // execute a command on a container
docker-compose exec web sh // open a shell in a container
docker-compose rm // remove stopped containers, with a prompt for confirmation
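
Two more standard compose commands worth noting:
docker-compose config // validate the compose file and print the resolved configuration (env vars substituted)
docker-compose down // stop and remove the containers and networks created by up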


Microservices with Docker - Refer docker example // Really Good

Tips of Dockerizing:
1. Don't write logs to files inside the container; log to STDOUT so logs are managed at the Docker host level.
2. Use ENV variables (separate environment == separate env file; sketch after this list)
3. Keep your apps stateless. Store fast-moving data in tools like Redis or on the client side.
4. Follow 12 factor apps https://12factor.net/
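
For point 2, a sketch of separate env files per environment (file names are assumptions):
docker container run --env-file .env.development web1
docker container run --env-file .env.production web1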
