My Workplay on Docker
-
docker run
:- runs a command in a new container. docker run = docker create + docker start
-
docker run -p <localhostport>:<containerport> <imagename/id>
:- maps a port on the local machine to a port inside the container -
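For example, a minimal sketch assuming we want to try the official nginx image (the choice of image and local port are just for illustration):

# map local port 8080 to port 80 inside the container
docker run -p 8080:80 nginx
# the site is then reachable at http://localhost:8080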
docker ps
:- to list all the running containers -
docker ps --all
:- lists all the containers ever created, including stopped ones -
docker system prune
:- deletes all stopped containers, along with unused networks, dangling images, and the build cache -
docker logs <container-id>
:- to get the logs -
docker start
:- start stopped container -
docker stop
:- stops the container gracefully - the main process gets a SIGTERM (terminate) signal -
docker kill
:- stops the container instantly - the main process gets a SIGKILL signal -
docker exec -it <container id> <command>
:- executes an additional command inside a running container. -it lets us provide input to the command; -it is equivalent to -i -t
-
docker exec -it <container id> sh
:- provides access to the terminal inside the context of the container -
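For example, assuming a running container whose ID starts with abc123 (a made-up ID):

# open a shell inside the running container
docker exec -it abc123 sh
# or run a one-off command instead, e.g. the redis CLI if it is a redis container
docker exec -it abc123 redis-cli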
docker build .
:- builds an image from a Dockerfile in the current directory -
docker-compose up
:- starts every container defined in docker-compose.yml and aggregates the output of each container. Similar to docker run myimage -
docker-compose up --build
:- similar to docker build plus docker run. Rebuilds the images before starting the containers - use this after making any changes to the files -
docker-compose up -d
:- starts the containers in the background and leaves them running -
docker-compose down
:- stops and removes all the running containers at the same time -
docker-compose ps
:- shows the status of the containers -
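A minimal docker-compose.yml sketch that these commands could act on (the service names, build context and ports are assumptions for illustration):

# docker-compose.yml (illustrative sketch)
version: "3"
services:
  web:
    build: .              # build the image from the Dockerfile in this directory
    ports:
      - "8080:8080"       # <localhostport>:<containerport>
  redis-server:
    image: redis          # pull a prebuilt image from Docker Hub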
docker commit
:- manual image generation (creates an image from a running container's changes) -
docker build -f <filename> .
:- builds from a Dockerfile with a different name -
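For example, assuming a development Dockerfile named Dockerfile.dev (the file name is an assumption):

# build using Dockerfile.dev instead of the default Dockerfile
docker build -f Dockerfile.dev .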
docker pull
:- pulls an image from a registry -
docker push
:- pushes an image to a registry -
docker search
:- searches for an image on Docker Hub -
docker history
:- shows the history of the image -
docker info
:- shows system wide information -
docker rm
:- removes one or more containers -
docker rmi
:- removes one or more images -
docker pause
:- pauses all processes within one or more containers -
docker unpause
:- unpauses all processes within one or more containers
-
What is Kubernetes - System for running many different containers over multiple different machines
-
Why Kubernetes - When you need to run many different containers with different images
-
Install kubectl - CLI to interact with the master
-
Install a VM driver (VirtualBox) - makes a VM which will be our single node
-
Install minikube - runs a single node on that VM
-
kubectl
:- used for managing containers in the node -
minikube
:- used for managing the VM itself (local only) -
minikube start
:- starts minikube (boots the VM and the single-node cluster)
Docker Compose | Kubernetes |
---|---|
Each entry can optionally get Docker Compose to build an image | Kubernetes expects all images to already be built |
Each entry represents a container we want to create | One config file per object we want to create |
Each entry defines the network requirements (ports) | We have to manually set up all networking |
Get a simple container running on our local K8s cluster
-
Make sure our image is hosted on docker hub
-
Make one config file to create the container
-
Make one config file to set up networking
Config files are used to create Objects, for example StatefulSet, ReplicationController, Pod, Service. Each Object serves a different functionality, like running a container, monitoring a container, setting up networking, etc.
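A minimal Pod config sketch, assuming an image called imageName already pushed to Docker Hub (the object name, label and port are assumptions for illustration):

# client-pod.yaml (illustrative sketch)
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    component: web
spec:
  containers:
    - name: client
      image: imageName        # image must already be built and hosted
      ports:
        - containerPort: 3000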
-
Pods :- Runs one or more closely related containers
-
Services :- Sets up networking in a K8s cluster. There are 4 subtypes
-
- ClusterIp
-
- NodePort : Exposes a container to the outside world and it's only good for dev purposes
-
- LoadBalancer
-
- Ingress
-
kubectl apply -f <fileNamePath>
:- feed a config file to kubectl -
kubectl get pods
:- prints the status of all running pods -
kubectl get services
:- prints the status of all running services -
minikube ip
:- to get the ip of the VM
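Putting these together, a typical local workflow might look like this (the file names are assumptions):

kubectl apply -f client-pod.yaml          # feed the pod config to the cluster
kubectl apply -f client-node-port.yaml    # feed the service config to the cluster
kubectl get pods                          # check pod status
kubectl get services                      # check service status
minikube ip                               # then visit http://<that ip>:<nodePort> in the browser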
Important Takeaways |
---|
Kubernetes is a system to deploy containerized apps |
Nodes are individual machines or VMs that run containers |
Masters are machines or VMs with a set of programs to manage nodes |
Kubernetes didn't build our images - it got them from somewhere else |
Kubernetes (the master) decided where to run each container - each node can run a dissimilar set of containers |
To deploy something, we update the desired state of the master with a config file |
The master works constantly to meet your desired state |
kubectl describe <ObjectType> <ObjectName>
:- prints detailed information (status, events, etc.) about the specified object type and name
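For example, to inspect a single pod (the pod name is an assumption):

kubectl describe pod client-pod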
Limitations on Updating Config file |
---|
Can only change spec.containers[*].image |
Can only change spec.initContainers[*].image |
Can only change spec.activeDeadlineSeconds |
Can only change spec.tolerations |
- The solution to this is to use the Deployment object type
Pods | Deployment |
---|---|
Runs a single set of closely related containers | Runs a set of identical pods (one or more) |
Good for one-off dev purposes | Monitors the state of each pod, updating as necessary |
Rarely used directly in production | Good for dev and production |
kubectl delete -f <configfile>
:- remove an object (like an imperative update)
kubectl get deployments
:- prints the status of the deployment
-
Manually delete pods to get the deployment to recreate them with the latest version, but deleting pods manually seems silly (bad idea)
-
Tag images with a real version number and specify it in the config file, but this adds an extra step to the production deployment process (not friendly)
-
Use an imperative command to update the image version the deployment must use - also not the best solution, we could say
-
kubectl set image <objectType>/<objectName> <containerName>=<newImageToUse>
:- Imperative command to update the image
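For example, assuming a deployment named server-deployment whose container is named server, pointed at a newly pushed tag (all names and the tag are assumptions):

# imperatively point the deployment's "server" container at the new image tag
kubectl set image deployment/server-deployment server=dockerid/server:v2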
eval $(minikube docker-env)
- points your shell's Docker CLI at minikube's Docker daemon; this only configures your current terminal window, i.e. it is not permanent, and you have to rerun the same command every time you close the terminal window
minikube docker-env
# produces the following output:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://ip:2376"
export DOCKER_CERT_PATH="path"
export MINIKUBE_ACTIVE_DOCKERD="minikube"
# To point your shell to minikube's docker-daemon, run:
# eval $(minikube -p minikube docker-env)
- Use all the same debugging techniques we learned with the Docker CLI (many of these are available through kubectl)
For example kubectl logs to view logs, or kubectl exec -it to open a shell - concrete commands are sketched below
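A quick sketch of those kubectl debugging commands (the pod name client-pod is an assumption):

kubectl logs client-pod                # view the logs of the container in the pod
kubectl exec -it client-pod -- sh      # open a shell inside the pod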
-
Manually kill containers to test Kubernetes' self-healing
-
Delete cached images in the node (docker system prune -a)
-
Create config file for each service and deployment
-
Test locally on minikube
-
Set up a GitHub/Travis flow to build images and deploy
-
Deploy app to cloud provider
-
ClusterIP exposes a set of pods to other objects inside the cluster, but not to the outside world
-
NodePort exposes a set of pods to the outside world and is only good for dev purposes.
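A NodePort Service sketch for dev use, assuming pods labeled component: web (the names and ports are assumptions; nodePort must fall in the 30000-32767 range):

# client-node-port.yaml (illustrative sketch)
apiVersion: v1
kind: Service
metadata:
  name: client-node-port
spec:
  type: NodePort
  selector:
    component: web        # send traffic to pods carrying this label
  ports:
    - port: 3050          # port other objects inside the cluster use
      targetPort: 3000    # port the target container listens on
      nodePort: 31515     # port exposed to the outside world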
Here we need to use --- (3 consecutive dashes) to separate multiple objects in a single config file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: server
  template:
    metadata:
      labels:
        component: server
    spec:
      containers:
        - name: server
          image: imageName
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: server-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: server
  ports:
    - port: 5000
      targetPort: 5000