
Dipesh Majumdar

Blog and Paintings

Difference between service port and container port

In the commands below, the service port is 6060 (the --port flag on expose) and the container port is 8080 (the --target-port flag) -

k run nginx --image nginx:1.7.7 --port 8080

k expose deploy nginx --name nginx-svc --port 6060 --target-port 8080
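The same mapping written out as a Service manifest - a sketch of roughly what k expose generates (the selector depends on the labels kubectl put on the deployment, so treat it as illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    run: nginx
  ports:
  - port: 6060        # service port - what clients of the Service connect to
    targetPort: 8080  # container port - where traffic is forwarded inside the pod
```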

 

 

To those who seek to be noticed...

Be so good in your craft that people are bound to notice you.  - Dipesh Majumdar

vi editor tips and tricks

  1. go to the top of the screen
  2. go to the middle of the screen
  3. go to the end of the file
  4. start inserting just above the cursor line (better than i)
  5. start inserting below the cursor line... though i rarely need this because i like to insert from above the cursor line
  6. go to the 9th line of job.yaml
  7. find all occurrences of the string job in job.yaml
  8. traverse word by word - forward and then backward
  9. traverse left and right
  10. traverse up and down
  11. delete to the end of a word; but for "word:" the : won't be deleted, so now delete the complete word
  12. delete a group of characters up to the next space
  13. jump forward to a particular character, say t
  14. jump backward to a particular character, say t
  15. go to the beginning of the line and then to the end of the line

answers: 

  1. H (gg goes to the top of the file)
  2. M
  3. G (L only goes to the bottom of the screen)
  4. O
  5. o
  6. vi job.yaml +9
  7. grep -in job job.yaml
  8. w for forward and b for backward
  9. h and l
  10. k and j
  11. dw for the word and dW for the complete WORD (dW also takes the :)
  12. as you all know, dd deletes a line and x deletes a character... but have you ever thought about how to delete a group of characters up to a particular character? The answer is: dt<that_character>... so it can be dt<SPACE> if you want to delete up to the next space.
  13. ft
  14. Ft (capital F searches backward)
  15. 0 and $
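Note that answer 7 is a shell command, not a vi command (inside vi you would use /job and then n, or :g/job/p). A quick sketch with a throwaway file:

```shell
# create a small file, then find all occurrences of "job",
# case-insensitive (-i), with line numbers (-n)
cat > /tmp/job.yaml <<'EOF'
kind: Job
metadata:
  name: job
EOF
grep -in job /tmp/job.yaml
# prints:
# 1:kind: Job
# 3:  name: job
```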

 

links and references:

http://www.lagmonster.org/docs/vi.html

https://www.thomas-krenn.com/en/wiki/Vi_editor_tips_and_tricks

https://www.tecmint.com/how-to-use-vi-and-vim-editor-in-linux/

https://openvim.com/

skeleton ingress to feed the nginx ingress controller

The way it works is:

The nginx ingress controller has to be deployed in a namespace reserved exclusively for nginx-ingress related resources.

That namespace will have the nginx-ingress-controller pod (or more replicas) and a matching service.

Now individual ingresses for your workloads need to be created; these are then consumed by the nginx-ingress-controller pod.

To create individual ingress resources, one must first know how to create a skeleton ingress (the bare minimum that must go into an ingress). So here is the syntax of a skeleton ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dipz22-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  labels:
    app: ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: dipz22-service
          servicePort: 80
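Once applied, a quick sanity check (resource names are from the example above; these need a live cluster, so this is just the shape of the commands):

```shell
k create -f skeleton-ingress.yaml
k get ingress dipz22-ingress        # shows hosts, address and ports
k describe ingress dipz22-ingress   # shows the backend service and path rules
```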

Enabling K8s network policy during Cluster creation

You need to enable this feature while creating the k8s cluster.
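For example on GKE (which is what the node names later in this post suggest), the flag is passed at creation time; the cluster name here is illustrative:

```shell
gcloud container clusters create my-cluster --enable-network-policy
```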

Common Mistakes in declarative yaml manifests and imperative commands

It's a common mistake - you need to know YAML properly. Here I first did this:

    readinessProbe:
     httpGet:
     - port: 80
       path: /

and got this error - 

 got "array", expected "map";

The correct way is this

    readinessProbe:
     httpGet:
       port: 80
       path: /
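For context, the probe sits under a container entry in the pod spec; a minimal sketch (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    readinessProbe:
      httpGet:
        port: 80
        path: /
```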

******************************************************

got map expected array - 

    volumeMounts:
      name: xyz
      mountPath: /etc/foo

should be - 

    volumeMounts:
      - name: xyz
        mountPath: /etc/foo

**********************************

 unknown field "name"

spec:
  volumes:
     name: xyz
     secret:
       name: mysecret3

should be - 

spec:
  volumes:
   - name: xyz
     secret:
       secretName: mysecret3
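A quick way to catch these map-vs-array mistakes before they hit the API server is kubectl's client-side dry run (filename illustrative):

```shell
k create -f pod.yaml --dry-run -o yaml   # validates against the schema without creating anything
```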

 

*************************************

Attaching a service account to a pod. One tends to do it this way -

  serviceAccount:
    name: myuser

Correct way is this:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  serviceAccountName: myuser
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

***************************************

While creating a secret from a file, name the file after the key (password or username) and put the value inside the file. Only one value per file - remember... otherwise it's better to create the secret from literals.

[dipesh.majumdar@demo ~]$ echo pass >password
[dipesh.majumdar@demo ~]$ k create secret generic mysecret2 --from-file=password
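Under the hood the secret just base64-encodes the file contents. One gotcha: echo pass > password writes a trailing newline into the file, so the stored value is "pass\n"; use printf (or echo -n) if you want the bare value. A quick local sketch of the encoding:

```shell
# base64 is all a secret's data field contains
printf 'pass' | base64        # cGFzcw==
printf 'cGFzcw==' | base64 -d # pass
```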

***********************************************************************

This is not the way to create a configmap:

[dipesh.majumdar@demo ~]$ k -n temp create cm cm1 --from-literal=db=mysqldipz22,user=dipz22,password=password123
configmap/cm1 created
[dipesh.majumdar@demo ~]$ k -n temp get cm cm1 -o yaml
apiVersion: v1
data:
  db: mysqldipz22,user=dipz22,password=password123
kind: ConfigMap
metadata:
  creationTimestamp: "2019-04-15T18:54:54Z"
  name: cm1
  namespace: temp
  resourceVersion: "1948893"
  selfLink: /api/v1/namespaces/temp/configmaps/cm1
  uid: f4d36380-5faf-11e9-9e93-42010a8400d9
[dipesh.majumdar@demo ~]$

THIS IS THE CORRECT WAY:

[dipesh.majumdar@demo ~]$ k -n temp create cm cm1 --from-literal=db=mysqldipz22 --from-literal=user=dipz22 --from-literal=password=password123
configmap/cm1 created
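Once created correctly, the three keys can be pulled into a pod as environment variables; a sketch using envFrom (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: temp
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: cm1   # exposes db, user and password as env vars
```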

 

Sitting on a Jumpbox and testing with network policy

You want to sit on a jumpbox so you can wget to some clusterip:port, for testing only....

Well, if you create a busybox pod, it's going to complete immediately, because the entrypoint of its container image is only 'sh'.

So you would have to run it with something like 'sleep 3600'.

But you don't want to do that, so you can just do this -

[dipesh.majumdar@demo ~]$ k run busybox --restart=Never --image=busybox -it
If you don't see a command prompt, try pressing enter.
/ # pwd
/
/ #

/ # wget -O- 10.112.11.76:7777
Connecting to 10.112.11.76:7777 (10.112.11.76:7777)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
-                    100% |*********************************************************************************************************************************************|   612  0:00:00 ETA
/ #

open another terminal and check the busybox pod - see how it's Running and not Completed.

[dipesh.majumdar@demo ~]$ k get po
NAME                    READY   STATUS    RESTARTS   AGE
busybox                 1/1     Running   0          30s
nginx-966857787-2h9fb   1/1     Running   0          5m
nginx-966857787-vwtwt   1/1     Running   0          5m
[dipesh.majumdar@demo ~]$

the moment you come out of the busybox shell, the pod status becomes Completed

NAME                    READY   STATUS      RESTARTS   AGE
busybox                 0/1     Completed   0          8m
nginx-966857787-2h9fb   1/1     Running     0          13m
nginx-966857787-vwtwt   1/1     Running     0          13m

But what if you want it deleted the moment you come out of the busybox pod? This can be done in 2 ways (either way is good):

[dipesh.majumdar@demo ~]$ k run busybox --restart=Never --image=busybox -it --rm
If you don't see a command prompt, try pressing enter.
/ # exit
pod "busybox" deleted
[dipesh.majumdar@demo ~]$ k run busybox --restart=Never --image=busybox -it --rm -- sh
If you don't see a command prompt, try pressing enter.
/ #

Use cases of the jumpbox are shown below -

Create a dummy api server

kubectl run apiserver --restart=Never --image=nginx --labels app=web,role=api --expose --port 80

Create a network policy to bind the api server so access is only possible from pods with the label tag=devops-team

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: devops-np
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
    - from:
        - podSelector:           # chooses pods with tag devops-team
           matchLabels:
             tag: devops-team

 

k run busybox$RANDOM --restart=Never --image=busybox -it --rm -- sh

[dipesh.majumdar@demo ~]$ k run busybox$RANDOM --restart=Never --image=busybox -it --rm -- sh
If you don't see a command prompt, try pressing enter.
/ # wget -qO- --timeout=2 http://apiserver
wget: download timed out

[dipesh.majumdar@demo ~]$ k run busybox$RANDOM --restart=Never --image=busybox --labels 'tag=devops-team' -it --rm -- sh
If you don't see a command prompt, try pressing enter.
/ # wget -qO- --timeout=2 http://apiserver
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ #

One more time, this time very fast -

Quick creation of the dummy apiserver was shown above (it created a pod and a service).

Let's do a dummy web... this time it creates a pod, a matching deployment and a matching service.

k -n temp run web --image nginx:1.7.9 --expose --labels 'app=web' --port 80

now a simple jumpbox...

k -n temp run jumpbox --restart Never --image busybox --labels 'app=jumpbox' --rm -it

### this is the same as

k -n temp run jumpbox --restart Never --image busybox --labels 'app=jumpbox' --rm -it sh

/ # wget --timeout 2 -O- web:80
Connecting to web:80 (10.112.12.179:80)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>

Want to run a command without getting into the pod? Instead of sh, run the command - in this case, whoami:

[dipesh.majumdar@demo ~]$ k -n temp run jumpbox --restart Never --image busybox --labels 'app=jumpbox' --rm -it whoami
root
pod "jumpbox" deleted

 

 

 

Little more magic...

ready for it?

do this...

[dipesh.majumdar@demo ~]$ k -n temp run jumpbox --image alpine --restart Never --labels 'app=jumpbox' --rm -it
If you don't see a command prompt, try pressing enter.
/ # apk add --no-cache curl openssl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
(1/6) Installing ca-certificates (20190108-r0)
(2/6) Installing nghttp2-libs (1.35.1-r0)
(3/6) Installing libssh2 (1.8.2-r0)
(4/6) Installing libcurl (7.64.0-r1)
(5/6) Installing curl (7.64.0-r1)
(6/6) Installing openssl (1.1.1b-r1)
Executing busybox-1.29.3-r10.trigger
Executing ca-certificates-20190108-r0.trigger
OK: 8 MiB in 20 packages
/ # curl -v -I -o /dev/null http://web:80
* Expire in 0 ms for 6 (transfer 0x560fa43a77a0)
* Expire in 1 ms for 1 (transfer 0x560fa43a77a0)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* Expire in 0 ms for 1 (transfer 0x560fa43a77a0)
* Expire in 2 ms for 1 (transfer 0x560fa43a77a0)
* Expire in 0 ms for 1 (transfer 0x560fa43a77a0)
* Expire in 0 ms for 1 (transfer 0x560fa43a77a0)
* Expire in 2 ms for 1 (transfer 0x560fa43a77a0)
* Expire in 0 ms for 1 (transfer 0x560fa43a77a0)
* Expire in 0 ms for 1 (transfer 0x560fa43a77a0)
* Expire in 2 ms for 1 (transfer 0x560fa43a77a0)
* Expire in 1 ms for 1 (transfer 0x560fa43a77a0)
* Expire in 1 ms for 1 (transfer 0x560fa43a77a0)
* Expire in 2 ms for 1 (transfer 0x560fa43a77a0)
* Expire in 1 ms for 1 (transfer 0x560fa43a77a0)
* Expire in 1 ms for 1 (transfer 0x560fa43a77a0)
* Expire in 1 ms for 1 (transfer 0x560fa43a77a0)
*   Trying 10.112.14.49...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x560fa43a77a0)
* Connected to web (10.112.14.49) port 80 (#0)
> HEAD / HTTP/1.1
> Host: web
> User-Agent: curl/7.64.0
> Accept: */*

The docker equivalent of the k8s alpine jumpbox is -

[dipesh.majumdar@demo ~]$ docker run --rm -it alpine
/ # apk add --no-cache curl openssl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
(1/6) Installing ca-certificates (20190108-r0)
(2/6) Installing nghttp2-libs (1.35.1-r0)
(3/6) Installing libssh2 (1.8.2-r0)
(4/6) Installing libcurl (7.64.0-r1)
(5/6) Installing curl (7.64.0-r1)
(6/6) Installing openssl (1.1.1b-r1)
Executing busybox-1.29.3-r10.trigger
Executing ca-certificates-20190108-r0.trigger
OK: 8 MiB in 20 packages
/ # curl -v -I -o /dev/null http://web:80
* Expire in 0 ms for 6 (transfer 0x55e43644d7a0)
* Expire in 1 ms for 1 (transfer 0x55e43644d7a0)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* Expire in 0 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 1 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 0 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 0 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 1 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 0 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 0 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 1 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 0 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 0 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 1 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 2 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 2 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 2 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 2 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 2 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 2 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 3 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 3 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 4 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 3 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 3 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 4 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 4 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 4 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 8 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 6 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 6 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 8 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 8 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 8 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 16 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 11 ms for 1 (transfer 0x55e43644d7a0)
* Expire in 11 ms for 1 (transfer 0x55e43644d7a0)
* Could not resolve host: web
* Expire in 14 ms for 1 (transfer 0x55e43644d7a0)
* Closing connection 0
curl: (6) Could not resolve host: web   

##### Obviously it can't resolve the host, because this runs in plain Docker and the host is a Service inside the k8s cluster.

The alpine container disappears as soon as we come out of its shell!

[dipesh.majumdar@demo ~]$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                  PORTS               NAMES
de27d1c58963        alpine              "/bin/sh"                21 seconds ago      Up 20 seconds                               nostalgic_lamport
8d7696365904        mysql               "docker-entrypoint..."   7 days ago          Exited (1) 7 days ago                       distracted_shockley
c667d456911e        busybox             "sh"                     7 days ago          Exited (0) 7 days ago                       nostalgic_shaw
[dipesh.majumdar@demo ~]$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                  PORTS               NAMES
8d7696365904        mysql               "docker-entrypoint..."   7 days ago          Exited (1) 7 days ago                       distracted_shockley
c667d456911e        busybox             "sh"                     7 days ago          Exited (0) 7 days ago                       nostalgic_shaw
[dipesh.majumdar@demo ~]$

However, if you want the docker container to stick around, run it like this:

[dipesh.majumdar@demo ~]$ docker run --rm -dit alpine
6a3c0f71d34fea50f2d6a66b7370986625d6983b6c991f7db7a25f87632c4223

and then enter the shell like this... 

[dipesh.majumdar@demo ~]$ docker exec -it 6a3c0f71d34f sh
/ # whoami
root
/ # exit

[dipesh.majumdar@demo ~]$

even after you exit the container will be up and running

Want to execute a command without entering the container? Do it this way:

[dipesh.majumdar@demo ~]$ docker exec -it 6a3c0f71d34f whoami
root
[dipesh.majumdar@demo ~]$ docker exec -it 6a3c0f71d34f pwd
/

 

 

 

Creating a pod and a matching service imperatively

[dipesh.majumdar@demo ~]$ k run nginx --restart=Never --image=nginx --dry-run --port=80  --expose -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

Capture it in ps.yaml by appending > ps.yaml to the command above, and then....

[dipesh.majumdar@demo ~]$ vi ps.yaml
[dipesh.majumdar@demo ~]$ k create -f ps.yaml
service/nginx created
pod/nginx created
[dipesh.majumdar@demo ~]$

so the service created is this -

[dipesh.majumdar@demo ~]$ k get svc nginx
NAME    TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.112.2.47   <none>        80/TCP    12m

 

this cluster IP seems useless to me, as I can't connect to it from my local machine.

Hey, wait!

But the cluster IP can be reached from another pod.

so create a busybox pod and run the below from inside the pod....

wget -O- 10.112.2.47:80
Connecting to 10.112.2.47:80 (10.112.2.47:80)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }

 

Now it's time to expose a deployment. First create the deployment -

[dipesh.majumdar@demo ~]$ cat d.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: simpleapp
  name: simpleapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: foo
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: foo
    spec:
      containers:
      - image: dgkanatsios/simpleapp
        name: simpleapp
        ports:
          - containerPort: 8080
        resources: {}
status: {}

 

k expose deploy simpleapp --port 6666 --target-port 8080

So the svc simpleapp can connect to 3 pod ip:port pairs (these are the endpoints); see them with a k describe svc simpleapp command.

[dipesh.majumdar@demo ~]$ k get svc simpleapp
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
simpleapp   ClusterIP   10.112.14.183   <none>        6666/TCP   5m

[dipesh.majumdar@demo ~]$ k exec -it busybox -- wget -O- 10.112.14.183:6666
Connecting to 10.112.14.183:6666 (10.112.14.183:6666)
Hello world from simpleapp-5b6f9c5676-j8szn and version 2.0
-                    100% |*********************************************************************************************************************************************|    60  0:00:00 ETA

 

you can definitely connect to a pod IP from the busybox jumpbox, as shown below -

 

[dipesh.majumdar@demo ~]$ k get po -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE                                                NOMINATED NODE
busybox                      1/1     Running   0          12m   10.48.0.32   gke-standard-cluster-1-default-pool-c7a16408-wk8x   <none>
simpleapp-5b6f9c5676-bxc5w   1/1     Running   0          20m   10.48.0.30   gke-standard-cluster-1-default-pool-c7a16408-wk8x   <none>
simpleapp-5b6f9c5676-j8szn   1/1     Running   0          20m   10.48.0.31   gke-standard-cluster-1-default-pool-c7a16408-wk8x   <none>
simpleapp-5b6f9c5676-tg9kv   1/1     Running   0          20m   10.48.1.63   gke-standard-cluster-1-default-pool-c7a16408-1bnp   <none>
[dipesh.majumdar@demo ~]$ k exec -it busybox -- wget -O- 10.48.0.30:8080
Connecting to 10.48.0.30:8080 (10.48.0.30:8080)
Hello world from simpleapp-5b6f9c5676-bxc5w and version 2.0
-                    100% |********************************|    60  0:00:00 ETA
[dipesh.majumdar@demo ~]$

 

Volumes, Storageclasses, pvc

First of all, if you are creating a hostPath volume of type Directory within a pod and it looks like this...

volumes:
  - name: test-volume
    hostPath:
      path: /etc/some/path  #MIGHT NOT WORK
      type: Directory

it might not work, because the path /etc/some/path might not be present in your node's file system.

volumes:
  - name: test-volume
    hostPath:
      path: /sys  #WORKED FOR ME
      type: Directory

so I used one like /sys, which is likely to be present in the node file system, and only then did it start to work. Whether this is something you should do is another debate. This blog is not responsible for anything that breaks in production systems. So adopt caution!!!

Now coming to the second point ->

After a lot of hard work I managed to create a volume from a YAML file. I was a bit surprised to find that there is no way to create a PV imperatively - no k create pv ...blah blah... so here is the yaml manifest of the pv:

[dipesh.majumdar@demo ~]$ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myvolume
spec:
 storageClassName: normal
 hostPath:
  path: /etc/foo
 accessModes:
  - ReadWriteOnce
  - ReadWriteMany
 capacity:
  storage: "10Gi"

But then I got intrigued by the storageClassName, because I knew I had my storageClass somewhere in the cluster - and I soon discovered I do have one, it is called standard, and it DOES NOT HAVE A NAMESPACE.

HOLY COW

STORAGECLASS IS NAMESPACE-FREE - imprint that in memory, my dear readers!!!

anyway we will revisit that later....

so I create my pv.yaml

[dipesh.majumdar@demo ~]$ k create -f pv.yaml
persistentvolume/myvolume created

[dipesh.majumdar@demo ~]$ k get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
myvolume   10Gi       RWO,RWX        Retain           Available           normal                  1m

Looks good so far...

I dig deeper... and I find that this storageClassName is not the storageClass I was talking about in the beginning; it just specifies a class that can be used for association with a pvc, for example...

it is like a label

[dipesh.majumdar@demo ~]$ k get sc normal
Error from server (NotFound): storageclasses.storage.k8s.io "normal" not found
[dipesh.majumdar@demo ~]$ k get sc
NAME                 PROVISIONER            AGE
standard (default)   kubernetes.io/gce-pd   3d

Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.

[dipesh.majumdar@demo ~]$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: mypvc
spec:
 storageClassName: normal
 accessModes:
  - ReadWriteOnce
 resources:
  requests:
    storage: 4Gi

 

[dipesh.majumdar@demo ~]$ k get pvc
NAME    STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    myvolume   10Gi       RWO,RWX        normal         1m
[dipesh.majumdar@demo ~]$

ok, so now it's time to create a pod, associate the volume just created with it, and make the volume refer to the pvc (it could also refer to a configMap or secret).

$ k run busybox --restart=Never --image=busybox --dry-run -o yaml --  sleep 3600 >p.yaml

Edit the p.yaml to add the volumes and volumeMounts sections shown below ->

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  volumes:
   - name: myvolume #this volume can be named xyz and is not the pv we created earlier
     persistentVolumeClaim:
       claimName: mypvc
  containers:
  - args:
    - sleep
    - "3600"
    image: busybox
    name: busybox
    volumeMounts:
     - mountPath: /etc/foo #this path need not match the pv hostPath we created earlier
       name: myvolume #must match the volume name declared above; it is not the pv name
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

 

Makes a lot of sense now. 

Let me enter something into this directory now by getting inside the pod - >

[dipesh.majumdar@demo ~]$ k exec -it busybox -- sh
/ # pwd
/
/ # cd /etc/foo
/etc/foo # ls
/etc/foo # date > test.txt
/etc/foo # cat test.txt
Sun Apr  7 09:55:18 UTC 2019
/etc/foo # pwd
/etc/foo

 

I will copy the /etc/foo/test.txt from the pod to my local machine

[dipesh.majumdar@demo ~]$ k cp mns/busybox:etc/foo/test.txt /home/dipesh.majumdar
[dipesh.majumdar@demo ~]$ ls test.txt
test.txt
[dipesh.majumdar@demo ~]$ cat test.txt
Sun Apr  7 09:55:18 UTC 2019

Copying from local to pod

[dipesh.majumdar@demo ~]$ date >test2.txt
[dipesh.majumdar@demo ~]$ k cp test2.txt mns/busybox:etc/foo/
[dipesh.majumdar@demo ~]$ k exec -it busybox -- ls -ltra /etc/foo/
total 12
drwxr-xr-x    1 root     root          4096 Apr  7 09:53 ..
-rw-r--r--    1 root     root            29 Apr  7 09:55 test.txt
-rw-rw-r--    1 1000     1001            29 Apr  7 10:05 test2.txt
drwxr-xr-x    2 root     root            80 Apr  7 10:05 .
[dipesh.majumdar@demo ~]$

 

fire up your memory -

spec of pv has 4 items - storageClassName, accessModes, capacity.storage, hostPath.path

spec of pvc has 3 items - storageClassName, accessModes, resources.requests.storage

notes - 

when the pv is just created, its status is Available. When it is attached to a pvc it becomes Bound. Deleting the pvc makes the status of the pv Released

[dipesh.majumdar@demo ~]$ k delete pvc mypvc
persistentvolumeclaim "mypvc" deleted
[dipesh.majumdar@demo ~]$ k get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM       STORAGECLASS   REASON   AGE
myvolume   10Gi       RWO,RWX        Retain           Released   mns/mypvc   normal                  1h
[dipesh.majumdar@demo ~]$

the volume name inside the pod is not the persistent volume name but something pod specific.... very important point

persistentVolumeReclaimPolicy can be:

Retain  #the pv is kept even after the pvc attached to it is deleted, until the pv is manually deleted

Delete  #deleted automatically, freeing up storage

Recycle #data in the volume is scrubbed before it is made available to other pvcs
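The policy can be changed on an existing pv with a merge patch; a sketch, using the pv name from the example above:

```shell
k patch pv myvolume -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```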

Finding out Entrypoint, CMD, env of docker from the image

Million Dollar Question: you have an image, but you want to find out its entrypoint. How will you do that? Well, you need the Dockerfile to know the entrypoint, right? But if you don't have it, how will you find out this information - env, entrypoint, cmd?

[dipesh.majumdar@demo ~]$ docker inspect busybox |grep -A100 -i \"Config\"
        "Config": {
            "Hostname": "",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "sh"
            ],
            "ArgsEscaped": true,
            "Image": "sha256:90b7037cc5e65fa9e3f33e8096febd6fad8af0ff94876d73dabe048d65bec645",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": null
        },
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 1199417,
        "VirtualSize": 1199417,
        "GraphDriver": {
            "Name": "overlay2",
            "Data": {
                "MergedDir": "/var/lib/docker/overlay2/c6026745fff5b1608f900c1415df5ec03cb3f97cdb54e831e636b35fa7f18548/merged",
                "UpperDir": "/var/lib/docker/overlay2/c6026745fff5b1608f900c1415df5ec03cb3f97cdb54e831e636b35fa7f18548/diff",
                "WorkDir": "/var/lib/docker/overlay2/c6026745fff5b1608f900c1415df5ec03cb3f97cdb54e831e636b35fa7f18548/work"
            }
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:0b97b1c81a3200e9eeb87f17a5d25a50791a16fa08fc41eb94ad15f26516ccea"
            ]
        }
    }
]
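You can also pull just those fields with docker's Go-template formatting instead of grepping (needs a docker daemon, so shown as a sketch):

```shell
docker inspect --format '{{.Config.Entrypoint}}' busybox
docker inspect --format '{{.Config.Cmd}}' busybox
docker inspect --format '{{.Config.Env}}' busybox
```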

 

Monitoring K8s

- metrics server (heapster has been deprecated): the metrics server receives metrics from nodes and stores them in memory; it does not write to disk, so historical data is not available. Each cluster has one metrics server. How are metrics collected on a node? The kubelet - the agent on each node that takes instructions from the kubernetes api server and runs pods - has a sub-component called cAdvisor (container advisor). cAdvisor collects metrics from the pods on the node and makes them available to the metrics server.
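With the metrics server running, its numbers are what kubectl top reads (cluster commands, shown as a sketch):

```shell
k top nodes   # CPU/memory per node
k top pods    # CPU/memory per pod; add -n <namespace> as needed
```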

- prometheus - this can be installed using helm. It gives a GUI which makes it easy to monitor the kubernetes cluster.

- datadog

- dynatrace

Service Account Mystery

[dipesh.majumdar@demo ~]$ k -n mns get all
No resources found.

Though the SA is present, get all still doesn't show it. kubectl -n <namespace> get all also doesn't show secrets and configmaps.

[dipesh.majumdar@demo ~]$ k -n mns get sa
NAME      SECRETS   AGE
default   1         1d

[dipesh.majumdar@demo ~]$ k -n mns get secret
NAME                  TYPE                                  DATA   AGE
default-token-s8d4x   kubernetes.io/service-account-token   3      1d
istio.default         istio.io/key-and-cert                 3      1d
istio.sa1             istio.io/key-and-cert                 3      1m

Now Let's Create a Service Account - 

[dipesh.majumdar@demo ~]$ k -n mns create sa sa1
serviceaccount/sa1 created
[dipesh.majumdar@demo ~]$ k -n mns get secrets
NAME                  TYPE                                  DATA   AGE
default-token-s8d4x   kubernetes.io/service-account-token   3      1d
istio.default         istio.io/key-and-cert                 3      1d
istio.sa1             istio.io/key-and-cert                 3      1m
sa1-token-22rr5       kubernetes.io/service-account-token   3      7s
[dipesh.majumdar@demo ~]$ k -n mns get secrets sa1-token-22rr5 -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURERENDQWZTZ0F3SUJBZ0lSQUlDWDRpQjJvb3l3VEhuSEl5MlIxekF3RFFZSktvWklodmNOQVFFTEJRQXcKTHpFdE1Dc0dBMVVFQXhNa056YzJaREJpWkRrdFlXTXhNUzAwTVdVeExXRmtOVFV0TmpFd05EZ3paVFZpTWpRegpNQjRYRFRFNU1EUXdOREEyTlRVd01Gb1hEVEkwTURRd01qQTNOVFV3TUZvd0x6RXRNQ3NHQTFVRUF4TWtOemMyClpEQmlaRGt0WVdNeE1TMDBNV1V4TFdGa05UVXROakV3TkRnelpUVmlNalF6TUlJQklqQU5CZ2txaGtpRzl3MEIKQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBbk92RW1HVWdzdkIyYjN6ZzRUUWllbk9kWnNLRE5aVkhOb3ptK1NtegpXOWxxWTd2NFpiVGphdEhsS1BDVjFnUEtyNmx5SHU1MUg1MUp1VGMrR0xZOWZtZUdyemFrdGh6Y3pYYys2MmVYClVlaHZJUjRkNG1ocmFRQnNzalMva3J6ZGtCWWtIZWswUVpnbWJ2Ri9oTC94MFN0dWZqN2dYVHpjUWMyRXZvZ3AKd1FaQzVla0wyUUdXZ3FHOTFkQTdDTUw4aDVjNGRuZ2svdDFCR0tuSW1nalc4b3lLb2xTRlpXN1RHMmM5b3dJLwpGSklXN25adWhiaXF3cnNqSXNBbnp1bnhaeG1ZR1E5L2JQYXJEcHYrRGxvTFFYcjRuV2pUSHRJR3JPbktXeHBzCnZ4S2pDUWdDY3pyL29JMWkzdG00QzVZQmMxT3FDelExZ1E2V1Q0Y0NRL2x0RlFJREFRQUJveU13SVRBT0JnTlYKSFE4QkFmOEVCQU1DQWdRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQQpWMnVzOTlNTko5dW1mMCtaUXQ4SDYvd3hnUTR1ZHVpODJEK0FuNEZTckdBWm9sN3Q2bkt2bU4yVjhZM2FkalVkCm1yUEVmSkZ4cHpzbFhhME03TVFNdzJ3YjZmemNEYXdlNUlpOTljZ2JuanFyNW1uWDBHek0vcmFSLzlsQ21CY3AKSEdUbmp3cElQQ25HbW54WjBVcmJMa2hwYjl2b2phNEhJRnF6WGpRVTlpQ1FGTkxEZFB5WHdnRHBsWHhlS21RNAoyWVlPdWc5QTJSLzVycDdiblJqQUtYd3pwV1JOTWhPQktENjNBcDZoK2loSXREK0xHN1R1U2ZTcUpjc0hBZnRZCitTUnlFMzVRTGxhL0hveE9EZ0dUcGZNZXQ3UXQzMXc2bS8zckx1bk83OEY2cnBJZ0VBbnRwbVZhcWE3YUd5ZUsKYXlWb1o5R2k5UmluNjZudUlYRjhEUT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  namespace: bW5z
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSnRibk1pTENKcmRXSmxjbTVsZEdWekxtbHZMM05sY25acFkyVmhZMk52ZFc1MEwzTmxZM0psZEM1dVlXMWxJam9pYzJFeExYUnZhMlZ1TFRJeWNuSTFJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpYSjJhV05sTFdGalkyOTFiblF1Ym1GdFpTSTZJbk5oTVNJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZ5ZG1salpTMWhZMk52ZFc1MExuVnBaQ0k2SWpJME1EZGtZak0wTFRVNE56SXRNVEZsT1MwNVpqSmpMVFF5TURFd1lXWXdNREV6TVNJc0luTjFZaUk2SW5ONWMzUmxiVHB6WlhKMmFXTmxZV05qYjNWdWREcHRibk02YzJFeEluMC41S1JHeFBTYXpwenBieDh3LVNsNEY3elVudUtLWlFZejhIZ2w0SXp5R0RWMjNQcmI1TkRnbVFDOUFTZTJlazRQUzJ5V0ptU0U1NWpLaFpYMVVIcVlwZWhsSm9sYlBBenVaTGtmXzRGa3RwNEJ6NTFCbWN2UUJQTzVGQUJsM2U3ZmtCNGNfb3Y2cmZQb3UzOVhaVEZqV2g5T1NOdkk3dEN5dVVUVGdYN2VQZkhiZi0tcXFXdzNhenlXZFdmZjVTYjR1bjN1aE9GQ2p5R1doc2dWRDBFOVZxSnlEZkJNRlNCdDA5bmgzTkp4bHpMTGF0WmZVZFR4MTNHaXk3MzNudEI0NWEwU0FLajdBci1BZVRzMEdFNVk3QUxsTE5NVEEzWDZVRmpxalZWZ0wzX01TV1RxdlpFOGVlZXBwOXhkZkZDQWxlRDJBOE9VWERmbXAxVUFaSGJjVGc=
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: sa1
    kubernetes.io/service-account.uid: 2407db34-5872-11e9-9f2c-42010af00131
  creationTimestamp: "2019-04-06T13:44:46Z"
  name: sa1-token-22rr5
  namespace: mns
  resourceVersion: "517958"
  selfLink: /api/v1/namespaces/mns/secrets/sa1-token-22rr5
  uid: 240ae6a4-5872-11e9-9f2c-42010af00131
type: kubernetes.io/service-account-token
[dipesh.majumdar@demo ~]$
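
The values under data: in a Secret are only base64-encoded, not encrypted. The short namespace field of the Secret above, for example, decodes straight back to the namespace name:

```shell
# Decode the base64-encoded "namespace" field of the Secret shown above
echo 'bW5z' | base64 -d; echo
# mns
```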

k -n mns run nginx --restart=Never --image=nginx --dry-run -o yaml > pod.yaml  # then added the serviceAccountName: sa1 line to the generated file

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  serviceAccountName: sa1
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

[dipesh.majumdar@demo ~]$ k -n mns get po nginx -o yaml --export |grep -A5 -i volume
Flag --export has been deprecated, This flag is deprecated and will be removed in future.
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: sa1-token-22rr5
      readOnly: true
  dnsPolicy: ClusterFirst
  nodeName: gke-standard-cluster-1-default-pool-c7a16408-wk8x
--
  volumes:
  - name: sa1-token-22rr5
    secret:
      defaultMode: 420
      secretName: sa1-token-22rr5
status:

[dipesh.majumdar@demo ~]$ k -n mns exec -it nginx -- ls -ltra /var/run/secrets/kubernetes.io/serviceaccount
total 4
lrwxrwxrwx 1 root root   12 Apr  6 14:05 token -> ..data/token
lrwxrwxrwx 1 root root   16 Apr  6 14:05 namespace -> ..data/namespace
lrwxrwxrwx 1 root root   13 Apr  6 14:05 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root   31 Apr  6 14:05 ..data -> ..2019_04_06_14_05_45.891537062
drwxr-xr-x 2 root root  100 Apr  6 14:05 ..2019_04_06_14_05_45.891537062
drwxrwxrwt 3 root root  140 Apr  6 14:05 .
drwxr-xr-x 3 root root 4096 Apr  6 14:05 ..

 

[dipesh.majumdar@demo ~]$ k -n mns exec -it nginx -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtbnMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoic2ExLXRva2VuLTIycnI1Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InNhMSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjI0MDdkYjM0LTU4NzItMTFlOS05ZjJjLTQyMDEwYWYwMDEzMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptbnM6c2ExIn0.5KRGxPSazpzpbx8w-Sl4F7zUnuKKZQYz8Hgl4IzyGDV23Prb5NDgmQC9ASe2ek4PS2yWJmSE55jKhZX1UHqYpehlJolbPAzuZLkf_4Fktp4Bz51BmcvQBPO5FABl3e7fkB4c_ov6rfPou39XZTFjWh9OSNvI7tCyuUTTgX7ePfHbf--qqWw3azyWdWff5Sb4un3uhOFCjyGWhsgVD0E9VqJyDfBMFSBt09nh3NJxlzLLatZfUdTx13Giy733ntB45a0SAKj7Ar-AeTs0GE5Y7ALlLNMTA3X6UFjqjVVgL3_MSWTqvZE8eeepp9xdfFCAleD2A8OUXDfmp1UAZHbcTg[dipesh.majumdar@demo ~]$ k -n mns exec -it nginx -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
^Ccommand terminated with exit code 130
[dipesh.majumdar@demo ~]$
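
The mounted token is a JSON Web Token: three base64url-encoded segments separated by dots (header.payload.signature). Decoding the first segment of the token above shows the signing algorithm:

```shell
# First dot-separated segment of the service-account token (the JWT header)
header='eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9'
echo "$header" | base64 -d; echo
# {"alg":"RS256","kid":""}
```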

Note: If we hadn't used the sa1 service account, the default service account and its corresponding default token would have been used for the volume mount inside any pod created in the mns namespace.

To disable that automatic mounting, set: automountServiceAccountToken: false
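
A minimal sketch of a pod spec with the automount disabled (the pod name here is arbitrary; the field can also be set on the ServiceAccount object itself):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  serviceAccountName: sa1
  automountServiceAccountToken: false   # no token volume gets mounted
  containers:
  - image: nginx
    name: nginx
```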

 

Environment Variable in Docker

[dipesh.majumdar@demo ubunutu]$ cat Dockerfile
FROM ubuntu
#CMD sleep 5
#CMD ["sleep","30"]
ENV env_var1 "100"
ENTRYPOINT ["sleep"]
CMD ["35"]

[dipesh.majumdar@demo ubunutu]$ docker build -t ubuntu_cust:v1 .
Sending build context to Docker daemon 3.072 kB
Step 1/4 : FROM ubuntu
 ---> 94e814e2efa8
Step 2/4 : ENV env_var1 "100"
 ---> Running in e1643d4f4265
 ---> 086a4fc18f7b
Removing intermediate container e1643d4f4265
Step 3/4 : ENTRYPOINT sleep
 ---> Running in 7ff22b4c5347
 ---> f4fc9bd48ea5
Removing intermediate container 7ff22b4c5347
Step 4/4 : CMD 35
 ---> Running in 2cf6577075e8
 ---> 6cac7df40451
Removing intermediate container 2cf6577075e8
Successfully built 6cac7df40451

[dipesh.majumdar@demo ubunutu]$ docker run -dit -e ENV_VAR2=200 --name my_container ubuntu_cust:v1 5000
8c57561061ea615ab15695f7ab99ca263a337cc52e4eaf2c20f16a38039b8373
[dipesh.majumdar@demo ubunutu]$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
8c57561061ea        ubuntu_cust:v1      "sleep 5000"        8 seconds ago       Up 7 seconds                            my_container

 

[dipesh.majumdar@demo ubunutu]$ docker exec -it my_container env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=8c57561061ea
TERM=xterm
ENV_VAR2=200
env_var1=100
HOME=/root

 

ENTRYPOINT and CMD in Docker correspond to command and args in Kubernetes

Consider the Dockerfile - 

FROM ubuntu
#CMD sleep 5
#CMD ["sleep","30"]
ENTRYPOINT ["sleep"]
CMD ["35"]

Build the above Dockerfile like this -

cd <to directory where the Dockerfile resides> 

docker build -t ubuntu_cust:v1  .

To execute the ENTRYPOINT and CMD of the above Dockerfile, simply run -

docker run --name my_container ubuntu_cust:v1

If you want to overwrite the ENTRYPOINT and CMD defined in the above Dockerfile -

docker run --name=my_container --entrypoint grep  ubuntu_cust:v1 root /etc/passwd

docker rm my_container

docker run --name=my_container --entrypoint cat  ubuntu_cust:v1  /etc/passwd

docker rm my_container

## Even if you only specify --entrypoint, it overwrites the Dockerfile's CMD with nothing at all, so when you spin up the container it runs just: ls (as shown below)

docker run --name my_container --entrypoint ls ubuntu_cust:v1 

docker rm my_container

Now I don't want to overwrite the ENTRYPOINT, only the CMD that follows it. Here is how to do it:

docker run --name my_container ubuntu_cust:v1 5
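
The same rule carries over to Kubernetes: command replaces the image ENTRYPOINT and args replaces the image CMD. A sketch of a pod that, like the docker run just above, overrides only the CMD (the pod name here is made up, and the image must be reachable from the cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod-sleep5
spec:
  containers:
  - name: mycontainer
    image: ubuntu_cust:v1
    args: ["5"]   # overrides CMD ["35"]; ENTRYPOINT ["sleep"] is untouched
  restartPolicy: Never
```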

Now I want to use the customized image - ubuntu_cust:v1 - in a pod.

Consider the pod template - 

apiVersion: v1
kind: Pod
metadata:
 name: mypod1
 labels:
   os: ubuntu
   app: mypod1
spec:
 containers:
 - name: mycontainer
   image: ubuntu_cust:v1
   command: ["cat"]
   args: ["/etc/passwd"]

kubectl -n mns create -f pod.yaml

 

[dipesh.majumdar@demo ubunutu]$ kubectl -n mns logs mypod1 -f
Error from server (BadRequest): container "mycontainer" in pod "mypod1" is waiting to start: trying and failing to pull image

Do a describe and you can see - 

Failed to pull image "ubuntu_cust:v1": rpc error: code = Unknown desc = Error response from daemon: repository ubuntu_cust not found: does not exist or no pull access

 

The image exists on my local machine but not in Docker Hub, so the cluster cannot pull it. I need to push the local image to Docker Hub or some other container registry. Let's push it to Google Container Registry.

First we need to configure Docker to use the gcloud command-line tool to authenticate requests to Container Registry (a one-time activity):

gcloud auth configure-docker

Now I need to tag the image to make it compliant with Google Container Registry.

This is how images for Container Registry have to be named -

[HOSTNAME]/[PROJECT-ID]/[IMAGE]

docker tag ubuntu_cust:v1 eu.gcr.io/dipeshm1206/ubuntu_cust:v1

[dipesh.majumdar@demo ubunutu]$ docker push eu.gcr.io/dipeshm1206/ubuntu_cust:v1
The push refers to a repository [eu.gcr.io/dipeshm1206/ubuntu_cust]
b57c79f4a9f3: Layer already exists
d60e01b37e74: Layer already exists
e45cfbc98a50: Layer already exists
762d8e1a6054: Layer already exists
v1: digest: sha256:ff44cb098fcbb01682a758c2bd97edd4c492e4bc282ac83617633f5796e79b08 size: 1150

 

Now let me use the pod manifest this way -

[dipesh.majumdar@demo ubunutu]$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
 name: mypod1
 labels:
   os: ubuntu
   app: mypod1
spec:
 containers:
 - name: mycontainer
   image: eu.gcr.io/dipeshm1206/ubuntu_cust:v1
   command: ["cat"]
   args: ["/etc/passwd"]

 

[dipesh.majumdar@demo ubunutu]$ kubectl -n mns create -f pod.yaml
pod/mypod1 created
[dipesh.majumdar@demo ubunutu]$ kubectl -n mns get po
NAME                 READY   STATUS      RESTARTS   AGE
mypod1               0/1     Completed   0          8s

 

[dipesh.majumdar@demo ubunutu]$ kubectl -n mns logs mypod1 -f
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin

(more information here - https://cloud.google.com/container-registry/docs/quickstart)

 

make docker run from users other than root

Sometimes, even after installing Docker correctly as root, it might not let your user connect even though you have run this from your account:

sudo usermod -a -G docker $USER

(Note the group name is lowercase docker, and the new membership only takes effect in a new login session, or after running newgrp docker.)

The error is shown below - very irritating indeed, and it doesn't seem to go away...

 

Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.26/containers/json?all=1: dial unix /var/run/docker.sock: connect: permission denied

 

Well, the solution that worked for me was changing the socket permissions:

chmod 777 /var/run/docker.sock

(Be aware this lets every local user talk to the Docker daemon, so treat it as a quick fix, not something to do on a shared machine.)

 

Easiest way to write manifests

  • Pod

kubectl -n some_ns run pod_name --restart=Never --image=nginx:1.7.9 --dry-run -o yaml > pod.yaml

Now suppose you want to spin up a pod with some resource constraints; this is how you do it -

kubectl -n mns run nginx20 --restart=Never --image=nginx --requests='cpu=100m,memory=256Mi' --limits='cpu=500m,memory=512Mi' --dry-run -o yaml
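
Those flags end up as a resources block in the generated manifest, roughly:

```yaml
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```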

What about spinning up 3 pods named nginx1, nginx2 and nginx3?

[dipesh.majumdar@demo ~]$ for i in {1..3}; do k run nginx$i --restart=Never --image=nginx --labels=app=v1 ; done;
pod/nginx1 created
pod/nginx2 created
pod/nginx3 created

  • CronJob (kubectl run with --schedule creates a CronJob, not a plain Job):

kubectl run hello --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"

Note:

100m means 100/1000, i.e. 0.1 of a CPU core

Also note:
--restart=Always: The restart policy for this Pod. Legal values [Always, OnFailure, Never].

If set to Always a deployment is created, if set to OnFailure a job is created, if set to Never, a regular pod is created. For the latter two --replicas must be 1. Default Always

 

How to rollback a deployment in Kubernetes

$ kubectl -n mynamespace describe deploy nginx |grep -i image
    Image:        nginx:2.7.1
$ kubectl -n mynamespace rollout history deploy nginx
deployment.extensions/nginx
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
4         <none>

$ kubectl -n mynamespace rollout undo deploy nginx --to-revision=3
deployment.extensions/nginx rolled back
$ kubectl -n mynamespace rollout history deploy nginx
deployment.extensions/nginx
REVISION  CHANGE-CAUSE
2         <none>
4         <none>
5         <none>

$ kubectl -n mynamespace describe deploy nginx |grep -i image
    Image:        nginx:1.7.8

Classless Inter-Domain Routing (CIDR)

192.168.7.0/24 means the last octet can vary freely: the block contains 2^(32-24) = 2^8 = 256 addresses.

The assignable host addresses (the first address is the network address and the last is the broadcast address, so those two are excluded) are -

192.168.7.1 to 192.168.7.254

Now you can define a subnet using CIDR and you should be able to understand the ranges. 
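
The arithmetic generalizes to any prefix length and can be checked in the shell:

```shell
prefix=24
# total addresses in the block: 2^(32-prefix)
echo $(( 1 << (32 - prefix) ))        # 256
# assignable host addresses: total minus network and broadcast
echo $(( (1 << (32 - prefix)) - 2 ))  # 254
```
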

Define environment variable in a pod manifest

In the example below I define an environment variable "envVar1" and then check it inside the pod with 'printenv'.

dipes@LAPTOP-FEA1CJ5Q MINGW64 ~/gitlabDOTcom_content/infrastructure (master)
$ cat 01pod-002-oedpod.yaml
apiVersion: v1
kind: Pod
metadata:
 name: oeudpod
 namespace: k8s-cert
 labels:
   app: oeupod-label
   name: oeupod
spec:
 containers:
  - name: oeud
    image: centos:7
    env:
     - name: envVar1
       value: envVar1-value
    command:
     - "bin/bash"
     - "-c"
     - "sleep 300"


dipes@LAPTOP-FEA1CJ5Q MINGW64 ~/gitlabDOTcom_content/infrastructure (master)
$ kubectl create -f 01pod-002-oedpod.yaml
pod "oeudpod" created

dipes@LAPTOP-FEA1CJ5Q MINGW64 ~/gitlabDOTcom_content/infrastructure (master)
$ kubectl -n k8s-cert get po -l app=oeupod-label
NAME      READY     STATUS    RESTARTS   AGE
oeudpod   1/1       Running   0          1m

dipes@LAPTOP-FEA1CJ5Q MINGW64 ~/gitlabDOTcom_content/infrastructure (master)
$ kubectl -n k8s-cert exec oeudpod -- bash -c 'printenv'
HOSTNAME=oeudpod
KUBERNETES_PORT=tcp://10.11.240.1:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_HOST=10.11.240.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
envVar1=envVar1-value
SHLVL=1
HOME=/root
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_ADDR=10.11.240.1
KUBERNETES_PORT_443_TCP=tcp://10.11.240.1:443
_=/usr/bin/printenv

dipes@LAPTOP-FEA1CJ5Q MINGW64 ~/gitlabDOTcom_content/infrastructure (master)

But if you want to load an env variable from a ConfigMap, this is the way to do it -

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx:1.7.7
    name: nginx
    env:
     - name: env-direct1
       value: "1234"
     - name: name
       valueFrom:
         configMapKeyRef:
             name: cm1
             key: name
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

 

Difference between awk and cut

$ docker images |grep simplev01
dipeshm77-ubuntu                                        simplev01                e5337522d045        About an hour ago   210MB
ubuntu                                                  simplev01                e5337522d045        About an hour ago   210MB
dipeshm77/ubuntu                                        simplev01                e5337522d045        About an hour ago   210MB

In the above output, if we want only the docker image id, we can use awk:

$ docker images |grep simplev01 |awk -F ' ' '{print $3}'
e5337522d045
e5337522d045
e5337522d045

"cut -d" can also be tried but it doesn't give the desired result - 

$ docker images |grep simplev01 |cut -d ' ' -f 3

<<<<no output here... which is because of unnecessary spaces that spoils the show>>>>>>

Get rid of unnecessary spaces:
$ docker images |grep simplev01 | tr -s ' '
dipeshm77/ubuntu simplev01 e5337522d045 About an hour ago 210MB
dipeshm77-ubuntu simplev01 e5337522d045 About an hour ago 210MB
ubuntu simplev01 e5337522d045 About an hour ago 210MB

Now "cut -d" gives the desired result

$ docker images |grep simplev01 | tr -s ' '  |cut -d ' ' -f 3
e5337522d045
e5337522d045
e5337522d045
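
The difference is easy to reproduce on a single sample line (the values here just mirror the docker images output above):

```shell
line='ubuntu     simplev01    e5337522d045   About an hour ago   210MB'
# awk splits on runs of whitespace, so $3 is the image id
echo "$line" | awk '{print $3}'                 # e5337522d045
# cut treats every single space as a field separator, so field 3 is empty
echo "$line" | cut -d ' ' -f 3
# squeezing the repeated spaces first makes cut behave like awk
echo "$line" | tr -s ' ' | cut -d ' ' -f 3      # e5337522d045
```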

 

 

View older posts »