之所以要将这篇很长很长的博文拆分开,是因为站内查询因这篇长文而失效了,另外打开一个页面实在有些卡顿(⊙﹏⊙)b
复习资料
initContainer
Q: You have a Container with a volume mount. Add a init container that creates an empty file in the volume. (only trick is to mount the volume to init-container as well)
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
1 | apiVersion: v1 |
1 | root@test-9:~/henry# cat init-container.yaml |
Volume
Q: When running a redis key-value store in your pre-production environments many deployments are incoming from CI and leaving behind a lot of stale cache data in redis which is causing test failures. The CI admin has requested that each time a redis key-value-store is deployed in staging that it not persist its data.
Create a pod named non-persistent-redis that specifies a named-volume with name app-cache, and mount path /data/redis. It should launch in the staging namespace and the volume MUST NOT be persistent.
Create a Pod with EmptyDir and in the YAML file add namespace: CI
Yaml格式
1 | apiVersion: v1 |
挂载文件到pod中:1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43apiVersion: v1
items:
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
io.wise2c.service: xx
io.wise2c.stack: stack001
name: stack001-xx
spec:
replicas: 1
template:
metadata:
labels:
io.wise2c.service: xx
io.wise2c.stack: stack001
spec:
containers:
image: nginx
name: xx
resources:
limits:
cpu: 200m
memory: 1073741824
volumeMounts:
- mountPath: /etc/resolv.conf
name: xx
subPath: resolv.conf
volumes:
- configMap:
name: stack001-xx
name: xx
- apiVersion: v1
data:
resolv.conf: "\nnameserver 10.96.0.10 \n\nsearch stack001.ns-team-2-env-44.svc.cluster.local\
\ ns-team-2-env-44.svc.cluster.local svc.cluster.local cluster.local\noptions\
\ ndots:6"
kind: ConfigMap
metadata:
labels:
io.wise2c.stack: stack001
name: stack001-xx
kind: List
挂载同一个文件到不同pod中,指定不同的名字:
1 | apiVersion: v1 |
两种类型的持久卷
PV, 使用静态的PV来挂载,需要用户自己创建PV.
1 | apiVersion: v1 |
PVC, 用户不用关心PV,只需要说需要什么类型的存储,即创建PVC,然后PVC自动从Storage Class中创建对应的PV。
1 | kind: PersistentVolumeClaim |
Storage Class:
1 | kind: StorageClass |
Pod:
1 | kind: Pod |
Log
Q: Find the error message with the string “Some-error message here”.
https://kubernetes.io/docs/concepts/cluster-administration/logging/
see kubectl logs and /var/log for system services
1 | [root@dev-7 henry]# kcc logs -f --tail=10 orchestration-2080965958-khwfx -c orchestration |
kubelet日志位于/var/log/kubelet下
static pod
Q: Run a Jenkins Pod on a specified node only.
https://kubernetes.io/docs/tasks/administer-cluster/static-pod/
Create the Pod manifest at the specified location and then edit the systemd service file for kubelet(/etc/systemd/system/kubelet.service) to include--pod-manifest-path=/specified/path
. Once done restart the service.
Choose a node where we want to run the static pod. In this example, it’s my-node1.
1
[joe@host ~] $ ssh my-node1
Choose a directory, say
/etc/kubelet.d
and place a web server pod definition there, e.g./etc/kubelet.d/static-pod.yaml
:1
2
3
4
5
6
7
8
9
10
11
12[root@my-node1 ~] $ mkdir /etc/kubernetes.d/
[root@my-node1 ~] $ cat <<EOF >/etc/kubernetes.d/static-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: static-pod
spec:
containers:
- image: busybox
name: test-container
command: ["/bin/sh", "-c", "sleep 9999"]
EOFConfigure your kubelet daemon on the node to use this directory by running it with
--pod-manifest-path=/etc/kubelet.d/
argument. On Fedora edit /etc/kubernetes/kubelet to include this line:1
KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubelet.d/"
Instructions for other distributions or Kubernetes installations may vary. Restart kubelet. On Fedora, this is:
1
[root@my-node1 ~] $ systemctl restart kubelet
效果如下:1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43[root@dev-9 manifests]# kubectl get pod
NAME READY STATUS RESTARTS AGE
static-pod-dev-9 1/1 Running 0 34s
[root@dev-9 manifests]#
[root@dev-9 manifests]# kubectl describe pod static-pod-dev-9
Name: static-pod-dev-9
Namespace: default
Node: dev-9/192.168.1.190
Start Time: Sun, 12 Nov 2017 21:21:48 +0800
Labels: <none>
Annotations: kubernetes.io/config.hash=1dcad4affd910f45b5c3a8dbdeec8933
kubernetes.io/config.mirror=1dcad4affd910f45b5c3a8dbdeec8933
kubernetes.io/config.seen=2017-11-12T21:21:48.15196949+08:00
kubernetes.io/config.source=file
Status: Running
IP: 10.244.3.45
Containers:
test-container:
Container ID: docker://ef3e28e45e280e4a50942fc472fd025cb84a7014a64dbc57308cddbfeb1bd979
Image: busybox
Image ID: docker-pullable://busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0
Port: <none>
Command:
/bin/sh
-c
sleep 9999
State: Running
Started: Sun, 12 Nov 2017 21:21:52 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes: <none>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: :NoExecute
Events: <none>
[root@dev-9 manifests]#
DNS
Q: Use the utility nslookup to look up the DNS records of the service and pod.
From this guide, https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
Look for “Quick Diagnosis”
Services
1 | $ kubectl exec -ti busybox -- nslookup mysvc.myns.svc.cluster.local |
Naming conventions for services and pods:
- For a regular service, this resolves to the port number and the CNAME: (解析到Cluster-IP)
my-svc.my-namespace.svc.cluster.local.
1 | root@test-9:~/henry# kubectl exec -ti busybox-2520568787-kkmrw -- nslookup nginx.default.svc.cluster.local |
- For a headless service, this resolves to multiple answers(RR解析到多个Pod IP), one for each pod that is backing the service, and contains the port number and a CNAME of the pod of the form
auto-generated-name.my-svc.my-namespace.svc.cluster.local
Pods
When enabled, pods are assigned a DNS A record in the form of
pod-ip-address.my-namespace.pod.cluster.local
For example, a pod with IP 1.2.3.4 in the namespace default with a DNS name of cluster.local would have an entry: 1-2-3-4.default.pod.cluster.local
1 | root@test-9:~/henry# kubectl exec -ti busybox-2520568787-kkmrw -- nslookup 10-42-236-215.default.pod.cluster.local |
Ingress
Q 17: Create an Ingress resource, Ingress controller and a Service that resolves to cs.rocks.ch.
- First, create controller and default backend
1
2kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress/master/controllers/nginx/examples/default-backend.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress/master/examples/deployment/nginx/nginx-ingress-controller.yaml
Second, create service and expose
1
2kubectl run ingress-pod --image=nginx --port 80
kubectl expose deployment ingress-pod --port=80 --target-port=80 --type=NodePortCreate the ingress
1
2
3
4
5
6
7
8
9
10
11
12
13
14cat <<EOF >ingress-cka.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
spec:
rules:
- host: "cs.rocks.ch"
http:
paths:
- backend:
serviceName: ingress-pod
servicePort: 80
EOFTo test, run a curl pod
1
2kubectl run -i --tty client --image=tutum/curl
curl -I -L --resolve cs.rocks.ch:80:10.240.0.5 http://cs.rocks.ch/
我认为,要访问ingress,在flannel网络中,应该还可以使用hostPort来暴露出ingress-nginx的80和443端口。
Mandatory commands
1
2
3
4
5
6
7
8
9curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml | kubectl apply -f -Install with RBAC roles
1
2
3curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml | kubectl apply -f -Verify installation:
1
kubectl get pods --all-namespaces -l app=ingress-nginx --watch
接下来还有,请抽根烟继续!