In the previous post we covered the Pod lifecycle, liveness and readiness probes, and resource limits on k8s; see https://www.cnblogs.com/qiuhom-1874/p/14143610.html for a refresher. Today we look at Pod controllers.

Controllers are the "brain" of k8s. As mentioned at the start of this series, controllers are responsible for creating and managing resources on the cluster: whenever the observed state of a resource drifts from the state the user declared, the controller tries to restart or rebuild it until the two match. k8s has many controller types, for example Pod controllers, the service controller, the endpoints controller, and so on, each with its own role; Pod controllers are the ones that manage Pod resources. Pod controllers themselves come in several flavors. Classified by the application running in the Pod's containers, there are controllers for stateless and for stateful applications; classified by whether the application runs as a daemon, there are daemon and non-daemon controllers. The most common stateless controllers are ReplicaSet and Deployment; the common stateful controller is StatefulSet; the common daemon controller is DaemonSet; and for non-daemon workloads there is the Job controller, plus CronJob for jobs that must run periodically.

1. The ReplicaSet controller

The main job of a ReplicaSet is to ensure that the number of Pod replicas exactly matches the user's desired count at all times. Once started, it first looks up the Pods in the cluster that match its label selector; whenever the number of live Pods differs from the desired number, it deletes the surplus or creates the shortfall. New Pods are created from the Pod template we define in the manifest.

Example: defining a ReplicaSet controller

[root@master01 ~]# cat ReplicaSet-controller-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]#
Note: for a ReplicaSet the apiVersion field must be apps/v1 and kind must be ReplicaSet; both are fixed. metadata mainly holds the name and namespace. spec mainly defines replicas, selector, and template. replicas is an integer giving the desired number of Pod replicas. selector defines the label selector; its value is an object in which matchLabels matches labels exactly and takes a dictionary as its value. Besides exact matching there is also matchExpressions, which matches with expressions. In short: with matchLabels you list one or more labels, each a key/value pair; with matchExpressions each entry defines a key field naming the label key, an operator field, and a values field. key and operator are strings, operator being one of In, NotIn, Exists, or DoesNotExist; values is a list of strings. Finally, the Pod template is defined with the template field, whose value is an object: its metadata field holds the template's metadata and must define labels, normally the same labels the selector matches; its spec field defines the desired state of the Pod, most importantly the names and images of the Pod's containers.

Apply the manifest

[root@master01 ~]# kubectl apply -f ReplicaSet-controller-demo.yaml
replicaset.apps/replicaset-demo created
[root@master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
replicaset-demo   3         3         3       9s
[root@master01 ~]# kubectl get rs -o wide
NAME              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES              SELECTOR
replicaset-demo   3         3         3       17s   nginx        nginx:1.14-alpine   app=nginx-pod
[root@master01 ~]#

Note: rs is the short name for ReplicaSet. The output shows the controller has been created; the current replica count is 3, the desired count is 3, and all 3 are ready.

View the Pods

[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-rsl7q   1/1     Running   0          2m57s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          2m57s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          2m57s   nginx-pod
[root@master01 ~]#

Note: three Pods labeled app=nginx-pod have been created in the default namespace.

Test: change the label of one Pod to ngx; will the controller create a new Pod labeled nginx-pod?
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-rsl7q   1/1     Running   0          5m48s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          5m48s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          5m48s   nginx-pod
[root@master01 ~]# kubectl label pod/replicaset-demo-vzdbb app=ngx --overwrite
pod/replicaset-demo-vzdbb labeled
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE    APP
replicaset-demo-qv8tp   1/1     Running   0          4s     nginx-pod
replicaset-demo-rsl7q   1/1     Running   0          6m2s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          6m2s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          6m2s   ngx
[root@master01 ~]#

Note: as soon as we change one Pod's label to app=ngx, the controller creates a new Pod from the template.

Test: change the label back to app=nginx-pod; will the controller delete a Pod?

[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-qv8tp   1/1     Running   0          2m35s   nginx-pod
replicaset-demo-rsl7q   1/1     Running   0          8m33s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          8m33s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          8m33s   ngx
[root@master01 ~]# kubectl label pod/replicaset-demo-vzdbb app=nginx-pod --overwrite
pod/replicaset-demo-vzdbb labeled
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS        RESTARTS   AGE     APP
replicaset-demo-qv8tp   0/1     Terminating   0          2m50s   nginx-pod
replicaset-demo-rsl7q   1/1     Running       0          8m48s   nginx-pod
replicaset-demo-twknl   1/1     Running       0          8m48s   nginx-pod
replicaset-demo-vzdbb   1/1     Running       0          8m48s   nginx-pod
[root@master01 ~]# kubectl get pod -L app
NAME                    READY   STATUS    RESTARTS   AGE     APP
replicaset-demo-rsl7q   1/1     Running   0          8m57s   nginx-pod
replicaset-demo-twknl   1/1     Running   0          8m57s   nginx-pod
replicaset-demo-vzdbb   1/1     Running   0          8m57s   nginx-pod
[root@master01 ~]#

Note: when there are more Pods with the matching label than the user desires, the controller deletes the surplus. These tests show that a ReplicaSet relies purely on its label selector to decide whether the live Pod count matches the desired count, deleting or creating Pods until the count is exactly right.
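The relabel-and-replace behavior seen in these two tests can be sketched as a toy reconcile pass in Python (an illustration of the control-loop idea only, not Kubernetes code): count the Pods matching the selector, delete any surplus, and create any shortfall from the template.

```python
import random
import string

def reconcile(pods, selector, desired, template_labels):
    """One toy reconciliation pass: make the number of label-matching
    pods equal to the desired replica count."""
    matching = [p for p in pods
                if all(p["labels"].get(k) == v for k, v in selector.items())]
    for p in matching[desired:]:          # too many matching pods: delete the surplus
        pods.remove(p)
    while len(matching) < desired:        # too few: create new pods from the template
        suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=5))
        new = {"name": "replicaset-demo-" + suffix, "labels": dict(template_labels)}
        pods.append(new)
        matching.append(new)
    return pods

# Re-run the experiment above: relabel one of three pods to app=ngx.
pods = [{"name": f"replicaset-demo-{i}", "labels": {"app": "nginx-pod"}} for i in range(3)]
pods[2]["labels"]["app"] = "ngx"
pods = reconcile(pods, {"app": "nginx-pod"}, 3, {"app": "nginx-pod"})
print(len(pods))  # 4 pods total: the controller created a replacement
```

Relabeling that Pod back to app=nginx-pod would make four Pods match, and the next pass would delete one, just as the second test showed.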
View the details of the rs controller

[root@master01 ~]# kubectl describe rs replicaset-demo
Name: replicaset-demo
Namespace: default
Selector: app=nginx-pod
Labels: <none>
Annotations: <none>
Replicas: 3 current / 3 desired
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=nginx-pod
Containers:
nginx:
Image: nginx:1.14-alpine
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 20m replicaset-controller Created pod: replicaset-demo-twknl
Normal SuccessfulCreate 20m replicaset-controller Created pod: replicaset-demo-vzdbb
Normal SuccessfulCreate 20m replicaset-controller Created pod: replicaset-demo-rsl7q
Normal SuccessfulCreate 15m replicaset-controller Created pod: replicaset-demo-qv8tp
Normal SuccessfulDelete 12m replicaset-controller Deleted pod: replicaset-demo-qv8tp
[root@master01 ~]#
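Beyond matchLabels, the matchExpressions operators mentioned earlier (In, NotIn, Exists, DoesNotExist) behave as in this small Python sketch (my own illustration of the documented semantics, not k8s source; note that NotIn and DoesNotExist also match Pods that lack the key entirely):

```python
def match_expression(labels, key, operator, values=None):
    """Evaluate one matchExpressions entry against a pod's labels."""
    if operator == "In":             # key's value must be in the set
        return labels.get(key) in (values or [])
    if operator == "NotIn":          # key's value must not be in the set (missing key matches)
        return labels.get(key) not in (values or [])
    if operator == "Exists":         # the key must be present, value irrelevant
        return key in labels
    if operator == "DoesNotExist":   # the key must be absent
        return key not in labels
    raise ValueError(f"unknown operator: {operator}")

pod_labels = {"app": "nginx-pod"}
print(match_expression(pod_labels, "app", "In", ["nginx-pod", "ngx"]))  # True
print(match_expression(pod_labels, "app", "NotIn", ["ngx"]))            # True
print(match_expression(pod_labels, "release", "Exists"))                # False
```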
Scaling the Pod replicas managed by the rs up or down

[root@master01 ~]# kubectl scale rs replicaset-demo --replicas=6
replicaset.apps/replicaset-demo scaled
[root@master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
replicaset-demo   6         6         6       32m
[root@master01 ~]# kubectl scale rs replicaset-demo --replicas=4
replicaset.apps/replicaset-demo scaled
[root@master01 ~]# kubectl get rs
NAME              DESIRED   CURRENT   READY   AGE
replicaset-demo   4         4         4       32m
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS        RESTARTS   AGE
replicaset-demo-5t9tt   0/1     Terminating   0          33s
replicaset-demo-j75hk   1/1     Running       0          33s
replicaset-demo-rsl7q   1/1     Running       0          33m
replicaset-demo-twknl   1/1     Running       0          33m
replicaset-demo-vvqfw   0/1     Terminating   0          33s
replicaset-demo-vzdbb   1/1     Running       0          33m
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-j75hk   1/1     Running   0          41s
replicaset-demo-rsl7q   1/1     Running   0          33m
replicaset-demo-twknl   1/1     Running   0          33m
replicaset-demo-vzdbb   1/1     Running   0          33m
[root@master01 ~]#

Note: kubectl scale grows or shrinks the number of Pod replicas a controller manages. Besides the command line, you can also edit the replicas field in the manifest and apply it again with kubectl apply.

Growing the Pod count by editing the replicas field in the manifest

[root@master01 ~]# cat ReplicaSet-controller-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
  namespace: default
spec:
  replicas: 7
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]# kubectl apply -f ReplicaSet-controller-demo.yaml
replicaset.apps/replicaset-demo configured
[root@master01 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
replicaset-demo 7 7 7 35m
[root@master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
replicaset-demo-j75hk 1/1 Running 0 3m33s
replicaset-demo-k2n9g 1/1 Running 0 9s
replicaset-demo-n7fmk 1/1 Running 0 9s
replicaset-demo-q4dc6 1/1 Running 0 9s
replicaset-demo-rsl7q 1/1 Running 0 36m
replicaset-demo-twknl 1/1 Running 0 36m
replicaset-demo-vzdbb 1/1 Running 0 36m
[root@master01 ~]#
Updating the Pod image version

Method 1: change the image version in the Pod template of the manifest, then apply the manifest again

[root@master01 ~]# cat ReplicaSet-controller-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-demo
  namespace: default
spec:
  replicas: 7
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.16-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]# kubectl apply -f ReplicaSet-controller-demo.yaml
replicaset.apps/replicaset-demo configured
[root@master01 ~]# kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset-demo 7 7 7 55m nginx nginx:1.16-alpine app=nginx-pod
[root@master01 ~]#
Note: the command output shows the rs now reports image version 1.16.

Verify: check the Pods; has the container image in each Pod actually changed to 1.16?

[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-j75hk   1/1     Running   0          25m
replicaset-demo-k2n9g   1/1     Running   0          21m
replicaset-demo-n7fmk   1/1     Running   0          21m
replicaset-demo-q4dc6   1/1     Running   0          21m
replicaset-demo-rsl7q   1/1     Running   0          57m
replicaset-demo-twknl   1/1     Running   0          57m
replicaset-demo-vzdbb   1/1     Running   0          57m
[root@master01 ~]#

Note: judging by the creation times, none of the Pods were recreated, so none were updated.

Test: delete one Pod; will the replacement run the new image?

[root@master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
replicaset-demo-j75hk 1/1 Running 0 25m
replicaset-demo-k2n9g 1/1 Running 0 21m
replicaset-demo-n7fmk 1/1 Running 0 21m
replicaset-demo-q4dc6 1/1 Running 0 21m
replicaset-demo-rsl7q 1/1 Running 0 57m
replicaset-demo-twknl 1/1 Running 0 57m
replicaset-demo-vzdbb 1/1 Running 0 57m
[root@master01 ~]# kubectl delete pod/replicaset-demo-vzdbb
pod "replicaset-demo-vzdbb" deleted
[root@master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
replicaset-demo-9wqj9 0/1 ContainerCreating 0 10s
replicaset-demo-j75hk 1/1 Running 0 26m
replicaset-demo-k2n9g 1/1 Running 0 23m
replicaset-demo-n7fmk 1/1 Running 0 23m
replicaset-demo-q4dc6 1/1 Running 0 23m
replicaset-demo-rsl7q 1/1 Running 0 58m
replicaset-demo-twknl 1/1 Running 0 58m
[root@master01 ~]# kubectl describe pod/replicaset-demo-9wqj9 |grep Image
Image: nginx:1.16-alpine
Image ID: docker-pullable://nginx@sha256:5057451e461dda671da5e951019ddbff9d96a751fc7d548053523ca1f848c1ad
[root@master01 ~]#
Note: after we deleted one Pod, the controller created a replacement, and only that new Pod runs the new image. So for an rs controller, changing the image in the Pod template updates nothing as long as the live Pod count matches the desired count; only Pods created afterwards get the new version. In other words, to have an rs roll out a new version, you must delete the old Pods so that replacements are created.

Method 2: update the image with a command

[root@master01 ~]# kubectl set image rs replicaset-demo nginx=nginx:1.18-alpine
replicaset.apps/replicaset-demo image updated
[root@master01 ~]# kubectl get rs -o wide
NAME              DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES              SELECTOR
replicaset-demo   7         7         7       72m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
replicaset-demo-9wqj9   1/1     Running   0          13m
replicaset-demo-j75hk   1/1     Running   0          40m
replicaset-demo-k2n9g   1/1     Running   0          36m
replicaset-demo-n7fmk   1/1     Running   0          36m
replicaset-demo-q4dc6   1/1     Running   0          36m
replicaset-demo-rsl7q   1/1     Running   0          72m
replicaset-demo-twknl   1/1     Running   0          72m
[root@master01 ~]#

Note: whether you use a command or edit the manifest, an rs controller never updates Pods automatically while the desired number of Pods exists; new-version Pods appear only after old ones are deleted by hand.

2. The Deployment controller

A Deployment is defined much like a ReplicaSet, but it is more powerful: it supports rolling updates and lets the user define the update strategy. Under the hood, a Deployment manages Pods through a ReplicaSet; creating a Deployment automatically creates an rs. A Pod created through a Deployment is named Deployment name + "-" + pod-template hash + "-" + random string, while the rs it belongs to is named Deployment name + "-" + pod-template hash; that is, a Pod's name is its rs's name plus "-" plus a random string.

Example: creating a Deployment controller

[root@master01 ~]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ngx-dep-pod
  template:
    metadata:
      labels:
        app: ngx-dep-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]#
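The naming chain just described (Deployment -> ReplicaSet -> Pod) can be mimicked in a few lines of Python. Note this is only a sketch: the real pod-template-hash is computed by Kubernetes from the template object with an FNV hash, so the sha256 digest here is just a stand-in.

```python
import hashlib
import random
import string

def rs_name(deploy_name, pod_template):
    """rs name = deployment name + '-' + hash of the pod template."""
    template_hash = hashlib.sha256(pod_template.encode()).hexdigest()[:9]
    return f"{deploy_name}-{template_hash}"

def pod_name(rs):
    """pod name = rs name + '-' + random suffix."""
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=5))
    return f"{rs}-{suffix}"

rs = rs_name("deploy-demo", "image: nginx:1.14-alpine")
print(rs)            # same template always yields the same rs name
print(pod_name(rs))  # rs name plus a fresh random suffix
```

Because the hash depends only on the template, changing the image yields a new rs name, which is why each image version below ends up with its own ReplicaSet.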
Apply the manifest

[root@master01 ~]# kubectl apply -f deploy-demo.yaml
deployment.apps/deploy-demo created
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           10s   nginx        nginx:1.14-alpine   app=ngx-dep-pod
[root@master01 ~]#

Verify: has an rs controller been created?

[root@master01 ~]# kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
deploy-demo-6d795f958b   3         3         3       57s
replicaset-demo          7         7         7       84m
[root@master01 ~]#

Note: an rs named deploy-demo-6d795f958b has indeed been created.

Verify: are the Pod names the rs name plus "-" plus a random string?

[root@master01 ~]# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-bppjr   1/1     Running   0          2m16s
deploy-demo-6d795f958b-mxwkn   1/1     Running   0          2m16s
deploy-demo-6d795f958b-sh76g   1/1     Running   0          2m16s
replicaset-demo-9wqj9          1/1     Running   0          26m
replicaset-demo-j75hk          1/1     Running   0          52m
replicaset-demo-k2n9g          1/1     Running   0          49m
replicaset-demo-n7fmk          1/1     Running   0          49m
replicaset-demo-q4dc6          1/1     Running   0          49m
replicaset-demo-rsl7q          1/1     Running   0          85m
replicaset-demo-twknl          1/1     Running   0          85m
[root@master01 ~]#

Note: three Pods are named deploy-demo-6d795f958b- plus a random string.

Updating the Pod version

[root@master01 ~]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ngx-dep-pod
  template:
    metadata:
      labels:
        app: ngx-dep-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.16-alpine
        ports:
        - name: http
          containerPort: 80
[root@master01 ~]# kubectl apply -f deploy-demo.yaml
deployment.apps/deploy-demo configured
[root@master01 ~]# kubectl get deploy -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deploy-demo 3/3 3 3 5m45s nginx nginx:1.16-alpine app=ngx-dep-pod
[root@master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
deploy-demo-95cc58f4d-45l5c 1/1 Running 0 43s
deploy-demo-95cc58f4d-6bmb6 1/1 Running 0 45s
deploy-demo-95cc58f4d-7d5r5 1/1 Running 0 29s
replicaset-demo-9wqj9 1/1 Running 0 30m
replicaset-demo-j75hk 1/1 Running 0 56m
replicaset-demo-k2n9g 1/1 Running 0 53m
replicaset-demo-n7fmk 1/1 Running 0 53m
replicaset-demo-q4dc6 1/1 Running 0 53m
replicaset-demo-rsl7q 1/1 Running 0 89m
replicaset-demo-twknl 1/1 Running 0 89m
[root@master01 ~]#
Note: with a Deployment, merely changing the image version in the Pod template makes the Pods update automatically.

Updating the image with a command

[root@master01 ~]# kubectl set image deploy deploy-demo nginx=nginx:1.18-alpine
deployment.apps/deploy-demo image updated
[root@master01 ~]# kubectl get deploy
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   3/3     1            3           9m5s
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     1            3           9m11s   nginx        nginx:1.18-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           9m38s   nginx        nginx:1.18-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
deploy-demo-567b54cd6-6h97c   1/1     Running   0          28s
deploy-demo-567b54cd6-j74t4   1/1     Running   0          27s
deploy-demo-567b54cd6-wcccx   1/1     Running   0          49s
replicaset-demo-9wqj9         1/1     Running   0          34m
replicaset-demo-j75hk         1/1     Running   0          60m
replicaset-demo-k2n9g         1/1     Running   0          56m
replicaset-demo-n7fmk         1/1     Running   0          56m
replicaset-demo-q4dc6         1/1     Running   0          56m
replicaset-demo-rsl7q         1/1     Running   0          92m
replicaset-demo-twknl         1/1     Running   0          92m
[root@master01 ~]#

Note: as soon as the image version in the Pod template changes, the Deployment rolls the Pods over to the specified version.

Viewing the rs history

[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    3         3         3       3m50s   nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   0         0         0       12m     nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    0         0         0       7m27s   nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       95m     nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]#

Note: when a Deployment updates the Pod version it keeps every historical rs around, because whenever the pod-template hash changes a new rs is created. Unlike a standalone rs, the historical ReplicaSets run no Pods; only the current one does.

Viewing the rollout history

[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>
[root@master01 ~]#

Note: there are three revisions but no change cause, because we recorded none during the updates. To record the reason, add the --record option to the update command.

Example: recording the update command in the rollout history

[root@master01 ~]# kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record
deployment.apps/deploy-demo image updated
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
4         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
[root@master01 ~]# kubectl apply -f deploy-demo-nginx-1.16.yaml --record
deployment.apps/deploy-demo configured
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
4         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
[root@master01 ~]#

Note: with --record added to the update commands, the rollout history now shows the command that produced each revision.

Rolling back to the previous version

[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           33m   nginx        nginx:1.16-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    0         0         0       24m    nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   0         0         0       33m    nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    3         3         3       28m    nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       116m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
4         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
[root@master01 ~]# kubectl rollout undo deploy/deploy-demo
deployment.apps/deploy-demo rolled back
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
6         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           34m   nginx        nginx:1.14-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    0         0         0       26m    nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   3         3         3       35m    nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    0         0         0       29m    nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       118m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]#

Note: after kubectl rollout undo deploy/deploy-demo, the image rolled back from 1.16 to 1.14, and the 1.14 revision became the newest entry in the rollout history.

Rolling back to a specific revision

[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
3         <none>
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
6         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
[root@master01 ~]# kubectl rollout undo deploy/deploy-demo --to-revision=3
deployment.apps/deploy-demo rolled back
[root@master01 ~]# kubectl rollout history deploy/deploy-demo
deployment.apps/deploy-demo
REVISION  CHANGE-CAUSE
5         kubectl apply --filename=deploy-demo-nginx-1.16.yaml --record=true
6         kubectl set image deploy deploy-demo nginx=nginx:1.14-alpine --record=true
7         <none>
[root@master01 ~]# kubectl get deploy -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES              SELECTOR
deploy-demo   3/3     3            3           42m   nginx        nginx:1.18-alpine   app=ngx-dep-pod
[root@master01 ~]# kubectl get rs -o wide
NAME                     DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES              SELECTOR
deploy-demo-567b54cd6    3         3         3       33m    nginx        nginx:1.18-alpine   app=ngx-dep-pod,pod-template-hash=567b54cd6
deploy-demo-6d795f958b   0         0         0       42m    nginx        nginx:1.14-alpine   app=ngx-dep-pod,pod-template-hash=6d795f958b
deploy-demo-95cc58f4d    0         0         0       36m    nginx        nginx:1.16-alpine   app=ngx-dep-pod,pod-template-hash=95cc58f4d
replicaset-demo          7         7         7       125m   nginx        nginx:1.18-alpine   app=nginx-pod
[root@master01 ~]#

Note: to roll back to a particular revision, pass its number with the --to-revision option.

Viewing the details of the deploy controller

[root@master01 ~]# kubectl describe deploy deploy-demo
Name: deploy-demo
Namespace: default
CreationTimestamp: Thu, 17 Dec 2020 23:40:11 +0800
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 7
Selector: app=ngx-dep-pod
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=ngx-dep-pod
Containers:
nginx:
Image: nginx:1.18-alpine
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: deploy-demo-567b54cd6 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 58m deployment-controller Scaled down replica set deploy-demo-6d795f958b to 1
Normal ScalingReplicaSet 58m deployment-controller Scaled up replica set deploy-demo-95cc58f4d to 3
Normal ScalingReplicaSet 58m deployment-controller Scaled down replica set deploy-demo-6d795f958b to 0
Normal ScalingReplicaSet 55m deployment-controller Scaled up replica set deploy-demo-567b54cd6 to 1
Normal ScalingReplicaSet 54m deployment-controller Scaled down replica set deploy-demo-95cc58f4d to 2
Normal ScalingReplicaSet 38m deployment-controller Scaled up replica set deploy-demo-6d795f958b to 1
Normal ScalingReplicaSet 38m deployment-controller Scaled down replica set deploy-demo-567b54cd6 to 2
Normal ScalingReplicaSet 38m deployment-controller Scaled up replica set deploy-demo-6d795f958b to 2
Normal ScalingReplicaSet 37m deployment-controller Scaled down replica set deploy-demo-567b54cd6 to 1
Normal ScalingReplicaSet 37m deployment-controller Scaled down replica set deploy-demo-567b54cd6 to 0
Normal ScalingReplicaSet 33m (x2 over 58m) deployment-controller Scaled up replica set deploy-demo-95cc58f4d to 1
Normal ScalingReplicaSet 33m (x2 over 58m) deployment-controller Scaled up replica set deploy-demo-95cc58f4d to 2
Normal ScalingReplicaSet 33m (x2 over 58m) deployment-controller Scaled down replica set deploy-demo-6d795f958b to 2
Normal ScalingReplicaSet 29m (x3 over 64m) deployment-controller Scaled up replica set deploy-demo-6d795f958b to 3
Normal ScalingReplicaSet 22m (x14 over 54m) deployment-controller (combined from similar events): Scaled down replica set deploy-demo-6d795f958b to 2
[root@master01 ~]#
Note: the details of the deploy controller show the Pod template, the rollback history in the events, the default update strategy, and so on.

Customizing the rolling update strategy

[root@master01 ~]# cat deploy-demo-nginx-1.14.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-demo
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ngx-dep-pod
  template:
    metadata:
      labels:
        app: ngx-dep-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        ports:
        - name: http
          containerPort: 80
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 1
  minReadySeconds: 5
[root@master01 ~]#
Note: the update strategy is defined with the strategy field, whose value is an object. type selects the strategy, of which there are two: Recreate, which terminates all existing Pods before creating new ones; and RollingUpdate, which replaces Pods gradually and is the one we can tune by hand. Under rollingUpdate, maxSurge is the maximum number of Pods allowed above the desired count during an update (i.e. how many extra new Pods may be created), and maxUnavailable is the maximum number allowed below the desired count (i.e. how many old Pods may be deleted at once). minReadySeconds is not part of the strategy; it is a field of spec that sets the minimum time a Pod must be ready before it counts as available. The strategy above therefore means: RollingUpdate, at most 2 Pods over the desired count, at most 1 Pod under it, and a minimum readiness time of 5 seconds.

Apply the manifest

[root@master01 ~]# kubectl apply -f deploy-demo-nginx-1.14.yaml
deployment.apps/deploy-demo configured
[root@master01 ~]# kubectl describe deploy/deploy-demo
Name: deploy-demo
Namespace: default
CreationTimestamp: Thu, 17 Dec 2020 23:40:11 +0800
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 8
Selector: app=ngx-dep-pod
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 5
RollingUpdateStrategy: 1 max unavailable, 2 max surge
Pod Template:
Labels: app=ngx-dep-pod
Containers:
nginx:
Image: nginx:1.14-alpine
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: deploy-demo-6d795f958b (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 47m deployment-controller Scaled up replica set deploy-demo-6d795f958b to 1
Normal ScalingReplicaSet 47m deployment-controller Scaled down replica set deploy-demo-567b54cd6 to 1
Normal ScalingReplicaSet 42m (x2 over 68m) deployment-controller Scaled up replica set deploy-demo-95cc58f4d to 1
Normal ScalingReplicaSet 42m (x2 over 68m) deployment-controller Scaled up replica set deploy-demo-95cc58f4d to 2
Normal ScalingReplicaSet 42m (x2 over 68m) deployment-controller Scaled down replica set deploy-demo-6d795f958b to 2
Normal ScalingReplicaSet 31m (x14 over 64m) deployment-controller (combined from similar events): Scaled down replica set deploy-demo-6d795f958b to 2
Normal ScalingReplicaSet 41s (x4 over 73m) deployment-controller Scaled up replica set deploy-demo-6d795f958b to 3
Normal ScalingReplicaSet 41s (x2 over 47m) deployment-controller Scaled down replica set deploy-demo-567b54cd6 to 2
Normal ScalingReplicaSet 41s (x2 over 47m) deployment-controller Scaled up replica set deploy-demo-6d795f958b to 2
Normal ScalingReplicaSet 34s (x2 over 47m) deployment-controller Scaled down replica set deploy-demo-567b54cd6 to 0
[root@master01 ~]#
Note: the deploy controller's update strategy has been changed to the one we defined. To make the update easier to observe, first scale the Pod count up to 10 by hand.

Scale out the Pod replicas

[root@master01 ~]# kubectl scale deploy/deploy-demo --replicas=10
deployment.apps/deploy-demo scaled
[root@master01 ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-5bdfw   1/1     Running   0          3m33s
deploy-demo-6d795f958b-5zr7r   1/1     Running   0          8s
deploy-demo-6d795f958b-9mc7k   1/1     Running   0          8s
deploy-demo-6d795f958b-czwdp   1/1     Running   0          3m33s
deploy-demo-6d795f958b-jfrnc   1/1     Running   0          8s
deploy-demo-6d795f958b-jw9n8   1/1     Running   0          3m33s
deploy-demo-6d795f958b-mbrlw   1/1     Running   0          8s
deploy-demo-6d795f958b-ph99t   1/1     Running   0          8s
deploy-demo-6d795f958b-wzscg   1/1     Running   0          8s
deploy-demo-6d795f958b-z5mnf   1/1     Running   0          8s
replicaset-demo-9wqj9          1/1     Running   0          100m
replicaset-demo-j75hk          1/1     Running   0          126m
replicaset-demo-k2n9g          1/1     Running   0          123m
replicaset-demo-n7fmk          1/1     Running   0          123m
replicaset-demo-q4dc6          1/1     Running   0          123m
replicaset-demo-rsl7q          1/1     Running   0          159m
replicaset-demo-twknl          1/1     Running   0          159m
[root@master01 ~]#

Watch the update in progress

[root@master01 ~]# kubectl get pod -w
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-5bdfw   1/1     Running   0          5m18s
deploy-demo-6d795f958b-5zr7r   1/1     Running   0          113s
deploy-demo-6d795f958b-9mc7k   1/1     Running   0          113s
deploy-demo-6d795f958b-czwdp   1/1     Running   0          5m18s
deploy-demo-6d795f958b-jfrnc   1/1     Running   0          113s
deploy-demo-6d795f958b-jw9n8   1/1     Running   0          5m18s
deploy-demo-6d795f958b-mbrlw   1/1     Running   0          113s
deploy-demo-6d795f958b-ph99t   1/1     Running   0          113s
deploy-demo-6d795f958b-wzscg   1/1     Running   0          113s
deploy-demo-6d795f958b-z5mnf   1/1     Running   0          113s
replicaset-demo-9wqj9          1/1     Running   0          102m
replicaset-demo-j75hk          1/1     Running   0          128m
replicaset-demo-k2n9g          1/1     Running   0          125m
replicaset-demo-n7fmk          1/1     Running   0          125m
replicaset-demo-q4dc6          1/1     Running   0          125m
replicaset-demo-rsl7q          1/1     Running   0          161m
replicaset-demo-twknl          1/1     Running   0          161m
deploy-demo-578d6b6f94-qhc9j   0/1     Pending             0   0s
deploy-demo-578d6b6f94-qhc9j   0/1     Pending             0   0s
deploy-demo-578d6b6f94-95srs   0/1     Pending             0   0s
deploy-demo-6d795f958b-mbrlw   1/1     Terminating         0   4m16s
deploy-demo-578d6b6f94-95srs   0/1     Pending             0   0s
deploy-demo-578d6b6f94-qhc9j   0/1     ContainerCreating   0   0s
deploy-demo-578d6b6f94-95srs   0/1     ContainerCreating   0   0s
deploy-demo-578d6b6f94-bht84   0/1     Pending             0   0s
deploy-demo-578d6b6f94-bht84   0/1     Pending             0   0s
deploy-demo-578d6b6f94-bht84   0/1     ContainerCreating   0   0s
deploy-demo-6d795f958b-mbrlw   0/1     Terminating         0   4m17s
deploy-demo-6d795f958b-mbrlw   0/1     Terminating         0   4m24s
deploy-demo-6d795f958b-mbrlw   0/1     Terminating         0   4m24s
deploy-demo-578d6b6f94-qhc9j   1/1     Running             0   15s
deploy-demo-578d6b6f94-95srs   1/1     Running             0   16s
deploy-demo-578d6b6f94-bht84   1/1     Running             0   18s
deploy-demo-6d795f958b-ph99t   1/1     Terminating         0   4m38s
deploy-demo-6d795f958b-jfrnc   1/1     Terminating         0   4m38s
deploy-demo-578d6b6f94-lg6vk   0/1     Pending             0   0s
deploy-demo-578d6b6f94-g9c8x   0/1     Pending             0   0s
deploy-demo-578d6b6f94-lg6vk   0/1     Pending             0   0s
deploy-demo-578d6b6f94-g9c8x   0/1     Pending             0   0s
deploy-demo-578d6b6f94-lg6vk   0/1     ContainerCreating   0   0s
deploy-demo-578d6b6f94-g9c8x   0/1     ContainerCreating   0   0s
deploy-demo-6d795f958b-ph99t   0/1     Terminating         0   4m38s
deploy-demo-6d795f958b-jfrnc   0/1     Terminating         0   4m38s
deploy-demo-6d795f958b-5zr7r   1/1     Terminating         0   4m43s
deploy-demo-578d6b6f94-4rpx9   0/1     Pending             0   0s
deploy-demo-578d6b6f94-4rpx9   0/1     Pending             0   0s
deploy-demo-578d6b6f94-4rpx9   0/1     ContainerCreating   0   0s
deploy-demo-6d795f958b-5zr7r   0/1     Terminating         0   4m43s
deploy-demo-6d795f958b-ph99t   0/1     Terminating         0   4m44s
deploy-demo-6d795f958b-ph99t   0/1     Terminating         0   4m44s
deploy-demo-6d795f958b-jfrnc   0/1     Terminating         0   4m44s
deploy-demo-6d795f958b-jfrnc   0/1     Terminating         0   4m44s
deploy-demo-578d6b6f94-g9c8x   1/1     Running             0   12s
deploy-demo-6d795f958b-5zr7r   0/1     Terminating         0   4m51s
deploy-demo-6d795f958b-5zr7r   0/1     Terminating         0   4m51s
deploy-demo-578d6b6f94-lg6vk   1/1     Running             0   15s
deploy-demo-6d795f958b-9mc7k   1/1     Terminating         0   4m56s
deploy-demo-578d6b6f94-4lbwg   0/1     Pending             0   0s
deploy-demo-578d6b6f94-4lbwg   0/1     Pending             0   0s
deploy-demo-578d6b6f94-4lbwg   0/1     ContainerCreating   0   0s
deploy-demo-578d6b6f94-4rpx9   1/1     Running             0   13s
deploy-demo-6d795f958b-9mc7k   0/1     Terminating         0   4m57s
deploy-demo-578d6b6f94-4lbwg   1/1     Running             0   2s
deploy-demo-6d795f958b-wzscg   1/1     Terminating         0   4m58s
deploy-demo-578d6b6f94-fhkk9   0/1     Pending             0   0s
deploy-demo-578d6b6f94-fhkk9   0/1     Pending             0   0s
deploy-demo-578d6b6f94-fhkk9   0/1     ContainerCreating   0   0s
deploy-demo-6d795f958b-wzscg   0/1     Terminating         0   4m59s
deploy-demo-578d6b6f94-fhkk9   1/1     Running             0   2s
deploy-demo-6d795f958b-z5mnf   1/1     Terminating         0   5m2s
deploy-demo-578d6b6f94-sfpz4   0/1     Pending             0   1s
deploy-demo-578d6b6f94-sfpz4   0/1     Pending             0   1s
deploy-demo-6d795f958b-czwdp   1/1     Terminating         0   8m28s
deploy-demo-578d6b6f94-sfpz4   0/1     ContainerCreating   0   1s
deploy-demo-578d6b6f94-5bs6z   0/1     Pending             0   0s
deploy-demo-578d6b6f94-5bs6z   0/1     Pending             0   0s
deploy-demo-578d6b6f94-5bs6z   0/1     ContainerCreating   0   0s
deploy-demo-6d795f958b-czwdp   0/1     Terminating         0   8m28s
deploy-demo-6d795f958b-z5mnf   0/1     Terminating         0   5m4s
deploy-demo-578d6b6f94-sfpz4   1/1     Running             0   2s
deploy-demo-6d795f958b-5bdfw   1/1     Terminating         0   8m29s
deploy-demo-6d795f958b-9mc7k   0/1     Terminating         0   5m4s
deploy-demo-6d795f958b-9mc7k   0/1     Terminating         0   5m4s
deploy-demo-578d6b6f94-5bs6z   1/1     Running             0   1s
deploy-demo-6d795f958b-5bdfw   0/1     Terminating         0   8m30s
deploy-demo-6d795f958b-czwdp   0/1     Terminating         0   8m36s
deploy-demo-6d795f958b-czwdp   0/1     Terminating         0   8m36s
deploy-demo-6d795f958b-5bdfw   0/1     Terminating         0   8m36s
deploy-demo-6d795f958b-5bdfw   0/1     Terminating         0   8m36s
deploy-demo-6d795f958b-wzscg   0/1     Terminating         0   5m11s
deploy-demo-6d795f958b-wzscg   0/1     Terminating         0   5m11s
deploy-demo-6d795f958b-jw9n8   1/1     Terminating         0   8m38s
deploy-demo-6d795f958b-jw9n8   0/1     Terminating         0   8m38s
deploy-demo-6d795f958b-z5mnf   0/1     Terminating         0   5m14s
deploy-demo-6d795f958b-z5mnf   0/1     Terminating         0   5m14s
deploy-demo-6d795f958b-jw9n8   0/1     Terminating         0   8m46s
deploy-demo-6d795f958b-jw9n8   0/1     Terminating         0   8m46s
Tip: the -w option keeps the watch open so you can follow the pods' state changes throughout the update. The monitor output above shows the rollout proceeding in batches: new pods are first marked Pending, one old pod is deleted and two new pods are created; then another new pod is created and three old pods are deleted, and so on in turn. However pods are deleted and created, the combined number of old and new pods never drops below 9 and never exceeds 12.

Using a paused update to implement a canary release

[root@master01 ~]# kubectl set image deploy/deploy-demo nginx=nginx:1.14-alpine && kubectl rollout pause deploy/deploy-demo
deployment.apps/deploy-demo image updated
deployment.apps/deploy-demo paused
[root@master01 ~]#

Tip: following our update strategy, the commands above delete one old pod, create three new-version pods, and then pause the update. At this point only one pod has actually been replaced; the other two new pods are the allowed surge, so there are 12 pods in total.

Check the pods

[root@master01 ~]# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-6d795f958b-df77k   1/1     Running   0          87s
deploy-demo-6d795f958b-tll8b   1/1     Running   0          87s
deploy-demo-6d795f958b-zbhwp   1/1     Running   0          87s
deploy-demo-fb957b9b-44l6g     1/1     Running   0          3m21s
deploy-demo-fb957b9b-7q6wh     1/1     Running   0          3m38s
deploy-demo-fb957b9b-d45rg     1/1     Running   0          3m27s
deploy-demo-fb957b9b-j7p2j     1/1     Running   0          3m38s
deploy-demo-fb957b9b-mkpz6     1/1     Running   0          3m38s
deploy-demo-fb957b9b-qctnv     1/1     Running   0          3m21s
deploy-demo-fb957b9b-rvrtf     1/1     Running   0          3m27s
deploy-demo-fb957b9b-wf254     1/1     Running   0          3m12s
deploy-demo-fb957b9b-xclhz     1/1     Running   0          3m22s
replicaset-demo-9wqj9          1/1     Running   0          135m
replicaset-demo-j75hk          1/1     Running   0          161m
replicaset-demo-k2n9g          1/1     Running   0          158m
replicaset-demo-n7fmk          1/1     Running   0          158m
replicaset-demo-q4dc6          1/1     Running   0          158m
replicaset-demo-rsl7q          1/1     Running   0          3h14m
replicaset-demo-twknl          1/1     Running   0          3h14m
[root@master01 ~]# kubectl get pod|grep "^deploy.*" |wc -l
12
[root@master01 ~]#

Tip: there are two pods more than the desired count because our update strategy allows at most 2 pods above the user's desired replica count.

Resume the update

[root@master01 ~]# kubectl rollout resume deploy/deploy-demo && kubectl rollout status deploy/deploy-demo
deployment.apps/deploy-demo resumed
Waiting for deployment "deploy-demo" rollout to finish: 3 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 9 of 10 updated replicas are available...
Waiting for deployment "deploy-demo" rollout to finish: 9 of 10 updated replicas are available...
deployment "deploy-demo" successfully rolled out
[root@master01 ~]#

Tip: resume resumes the update that was just paused; status tracks the progress of the corresponding rollout.
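The "no fewer than 9, no more than 12" window observed during the rollout follows directly from the RollingUpdate strategy fields. A minimal sketch, assuming replicas=10, maxSurge=2 and maxUnavailable=1 (the absolute values used in this example; Kubernetes also accepts percentages, which are not handled here):

```python
def rolling_update_window(replicas: int, max_surge: int, max_unavailable: int):
    """Pod-count bounds a Deployment enforces during a RollingUpdate.

    The controller never lets the number of available pods fall below
    replicas - maxUnavailable, and never lets the total pod count rise
    above replicas + maxSurge.
    """
    return replicas - max_unavailable, replicas + max_surge

low, high = rolling_update_window(replicas=10, max_surge=2, max_unavailable=1)
print(low, high)  # 9 12
```

Setting maxUnavailable to 0 forces a strictly "create first, delete after" rollout, at the cost of always needing surge capacity on the cluster.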