Cluster planning

Network layout:
Node network: 192.168.18.0/24
Service network: 10.96.0.0/12
Pod network: 10.244.0.0/16
etcd is deployed on the master node.
Deployment options: Ansible — playbooks implementing this deployment are available on GitHub.
Deploying with kubeadm — the basics (see the kubeadm project page):
master and nodes: install kubelet, kubeadm, and docker
master: kubeadm init
node: kubeadm join
The apiserver, scheduler, controller-manager, and etcd run as Pods on the master.
kube-proxy runs as a Pod on every node.
The control-plane Pods above are static Pods; kube-proxy is managed by a DaemonSet rather than running as a static Pod.
Every node also needs to run flannel (again as a Pod) to provide the Pod network.
About kubeadm

Installation steps: install kubelet, kubeadm, and docker on the master and every node.
Run kubeadm init on the master.
Run kubeadm join on each node to have it join the cluster.
Starting the deployment

My environment:

[root@master yum.repos.d]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@master yum.repos.d]# uname -a
Linux master 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

Name resolution

The nodes resolve each other through the hosts file:

192.168.18.128 master.test.com master
192.168.18.129 node01.test.com node01
192.168.18.130 node02.test.com node02
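The same three entries must exist on every machine. One way to push them out (a sketch; it assumes root SSH access between the nodes is already set up, as in the trust configuration mentioned below):

```shell
# Append the cluster entries to the local hosts file (run once on the master)
cat >> /etc/hosts <<'EOF'
192.168.18.128 master.test.com master
192.168.18.129 node01.test.com node01
192.168.18.130 node02.test.com node02
EOF

# Copy the finished file to the two worker nodes
for h in node01 node02; do
    scp /etc/hosts root@"$h":/etc/hosts
done
```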
The cluster's clocks should be synchronized against a time server; I did not set that up here.
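If you do want time synchronization, chrony is the stock choice on CentOS 7. A minimal sketch, to be run on all three machines (ntp.aliyun.com is only an example server — substitute whatever NTP source your network can reach):

```shell
yum install -y chrony

# Comment out the default pool servers and point at one reachable NTP server
sed -i 's/^server /#server /' /etc/chrony.conf
echo 'server ntp.aliyun.com iburst' >> /etc/chrony.conf

systemctl enable --now chronyd
chronyc sources    # verify the server is being tracked
```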
Passwordless SSH trust between the nodes can be configured by following this document.
Version: Kubernetes v1.11.2.

Getting started: make sure iptables/firewalld are not running and will not start at boot, then configure the yum repositories to use the Aliyun mirrors (link).
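Concretely, the preparation on all three machines can be sketched as follows (my own summary, not from the original init output; kubeadm's preflight checks also complain about SELinux enforcing mode and about swap — which is why --ignore-preflight-errors=Swap appears later — and the two bridge sysctls are verified again after docker is started):

```shell
# Firewall: stop it now and keep it off after reboots
systemctl stop firewalld
systemctl disable firewalld

# SELinux: permissive now and after reboots
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Bridged traffic must pass through iptables: load br_netfilter and persist
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
```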
Fetch the docker repo file with:

cd /etc/yum.repos.d
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

The kubernetes repo:
[root@master yum.repos.d]# cat kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1

Check that the repos are active:

# yum clean all
# yum repolist
Determining fastest mirrors
repo id                   repo name                      status
base/7/x86_64             CentOS-7 - Base - 163.com      9,911
docker-ce-stable/x86_64   Docker CE Stable - x86_64      16
extras/7/x86_64           CentOS-7 - Extras - 163.com    370
kubernetes                Kubernetes Repo                243
updates/7/x86_64          CentOS-7 - Updates - 163.com   1,054
repolist: 11,594

Installing the packages
All three machines need the software. Install it with:

yum install docker-ce kubelet kubeadm kubectl
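A plain yum install pulls whatever is newest in the repo at the time. To reproduce exactly the versions used in this article, the packages can be pinned (a sketch; the version strings are taken from the install log below):

```shell
yum install -y docker-ce-18.06.0.ce \
    kubelet-1.11.2 kubeadm-1.11.2 kubectl-1.11.2
```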
The installed packages are:
Installing : kubectl-1.11.2-0.x86_64              1/7
Installing : cri-tools-1.11.0-0.x86_64            2/7
Installing : socat-1.7.3.2-2.el7.x86_64           3/7
Installing : kubernetes-cni-0.6.0-0.x86_64        4/7
Installing : kubelet-1.11.2-0.x86_64              5/7
Installing : kubeadm-1.11.2-0.x86_64              6/7
Installing : docker-ce-18.06.0.ce-3.el7.x86_64    7/7

Starting the docker service
The kubernetes images are hosted on Google's container registry and cannot be downloaded directly from mainland China, so a proxy has to be configured.
Add the following two lines to /usr/lib/systemd/system/docker.service:
[root@master ~]# cat /usr/lib/systemd/system/docker.service | grep PROXY
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8,192.168.18.0/24"
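Editing the unit file shipped by the RPM works, but the change will be overwritten on a package upgrade. A systemd drop-in achieves the same effect in a file of your own (a sketch using the same proxy values as above):

```shell
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8,192.168.18.0/24"
EOF
systemctl daemon-reload
systemctl restart docker
```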
Then reload systemd and start docker:

systemctl daemon-reload
systemctl start docker
systemctl enable docker

Confirm that both of these proc parameters are 1:

[root@master ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
[root@master ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1

Configuring kubelet
Check which files the kubelet package installed:
[root@master ~]# rpm -ql kubelet
/etc/kubernetes/manifests           # manifest directory
/etc/sysconfig/kubelet              # configuration file
/etc/systemd/system/kubelet.service # unit file
/usr/bin/kubelet                    # main binary
The default configuration file:

[root@master ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=

Change the kubelet configuration to:

[root@master ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

kubelet cannot start successfully at this point; for now just enable it at boot with: systemctl enable kubelet
kubeadm init — run on the master:
kubeadm init --kubernetes-version=v1.11.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
The full output of kubeadm init is available at this link.
The command downloaded the following images:
[root@master ~]# docker images
REPOSITORY                                  TAG       IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy-amd64                 v1.11.2   46a3cd725628   7 days ago     97.8MB
k8s.gcr.io/kube-apiserver-amd64             v1.11.2   821507941e9c   7 days ago     187MB
k8s.gcr.io/kube-controller-manager-amd64    v1.11.2   38521457c799   7 days ago     155MB
k8s.gcr.io/kube-scheduler-amd64             v1.11.2   37a1403e6c1a   7 days ago     56.8MB
k8s.gcr.io/coredns                          1.1.3     b3b94275d97c   2 months ago   45.6MB
k8s.gcr.io/etcd-amd64                       3.2.18    b8df3b177be2   4 months ago   219MB
k8s.gcr.io/pause                            3.1       da86e6ba6ca1   7 months ago   742kB
The containers now running:
[root@master ~]# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         NAMES
1c03e043e6b7   46a3cd725628           "/usr/local/bin/kube…"   3 minutes ago   Up 3 minutes   k8s_kube-proxy_kube-proxy-6fgjm_kube-system_f85e8660-a090-11e8-8ee7-000c29f71e04_0
5f166bd11566   k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago   Up 3 minutes   k8s_POD_kube-proxy-6fgjm_kube-system_f85e8660-a090-11e8-8ee7-000c29f71e04_0
0f306f98cc52   b8df3b177be2           "etcd --advertise-cl…"   3 minutes ago   Up 3 minutes   k8s_etcd_etcd-master_kube-system_2cc1c8a24b68ab9b46bca47e153e74c6_0
8f01317b9e20   37a1403e6c1a           "kube-scheduler --ad…"   3 minutes ago   Up 3 minutes   k8s_kube-scheduler_kube-scheduler-master_kube-system_a00c35e56ebd0bdfcd77d53674a5d2a1_0
4e6a71ab20d3   821507941e9c           "kube-apiserver --au…"   3 minutes ago   Up 3 minutes   k8s_kube-apiserver_kube-apiserver-master_kube-system_d25d40ebb427821464356bb27a38f487_0
69e4c5dae335   38521457c799           "kube-controller-man…"   3 minutes ago   Up 3 minutes   k8s_kube-controller-manager_kube-controller-manager-master_kube-system_6363f7ebf727b0b95d9a9ef72516a0e5_0
da5981dc546a   k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago   Up 3 minutes   k8s_POD_kube-controller-manager-master_kube-system_6363f7ebf727b0b95d9a9ef72516a0e5_0
b7a8fdc35029   k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago   Up 3 minutes   k8s_POD_kube-apiserver-master_kube-system_d25d40ebb427821464356bb27a38f487_0
b09efc7ff7bd   k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago   Up 3 minutes   k8s_POD_kube-scheduler-master_kube-system_a00c35e56ebd0bdfcd77d53674a5d2a1_0
ab11d6ffadab   k8s.gcr.io/pause:3.1   "/pause"                 3 minutes ago   Up 3 minutes   k8s_POD_etcd-master_kube-system_2cc1c8a24b68ab9b46bca47e153e74c6_0
A node can join the cluster with:

kubeadm join 192.168.18.128:6443 --token n84v6t.c7d83cn4mo2z8wyr --discovery-token-ca-cert-hash sha256:b946c145416fe1995e1d4d002c149e71a897acc7b106d94cee2920cb2c85ce29
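Note that the bootstrap token embedded in that command expires after 24 hours by default. If a node joins later than that, a fresh token and its matching join command can be generated on the master:

```shell
# Show the existing tokens and their TTLs
kubeadm token list

# Create a new token and print the complete kubeadm join command
kubeadm token create --print-join-command
```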
The kubeadm init output asks us to perform the following steps as a regular user; I ran them as root here:
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
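For an actual non-root user, the kubeadm init output additionally chowns the file so that the user owns the kubeconfig; the full suggested sequence is:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```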
Now kubectl get can query all kinds of resources, for example:
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
The listening sockets at this point:
[root@master ~]# ss -tnl
State   Recv-Q  Send-Q  Local Address:Port    Peer Address:Port
LISTEN  0       128     127.0.0.1:2379        *:*
LISTEN  0       128     127.0.0.1:10251       *:*
LISTEN  0       128     127.0.0.1:2380        *:*
LISTEN  0       128     127.0.0.1:10252       *:*
LISTEN  0       128     *:22                  *:*
LISTEN  0       128     127.0.0.1:33881       *:*
LISTEN  0       100     127.0.0.1:25          *:*
LISTEN  0       128     192.168.18.128:10010  *:*
LISTEN  0       128     127.0.0.1:10248       *:*
LISTEN  0       128     127.0.0.1:10249       *:*
LISTEN  0       128     :::6443               :::*
LISTEN  0       128     :::10256              :::*
LISTEN  0       128     :::22                 :::*
LISTEN  0       100     ::1:25                :::*
LISTEN  0       128     :::10250              :::*
Node status at this point:
[root@master ~]# kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master    NotReady   master    9m        v1.11.2
The status is NotReady because no Pod network exists yet; flannel needs to be deployed (link).
Deploying flannel: the flannel documentation gives the following command:
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Check the result as follows:
[root@master ~]# kubectl get pods -n kube-system
NAME                             READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-cv4gp         1/1       Running   0          13m
coredns-78fcdf6894-wmd25         1/1       Running   0          13m
etcd-master                      1/1       Running   0          49s
kube-apiserver-master            1/1       Running   0          49s
kube-controller-manager-master   1/1       Running   0          48s
kube-flannel-ds-amd64-r42wr      1/1       Running   0          2m
kube-proxy-6fgjm                 1/1       Running   0          13m
kube-scheduler-master            1/1       Running   0          48s
[root@master ~]# docker images | grep flannel
quay.io/coreos/flannel   v0.10.0-amd64   f0fad859c909   6 months ago   44.6MB
[root@master ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    14m       v1.11.2
The master node's status is now Ready.
Running kubeadm join on the nodes: take the kubeadm join command printed by kubeadm init on the master and execute it on each node to join the cluster.
[root@node01 ~]# kubeadm join 192.168.18.128:6443 --token n84v6t.c7d83cn4mo2z8wyr --discovery-token-ca-cert-hash sha256:b946c145416fe1995e1d4d002c149e71a897acc7b106d94cee2920cb2c85ce29 --ignore-preflight-errors=Swap
[preflight] running pre-flight checks
	[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh]
or no builtin kernel ipvs support: map[ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{}]
you can solve this problem with following methods:
 1. Run "modprobe -- " to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
	[WARNING Swap]: running with swap on is not supported. Please disable swap
I0815 22:02:30.751069   15460 kernel_validator.go:81] Validating kernel version
I0815 22:02:30.751145   15460 kernel_validator.go:96] Validating kernel config
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.0-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "192.168.18.128:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.18.128:6443"
[discovery] Requesting info from "https://192.168.18.128:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.18.128:6443"
[discovery] Successfully established connection with API Server "192.168.18.128:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run "kubectl get nodes" on the master to see this node join the cluster.
After running the join command on both nodes, check from the master:
[root@master ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    23m       v1.11.2
node01    Ready     <none>    2m        v1.11.2
node02    Ready     <none>    1m        v1.11.2
The deployment succeeded.
Post-deployment observations

Check the Pods that are now running:

[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE
coredns-78fcdf6894-cv4gp         1/1     Running   0          28m   10.244.0.3       master
coredns-78fcdf6894-wmd25         1/1     Running   0          28m   10.244.0.2       master
etcd-master                      1/1     Running   0          15m   192.168.18.128   master
kube-apiserver-master            1/1     Running   0          15m   192.168.18.128   master
kube-controller-manager-master   1/1     Running   0          15m   192.168.18.128   master
kube-flannel-ds-amd64-48rvq      1/1     Running   3          6m    192.168.18.130   node02
kube-flannel-ds-amd64-7dw42      1/1     Running   3          7m    192.168.18.129   node01
kube-flannel-ds-amd64-r42wr      1/1     Running   0          16m   192.168.18.128   master
kube-proxy-6fgjm                 1/1     Running   0          28m   192.168.18.128   master
kube-proxy-6mngv                 1/1     Running   0          7m    192.168.18.129   node01
kube-proxy-9sh2n                 1/1     Running   0          6m    192.168.18.130   node02
kube-scheduler-master            1/1     Running   0          15m   192.168.18.128   master
We can observe that:
kube-apiserver, kube-scheduler, kube-controller-manager, and etcd run on the master;
kube-flannel runs on all three nodes;
kube-proxy runs on all three nodes.