This article explains how to set up a Kubernetes cluster. The method described is simple and quick to follow, and highly practical; interested readers are encouraged to work through it.
We will use kubeadm to build a single-node Kubernetes instance, for learning purposes only. The runtime environment and software are summarized as follows:
| Component | Version | Notes |
|---|---|---|
| OS | Ubuntu 18.04 | 192.168.132.152 my.servermaster.local / 192.168.132.154 my.worker01.local |
| Docker | 18.06.1~ce~3-0~ubuntu | Highest version supported by the latest k8s (1.12.3); must be pinned |
| Kubernetes | 1.12.3 | Target software |
The system and software above are roughly the latest as of November 2018. Note that Docker must be installed at a version that k8s supports.
Disable the system swap partition:

```bash
swapoff -a
```
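To keep swap disabled across reboots, a common companion step (not part of the original walkthrough) is to comment out the swap entry in /etc/fstab:

```bash
# Comment out any swap line in /etc/fstab so swap stays off after a reboot
sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```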
Install the container runtime. Docker is the default, so installing Docker is sufficient:

```bash
apt-get install docker-ce=18.06.1~ce~3-0~ubuntu
```
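Since the version must stay pinned, it may also help to hold the package, mirroring the `apt-mark hold` used for the k8s packages below:

```bash
# Prevent apt upgrades from moving Docker past the k8s-supported version
apt-mark hold docker-ce
```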
Install kubeadm. The commands below match the official ones, except that the package source is changed to the Aliyun mirror:
```bash
apt-get update && apt-get install -y apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
```
Because k8s.gcr.io is not reachable from mainland China, the required images must be downloaded in advance. Here they are pulled from the Aliyun image registry, and the downloaded images are then re-tagged as k8s.gcr.io.
```bash
# a. List the images that need to be downloaded
kubeadm config images list --kubernetes-version=v1.12.3
k8s.gcr.io/kube-apiserver:v1.12.3
k8s.gcr.io/kube-controller-manager:v1.12.3
k8s.gcr.io/kube-scheduler:v1.12.3
k8s.gcr.io/kube-proxy:v1.12.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2

# b. Create a script that automates pull image -> re-tag -> remove old tag
vim ./load_images.sh
```

```bash
#!/bin/bash

### config the image map
declare -A images
images["k8s.gcr.io/kube-apiserver:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3"
images["k8s.gcr.io/kube-controller-manager:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3"
images["k8s.gcr.io/kube-scheduler:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3"
images["k8s.gcr.io/kube-proxy:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.3"
images["k8s.gcr.io/pause:3.1"]="registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1"
images["k8s.gcr.io/etcd:3.2.24"]="registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24"
images["k8s.gcr.io/coredns:1.2.2"]="registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2"

### re-tag foreach: pull from the mirror, re-tag as k8s.gcr.io, remove the mirror tag
for key in "${!images[@]}"
do
    docker pull ${images[$key]}
    docker tag ${images[$key]} $key
    docker rmi ${images[$key]}
done

### check
docker images
```

```bash
# c. Run the script to prepare the images
sudo chmod +x load_images.sh
./load_images.sh
```
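As a variant, the image map can be derived directly from the `kubeadm config images list` output instead of being maintained by hand; a minimal sketch, assuming the Aliyun mirror keeps the same image names:

```bash
#!/bin/bash
# Pull each required image from the Aliyun mirror, re-tag it as k8s.gcr.io, drop the mirror tag
for target in $(kubeadm config images list --kubernetes-version=v1.12.3); do
    mirror="registry.cn-hangzhou.aliyuncs.com/google_containers/${target##*/}"
    docker pull $mirror
    docker tag $mirror $target
    docker rmi $mirror
done
```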
Initialization needs at least two parameters:

- kubernetes-version: prevents kubeadm from reaching out to the internet to determine the version
- pod-network-cidr: required by the flannel network plugin configuration
```bash
### Run the initialization command
sudo kubeadm init --kubernetes-version=v1.12.3 --pod-network-cidr=10.244.0.0/16

### The end of the output looks like this
... ...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9
```
Following the instructions in the output above, configure kubectl for the regular user:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Check the node status with the non-root account:
```bash
kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
servermaster   NotReady   master   28m   v1.12.3
```
There is now one master node, but its status is NotReady. At this point a decision has to be made:

If you want a single-machine cluster, remove the master taint so pods can be scheduled on it:

```bash
kubectl taint nodes --all node-role.kubernetes.io/master-
```

If you want to keep building a multi-node cluster, continue with the steps below; the master node's status can be ignored for now.
View the contents of the kube-flannel.yml file and copy them to a local file, in case the terminal cannot fetch the remote file.
```bash
kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
```
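To confirm flannel is up, it can help to check that the flannel DaemonSet pods reach Running and that the nodes turn Ready shortly afterwards:

```bash
# The flannel pods live in the kube-system namespace
kubectl -n kube-system get pods -o wide | grep flannel
kubectl get nodes
```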
To add a worker node, repeat the basic installation from [1. Installation steps] on another server. Worker nodes do not need steps 2.1~2.3 or anything after them; only the basic installation is required. Once it is installed, log in to the new worker node and run the join command obtained at the end of the previous step:
```bash
kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9
... ...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
```bash
kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
servermaster   Ready    master   94m   v1.12.3
worker01       Ready    <none>   54m   v1.12.3
```
Copy the contents of kubernetes-dashboard.yaml to a local file, in case the command line cannot access the remote file. Edit the last section, the Dashboard Service, adding type and nodePort, so the result looks like this:
```yaml
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
```
On the master node, run the command that creates the dashboard service:
```bash
kubectl create -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
```
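Once created, the NodePort exposure can be double-checked before opening the browser (a quick sanity check):

```bash
# The PORT(S) column should show 443:30000/TCP
kubectl -n kube-system get service kubernetes-dashboard
```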
In a browser, visit the worker node's IP and port over HTTPS, e.g. https://my.worker01.local:30000/#!/login, to verify that the dashboard is installed successfully.
Use kubectl to get the secret, then retrieve the detailed token from it; paste the token into the Token option on the login page from the previous step to log in.
```bash
### View the secrets: list all secrets in the kube-system namespace
kubectl -n kube-system get secret
NAME                                             TYPE                                  DATA   AGE
clusterrole-aggregation-controller-token-vxzmt   kubernetes.io/service-account-token   3      10h

### View the token; here the clusterrole-aggregation-controller-token-***** secret is chosen
kubectl -n kube-system describe secret clusterrole-aggregation-controller-token-vxzmt
Name:         clusterrole-aggregation-controller-token-vxzmt
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: clusterrole-aggregation-controller
              kubernetes.io/service-account.uid: dfb9d9c3-f646-11e8-9861-000c29b7e604

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjbHVzdGVycm9sZS1hZ2dyZWdhdGlvbi1jb250cm9sbGVyLXRva2VuLXZ4em10Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXJyb2xlLWFnZ3JlZ2F0aW9uLWNvbnRyb2xsZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkZmI5ZDljMy1mNjQ2LTExZTgtOTg2MS0wMDBjMjliN2U2MDQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Y2x1c3RlcnJvbGUtYWdncmVnYXRpb24tY29udHJvbGxlciJ9.MfjiMrmyKl1GUci1ivD5RNrzo_s6wxXwFzgM_3NIAmTCfRQreYdhat3yyd6agaCLpUResnNC0ZGRi4CBs_Jgjqkovhb80V05_YVIvCrlf7xHxBKEtGfkJ-qLDvtAwR5zrXNNd0Ge8hTRxw67gZ3lGMkPpw5nfWmc0rzk90xTTQD1vAtrHMvxjr3dVXph5rT8GNuCSXA_J6o2AwYUbaKCc2ugdx8t8zX6oFJfVcw0ZNYYYIyxoXzzfhdppORtKR9t9v60KsI_-q0TxY-TU-JBtzUJU-hL6lB5MOgoBWpbQiV-aG8Ov74nDC54-DH7EhYEzzsLci6uUQCPlHNvLo_J2A
```
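To skip the manual copy from the `describe` output, the token can also be pulled straight from the secret (assuming the same secret name as above):

```bash
# Extract and decode only the token field
kubectl -n kube-system get secret clusterrole-aggregation-controller-token-vxzmt \
  -o jsonpath='{.data.token}' | base64 --decode
```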
Problem: the master is set up and the worker has joined, but `kubectl get nodes` still shows NotReady.

Cause: too involved to state precisely and still an open k8s issue; reading the issue, it is basically a CNI (Container Network Interface) problem, which installing flannel papers over.

Fix: install the flannel plugin (`kubectl apply -f kube-flannel.yml`).
Problem: the configuration is wrong and the cluster needs to be rebuilt from scratch.

Fix: `kubeadm reset`.
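A minimal sketch of the rebuild cycle on the master, reusing the init parameters from above:

```bash
# Wipe this node's cluster state, then initialize again
sudo kubeadm reset
sudo kubeadm init --kubernetes-version=v1.12.3 --pod-network-cidr=10.244.0.0/16
```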
Problem: the dashboard cannot be accessed.

Cause: `Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0"`.

Fix, either of the following:

1. In the kubernetes-dashboard.yaml file, change k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0 to registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0.
2. Download the image in advance and set up the tag, as sketched below; note that the download must happen on the worker node. Running `kubectl describe pod kubernetes-dashboard-85477d54d7-wzt7 -n kube-system` shows more concrete information.
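A sketch of option 2, following the same mirror-and-retag pattern as the load_images.sh script above:

```bash
# Run on the worker node: pull from the Aliyun mirror, re-tag as k8s.gcr.io, drop the mirror tag
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
```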
Problem: how to increase the token expiry time.

Cause: the default is 15 minutes.

Fix:

If the dashboard has not been created yet: in kubernetes-dashboard.yaml, add one line, `- --token-ttl=86400`, to the args of the containers section of the Dashboard Deployment (the value is in seconds and can be customized), like this:

```yaml
... ...
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          - --token-ttl=86400
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
... ...
```

If the dashboard has already been created: run `kubectl -n kube-system edit deployment kubernetes-dashboard` and add the same `- --token-ttl=86400` line to the args of the containers section of the Deployment, exactly as in the pre-creation edit.
Run `kubeadm completion --help` for usage details; the bash completion commands are given directly here:
```bash
kubeadm completion bash > ~/.kube/kubeadm_completion.bash.inc
printf "\n# Kubeadm shell completion\nsource '$HOME/.kube/kubeadm_completion.bash.inc'\n" >> $HOME/.bash_profile
source $HOME/.bash_profile
```
Run `kubectl completion --help` for usage details; the bash completion commands are below. Note that the second command spans multiple lines: do not copy it all at once; copy the printf line first, then the rest.
```bash
kubectl completion bash > ~/.kube/completion.bash.inc
printf "
# Kubectl shell completion
source '$HOME/.kube/completion.bash.inc'
" >> $HOME/.bash_profile
source $HOME/.bash_profile
```
Create a secret, then add an imagePullSecrets entry where the image is specified. Creating and inspecting the secret looks like this:
```bash
kubectl create secret docker-registry regcred --docker-server=registry.domain.cn:5001 --docker-username=xxxxx --docker-password=xxxxx --docker-email=jimmy.w@aliyun.com
kubectl get secret regcred --output=yaml
kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
```
Configure imagePullSecrets as follows (note that imagePullSecrets belongs at the pod spec level, as a sibling of containers, not inside a container definition):
```yaml
... ...
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: mirage
        image: registry.domain.cn:5001/mirage:latest
        ports:
        - containerPort: 3008
          protocol: TCP
        volumeMounts:
... ...
```
If there are special entries, or entries that previously lived in /etc/hosts, they can be configured through hostAliases. These behave like the local hosts file, and the hostAliases entries are written into the container's /etc/hosts. Concrete usage:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"
```
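Since the pod above simply cats /etc/hosts and exits, the effect can be verified from its logs (assuming the manifest is saved as hostaliases-pod.yaml):

```bash
kubectl apply -f hostaliases-pod.yaml
# The log output is the container's /etc/hosts, including the foo.*/bar.* aliases
kubectl logs hostaliases-pod
```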
This concludes the walkthrough of how to set up a Kubernetes cluster; hopefully you now have a deeper understanding of it. The best next step is to try it out in practice.