
Kubernetes 1.17 Cluster Deployment Summary

Published: 2019-12-27 06:23:07

This post builds a one-master, multi-node cluster using the Ansible scripts provided under Easypack. No obvious problems came up with this release; it even resolves the issue from 1.16 where `kubectl get cs` displayed `unknown`.

Deployment Method

For the detailed procedure, see: https://blog.csdn.net/liumiaocn/article/details/103725251

Cluster Deployment

Cluster description

| Host    | IP              | OS         | Master | kube-apiserver | kube-scheduler | kube-controller-manager | ETCD      | Node | Flannel   | Docker    | kubelet   | kube-proxy |
| ------- | --------------- | ---------- | ------ | -------------- | -------------- | ----------------------- | --------- | ---- | --------- | --------- | --------- | ---------- |
| host131 | 192.168.163.131 | CentOS 7.6 | Yes    | Installed      | Installed      | Installed               | Installed | Yes  | Installed | Installed | Installed | Installed  |
| host132 | 192.168.163.132 | CentOS 7.6 | -      | -              | -              | -                       | -         | Yes  | Installed | Installed | Installed | Installed  |
| host133 | 192.168.163.133 | CentOS 7.6 | -      | -              | -              | -                       | -         | Yes  | Installed | Installed | Installed | Installed  |
| host134 | 192.168.163.134 | CentOS 7.6 | -      | -              | -              | -                       | -         | Yes  | Installed | Installed | Installed | Installed  |

hosts preparation
[root@host131 ansible]# cat hosts.multi-nodes 
# kubernetes : master
[master-nodes]
host131 var_master_host=192.168.163.131 var_master_node_flag=True

# kubernetes : node
[agent-nodes]
host131 var_node_host=192.168.163.131 var_etcd_host=192.168.163.131 var_master_host=192.168.163.131 var_master_node_flag=True
host132 var_node_host=192.168.163.132 var_etcd_host=192.168.163.131 var_master_host=192.168.163.131 var_master_node_flag=False
host133 var_node_host=192.168.163.133 var_etcd_host=192.168.163.131 var_master_host=192.168.163.131 var_master_node_flag=False
host134 var_node_host=192.168.163.134 var_etcd_host=192.168.163.131 var_master_host=192.168.163.131 var_master_node_flag=False

# kubernetes : etcd
[etcd]
host131 var_etcd_host=192.168.163.131
[root@host131 ansible]#
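Before running the playbook, it can be worth a quick sanity check that the inventory groups contain the hosts you expect. A minimal sketch (the parsing assumes an INI-style inventory like `hosts.multi-nodes` above; the demo file below is illustrative):

```shell
# Sketch: list the hosts declared under a given inventory group.
# Assumes an INI-style inventory like hosts.multi-nodes above.
list_group() {
  # $1 = inventory file, $2 = group name (e.g. agent-nodes)
  awk -v grp="[$2]" '
    $0 == grp      { in_grp = 1; next }       # entered the target group
    /^\[/          { in_grp = 0 }             # any other header ends it
    in_grp && NF && $0 !~ /^#/ { print $1 }   # first field is the host name
  ' "$1"
}

# Example: write a tiny demo inventory and list its agent nodes.
cat > /tmp/hosts.demo <<'EOF'
[master-nodes]
host131 var_master_host=192.168.163.131

[agent-nodes]
host131 var_node_host=192.168.163.131
host132 var_node_host=192.168.163.132
EOF

list_group /tmp/hosts.demo agent-nodes   # prints host131, then host132
```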
Cluster deployment
[root@host131 ansible]# ansible-playbook 20.multi-nodes.yml 

PLAY [agent-nodes] *********************************************************************************************************************

TASK [clean : stop services] ***********************************************************************************************************
changed: [host134]
changed: [host132]
changed: [host133]
changed: [host131]
... (output omitted)
PLAY RECAP *****************************************************************************************************************************
host131                    : ok=94   changed=81   unreachable=0    failed=0   
host132                    : ok=56   changed=46   unreachable=0    failed=0   
host133                    : ok=56   changed=46   unreachable=0    failed=0   
host134                    : ok=56   changed=46   unreachable=0    failed=0   

[root@host131 ansible]#
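When driving the deployment from a script, the PLAY RECAP can be checked for failures rather than eyeballed. A sketch, assuming only the recap line format shown above:

```shell
# Sketch: exit non-zero if any host in an ansible-playbook PLAY RECAP
# reports failed or unreachable tasks. Reads the recap text on stdin.
recap_ok() {
  awk '
    /unreachable=/ {
      for (i = 1; i <= NF; i++)
        if ($i ~ /^(failed|unreachable)=/) {
          split($i, kv, "=")
          if (kv[2] + 0 > 0) bad = 1   # any non-zero count means failure
        }
    }
    END { exit bad }
  '
}

# Example against recap lines like the ones above:
recap='host131 : ok=94 changed=81 unreachable=0 failed=0
host132 : ok=56 changed=46 unreachable=0 failed=0'
echo "$recap" | recap_ok && echo "playbook succeeded"   # prints "playbook succeeded"
```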
Result Verification
  • Version check
[root@host131 ansible]# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
[root@host131 ansible]#
  • Node check
[root@host131 ansible]# kubectl get nodes -o wide
NAME              STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
192.168.163.131   Ready    <none>   41s   v1.17.0   192.168.163.131   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
192.168.163.132   Ready    <none>   45s   v1.17.0   192.168.163.132   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
192.168.163.133   Ready    <none>   45s   v1.17.0   192.168.163.133   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
192.168.163.134   Ready    <none>   45s   v1.17.0   192.168.163.134   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.7
[root@host131 ansible]#
  • kubectl get cs
[root@host131 ansible]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
[root@host131 ansible]#

The problem seen in 1.16 is gone.
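To verify node readiness from a script rather than by eye, the `kubectl get nodes` output can be filtered. A minimal sketch, assuming only the column layout shown above (STATUS in the second column):

```shell
# Sketch: count nodes not in Ready state from `kubectl get nodes` output.
# Pipe the command output in, e.g.: kubectl get nodes | not_ready_count
not_ready_count() {
  awk 'NR > 1 && $2 != "Ready" { n++ } END { print n + 0 }'   # skip header row
}

# Example with a captured table like the one above:
nodes='NAME              STATUS   ROLES    AGE   VERSION
192.168.163.131   Ready    <none>   41s   v1.17.0
192.168.163.132   Ready    <none>   45s   v1.17.0'
echo "$nodes" | not_ready_count   # prints 0
```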

Issue Summary

During deployment the following error occasionally appears. The root cause is undetermined; rerunning the playbook makes it go away.

"error: the server doesn't have a resource type \"clusterrolebinding\"\nerror: no matches for kind \"ClusterRoleBinding\" in version \"rbac.authorization.k8s.io/v1beta1\"", "stderr_lines": ["error: the server doesn't have a resource type \"clusterrolebinding\"", "error: no matches for kind \"ClusterRoleBinding\" in version \"rbac.authorization.k8s.io/v1beta1\""],
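The message pattern (first "no such resource type", then "no matches for kind") suggests a timing issue: the RBAC API group may not yet be registered when the binding is applied, which would explain why a rerun succeeds. Separately, `rbac.authorization.k8s.io/v1` has been stable since Kubernetes 1.8, so manifests can target it instead of `v1beta1`. A hypothetical binding for illustration (the binding name and user below are placeholders, not part of the Easypack scripts):

```shell
# Hypothetical ClusterRoleBinding using the stable v1 RBAC API.
# The binding name and subject user are illustrative placeholders.
cat <<'EOF' > clusterrolebinding-demo.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: demo-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: demo-user
EOF
# Apply once the API server is up:
# kubectl apply -f clusterrolebinding-demo.yaml
```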