Quickly Deploying a Stable Kubernetes Cluster with kubeadm

杰哥的技术杂货铺 · Published 2021-02-04 21:09:52

Table of Contents
  • 1. Operating system initialization
    • 1.1 Set hostnames
    • 1.2 Configure hostname resolution in /etc/hosts
    • 1.3 Disable SELinux and the firewall
    • 1.4 Disable swap
    • 1.5 Set kernel parameters
    • 1.6 Set up passwordless SSH between nodes
    • 1.7 Configure Docker
  • 2. etcd deployment
    • 2.1 Build the binary
    • 2.2 Start etcd
    • 2.3 Join new nodes
  • 3. Kubernetes cluster deployment
    • 3.1 Bring up the master node
      • 3.1.1 Add the Kubernetes yum repository
      • 3.1.2 Install kubelet, kubeadm, and kubectl
      • 3.1.3 Preparation before running kubeadm
      • 3.1.4 Initialize the master node
      • 3.1.5 Check cluster status
    • 3.2 Deploy flannel
    • 3.3 Add worker nodes
  • 4. Harbor deployment
    • 4.1 Install docker-compose
    • 4.2 Configure Harbor
    • 4.3 Install and start Harbor
    • 4.4 Access Harbor
    • 4.5 Pitfalls

1. Operating system initialization

1.1 Set hostnames

Set the hostname on each node according to your plan (all nodes):

hostnamectl set-hostname <hostname>
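For example, matching the node plan used throughout this article, run the corresponding command on each machine:

hostnamectl set-hostname k8s-master   # on 192.168.21.209
hostnamectl set-hostname k8s-node1    # on 192.168.21.203
hostnamectl set-hostname k8s-node2    # on 192.168.21.202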
1.2 Configure hostname resolution in /etc/hosts

Add entries to /etc/hosts so the hostnames can be resolved (all nodes):

# cat /etc/hosts
192.168.21.209 k8s-master
192.168.21.203 k8s-node1
192.168.21.202 k8s-node2
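Rather than editing the file by hand, the same entries can be appended on every node with a heredoc, for example:

cat >> /etc/hosts <<'EOF'
192.168.21.209 k8s-master
192.168.21.203 k8s-node1
192.168.21.202 k8s-node2
EOF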
1.3 Disable SELinux and the firewall

All nodes:

# Disable the firewall
systemctl disable firewalld   # permanent
systemctl stop firewalld      # immediate

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent (takes effect after reboot)
setenforce 0   # immediate
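A quick way to confirm both changes took effect:

getenforce                      # expected: Permissive now, Disabled after a reboot
systemctl is-active firewalld   # expected: inactive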
1.4 Disable swap

All nodes:

swapoff -a   # immediate
sed -ri 's/.*swap.*/#&/' /etc/fstab   # permanent
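To verify that no swap is in use:

free -h | grep -i swap   # the Swap line should show all zeros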
1.5 Set kernel parameters

All nodes:

vim /etc/sysctl.conf

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

net.ipv4.ip_forward = 1

# Make iptables see bridged traffic
# NOTE: kube-proxy requires that /sys/module/br_netfilter exists on every node and that bridge-nf-call-iptables is set to 1. If either requirement is not met, kube-proxy merely logs a warning and keeps running, but some of the iptables rules it installs will not take effect.

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1


# Apply the configuration
sysctl -p
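The net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded, so if sysctl -p complains that they are unknown, load the module first and make it persistent (a minimal sketch using the standard modules-load.d mechanism):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # auto-load on boot
sysctl -p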
1.6 Set up passwordless SSH between nodes

All nodes (note: these examples use a non-default SSH port, 2322):

ssh-keygen -t rsa
ssh-copy-id k8s-master -p 2322
ssh-copy-id k8s-node1 -p 2322
ssh-copy-id k8s-node2 -p 2322
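To confirm key-based login works, each node should now be reachable without a password prompt, e.g. from k8s-master:

for h in k8s-node1 k8s-node2; do ssh -p 2322 $h hostname; done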
1.7 Configure Docker
  • Install Docker

Docker must be installed on all nodes, and the version must be the same everywhere. Docker installation itself is not covered here.

  • Edit the daemon.json configuration file
# vim /etc/docker/daemon.json
{
    "log-driver": "json-file",
    "log-opts": {"max-size":"10m", "max-file":"1"},
    "insecure-registries": ["http://baas-harbor.peogoo.com"],
    "registry-mirrors": ["https://zn14eon5.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "max-concurrent-downloads": 10,
    "data-root": "/opt/data/docker",
    "hosts": ["tcp://127.0.0.1:5000", "unix:///var/run/docker.sock"]
}

Parameter reference:


{
    "log-driver": "json-file", # default log driver for containers (default: "json-file")
    "insecure-registries": ["http://10.20.17.20"], # allow a private registry to be reached over http
    "registry-mirrors": ["https://zn14eon5.mirror.aliyuncs.com"], # Aliyun registry mirror (pull accelerator)
    "log-opts": {"max-size":"10m", "max-file":"1"},  # options for the default log driver
    "exec-opts": ["native.cgroupdriver=systemd"], # required by kubeadm on recent Kubernetes versions
    "max-concurrent-downloads": 10, # max concurrent downloads per pull (default: 3)
    "data-root": "/opt/data/docker", # root directory for Docker runtime data
    "hosts": ["tcp://0.0.0.0:5000", "unix:///var/run/docker.sock"], # requires removing "-H fd://" from the ExecStart line of the service file (/usr/lib/systemd/system/docker.service)
    "graph": "/opt/data/docker", # deprecated in future versions; use data-root instead
}
  • Edit the Docker systemd unit file
# vim /usr/lib/systemd/system/docker.service

#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock  --graph=/opt/data/docker
ExecStart=/usr/bin/dockerd --containerd=/run/containerd/containerd.sock
  • Restart Docker to apply the configuration
systemctl daemon-reload
systemctl restart docker
systemctl status docker

If Docker fails to start, run dockerd in the foreground to see the reason:

dockerd

The cause is almost always an error in /etc/docker/daemon.json.

If it still will not start, check the system log for details:

tail -f /var/log/messages
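Once Docker is running again, a quick sanity check that the daemon.json settings actually took effect (expected: the systemd cgroup driver and the /opt/data/docker data root):

docker info | grep -iE 'cgroup driver|docker root dir'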
2. etcd deployment

etcd is deployed using etcdadm.

Pros: certificates, etcd installation, and scale-out are all handled in one step.

Cons: very few etcd startup flags are supported; after installation, runtime parameters have to be adjusted through etcd.env.

2.1 Build the binary
# Install git
yum -y install git

# Fetch the source
cd /opt/tools/
git clone https://github.com/kubernetes-sigs/etcdadm.git; cd etcdadm

# Build (with a local Go toolchain):
make etcdadm

# Build (with a local Docker environment instead):
make container-build
2.2 Start etcd
  • Startup command:
/opt/tools/etcdadm/etcdadm init --certs-dir /etc/kubernetes/pki/etcd/ --install-dir /usr/bin/
  • The startup output looks like this:
INFO[0000] [install] Artifact not found in cache. Trying to fetch from upstream: https://github.com/coreos/etcd/releases/download 
INFO[0000] [install] Downloading & installing etcd https://github.com/coreos/etcd/releases/download from 3.4.9 to /var/cache/etcdadm/etcd/v3.4.9 
INFO[0000] [install] downloading etcd from https://github.com/coreos/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz to /var/cache/etcdadm/etcd/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz 
######################################################################## 100.0%
INFO[0003] [install] extracting etcd archive /var/cache/etcdadm/etcd/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz to /tmp/etcd103503474 
INFO[0003] [install] verifying etcd 3.4.9 is installed in /usr/bin/ 
INFO[0003] [certificates] creating PKI assets           
INFO[0003] creating a self signed etcd CA certificate and key files 
[certificates] Generated ca certificate and key.
INFO[0003] creating a new server certificate and key files for etcd 
[certificates] Generated server certificate and key.
[certificates] server serving cert is signed for DNS names [k8s-master] and IPs [192.168.21.209 127.0.0.1]
INFO[0004] creating a new certificate and key files for etcd peering 
[certificates] Generated peer certificate and key.
[certificates] peer serving cert is signed for DNS names [k8s-master] and IPs [192.168.21.209]
INFO[0004] creating a new client certificate for the etcdctl 
[certificates] Generated etcdctl-etcd-client certificate and key.
INFO[0004] creating a new client certificate for the apiserver calling etcd 
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki/etcd/"
INFO[0005] [health] Checking local etcd endpoint health 
INFO[0005] [health] Local etcd endpoint is healthy      
INFO[0005] To add another member to the cluster, copy the CA cert/key to its certificate dir and run: 
INFO[0005] 	etcdadm join https://192.168.21.209:2379  
  • Check that the etcd service is now running:
# systemctl status etcd.service

Note: a single-node etcd is complete at this point. If you do not need an etcd cluster, skip the following steps.

2.3 Join new nodes
  • Copy the generated certificates and the compiled etcdadm binary to the new node (run on an existing node):
# scp -P 2322 /etc/kubernetes/pki/etcd/ca.* 192.168.21.203:/etc/kubernetes/pki/etcd/
# scp -P 2322 /opt/tools/etcdadm/etcdadm 192.168.21.203:/opt/tools/
  • Join the new node to the etcd cluster

Run on the new node:

# /opt/tools/etcdadm join https://192.168.21.209:2379 --certs-dir /etc/kubernetes/pki/etcd/ --install-dir /usr/bin/

Note: the IP address is that of an existing cluster member.

The join output looks like this:

INFO[0000] [certificates] creating PKI assets           
INFO[0000] creating a self signed etcd CA certificate and key files 
[certificates] Using the existing ca certificate and key.
INFO[0000] creating a new server certificate and key files for etcd 
[certificates] Generated server certificate and key.
[certificates] server serving cert is signed for DNS names [k8s-node1] and IPs [192.168.21.203 127.0.0.1]
INFO[0000] creating a new certificate and key files for etcd peering 
[certificates] Generated peer certificate and key.
[certificates] peer serving cert is signed for DNS names [k8s-node1] and IPs [192.168.21.203]
INFO[0000] creating a new client certificate for the etcdctl 
[certificates] Generated etcdctl-etcd-client certificate and key.
INFO[0001] creating a new client certificate for the apiserver calling etcd 
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki/etcd/"
INFO[0001] [membership] Checking if this member was added 
INFO[0001] [membership] Member was not added            
INFO[0001] Removing existing data dir "/var/lib/etcd"   
INFO[0001] [membership] Adding member                   
INFO[0001] [membership] Checking if member was started  
INFO[0001] [membership] Member was not started          
INFO[0001] [membership] Removing existing data dir "/var/lib/etcd" 
INFO[0001] [install] Artifact not found in cache. Trying to fetch from upstream: https://github.com/coreos/etcd/releases/download 
INFO[0001] [install] Downloading & installing etcd https://github.com/coreos/etcd/releases/download from 3.4.9 to /var/cache/etcdadm/etcd/v3.4.9 
INFO[0001] [install] downloading etcd from https://github.com/coreos/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz to /var/cache/etcdadm/etcd/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz 
######################################################################## 100.0%
INFO[0003] [install] extracting etcd archive /var/cache/etcdadm/etcd/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz to /tmp/etcd662320697 
INFO[0003] [install] verifying etcd 3.4.9 is installed in /usr/bin/ 
INFO[0004] [health] Checking local etcd endpoint health 
INFO[0004] [health] Local etcd endpoint is healthy
  • Check cluster health (run on an existing node). First set ETCDCTL_API=3 and define an etcdctl alias that carries the certificates and endpoints:
# echo -e "export ETCDCTL_API=3\nalias etcdctl='etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --endpoints=https://192.168.21.209:2379,https://192.168.21.203:2379,https://192.168.21.202:2379 --write-out=table'" >> /root/.bashrc; source /root/.bashrc


# etcdctl endpoint health
+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://192.168.21.209:2379 |   true | 10.540029ms |       |
| https://192.168.21.202:2379 |   true | 11.533006ms |       |
| https://192.168.21.203:2379 |   true | 10.948705ms |       |
+-----------------------------+--------+-------------+-------+
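With the same alias, the member list should likewise show all three nodes (output omitted here):

# etcdctl member list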
3. Kubernetes cluster deployment

3.1 Bring up the master node

3.1.1 Add the Kubernetes yum repository

Add the Kubernetes yum repository (all nodes):
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
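An optional check that the repository is reachable and offers the expected packages:

yum makecache fast
yum list --showduplicates kubeadm | tail -5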
3.1.2 Install kubelet, kubeadm, and kubectl
  • Install kubelet, kubeadm, and kubectl (master node):
# yum -y install kubeadm-1.19.0 kubectl-1.19.0 kubelet-1.19.0

Note: if no version is specified, the latest version is installed:

# yum install kubeadm kubectl kubelet
  • Enable and start the kubelet service (master node):
# systemctl enable kubelet && systemctl start kubelet
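Confirm that the pinned versions were installed:

# kubeadm version -o short    # expected: v1.19.0
# kubectl version --client --short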
3.1.3 Preparation before running kubeadm

Preparation before running kubeadm (a yaml file is used to point the cluster at the external etcd).

  • Copy the etcd certificates into the kubernetes directory (run on the etcd node):
Note: if etcd and k8s-master are on different servers, run the following on the etcd node:
scp /etc/kubernetes/pki/etcd/ca.crt k8s-master:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/apiserver-etcd-client.crt k8s-master:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/apiserver-etcd-client.key k8s-master:/etc/kubernetes/pki/


Note: if etcd and k8s-master are on the same server, the following is enough:
cp /etc/kubernetes/pki/etcd/apiserver-etcd-client.crt /etc/kubernetes/pki/
cp /etc/kubernetes/pki/etcd/apiserver-etcd-client.key /etc/kubernetes/pki/
  • Write the kubeadm yaml file (master node):
# cd /opt/kubernetes/yaml

# cat > kubeadm-config.yml <<EOF
...

4. Harbor deployment

4.3 Install and start Harbor

  • Once the Harbor installer completes, all of its containers should be up:

# docker ps
CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS                            PORTS                       NAMES
...                                                                                                                                    8080/tcp                    nginx
8f6de08dac41        goharbor/harbor-core:v2.0.1          "/harbor/entrypoint.…"   6 seconds ago       Up 5 seconds (health: starting)                               harbor-core
6bf71d634d1f        goharbor/registry-photon:v2.0.1      "/home/harbor/entryp…"   7 seconds ago       Up 5 seconds (health: starting)   5000/tcp                    registry
7d249368c18e        goharbor/harbor-registryctl:v2.0.1   "/home/harbor/start.…"   7 seconds ago       Up 6 seconds (health: starting)                               registryctl
53a677135f83        goharbor/redis-photon:v2.0.1         "redis-server /etc/r…"   7 seconds ago       Up 6 seconds (health: starting)   6379/tcp                    redis
d94b3e718501        goharbor/harbor-db:v2.0.1            "/docker-entrypoint.…"   7 seconds ago       Up 5 seconds (health: starting)   5432/tcp                    harbor-db
5911494e5df4        goharbor/harbor-portal:v2.0.1        "nginx -g 'daemon of…"   7 seconds ago       Up 6 seconds (health: starting)   8080/tcp                    harbor-portal
c94a91def7be        goharbor/harbor-log:v2.0.1           "/bin/sh -c /usr/loc…"   7 seconds ago       Up 6 seconds (health: starting)   127.0.0.1:1514->10514/tcp   harbor-log
4.4 Access Harbor
  • Any server that needs to log in to Harbor must add the registry to its Docker configuration file, /etc/docker/daemon.json:
# cat /etc/docker/daemon.json 
    "insecure-registries": ["http://baas-harbor.peogoo.com"],
  • Reload and restart Docker:
systemctl daemon-reload
systemctl restart docker
  • Verify login from the server:
# docker login http://baas-harbor.peogoo.com
Username: admin
Password: Harbor12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
  • Access the Harbor web UI
URL: http://baas-harbor.peogoo.com
Username: admin
Password: Harbor12345
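After logging in, a minimal end-to-end test is to tag and push a small image; "library" is Harbor's default public project, so adjust the project name if yours differs:

docker pull busybox:latest
docker tag busybox:latest baas-harbor.peogoo.com/library/busybox:latest
docker push baas-harbor.peogoo.com/library/busybox:latest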
4.5 Pitfalls

If the installer fails with this error:

ERROR:root:Error: The protocol is https but attribute ssl_cert is not set

the cause is that harbor.yml configures the https port and certificate paths by default.

The fix is to comment that whole block out:
# https related config
# https:
  # # https port for harbor, default is 443
  # port: 443
  # # The path of cert and key files for nginx
  # certificate: /your/certificate/path
  # private_key: /your/private/key/path
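After commenting out the https block, regenerate the rendered configuration and re-run the installer from the Harbor installer directory (prepare and install.sh ship with the installer package):

./prepare
./install.sh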