1. Installing Helm
Helm official site, documentation center
helm_rbac.yaml (a tiller ServiceAccount in kube-system, bound to the cluster-admin ClusterRole):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Recommended: download the archive locally with Thunder (Xunlei), then upload it to the server
https://get.helm.sh/helm-v2.16.3-linux-amd64.tar.gz
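If the server has direct outbound internet access, the archive can also be fetched there instead of uploading it manually (wget is assumed to be available; any downloader works):
# download directly on the server (assumes outbound access)
wget https://get.helm.sh/helm-v2.16.3-linux-amd64.tar.gz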
Extract the archive
tar -zxvf helm-v2.16.3-linux-amd64.tar.gz
Move the binaries onto the PATH (they are extracted into the linux-amd64/ directory)
cd linux-amd64
mv helm /usr/local/bin/helm
mv tiller /usr/local/bin/tiller
kubectl apply -f helm_rbac.yaml
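A quick optional check that the RBAC objects were created (plain kubectl, nothing specific to this setup):
kubectl -n kube-system get serviceaccount tiller
kubectl get clusterrolebinding tiller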
helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.3 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
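Since the RBAC file above creates a dedicated tiller ServiceAccount, Tiller can also be bound to it explicitly at init time. This is a variant of the same command, sketched here for reference rather than taken from the transcript below:
helm init --upgrade \
  --service-account tiller \
  -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.3 \
  --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts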
Verify the installation:
helm version
helm
tiller
kubectl get pods --all-namespaces
[root@k8s-node1 k8s]# ls
get_helm.sh helm-v2.16.3-linux-amd64.tar.gz ingress-tomcat6.yaml kubernetes-dashboard.yaml linux-amd64 mypod.yaml product.yaml tomcat6.yaml
helm-v2.14.1-linux-amd64.tar.gz ingress-controller.yaml kube-flannel.yml kubesphere-complete-setup.yaml master_images.sh node_images.sh tomcat6-deployment.yaml Vagrantfile
[root@k8s-node1 k8s]# cd linux-amd64/
[root@k8s-node1 linux-amd64]# ls
helm LICENSE README.md tiller
[root@k8s-node1 linux-amd64]# mv helm /usr/local/bin/helm
[root@k8s-node1 linux-amd64]# ls
LICENSE README.md tiller
[root@k8s-node1 linux-amd64]# helm version
Client: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
Error: could not find tiller
[root@k8s-node1 linux-amd64]# mv tiller /usr/local/bin/tiller
[root@k8s-node1 linux-amd64]# ls
LICENSE README.md
[root@k8s-node1 linux-amd64]# cd ..
[root@k8s-node1 k8s]# ls
get_helm.sh helm-v2.16.3-linux-amd64.tar.gz ingress-tomcat6.yaml kubernetes-dashboard.yaml linux-amd64 mypod.yaml product.yaml tomcat6.yaml
helm-v2.14.1-linux-amd64.tar.gz ingress-controller.yaml kube-flannel.yml kubesphere-complete-setup.yaml master_images.sh node_images.sh tomcat6-deployment.yaml Vagrantfile
[root@k8s-node1 k8s]# vi helm_rbac.yaml
[root@k8s-node1 k8s]# ls
get_helm.sh helm-v2.14.1-linux-amd64.tar.gz ingress-controller.yaml kube-flannel.yml kubesphere-complete-setup.yaml master_images.sh node_images.sh tomcat6-deployment.yaml Vagrantfile
helm_rbac.yaml helm-v2.16.3-linux-amd64.tar.gz ingress-tomcat6.yaml kubernetes-dashboard.yaml linux-amd64 mypod.yaml product.yaml tomcat6.yaml
[root@k8s-node1 k8s]# kubectl apply -f helm_rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
[root@k8s-node1 k8s]# helm init --service-account=tiller --tiller-image=jessestuart/tiller:v2.16.3 --history-max 300
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden
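The old default stable repo at kubernetes-charts.storage.googleapis.com has been retired and now returns 403, so any init that tries to fetch its index fails. Re-running init with --stable-repo-url pointed at a reachable mirror (as done below) works around it; another option is to skip the repo refresh during init and add repositories afterwards. A sketch, assuming the Aliyun Tiller image mirror is still wanted:
# alternative: install Tiller without contacting any chart repo during init (helm v2 flags)
helm init --service-account tiller --skip-refresh \
  -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.3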
[root@k8s-node1 k8s]# helm
The Kubernetes package manager
To begin working with Helm, run the 'helm init' command:
$ helm init
This will install Tiller to your running Kubernetes cluster.
It will also set up any necessary local configuration.
Common actions from this point include:
- helm search: Search for charts
- helm fetch: Download a chart to your local directory to view
- helm install: Upload the chart to Kubernetes
- helm list: List releases of charts
Environment:
- $HELM_HOME: Set an alternative location for Helm files. By default, these are stored in ~/.helm
- $HELM_HOST: Set an alternative Tiller host. The format is host:port
- $HELM_NO_PLUGINS: Disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins.
- $TILLER_NAMESPACE: Set an alternative Tiller namespace (default "kube-system")
- $KUBECONFIG: Set an alternative Kubernetes configuration file (default "~/.kube/config")
- $HELM_TLS_CA_CERT: Path to TLS CA certificate used to verify the Helm client and Tiller server certificates (default "$HELM_HOME/ca.pem")
- $HELM_TLS_CERT: Path to TLS client certificate file for authenticating to Tiller (default "$HELM_HOME/cert.pem")
- $HELM_TLS_KEY: Path to TLS client key file for authenticating to Tiller (default "$HELM_HOME/key.pem")
- $HELM_TLS_ENABLE: Enable TLS connection between Helm and Tiller (default "false")
- $HELM_TLS_VERIFY: Enable TLS connection between Helm and Tiller and verify Tiller server certificate (default "false")
- $HELM_TLS_HOSTNAME: The hostname or IP address used to verify the Tiller server certificate (default "127.0.0.1")
- $HELM_KEY_PASSPHRASE: Set HELM_KEY_PASSPHRASE to the passphrase of your PGP private key. If set, you will not be prompted for the passphrase while signing helm charts
Usage:
helm [command]
Available Commands:
completion Generate autocompletions script for the specified shell (bash or zsh)
create Create a new chart with the given name
delete Given a release name, delete the release from Kubernetes
dependency Manage a chart's dependencies
fetch Download a chart from a repository and (optionally) unpack it in local directory
get Download a named release
help Help about any command
history Fetch release history
home Displays the location of HELM_HOME
init Initialize Helm on both client and server
inspect Inspect a chart
install Install a chart archive
lint Examines a chart for possible issues
list List releases
package Package a chart directory into a chart archive
plugin Add, list, or remove Helm plugins
repo Add, list, remove, update, and index chart repositories
reset Uninstalls Tiller from a cluster
rollback Rollback a release to a previous revision
search Search for a keyword in charts
serve Start a local http web server
status Displays the status of the named release
template Locally render templates
test Test a release
upgrade Upgrade a release
verify Verify that a chart at the given path has been signed and is valid
version Print the client/server version information
Flags:
--debug Enable verbose output
-h, --help help for helm
--home string Location of your Helm config. Overrides $HELM_HOME (default "/root/.helm")
--host string Address of Tiller. Overrides $HELM_HOST
--kube-context string Name of the kubeconfig context to use
--kubeconfig string Absolute path of the kubeconfig file to be used
--tiller-connection-timeout int The duration (in seconds) Helm will wait to establish a connection to Tiller (default 300)
--tiller-namespace string Namespace of Tiller (default "kube-system")
Use "helm [command] --help" for more information about a command.
[root@k8s-node1 k8s]# helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.3 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
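Once init succeeds, the configured repositories can be listed and refreshed. Keeping the index fresh also avoids the "failed to download" errors that show up later when installing charts:
helm repo list       # should show 'stable' pointing at the Aliyun mirror plus the 'local' repo
helm repo update     # refresh the chart index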
[root@k8s-node1 k8s]# tiller
[main] 2021/04/06 04:47:48 Starting Tiller v2.16.3 (tls=false)
[main] 2021/04/06 04:47:48 GRPC listening on :44134
[main] 2021/04/06 04:47:48 Probes listening on :44135
[main] 2021/04/06 04:47:48 Storage driver is ConfigMap
[main] 2021/04/06 04:47:48 Max history per release is 0
^C
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default tomcat6-5f7ccf4cb9-77j4m 1/1 Running 0 170m
default tomcat6-5f7ccf4cb9-hdb87 1/1 Running 0 170m
default tomcat6-5f7ccf4cb9-zfjft 1/1 Running 0 170m
ingress-nginx nginx-ingress-controller-hmlcg 1/1 Running 0 160m
ingress-nginx nginx-ingress-controller-lmbb4 1/1 Running 0 160m
kube-system coredns-7f9c544f75-5v7dn 1/1 Running 0 15h
kube-system coredns-7f9c544f75-6jshq 1/1 Running 0 15h
kube-system etcd-k8s-node1 1/1 Running 0 15h
kube-system kube-apiserver-k8s-node1 1/1 Running 0 15h
kube-system kube-controller-manager-k8s-node1 1/1 Running 0 15h
kube-system kube-flannel-ds-amd64-m64kw 1/1 Running 0 15h
kube-system kube-flannel-ds-amd64-qdbgp 1/1 Running 0 15h
kube-system kube-flannel-ds-amd64-rxp6p 1/1 Running 0 15h
kube-system kube-proxy-jssns 1/1 Running 0 15h
kube-system kube-proxy-smq7v 1/1 Running 0 15h
kube-system kube-proxy-xc5dv 1/1 Running 0 15h
kube-system kube-scheduler-k8s-node1 1/1 Running 0 15h
kube-system tiller-deploy-57b6bb8-4d7h9 1/1 Running 0 104s
[root@k8s-node1 k8s]# helm version
Client: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
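With both client and server reporting v2.16.3, Tiller is up. The Tiller pod can also be inspected directly; app=helm,name=tiller are the labels helm init normally puts on the deployment:
kubectl -n kube-system get pods -l app=helm,name=tiller
kubectl -n kube-system logs deploy/tiller-deploy     # Tiller logs, useful if helm commands hang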
Reference 1, Reference 2, Reference 3
2. Installing a StorageClass (installation docs)
[root@k8s-node1 k8s]# kubectl get node -o wide
NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-node1   Ready    master   15h   v1.17.3   10.0.2.5      <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://20.10.5
k8s-node2   Ready    <none>   15h   v1.17.3   10.0.2.6      <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://20.10.5
k8s-node3   Ready    <none>   15h   v1.17.3   10.0.2.7      <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://20.10.5
[root@k8s-node1 k8s]# kubectl describe node k8s-node1 | grep Taint
Taints: node-role.kubernetes.io/master:NoSchedule
[root@k8s-node1 k8s]# kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-
node/k8s-node1 untainted
[root@k8s-node1 k8s]# kubectl describe node k8s-node1 | grep Taint
Taints:             <none>
[root@k8s-node1 k8s]# kubectl create ns openebs
namespace/openebs created
[root@k8s-node1 k8s]# kubectl get ns
NAME STATUS AGE
default Active 16h
ingress-nginx Active 179m
kube-node-lease Active 16h
kube-public Active 16h
kube-system Active 16h
openebs Active 12s
[root@k8s-node1 k8s]# helm install --namespace openebs --name openebs stable/openebs --version 1.5.0
Error: failed to download "stable/openebs" (hint: running `helm repo update` may help)
[root@k8s-node1 k8s]#
Resolving the error
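As the hint in the error message suggests, refreshing the local chart index and confirming the chart is visible is usually enough before retrying (standard Helm v2 commands):
helm repo update     # re-fetch the stable repo index from the mirror
helm search openebs  # should list stable/openebs once the index is available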
[root@k8s-node1 k8s]# helm version
Client: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
[root@k8s-node1 k8s]# kubectl get serviceaccount [-n kube-system]
Error from server (NotFound): serviceaccounts "[-n" not found
Error from server (NotFound): serviceaccounts "kube-system]" not found
[root@k8s-node1 k8s]# clear
[root@k8s-node1 k8s]# kubectl create serviceaccount tiller
serviceaccount/tiller created
[root@k8s-node1 k8s]# helm install --namespace openebs --name openebs stable/openebs --version 1.5.0
NAME: openebs
LAST DEPLOYED: Tue Apr 6 05:26:07 2021
NAMESPACE: openebs
STATUS: DEPLOYED
RESOURCES:
==> v1/ClusterRole
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default tomcat6-5f7ccf4cb9-77j4m 1/1 Running 0 3h37m
default tomcat6-5f7ccf4cb9-hdb87 1/1 Running 0 3h37m
default tomcat6-5f7ccf4cb9-zfjft 1/1 Running 0 3h37m
ingress-nginx nginx-ingress-controller-gj2gc 1/1 Running 0 28m
ingress-nginx nginx-ingress-controller-hmlcg 1/1 Running 0 3h27m
ingress-nginx nginx-ingress-controller-lmbb4 1/1 Running 0 3h27m
kube-system coredns-7f9c544f75-5v7dn 1/1 Running 0 16h
kube-system coredns-7f9c544f75-6jshq 1/1 Running 0 16h
kube-system etcd-k8s-node1 1/1 Running 0 16h
kube-system kube-apiserver-k8s-node1 1/1 Running 0 16h
kube-system kube-controller-manager-k8s-node1 1/1 Running 0 16h
kube-system kube-flannel-ds-amd64-m64kw 1/1 Running 0 16h
kube-system kube-flannel-ds-amd64-qdbgp 1/1 Running 0 16h
kube-system kube-flannel-ds-amd64-rxp6p 1/1 Running 0 16h
kube-system kube-proxy-jssns 1/1 Running 0 16h
kube-system kube-proxy-smq7v 1/1 Running 0 16h
kube-system kube-proxy-xc5dv 1/1 Running 0 16h
kube-system kube-scheduler-k8s-node1 1/1 Running 0 16h
kube-system tiller-deploy-6d8dfbb696-c5cn9 1/1 Running 0 13m
openebs openebs-admission-server-5cf6864fbf-vqg29 1/1 Running 0 9m2s
openebs openebs-apiserver-bc55cd99b-9cp5n 1/1 Running 0 9m2s
openebs openebs-localpv-provisioner-85ff89dd44-84t6s 1/1 Running 0 9m2s
openebs openebs-ndm-7nd9g 1/1 Running 0 9m2s
openebs openebs-ndm-operator-87df44d9-w489p 1/1 Running 1 9m2s
openebs openebs-ndm-pz4nt 1/1 Running 0 9m2s
openebs openebs-ndm-r4skp 1/1 Running 0 9m2s
openebs openebs-provisioner-7f86c6bb64-rgfhc 1/1 Running 0 9m2s
openebs openebs-snapshot-operator-54b9c886bf-bvz4j 2/2 Running 0 9m2s
[root@k8s-node1 k8s]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
openebs-device openebs.io/local Delete WaitForFirstConsumer false 2m38s
openebs-hostpath openebs.io/local Delete WaitForFirstConsumer false 2m38s
openebs-jiva-default openebs.io/provisioner-iscsi Delete Immediate false 2m39s
openebs-snapshot-promoter volumesnapshot.external-storage.k8s.io/snapshot-promoter Delete Immediate false 2m38s
[root@k8s-node1 k8s]# kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/openebs-hostpath patched
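To confirm dynamic provisioning works end to end, a throwaway PVC can be created against the new default class. test-pvc is a hypothetical name used only for this check; because openebs-hostpath uses WaitForFirstConsumer, the claim stays Pending until a pod actually mounts it:
# hypothetical one-off PVC, only to verify the default StorageClass provisions volumes
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-pvc        # Pending is expected until a consuming pod is scheduled
kubectl delete pvc test-pvc     # clean up afterwards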
[root@k8s-node1 k8s]# kubectl taint nodes k8s-node1 node-role.kubernetes.io/master=:NoSchedule
node/k8s-node1 tainted
[root@k8s-node1 k8s]#
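A final check that everything is in the expected state (plain kubectl, nothing specific to this cluster):
kubectl get sc                                   # openebs-hostpath should now show "(default)"
kubectl describe node k8s-node1 | grep Taint     # the NoSchedule taint should be back on the master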
Classic tutorial
Video tutorial