Installing a Kubernetes Cluster with kubeadm

I. Environment Preparation

1. Install and configure Docker

Kubernetes v1.11.0 recommends Docker v17.03; Docker v1.11, v1.12, and v1.13 also work. Newer Docker releases may not function correctly with this Kubernetes version.

# Remove any previously installed Docker, then install the specified version
[root@Docker-5 ~]# yum remove -y docker-ce docker-ce-selinux container-selinux
[root@Docker-5 ~]# rm -rf /var/lib/docker
[root@Docker-5 ~]# yum install -y --setopt=obsoletes=0 docker-ce-17.03.1.ce-1.el7.centos docker-ce-selinux-17.03.1.ce-1.el7.centos
[root@Docker-5 ~]# systemctl enable docker && systemctl restart docker
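Before moving on, it can help to confirm the installed engine is in the supported range. A minimal sketch; the version list mirrors the note above, and the commented-out `docker version` call assumes the daemon is running:

```shell
# Return success iff a Docker version string is in the range kubeadm v1.11
# supports (17.03, or legacy 1.11/1.12/1.13).
docker_supported() {
  case "$1" in
    17.03.*|1.11.*|1.12.*|1.13.*) return 0 ;;
    *) return 1 ;;
  esac
}

# On the host itself you would check the live daemon:
#   docker_supported "$(docker version --format '{{.Server.Version}}')" \
#     || echo "unsupported Docker version"
docker_supported "17.03.1-ce" && echo "supported"
```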

2. Configure the Aliyun yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

3. Install kubeadm, kubelet, and kubectl

[root@Docker-5 ~]# yum install -y kubelet kubeadm kubectl
[root@Docker-5 ~]# systemctl enable kubelet && systemctl start kubelet
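Note that the repository serves the newest packages: the transcript later reports kubelet v1.11.1 running against v1.11.0 images. To keep package and image versions aligned, the versions can be pinned at install time. A sketch that builds the pinned install command (whether these exact package versions exist in the mirror is an assumption):

```shell
# Build a yum command pinned to one Kubernetes version so kubelet/kubeadm/
# kubectl all match the control-plane images being pulled.
KUBE_VERSION=1.11.0   # assumption: this package version exists in the repo
pkgs=""
for p in kubelet kubeadm kubectl; do
  pkgs="$pkgs $p-$KUBE_VERSION"
done
echo "yum install -y$pkgs"
# On the host, run the printed command, then:
#   systemctl enable kubelet && systemctl start kubelet
```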

4. Configure system parameters

# Set SELinux to permissive (runtime only; edit /etc/selinux/config to persist across reboots)
[root@Docker-5 ~]# setenforce 0
# Disable swap (the sed comments out swap entries in /etc/fstab so it stays off after reboot)
[root@Docker-5 ~]# swapoff -a
[root@Docker-5 ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
# Stop the firewall (run `systemctl disable firewalld` as well to keep it off after reboot)
[root@Docker-5 ~]# systemctl stop firewalld
# Configure kernel parameters for bridged traffic
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
[root@Docker-5 ~]# sysctl --system
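A couple of quick checks after the steps above can save debugging later. A sketch; note that the `net.bridge.*` sysctl keys only exist once the br_netfilter module is loaded, which is a common stumbling block:

```shell
# 1) Swap should be fully off: /proc/swaps is reduced to its header line.
if [ "$(wc -l < /proc/swaps)" -le 1 ]; then
  echo "swap: off"
else
  echo "swap: still active"
fi

# 2) The bridge netfilter key should read 1; if it is missing, load the
#    module first with `modprobe br_netfilter` and rerun `sysctl --system`.
sysctl -n net.bridge.bridge-nf-call-iptables 2>/dev/null \
  || echo "br_netfilter not loaded"
```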

II. Master Node Configuration

1. Because Google's image registry is unreachable from within China, pull the images from a personal mirror on the Aliyun registry instead

[root@Docker-5 ~]# docker login --username=du11589 registry.cn-shenzhen.aliyuncs.com
Password: 
Login Succeeded
[root@Docker-5 ~]# ./kube.sh
[root@Docker-5 ~]# cat kube.sh 
#!/bin/bash
images=(kube-proxy-amd64:v1.11.0
        kube-scheduler-amd64:v1.11.0
        kube-controller-manager-amd64:v1.11.0
        kube-apiserver-amd64:v1.11.0
        etcd-amd64:3.2.18
        coredns:1.1.3
        pause-amd64:3.1
        kubernetes-dashboard-amd64:v1.8.3
        k8s-dns-sidecar-amd64:1.14.8
        k8s-dns-kube-dns-amd64:1.14.8
        k8s-dns-dnsmasq-nanny-amd64:1.14.8 )
for imageName in "${images[@]}" ; do
  docker pull registry.cn-shenzhen.aliyuncs.com/duyj/$imageName
  docker tag registry.cn-shenzhen.aliyuncs.com/duyj/$imageName k8s.gcr.io/$imageName
  docker rmi registry.cn-shenzhen.aliyuncs.com/duyj/$imageName
done

docker pull quay.io/coreos/flannel:v0.10.0-amd64
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1   # retag by name, not a hard-coded image ID
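The script's naming convention is mechanical: every `k8s.gcr.io` image maps to the same name under the mirror repository. A small helper that captures the mapping (the mirror path is the one used above):

```shell
# Translate a k8s.gcr.io image reference into its Aliyun mirror equivalent.
MIRROR=registry.cn-shenzhen.aliyuncs.com/duyj
mirror_of() {
  echo "$MIRROR/${1#k8s.gcr.io/}"
}

mirror_of k8s.gcr.io/etcd-amd64:3.2.18
# -> registry.cn-shenzhen.aliyuncs.com/duyj/etcd-amd64:3.2.18
```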

2. Review the downloaded images

[root@Docker-5 ~]# docker image ls
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-controller-manager-amd64   v1.11.0             55b70b420785        4 weeks ago         155 MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.0             0e4a34a3b0e6        4 weeks ago         56.8 MB
k8s.gcr.io/kube-proxy-amd64                v1.11.0             1d3d7afd77d1        4 weeks ago         97.8 MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.0             214c48e87f58        4 weeks ago         187 MB
k8s.gcr.io/coredns                         1.1.3               b3b94275d97c        2 months ago        45.6 MB
k8s.gcr.io/etcd-amd64                      3.2.18              b8df3b177be2        3 months ago        219 MB
k8s.gcr.io/kubernetes-dashboard-amd64      v1.8.3              0c60bcf89900        5 months ago        102 MB
k8s.gcr.io/k8s-dns-sidecar-amd64           1.14.8              9d10ba894459        5 months ago        42.2 MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     1.14.8              ac4746d72dc4        5 months ago        40.9 MB
k8s.gcr.io/k8s-dns-kube-dns-amd64          1.14.8              6ceab6c8330d        5 months ago        50.5 MB
quay.io/coreos/flannel                     v0.10.0-amd64       f0fad859c909        6 months ago        44.6 MB
k8s.gcr.io/pause-amd64                     3.1                 da86e6ba6ca1        7 months ago        742 kB
k8s.gcr.io/pause                           3.1                 da86e6ba6ca1        7 months ago        742 kB

3. Initialize the master node

[root@Docker-5 ~]# kubeadm init --kubernetes-version=v1.11.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=20.0.30.105
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0726 17:41:23.621027   65735 kernel_validator.go:81] Validating kernel version
I0726 17:41:23.621099   65735 kernel_validator.go:96] Validating kernel config
    [WARNING Hostname]: hostname "docker-5" could not be reached
    [WARNING Hostname]: hostname "docker-5" lookup docker-5 on 8.8.8.8:53: no such host
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [docker-5 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 20.0.30.105]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [docker-5 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [docker-5 localhost] and IPs [20.0.30.105 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 39.001159 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node docker-5 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node docker-5 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "docker-5" as an annotation
[bootstraptoken] using token: g80a49.qghzuffg3z58ykmv
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 20.0.30.105:6443 --token g80a49.qghzuffg3z58ykmv --discovery-token-ca-cert-hash sha256:8ae3e31892f930ba48eb33e96a2d86c0daf2a13847f8dc009e25e200a9cee6f6

[root@Docker-5 ~]#

4. Check the initialization result

[root@Docker-5 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf 
[root@Docker-5 ~]# kubectl get nodes
NAME       STATUS     ROLES     AGE       VERSION
docker-5   NotReady   master    35m       v1.11.1
[root@Docker-5 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-99kct           0/1       Pending   0          35m
kube-system   coredns-78fcdf6894-wsf4g           0/1       Pending   0          35m
kube-system   etcd-docker-5                      1/1       Running   0          34m
kube-system   kube-apiserver-docker-5            1/1       Running   0          35m
kube-system   kube-controller-manager-docker-5   1/1       Running   0          35m
kube-system   kube-proxy-ktks6                   1/1       Running   0          35m
kube-system   kube-scheduler-docker-5            1/1       Running   0          35m

5. Deploy the flannel pod network on the master (the node stays NotReady and CoreDNS stays Pending until a network add-on is installed)

[root@Docker-5 ~]# wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
[root@Docker-5 ~]# kubectl apply -f  kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
[root@Docker-5 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-99kct           1/1       Running   0          41m
kube-system   coredns-78fcdf6894-wsf4g           1/1       Running   0          41m
kube-system   etcd-docker-5                      1/1       Running   0          40m
kube-system   kube-apiserver-docker-5            1/1       Running   0          40m
kube-system   kube-controller-manager-docker-5   1/1       Running   0          40m
kube-system   kube-flannel-ds-fmd97              1/1       Running   0          37s
kube-system   kube-proxy-ktks6                   1/1       Running   0          41m
kube-system   kube-scheduler-docker-5            1/1       Running   0          40m

III. Adding Worker Nodes

1. Before a node joins the cluster, it must first complete the environment preparation from Part I

2. Download the images

[root@Docker-2 ~]# docker login --username=du11589 registry.cn-shenzhen.aliyuncs.com
Password: 
Login Succeeded
[root@Docker-2 ~]# ./nodekube.sh
[root@Docker-2 ~]# cat nodekube.sh 
#!/bin/bash
images=(kube-proxy-amd64:v1.11.0
        pause-amd64:3.1
        kubernetes-dashboard-amd64:v1.8.3
        heapster-influxdb-amd64:v1.3.3
        heapster-grafana-amd64:v4.4.3
        heapster-amd64:v1.4.2 )
for imageName in "${images[@]}" ; do
  docker pull registry.cn-shenzhen.aliyuncs.com/duyj/$imageName
  docker tag registry.cn-shenzhen.aliyuncs.com/duyj/$imageName k8s.gcr.io/$imageName
  docker rmi registry.cn-shenzhen.aliyuncs.com/duyj/$imageName
done

docker pull quay.io/coreos/flannel:v0.10.0-amd64
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1

3. Review the downloaded images

[root@Docker-2 ~]# docker image ls
REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy-amd64             v1.11.0             1d3d7afd77d1        4 weeks ago         97.8 MB
k8s.gcr.io/kubernetes-dashboard-amd64   v1.8.3              0c60bcf89900        5 months ago        102 MB
quay.io/coreos/flannel                  v0.10.0-amd64       f0fad859c909        6 months ago        44.6 MB
k8s.gcr.io/pause-amd64                  3.1                 da86e6ba6ca1        7 months ago        742 kB
k8s.gcr.io/pause                        3.1                 da86e6ba6ca1        7 months ago        742 kB
k8s.gcr.io/heapster-influxdb-amd64      v1.3.3              577260d221db        10 months ago       12.5 MB
k8s.gcr.io/heapster-grafana-amd64       v4.4.3              8cb3de219af7        10 months ago       152 MB
k8s.gcr.io/heapster-amd64               v1.4.2              d4e02f5922ca        11 months ago       73.4 MB

4. Join the node to the cluster

[root@Docker-2 ~]# kubeadm join 20.0.30.105:6443 --token g80a49.qghzuffg3z58ykmv --discovery-token-ca-cert-hash sha256:8ae3e31892f930ba48eb33e96a2d86c0daf2a13847f8dc009e25e200a9cee6f6
[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0726 19:17:28.277627   36641 kernel_validator.go:81] Validating kernel version
I0726 19:17:28.277705   36641 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "20.0.30.105:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://20.0.30.105:6443"
[discovery] Requesting info from "https://20.0.30.105:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "20.0.30.105:6443"
[discovery] Successfully established connection with API Server "20.0.30.105:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "docker-2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
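The join token above expires (the default TTL is 24 hours). A fresh join command can be printed on the master with `kubeadm token create --print-join-command`, and the `--discovery-token-ca-cert-hash` value can be recomputed with the standard openssl recipe, sketched here against a throwaway certificate so the function can be exercised anywhere; on a real master the input would be /etc/kubernetes/pki/ca.crt:

```shell
# Compute the --discovery-token-ca-cert-hash value for a CA certificate.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" |
    openssl rsa -pubin -outform der 2>/dev/null |
    openssl dgst -sha256 -hex | sed 's/^.* //'
}

# Demo input: a throwaway self-signed cert (on a real master, use
# /etc/kubernetes/pki/ca.crt instead).
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=demo \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
echo "sha256:$(ca_cert_hash "$tmp/ca.crt")"
```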

5. On the master, verify the node has joined

[root@Docker-5 ~]# kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
docker-2   Ready     <none>    3m        v1.11.1
docker-5   Ready     master    4h        v1.11.1
[root@Docker-5 ~]# kubectl get pods -n kube-system -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP            NODE
coredns-78fcdf6894-99kct           1/1       Running   0          4h        10.244.0.2    docker-5
coredns-78fcdf6894-wsf4g           1/1       Running   0          4h        10.244.0.3    docker-5
etcd-docker-5                      1/1       Running   0          4h        20.0.30.105   docker-5
kube-apiserver-docker-5            1/1       Running   0          4h        20.0.30.105   docker-5
kube-controller-manager-docker-5   1/1       Running   0          4h        20.0.30.105   docker-5
kube-flannel-ds-c7rb4              1/1       Running   0          7m        20.0.30.102   docker-2
kube-flannel-ds-fmd97              1/1       Running   0          3h        20.0.30.105   docker-5
kube-proxy-7tmtg                   1/1       Running   0          7m        20.0.30.102   docker-2
kube-proxy-ktks6                   1/1       Running   0          4h        20.0.30.105   docker-5
kube-scheduler-docker-5            1/1       Running   0          4h        20.0.30.105   docker-5