Posted by shuijingping on 2018-01-05 18:18:25

Installing a k8s Cluster with Kubeadm

Environment preparation
  Prepare three machines for a k8s cluster with one master and two nodes. This article installs k8s on CentOS 7.
  

192.168.144.2(k8s-master)  

192.168.144.3(k8s-node-1)  

192.168.144.4(k8s-node-2)  


1. Set the hostname on each node
  Edit /etc/hostname on each node with vim and change it to the corresponding name.

2. Add the corresponding IP mappings to /etc/hosts on all three machines
  

# append rather than overwrite, so the existing localhost entries survive
cat >> /etc/hosts <<EOF
192.168.144.2 k8s-master
192.168.144.3 k8s-node-1
192.168.144.4 k8s-node-2
EOF
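Since a malformed hosts entry is easy to miss, the entries can be staged in a temporary file and format-checked before appending (a sketch; the path /tmp/hosts.k8s is illustrative):

```shell
# Stage the entries in a temp file and verify each line is "<IPv4> <hostname>"
cat > /tmp/hosts.k8s <<EOF
192.168.144.2 k8s-master
192.168.144.3 k8s-node-1
192.168.144.4 k8s-node-2
EOF
# count the well-formed entries; should print 3
grep -Ec '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+ k8s-' /tmp/hosts.k8s
```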
  


3. Disable SELinux
  

setenforce 0   # takes effect immediately but not across reboots; set SELINUX=permissive in /etc/selinux/config to persist


4. Install Docker 1.12.6 and start it
  

yum install -y docker
systemctl start docker
systemctl enable docker
  


5. Network settings
  

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
  

  Then apply it:
  

sysctl -p /etc/sysctl.d/k8s.conf  


Installing k8s

1. Add the k8s yum repository
  Test whether the k8s repository URL is reachable:
  

curl http://yum.kubernetes.io/repos/kubernetes-el7-x86_64  

  If it is, add a yum repository with the following command:
  

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
  


2. Install kubeadm, kubelet, kubectl, kubernetes-cni
  

yum install -y kubelet-1.6.2 kubeadm-1.6.2 kubectl-1.6.2 kubernetes-cni  
systemctl enable kubelet.service
  


3. Initialize the cluster (if you cannot reach the Docker image registry, prepare the images in advance)
  

kubeadm init --kubernetes-version=v1.6.2 --pod-network-cidr=10.96.0.0/16  

  
WARNING: kubeadm is in beta, please do not use it for production clusters.
Using Kubernetes version: v1.6.2
Using Authorization mode: RBAC
Running pre-flight checks
Starting the kubelet service
Generated CA certificate and key.
Generated API server certificate and key.
API Server serving cert is signed for DNS names and IPs
Generated API server kubelet client certificate and key.
Generated service account token signing key and public key.
Generated front-proxy CA certificate and key.
Generated front-proxy client certificate and key.
Valid certificates and keys now exist in "/etc/kubernetes/pki"
Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
Created API client, waiting for the control plane to become ready
All control plane components are healthy after 14.583864 seconds
Waiting for at least one node to register
First node has registered after 6.008990 seconds
Using token: e7986d.e440de5882342711
Created RBAC rules
Created essential addon: kube-proxy
Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token e7986d.e440de5882342711 192.168.144.2:6443
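The bootstrap token printed above has a fixed two-part format, six lowercase alphanumeric characters, a dot, then sixteen more. A quick sanity check before pasting the join command on the nodes (a sketch, not part of the original steps):

```shell
# Validate the bootstrap token format before running kubeadm join on the nodes
token="e7986d.e440de5882342711"
if echo "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
    echo "token format OK"
else
    echo "token format invalid"
fi
```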
  


  Because the cluster network will later be set up with flannel, whose default network is 10.244.0.0/16, you could instead pass --pod-network-cidr=10.244.0.0/16 here if you do not want to change the subnet in kube-flannel.yml to 10.96.0.0/16 later.


If the installation did not go well, you can clean up the installed k8s and reinstall:
  

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
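The teardown steps above can be wrapped in a guarded function so that reruns are harmless when an interface or directory is already gone (a sketch; assumes the `ip` tool from iproute2, and the function name is illustrative):

```shell
# Idempotent cleanup: each step is skipped when its target does not exist
kubeadm_cleanup() {
    # reset only if kubeadm is actually installed
    if command -v kubeadm >/dev/null 2>&1; then
        kubeadm reset
    fi
    # remove the CNI and flannel interfaces only if they exist
    for ifc in cni0 flannel.1; do
        if ip link show "$ifc" >/dev/null 2>&1; then
            ip link set "$ifc" down
            ip link delete "$ifc"
        fi
    done
    rm -rf /var/lib/cni/
}
kubeadm_cleanup
```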
  


4. Set up kubectl credentials so it can call the api-server
  

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
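The export above only lasts for the current shell session. To make it persist across logins (a sketch, assuming bash is the login shell):

```shell
# Append the export to ~/.bashrc once, guarding against duplicate lines
grep -q 'KUBECONFIG=\$HOME/admin.conf' "$HOME/.bashrc" 2>/dev/null || \
    echo 'export KUBECONFIG=$HOME/admin.conf' >> "$HOME/.bashrc"
```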
  


5. View the nodes from the master node; the status will not change to Ready until DNS has finished initializing
  

kubectl get nodes
NAME    STATUS     AGE   VERSION
node0   NotReady   3m    v1.6.1


6. Check pod status
  Run kubectl get pod --all-namespaces -o wide and wait patiently. Eventually every pod except dns reaches the Running state; the dns pod will change to Running automatically once the flannel network has been installed.

7. Let the master node take workloads
  In a cluster initialized with kubeadm, pods are not scheduled onto the master node for security reasons; that is, the master node does not take workloads.
  Since this is a test environment, the following command lets the master node take workloads:
  

kubectl taint nodes --all node-role.kubernetes.io/master-


8. Install Flannel to initialize the pod network
  

mkdir -p flannel
cd flannel
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
cp kube-flannel.yml kube-flannel.yml.default
sed -i 's/10.244.0.0/10.96.0.0/g' kube-flannel.yml
sed -i 's/"--kube-subnet-mgr"/"--kube-subnet-mgr", "--iface=eth0"/g' kube-flannel.yml
kubectl create -f kube-flannel-rbac.yml
kubectl apply -f kube-flannel.yml


  Two changes are made here:
  1. Set flannel's subnet to 10.96.0.0/16, matching the --pod-network-cidr used at init.
  2. Pin the network interface with --iface=eth0; if unspecified, flannel may pick the wrong interface (such as lo).
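The effect of the two sed edits can be checked on a throwaway copy first (a sketch using an illustrative stand-in for the relevant lines of kube-flannel.yml, not the full file):

```shell
# Build a tiny stand-in for the lines the two sed commands target
cat > /tmp/kube-flannel-sample.yml <<'EOF'
  "Network": "10.244.0.0/16",
  args: [ "--ip-masq", "--kube-subnet-mgr" ]
EOF
# same substitutions as above
sed -i 's/10.244.0.0/10.96.0.0/g' /tmp/kube-flannel-sample.yml
sed -i 's/"--kube-subnet-mgr"/"--kube-subnet-mgr", "--iface=eth0"/g' /tmp/kube-flannel-sample.yml
cat /tmp/kube-flannel-sample.yml
```

The printed result should show the Network changed to 10.96.0.0/16 and the extra "--iface=eth0" argument appended.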
  Once all pods are in the Running state, test whether DNS works:
  

kubectl run curl --image=radial/busyboxplus:curl -i --tty  

  The following output is shown first:
  

Waiting for pod default/curl-2421989462-vldmp to be running, status is Pending, pod ready: false
Waiting for pod default/curl-2421989462-vldmp to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
[ root@curl-2421989462-vldmp:/ ]$
  

  Then type nslookup kubernetes.default at the prompt:
  

[ root@curl-2421989462-vldmp:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

  After the test succeeds, delete the test pod:
  

kubectl delete deploy curl  


9. Add k8s-node-1 and k8s-node-2 to the cluster
  Run the command that was printed on the console during initialization: kubeadm join --token e7986d.e440de5882342711 192.168.144.2:6443
  

kubeadm join --token e7986d.e440de5882342711 192.168.144.2:6443
WARNING: kubeadm is in beta, please do not use it for production clusters.
Running pre-flight checks
Trying to connect to API Server "192.168.144.2:6443"
Created cluster-info discovery client, requesting info from "https://192.168.144.2:6443"
Cluster info signature and contents are valid, will use API Server "https://192.168.144.2:6443"
Successfully established connection with API Server "192.168.144.2:6443"
Detected server version: v1.6.2
The server supports the Certificates API (certificates.k8s.io/v1beta1)
Created API client to obtain unique certificate for this node, generating keys and certificate signing request
Received signed certificate from the API server, generating KubeConfig...
Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

  View the nodes in the cluster:
  

kubectl get nodes
NAME         STATUS   AGE   VERSION
k8s-master   Ready    12m   v1.6.2
k8s-node-1   Ready    4m    v1.6.2
k8s-node-2   Ready    2m    v1.6.2


10. Install the Dashboard
  

mkdir -p dashboard
cd dashboard
wget https://git.io/kube-dashboard
mv kube-dashboard kube-dashboard.yml
kubectl create -f kube-dashboard.yml
kubectl proxy &
cd ..
  

  Run kubectl get pods --all-namespaces | grep dashboard to check whether the dashboard pod started successfully.
  Run curl http://127.0.0.1:8001/ui to check whether the page can be reached.