kubeadm 1.6.1 + Docker 1.12.6 installation issues
kubeadm init --apiserver-advertise-address=192.168.20.229 --pod-network-cidr=10.244.0.0/16
kubelet: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Compared with Docker 1.10, Docker added the KernelMemory and CgroupDriver variables. KernelMemory indicates whether Linux kernel memory limits are set; CgroupDriver indicates which cgroup driver is in use. There are two drivers, cgroupfs and systemd, with cgroupfs being the default.
The fix is to make the two drivers agree: either change Docker's cgroup driver from systemd to cgroupfs, or add --cgroup-driver=systemd to the kubelet flags in its service configuration file.
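The kubelet-side fix can be scripted with sed. This is a sketch only: the helper name set_cgroup_driver is my own, and it assumes the flag lives in kubeadm's systemd drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf); the demo below runs against a scratch copy rather than the real file.

```shell
# Rewrite the --cgroup-driver flag in a kubelet systemd drop-in so it matches
# the driver Docker reports. Helper name is illustrative, not a real tool.
set_cgroup_driver() {
    sed -i "s/--cgroup-driver=[a-z]*/--cgroup-driver=$2/" "$1"
}

# Demo against a scratch copy; the real file installed by the kubeadm RPM is
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
tmp=$(mktemp)
echo 'Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"' > "$tmp"
set_cgroup_driver "$tmp" systemd
grep -o -- '--cgroup-driver=[a-z]*' "$tmp"   # prints --cgroup-driver=systemd
```

After editing the real drop-in, reload and restart: systemctl daemon-reload && systemctl restart kubelet.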
Installing Kubernetes 1.6.1 with kubeadm
Environment preparation
master: 192.168.20.229
node: 192.168.20.223
Software versions: Docker 1.12.6
Check the available package versions:
yum list kubeadm --showduplicates | sort -r
kubeadm.x86_64         1.6.1-0    kubernetes
kubeadm.x86_64         1.6.0-0    kubernetes

yum list kubelet --showduplicates | sort -r
kubelet.x86_64         1.6.1-0    kubernetes
kubelet.x86_64         1.6.0-0    kubernetes
kubelet.x86_64         1.5.4-0    kubernetes

yum list kubectl --showduplicates | sort -r
kubectl.x86_64         1.6.1-0    kubernetes
kubectl.x86_64         1.6.0-0    kubernetes
kubectl.x86_64         1.5.4-0    kubernetes

yum list kubernetes-cni --showduplicates | sort -r
kubernetes-cni.x86_64  0.5.1-0    kubernetes
System configuration
Following the Limitations section of the official document Installing Kubernetes on Linux with kubeadm, configure each node's system as follows:
Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
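As a runnable sketch, the step above can be scripted. Here the file is staged in a temporary directory so the snippet can be dry-run without root; on a real node the destination is /etc/sysctl.d and the settings are applied with sysctl as root.

```shell
# Stage the k8s sysctl settings; on a real node, write to /etc/sysctl.d instead.
DEST=$(mktemp -d)   # real destination: /etc/sysctl.d
cat > "$DEST/k8s.conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# On the node (as root), load the new settings without rebooting:
# sysctl -p /etc/sysctl.d/k8s.conf
```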
Initializing the cluster
kubeadm init --kubernetes-version=v1.6.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.20.229
On success, kubeadm init prints the following:
kubeadm init --kubernetes-version=v1.6.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.20.229
WARNING: kubeadm is in beta, please do not use it for production clusters.
Using Kubernetes version: v1.6.1
Using Authorization mode: RBAC
Running pre-flight checks
Starting the kubelet service
Generated CA certificate and key.
Generated API server certificate and key.
API Server serving cert is signed for DNS names and IPs
Generated API server kubelet client certificate and key.
Generated service account token signing key and public key.
Generated front-proxy CA certificate and key.
Generated front-proxy client certificate and key.
Valid certificates and keys now exist in "/etc/kubernetes/pki"
Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
Created API client, waiting for the control plane to become ready
All control plane components are healthy after 14.583864 seconds
Waiting for at least one node to register
First node has registered after 6.008990 seconds
Using token: e7986d.e440de5882342711
Created RBAC rules
Created essential addon: kube-proxy
Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token 881f96.aaf02f1f8dc53889 192.168.20.229:6443
The master node is now initialized. In a kubeadm-initialized cluster, the core components on the master node (kube-apiserver, kube-scheduler, kube-controller-manager) run as static Pods.
ls /etc/kubernetes/manifests/
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
The definition files for kube-apiserver, kube-scheduler, and kube-controller-manager are in the /etc/kubernetes/manifests/ directory. The cluster's persistent store, etcd, also runs as a single-node static Pod; later we will switch it to an etcd cluster.
Take a look at the contents of kube-apiserver.yaml:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    .......
    - --insecure-port=0
Note the kube-apiserver option --insecure-port=0: in a cluster initialized by kubeadm 1.6.0, kube-apiserver does not listen on the default insecure HTTP port 8080.
So running kubectl get nodes fails with: The connection to the server localhost:8080 was refused - did you specify the right host or port?
Checking which ports kube-apiserver listens on shows only HTTPS on 6443:
netstat -nltp | grep apiserver
tcp6       0      0 :::6443       :::*       LISTEN      9831/kube-apiserver
To let kubectl reach the apiserver, append the following environment variable to ~/.bash_profile:
export KUBECONFIG=/etc/kubernetes/admin.conf
source ~/.bash_profile
kubectl now works on the master node; list the current nodes:
kubectl get nodes
NAME      STATUS     AGE       VERSION
k8s1      NotReady   3m        v1.6.1
Installing the Pod network
Next, install the flannel network add-on:
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
If a node has multiple network interfaces, then per flannel issue 39701 you currently need to use the --iface parameter in kube-flannel.yml to specify the name of the cluster hosts' internal NIC; otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:
......
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
......
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.7.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=eth1" ]
......
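The --iface edit can also be applied with sed. This sketch assumes the stock flanneld command line shown in kube-flannel.yml, and the helper name add_flannel_iface is my own; the demo runs on a scratch copy instead of the real downloaded file.

```shell
# Append --iface=<name> to the flanneld args in a local kube-flannel.yml.
# Assumes the stock command line: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
add_flannel_iface() {
    sed -i "s|\"--kube-subnet-mgr\"|\"--kube-subnet-mgr\", \"--iface=$2\"|" "$1"
}

# Demo on a scratch copy instead of the real downloaded file:
tmp=$(mktemp)
echo 'command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]' > "$tmp"
add_flannel_iface "$tmp" eth1
cat "$tmp"   # the arg list now ends with "--iface=eth1" ]
```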
Use kubectl get pod --all-namespaces -o wide to make sure all Pods are in the Running state:
kubectl get pod --all-namespaces -o wide
or:
kubectl --kubeconfig=/etc/kubernetes/admin.conf get pod --all-namespaces -o wide
NAMESPACE     NAME                           READY     STATUS    RESTARTS   AGE       IP               NODE
kube-system   etcd-k8s1                      1/1       Running   0          10m       192.168.20.229   k8s1
kube-system   kube-apiserver-k8s1            1/1       Running   0          10m       192.168.20.229   k8s1
kube-system   kube-controller-manager-k8s1   1/1       Running   0          10m       192.168.20.229   k8s1
kube-system   kube-dns-3913472980-g97bm      3/3       Running   0          10m       10.244.1.2       k8s5
kube-system   kube-flannel-ds-k87tt          2/2       Running   0          2m        192.168.20.233   k8s5
kube-system   kube-flannel-ds-lq62q          2/2       Running   0          2m        192.168.20.229   k8s1
kube-system   kube-proxy-0nrp0               1/1       Running   0          10m       192.168.20.229   k8s1
kube-system   kube-proxy-qcds5               1/1       Running   0          10m       192.168.20.233   k8s5
kube-system   kube-scheduler-k8s1            1/1       Running   0          10m       192.168.20.229   k8s1
Allowing the master node to run workloads
In a kubeadm-initialized cluster, Pods are not scheduled onto the master node for security reasons; that is, the master node does not run workloads.
Since this is a test environment, you can let the master node accept workloads with:
kubectl taint nodes --all node-role.kubernetes.io/master-
Testing DNS
# kubectl --kubeconfig=/etc/kubernetes/admin.conf run curl --image=radial/busyboxplus:curl -i --tty
If you don't see a command prompt, try pressing enter.
[ root@curl-57077659-s2l5v:/ ]$ nslookup
BusyBox v1.22.1 (2014-09-13 22:15:30 PDT) multi-call binary.
Usage: nslookup
Query the nameserver for the IP address of the given HOST
optionally using a specified DNS server
[ root@curl-57077659-s2l5v:/ ]$ nslookup kube-dns.kube-system
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
[ root@curl-57077659-s2l5v:/ ]$ nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
Once the test passes, delete the curl Pod:
kubectl delete deploy curl
Adding nodes to the cluster
kubeadm join --token 881f96.aaf02f1f8dc53889 192.168.20.229:6443
List the nodes in the cluster:
# kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
NAME      STATUS    AGE       VERSION
k8s1      Ready     54m       v1.6.1
k8s5      Ready     54m       v1.6.1
Installing the Dashboard add-on
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml
Accessing the dashboard at http://NodeIp:NodePort, the browser shows the following error:
User "system:serviceaccount:kube-system:default" cannot list statefulsets.apps in the namespace "default". (get statefulsets.apps)
This is because starting with Kubernetes 1.6, the API Server enables RBAC authorization, and the current kubernetes-dashboard.yaml does not define an authorized ServiceAccount, so its requests to the API Server are rejected.
Following https://github.com/kubernetes/dashboard/issues/1803, temporarily grant system:serviceaccount:kube-system:default the cluster-admin role as a workaround.
Create dashboard-rbac.yaml, binding system:serviceaccount:kube-system:default to the ClusterRole cluster-admin:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f dashboard-rbac.yaml
Running Heapster in the cluster
Next, install Heapster to add usage statistics and monitoring to the cluster, and charts to the Dashboard.
Download the latest Heapster release to one of the cluster nodes:
wget https://github.com/kubernetes/heapster/archive/v1.3.0.tar.gz
Use InfluxDB as Heapster's backend storage and start the deployment; the required images will be pulled along the way, including gcr.io/google_containers/heapster_grafana:v2.6.0-2.
tar -zxvf v1.3.0.tar.gz
cd heapster-1.3.0/deploy/kube-config/influxdb
RBAC authorization has been added:
# cat heapster-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:heapster
  apiGroup: rbac.authorization.k8s.io
# vim heapster-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: gcr.io/google_containers/heapster-amd64:v1.3.0-beta.1
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb:8086
References
http://blog.frognew.com/2017/04/kubeadm-install-kubernetes-1.6.html
https://github.com/opsnull/follow-me-install-kubernetes-cluster/blob/master/10-%E9%83%A8%E7%BD%B2Heapster%E6%8F%92%E4%BB%B6.md