[Experience Share] Kubernetes 1.5.1 Setup Guide

Posted 2017-07-19 16:39:53

# 1 Initialize the Environment

## 1.1 Environment

| Node   | IP             |
|--------|----------------|
| master | 192.168.99.117 |
| node1  | 192.168.99.118 |
| node2  | 192.168.99.119 |

Prerequisite: a private Docker registry must already be available; see the separate guide on setting up a private Docker registry.


## 1.2 Set the hostname
hostnamectl --static set-hostname <hostname>
## 1.3 Configure /etc/hosts (on every node)
[iyunv@master ~]# cat  /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.99.117 master
192.168.99.118 node1
192.168.99.119 node2
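
The three entries above can be appended idempotently with a short loop — a sketch using the IP/hostname table from section 1.1; it writes to a temp file here so it can run anywhere, but on a real node you would point `hosts_file` at /etc/hosts:

```shell
# Append each cluster entry to the hosts file only if it is not already there.
# Sketch: writes to a temp file; set hosts_file=/etc/hosts on a real node.
hosts_file=$(mktemp)
for entry in "192.168.99.117 master" "192.168.99.118 node1" "192.168.99.119 node2"; do
    grep -qxF "$entry" "$hosts_file" || printf '%s\n' "$entry" >> "$hosts_file"
done
grep -c '' "$hosts_file"   # number of entries written
```

Because of the `grep -qxF` guard, running the loop twice leaves the file unchanged, so it is safe to rerun on a node that is already configured.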

# 2.0 Deploy the Kubernetes Master

## 2.1 Add the yum repository

# Configure the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[mritdrepo]
name=Mritd Repository
baseurl=https://yum.mritd.me/centos/7/x86_64
enabled=1
gpgcheck=1
gpgkey=https://cdn.mritd.me/keys/rpm.public.key
EOF
yum makecache   # rebuild the yum cache
yum install -y socat kubelet kubeadm kubectl kubernetes-cni

## 2.2 Install Docker
wget -qO- https://get.docker.com/ | sh
or
yum install -y docker
systemctl enable docker
systemctl start docker

## 2.3 Install the etcd cluster (configure on every node; adjust to your environment. A three-node cluster is shown below.)
yum -y install etcd
# Create the etcd data directory
mkdir -p /opt/etcd/data
chown -R etcd:etcd /opt/etcd/
# Edit /etc/etcd/etcd.conf and change the parameters below; the master node is shown as an example
[iyunv@master ~]# cat  /etc/etcd/etcd.conf  
# [member]
#
ETCD_NAME=etcd1                               # etcd2 / etcd3 on the other nodes
ETCD_DATA_DIR="/opt/etcd/data/etcd1.etcd"     # likewise on the other nodes
ETCD_LISTEN_PEER_URLS="http://192.168.99.117:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.99.117:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.99.117:2380"
#[cluster]
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.99.117:2380,etcd2=http://192.168.99.118:2380,etcd3=http://192.168.99.119:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.99.117:2379"

# node1 configuration follows:
[iyunv@node1 ~]# cat  /etc/etcd/etcd.conf
# [member]
#
ETCD_NAME=etcd2
ETCD_DATA_DIR="/opt/etcd/data/etcd2.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.99.118:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.99.118:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.99.118:2380"

#[cluster]
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.99.117:2380,etcd2=http://192.168.99.118:2380,etcd3=http://192.168.99.119:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.99.118:2379"
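
The per-node files differ only in the member name and IP, so they can be generated from one template. A sketch — the `gen_etcd_conf` helper is not part of the original guide; on a real node you would redirect its output to /etc/etcd/etcd.conf:

```shell
# Generate the per-node etcd.conf shown above from a member name and IP.
gen_etcd_conf() {
    name=$1; ip=$2
    cat <<EOF
ETCD_NAME=$name
ETCD_DATA_DIR="/opt/etcd/data/$name.etcd"
ETCD_LISTEN_PEER_URLS="http://$ip:2380"
ETCD_LISTEN_CLIENT_URLS="http://$ip:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://$ip:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.99.117:2380,etcd2=http://192.168.99.118:2380,etcd3=http://192.168.99.119:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://$ip:2379"
EOF
}

# e.g. on node1:
gen_etcd_conf etcd2 192.168.99.118
```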

# Patch the etcd unit file so the cluster flags from etcd.conf are passed to etcd
sed -i 's/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\"/\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --listen-client-urls=\\\"${ETCD_LISTEN_CLIENT_URLS}\\\" --advertise-client-urls=\\\"${ETCD_ADVERTISE_CLIENT_URLS}\\\" --initial-cluster-token=\\\"${ETCD_INITIAL_CLUSTER_TOKEN}\\\" --initial-cluster=\\\"${ETCD_INITIAL_CLUSTER}\\\" --initial-cluster-state=\\\"${ETCD_INITIAL_CLUSTER_STATE}\\\"/g' /usr/lib/systemd/system/etcd.service
# Start etcd
systemctl enable etcd
systemctl start etcd
systemctl status etcd
# Check cluster health
etcdctl cluster-health

## 2.4 Pull the images (on every node; alternatively pull once, push to the private registry, then have each node pull from there)
images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done
# If pulls are slow, add a registry mirror to the docker startup options: --registry-mirror=http://b438f72b.m.daocloud.io
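
Pushing the retagged images into the private registry mentioned above could then look like this — a dry-run sketch (the commands are echoed rather than executed; the registry address comes from section 1 and the shortened image list is an illustration):

```shell
# Dry-run sketch: print the tag/push commands for the private registry.
# Drop the leading "echo" to actually run them on a node that has the images.
registry=192.168.99.117:5000
images="kube-proxy-amd64:v1.5.1 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0"
for image in $images; do
    echo docker tag "gcr.io/google_containers/$image" "$registry/$image"
    echo docker push "$registry/$image"
done
```
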
## 2.5 Start kubelet
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet   # watch the status output closely

## 2.6 Create the cluster

kubeadm init --api-advertise-addresses=192.168.99.117 --external-etcd-endpoints=http://192.168.99.117:2379,http://192.168.99.118:2379,http://192.168.99.119:2379 --use-kubernetes-version v1.5.1 --pod-network-cidr 10.244.0.0/16

Flag --external-etcd-endpoints has been deprecated, this flag will be removed when componentconfig exists
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "c53ef2.d257d49589d634f0"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 15.299235 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 1.002937 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 2.502881 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=c53ef2.d257d49589d634f0 192.168.99.117   # record this line; nodes need it to join the cluster
## 2.7 Record the token

You can now join any number of machines by running the following on each node:
kubeadm join --token=c53ef2.d257d49589d634f0 192.168.99.117
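
The join line is easy to lose in the wall of init output; it can be grepped back out of a saved copy. A sketch that runs against a fragment of the sample output above — on a real master you would first save the output, e.g. with `kubeadm init ... | tee /root/kubeadm-init.log` (the log path is an assumption):

```shell
# Recover the "kubeadm join" command from saved `kubeadm init` output.
# Sketch: init_output holds the tail of the sample output above.
init_output='[addons] Created essential addon: kube-dns
Your Kubernetes master has initialized successfully!
kubeadm join --token=c53ef2.d257d49589d634f0 192.168.99.117'
join_cmd=$(printf '%s\n' "$init_output" | grep -o 'kubeadm join --token=[^ ]* [0-9.]*')
echo "$join_cmd"
```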

## 2.8 Configure the network
# Pull the image first; the apply step often fails to fetch it otherwise
docker pull quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64
# or via the mirror:
docker pull jicki/flannel-git:v0.6.1-28-g5dde68d-amd64
docker tag jicki/flannel-git:v0.6.1-28-g5dde68d-amd64 quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64
docker rmi jicki/flannel-git:v0.6.1-28-g5dde68d-amd64
# http://kubernetes.io/docs/admin/addons/ lists several network add-ons; pick one.
# Flannel is used here. With Flannel, `kubeadm init` must be run with --pod-network-cidr.
wget  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl create -f kube-flannel.yml

## 2.9 Check kubelet status

systemctl status kubelet

# 3.0 Deploy the Kubernetes Nodes

## 3.1 Install Docker
yum install -y docker

## 3.2 Pull the images (same set as on the master; alternatively pull from the private registry pushed to earlier and retag)

images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull jicki/$imageName
  docker tag jicki/$imageName gcr.io/google_containers/$imageName
  docker rmi jicki/$imageName
done
Or (192.168.99.117:5000 is the private registry address):

images=(kube-proxy-amd64:v1.5.1 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.1 kube-controller-manager-amd64:v1.5.1 kube-apiserver-amd64:v1.5.1 etcd-amd64:3.0.14-kubeadm kube-dnsmasq-amd64:1.4 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 dnsmasq-metrics-amd64:1.0)
for imageName in ${images[@]} ; do
  docker pull 192.168.99.117:5000/$imageName
  docker tag 192.168.99.117:5000/$imageName gcr.io/google_containers/$imageName
  docker rmi 192.168.99.117:5000/$imageName
done

## 3.3 Start kubelet
systemctl enable kubelet
systemctl start kubelet
## 3.4 Join the cluster
kubeadm join --token=c53ef2.d257d49589d634f0 192.168.99.117

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

## 3.5 Check cluster status (query from the master; the other nodes cannot query it yet. A way to query from other nodes is covered in section 4.1.)

[iyunv@master ~]# kubectl get node
NAME         STATUS         AGE
k8s-node-1   Ready,master   27m
k8s-node-2   Ready          6s
k8s-node-3   Ready          9s

## 3.6 Check service status
[iyunv@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-qrp68               1/1       Running   1          1h
kube-system   kube-apiserver-k8s-node-1            1/1       Running   2          1h
kube-system   kube-controller-manager-k8s-node-1   1/1       Running   2          1h
kube-system   kube-discovery-1769846148-g2lpc      1/1       Running   1          1h
kube-system   kube-dns-2924299975-xbhv4            4/4       Running   3          1h
kube-system   kube-flannel-ds-39g5n                2/2       Running   2          1h
kube-system   kube-flannel-ds-dwc82                2/2       Running   2          1h
kube-system   kube-flannel-ds-qpkm0                2/2       Running   2          1h
kube-system   kube-proxy-16c50                     1/1       Running   2          1h
kube-system   kube-proxy-5rkc8                     1/1       Running   2          1h
kube-system   kube-proxy-xwrq0                     1/1       Running   2          1h
kube-system   kube-scheduler-k8s-node-1            1/1       Running   2          1h
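
Whether everything is up can be checked mechanically by comparing the two halves of the READY column. A sketch fed two sample rows instead of a live cluster (one row deliberately shows 3/4 to exercise the check); on a real master you would pipe `kubectl get pods --all-namespaces` output, minus the header, into the same awk:

```shell
# Count pods whose READY column (e.g. 3/4) shows fewer containers ready
# than desired. Sketch: fed two hypothetical rows, not live kubectl output.
pods='kube-system   kube-dns-2924299975-xbhv4   3/4   Running   3   1h
kube-system   kube-flannel-ds-39g5n   2/2   Running   2   1h'
not_ready=$(printf '%s\n' "$pods" | awk '{split($3, a, "/"); if (a[1] != a[2]) n++} END {print n + 0}')
echo "$not_ready"
```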


# 4.0 Configure Kubernetes
## 4.1 Controlling the cluster from other hosts
# Copy the master's /etc/kubernetes/admin.conf to the same location on the other nodes

scp /etc/kubernetes/admin.conf root@node1:/etc/kubernetes
[iyunv@node1 ~]# kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
NAME      STATUS         AGE
master    Ready,master   7h
node1     Ready          6h
node2     Ready          6h
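
Typing `--kubeconfig` on every call gets old; exporting `KUBECONFIG` once per shell has the same effect. A small sketch (the kubectl call is commented out so the snippet stands alone):

```shell
# Point kubectl at the copied admin.conf for the whole shell session.
export KUBECONFIG=/etc/kubernetes/admin.conf
# kubectl get nodes   # now works without the --kubeconfig flag
echo "$KUBECONFIG"
```

Adding the export line to the node's ~/.bash_profile makes it permanent for that user.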

## 4.2 Configure the dashboard (on the master)
wget https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

# Edit the yaml file
vi kubernetes-dashboard.yaml

  Change the image to the local registry address:
  image: 192.168.99.117:5000/kubernetes-dashboard-amd64:v1.5.0

[iyunv@master ~]# kubectl create -f ./kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
# Look up the NodePort, i.e. the port each node exposes externally
[iyunv@master ~]# kubectl describe svc kubernetes-dashboard --namespace=kube-system
Name:                   kubernetes-dashboard
Namespace:              kube-system
Labels:                 app=kubernetes-dashboard
Selector:               app=kubernetes-dashboard
Type:                   NodePort
IP:                     10.96.94.33
Port:                   <unset> 80/TCP
NodePort:               <unset> 30923/TCP
Endpoints:              10.244.1.3:9090
Session Affinity:       None
No events.
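
The port number can also be extracted without reading the whole describe dump. A sketch run against the sample NodePort line above rather than a live cluster (with a working cluster, `kubectl get svc kubernetes-dashboard --namespace=kube-system -o jsonpath='{.spec.ports[0].nodePort}'` should return the same number directly):

```shell
# Pull the bare port number out of the NodePort line of `kubectl describe svc`.
# Sketch: fed the sample line from above instead of live kubectl output.
describe_line='NodePort:               <unset> 30923/TCP'
node_port=$(printf '%s\n' "$describe_line" | awk '/^NodePort:/ { sub(/\/TCP/, "", $3); print $3 }')
echo "$node_port"
```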

# Access the dashboard at http://192.168.99.118:31736 or http://192.168.99.119:31736 (substitute the NodePort reported by the describe command). In testing it is only reachable via the node addresses; the master's port does not respond, and the cause is still being investigated.

Deploy an nginx service
kubectl create  -f nginx-rc.yaml
kubectl create  -f nginx-service.yaml

[iyunv@node2 nginx]# cat  nginx-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: 192.168.99.117:5000/nginx-php-fpm
        ports:
          - containerPort: 80
            hostPort: 800
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: etc-nginx-confd
        - mountPath: /var/www/html
          name: nginx-www
      volumes:
      - hostPath:
          path: /etc/nginx
        name: etc-nginx-confd
      - hostPath:
          path: /data
        name: nginx-www
[iyunv@node2 nginx]# cat  nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
    - port: 8000
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    name: nginx

