yes-no posted on 2018-11-16 09:17:06

Compiling nginx from source, with clustering and high availability

  Lab environment:
  server7       nginx (primary)
  server8       httpd backend
  server9       httpd backend
  server10      nginx (backup)
  # tar zxf nginx-1.12.0.tar.gz
  # ls
  nginx-1.12.0  nginx-1.12.0.tar.gz  varnish
  # cd nginx-1.12.0
  # ls
  auto     CHANGES.ru  configure  html     man     src
  CHANGES  conf        contrib    LICENSE  README
  # cd src/core/
  # vim nginx.h        # hide the version number
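The banner nginx sends comes from the NGINX_VER macro in src/core/nginx.h; a minimal sketch of the edit (macro names as in nginx 1.12.0):

```
#define nginx_version      1012000
#define NGINX_VERSION      "1.12.0"
#define NGINX_VER          "nginx"      /* was: "nginx/" NGINX_VERSION */
```

With this change the Server: response header reports only "nginx", with no version number.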

  # cd /root/nginx-1.12.0/auto/cc/
  # vim gcc            # comment out the debug flag
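In auto/cc/gcc the debug flag is appended to CFLAGS; commenting it out keeps debug symbols out of the binary and makes it much smaller. A sketch of the lines in question:

```
# debug
#CFLAGS="$CFLAGS -g"
```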

  # yum install -y pcre-devel openssl-devel zlib-devel gcc
  # (installs the build dependencies)
  # ./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_stub_status_module
  # make && make install       # build and install
  # cd /usr/local/nginx/sbin/
  # ls
  nginx
  # ./nginx
  Check the listening port:

  Access it in a browser.

  Add a dedicated user:
  # useradd -u 800 nginx

  # id nginx
  uid=800(nginx) gid=800(nginx) groups=800(nginx)
  # cd /usr/local/nginx/conf/
  # vim nginx.conf
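The edit here points the worker processes at the user just created; a minimal sketch of the directive near the top of nginx.conf (uncomment and change the default `user nobody;`):

```nginx
user  nginx nginx;
```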

  Create a symbolic link:
  # cd /usr/local/nginx/sbin/
  # ls
  nginx
  # ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/
  # which nginx
  /usr/local/sbin/nginx

  # nginx -s reload
  # vim /usr/local/nginx/conf/nginx.conf

  # vim /etc/security/limits.conf
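A sketch of the limits.conf entry assumed here, raising the open-file limit for the nginx user to match the 65535 used in nginx.conf:

```
nginx           -       nofile          65535
```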

  $ ulimit -a
  core file size          (blocks, -c) 0
  data seg size           (kbytes, -d) unlimited
  scheduling priority             (-e) 0
  file size               (blocks, -f) unlimited
  pending signals                 (-i) 7812
  max locked memory       (kbytes, -l) 64
  max memory size         (kbytes, -m) unlimited
  open files                      (-n) 65535
  pipe size            (512 bytes, -p) 8
  POSIX message queues     (bytes, -q) 819200
  real-time priority              (-r) 0
  stack size              (kbytes, -s) 10240
  cpu time               (seconds, -t) unlimited
  max user processes              (-u) 1024
  virtual memory          (kbytes, -v) unlimited
  file locks                      (-x) unlimited
  # vim /usr/local/nginx/conf/nginx.conf
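With the file-descriptor limit raised, the matching nginx.conf settings can follow suit — a sketch (values are the ones assumed in this walkthrough):

```nginx
worker_rlimit_nofile  65535;

events {
    worker_connections  65535;
}
```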


  Generate a certificate:
  # cd /etc/pki/tls/certs/
  # ls
  ca-bundle.crt  ca-bundle.trust.crt  make-dummy-cert  Makefile  renew-dummy-cert
  # pwd
  /etc/pki/tls/certs
  # make cert.pem

  # mv cert.pem /usr/local/nginx/conf/

  # nginx -t
  nginx: SSL_CTX_use_PrivateKey_file("/usr/local/nginx/conf/cert.key") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/usr/local/nginx/conf/cert.key','r') error:20074002:BIO routines:FILE_CTRL:system lib error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib)
  nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed
  The config file referenced cert.key by default, but the Makefile generated cert.pem, so update the certificate paths accordingly.

  # nginx -t
  nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
  nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
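A sketch of the HTTPS server block this test validates; `make cert.pem` writes the private key and certificate into one PEM file, so both directives can point at it:

```nginx
server {
    listen              443 ssl;
    server_name         localhost;

    # cert.pem bundles key + certificate, so it serves for both directives
    ssl_certificate     cert.pem;
    ssl_certificate_key cert.pem;

    location / {
        root  html;
        index index.html index.htm;
    }
}
```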

  # nginx -s reload
  Access it in a browser over HTTPS.

  Add module configuration:
  # cd /usr/local/nginx/conf/
  # vim nginx.conf


  # nginx -s reload
  Add an HTTP virtual host:
  # vim nginx.conf

  Create the document root, then reload the service:
  # mkdir   /www1
  # cd /www1/
  # ls
  # vim index.html

  # nginx -s reload
  Add a hosts entry on the physical machine and test:
  # curl www.cara.org
  cara1.............
  # vim nginx.conf


  # nginx -s reload
  Add a hosts entry on the physical machine and test:
  # curl bbs.cara.org
  cara2.............
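A sketch of the two name-based virtual hosts used above; /www1 is the root created earlier, and /www2 is an assumed root for the second host:

```nginx
server {
    listen      80;
    server_name www.cara.org;
    location / {
        root  /www1;
        index index.html;
    }
}

server {
    listen      80;
    server_name bbs.cara.org;
    location / {
        root  /www2;    # assumed document root for the second vhost
        index index.html;
    }
}
```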
  Load balancing


  # nginx -s reload
  Install the httpd service on server8 and server9:
  # vim /var/www/html/index.html
  # cat /var/www/html/index.html      (on server8)
  server8
  # cat /var/www/html/index.html      (on server9)
  server9
  Test from the physical machine:

  Different scheduling behaviors can be achieved by adding parameters to the upstream servers.
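A sketch of an upstream block with the commonly used tuning parameters; the backend addresses are assumed from the 172.25.35.0/24 lab subnet seen later:

```nginx
upstream cara {
    # ip_hash;                        # pin each client to one backend
    server 172.25.35.8:80 weight=2;   # assumed server8; gets twice the traffic
    server 172.25.35.9:80;            # assumed server9
    # server 127.0.0.1:80 backup;     # used only if all others are down
}

server {
    listen      80;
    server_name www.cara.org;
    location / {
        proxy_pass http://cara;
    }
}
```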


  High availability
  # scp -r nginx server10:/usr/local
  # cd /usr/local/
  # ls
  bin  etc  games  include  lib  lib64  libexec  nginx  sbin  share  src
  # useradd -u 800 nginx

  # id nginx
  uid=800(nginx) gid=800(nginx) groups=800(nginx)
  Install the ricci service on both server7 and server10 (it ships with the distribution's high-availability add-on, which must be added to the yum repo config), set its password, and enable it at boot:
  # yum install -y ricci
  # passwd ricci
  Changing password for user ricci.
  New password:
  BAD PASSWORD: it is based on a dictionary word
  BAD PASSWORD: it is too simple
  Retype new password:
  passwd: all authentication tokens updated successfully.
  # /etc/init.d/ricci start
  Starting oddjobd:                                          
  generating SSL certificates...done
  Generating NSS database...done
  Starting ricci:                                            [  OK  ]
  # chkconfig ricci on
  # yum install -y luci
  # /etc/init.d/luci start
  Adding following auto-detected host IDs (IP addresses/domain names), corresponding to 'server7' address, to the configuration of self-managed certificate '/var/lib/luci/etc/cacert.config' (you can change them by editing '/var/lib/luci/etc/cacert.config', removing the generated certificate '/var/lib/luci/certs/host.pem' and restarting luci):
  (none suitable found, you can still do it manually as mentioned above)
  Generating a 2048 bit RSA private key
  writing new private key to '/var/lib/luci/certs/host.pem'
  Start luci...                                              [  OK  ]
  Point your web browser to https://server7:8084 (or equivalent) to access luci
  # chkconfig luci on
  Open that URL in a browser (with name resolution in place), log in as root, and add the nodes.

  View the cluster:

  Use fence on the physical machine to control power-cycling of the VMs. On the physical host, install:
  # rpm -qa |grep fence
  libxshmfence-1.2-1.el7.x86_64
  fence-virtd-multicast-0.3.0-16.el7.x86_64
  fence-virtd-libvirt-0.3.0-16.el7.x86_64
  fence-virtd-0.3.0-16.el7.x86_64
  # fence_virtd -c
  Module search path :
  Available backends:
  libvirt 0.1
  Available listeners:
  multicast 1.2
  Listener modules are responsible for accepting requests
  from fencing clients.
  Listener module :
  The multicast listener module is designed for use environments
  where the guests and hosts may communicate over a network using
  multicast.
  The multicast address is the address that a client will use to
  send fencing requests to fence_virtd.
  Multicast IP Address :
  Using ipv4 as family.
  Multicast IP Port :
  Setting a preferred interface causes fence_virtd to listen only
  on that interface. Normally, it listens on all interfaces.
  In environments where the virtual machines are using the host
  machine as a gateway, this must be set (typically to virbr0).
  Set to 'none' for no interface.
  Interface : br0
  The key file is the shared key information which is used to
  authenticate fencing requests. The contents of this file must
  be distributed to each physical host and virtual machine within
  a cluster.
  Key File :
  Backend modules are responsible for routing requests to
  the appropriate hypervisor or management layer.
  Backend module :
  Configuration complete.
  === Begin Configuration ===
  backends {
  libvirt {
  uri = "qemu:///system";
  }
  }
  listeners {
  multicast {
  port = "1229";
  family = "ipv4";
  interface = "br0";
  address = "225.0.0.12";
  key_file = "/etc/cluster/fence_xvm.key";
  }
  }
  fence_virtd {
  module_path = "/usr/lib64/fence-virt";
  backend = "libvirt";
  listener = "multicast";
  }
  === End Configuration ===
  Replace /etc/fence_virt.conf with the above ? y
  # mkdir -p /etc/cluster/
  # cd /etc/cluster/
  # dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
  1+0 records in
  1+0 records out
  128 bytes (128 B) copied, 0.000668548 s, 191 kB/s
  # ls
  fence_xvm.key
  # scp fence_xvm.key server7:/etc/cluster/
  root@server7's password:
  fence_xvm.key                                 100%  128     0.1KB/s   00:00
  # scp fence_xvm.key server10:/etc/cluster/
  root@server10's password:
  fence_xvm.key                                 100%  128     0.1KB/s   00:00
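fence_virtd expects the same 128-byte key on every host; a quick local sketch of generating a key and checking its size (written to /tmp so it is safe to run anywhere — the real key path is /etc/cluster/fence_xvm.key as above):

```shell
# Generate a 128-byte random key (same command as above, different path)
dd if=/dev/urandom of=/tmp/fence_xvm.key bs=128 count=1 2>/dev/null

# Verify the size; all hosts should also agree on the checksum
stat -c %s /tmp/fence_xvm.key      # prints 128
md5sum /tmp/fence_xvm.key
```

Comparing the md5sum output across the physical host and both nodes is an easy way to confirm the copies match.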
  Create the fence device:

  Go back to Nodes and select server7.



  Paste the UUID.

  Configure server10 the same way as server7.



  # cd
  # vim /etc/init.d/nginx
  #!/bin/sh
  #
  # nginx      Startup script for nginx
  #
  # chkconfig: - 85 15
  # processname: nginx
  # config: /usr/local/nginx/conf/nginx.conf
  # pidfile: /usr/local/nginx/logs/nginx.pid
  # description: nginx is an HTTP and reverse proxy server
  #
  ### BEGIN INIT INFO
  # Provides: nginx
  # Required-Start: $local_fs $remote_fs $network
  # Required-Stop: $local_fs $remote_fs $network
  # Default-Start: 2 3 4 5
  # Default-Stop: 0 1 6
  # Short-Description: start and stop nginx
  ### END INIT INFO

  # Source function library.
  . /etc/rc.d/init.d/functions
  if [ -L $0 ]; then
  initscript=`/bin/readlink -f $0`
  else
  initscript=$0
  fi
  #sysconfig=`/bin/basename $initscript`
  #if [ -f /etc/sysconfig/$sysconfig ]; then
  #  . /etc/sysconfig/$sysconfig
  #fi
  nginx=${NGINX-/usr/local/nginx/sbin/nginx}
  prog=`/bin/basename $nginx`
  conffile=${CONFFILE-/usr/local/nginx/conf/nginx.conf}
  lockfile=${LOCKFILE-/var/lock/subsys/nginx}
  pidfile=${PIDFILE-/usr/local/nginx/logs/nginx.pid}
  SLEEPMSEC=${SLEEPMSEC-200000}
  UPGRADEWAITLOOPS=${UPGRADEWAITLOOPS-5}
  RETVAL=0
  start() {
  echo -n $"Starting $prog: "
  daemon --pidfile=${pidfile} ${nginx} -c ${conffile}
  RETVAL=$?
  echo
  [ $RETVAL = 0 ] && touch ${lockfile}
  return $RETVAL
  }
  stop() {
  echo -n $"Stopping $prog: "
  killproc -p ${pidfile} ${prog}
  RETVAL=$?
  echo
  [ $RETVAL = 0 ] && rm -f ${lockfile} ${pidfile}
  }
  reload() {
  echo -n $"Reloading $prog: "
  killproc -p ${pidfile} ${prog} -HUP
  RETVAL=$?
  echo
  }
  upgrade() {
  oldbinpidfile=${pidfile}.oldbin
  configtest -q || return
  echo -n $"Starting new master $prog: "
  killproc -p ${pidfile} ${prog} -USR2
  echo
  for i in `/usr/bin/seq $UPGRADEWAITLOOPS`; do
  /bin/usleep $SLEEPMSEC
  if [ -f ${oldbinpidfile} -a -f ${pidfile} ]; then
  echo -n $"Graceful shutdown of old $prog: "
  killproc -p ${oldbinpidfile} ${prog} -QUIT
  RETVAL=$?
  echo
  return
  fi
  done
  echo $"Upgrade failed!"
  RETVAL=1
  }
  configtest() {
  if [ "$#" -ne 0 ] ; then
  case "$1" in
  -q)
  FLAG=$1
  ;;
  *)
  ;;
  esac
  shift
  fi
  ${nginx} -t -c ${conffile} $FLAG
  RETVAL=$?
  return $RETVAL
  }
  rh_status() {
  status -p ${pidfile} ${nginx}
  }

  # See how we were called.
  case "$1" in
  start)
  rh_status >/dev/null 2>&1 && exit 0
  start
  ;;
  stop)
  stop
  ;;
  status)
  rh_status
  RETVAL=$?
  ;;
  restart)
  configtest -q || exit $RETVAL
  stop
  start
  ;;
  upgrade)
  rh_status >/dev/null 2>&1 || exit 0
  upgrade
  ;;
  condrestart|try-restart)
  if rh_status >/dev/null 2>&1; then
  stop
  start
  fi
  ;;
  force-reload|reload)
  reload
  ;;
  configtest)
  configtest
  ;;
  *)
  echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|upgrade|reload|status|help|configtest}"
  RETVAL=2
  esac
  exit $RETVAL
  # chmod +x /etc/init.d/nginx
  # /etc/init.d/nginx start
  Starting nginx:                                            [  OK  ]
  # scp nginx server10:/etc/init.d/


  # clustat
  Cluster Status for luci @ Wed Jul  4 22:00:02 2018
  Member Status: Quorate

  Member Name                                 ID   Status
  server7                                     1 Online, Local, rgmanager
  server10                                    2 Online, rgmanager

  Service Name                   Owner (Last)                   State
  service:nginx                  server7                        started
  # clusvcadm -r nginx -m server10
  Trying to relocate service:nginx to server10...Success
  service:nginx is now running on server10
  ## Relocates the service to server10; "nginx" (shown in bold in luci) is the service group name chosen at creation
  # clustat
  Cluster Status for luci @ Wed Jul  4 22:07:31 2018
  Member Status: Quorate

  Member Name                                 ID   Status
  server7                                     1 Online, Local, rgmanager
  server10                                    2 Online, rgmanager

  Service Name                   Owner (Last)                   State
  service:nginx                  server10                       started
  Test back and forth in the browser.

  # ip addr
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
  inet6 ::1/128 scope host
  valid_lft forever preferred_lft forever
  2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
  link/ether 52:54:00:6d:4b:07 brd ff:ff:ff:ff:ff:ff
  inet 172.25.35.10/24 brd 172.25.35.255 scope global eth0
  inet 172.25.35.200/24 scope global secondary eth0
  inet6 fe80::5054:ff:fe6d:4b07/64 scope link
  valid_lft forever preferred_lft forever
  The VIP floats to whichever node currently owns the service.
  ## Crash the kernel to verify that fencing works
  # echo c > /proc/sysrq-trigger       # the service automatically fails back to server7
  # clustat
  Cluster Status for luci @ Wed Jul  4 22:22:25 2018
  Member Status: Quorate

  Member Name                                 ID   Status
  server7                                     1 Online, Local, rgmanager
  server10                                    2 Online, rgmanager

  Service Name                   Owner (Last)                   State
  service:nginx                  server7                        started
  # ip addr
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
  inet6 ::1/128 scope host
  valid_lft forever preferred_lft forever
  2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
  link/ether 52:54:00:09:2d:4d brd ff:ff:ff:ff:ff:ff
  inet 172.25.35.7/24 brd 172.25.35.255 scope global eth0
  inet 172.25.35.200/24 scope global secondary eth0
  inet6 fe80::5054:ff:fe09:2d4d/64 scope link
  valid_lft forever preferred_lft forever
  The VIP has floated back as well.

