Posted by fjqzyc on 2018-9-21 13:47:21

Building Oracle HA With PowerHA 6.1 On AIX 6.1-candon123

  In this article, you will learn how to configure IBM PowerHA 6.1 on AIX 6.1. My environment consists of two nodes, dbserv1 and dbserv2; the address plan is shown in the /etc/hosts entries in step 1 below.


  The en0 interface carries only the boot IP, and en1 carries only the standby IP.
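  Before configuring anything, it may help to confirm which interface actually carries which address on each node; a quick sketch (interface names en0/en1 as described above):

[*]#ifconfig en0
[*]#ifconfig en1
[*]#netstat -in | egrep "en0|en1"
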
  I.Requirements
  1.Append the following lines to /etc/hosts on all nodes.
  


[*]#For Boot IP
[*]172.16.255.11   dbserv1
[*]172.16.255.13   dbserv2
[*]#For Standby IP
[*]192.168.0.11    dbserv1-stby
[*]192.168.0.13    dbserv2-stby
[*]#For Service IP
[*]172.16.255.15   dbserv1-serv
[*]172.16.255.17   dbserv2-serv
[*]#For Persistent IP
[*]192.168.2.11    dbserv1-pers
[*]192.168.2.13    dbserv2-pers
  
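  After editing /etc/hosts, it is worth checking from each node that the boot and standby labels resolve and respond; a minimal sketch using the labels above:

[*]#for h in dbserv1 dbserv2 dbserv1-stby dbserv2-stby; do ping -c 1 $h >/dev/null && echo "$h ok" || echo "$h FAILED"; done
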

  2.Ensure that the following AIX filesets are installed:
  


[*]#lslpp -l bos.data bos.adt.lib bos.adt.libm bos.adt.syscalls bos.net.tcp.client bos.net.tcp.server bos.rte.SRC bos.rte.libc bos.rte.libcfg bos.rte.libpthreads bos.rte.odm bos.rte.lvm bos.clvm.enh bos.adt.base bos.perf.perfstat bos.perf.libperfstat bos.perf.proctools rsct.basic.rte rsct.compat.clients.rte xlC.aix61.rte
  
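  If the lslpp listing is long, a small loop that reports only the missing filesets is easier to read; a sketch covering a few of the filesets from the command above:

[*]#for f in bos.clvm.enh bos.adt.syscalls bos.perf.libperfstat rsct.basic.rte xlC.aix61.rte; do lslpp -L $f >/dev/null 2>&1 || echo "missing: $f"; done
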

  3.Install PowerHA on all nodes:
  


[*]#loopmount -i powerHA_v6.1.iso -o "-V cdrfs -o ro" -m /mnt
[*]#installp -a -d /mnt all
  

  After installation, bring PowerHA up to date with the latest service pack and reboot all nodes; a hedged update sketch follows.
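  How you update PowerHA depends on where the service pack filesets live; a hedged sketch, assuming the fixes have been unpacked to /tmp/ha61sp (a hypothetical path):

[*]#installp -agXYd /tmp/ha61sp cluster.*
[*]#lslpp -l cluster.es.server.rte
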
  4.Append the boot IPs and standby IPs to /usr/es/sbin/cluster/etc/rhosts on both nodes:
  


[*]//On dbserv1
[*]#cat /usr/es/sbin/cluster/etc/rhosts
[*]172.16.255.11
[*]172.16.255.13
[*]192.168.0.11
[*]192.168.0.13
[*]//On dbserv2
[*]#cat /usr/es/sbin/cluster/etc/rhosts
[*]172.16.255.11
[*]172.16.255.13
[*]192.168.0.11
[*]192.168.0.13
  
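  The cluster communication daemon reads this rhosts file, so after editing it you may want to restart the daemon and confirm it is active; a sketch, assuming the PowerHA 6.1 subsystem name clcomdES:

[*]#stopsrc -s clcomdES; startsrc -s clcomdES
[*]#lssrc -s clcomdES
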

  5.Edit the /usr/es/sbin/cluster/netmon.cf file. On each node, append that node's boot IP and standby IP:
  


[*]//On dbserv1
[*]#cat netmon.cf
[*]172.16.255.11
[*]192.168.0.11
[*]//On dbserv2
[*]#cat netmon.cf
[*]172.16.255.13
[*]192.168.0.13
  

  6.Create a disk heartbeat:
  


[*]//Create heartvg on dbserv1
[*]#mkvg -x -y heartvg -C hdisk5
[*]#lspv|grep hdisk5
[*]hdisk5          000c1acf7ca3bc3b                  heartvg
[*]//import heartvg on dbserv2
[*]#importvg -y heartvg hdisk5
[*]#lspv|grep hdisk5
[*]hdisk5          000c1acf7ca3bc3b                  heartvg
  

  Test the disk heartbeat:
  


[*]//Run the following command on dbserv1
[*]#/usr/sbin/rsct/bin/dhb_read -p hdisk5 -r
[*]DHB CLASSIC MODE
[*]First node byte offset: 61440
[*]Second node byte offset: 62976
[*]Handshaking byte offset: 65024
[*]       Test byte offset: 64512
[*]
[*]Receive Mode:
[*]Waiting for response . . .
[*]Magic number = 0x87654321
[*]Magic number = 0x87654321
[*]Magic number = 0x87654321
[*]Link operating normally
[*]
[*]//Run the following command on dbserv2
[*]#/usr/sbin/rsct/bin/dhb_read -p hdisk5 -t
[*]DHB CLASSIC MODE
[*]First node byte offset: 61440
[*]Second node byte offset: 62976
[*]Handshaking byte offset: 65024
[*]       Test byte offset: 64512
[*]
[*]Transmit Mode:
[*]Magic number = 0x87654321
[*]Detected remote utility in receive mode.Waiting for response . . .
[*]Magic number = 0x87654321
[*]Magic number = 0x87654321
[*]Link operating normally
  

  7.Create a Shared Volume Group:
  


[*]//On dbserv1
[*]#mkvg -V 48 -y oradata hdisk6 hdisk7
[*]0516-1254 mkvg: Changing the PVID in the ODM.
[*]0516-1254 mkvg: Changing the PVID in the ODM.
[*]oradata
[*]#mklv -y lv02 -t jfs2 oradata 20G
[*]lv02
[*]#crfs -v jfs2 -d /dev/lv02 -m /oradata
[*]File system created successfully.
[*]20970676 kilobytes total disk space.
[*]New File System size is 41943040
[*]#chvg -an oradata
[*]#varyoffvg oradata
[*]#exportvg oradata
[*]
[*]//On dbserv2 import oradata volume group
[*]#importvg -V 48 -y oradata hdisk6
[*]oradata
[*]#lspv
[*]hdisk0          000c18cf00094faa                  rootvg          active
[*]hdisk1          000c18cf003ca02c                  None
[*]hdisk2          000c1acf3e6440c6                  None
[*]hdisk3          000c1acf3e645312                  None
[*]hdisk4          000c1acf3e6460d9                  None
[*]hdisk5          000c1acf7ca3bc3b                  heartvg
[*]hdisk6          000c1acf7cb764d9                  oradata         active
[*]hdisk7          000c1acf7cb765aa                  oradata         active
  
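  Because PowerHA will vary the shared volume group on and off itself, you will normally also disable auto-varyon and vary it off on dbserv2 after the import, and confirm that both nodes agree on the major number (48); a sketch:

[*]//On dbserv2
[*]#chvg -an oradata
[*]#varyoffvg oradata
[*]//The major number should be 48 on both nodes
[*]#ls -l /dev/oradata
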

  8.For Oracle, do the following steps.
  (1).Check that the following filesets are installed:
  


[*]#lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat bos.perf.libperfstat bos.perf.proctools xlC.aix61.rte
  

  (2).Change the following network tunables on both nodes:
  


[*]//On dbserv1
[*]#no -p -o tcp_ephemeral_low=9000
[*]#no -p -o tcp_ephemeral_high=65500
[*]#no -p -o udp_ephemeral_low=9000
[*]#no -p -o udp_ephemeral_high=65500
[*]//On dbserv2
[*]#no -p -o tcp_ephemeral_low=9000
[*]#no -p -o tcp_ephemeral_high=65500
[*]#no -p -o udp_ephemeral_low=9000
[*]#no -p -o udp_ephemeral_high=65500
  
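  A quick check that the ephemeral port ranges took effect on both nodes:

[*]#no -a | egrep "ephemeral"
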

  (3).Create the oracle user and groups on both nodes:
  


[*]//On dbserv1
[*]#for id in oinstall dba oper;do mkgroup $id;done
[*]#mkuser oracle;passwd oracle
[*]#chuser pgrp=oinstall oracle
[*]#chuser groups=oinstall,dba,oper oracle
[*]#chuser fsize=-1 oracle
[*]#chuser data=-1 oracle
[*]//On dbserv2
[*]#for id in oinstall dba oper;do mkgroup $id;done
[*]#mkuser oracle;passwd oracle
[*]#chuser pgrp=oinstall oracle
[*]#chuser groups=oinstall,dba,oper oracle
[*]#chuser fsize=-1 oracle
[*]#chuser data=-1 oracle
  
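  You can verify the oracle user's primary group, group set, and limits in one command; a sketch:

[*]#lsuser -a pgrp groups fsize data oracle
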

  (4).Change the maxuproc parameter on both nodes:
  


[*]//On dbserv1
[*]#chdev -l sys0 -a maxuproc=16384
[*]sys0 changed
[*]//On dbserv2
[*]#chdev -l sys0 -a maxuproc=16384
[*]sys0 changed
  

  (5).Create the Oracle home and environment on both nodes:
  


[*]//On dbserv1
[*]#mkdir /u01;chown oracle:oinstall /u01;su - oracle
[*]$vi .profile
[*]export ORACLE_SID=example
[*]export ORACLE_BASE=/u01/app/oracle
[*]export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
[*]export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
[*]export TNS_ADMIN=$ORACLE_HOME/network/admin
[*]export ORA_NLS11=$ORACLE_HOME/nls/data
[*]export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$JAVA_HOME/bin
[*]export LD_LIBRARY_PATH=$ORACLE_HOME/lib:${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib:/lib:/usr/lib:/usr/local/lib
[*]export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
[*]export THREADS_FLAG=native
[*]$. ./.profile;mkdir -p $ORACLE_HOME
[*]
[*]//On dbserv2
[*]#mkdir /u01;chown oracle:oinstall /u01;su - oracle
[*]$vi .profile
[*]export ORACLE_SID=example
[*]export ORACLE_BASE=/u01/app/oracle
[*]export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
[*]export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
[*]export TNS_ADMIN=$ORACLE_HOME/network/admin
[*]export ORA_NLS11=$ORACLE_HOME/nls/data
[*]export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$JAVA_HOME/bin
[*]export LD_LIBRARY_PATH=$ORACLE_HOME/lib:${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib:/lib:/usr/lib:/usr/local/lib
[*]export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
[*]export THREADS_FLAG=native
[*]$. ./.profile;mkdir -p $ORACLE_HOME
  

  (6).Ensure that the /tmp filesystem has enough space:
  


[*]#chfs -a size=+1G /tmp
  

  II.Create a cluster:
  1.Add a cluster:
  


[*]#/usr/es/sbin/cluster/utilities/claddclstr -n hatest
[*]Current cluster configuration:
[*]
[*]Cluster Name: hatest
[*]Cluster Connection Authentication Mode: Standard
[*]Cluster Message Authentication Mode: None
[*]Cluster Message Encryption: None
[*]Use Persistent Labels for Communication: No
[*]There are 0 node(s) and 0 network(s) defined
[*]
[*]No resource groups defined
  

  2.Add nodes to cluster:
  


[*]#/usr/es/sbin/cluster/utilities/clnodename -a dbserv1 -p dbserv1
[*]#/usr/es/sbin/cluster/utilities/clnodename -a dbserv2 -p dbserv2
[*]#/usr/es/sbin/cluster/utilities/clnodename
[*]dbserv1
[*]dbserv2
  

  3. Configure HACMP diskhb network:
  


[*]#/usr/es/sbin/cluster/utilities/clmodnetwork -a -l no -n net_diskhb_01 -i diskhb
[*]#/usr/es/sbin/cluster/cspoc/cl_ls2ndhbnets
[*]Network Name   Node and Disk List
[*]============   ==================      ==================
[*]net_diskhb_01
  

  4.Configure HACMP Communication devices:
  


[*]#/usr/es/sbin/cluster/utilities/claddnode -a diskhb_dbserv1:diskhb:net_diskhb_01:serial:service:/dev/hdisk5 -n dbserv1
[*]#/usr/es/sbin/cluster/utilities/claddnode -a diskhb_dbserv2:diskhb:net_diskhb_01:serial:service:/dev/hdisk5 -n dbserv2
[*]#/usr/es/sbin/cluster/cspoc/cl_ls2ndhbnets
[*]Network Name   Node and Disk List
[*]============   ==================      ==================
[*]net_diskhb_01  dbserv1:/dev/hdisk5   dbserv2:/dev/hdisk5
  

  Test the diskhb network:
  


[*]#/usr/es/sbin/cluster/sbin/cl_tst_2ndhbnet -cspoc -n'dbserv1,dbserv2' '/dev/hdisk5' 'dbserv1' '/dev/hdisk5' 'dbserv2'
[*]cl_tst_2ndhbnet: Starting the receive side of the test for disk /dev/hdisk5 on node dbserv1
[*]cl_tst_2ndhbnet: Starting the transmit side of the test for disk /dev/hdisk5 on node dbserv2
[*]dbserv1: DHB CLASSIC MODE
[*]dbserv1:First node byte offset: 61440
[*]dbserv1: Second node byte offset: 62976
[*]dbserv1: Handshaking byte offset: 65024
[*]dbserv1:      Test byte offset: 64512
[*]dbserv1:
[*]dbserv1: Receive Mode:
[*]dbserv1: Waiting for response . . .
[*]dbserv1: Magic number = 0x87654321
[*]dbserv1: Magic number = 0x87654321
[*]dbserv1: Magic number = 0x87654321
[*]dbserv1: Link operating normally
[*]dbserv2: DHB CLASSIC MODE
[*]dbserv2:First node byte offset: 61440
[*]dbserv2: Second node byte offset: 62976
[*]dbserv2: Handshaking byte offset: 65024
[*]dbserv2:      Test byte offset: 64512
[*]dbserv2:
[*]dbserv2: Transmit Mode:
[*]dbserv2: Magic number = 0x87654321
[*]dbserv2: Detected remote utility in receive mode.Waiting for response . . .
[*]dbserv2: Magic number = 0x87654321
[*]dbserv2: Magic number = 0x87654321
[*]dbserv2: Link operating normally
[*]cl_tst_2ndhbnet: Test complete
  

  5.Configure HACMP IP-Based Network:
  


[*]#/usr/es/sbin/cluster/utilities/clmodnetwork -a -n net_ether_02 -i ether -s 255.255.255.0 -l yes
[*]#/usr/es/sbin/cluster/utilities/clmodnetwork -a -n net_ether_03 -i ether -s 255.255.255.0 -l yes
  

  6.Add Communication Interfaces:
  


[*]#/usr/es/sbin/cluster/utilities/claddnode -a'dbserv1' :'ether' :'net_ether_02' : : : -n'dbserv1'
[*]#/usr/es/sbin/cluster/utilities/claddnode -a'dbserv2' :'ether' :'net_ether_02' : : : -n'dbserv2'
[*]#/usr/es/sbin/cluster/utilities/claddnode -a'dbserv1-stby' :'ether' :'net_ether_03' : : : -n'dbserv1'
[*]#/usr/es/sbin/cluster/utilities/claddnode -a'dbserv2-stby' :'ether' :'net_ether_03' : : : -n'dbserv2'
  

  7.Add service IPs:
  


[*]#/usr/es/sbin/cluster/utilities/claddnode -Tservice -B'dbserv1-serv' -w'net_ether_02'
[*]#/usr/es/sbin/cluster/utilities/claddnode -Tservice -B'dbserv2-serv' -w'net_ether_02'
  

  8.Add Resource Group:
  Extended Configuration->Extended Resource Configuration->HACMP Extended Resource Group Configuration->Add a Resource Group
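  If you prefer the command line to the SMIT panel above, the claddgrp utility can create the resource group; this is only a hedged sketch, because the flags and the policy codes (online on first available node, fall over to the next priority node, never fall back, matching the cldisp output shown later in step 6) are assumptions you should verify against the claddgrp usage message:

[*]#/usr/es/sbin/cluster/utilities/claddgrp -g 'oradb' -n 'dbserv1 dbserv2' -S 'OFAN' -O 'FNPN' -B 'NFB'
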

  9.Add persistent IPs:
  Extended Configuration->Extended Topology Configuration->Configure HACMP Persistent Node IP Label/Addresses->Add a Persistent Node IP Label/Address
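  Once the configuration is synchronized and cluster services are running, you can confirm that the persistent aliases are active on each node; a sketch using the addresses from /etc/hosts:

[*]#netstat -in | grep 192.168.2
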


  10.Verification and Synchronization:
  Extended Configuration->Extended Verification and Synchronization, or use the following command:
  


[*]#/usr/es/sbin/cluster/utilities/cldare -rt -V normal
  

  III.Install Oracle Database:
  1.Run the rootpre.sh script:
  Before installing, you must run rootpre.sh from the Oracle media on both nodes:
  


[*]//On dbserv1
[*]#./rootpre.sh
[*]./rootpre.sh output will be logged in /tmp/rootpre.out_12-05-28.13:38:43
[*]
[*]Checking if group services should be configured....
[*]Group "hagsuser" does not exist.
[*]Creating required group for group services: hagsuser
[*]Please add your Oracle userid to the group: hagsuser
[*]Configuring HACMP group services socket for possible use by Oracle.
[*]
[*]The group or permissions of the group services socket have changed.
[*]
[*]Please stop and restart HACMP before trying to use Oracle.
[*]
[*]
[*]//On dbserv2
[*]#./rootpre.sh
[*]./rootpre.sh output will be logged in /tmp/rootpre.out_12-05-28.13:38:11
[*]
[*]Checking if group services should be configured....
[*]Group "hagsuser" does not exist.
[*]Creating required group for group services: hagsuser
[*]Please add your Oracle userid to the group: hagsuser
[*]Configuring HACMP group services socket for possible use by Oracle.
[*]
[*]The group or permissions of the group services socket have changed.
[*]
[*]Please stop and restart HACMP before trying to use Oracle.
  

  After completing the steps above, install the Oracle database software and copy the installed Oracle files to the other node. Make sure that the Oracle listener address is your service IP; a hedged listener.ora sketch follows.
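  A hedged sketch of a listener.ora bound to the service label rather than the boot address (the listener name and port are assumptions; use dbserv2-serv on the other node):

[*]$cat $ORACLE_HOME/network/admin/listener.ora
[*]LISTENER =
[*]  (DESCRIPTION_LIST =
[*]    (DESCRIPTION =
[*]      (ADDRESS = (PROTOCOL = TCP)(HOST = dbserv1-serv)(PORT = 1521))
[*]    )
[*]  )
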
  2.Create start and stop scripts:
  


[*]#vi /etc/dbstart
[*]#!/usr/bin/bash
[*]#Define Oracle Home
[*]ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
[*]#Start Oracle Listener
[*]if [ -x $ORACLE_HOME/bin/lsnrctl ]; then
[*]su - oracle "-c lsnrctl start"
[*]fi
[*]#Start Oracle Instance
[*]if [ -x $ORACLE_HOME/bin/sqlplus ]; then
[*]su - oracle "-c sqlplus"
  
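  The /etc/dbstart listing above is cut off here and the matching /etc/dbstop script is not shown, so the following is only a minimal sketch of such a start/stop pair, reusing the ORACLE_HOME above; adapt it to your own instance:

[*]#cat /etc/dbstart
[*]#!/usr/bin/bash
[*]ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
[*]#Start the Oracle listener
[*]if [ -x $ORACLE_HOME/bin/lsnrctl ]; then
[*]   su - oracle "-c lsnrctl start"
[*]fi
[*]#Start the Oracle instance
[*]if [ -x $ORACLE_HOME/bin/sqlplus ]; then
[*]   su - oracle -c "echo startup | sqlplus -S / as sysdba"
[*]fi
[*]exit 0
[*]
[*]#cat /etc/dbstop
[*]#!/usr/bin/bash
[*]ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
[*]#Stop the Oracle instance first, then the listener
[*]if [ -x $ORACLE_HOME/bin/sqlplus ]; then
[*]   su - oracle -c "echo shutdown immediate | sqlplus -S / as sysdba"
[*]fi
[*]if [ -x $ORACLE_HOME/bin/lsnrctl ]; then
[*]   su - oracle "-c lsnrctl stop"
[*]fi
[*]exit 0
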

  IV.Configure HACMP Application Server:
  1.Add an Application Server:
  Extended Configuration->Extended Resource Configuration->Configure HACMP Application Servers->Add an Application Server

  2.Create Application Monitor:
  Extended Configuration->Extended Resource Configuration->Configure HACMP Application Servers->Configure HACMP Application Monitoring->Add a Process Application Monitor
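  The cldisp output in step 6 shows /etc/lsnrClear.sh as the monitor's cleanup method and /etc/lsnrRestart.sh as its restart method; those scripts are not listed in the post, so here is a minimal hedged sketch of what they might contain:

[*]#cat /etc/lsnrClear.sh
[*]#!/usr/bin/bash
[*]#Cleanup method: make sure no listener is left running
[*]su - oracle "-c lsnrctl stop"
[*]exit 0
[*]
[*]#cat /etc/lsnrRestart.sh
[*]#!/usr/bin/bash
[*]#Restart method: bring the listener back up
[*]su - oracle "-c lsnrctl start"
[*]exit 0
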

  3.Register resources to the resource group:
  Extended Configuration->Extended Resource Configuration->HACMP Extended Resource Group Configuration->Change/Show Resources and Attributes for a Resource Group


  5.Verification and Synchronization:
  Execute the following command:
  


[*]#/usr/es/sbin/cluster/utilities/cldare -rt -V normal
  

  After the above steps, start the HACMP cluster services and test your configuration; a hedged start/check sketch follows.
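  Cluster services are usually started from SMIT; a hedged sketch of the start and two quick checks (the fastpath and utilities below are standard HACMP 6.1 tools, but verify the paths on your system):

[*]#smitty clstart
[*]#lssrc -ls clstrmgrES | grep -i state
[*]#/usr/es/sbin/cluster/utilities/clRGinfo
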
  6.Display HACMP Configuration:
  


[*]#/usr/es/sbin/cluster/utilities/cldisp
[*]Cluster: hatest
[*]   Cluster services: active
[*]   State of cluster: up
[*]      Substate: stable
[*]
[*]#############
[*]APPLICATIONS
[*]#############
[*]   Cluster hatest provides the following applications: example
[*]      Application: example
[*]         example is started by /etc/dbstart
[*]         example is stopped by /etc/dbstop
[*]         Application monitor of example: example
[*]            Monitor name: example
[*]               Type: process
[*]               Process monitored: tnslsnr
[*]               Process owner: oracle
[*]               Instance count: 1
[*]               Stabilization interval: 60 seconds
[*]               Retry count: 3 tries
[*]               Restart interval: 198 seconds
[*]               Failure action: fallover
[*]               Cleanup method: /etc/lsnrClear.sh
[*]               Restart method: /etc/lsnrRestart.sh
[*]         This application is part of resource group 'oradb'.
[*]            Resource group policies:
[*]               Startup: on first available node
[*]               Fallover: to next priority node in the list
[*]               Fallback: never
[*]            State of example: online
[*]            Nodes configured to provide example: dbserv1 {up}dbserv2 {up}
[*]               Node currently providing example: dbserv1 {up}
[*]               The node that will provide example if dbserv1 fails is: dbserv2
[*]            Resources associated with example:
[*]               Service Labels
[*]                  dbserv1-serv(172.16.255.15) {online}
[*]                     Interfaces configured to provide dbserv1-serv:
[*]                        dbserv1 {up}
[*]                           with IP address: 172.16.255.11
[*]                           on interface: en0
[*]                           on node: dbserv1 {up}
[*]                           on network: net_ether_02 {up}
[*]                        dbserv2 {up}
[*]                           with IP address: 172.16.255.13
[*]                           on interface: en0
[*]                           on node: dbserv2 {up}
[*]                           on network: net_ether_02 {up}
[*]                  dbserv2-serv(172.16.255.17) {online}
[*]                     Interfaces configured to provide dbserv2-serv:
[*]                        dbserv1 {up}
[*]                           with IP address: 172.16.255.11
[*]                           on interface: en0
[*]                           on node: dbserv1 {up}
[*]                           on network: net_ether_02 {up}
[*]                        dbserv2 {up}
[*]                           with IP address: 172.16.255.13
[*]                           on interface: en0
[*]                           on node: dbserv2 {up}
[*]                           on network: net_ether_02 {up}
[*]               Shared Volume Groups:
[*]                  oradata
[*]
[*]#############
[*]TOPOLOGY
[*]#############
[*]   hatest consists of the following nodes: dbserv1 dbserv2
[*]      dbserv1
[*]         Network interfaces:
[*]            diskhb_01 {up}
[*]               device: /dev/hdisk5
[*]               on network: net_diskhb_01 {up}
[*]            dbserv1 {up}
[*]               with IP address: 172.16.255.11
[*]               on interface: en0
[*]               on network: net_ether_02 {up}
[*]            dbserv1-stby {up}
[*]               with IP address: 192.168.0.11
[*]               on interface: en1
[*]               on network: net_ether_03 {up}
[*]      dbserv2
[*]         Network interfaces:
[*]            diskhb_02 {up}
[*]               device: /dev/hdisk5
[*]               on network: net_diskhb_01 {up}
[*]            dbserv2 {up}
[*]               with IP address: 172.16.255.13
[*]               on interface: en0
[*]               on network: net_ether_02 {up}
[*]            dbserv2-stby {up}
[*]               with IP address: 192.168.0.13
[*]               on interface: en1
[*]               on network: net_ether_03 {up}
  

  Added on 2012/6/11:
  Before you start the PowerHA service, you must execute the following steps on both nodes; then you can run the clstat command.
  


[*]//On dbserv1
[*]# snmpv3_ssw -1
[*]Stop daemon: snmpmibd
[*]In /etc/rc.tcpip file, comment out the line that contains: snmpmibd
[*]In /etc/rc.tcpip file, remove the comment from the line that contains: dpid2
[*]Stop daemon: snmpd
[*]Make the symbolic link from /usr/sbin/snmpd to /usr/sbin/snmpdv1
[*]Make the symbolic link from /usr/sbin/clsnmp to /usr/sbin/clsnmpne
[*]Start daemon: dpid2
[*]Start daemon: snmpd
[*]
[*]//On dbserv2
[*]# snmpv3_ssw -1
[*]Stop daemon: snmpmibd
[*]In /etc/rc.tcpip file, comment out the line that contains: snmpmibd
[*]In /etc/rc.tcpip file, remove the comment from the line that contains: dpid2
[*]Stop daemon: snmpd
[*]Make the symbolic link from /usr/sbin/snmpd to /usr/sbin/snmpdv1
[*]Make the symbolic link from /usr/sbin/clsnmp to /usr/sbin/clsnmpne
[*]Start daemon: dpid2
[*]Start daemon: snmpd
  
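  With the SNMP version switched on both nodes, clstat should now display the cluster; a quick check (ASCII mode assumed):

[*]#/usr/es/sbin/cluster/clstat -a
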


