
[Experience Sharing] Building Oracle HA With PowerHA 6.1 On AIX 6.1 - candon123


Posted on 2018-9-21 13:47:21
  In this article, you will learn how to configure IBM PowerHA on AIX. My environment is listed in the following worksheet:

[Screenshot: environment worksheet]

  The en0 interface carries only the boot IP, and en1 carries only the standby IP.
  I. Requirements
  1.Append the following lines to /etc/hosts on all nodes.
  


  • #For Boot IP
  • 172.16.255.11   dbserv1
  • 172.16.255.13   dbserv2
  • #For Standby IP
  • 192.168.0.11    dbserv1-stby
  • 192.168.0.13    dbserv2-stby
  • #For Service IP
  • 172.16.255.15   dbserv1-serv
  • 172.16.255.17   dbserv2-serv
  • #For Persistent IP
  • 192.168.2.11    dbserv1-pers
  • 192.168.2.13    dbserv2-pers
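  Once /etc/hosts is populated, it is worth confirming that every label used later in the configuration is actually present. A minimal sketch; the sample file below stands in for /etc/hosts on a node, and the file name is illustrative:

```shell
#!/bin/sh
# Write a local copy of the step-1 entries (on a real node you would
# point HOSTS at /etc/hosts instead of generating a sample).
HOSTS=./hosts.sample
cat > "$HOSTS" <<'EOF'
172.16.255.11   dbserv1
172.16.255.13   dbserv2
192.168.0.11    dbserv1-stby
192.168.0.13    dbserv2-stby
172.16.255.15   dbserv1-serv
172.16.255.17   dbserv2-serv
192.168.2.11    dbserv1-pers
192.168.2.13    dbserv2-pers
EOF
missing=0
for h in dbserv1 dbserv2 dbserv1-stby dbserv2-stby \
         dbserv1-serv dbserv2-serv dbserv1-pers dbserv2-pers; do
  # Check that each cluster label appears somewhere in the file.
  grep -qw "$h" "$HOSTS" || { echo "missing: $h"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all cluster labels present"
```

  Run the same loop on each node; any "missing:" line means a label was left out and HACMP verification will complain later.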
  

  2.Ensure that the following AIX filesets are installed:
  


  • [root@dbserv1 /]#lslpp -l bos.data bos.adt.lib bos.adt.libm bos.adt.syscalls bos.net.tcp.client bos.net.tcp.server bos.rte.SRC bos.rte.libc bos.rte.libcfg bos.rte.libpthreads bos.rte.odm bos.rte.lvm bos.clvm.enh bos.adt.base bos.perf.perfstat bos.perf.libperfstat bos.perf.proctools rsct.basic.rte rsct.compat.clients.rte xlC.aix61.rte
  

  3.Install PowerHA on all nodes:
  


  • [root@dbserv1 /]#loopmount -i powerHA_v6.1.iso -o "-V cdrfs -o ro" -m /mnt
  • [root@dbserv1 /]#installp -a -d /mnt all
  

  After installation, bring PowerHA up to date with the latest fixes and reboot all nodes.
  4.Append the boot IPs and standby IPs to /usr/es/sbin/cluster/etc/rhosts:
  


  • [root@dbserv1 etc]#cat rhosts
  • 172.16.255.11
  • 172.16.255.13
  • 192.168.0.11
  • 192.168.0.13
  • [root@dbserv2 etc]#cat rhosts
  • 172.16.255.11
  • 172.16.255.13
  • 192.168.0.11
  • 192.168.0.13
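  The rhosts file here uses bare IP addresses, one per line. A quick format check that every line really is one (an illustrative sketch; the sample file stands in for /usr/es/sbin/cluster/etc/rhosts on a node):

```shell
#!/bin/sh
# Sample rhosts contents from step 4 (on a node, point RHOSTS at
# /usr/es/sbin/cluster/etc/rhosts).
RHOSTS=./rhosts.sample
cat > "$RHOSTS" <<'EOF'
172.16.255.11
172.16.255.13
192.168.0.11
192.168.0.13
EOF
# Count lines that are NOT a bare dotted-quad IPv4 address.
bad=$(grep -cvE '^([0-9]{1,3}\.){3}[0-9]{1,3}$' "$RHOSTS")
echo "malformed lines: $bad"
```

  A non-zero count usually points at stray comments or hostnames that clcomd may not accept the way you expect.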
  

  5.Edit the /usr/es/sbin/cluster/netmon.cf file. Append each node's boot IP and standby IP to it on that node.
  


  • [root@dbserv1 cluster]#cat netmon.cf
  • 172.16.255.11
  • 192.168.0.11
  • [root@dbserv2 cluster]#cat netmon.cf
  • 172.16.255.13
  • 192.168.0.13
  

  6.Create a disk heartbeat:
  


  • //Create heartvg on dbserv1
  • [root@dbserv1 /]#mkvg -x -y heartvg -C hdisk5
  • [root@dbserv1 /]#lspv|grep hdisk5
  • hdisk5          000c1acf7ca3bc3b                    heartvg
  • //Import heartvg on dbserv2
  • [root@dbserv2 /]#importvg -y heartvg hdisk5
  • [root@dbserv2 /]#lspv|grep hdisk5
  • hdisk5          000c1acf7ca3bc3b                    heartvg
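  The PVID (second column of lspv) is what confirms both nodes see the same physical disk. A sketch of the comparison, run here over captured output; the two sample lines stand in for lspv output taken on each node:

```shell
#!/bin/sh
# lspv lines as captured from each node (samples from the text above).
node1_line="hdisk5          000c1acf7ca3bc3b                    heartvg"
node2_line="hdisk5          000c1acf7ca3bc3b                    heartvg"
pvid1=$(echo "$node1_line" | awk '{print $2}')
pvid2=$(echo "$node2_line" | awk '{print $2}')
if [ "$pvid1" = "$pvid2" ]; then
  echo "hdisk5 has PVID $pvid1 on both nodes"
else
  echo "PVID mismatch: $pvid1 vs $pvid2" >&2
fi
```

  If the PVIDs differ, the two hdisk5 devices are not the same LUN and the disk heartbeat test below will never hand-shake.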
  

  Test the disk heartbeat:
  


  • //Run the following command on dbserv1
  • [root@dbserv1 /]#/usr/sbin/rsct/bin/dhb_read -p hdisk5 -r
  • DHB CLASSIC MODE
  • First node byte offset: 61440
  • Second node byte offset: 62976
  • Handshaking byte offset: 65024
  •        Test byte offset: 64512

  • Receive Mode:
  • Waiting for response . . .
  • Magic number = 0x87654321
  • Magic number = 0x87654321
  • Magic number = 0x87654321
  • Link operating normally

  • //Run the following command on dbserv2
  • [root@dbserv2 /]#/usr/sbin/rsct/bin/dhb_read -p hdisk5 -t
  • DHB CLASSIC MODE
  • First node byte offset: 61440
  • Second node byte offset: 62976
  • Handshaking byte offset: 65024
  •        Test byte offset: 64512

  • Transmit Mode:
  • Magic number = 0x87654321
  • Detected remote utility in receive mode.  Waiting for response . . .
  • Magic number = 0x87654321
  • Magic number = 0x87654321
  • Link operating normally
  

  7.Create a Shared Volume Group:
  


  • //On dbserv1
  • [root@dbserv1 /]#mkvg -V 48 -y oradata hdisk6 hdisk7
  • 0516-1254 mkvg: Changing the PVID in the ODM.
  • 0516-1254 mkvg: Changing the PVID in the ODM.
  • oradata
  • [root@dbserv1 /]#mklv -y lv02 -t jfs2 oradata 20G
  • lv02
  • [root@dbserv1 /]#crfs -v jfs2 -d /dev/lv02 -m /oradata
  • File system created successfully.
  • 20970676 kilobytes total disk space.
  • New File System size is 41943040
  • [root@dbserv1 /]#chvg -an oradata
  • [root@dbserv1 /]#varyoffvg oradata
  • [root@dbserv1 /]#exportvg oradata

  • //On dbserv2 import oradata volume group
  • [root@dbserv2 /]#importvg -V 48 -y oradata hdisk6
  • oradata
  • [root@dbserv2 /]#lspv
  • hdisk0          000c18cf00094faa                    rootvg          active
  • hdisk1          000c18cf003ca02c                    None
  • hdisk2          000c1acf3e6440c6                    None
  • hdisk3          000c1acf3e645312                    None
  • hdisk4          000c1acf3e6460d9                    None
  • hdisk5          000c1acf7ca3bc3b                    heartvg
  • hdisk6          000c1acf7cb764d9                    oradata         active
  • hdisk7          000c1acf7cb765aa                    oradata         active
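  The crfs output above mixes units: total disk space is reported in 1 KB units, while "New File System size" is in 512-byte blocks. A quick arithmetic check that 41943040 blocks is the requested 20 GB logical volume:

```shell
#!/bin/sh
# "New File System size is 41943040" is in 512-byte blocks.
blocks=41943040
bytes=$((blocks * 512))
gb=$((bytes / 1024 / 1024 / 1024))
echo "filesystem size: $gb GB"
```

  (The "20970676 kilobytes total disk space" figure is slightly below 20 GB because JFS2 metadata is subtracted.)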
  

  8.Prepare the nodes for Oracle with the following steps.
  (1).Verify that the following filesets are installed:
  


  • [root@dbserv2 /]#lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat bos.perf.libperfstat bos.perf.proctools xlC.aix61.rte
  

  (2).Set the following network tunables:
  


  • [root@dbserv1 /]#no -p -o tcp_ephemeral_low=9000
  • [root@dbserv1 /]#no -p -o tcp_ephemeral_high=65500
  • [root@dbserv1 /]#no -p -o udp_ephemeral_low=9000
  • [root@dbserv1 /]#no -p -o udp_ephemeral_high=65500
  • [root@dbserv2 /]#no -p -o tcp_ephemeral_low=9000
  • [root@dbserv2 /]#no -p -o tcp_ephemeral_high=65500
  • [root@dbserv2 /]#no -p -o udp_ephemeral_low=9000
  • [root@dbserv2 /]#no -p -o udp_ephemeral_high=65500
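  These are the ephemeral port ranges Oracle's AIX installation guide asks for (9000-65500 for both TCP and UDP). A sketch of verifying the values by parsing `no -a`-style output; the heredoc below stands in for real command output:

```shell
#!/bin/sh
# Captured `no -a`-style output (sample; on a node you would use
# `no -a | grep ephemeral` instead of this heredoc).
cat > no.out <<'EOF'
tcp_ephemeral_high = 65500
tcp_ephemeral_low = 9000
udp_ephemeral_high = 65500
udp_ephemeral_low = 9000
EOF
ok=1
while read name eq val; do
  case "$name" in
    *_ephemeral_low)  [ "$val" -eq 9000 ]  || ok=0 ;;
    *_ephemeral_high) [ "$val" -eq 65500 ] || ok=0 ;;
  esac
done < no.out
echo "ephemeral range ok: $ok"
```

  Run it on both nodes; the `no -p` commands above persist the values across reboots, so a mismatch here usually means a typo on one node.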
  

  (3).Create oracle user and groups:
  


  • //On dbserv1
  • [root@dbserv1 /]#for id in oinstall dba oper;do mkgroup $id;done
  • [root@dbserv1 /]#mkuser oracle;passwd oracle
  • [root@dbserv1 /]#chuser pgrp=oinstall oracle
  • [root@dbserv1 /]#chuser groups=oinstall,dba,oper oracle
  • [root@dbserv1 /]#chuser fsize=-1 oracle
  • [root@dbserv1 /]#chuser data=-1 oracle
  • //On dbserv2
  • [root@dbserv2 /]#for id in oinstall dba oper;do mkgroup $id;done
  • [root@dbserv2 /]#mkuser oracle;passwd oracle
  • [root@dbserv2 /]#chuser pgrp=oinstall oracle
  • [root@dbserv2 /]#chuser groups=oinstall,dba,oper oracle
  • [root@dbserv2 /]#chuser fsize=-1 oracle
  • [root@dbserv2 /]#chuser data=-1 oracle
  

  (4).Change the maxuproc parameter:
  


  • [root@dbserv1 /]#chdev -l sys0 -a maxuproc=16384
  • sys0 changed
  • [root@dbserv2 /]#chdev -l sys0 -a maxuproc=16384
  • sys0 changed
  

  (5).Create Oracle home:
  


  • [root@dbserv1 /]#mkdir /u01;chown oracle:oinstall /u01;su - oracle
  • [root@dbserv1 /]$vi .profile
  • export ORACLE_SID=example
  • export ORACLE_BASE=/u01/app/oracle
  • export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
  • export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
  • export TNS_ADMIN=$ORACLE_HOME/network/admin
  • export ORA_NLS11=$ORACLE_HOME/nls/data
  • export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$JAVA_HOME/bin
  • export LD_LIBRARY_PATH=$ORACLE_HOME/lib:${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib:/lib:/usr/lib:/usr/local/lib
  • export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
  • export THREADS_FLAG=native
  • [root@dbserv1 /]$source .profile;mkdir -p $ORACLE_HOME

  • [root@dbserv2 /]#mkdir /u01;chown oracle:oinstall /u01;su - oracle
  • [root@dbserv2 /]$vi .profile
  • export ORACLE_SID=example
  • export ORACLE_BASE=/u01/app/oracle
  • export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
  • export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
  • export TNS_ADMIN=$ORACLE_HOME/network/admin
  • export ORA_NLS11=$ORACLE_HOME/nls/data
  • export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$JAVA_HOME/bin
  • export LD_LIBRARY_PATH=$ORACLE_HOME/lib:${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib:/lib:/usr/lib:/usr/local/lib
  • export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
  • export THREADS_FLAG=native
  • [root@dbserv2 /]$source .profile;mkdir -p $ORACLE_HOME
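  The two .profile files are identical; the value everything else hangs off is ORACLE_HOME, which the installer and the later /etc/dbstart script both rely on. A minimal reproduction of the derivation (paths only, no Oracle software needed to run this):

```shell
#!/bin/sh
# Reproduce the ORACLE_HOME derivation from the .profile above.
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export PATH=$PATH:$ORACLE_HOME/bin
echo "$ORACLE_HOME"
case ":$PATH:" in
  *":$ORACLE_HOME/bin:"*) echo "PATH includes ORACLE_HOME/bin" ;;
esac
```

  If `mkdir -p $ORACLE_HOME` creates an unexpected directory, the mistake is almost always in ORACLE_BASE.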
  

  (6).Ensure that the /tmp filesystem has enough space:
  


  • [root@dbserv1 /]#chfs -a size=+1G /tmp
  

  II. Create a cluster:
  1.Add a cluster:
  


  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddclstr -n hatest
  • Current cluster configuration:

  • Cluster Name: hatest
  • Cluster Connection Authentication Mode: Standard
  • Cluster Message Authentication Mode: None
  • Cluster Message Encryption: None
  • Use Persistent Labels for Communication: No
  • There are 0 node(s) and 0 network(s) defined

  • No resource groups defined
  

  2.Add nodes to cluster:
  


  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/clnodename -a dbserv1  -p dbserv1
  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/clnodename -a dbserv2  -p dbserv2
  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/clnodename
  • dbserv1
  • dbserv2
  

  3. Configure HACMP diskhb network:
  


  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/clmodnetwork -a -l no -n net_diskhb_01 -i diskhb
  • [root@dbserv1 /]#/usr/es/sbin/cluster/cspoc/cl_ls2ndhbnets
  • Network Name   Node and Disk List
  • ============   ==================      ==================
  • net_diskhb_01
  

  4.Configure HACMP Communication devices:
  


  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a diskhb_dbserv1:diskhb:net_diskhb_01:serial:service:/dev/hdisk5 -n dbserv1
  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a diskhb_dbserv2:diskhb:net_diskhb_01:serial:service:/dev/hdisk5 -n dbserv2
  • [root@dbserv1 /]#/usr/es/sbin/cluster/cspoc/cl_ls2ndhbnets
  • Network Name   Node and Disk List
  • ============   ==================      ==================
  • net_diskhb_01  dbserv1:/dev/hdisk5     dbserv2:/dev/hdisk5
  

  Test the diskhb network:
  


  • [root@dbserv1 /]#/usr/es/sbin/cluster/sbin/cl_tst_2ndhbnet -cspoc -n'dbserv1,dbserv2' '/dev/hdisk5' 'dbserv1' '/dev/hdisk5' 'dbserv2'
  • cl_tst_2ndhbnet: Starting the receive side of the test for disk /dev/hdisk5 on node dbserv1
  • cl_tst_2ndhbnet: Starting the transmit side of the test for disk /dev/hdisk5 on node dbserv2
  • dbserv1: DHB CLASSIC MODE
  • dbserv1:  First node byte offset: 61440
  • dbserv1: Second node byte offset: 62976
  • dbserv1: Handshaking byte offset: 65024
  • dbserv1:        Test byte offset: 64512
  • dbserv1:
  • dbserv1: Receive Mode:
  • dbserv1: Waiting for response . . .
  • dbserv1: Magic number = 0x87654321
  • dbserv1: Magic number = 0x87654321
  • dbserv1: Magic number = 0x87654321
  • dbserv1: Link operating normally
  • dbserv2: DHB CLASSIC MODE
  • dbserv2:  First node byte offset: 61440
  • dbserv2: Second node byte offset: 62976
  • dbserv2: Handshaking byte offset: 65024
  • dbserv2:        Test byte offset: 64512
  • dbserv2:
  • dbserv2: Transmit Mode:
  • dbserv2: Magic number = 0x87654321
  • dbserv2: Detected remote utility in receive mode.  Waiting for response . . .
  • dbserv2: Magic number = 0x87654321
  • dbserv2: Magic number = 0x87654321
  • dbserv2: Link operating normally
  • cl_tst_2ndhbnet: Test complete
  

  5.Configure HACMP IP-Based Network:
  


  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/clmodnetwork -a -n net_ether_02 -i ether -s 255.255.255.0 -l yes
  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/clmodnetwork -a -n net_ether_03 -i ether -s 255.255.255.0 -l yes
  

  6.Add Communication Interfaces:
  


  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a'dbserv1' :'ether' :'net_ether_02' : : : -n'dbserv1'
  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a'dbserv2' :'ether' :'net_ether_02' : : : -n'dbserv2'
  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a'dbserv1-stby' :'ether' :'net_ether_03' : : : -n'dbserv1'
  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -a'dbserv2-stby' :'ether' :'net_ether_03' : : : -n'dbserv2'
  

  7.Add service IPs:
  


  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -Tservice -B'dbserv1-serv' -w'net_ether_02'
  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/claddnode -Tservice -B'dbserv2-serv' -w'net_ether_02'
  

  8.Add a Resource Group:
  Extended Configuration->Extended Resource Configuration->HACMP Extended Resource Group Configuration->Add a Resource Group
[Screenshot: Add a Resource Group]

  9.Add persistent IPs:
  Extended Configuration->Extended Topology Configuration->Configure HACMP Persistent Node IP Label/Addresses->Add a Persistent Node IP Label/Address
[Screenshot: Add a Persistent Node IP Label/Address (dbserv1)]

[Screenshot: Add a Persistent Node IP Label/Address (dbserv2)]

  10.Verification and Synchronization:
  Extended Configuration->Extended Verification and Synchronization, or use the following command:
  


  • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/cldare -rt -V normal
  

  III. Install the Oracle database:
  1.Run the rootpre.sh script:
  Before installing, you must run rootpre.sh from the Oracle media:
  


  • [root@dbserv1 database]#./rootpre.sh
  • ./rootpre.sh output will be logged in /tmp/rootpre.out_12-05-28.13:38:43

  • Checking if group services should be configured....
  • Group "hagsuser" does not exist.
  • Creating required group for group services: hagsuser
  • Please add your Oracle userid to the group: hagsuser
  • Configuring HACMP group services socket for possible use by Oracle.

  • The group or permissions of the group services socket have changed.

  • Please stop and restart HACMP before trying to use Oracle.


  • [root@dbserv2 database]#./rootpre.sh
  • ./rootpre.sh output will be logged in /tmp/rootpre.out_12-05-28.13:38:11

  • Checking if group services should be configured....
  • Group "hagsuser" does not exist.
  • Creating required group for group services: hagsuser
  • Please add your Oracle userid to the group: hagsuser
  • Configuring HACMP group services socket for possible use by Oracle.

  • The group or permissions of the group services socket have changed.

  • Please stop and restart HACMP before trying to use Oracle.
  

  After the step above, you can install the Oracle database and copy the installed Oracle files to the other node. Make sure that the Oracle listener address is your service IP.
  2.Create start and stop scripts:
  


  • [root@dbserv1 /]#vi /etc/dbstart
  • #!/usr/bin/bash
  • #Define Oracle Home
  • ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
  • #Start Oracle Listener
  • if [ -x $ORACLE_HOME/bin/lsnrctl ]; then
  • su - oracle -c "lsnrctl start"
  • fi
  • #Start Oracle Instance
  • if [ -x $ORACLE_HOME/bin/sqlplus ]; then
  • su - oracle -c "sqlplus / as sysdba <<EOF
  • startup
  • EOF"
  • fi

  The matching stop script (the original post is truncated at this point; the following is a typical reconstruction):

  • [root@dbserv1 /]#vi /etc/dbstop
  • #!/usr/bin/bash
  • #Define Oracle Home
  • ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
  • #Stop Oracle Instance
  • if [ -x $ORACLE_HOME/bin/sqlplus ]; then
  • su - oracle -c "sqlplus / as sysdba <<EOF
  • shutdown immediate
  • EOF"
  • fi
  • #Stop Oracle Listener
  • if [ -x $ORACLE_HOME/bin/lsnrctl ]; then
  • su - oracle -c "lsnrctl stop"
  • fi

    IV. Configure HACMP application resources:
    1.Add an Application Server:
    Extended Configuration->Extended Resource Configuration->Configure HACMP Application Servers->Add an Application Server
  [Screenshot: Add an Application Server]

      2.Create Application Monitor:
      Extended Configuration->Extended Resource Configuration->Configure HACMP Application Servers->Configure HACMP Application Monitoring->Add a Process Application Monitor
  [Screenshot: Add a Process Application Monitor]

    3.Register a resource to the resource group:
      Extended Configuration->Extended Resource Configuration->HACMP Extended Resource Group Configuration->Change/Show Resources and Attributes for a Resource Group
  [Screenshot: Change/Show Resources and Attributes for a Resource Group]


    5.Verification and Synchronization:
    Execute the following command:
      


    • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/cldare -rt -V normal
      

      After the above steps, start the HACMP services and test your configuration.
      6.Display HACMP Configuration:
      


    • [root@dbserv1 /]#/usr/es/sbin/cluster/utilities/cldisp
    • Cluster: hatest
    •    Cluster services: active
    •    State of cluster: up
    •       Substate: stable

    • #############
    • APPLICATIONS
    • #############
    •    Cluster hatest provides the following applications: example
    •       Application: example
    •          example is started by /etc/dbstart
    •          example is stopped by /etc/dbstop
    •          Application monitor of example: example
    •             Monitor name: example
    •                Type: process
    •                Process monitored: tnslsnr
    •                Process owner: oracle
    •                Instance count: 1
    •                Stabilization interval: 60 seconds
    •                Retry count: 3 tries
    •                Restart interval: 198 seconds
    •                Failure action: fallover
    •                Cleanup method: /etc/lsnrClear.sh
    •                Restart method: /etc/lsnrRestart.sh
    •          This application is part of resource group 'oradb'.
    •             Resource group policies:
    •                Startup: on first available node
    •                Fallover: to next priority node in the list
    •                Fallback: never
    •             State of example: online
    •             Nodes configured to provide example: dbserv1 {up}  dbserv2 {up}
    •                Node currently providing example: dbserv1 {up}
    •                The node that will provide example if dbserv1 fails is: dbserv2
    •             Resources associated with example:
    •                Service Labels
    •                   dbserv1-serv(172.16.255.15) {online}
    •                      Interfaces configured to provide dbserv1-serv:
    •                         dbserv1 {up}
    •                            with IP address: 172.16.255.11
    •                            on interface: en0
    •                            on node: dbserv1 {up}
    •                            on network: net_ether_02 {up}
    •                         dbserv2 {up}
    •                            with IP address: 172.16.255.13
    •                            on interface: en0
    •                            on node: dbserv2 {up}
    •                            on network: net_ether_02 {up}
    •                   dbserv2-serv(172.16.255.17) {online}
    •                      Interfaces configured to provide dbserv2-serv:
    •                         dbserv1 {up}
    •                            with IP address: 172.16.255.11
    •                            on interface: en0
    •                            on node: dbserv1 {up}
    •                            on network: net_ether_02 {up}
    •                         dbserv2 {up}
    •                            with IP address: 172.16.255.13
    •                            on interface: en0
    •                            on node: dbserv2 {up}
    •                            on network: net_ether_02 {up}
    •                Shared Volume Groups:
    •                   oradata

    • #############
    • TOPOLOGY
    • #############
    •    hatest consists of the following nodes: dbserv1 dbserv2
    •       dbserv1
    •          Network interfaces:
    •             diskhb_01 {up}
    •                device: /dev/hdisk5
    •                on network: net_diskhb_01 {up}
    •             dbserv1 {up}
    •                with IP address: 172.16.255.11
    •                on interface: en0
    •                on network: net_ether_02 {up}
    •             dbserv1-stby {up}
    •                with IP address: 192.168.0.11
    •                on interface: en1
    •                on network: net_ether_03 {up}
    •       dbserv2
    •          Network interfaces:
    •             diskhb_02 {up}
    •                device: /dev/hdisk5
    •                on network: net_diskhb_01 {up}
    •             dbserv2 {up}
    •                with IP address: 172.16.255.13
    •                on interface: en0
    •                on network: net_ether_02 {up}
    •             dbserv2-stby {up}
    •                with IP address: 192.168.0.13
    •                on interface: en1
    •                on network: net_ether_03 {up}
      

      Appended on 2012/6/11:
      Before you start the PowerHA service, execute the following steps on both nodes; then you can run the clstat command.
      


    • [root@dbserv1 utilities]# snmpv3_ssw -1
    • Stop daemon: snmpmibd
    • In /etc/rc.tcpip file, comment out the line that contains: snmpmibd
    • In /etc/rc.tcpip file, remove the comment from the line that contains: dpid2
    • Stop daemon: snmpd
    • Make the symbolic link from /usr/sbin/snmpd to /usr/sbin/snmpdv1
    • Make the symbolic link from /usr/sbin/clsnmp to /usr/sbin/clsnmpne
    • Start daemon: dpid2
    • Start daemon: snmpd

    • [root@dbserv2 /]# snmpv3_ssw -1
    • Stop daemon: snmpmibd
    • In /etc/rc.tcpip file, comment out the line that contains: snmpmibd
    • In /etc/rc.tcpip file, remove the comment from the line that contains: dpid2
    • Stop daemon: snmpd
    • Make the symbolic link from /usr/sbin/snmpd to /usr/sbin/snmpdv1
    • Make the symbolic link from /usr/sbin/clsnmp to /usr/sbin/clsnmpne
    • Start daemon: dpid2
    • Start daemon: snmpd
      




Source thread: https://www.iyunv.com/thread-599480-1-1.html