
Oracle RAC: Checks and Maintenance

(2017-02-09 08:37:51)
Category: IT
After a long and intricate configuration process, the Oracle database cluster environment has finally been built. Along the way all kinds of baffling technical faults appeared, and each problem was solved by digging through documentation. Over the past month or so the cluster environment was installed no fewer than ten times, so I now know exactly what can happen at every step. The build went smoothly this time because all of the common Oracle errors had been avoided in advance. To close out, here is a brief introduction to the operational commands for Oracle RAC.
1. Check the cluster's running status.
[grid@rac1 ~]$ srvctl status database -d orcl
Instance orcl1 is running on node rac1
Instance orcl2 is not running on node rac2
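Beyond the whole database, srvctl can also report on a single instance or dump the registered configuration. As a minimal sketch of the 11gR2 syntax (not captured from this session), using the instance name orcl1 from above:
srvctl status instance -d orcl -i orcl1
srvctl config database -d orcl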
2. Check CRS status.
1) Check the CRS status of the local node.
[grid@rac1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
2) Check the CRS status of the cluster.
[grid@rac1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
3) Check the CRS status of all cluster nodes.
[grid@rac1 ~]$ crsctl check cluster  -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
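For a one-screen overview of every registered resource rather than just the daemons, 11gR2 crsctl also offers a tabular listing; shown here as a sketch only, with its output omitted:
crsctl stat res -t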
3. View the node configuration information for the cluster.
[grid@rac1 ~]$ olsnodes
rac1
rac2

[grid@rac1 ~]$ olsnodes -n
rac1 1
rac2 2

[grid@rac1 ~]$ olsnodes -n -i -s -t
rac1 1 rac1-vip Active Unpinned
rac2 2 rac2-vip Active Unpinned
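As far as I recall from 11gR2, olsnodes also accepts -l to print only the local node name and -p to include the private interconnect name; treat these as a sketch, since they were not run in this session:
olsnodes -l
olsnodes -n -p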
4. View the Clusterware voting disk information.
[grid@rac1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   79088fcfdf0a4feebf4d67a130730e21 (ORCL:VOL1) [GRIDDG]
Located 1 voting disk(s).
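The voting disks have a companion structure, the OCR, whose usage and integrity can be checked with the ocrcheck utility from the Grid home (the full logical check needs root); listed here without output as a pointer only:
ocrcheck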
5. View the cluster SCAN VIP information.
[grid@rac1 ~]$ srvctl config scan
SCAN name: scan-ip, Network: 1/10.10.5.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /scan-ip/10.10.5.200
6. View the cluster SCAN Listener information.
[grid@rac1 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
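Alongside the configuration, the current state of the SCAN VIP and the SCAN listener can also be queried; these are standard 11gR2 srvctl commands, shown without their output:
srvctl status scan
srvctl status scan_listener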
7. View the interdependencies between component resources.
[grid@rac1 ~]$ crsctl stat res ora.orcl.db -p
NAME=ora.orcl.db
TYPE=ora.database.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
AUTO_START=restore
CARDINALITY=2
CHECK_INTERVAL=1
CHECK_TIMEOUT=600
CLUSTER_DATABASE=true
DB_UNIQUE_NAME=orcl
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%)
DEGREE=1
DESCRIPTION=Oracle Database resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=60
FAILURE_THRESHOLD=1
GEN_AUDIT_FILE_DEST=/mydata/u01/app/oracle/admin/orcl/adump
GEN_USR_ORA_INST_NAME=
GEN_USR_ORA_INST_NAME@SERVERNAME(rac1)=orcl1
GEN_USR_ORA_INST_NAME@SERVERNAME(rac2)=orcl2
HOSTING_MEMBERS=
INSTANCE_FAILOVER=0
LOAD=1
LOGGING_LEVEL=1
MANAGEMENT_POLICY=AUTOMATIC
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
ORACLE_HOME=/mydata/u01/app/oracle/product/11.1.0/db_1
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=2
ROLE=PRIMARY
SCRIPT_TIMEOUT=60
SERVER_POOLS=ora.orcl
SPFILE=+DATA/orcl/spfileorcl.ora
START_DEPENDENCIES=hard(ora.DATA.dg,ora.FRA.dg) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DATA.dg,ora.FRA.dg)
START_TIMEOUT=600
STATE_CHANGE_TEMPLATE=
STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DATA.dg,shutdown:ora.FRA.dg)
STOP_TIMEOUT=600
UPTIME_THRESHOLD=1h
USR_ORA_DB_NAME=orcl
USR_ORA_DOMAIN=
USR_ORA_ENV=
USR_ORA_FLAGS=
USR_ORA_INST_NAME=
USR_ORA_INST_NAME@SERVERNAME(rac1)=orcl1
USR_ORA_INST_NAME@SERVERNAME(rac2)=orcl2
USR_ORA_OPEN_MODE=open
USR_ORA_OPI=false
USR_ORA_STOP_MODE=immediate
VERSION=11.2.0.1.0

[grid@rac1 ~]$ crsctl stat res ora.scan1.vip -p
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
ACL=owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:grid:r-x
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=1
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=scan_vip)
DEGREE=1
DESCRIPTION=Oracle SCAN VIP resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
LOAD=1
LOGGING_LEVEL=1
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=balanced
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=0
SCAN_NAME=scan-ip
SCRIPT_TIMEOUT=60
SERVER_POOLS=*
START_DEPENDENCIES=hard(ora.net1.network) dispersion:active(type:ora.scan_vip.type) pullup(ora.net1.network)
START_TIMEOUT=0
STATE_CHANGE_TEMPLATE=
STOP_DEPENDENCIES=hard(ora.net1.network)
STOP_TIMEOUT=0
UPTIME_THRESHOLD=1h
USR_ORA_ENV=
USR_ORA_VIP=10.10.5.200
VERSION=11.2.0.1.0
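Since this section is really about dependencies, the two attributes of interest can be filtered straight out of the profile with a trivial shell pipeline, for example:
crsctl stat res ora.orcl.db -p | grep DEPENDENCIES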
8. View a specified component resource.
[grid@rac1 ~]$ crs_stat -v ora.scan1.vip
NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
RESTART_ATTEMPTS=0
RESTART_COUNT=0
FAILURE_THRESHOLD=0
FAILURE_COUNT=0
TARGET=ONLINE
STATE=ONLINE on rac1
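Note that crs_stat is deprecated in 11gR2; the same information should be available through crsctl, for example (output not shown):
crsctl stat res ora.scan1.vip -v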
View the RAC processes and the ASM instance processes:
[root@rac1 diag]# ps -U grid -f
UID         PID   PPID  C STIME TTY          TIME CMD
grid       3253      1  0 Feb08 ?        00:03:52 /mydata/u01/app/11.1.0/grid/bin/oraagent.bin
grid       3268      1  0 Feb08 ?        00:00:04 /mydata/u01/app/11.1.0/grid/bin/mdnsd.bin
grid       3279      1  0 Feb08 ?        00:00:03 /mydata/u01/app/11.1.0/grid/bin/gipcd.bin
grid       3290      1  0 Feb08 ?        00:03:07 /mydata/u01/app/11.1.0/grid/bin/gpnpd.bin
grid       3340      1  0 Feb08 ?        00:02:53 /mydata/u01/app/11.1.0/grid/bin/diskmon.bin -d -
grid       3353      1  2 Feb08 ?        00:29:49 /mydata/u01/app/11.1.0/grid/bin/ocssd.bin 
grid       3474      1  0 Feb08 ?        00:00:28 asm_pmon_+ASM1
grid       3476      1  0 Feb08 ?        00:02:08 asm_vktm_+ASM1
grid       3480      1  0 Feb08 ?        00:00:04 asm_gen0_+ASM1
grid       3482      1  0 Feb08 ?        00:01:20 asm_diag_+ASM1
grid       3484      1  0 Feb08 ?        00:00:10 asm_ping_+ASM1
grid       3486      1  0 Feb08 ?        00:00:04 asm_psp0_+ASM1
grid       3488      1  0 Feb08 ?        00:04:29 asm_dia0_+ASM1
grid       3490      1  0 Feb08 ?        00:02:43 asm_lmon_+ASM1
grid       3492      1  0 Feb08 ?        00:01:58 asm_lmd0_+ASM1
grid       3494      1  0 Feb08 ?        00:01:45 asm_lms0_+ASM1
grid       3498      1  0 Feb08 ?        00:00:06 asm_lmhb_+ASM1
grid       3500      1  0 Feb08 ?        00:00:04 asm_mman_+ASM1
grid       3502      1  0 Feb08 ?        00:00:05 asm_dbw0_+ASM1
grid       3504      1  0 Feb08 ?        00:00:05 asm_lgwr_+ASM1
grid       3506      1  0 Feb08 ?        00:00:08 asm_ckpt_+ASM1
grid       3508      1  0 Feb08 ?        00:00:06 asm_smon_+ASM1
grid       3510      1  0 Feb08 ?        00:00:30 asm_rbal_+ASM1
grid       3512      1  0 Feb08 ?        00:00:36 asm_gmon_+ASM1
grid       3514      1  0 Feb08 ?        00:00:11 asm_mmon_+ASM1
grid       3516      1  0 Feb08 ?        00:00:17 asm_mmnl_+ASM1
grid       3518      1  0 Feb08 ?        00:02:26 /mydata/u01/app/11.1.0/grid/bin/oclskd.bin
grid       3525      1  0 Feb08 ?        00:00:13 asm_lck0_+ASM1
grid       3547      1  0 Feb08 ?        00:05:49 /mydata/u01/app/11.1.0/grid/bin/evmd.bin
grid       3554      1  0 Feb08 ?        00:00:03 asm_asmb_+ASM1
grid       3556      1  0 Feb08 ?        00:00:09 oracle+ASM1_asmb_+asm1 (DESCRIPTION=(LOCAL=YES)(
grid       3633   3547  0 Feb08 ?        00:00:02 /mydata/u01/app/11.1.0/grid/bin/evmlogger.bin -o
grid       3763      1  0 Feb08 ?        00:00:00 /mydata/u01/app/11.1.0/grid/opmn/bin/ons -d
grid       3764   3763  0 Feb08 ?        00:00:09 /mydata/u01/app/11.1.0/grid/opmn/bin/ons -d
grid       3820      1  1 Feb08 ?        00:12:23 /mydata/u01/app/11.1.0/grid/jdk/jre//bin/java -D
grid      15072      1  0 Feb08 ?        00:00:00 oracle+ASM1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PR
grid      15074      1  0 Feb08 ?        00:00:00 oracle+ASM1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PR
grid      15075      1  0 Feb08 ?        00:00:00 oracle+ASM1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PR
grid      23075      1  0 Feb08 ?        00:00:06 /mydata/u01/app/11.1.0/grid/bin/tnslsnr LISTENER
grid      25364      1  0 Feb08 ?        00:00:04 /mydata/u01/app/11.1.0/grid/bin/tnslsnr LISTENER
grid      66372  66371  0 06:26 pts/0    00:00:00 -bash
root      69336  66372  0 07:05 pts/0    00:00:00 su - oracle
grid      69587   3498  0 07:10 ?        00:00:00 [oracle]
grid      70155      1  0 07:20 ?        00:00:08 /mydata/u01/app/11.1.0/grid/bin/oraagent.bin
oracle    70580      1  0 07:25 ?        00:00:01 oracleorcl1 (LOCAL=NO)
oracle    70587      1  0 07:25 ?        00:00:01 oracleorcl1 (LOCAL=NO)
oracle    70590      1  0 07:25 ?        00:00:04 oracleorcl1 (LOCAL=NO)
grid      70876  70712  0 07:31 pts/0    00:00:00 -bash
grid      71007      1  0 07:33 ?        00:00:00 oracle+ASM1_ocr (DESCRIPTION=(LOCAL=YES)(ADDRESS
oracle    71481      1  0 07:40 ?        00:00:00 oracleorcl1 (LOCAL=NO)
View the oracle and root processes.
[root@rac1 diag]# ps -U root -f|grep /u01
root       2374      1  0 Feb08 ?        00:00:00 /mydata/u01/app/11.1.0/grid/bin/ohasd.bin reboot
root       2919      1  2 Feb08 ?        00:24:35 /mydata/u01/app/11.1.0/grid/bin/ohasd.bin reboot
root       3305      1  0 Feb08 ?        00:05:15 /mydata/u01/app/11.1.0/grid/bin/cssdmonitor
root       3322      1  0 Feb08 ?        00:07:48 /mydata/u01/app/11.1.0/grid/bin/cssdagent
root       3324      1  0 Feb08 ?        00:00:53 /mydata/u01/app/11.1.0/grid/bin/orarootagent.bin
root      69567      1  0 07:10 ?        00:00:02 /mydata/u01/app/11.1.0/grid/bin/octssd.bin
root      69817      1  1 07:15 ?        00:00:30 /mydata/u01/app/11.1.0/grid/bin/crsd.bin reboot
root      70050      1  0 07:19 ?        00:00:03 /mydata/u01/app/11.1.0/grid/bin/oclskd.bin
root      70163      1  0 07:20 ?        00:00:08 /mydata/u01/app/11.1.0/grid/bin/orarootagent.bin
root      71602  69555  0 07:42 pts/1    00:00:00 grep /u01

[root@rac1 diag]# ps -U oracle -f
UID         PID   PPID  C STIME TTY          TIME CMD
oracle    16411      1  0 Feb08 ?        00:00:25 ora_pmon_orcl1
oracle    16413      1  0 Feb08 ?        00:01:32 ora_vktm_orcl1
oracle    16417      1  0 Feb08 ?        00:00:03 ora_gen0_orcl1
oracle    16419      1  0 Feb08 ?        00:00:56 ora_diag_orcl1
oracle    16421      1  0 Feb08 ?        00:00:03 ora_dbrm_orcl1
oracle    16423      1  0 Feb08 ?        00:00:06 ora_ping_orcl1
oracle    16425      1  0 Feb08 ?        00:00:07 ora_psp0_orcl1
oracle    16427      1  0 Feb08 ?        00:00:02 ora_acms_orcl1
oracle    16429      1  0 Feb08 ?        00:03:52 ora_dia0_orcl1
oracle    16431      1  0 Feb08 ?        00:02:29 ora_lmon_orcl1
oracle    16433      1  0 Feb08 ?        00:01:31 ora_lmd0_orcl1
oracle    16437      1  0 Feb08 ?        00:03:14 ora_lms0_orcl1
oracle    16441      1  0 Feb08 ?        00:00:04 ora_rms0_orcl1
oracle    16443      1  0 Feb08 ?        00:00:25 ora_lmhb_orcl1
oracle    16445      1  0 Feb08 ?        00:00:06 ora_mman_orcl1
oracle    16447      1  0 Feb08 ?        00:00:20 ora_dbw0_orcl1
oracle    16449      1  0 Feb08 ?        00:00:20 ora_lgwr_orcl1
oracle    16451      1  0 Feb08 ?        00:00:35 ora_ckpt_orcl1
oracle    16453      1  0 Feb08 ?        00:00:35 ora_smon_orcl1
oracle    16455      1  0 Feb08 ?        00:00:02 ora_reco_orcl1
oracle    16457      1  0 Feb08 ?        00:00:02 ora_rbal_orcl1
oracle    16459      1  0 Feb08 ?        00:00:02 ora_asmb_orcl1
oracle    16461      1  0 Feb08 ?        00:00:33 ora_mmon_orcl1
grid      16463      1  0 Feb08 ?        00:00:07 oracle+ASM1_asmb_orcl1 (DESCRIPTION=(LOCAL=YES)(
oracle    16465      1  0 Feb08 ?        00:00:32 ora_mmnl_orcl1
oracle    16467      1  0 Feb08 ?        00:00:01 ora_d000_orcl1
oracle    16469      1  0 Feb08 ?        00:00:03 ora_mark_orcl1
oracle    16471      1  0 Feb08 ?        00:00:01 ora_s000_orcl1
oracle    16477      1  0 Feb08 ?        00:01:53 /mydata/u01/app/11.1.0/grid/bin/oclskd.bin
oracle    16480      1  0 Feb08 ?        00:00:56 ora_lck0_orcl1
oracle    16482      1  0 Feb08 ?        00:00:05 ora_rsmn_orcl1
oracle    16501      1  0 Feb08 ?        00:00:02 ora_gtx0_orcl1
oracle    16505      1  0 Feb08 ?        00:00:03 ora_rcbg_orcl1
oracle    16507      1  0 Feb08 ?        00:00:14 ora_qmnc_orcl1
oracle    16517      1  0 Feb08 ?        00:00:04 ora_q001_orcl1
oracle    16561      1  0 Feb08 ?        00:01:24 ora_cjq0_orcl1
oracle    16720      1  0 Feb08 ?        00:00:03 ora_smco_orcl1
oracle    20326      1  0 Feb08 ?        00:00:22 /mydata/u01/app/oracle/product/11.1.0/db_1/perl/
oracle    20370  20326  0 Feb08 ?        00:01:03 /mydata/u01/app/oracle/product/11.1.0/db_1/bin/e
oracle    21083  16561  0 Feb08 ?        00:00:00 [oracle]
oracle    21141  16443  0 Feb08 ?        00:00:00 [sh]
oracle    35343      1  0 Feb08 ?        00:00:02 ora_q003_orcl1
oracle    69394  69336  0 07:07 pts/0    00:00:00 -bash
oracle    69454  16461  0 07:08 ?        00:00:00 [sh]
oracle    70158      1  0 07:20 ?        00:00:06 /mydata/u01/app/11.1.0/grid/bin/oraagent.bin
root      70712  69394  0 07:27 pts/0    00:00:00 su - grid
oracle    70861  16461  0 07:31 ?        00:00:00 [sh]
oracle    70925      1  0 07:31 ?        00:00:05 ora_j000_orcl1
oracle    70958      1  0 07:32 ?        00:00:01 ora_q002_orcl1
oracle    71322      1  0 07:38 ?        00:00:00 ora_q000_orcl1
oracle    71333  20326 10 07:39 ?        00:00:30 /mydata/u01/app/oracle/product/11.1.0/db_1/jdk/b
oracle    71454      1  0 07:40 ?        00:00:00 ora_o000_orcl1
oracle    71468      1  0 07:40 ?        00:00:00 ora_w000_orcl1
oracle    71470      1  0 07:40 ?        00:00:00 ora_q004_orcl1
grid      71472      1  0 07:40 ?        00:00:00 oracle+ASM1_o000_orcl1 (DESCRIPTION=(LOCAL=YES)(
oracle    71707      1  0 07:43 ?        00:00:00 ora_j001_orcl1
oracle    71712  20326  0 07:43 ?        00:00:00 sh -c /mydata/u01/app/oracle/product/11.1.0/db_1
oracle    71714  71712  0 07:43 ?        00:00:00 /mydata/u01/app/oracle/product/11.1.0/db_1/bin/e
oracle    71721      1  1 07:43 ?        00:00:00 ora_j002_orcl1
oracle    71723      1  0 07:43 ?        00:00:00 ora_j003_orcl1
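If the full listings above are too noisy, a quick way to confirm which instances and key clusterware daemons are alive is to grep for their marker processes; a simple sketch:
ps -ef | grep -E 'pmon|crsd.bin|ocssd.bin' | grep -v grep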
That is enough management material for now; let's look at the Oracle login status. Switch to the oracle user.
[grid@rac1 ~]$ su - oracle
Password:
[oracle@rac1 ~]$ sqlplus sys/123456@10.10.5.200:1521/orcl as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Thu Feb 9 07:12:01 2017

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select instance_name, status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
orcl2            OPEN

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
[oracle@rac1 ~]$ 
We can log in normally and retrieve information.
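From any one node, the cluster-wide view gv$instance reports every instance at once; an illustrative query that could be run in the same session:
select inst_id, instance_name, status from gv$instance;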
Query the status of the cluster database instances.
[grid@rac1 ~]$ srvctl status database -d orcl
Instance orcl1 is running on node rac1
Instance orcl2 is not running on node rac2
Start and stop the cluster database.
[grid@rac1 ~]$ srvctl start database -d orcl
[grid@rac1 ~]$ srvctl stop database -d orcl
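srvctl can also act on a single instance, and the stop command accepts a shutdown mode via -o; a sketch of the 11gR2 syntax, not run here:
srvctl start instance -d orcl -i orcl2
srvctl stop instance -d orcl -i orcl2 -o immediate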
That is all I will write for now about building an Oracle RAC cluster environment. Our organization's infrastructure does include IBM minicomputers, servers of various models, IBM fibre-channel storage and similar equipment, but core business systems run on them and they cannot be experimented with casually; the principles are the same in any case. Other material will be organized and polished later.
The end.
