千家信息网

root.sh Fails on the First Node for 11gR2 Grid Infrastructure Installation

Published: 2025-01-21 · By: 千家信息网 editor

Running the script on the second node produced the errors below, and applying the official fix did not help. Because the test environment used virtual machines with a virtual shared disk, the root cause eventually turned out to be a problem with the shared disk itself; replacing the shared disk resolved the issue.
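When replacing the shared disk (or reusing one), any stale ASM disk header left from a previous attempt should be cleared before rerunning root.sh, or the diskgroup creation can fail again. A minimal, deliberately dry-run sketch; the device path /dev/sdb is an assumption and must be replaced with the real shared-disk device:

```shell
#!/bin/sh
# Hypothetical device path -- replace with the actual shared-disk device.
DISK=/dev/sdb

wipe_asm_header() {
    # Zero the first 100 MB of the disk to clear any stale ASM header.
    # Destructive! DO_IT=1 must be set explicitly; by default it only
    # prints the command it would run.
    if [ "${DO_IT:-0}" = "1" ]; then
        dd if=/dev/zero of="$DISK" bs=1M count=100
    else
        echo "would run: dd if=/dev/zero of=$DISK bs=1M count=100"
    fi
}

wipe_asm_header
```

Leaving DO_IT unset makes the script safe to review first; only set DO_IT=1 after triple-checking the device name, since dd destroys whatever is on the disk.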

node2:/u01/app/11.2.0/grid # ./root.sh

Performing root user operation for Oracle 11g


The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/11.2.0/grid


Enter the full pathname of the local bin directory: [/usr/local/bin]:

Copying dbhome to /usr/local/bin ...

Copying oraenv to /usr/local/bin ...

Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

OLR initialization - successful

Adding Clusterware entries to inittab

CRS-2672: Attempting to start 'ora.mdnsd' on 'node2'

CRS-2676: Start of 'ora.mdnsd' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'node2'

CRS-2676: Start of 'ora.gpnpd' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node2'

CRS-2672: Attempting to start 'ora.gipcd' on 'node2'

CRS-2676: Start of 'ora.cssdmonitor' on 'node2' succeeded

CRS-2676: Start of 'ora.gipcd' on 'node2' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'node2'

CRS-2672: Attempting to start 'ora.diskmon' on 'node2'

CRS-2676: Start of 'ora.diskmon' on 'node2' succeeded

CRS-2676: Start of 'ora.cssd' on 'node2' succeeded


Disk Group OCR_VOTE creation failed with the following message:

ORA-15018: diskgroup cannot be created

ORA-15017: diskgroup "OCR_VOTE" cannot be mounted

ORA-15003: diskgroup "OCR_VOTE" already mounted in another lock name space


Configuration of ASM ... failed

see asmca logs at /u01/app/grid/cfgtoollogs/asmca for details

Did not succssfully configure and start ASM at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 6912.

/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed


The following is the official solution from Oracle Support:
root.sh Fails on the First Node for 11gR2 Grid Infrastructure Installation (Doc ID 1191783.1)

APPLIES TO:

Oracle Database - Enterprise Edition - Version 11.2.0.1 and later
Information in this document applies to any platform.

SYMPTOMS

On a multi-node cluster, installing 11gR2 Grid Infrastructure for the first time, root.sh fails on the first node.

/cfgtoollogs/crsconfig/rootcrs_.log shows:

2010-07-24 23:29:36: Configuring ASM via ASMCA
2010-07-24 23:29:36: Executing as oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:36: Running as user oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:36: Invoking "/opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTE_DISK2,ORCL:VOTE_DISK3 -redundancy NORMAL -configureLocalASM" as user "oracle"
2010-07-24 23:29:53:Configuration failed, see logfile for details


$ORACLE_BASE/cfgtoollogs/asmca/asmca-.log shows error:

ORA-15018 diskgroup cannot be created
ORA-15017 diskgroup OCR_VOTING_DG cannot be mounted
ORA-15003 diskgroup OCR_VOTING_DG already mounted in another lock name space


This is a new installation, the disks used by ASM are not shared on any other cluster system.

CHANGES

New installation.

CAUSE

The problem is caused by running root.sh simultaneously on the first node and the remaining node(s), rather than completing root.sh on the first node before running it on the remaining node(s).

On node 2, /cfgtoollogs/crsconfig/rootcrs_.log shows approximately the same time stamps:

2010-07-24 23:29:39: Configuring ASM via ASMCA
2010-07-24 23:29:39: Executing as oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:39: Running as user oracle: /opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTING_DISK2,ORCL:VOTING_DISK3 -redundancy NORMAL -configureLocalASM
2010-07-24 23:29:39: Invoking "/opt/grid/bin/asmca -silent -diskGroupName OCR -diskList ORCL:VOTING_DISK1,ORCL:VOTE_DISK2,ORCL:VOTE_DISK3 -redundancy NORMAL -configureLocalASM" as user "oracle"
2010-07-24 23:29:55:Configuration failed, see logfile for details


Its content is nearly identical; the only difference is that it started 3 seconds later than on the first node. This indicates root.sh was running simultaneously on both nodes.

root.sh on the 2nd node also created a +ASM1 instance (it, too, behaved as though it were the first node to run root.sh) and mounted the same diskgroup, which led to the +ASM1 instance on node 1 reporting:
ORA-15003 diskgroup OCR_VOTING_DG already mounted in another lock name space

SOLUTION

1. Deconfigure the Grid Infrastructure without removing the binaries; refer to Document 942166.1, How to Proceed from Failed 11gR2 Grid Infrastructure (CRS) Installation. For the two-node case:

As root, run "$GRID_HOME/crs/install/rootcrs.pl -deconfig -force -verbose" on node 1,
As root, run "$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode" on node 2.
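The ordering of these two commands matters: every node except the last runs a plain -deconfig, and only the last node adds -lastnode so that the shared OCR/voting data is cleaned up as well. A dry-run sketch of that ordering, assuming passwordless root ssh and the hostnames node1/node2 (both assumptions):

```shell
#!/bin/sh
set -e
GRID_HOME=/u01/app/11.2.0/grid

# DRY_RUN=1 (the default here) only prints the commands it would run.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

# As root: deconfig node 1 first ...
run ssh root@node1 "$GRID_HOME/crs/install/rootcrs.pl -deconfig -force -verbose"
# ... then node 2 last, with -lastnode so OCR/voting data is cleaned up too.
run ssh root@node2 "$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode"
```

Unset DRY_RUN only after verifying the node names and GRID_HOME match your environment.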


2. Rerun root.sh on the first node first, only proceed with the remaining node(s) after root.sh completes on the first node.
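The serialization requirement can be enforced mechanically rather than by hand: run root.sh over each node in turn and abort if any run fails, so a later node never starts while an earlier one is still configuring. A dry-run sketch, where the hostnames and passwordless root ssh are assumptions:

```shell
#!/bin/sh
set -e
GRID_HOME=/u01/app/11.2.0/grid

# DRY_RUN=1 (the default here) only prints the commands it would run.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

# Serial loop + 'set -e': root.sh must complete successfully on node1
# before it is ever started on node2.
for node in node1 node2; do
    run ssh root@"$node" "$GRID_HOME/root.sh"
done
```

This is just the ordering rule from step 2 made explicit; in an interactive session the same effect is achieved by simply waiting for root.sh to finish on node 1 before logging in to node 2.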

REFERENCES

NOTE:1050908.1 - Troubleshoot Grid Infrastructure Startup Issues

NOTE:942166.1 - How to Proceed from Failed 11gR2 Grid Infrastructure (CRS) Installation

