How to Delete a Node from Oracle 11G RAC
Published: 2025-01-24 | Author: Qianjia Information Network editors
This article walks through how to delete a node from an Oracle 11G RAC cluster. It should be a useful reference; interested readers can follow along step by step.
Software and hardware environment
Database version: 11.2.0.4 RAC, two nodes
Virtualized OS: Linux 6.5
Before any significant operation on CRS-level data structures, always back up the OCR first.
[root@vastdata4 ~]# ocrconfig -manualbackup
vastdata4 2019/02/25 00:04:20 /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_000420.ocr
vastdata4 2019/02/25 00:00:08 /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_000008.ocr
Deleting a node breaks down into three parts: delete the instance, remove the database (DB) software, and remove the Grid Infrastructure (GI) software.
I. Delete the Instance
1. Shut down the instance targeted for deletion
[root@vastdata4 ~]# srvctl status database -d PROD -help
Displays the current state of the database.
Usage: srvctl status database -d <db_unique_name> [-f] [-v]
    -d <db_unique_name>      Unique name for the database
    -f                       Include disabled applications
    -v                       Verbose output
    -h                       Print usage

[root@vastdata4 ~]# srvctl status database -d PROD -f
Instance PROD1 is running on node vastdata3
Instance PROD2 is running on node vastdata4

[root@vastdata4 ~]# srvctl stop instance -d PROD -n vastdata3

[root@vastdata4 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.prod.db
      1        OFFLINE OFFLINE                               Instance Shutdown
      2        ONLINE  ONLINE       vastdata4                Open

[root@vastdata4 ~]# srvctl status database -d PROD -f
Instance PROD1 is not running on node vastdata3
Instance PROD2 is running on node vastdata4
2. Delete the instance
[oracle@vastdata4 ~]$ dbca -silent -deleteInstance -nodeList vastdata3.us.oracle.com -gdbName PROD -instanceName PROD1 -sysDBAUserName sys -sysDBAPassword oracle
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/PROD.log" for further details.
3. Check again
[root@vastdata4 ~]# srvctl status database -d PROD -f
Instance PROD2 is running on node vastdata4
II. Remove the DB Software
1. Update the inventory on the node being removed
[oracle@vastdata3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=vastdata3" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 6143 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
2. Deinstall the DB software
[oracle@vastdata3 ~]$ $ORACLE_HOME/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
The following nodes are part of this cluster: vastdata3
Checking for sufficient temp space availability on node(s) : 'vastdata3'

## [END] Install check configuration ##

Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2019-02-25_12-27-57-AM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2019-02-25_12-27-59-AM.log

Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2019-02-25_12-28-02-AM.log

Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check2332.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:vastdata3
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'vastdata3', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/db_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-02-25_12-27-50-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-02-25_12-27-50-AM.err'

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2019-02-25_12-28-02-AM.log

Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2019-02-25_12-29-25-AM.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2019-02-25_12-29-25-AM.log

De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean2332.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/11.2.0/db_1' on the local node : Done

Failed to delete the directory '/u01/app/oracle'. The directory is in use.
Delete directory '/u01/app/oracle' on the local node : Failed <<<<

Oracle Universal Installer cleanup completed with errors.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2019-02-25_00-27-29AM' on node 'vastdata3'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/11.2.0/db_1' on the local node.
Failed to delete directory '/u01/app/oracle' on the local node.
Oracle Universal Installer cleanup completed with errors.

Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
3. Update the inventory on the remaining node
[oracle@vastdata4 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=vastdata4" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 6143 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
4. If the deinstall did not remove everything, delete the leftovers manually:
[oracle@vastdata3 ~]$ rm -rf $ORACLE_HOME/*
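A word of caution about the `rm -rf $ORACLE_HOME/*` above: if `ORACLE_HOME` happens to be unset, it expands to `/*`. As a sketch (the helper name `safe_rm_home` and the `/u01` prefix check are our assumptions, not part of Oracle's tooling), a small guard can dry-run the deletion and refuse obviously dangerous values:

```shell
# Hypothetical guard: refuse to wipe a home path that is empty, "/",
# or outside /u01 (adjust the prefix to your own layout). It only
# prints what it would do; swap in the real rm once the dry run
# shows the path you expect.
safe_rm_home() {
  home="$1"
  case "$home" in
    ""|/)    echo "refusing: ORACLE_HOME is empty or /" >&2; return 1 ;;
    /u01/*)  echo "would run: rm -rf $home/*" ;;   # dry run only
    *)       echo "refusing: $home is outside /u01" >&2; return 1 ;;
  esac
}
```

Only replace the `echo` with the destructive command after verifying the printed path.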
III. Remove the GI Software
1. Check the status of the node to be removed
[grid@vastdata4 ~]$ olsnodes -s -n -t
vastdata3 1 Active Unpinned
vastdata4 2 Active Unpinned
2. If the node is pinned, unpin it (here vastdata3 already shows Unpinned, so this is a precaution)
[root@vastdata4 ~]# crsctl unpin css -n vastdata3
CRS-4667: Node vastdata3 successfully unpinned.
3. Stop the HAS (High Availability Services) stack on the node being removed
[root@vastdata3 ~]# export ORACLE_HOME=/u01/app/11.2.0/grid
[root@vastdata3 ~]# cd $ORACLE_HOME/crs/install
[root@vastdata3 install]# perl rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.0.0/255.255.255.0/eth0, type static
VIP exists: /vastdata3-vip/192.168.0.22/192.168.0.0/255.255.255.0/eth0, hosting node vastdata3
VIP exists: /vastdata4-vip/192.168.0.23/192.168.0.0/255.255.255.0/eth0, hosting node vastdata4
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'vastdata3'
CRS-2677: Stop of 'ora.registry.acfs' on 'vastdata3' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'vastdata3'
CRS-2673: Attempting to stop 'ora.crsd' on 'vastdata3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'vastdata3'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'vastdata3'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'vastdata3'
CRS-2677: Stop of 'ora.FRA.dg' on 'vastdata3' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'vastdata3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'vastdata3'
CRS-2677: Stop of 'ora.asm' on 'vastdata3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'vastdata3' has completed
CRS-2677: Stop of 'ora.crsd' on 'vastdata3' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'vastdata3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'vastdata3'
CRS-2673: Attempting to stop 'ora.crf' on 'vastdata3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'vastdata3'
CRS-2673: Attempting to stop 'ora.evmd' on 'vastdata3'
CRS-2673: Attempting to stop 'ora.asm' on 'vastdata3'
CRS-2677: Stop of 'ora.mdnsd' on 'vastdata3' succeeded
CRS-2677: Stop of 'ora.crf' on 'vastdata3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'vastdata3' succeeded
CRS-2677: Stop of 'ora.asm' on 'vastdata3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'vastdata3'
CRS-2677: Stop of 'ora.ctssd' on 'vastdata3' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'vastdata3' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'vastdata3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'vastdata3'
CRS-2677: Stop of 'ora.cssd' on 'vastdata3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'vastdata3'
CRS-2677: Stop of 'ora.gipcd' on 'vastdata3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'vastdata3'
CRS-2677: Stop of 'ora.gpnpd' on 'vastdata3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'vastdata3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node
4. Check the cluster resource status
[root@vastdata4 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       vastdata4
ora.FRA.dg
               ONLINE  ONLINE       vastdata4
ora.LISTENER.lsnr
               ONLINE  ONLINE       vastdata4
ora.asm
               ONLINE  ONLINE       vastdata4                Started
ora.gsd
               OFFLINE OFFLINE      vastdata4
ora.net1.network
               ONLINE  ONLINE       vastdata4
ora.ons
               ONLINE  ONLINE       vastdata4
ora.registry.acfs
               ONLINE  ONLINE       vastdata4
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       vastdata4
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       vastdata4
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       vastdata4
ora.cvu
      1        ONLINE  ONLINE       vastdata4
ora.oc4j
      1        ONLINE  ONLINE       vastdata4
ora.prod.db
      2        ONLINE  ONLINE       vastdata4                Open
ora.scan1.vip
      1        ONLINE  ONLINE       vastdata4
ora.scan2.vip
      1        ONLINE  ONLINE       vastdata4
ora.scan3.vip
      1        ONLINE  ONLINE       vastdata4
ora.vastdata4.vip
      1        ONLINE  ONLINE       vastdata4
5. Check the status of all cluster nodes
[root@vastdata4 ~]# olsnodes -s -n -t
vastdata3 1 Inactive Unpinned
vastdata4 2 Active Unpinned
6. Update the inventory on the node being removed
[grid@vastdata3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=vastdata3" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 6143 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
7. Deinstall the GI software
[grid@vastdata3 ~]$ $ORACLE_HOME/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2019-02-25_00-43-06AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: vastdata3
Checking for sufficient temp space availability on node(s) : 'vastdata3'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2019-02-25_00-43-06AM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "vastdata3"[vastdata3-vip] >
The following information can be collected by running "/sbin/ifconfig -a" on node "vastdata3"
Enter the IP netmask of Virtual IP "192.168.0.22" on node "vastdata3"[255.255.255.0] >
Enter the network interface name on which the virtual IP address "192.168.0.22" is active >
Enter an address or the name of the virtual IP[] >

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/netdc_check2019-02-25_12-44-14-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/asmcadc_check2019-02-25_12-44-19-AM.log

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:vastdata3
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'vastdata3', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2019-02-25_00-43-06AM/logs/deinstall_deconfig2019-02-25_12-43-22-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2019-02-25_00-43-06AM/logs/deinstall_deconfig2019-02-25_12-43-22-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/asmcadc_clean2019-02-25_12-44-27-AM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/netdc_clean2019-02-25_12-44-27-AM.log

De-configuring RAC listener(s): LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1

De-configuring listener: LISTENER
    Stopping listener on node "vastdata3": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN3
    Stopping listener on node "vastdata3": LISTENER_SCAN3
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN2
    Stopping listener on node "vastdata3": LISTENER_SCAN2
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN1
    Stopping listener on node "vastdata3": LISTENER_SCAN1
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "vastdata3".

/tmp/deinstall2019-02-25_00-43-06AM/perl/bin/perl -I/tmp/deinstall2019-02-25_00-43-06AM/perl/lib -I/tmp/deinstall2019-02-25_00-43-06AM/crs/install /tmp/deinstall2019-02-25_00-43-06AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2019-02-25_00-43-06AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------

[root@vastdata3 ~]# /tmp/deinstall2019-02-25_00-43-06AM/perl/bin/perl -I/tmp/deinstall2019-02-25_00-43-06AM/perl/lib -I/tmp/deinstall2019-02-25_00-43-06AM/crs/install /tmp/deinstall2019-02-25_00-43-06AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2019-02-25_00-43-06AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2019-02-25_00-43-06AM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly     #
# cleanup the processes started by Oracle clusterware          #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node

Remove the directory: /tmp/deinstall2019-02-25_00-43-06AM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Delete directory '/u01/app/grid' on the local node : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2019-02-25_00-43-06AM' on node 'vastdata3'

## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
Oracle Clusterware is stopped and successfully de-configured on node "vastdata3"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Oracle Universal Installer cleanup was successful.

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'vastdata3' at the end of the session.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'vastdata3' at the end of the session.
Run 'rm -rf /etc/oratab' as root on node(s) 'vastdata3' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.

#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
8. Update the inventory on the remaining node
[grid@vastdata4 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=vastdata4" CRS=TRUE -silent
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 6143 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
9. Check the status of all cluster nodes
[grid@vastdata4 ~]$ olsnodes -s
vastdata3 Inactive
vastdata4 Active
10. If the deinstall did not clean up everything, remove the leftovers manually with the following commands
ps -ef |grep ora |awk '{print $2}' |xargs kill -9
ps -ef |grep grid |awk '{print $2}' |xargs kill -9
ps -ef |grep asm |awk '{print $2}' |xargs kill -9
ps -ef |grep storage |awk '{print $2}' |xargs kill -9
ps -ef |grep ohasd |awk '{print $2}' |xargs kill -9

ps -ef |grep grid
ps -ef |grep ora
ps -ef |grep asm

export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
cd $ORACLE_HOME
rm -rf *
cd $ORACLE_BASE
rm -rf *

rm -rf /etc/rc5.d/S96ohasd
rm -rf /etc/rc3.d/S96ohasd
rm -rf /rc.d/init.d/ohasd
rm -rf /etc/oracle
rm -rf /etc/ora*
rm -rf /etc/oratab
rm -rf /etc/oraInst.loc
rm -rf /opt/ORCLfmap/
rm -rf /taryartar/12c/oraInventory
rm -rf /usr/local/bin/dbhome
rm -rf /usr/local/bin/oraenv
rm -rf /usr/local/bin/coraenv
rm -rf /tmp/*
rm -rf /var/tmp/.oracle
rm -rf /var/tmp
rm -rf /home/grid/*
rm -rf /home/oracle/*
rm -rf /etc/init/oracle*
rm -rf /etc/init.d/ora
rm -rf /tmp/.*
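One caveat about the `ps -ef |grep ... |awk |xargs kill -9` pipelines above: the pattern also matches the `grep` process itself and any unrelated command line containing the substring (for instance, a user named `dora` would match `ora`). As a sketch of a slightly safer approach (the helper name `filter_pids` is ours, not a standard tool), separate the text filtering from the killing so you can inspect the PID list first:

```shell
# filter_pids: read `ps -eo pid,args` style output on stdin and print
# the PIDs whose command line contains the given substring. The header
# line ("PID COMMAND") is skipped because its first field is not numeric.
# Pure text processing, so you can dry-run it before piping into kill.
filter_pids() {
  awk -v pat="$1" '$1 ~ /^[0-9]+$/ && index($0, pat) { print $1 }'
}

# Inspect first, then kill:
#   ps -eo pid,args | filter_pids ohasd
#   ps -eo pid,args | filter_pids ohasd | xargs -r kill -9
```

`xargs -r` avoids running `kill` at all when no PIDs match, unlike the original pipeline.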
11. Delete the node from the cluster
[root@vastdata4 ~]# /u01/app/11.2.0/grid/bin/crsctl delete node -n vastdata3
CRS-4661: Node vastdata3 successfully deleted.
12. Check the status of all cluster nodes
[root@vastdata4 ~]# olsnodes -s
vastdata4 Active
13. Verify that the node removal succeeded
This step is very important: it determines whether you will later be able to add a node back into the cluster smoothly.
[grid@vastdata4 ~]$ cluvfy stage -post nodedel -n vastdata3 -verbose

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "vastdata4"

CRS integrity check passed
Result: Node removal check passed

Post-check for node removal was successful.
14. Back up the OCR again
[root@vastdata4 ~]# ocrconfig -manualbackup
vastdata4 2019/02/25 00:53:53 /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_005353.ocr
vastdata4 2019/02/25 00:04:20 /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_000420.ocr
vastdata4 2019/02/25 00:00:08 /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_000008.ocr
Thank you for reading. We hope this walkthrough of "How to Delete a Node from Oracle 11G RAC" has been helpful.