$ sqlplus / as sysdba

alter system set events '10281 trace name context forever, level xxx';
-- where xxx is the number of seconds after which to time out, e.g.:
alter system set events '10281 trace name context forever, level 300';
Memory (Avail / Total) = 221.27M / 32181.19M
Swap (Avail / Total) = 47847.41M / 49151.99M

F S UID        PID  PPID  C PRI NI ADDR SZ      WCHAN  STIME TTY      TIME     CMD
0 S oracle    6755     1  0  80  0 -  3206692  semtim 10:21 ?        00:00:00 ora_w003_xxxxx
*** 2020-07-02 10:23:53.079
Stack:
[Thread debugging using libthread_db enabled]
0x0000003595ceb197 in semop () from /lib64/libc.so.6
#0  0x0000003595ceb197 in semop () from /lib64/libc.so.6
#1  0x0000000009825453 in sskgpwwait ()
#2  0x00000000098241b0 in skgpwwait ()
#3  0x0000000009411825 in kslges ()
#4  0x00000000094111be in kslgetl ()
#5  0x000000000940e60a in ksfglt ()
#6  0x00000000098097e9 in kghalo ()
#7  0x0000000000b3f9f2 in ksp_param_handle_alloc ()
#8  0x0000000000b3eec0 in kspcrec ()
#9  0x0000000000ba8e2c in ksucre ()
#10 0x0000000000bdd2c4 in ksvrdp ()
#11 0x00000000025a57ea in opirip ()
#12 0x0000000001850509 in opidrv ()
#13 0x0000000001e27b37 in sou2o ()
#14 0x0000000000a298d6 in opimai_real ()
#15 0x0000000001e2de55 in ssthrdmain ()
#16 0x0000000000a297cd in main ()
A debugging session is active.
Inferior 1 [process 6755] will be detached.
Quit anyway? (y or n) [answered Y; input not from terminal]
-------------------------------------------------------------------------------
Process diagnostic dump actual duration=8.680000 sec (max dump time=30.000000 sec)
*** 2020-07-02 10:23:53.079
Killing process (ospid 6755): (reason=KSOREQ_WAIT_CANCELLED error=0)
... and the process is still alive after kill!
Data Pump client compatibility
==============================
Data Pump client version (rows) vs. version of the database connected to (columns):

Data Pump   Connecting to database version:
client      10gR1      10gR2      11gR1      11gR2      12cR1      12cR2      18c        19c
version     10.1.0.x   10.2.0.x   11.1.0.x   11.2.0.x   12.1.0.x   12.2.0.x   18.x.y.z   19.x.y.z
----------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
10.1.0.x    supported  supported  supported  supported  no         no         no         no
10.2.0.x    no         supported  supported  supported  supported  no         no         no
11.1.0.x    no         supported  supported  supported  supported  no         no         no
11.2.0.x    no         no         no         supported  supported  supported  supported  supported
12.1.0.x    no         no         no         no         supported  supported  supported  supported
12.2.0.x    no         no         no         no         no         supported  supported  supported
18.x.y.z    no         no         no         no         no         supported  supported  supported
19.x.y.z    no         no         no         no         no         no         no         supported
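A wrapper script can use this matrix to refuse an unsupported expdp/impdp client before a job even starts. A minimal sketch, assuming only three representative rows of the table above are encoded and versions are abbreviated to major.minor keys (extend the case list for the rows you need):

```shell
#!/bin/sh
# Hedged sketch of the client-compatibility matrix above.
# dp_client_supported CLIENT_VERSION DB_VERSION -> prints yes or no.
dp_client_supported() {
  client=$1 db=$2
  case "$client:$db" in
    # 10.1.0.x client: databases 10gR1 through 11gR2
    10.1:10.1|10.1:10.2|10.1:11.1|10.1:11.2) echo yes ;;
    # 11.2.0.x client: databases 11gR2 through 19c
    11.2:11.2|11.2:12.1|11.2:12.2|11.2:18|11.2:19) echo yes ;;
    # 19.x client: 19c only
    19:19) echo yes ;;
    *) echo no ;;
  esac
}

dp_client_supported 11.2 19   # prints "yes"
dp_client_supported 19 12.2   # prints "no"
```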
Data Pump dump file versions
Data Pump file version
======================
Version of   Written by      Can be imported into target database with compatibility:
Dumpfile     Data Pump       10gR1      10gR2      11gR1      11gR2      12cR1      12cR2      18c        19c
Set          release         10.1.0.x   10.2.0.x   11.1.0.x   11.2.0.x   12.1.0.x   12.2.0.x   18.x.y.z   19.x.y.z
------------ --------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
0.1          10.1.x          supported  supported  supported  supported  supported  supported  supported  supported
1.1          10.2.x          no         supported  supported  supported  supported  supported  supported  supported
2.1          11.1.x          no         no         supported  supported  supported  supported  supported  supported
3.1          11.2.x          no         no         no         supported  supported  supported  supported  supported
4.1          12.1.x          no         no         no         no         supported  supported  supported  supported
5.1          12.2.x          no         no         no         no         no         supported  supported  supported
5.1          18.x.y          no         no         no         no         no         no         supported  supported
5.1          19.x.y          no         no         no         no         no         no         no         supported
Version   New Feature                       Remark
--------- --------------------------------- ----------------------------------------------
10.1.0.1  Data Pump Technology              Introduction of high-speed data movement.
---------
10.2.0.1  COMPRESSION                       Compress metadata in export dumpfile.
10.2.0.1  ENCRYPTION_PASSWORD               Allows encrypted column data in dumpfile.
10.2.0.1  SAMPLE                            Specify a percentage of data to be unloaded.
10.2.0.1  TRANSFORM                         Change DDL for OID's and space allocation.
10.2.0.1  VERSION                           Create file compatible with earlier release.
---------
11.1.0.1  COMPRESSION                       Both data and metadata can be compressed.
11.1.0.1  DATA_OPTIONS (expdp)              Specify handling of XMLType columns.
11.1.0.1  DATA_OPTIONS (impdp)              Specify handling of constraint violations.
11.1.0.1  ENCRYPTION                        Both data and metadata can be encrypted.
11.1.0.1  ENCRYPTION_ALGORITHM              Specify a specific encryption algorithm.
11.1.0.1  ENCRYPTION_MODE                   Specify the type of security to be used.
11.1.0.1  PARTITION_OPTIONS                 Specify how to handle partitioned tables.
11.1.0.1  REMAP_DATA                        Change column values based on a function.
11.1.0.1  REMAP_TABLE                       Rename tables during an import operation.
11.1.0.1  REUSE_DUMPFILES                   Option to overwrite existing dumpfiles.
11.1.0.1  TRANSPORTABLE                     Transfer table data by copying datafiles.
---------
11.2.0.1                                    Legacy mode: accept exp and imp parameters.
11.2.0.1  ABORT_STEP                        Stop job after initializing to query master.
11.2.0.1  ACCESS_METHOD                     Specify a particular method to (un)load data.
11.2.0.1  CLUSTER                           Control whether workers use all RAC instances.
11.2.0.1  DATA_OPTIONS (impdp)              Specify to disable the APPEND hint.
11.2.0.1  KEEP_MASTER                       Specify whether to retain master table.
11.2.0.1  MASTER_ONLY                       Only import the master table.
11.2.0.1  METRICS                           Report additional information in logfile.
11.2.0.1  SERVICE_NAME                      Use with CLUSTER to specify a service name.
11.2.0.1  SOURCE_EDITION (expdp)            Specify the edition from which to exp objects.
11.2.0.1  TABLES                            Now specify tables in multiple schemas.
11.2.0.1  TABLES                            Now specify % for multiple tables and part.
11.2.0.1  TARGET_EDITION (impdp)            Specify the edition into which to imp objects.
11.2.0.2                                    Default first segment now 8M for part. table.
11.2.0.2  TRANSFORM                         Specify how to handle SEGMENT CREATION.
---------
12.1.0.1                                    Support for CDB and Pluggable db's (PDB).
12.1.0.1  COMPRESSION_ALGORITHM             Specify the algorithm when compressing data.
12.1.0.1  ENCRYPTION_PWD_PROMPT             Specify whether Data Pump prompts for the password.
12.1.0.1  FULL                              Can now be used together with TRANSPORTABLE.
12.1.0.1  LOGTIME                           Provide timestamp for messages in the logfile.
12.1.0.1  TRANSFORM                         Specify to disable logging during import.
12.1.0.1  TRANSFORM                         Specify to change the LOB storing on import.
12.1.0.1  TRANSFORM                         Specify to change table compression type.
12.1.0.1  VIEWS_AS_TABLES                   Export views as tables.
---------
12.2.0.1  REMAP_DIRECTORY                   Lets you remap directories when you move databases between platforms.
12.2.0.1  TRUST_EXISTING_TABLE_PARTITIONS   Enable data from multiple partitions to be loaded in parallel into a pre-existing table. This is a flag on Data Pump Import DATA_OPTIONS.
12.2.0.1  VALIDATE_TABLE_DATA               Verify the format of number and date data types in table data columns. This is a flag on Data Pump Import DATA_OPTIONS.
12.2.0.1  ENABLE_NETWORK_COMPRESSION        Tell Data Pump to compress data before sending it over the network. This is a flag on the DATA_OPTIONS parameter.
12.2.0.1  GROUP_PARTITION_TABLE_DATA        Enable data for all partitions to be loaded at once and in parallel. This is a flag on the Data Pump Export DATA_OPTIONS parameter.
12.2.0.1  VERIFY_STREAM_FORMAT              Validate the format of a data stream before it is written to the Data Pump dump file. This is a flag on the Data Pump Export DATA_OPTIONS parameter.
---------
18.1.X.X  CONTINUE_LOAD_ON_FORMAT_ERROR     A new value for the impdp DATA_OPTIONS parameter. When it is set, Data Pump skips ahead and continues loading from the next granule when an inconsistency is found.
---------
19c       See Document 2457955.1, which describes the Data Pump new features in 19c.
Introduction
Determine the database compatibility version
col value for a20
col description for a49
select * from database_compatible_level;
VALUE                DESCRIPTION
-------------------- -------------------------------------------------
18.0.0               Database will be completely compatible with this software version

-- or:
sys@> show parameter compatible
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      18.0.0
noncdb_compatible                    boolean     FALSE
Check the database and Data Pump versions
sys@> select banner_full from v$version;
BANNER_FULL
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.5.0.0.0
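Scripts often need the numeric version out of that banner rather than the whole string. A small sketch, with the banner hard-coded from the output above for illustration (in practice you would capture it from `sqlplus -s`):

```shell
#!/bin/sh
# Extract the token that follows the word "Version" from a v$version banner.
banner='Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.5.0.0.0'

version=$(printf '%s\n' "$banner" |
  awk '{for (i = 1; i < NF; i++) if ($i == "Version") print $(i + 1)}')

echo "$version"   # prints 18.5.0.0.0
```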
ORA-00210: cannot open the specified control file
ORA-00202: control file: '+FRA/STBRAC/CONTROLFILE/current.294.1005908473'
ORA-17503: ksfdopn:2 Failed to open file +FRA/STBRAC/CONTROLFILE/current.294.1005908473
ORA-15001: diskgroup "FRA" does not exist or is not mounted
ORA-15040: diskgroup is incomplete
ORA-205 signalled during: alter database mount...
WARNING: failed to open a disk[/dev/asmdiskd]
ORA-15025: could not open disk "/dev/asmdiskd"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Checking the ASM disk groups and related information shows everything is normal:
[grid@stbracnode1 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     51200    16784                0           16784              0  Y             CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     61440    55336                0           55336              0  N             DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     20480    17404                0           17404              0  N             FRA/
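ORA-15025 with "Permission denied" points at the device's OS ownership and mode rather than ASM metadata, so the next check is on the block device itself. A hedged sketch; `/dev/asmdiskd` and the `grid:asmadmin` ownership are taken from this case and may differ in your environment:

```shell
#!/bin/sh
# Print the owner, group and octal mode of a candidate ASM disk device.
check_asm_disk() {
  stat -c 'owner=%U group=%G mode=%a' "$1"
}

# Usage (on the affected host):
#   check_asm_disk /dev/asmdiskd
# An ASM disk is typically owned grid:asmadmin with mode 660; if not, as root:
#   chown grid:asmadmin /dev/asmdiskd && chmod 660 /dev/asmdiskd
```

A common way to end up here is a udev rule that stopped matching after a reboot, so verifying the rule as well as fixing the mode by hand is worthwhile.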
[root@racnode1 ~]# /u01/app/12.2.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/racnode1/crsconfig/rootcrs_racnode1_2019-04-11_10-21-10AM.log
2019/04/11 10:21:12 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2019/04/11 10:21:12 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2019/04/11 10:21:46 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2019/04/11 10:21:46 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2019/04/11 10:21:52 CLSRSC-363: User ignored prerequisites during installation
2019/04/11 10:21:52 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2019/04/11 10:21:54 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2019/04/11 10:21:55 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2019/04/11 10:22:08 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2019/04/11 10:22:09 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2019/04/11 10:22:09 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2019/04/11 10:23:00 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2019/04/11 10:23:12 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2019/04/11 10:23:12 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2019/04/11 10:23:18 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2019/04/11 10:23:34 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2019/04/11 10:23:55 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2019/04/11 10:24:02 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode1'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2019/04/11 10:24:24 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2019/04/11 10:24:36 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode1'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'racnode1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'racnode1'
CRS-2676: Start of 'ora.mdnsd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.evmd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'racnode1'
CRS-2676: Start of 'ora.gpnpd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode1'
CRS-2672: Attempting to start 'ora.gipcd' on 'racnode1'
CRS-2676: Start of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racnode1'
CRS-2672: Attempting to start 'ora.diskmon' on 'racnode1'
CRS-2676: Start of 'ora.diskmon' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cssd' on 'racnode1' succeeded
ASM failed to start. Check /u01/app/grid/cfgtoollogs/asmca/asmca-190411AM102519.log for details.
2019/04/11 10:25:41 CLSRSC-184: Configuration of ASM failed
2019/04/11 10:25:47 CLSRSC-258: Failed to configure and start ASM
Died at /u01/app/12.2.0/grid/crs/install/crsinstall.pm line 2091.
The command '/u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl ' execution failed
The log shows the error:
[main] [ 2019-04-11 10:25:29.659 CST ] [OracleHome.getVersion:1152] Current version from sqlplus: 12.2.0.1.0
[main] [ 2019-04-11 10:25:29.659 CST ] [UsmcaLogger.logInfo:156] Role SYSASM
[main] [ 2019-04-11 10:25:29.659 CST ] [UsmcaLogger.logInfo:156] OS Auth true
[main] [ 2019-04-11 10:25:40.692 CST ] [SQLEngine.done:2314] Done called
[main] [ 2019-04-11 10:25:40.695 CST ] [USMInstance.configureLocalASM:3367] ORA-00845: MEMORY_TARGET not supported on this system
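ORA-00845 on Linux generally means the `/dev/shm` tmpfs is smaller than the requested MEMORY_TARGET. A hedged sketch of the check; `TARGET_MB=1536` is an assumed value, take the real one from your pfile/spfile:

```shell
#!/bin/sh
# Compare the size of /dev/shm with an (assumed) MEMORY_TARGET in MB.
TARGET_MB=1536
SHM_MB=$(df -m /dev/shm 2>/dev/null | awk 'NR==2 {print $2}')

if [ -n "$SHM_MB" ] && [ "$SHM_MB" -lt "$TARGET_MB" ]; then
  echo "/dev/shm is ${SHM_MB}M, below MEMORY_TARGET ${TARGET_MB}M"
  # As root, remount it larger and persist the size in /etc/fstab:
  #   mount -o remount,size=2g /dev/shm
fi
```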
[root@racnode1 trace]# /u01/app/12.2.0/grid/crs/install/rootcrs.sh -deconfig -force
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/racnode1/crsconfig/crsdeconfig_racnode1_2019-04-11_11-06-32AM.log
PRCR-1070 : Failed to check if resource ora.net1.network is registered
CRS-0184 : Cannot communicate with the CRS daemon.
PRCR-1070 : Failed to check if resource ora.helper is registered
CRS-0184 : Cannot communicate with the CRS daemon.
PRCR-1070 : Failed to check if resource ora.ons is registered
CRS-0184 : Cannot communicate with the CRS daemon.
2019/04/11 11:06:45 CLSRSC-180: An error occurred while executing the command '/u01/app/12.2.0/grid/bin/srvctl config nodeapps'
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.evmd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'racnode1'
CRS-2679: Attempting to clean 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.evmd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racnode1'
CRS-2677: Stop of 'ora.mdnsd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.cssd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'racnode1'
CRS-2677: Stop of 'ora.gpnpd' on 'racnode1' succeeded
CRS-2681: Clean of 'ora.asm' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'racnode1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2019/04/11 11:07:16 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2019/04/11 11:09:01 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2019/04/11 11:09:02 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node
[root@racnode1 trace]# /u01/app/12.2.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/racnode1/crsconfig/rootcrs_racnode1_2019-04-11_11-09-55AM.log
2019/04/11 11:09:58 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2019/04/11 11:09:58 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2019/04/11 11:10:29 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2019/04/11 11:10:29 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2019/04/11 11:10:34 CLSRSC-363: User ignored prerequisites during installation
2019/04/11 11:10:34 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2019/04/11 11:10:37 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2019/04/11 11:10:38 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2019/04/11 11:10:46 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2019/04/11 11:10:46 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2019/04/11 11:10:46 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2019/04/11 11:11:24 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2019/04/11 11:11:33 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2019/04/11 11:11:33 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2019/04/11 11:11:39 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2019/04/11 11:11:55 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2019/04/11 11:12:15 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2019/04/11 11:12:20 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode1'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2019/04/11 11:12:42 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2019/04/11 11:12:48 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode1'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'racnode1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'racnode1'
CRS-2676: Start of 'ora.evmd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'racnode1'
CRS-2676: Start of 'ora.gpnpd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode1'
CRS-2672: Attempting to start 'ora.gipcd' on 'racnode1'
CRS-2676: Start of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racnode1'
CRS-2672: Attempting to start 'ora.diskmon' on 'racnode1'
CRS-2676: Start of 'ora.diskmon' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cssd' on 'racnode1' succeeded
Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-190411AM111328.log for details.
2019/04/11 11:20:23 CLSRSC-482: Running command: '/u01/app/12.2.0/grid/bin/ocrconfig -upgrade grid oinstall'
CRS-2672: Attempting to start 'ora.crf' on 'racnode1'
CRS-2672: Attempting to start 'ora.storage' on 'racnode1'
CRS-2676: Start of 'ora.storage' on 'racnode1' succeeded
CRS-2676: Start of 'ora.crf' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk d5d63504c2084fd1bf20c9109c45188c.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name        Disk group
--  -----    -----------------                ---------        ---------
 1. ONLINE   d5d63504c2084fd1bf20c9109c45188c (/dev/asmdiskf)  [DATA]
Located 1 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode1'
CRS-2673: Attempting to stop 'ora.crsd' on 'racnode1'
CRS-2677: Stop of 'ora.crsd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'racnode1'
CRS-2673: Attempting to stop 'ora.crf' on 'racnode1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racnode1'
CRS-2677: Stop of 'ora.crf' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.storage' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racnode1'
CRS-2677: Stop of 'ora.mdnsd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.asm' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'racnode1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode1'
CRS-2673: Attempting to stop 'ora.evmd' on 'racnode1'
CRS-2677: Stop of 'ora.evmd' on 'racnode1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racnode1'
CRS-2677: Stop of 'ora.cssd' on 'racnode1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'racnode1'
CRS-2677: Stop of 'ora.gipcd' on 'racnode1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2019/04/11 11:26:42 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'racnode1'
CRS-2672: Attempting to start 'ora.evmd' on 'racnode1'
CRS-2676: Start of 'ora.evmd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'racnode1'
CRS-2676: Start of 'ora.gpnpd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'racnode1'
CRS-2676: Start of 'ora.gipcd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode1'
CRS-2676: Start of 'ora.cssdmonitor' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racnode1'
CRS-2672: Attempting to start 'ora.diskmon' on 'racnode1'
CRS-2676: Start of 'ora.diskmon' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cssd' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'racnode1'
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode1'
CRS-2676: Start of 'ora.ctssd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'racnode1'
CRS-2676: Start of 'ora.storage' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'racnode1'
CRS-2676: Start of 'ora.crf' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: racnode1
CRS-6016: Resource auto-start has completed for server racnode1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2019/04/11 11:31:30 CLSRSC-343: Successfully started Oracle Clusterware stack
2019/04/11 11:31:33 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'racnode1'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'racnode1'
CRS-2676: Start of 'ora.DATA.dg' on 'racnode1' succeeded
2019/04/11 11:43:18 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2019/04/11 11:49:21 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
SQL> startup nomount
ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device
ORA-27302: failure occurred at: sskgpcreates
------ Semaphore Limits --------
max number of arrays = 128             // SEMMNI
max semaphores per array = 250         // SEMMSL
max semaphores system wide = 32000     // SEMMNS
max ops per semop call = 100           // SEMOP
semaphore max value = 32767
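With semget failing with status 28 (ENOSPC), the kernel has run out of semaphore sets (SEMMNI) or total semaphores (SEMMNS), typically because several instances already consumed them. Each instance needs roughly ceil(PROCESSES / SEMMSL) sets (Oracle actually allocates a few extra semaphores per set, so treat this as a lower bound). A hedged sketch; `PROCESSES=2000` is an assumed init parameter value:

```shell
#!/bin/sh
# sem_sets_needed PROCESSES SEMMSL -> approximate number of semaphore sets
# one instance consumes: ceil(PROCESSES / SEMMSL) via integer arithmetic.
sem_sets_needed() {
  processes=$1 semmsl=$2
  echo $(( (processes + semmsl - 1) / semmsl ))
}

sem_sets_needed 2000 250    # prints 8, i.e. 8 sets out of the SEMMNI=128 limit
# Current limits:       cat /proc/sys/kernel/sem    -> SEMMSL SEMMNS SEMOPM SEMMNI
# Raise them (as root): sysctl -w kernel.sem="250 64000 100 256"
#                       and persist the setting in /etc/sysctl.conf
```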
Oracle Error:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '+DG/xxx/datafile/system01.dbf'
16:54:07 sys. >alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '+DG/xxx/datafile/system01.dbf'
sys >select (select distinct checkpoint_change# from v$datafile) dfile_ckp,
            (select checkpoint_change# from v$database) db_ckp,
            (select distinct checkpoint_change# from v$datafile_header) dh_ckp
       from dual;
set numwidth 30;
set pagesize 50000;
alter session set nls_date_format = 'DD-MON-RRRR HH24:MI:SS';
sys.>select status, checkpoint_change#, checkpoint_time, resetlogs_change#, resetlogs_time, count(*), fuzzy
       from v$datafile_header
      group by status, checkpoint_change#, checkpoint_time, resetlogs_change#, resetlogs_time, fuzzy;
    GROUP# STATUS  TYPE     MEMBER                                  IS_
---------- ------- -------  --------------------------------------- ---
         3         ONLINE   +REDO/xxx/redo03.log                    NO
         2         ONLINE   +REDO/xxx/redo02.log                    NO
         1         ONLINE   +REDO/xxx/redo01.log                    NO
         4         STANDBY  +REDO/xxx/stdredo01.log                 NO
         5         STANDBY  +REDO/xxx/stdredo02.log                 NO
         6         STANDBY  +REDO/xxx/stdredo03.log                 NO
         7         STANDBY  +REDO/xxx/stdredo04.log                 NO
        10         ONLINE   +REDO/xxx/redo10.log                    NO
        11         ONLINE   +REDO/xxx/redo11.log                    NO
ALTER DATABASE RENAME FILE '/home/oradata/xxx/redo03.log' to '+REDO/xxx/redo03.log';
ALTER DATABASE RENAME FILE '/home/oradata/xxx/redo02.log' to '+REDO/xxx/redo02.log';
ALTER DATABASE RENAME FILE '/home/oradata/xxx/redo01.log' to '+REDO/xxx/redo01.log';
ALTER DATABASE RENAME FILE '/home/oradata/xxx/stdredo01.log' to '+REDO/xxx/stdredo01.log';
ALTER DATABASE RENAME FILE '/home/oradata/xxx/stdredo02.log' to '+REDO/xxx/stdredo02.log';
ALTER DATABASE RENAME FILE '/home/oradata/xxx/stdredo03.log' to '+REDO/xxx/stdredo03.log';
ALTER DATABASE RENAME FILE '/home/oradata/xxx/stdredo04.log' to '+REDO/xxx/stdredo04.log';
ALTER DATABASE RENAME FILE '/home/oradata/xxx/redo10.log' to '+REDO/xxx/redo10.log';
ALTER DATABASE RENAME FILE '/home/oradata/xxx/redo11.log' to '+REDO/xxx/redo11.log';
22:52:18 sys. xxx>RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL ;
ORA-00279: change 75238614094 generated at 02/23/2019 00:15:46 needed for thread 1
ORA-00289: suggestion : +NVMEDG/xxx/archivelog/2019_03_01/thread_1_seq_21201.664.1001796429
ORA-00280: change 75238614094 for thread 1 is in sequence #21201
22:52:35 Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
+NVMEDG/xxx/archivelog/2019_03_01/thread_1_seq_21201.664.1001796429
ORA-00279: change 75238624057 generated at 02/23/2019 00:19:25 needed for thread 1
ORA-00289: suggestion : +NVMEDG/xxx/archivelog/2019_03_01/thread_1_seq_21202.663.1001796179
ORA-00280: change 75238624057 for thread 1 is in sequence #21202
ORA-00278: log file '+NVMEDG/xxx/archivelog/2019_03_01/thread_1_seq_21201.664.1001796429' no longer needed for this recovery
22:52:49 Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
+NVMEDG/xxx/archivelog/2019_03_01/thread_1_seq_21202.663.1001796179
ORA-00279: change 75238632711 generated at 02/23/2019 00:22:37 needed for thread 1
ORA-00289: suggestion : +nvmedg
ORA-00280: change 75238632711 for thread 1 is in sequence #21203
ORA-00278: log file '+NVMEDG/xxx/archivelog/2019_03_01/thread_1_seq_21202.663.1001796179' no longer needed for this recovery
22:53:03 Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
-- at this point no further archived logs are available for the recovery
auto
ORA-00308: cannot open archived log '+nvmedg'
ORA-17503: ksfdopn:2 Failed to open file +nvmedg
ORA-15045: ASM file name '+nvmedg' is not in reference form
ORA-00308: cannot open archived log '+nvmedg'
ORA-17503: ksfdopn:2 Failed to open file +nvmedg
ORA-15045: ASM file name '+nvmedg' is not in reference form
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '+NVMEDG/xxx/datafile/system01.dbf'
After applying the two archived logs 21201 and 21202, the database still could not be opened.
sys. >alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '+DG/xxx/datafile/system01.dbf'
sys >select (select distinct checkpoint_change# from v$datafile) dfile_ckp,
            (select checkpoint_change# from v$database) db_ckp,
            (select distinct checkpoint_change# from v$datafile_header) dh_ckp
       from dual;
set numwidth 30;
set pagesize 50000;
alter session set nls_date_format = 'DD-MON-RRRR HH24:MI:SS';
select status, checkpoint_change#, checkpoint_time, resetlogs_change#, resetlogs_time, count(*), fuzzy
  from v$datafile_header
 group by status, checkpoint_change#, checkpoint_time, resetlogs_change#, resetlogs_time, fuzzy;
alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01139: RESETLOGS option only valid after an incomplete database recovery
alter database open;
alter database open
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 97239
Session ID: 1921 Serial number: 1
Checking the alert e-mails in the morning, the primary of a 10.2.0.5 Data Guard pair kept reporting ORA-16086; per the error text, the standby database has no available standby log files. From 11g onward the message was reworded to: ORA-16086: Redo data cannot be written to the standby redo log.
16086, 0000, "standby database does not contain available standby log files"
// *Cause: The primary database is in "no data loss" mode, but the standby
//         database does not contain any "standby log files".
// *Action: Add one or more standby log files to the standby database.
//          This can be done while the standby database is mounted.
SQL> select protection_mode,protection_level from v$database;
PROTECTION_MODE      PROTECTION_LEVEL
-------------------- --------------------
MAXIMUM AVAILABILITY RESYNCHRONIZATION
Yet the standby log files exist on both the primary and the standby, are the same size, and show normal status:
SQL> select * from v$logfile;
    GROUP# STATUS  TYPE     MEMBER                                       IS_
---------- ------- -------  -------------------------------------------- ---
         1         ONLINE   /u01/app/oracle/oradata/test/redo01.log      NO
         2         ONLINE   /u01/app/oracle/oradata/test/redo02.log      NO
         3         ONLINE   /u01/app/oracle/oradata/test/redo03.log      NO
         4         ONLINE   /u01/app/oracle/oradata/test/redo04.log      NO
         5         ONLINE   /u01/app/oracle/oradata/test/redo05.log      NO
         6         ONLINE   /u01/app/oracle/oradata/test/redo06.log      NO
         7         STANDBY  /u01/app/oracle/oradata/test/redo07.log      NO
         8         STANDBY  /u01/app/oracle/oradata/test/redo08.log      NO
         9         STANDBY  /u01/app/oracle/oradata/test/redo09.log      NO
        10         STANDBY  /u01/app/oracle/oradata/test/redo10.log      NO
        11         STANDBY  /u01/app/oracle/oradata/test/redo11.log      NO
        12         STANDBY  /u01/app/oracle/oradata/test/redo12.log      NO
        13         STANDBY  /u01/app/oracle/oradata/test/redo13.log      NO
MOS notes suggest this can be a bug triggered by a full FRA directory, but this database does not have the FRA enabled:
SQL> show parameter recovery
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string
db_recovery_file_dest_size           big integer 0
recovery_parallelism                 integer     0
Try rebuilding the standby log files: drop and recreate them on both the primary and the standby.
alter database drop standby logfile group 7;
alter database drop standby logfile group 8;
alter database drop standby logfile group 9;
alter database drop standby logfile group 10;
alter database drop standby logfile group 11;
alter database drop standby logfile group 12;
alter database drop standby logfile group 13;
select 'alter database add standby logfile group '||group#||' '''||member||''' size 1g reuse;' from v$logfile where type='STANDBY';
recover managed standby database cancel;
recover managed standby database using current logfile disconnect from session;
SYS@> show sga
ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","SELECT DECODE(null,'','Total...","SQLA","tmp")
The error code indicates a problem with shared pool usage.
[oracle@xxx trace]$ oerr ora 4031
04031, 00000, "unable to allocate %s bytes of shared memory (\"%s\",\"%s\",\"%s\",\"%s\")"
// *Cause:  More shared memory is needed than was allocated in the shared
//          pool or Streams pool.
// *Action: If the shared pool is out of memory, either use the
//          DBMS_SHARED_POOL package to pin large packages,
//          reduce your use of shared memory, or increase the amount of
//          available shared memory by increasing the value of the
//          initialization parameters SHARED_POOL_RESERVED_SIZE and
//          SHARED_POOL_SIZE.
//          If the large pool is out of memory, increase the initialization
//          parameter LARGE_POOL_SIZE.
//          If the error is issued from an Oracle Streams or XStream process,
//          increase the initialization parameter STREAMS_POOL_SIZE or increase
//          the capture or apply parameter MAX_SGA_SIZE.
PARAMETER_NAME   TYPE         VALUE
---------------- ------------ ------
sga_target       big integer  12G
The SGA here has six shared pool subpools, and in each subpool most of the memory, roughly 1 GB apiece, is allocated to "KGH: NO ACCESS".
       NUM N_HEX NAME                           VALUE DESCRIPTION
---------- ----- ------------------------------ ----- ---------------------------------------------
       111 6F    _enable_shared_pool_durations  TRUE  temporary to disable/enable kgh policy
The final fix:
SQL> alter system set "_enable_shared_pool_durations"=false scope=spfile;
-- then restart the database