A client's test environment suffered an abnormal host power outage. Afterwards the undo datafile was found to be corrupted and the database could not be opened; the following errors were reported during the open stage:
Errors in file /u01/app/oracle/diag/rdbms/cdrdb/CDRDB/trace/CDRDB_ora_4109.trc:
ORA-01122: database file 3 failed verification check
ORA-01110: data file 3: '/u01/app/oracle/oradata/CDRDB/'
ORA-01210: data file header is media corrupt
ORA-1122 signalled during: ALTER DATABASE OPEN...
Since it is a test environment there is no backup, but some of the data is still needed, so I tried an unconventional recovery method.
First, take a cold backup of the existing environment!!!
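A minimal sketch of such a cold backup while the database is shut down (the /backup/CDRDB_cold target directory below is hypothetical; copy whatever datafile, control file, redo log and spfile locations your environment actually uses):
-- /backup/CDRDB_cold is a hypothetical target directory
SYS@CDRDB>shutdown immediate
SYS@CDRDB>host cp -rp /u01/app/oracle/oradata/CDRDB /backup/CDRDB_cold
SYS@CDRDB>host cp -p $ORACLE_HOME/dbs/spfileCDRDB.ora /backup/CDRDB_cold/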
Create a pfile:
create pfile from spfile;
Modify these two parameters in the pfile:
#*.undo_tablespace='UNDOTBS1'
*.undo_management= MANUAL
Then use this pfile to start:
SYS@CDRDB>startup force pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/';
ORACLE instance started.
Total System Global Area 523108352 bytes
Fixed Size 1337632 bytes
Variable Size 364906208 bytes
Database Buffers 150994944 bytes
Redo Buffers 5869568 bytes
Database mounted.
ORA-01122: database file 3 failed verification check
ORA-01110: data file 3: '/u01/app/oracle/oradata/CDRDB/'
ORA-01210: data file header is media corrupt
The same error was still reported.
Then try taking this undo datafile offline with the drop option:
SYS@CDRDB>alter database datafile 3 offline drop;
Database altered.
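A quick check one could run at this point to confirm the datafile is now offline:
SYS@CDRDB>select file#, name, status from v$datafile where file#=3;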
Then open the database again:
SYS@CDRDB>alter database open;
Database altered.
After that, create a new undo tablespace undotbs2
SYS@CDRDB>create undo tablespace undotbs2 datafile '/u01/app/oracle/oradata/CDRDB/undotbs02_01.dbf' size 100M;
create undo tablespace undotbs2 datafile '/u01/app/oracle/oradata/CDRDB/undotbs02_01.dbf' size 100M
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01552: cannot use system rollback segment for non-system tablespace
'DATA_OL'
ORA-06512: at line 999
ORA-01552: cannot use system rollback segment for non-system tablespace
'DATA_OL'
The error above was raised. I was puzzled as to why creating an undo tablespace would touch the DATA_OL tablespace, so I traced the session with event 10046.
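The trace is typically enabled before re-running the failing statement, roughly as follows (a sketch; the exact trace level used here is not recorded):
SYS@CDRDB>oradebug setmypid
SYS@CDRDB>oradebug event 10046 trace name context forever, level 12
SYS@CDRDB>create undo tablespace undotbs2 datafile '/u01/app/oracle/oradata/CDRDB/undotbs02_01.dbf' size 100M;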
SYS@CDRDB>oradebug event 10046 trace name context off
Statement processed.
SYS@CDRDB>oradebug tracefile_name
/u01/app/oracle/diag/rdbms/cdrdb/CDRDB/trace/CDRDB_ora_4279.trc
[oracle@centos-1 ~]$ tkprof /u01/app/oracle/diag/rdbms/cdrdb/CDRDB/trace/CDRDB_ora_4279.trc
output =
Looking through the formatted trace revealed the cause:
"OGG". IS NULL THEN
SELECT database_role,
open_mode
INTO dbRole, dbOpenMode
FROM v$database;
"OGG". := TRUE;
END IF;
IF NOT (
^@ (dbRole = 'PRIMARY' OR dbRole = 'LOGICAL STANDBY')
AND dbOpenMode =
'READ WRITE'
)
THEN
-- do not write any trace even though it
should work as this is standby
"OGG"
.(-1,-1,-1,-1,-1);
RETURN; -- do not use
trigger if not read/write and primary/logical_standby
END IF;
EXCEPTION
......slightly
It turns out that this database has OGG DDL synchronization configured; whenever DDL is executed, the DDL trigger generates an insert that uses the DATA_OL tablespace.
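To confirm which trigger is involved, the GoldenGate DDL trigger can be located like this (assuming the default trigger name GGS_DDL_TRIGGER_BEFORE):
SYS@CDRDB>select owner, trigger_name, status from dba_triggers where trigger_name like 'GGS_DDL%';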
The cause is found~ Run OGG's script to disable the DDL trigger on this database:
SYS@CDRDB>@ddl_disable.sql
Trigger altered.
Then recreate the undotbs2 tablespace
SYS@CDRDB>create undo tablespace undotbs2 datafile '/u01/app/oracle/oradata/CDRDB/undotbs02_01.dbf' size 1000M;
Tablespace created.
This time it was created successfully.
After undotbs2 is added, shut down the database and change the pfile parameters back:
*.undo_tablespace='UNDOTBS2'
*.undo_management=AUTO
Restart the database with the pfile:
SYS@CDRDB>startup pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/';
ORACLE instance started.
Total System Global Area 523108352 bytes
Fixed Size 1337632 bytes
Variable Size 364906208 bytes
Database Buffers 150994944 bytes
Redo Buffers 5869568 bytes
Database mounted.
Database opened.
The database started successfully, but queries against many business tables still raised errors:
ERROR at line 1:
ORA-00376: file 3 cannot be read at this time
ORA-01110: data file 3: '/u01/app/oracle/oradata/CDRDB/'
The transactions that were uncommitted when the database crashed still need to be rolled back. I first tried setting event 10513 to block SMON's transaction rollback:
SYS@CDRDB>alter system set events '10513 trace name context forever, level 2';
System altered.
The same error was still reported, so it seems the rollback segments need to be blocked manually.
SYS@CDRDB>SELECT SEGMENT_NAME,STATUS FROM DBA_ROLLBACK_SEGS;
SEGMENT_NAME STATUS
------------------------------ ----------------
SYSTEM ONLINE
_SYSSMU10_4131489474$ NEEDS RECOVERY
_SYSSMU9_1735643689$ NEEDS RECOVERY
_SYSSMU8_3901294357$ NEEDS RECOVERY
_SYSSMU7_3517345427$ NEEDS RECOVERY
_SYSSMU6_2897970769$ NEEDS RECOVERY
_SYSSMU5_538557934$ NEEDS RECOVERY
_SYSSMU4_1003442803$ NEEDS RECOVERY
_SYSSMU3_1204390606$ NEEDS RECOVERY
_SYSSMU2_967517682$ NEEDS RECOVERY
_SYSSMU1_592353410$ NEEDS RECOVERY
SEGMENT_NAME STATUS
------------------------------ ----------------
_SYSSMU30_244658789$ ONLINE
_SYSSMU29_1020880693$ ONLINE
_SYSSMU28_2912622077$ ONLINE
_SYSSMU27_747253598$ ONLINE
_SYSSMU26_560868814$ ONLINE
_SYSSMU25_1357066082$ ONLINE
_SYSSMU24_103440716$ ONLINE
_SYSSMU23_1006903361$ ONLINE
_SYSSMU22_2808190508$ ONLINE
_SYSSMU21_39626587$ ONLINE
21 rows selected.
Then add the segments reported as NEEDS RECOVERY in the query above to the two hidden parameters _offline_rollback_segments and _corrupted_rollback_segments in the pfile, to block these rollback segments (a query that generates the list is sketched after the parameter lines):
*._offline_rollback_segments=(_SYSSMU10_4131489474$,_SYSSMU9_1735643689$,_SYSSMU8_3901294357$,_SYSSMU7_3517345427$,_SYSSMU6_2897970769$,_SYSSMU5_538557934$,_SYSSMU4_1003442803$,_SYSSMU3_1204390606$,_SYSSMU2_967517682$,_SYSSMU1_592353410$)
*._corrupted_rollback_segments=(_SYSSMU10_4131489474$,_SYSSMU9_1735643689$,_SYSSMU8_3901294357$,_SYSSMU7_3517345427$,_SYSSMU6_2897970769$,_SYSSMU5_538557934$,_SYSSMU4_1003442803$,_SYSSMU3_1204390606$,_SYSSMU2_967517682$,_SYSSMU1_592353410$)
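The segment list can be generated from DBA_ROLLBACK_SEGS instead of being typed by hand, for example (a sketch using LISTAGG, available in 11gR2):
SYS@CDRDB>select listagg(segment_name, ',') within group (order by segment_name) from dba_rollback_segs where status = 'NEEDS RECOVERY';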
Use this pfile to start the database
startup force pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/';
After that, I checked the test tables and confirmed that they could all be queried, then exported the data. Everything is OK~
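For the final export, a Data Pump run along these lines would do (a sketch: SCOTT is a placeholder schema name and DATA_PUMP_DIR is the default directory object; adjust to the schemas you actually need):
[oracle@centos-1 ~]$ expdp system directory=DATA_PUMP_DIR schemas=SCOTT dumpfile=cdrdb_scott.dmp logfile=cdrdb_scott.log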