19c Data Guard Series Part III: adding a PDB to an existing Data Guard configuration
In the previous posts we set up Data Guard with the broker.
Now let's add a PDB. We will not create it from the seed but from an existing PDB, using the HOT clone feature introduced in 12.2 (why the emphasis on HOT? Keep on reading).
HOT CLONE
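Before trying this it is worth remembering that hot cloning a PDB that is open read/write requires the CDB to run in local undo mode and in ARCHIVELOG mode. A minimal sanity check, as a sketch (standard dictionary views, nothing environment-specific):

-- Hot clone prerequisites: local undo and archivelog mode.
SQL> select property_value as local_undo
       from database_properties
      where property_name = 'LOCAL_UNDO_ENABLED';

SQL> select log_mode from v$database;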
OK, on our primary:
show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 TDETEST                        READ WRITE NO
         4 ACMEPDB                        READ WRITE NO
         5 NO_TDE_DG                      READ WRITE NO
sys@CDBT01> create pluggable database testphil from acmepdb;

Pluggable database created.
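Not shown above: the freshly cloned PDB comes up in MOUNTED state. A sketch of the likely follow-up on the primary (an assumption, not part of the original session):

-- Open the new PDB and preserve its open mode across restarts.
sys@CDBT01> alter pluggable database testphil open;
sys@CDBT01> alter pluggable database testphil save state;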
In the alert log on the standby we see the following:
2020-04-29 12:21:18.831000 +00:00
Recovery created pluggable database TESTPHIL
Tablespace-SYSTEM during PDB create skipped since source is in r/w mode or this is a refresh clone
File #72 added to control file as 'UNNAMED00072'.
Originally created as: '+DATA2/c99V02T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/system.370.1039004473'
because the pluggable database was created with nostandby
or the tablespace belonging to the pluggable database is offline.
Tablespace-SYSAUX during PDB create skipped since source is in r/w mode or this is a refresh clone
File #73 added to control file as 'UNNAMED00073'.
Originally created as: '+DATA2/c99V02T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/sysaux.366.1039004473'
because the pluggable database was created with nostandby
or the tablespace belonging to the pluggable database is offline.
Tablespace-UNDOTBS1 during PDB create skipped since source is in r/w mode or this is a refresh clone
File #74 added to control file as 'UNNAMED00074'.
Originally created as: '+DATA2/c99V02T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/undotbs1.1273.1039004473'
because the pluggable database was created with nostandby
or the tablespace belonging to the pluggable database is offline.
Tablespace-TEMP during PDB create skipped since source is in r/w mode or this is a refresh clone
Tablespace-USERS during PDB create skipped since source is in r/w mode or this is a refresh clone
File #75 added to control file as 'UNNAMED00075'.
Originally created as: '+DATA2/c99V02T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/users.1238.1039004473'
because the pluggable database was created with nostandby
or the tablespace belonging to the pluggable database is offline.
Tablespace-UNDO_2 during PDB create skipped since source is in r/w mode or this is a refresh clone
File #76 added to control file as 'UNNAMED00076'.
Originally created as: '+DATA2/c99V02T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/undo_2.1258.1039004473'
because the pluggable database was created with nostandby
or the tablespace belonging to the pluggable database is offline.
That didn't work well: because the source PDB was open read/write, the standby skipped the datafiles and only registered UNNAMED placeholders in the control file.
Let's check on the standby:
SQL> column name format a100
SQL> column con_id format 99999
SQL> column file# format 99999
SQL> set lines 200
SQL> list
  1* select con_id,file#,name from v$datafile
SQL> /

CON_ID  FILE# NAME
------ ------ ----------------------------------------------------------------------------------------------------
     7     72 /u01/app/orls/product/19.5.0.0/dbhome_1/dbs/UNNAMED00072
     7     73 /u01/app/orls/product/19.5.0.0/dbhome_1/dbs/UNNAMED00073
     7     74 /u01/app/orls/product/19.5.0.0/dbhome_1/dbs/UNNAMED00074
     7     75 /u01/app/orls/product/19.5.0.0/dbhome_1/dbs/UNNAMED00075
     7     76 /u01/app/orls/product/19.5.0.0/dbhome_1/dbs/UNNAMED00076

29 rows selected.
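A complementary check, as a sketch (v$recover_file is a standard view; the file numbers are the ones from this environment):

-- Show why these placeholder files cannot be recovered.
SQL> select file#, online_status, error
       from v$recover_file
      where file# in (72,73,74,75,76);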
RMAN to the rescue
First we need to turn off recovery; in the broker:

edit database c99v01t1 set state = 'APPLY-OFF'

Then we connect with RMAN to the standby:

rman target sys/@c99v01t1.acme.com
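For completeness, the broker step as a minimal sketch (assuming DGMGRL is started with the same SYS credentials; the edit database command itself is from the post):

$ dgmgrl sys/@c99v01t1.acme.com
DGMGRL> edit database c99v01t1 set state = 'APPLY-OFF';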
RMAN> run
2> {
3> set newname for database to '+DATA1';
4> restore datafile 72,73,74,75,76 from service "c99v02t1.acme.com";
5> switch datafile all;
6> }

executing command: SET NEWNAME

Starting restore at 29-APR-20
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
using channel ORA_DISK_5
using channel ORA_DISK_6
using channel ORA_DISK_7
using channel ORA_DISK_8
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service c99v02t1.acme.com
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00072 to +DATA1
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: using network backup set from service c99v02t1.acme.com
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00073 to +DATA1
channel ORA_DISK_3: starting datafile backup set restore
channel ORA_DISK_3: using network backup set from service c99v02t1.acme.com
channel ORA_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_DISK_3: restoring datafile 00074 to +DATA1
channel ORA_DISK_4: starting datafile backup set restore
channel ORA_DISK_4: using network backup set from service c99v02t1.acme.com
channel ORA_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_DISK_4: restoring datafile 00075 to +DATA1
channel ORA_DISK_5: starting datafile backup set restore
channel ORA_DISK_5: using network backup set from service c99v02t1.acme.com
channel ORA_DISK_5: specifying datafile(s) to restore from backup set
channel ORA_DISK_5: restoring datafile 00076 to +DATA1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:12
channel ORA_DISK_2: restore complete, elapsed time: 00:00:09
channel ORA_DISK_3: restore complete, elapsed time: 00:00:07
channel ORA_DISK_4: restore complete, elapsed time: 00:00:05
channel ORA_DISK_5: restore complete, elapsed time: 00:00:04
Finished restore at 29-APR-20

datafile 72 switched to datafile copy
input datafile copy RECID=56 STAMP=1039005294 file name=+DATA1/c99V01T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/system.1097.1039005291
datafile 73 switched to datafile copy
input datafile copy RECID=58 STAMP=1039005297 file name=+DATA1/c99V01T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/sysaux.1098.1039005293
datafile 74 switched to datafile copy
input datafile copy RECID=57 STAMP=1039005295 file name=+DATA1/c99V01T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/undotbs1.1099.1039005295
datafile 75 switched to datafile copy
input datafile copy RECID=59 STAMP=1039005298 file name=+DATA1/c99V01T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/users.1100.1039005297
datafile 76 switched to datafile copy
input datafile copy RECID=60 STAMP=1039005299 file name=+DATA1/c99V01T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/undo_2.1101.1039005299

In the alert log we see:
2020-04-29 12:34:54.078000 +00:00
Full restore complete of datafile 72 to datafile copy +DATA1/c99V01T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/system.1097.1039005291. Elapsed time: 0:00:04
checkpoint is 42661226
last deallocation scn is 33456177
Undo Optimization current scn is 30760618
2020-04-29 12:34:55.584000 +00:00
Full restore complete of datafile 74 to datafile copy +DATA1/c99V01T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/undotbs1.1099.1039005295. Elapsed time: 0:00:01
checkpoint is 42661247
last deallocation scn is 40584027
Undo Optimization current scn is 30760618
2020-04-29 12:34:57.164000 +00:00
Full restore complete of datafile 73 to datafile copy +DATA1/c99V01T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/sysaux.1098.1039005293. Elapsed time: 0:00:05
checkpoint is 42661236
last deallocation scn is 41357874
2020-04-29 12:34:58.824000 +00:00
Full restore complete of datafile 75 to datafile copy +DATA1/c99V01T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/users.1100.1039005297. Elapsed time: 0:00:01
checkpoint is 42661257
last deallocation scn is 3
Full restore complete of datafile 76 to datafile copy +DATA1/c99V01T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/undo_2.1101.1039005299. Elapsed time: 0:00:00
checkpoint is 42661265
last deallocation scn is 41428294
Undo Optimization current scn is 30760618
2020-04-29 12:35:00.594000 +00:00
Switch of datafile 72 complete to datafile copy
checkpoint is 42661226
Switch of datafile 73 complete to datafile copy
checkpoint is 42661236
Switch of datafile 74 complete to datafile copy
checkpoint is 42661247
Switch of datafile 75 complete to datafile copy
checkpoint is 42661257
Switch of datafile 76 complete to datafile copy
checkpoint is 42661265

When we stop and start recovery we will see the following entry in the alert log:
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL
2020-04-29 12:36:57.468000 +00:00
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT NODELAY
Attempt to start background Managed Standby Recovery process (cdbt011)
Starting background process MRP0
MRP0 started with pid=98, OS id=116531
Background Managed Standby Recovery process started (cdbt011)
2020-04-29 12:37:02.597000 +00:00
Starting single instance redo apply (SIRA)
Started logmerger process
.... (PID:364649): Managed Standby Recovery starting Real Time Apply
max_pdb is 9
Parallel Media Recovery started with 8 slaves
stopping change tracking
Warning: Datafile 72 (+DATA1/c99V01T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/system.1097.1039005291) is offline during full database recovery and will not be recovered
Warning: Datafile 73 (+DATA1/c99V01T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/sysaux.1098.1039005293) is offline during full database recovery and will not be recovered
Warning: Datafile 74 (+DATA1/c99V01T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/undotbs1.1099.1039005295) is offline during full database recovery and will not be recovered
Warning: Datafile 75 (+DATA1/c99V01T1/A46C2292C49FE4B0E053468A400A3FBA/DATAFILE/users.1100.1039005297) is offline during full database recovery and will not be recovered

The datafiles are offline, and before we can bring them online again we need to put the SCN in sync: let's turn redo apply off again, recover the standby manually, and then bring the datafiles online.
SQLPLUS> recover standby database;
SQLPLUS> alter session set container=PHILTEST;
SQLPLUS> alter database datafile 72,73,74,75,76 online;

Voilà, we are back in the game.
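One earlier step still needs undoing: redo apply was set to APPLY-OFF in the broker. A sketch of handing control back to the broker and verifying (standard DGMGRL commands; the database name is from this environment):

DGMGRL> edit database c99v01t1 set state = 'APPLY-ON';
DGMGRL> show configuration;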
The question is: was this necessary?
Let's do a COLD clone.
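A cold clone means the source PDB is open read-only during the clone. The post does not show this step, but a minimal sketch on the primary would be:

-- Bounce the source PDB into read-only mode before cloning.
sys@CDBT01> alter pluggable database acmepdb close immediate;
sys@CDBT01> alter pluggable database acmepdb open read only;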
On the primary:

show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 TDETEST                        READ WRITE NO
         4 ACMEPDB                        READ ONLY  NO

sys@CDBT01> create pluggable database testphil from acmepdb;

Pluggable database created.

This results in the following in the alert log on the standby:
Recovery created pluggable database TEST
2020-04-28 15:10:40.992000 +00:00
Recovery copied files for tablespace SYSTEM
Recovery successfully copied file +DATA1/c99V01T1/A45C2FAC4E080F5FE053478A400A77C3/DATAFILE/system.1097.1038928239 from +DATA1/c99V01T1/94DB3A8C954C5769E053468A400A8580/DATAFILE/system.1036.1038565341
Datafile 52 added to flashback set
Successfully added datafile 52 to media recovery
Datafile #52: '+DATA1/c99V01T1/A45C2FAC4E080F5FE053478A400A77C3/DATAFILE/system.1097.1038928239'
2020-04-28 15:10:44.860000 +00:00
Recovery copied files for tablespace SYSAUX
Recovery successfully copied file +DATA1/c99V01T1/A45C2FAC4E080F5FE053478A400A77C3/DATAFILE/sysaux.1098.1038928241 from +DATA1/c99V01T1/94DB3A8C954C5769E053468A400A8580/DATAFILE/sysaux.1035.1038565343
Datafile 53 added to flashback set
Successfully added datafile 53 to media recovery
Datafile #53: '+DATA1/c99V01T1/A45C2FAC4E080F5FE053478A400A77C3/DATAFILE/sysaux.1098.1038928241'
2020-04-28 15:10:46.189000 +00:00
Recovery copied files for tablespace UNDOTBS1
Recovery successfully copied file +DATA1/c99V01T1/A45C2FAC4E080F5FE053478A400A77C3/DATAFILE/undotbs1.1099.1038928245 from +DATA1/c99V01T1/94DB3A8C954C5769E053468A400A8580/DATAFILE/undotbs1.1034.1038565345
Datafile 54 added to flashback set
Successfully added datafile 54 to media recovery
Datafile #54: '+DATA1/c99V01T1/A45C2FAC4E080F5FE053478A400A77C3/DATAFILE/undotbs1.1099.1038928245'
2020-04-28 15:10:48.985000 +00:00
Recovery copied files for tablespace USERS
Recovery successfully copied file +DATA1/c99V01T1/A45C2FAC4E080F5FE053478A400A77C3/DATAFILE/users.1100.1038928247 from +DATA1/c99V01T1/94DB3A8C954C5769E053468A400A8580/DATAFILE/users.1031.1038565353
Datafile 55 added to flashback set
Successfully added datafile 55 to media recovery
Datafile #55: '+DATA1/c99V01T1/A45C2FAC4E080F5FE053478A400A77C3/DATAFILE/users.1100.1038928247'
Recovery copied files for tablespace UNDO_2
Recovery successfully copied file +DATA1/c99V01T1/A45C2FAC4E080F5FE053478A400A77C3/DATAFILE/undo_2.1101.1038928249 from +DATA1/c99V01T1/94DB3A8C954C5769E053468A400A8580/DATAFILE/undo_2.1033.1038565349
Datafile 56 added to flashback set
Successfully added datafile 56 to media recovery
Datafile #56: '+DATA1/c99V01T1/A45C2FAC4E080F5FE053478A400A77C3/DATAFILE/undo_2.1101.1038928249'
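Afterwards the source PDB can go back to read/write and the clone can be opened; again a sketch, not part of the original output:

-- Return the source to read/write and open the new PDB.
sys@CDBT01> alter pluggable database acmepdb close immediate;
sys@CDBT01> alter pluggable database acmepdb open read write;
sys@CDBT01> alter pluggable database testphil open;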
So when doing COLD clones everything works as expected: the manual RMAN restore and recovery is not needed anymore.
It would be a good enhancement if Data Guard could handle HOT clones as well. I played around with PDB_FILE_NAME_CONVERT and so on, but could not get it working with HOT clones.
I will talk with the Data Guard PM, Pieter Van Puymbroeck, about this.