Monday, February 11, 2019

opatch: check the patches included in a patch

I was aware of the opatch lsinventory -bugs_fixed option to check the patches applied to an Oracle home, but didn't know you could also run this against a downloaded patch(set):


opatch lspatches -bugs <patch location>


note that if it is a bundle patch, such as the 12.1.0.2.190115 DBBP, you need to go into the main patchset directory and point to the individual sub-patches.


In our case:


opatch lspatches -bugs  ./unzipped/28833531/28729220
patch_id:28729220
unique_patch_id:22494611
date_of_patch:10 Oct 2018, 18:36:59 hrs PST8PDT
patch_description:ACFS PATCH SET UPDATE 12.1.0.2.190115 (28729220)
component:oracle.usm,12.1.0.2.0,optional
platform:226,Linux x86-64
instance_shutdown:true
online_rac_installable:true
patch_type:bundle_member
product_family:db
auto:false
bug:19452723, NEED TO FIX THE SUPPORTED VERSIONS FOR KA ON LINUX
bug:18900953, CONFUSING AFD MESSAGE IN THE GI ALERT LOG
bug:23625427, DLM_PANIC_MSG  <INVALID CHECKSUM>
bug:24308283, AFD FAILED TO SEND OUT UNMAP WHILE USING PARTITIONS IN 12.1.0.2.0 CODE LINE
bug:26882237, ODA  SBIN ACFSUTIL SNAP INFO FAILS WITH   ACFS-03044  FAILED TO OPEN MOUNT POINT
bug:26396215, FSCK CHANGES NEEDED TO IMPROVE PERFORMANCE ON MANY TB SIZED FILE SYSTEMS
bug:28142134, RETPOLINE SUPPORT FOR SLES - ACFS - USM - SPECTRE
bug:25381434, SLES12 SP2 SUPPORT FOR ACFS
bug:23639692, LNX64-112-CMT  HEAP CORRUPTION RELOCATING ACL VOLUME
bug:18951113, AFD FILTERING STATUS IS NOT PERISTENT ACROSS NODE REBOOT
bug:22810422, UEKR4 SUPPORT FOR ACFS
bug:21815339, OPNS PANIC AT OFSOBFUSCATEENCRPRIVCTXT WITH ACTIVE ENCR STRESS TEST
bug:20923224, AFD LINUX SHOULD ISSUE IO WITH 512 SECTOR ADDRESSING
bug:26275740, DIAGNOSIBILITY   AUTOMATICALLY DUMPSTATE AND DUMPSTATS ON FILE SYSTEM INCIDENT
bug:19517835, KA+EF:TEST HANG MIGHT BE RELATED TO LOST MESSAGES TO KA DURING MULTI-BLOCK READ
bug:21474561, LINUX DRIVER SIGNING SUPPORT
bug:18185024, T5 SSC: MACHINE PANIC IN KOBJ_LOAD_MODULE DURING GRID INSTALL
bug:28111958, ACFS-1022 DESPITE BUG FIX

......
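To check every sub-patch of a bundle in one pass, a small loop can help. This is a hypothetical helper, not part of OPatch; the bundle path matches the example above, and the opatch location is an assumption you should adjust:

```shell
#!/bin/sh
# Hypothetical helper: list each numeric sub-patch directory of an
# unzipped bundle patch and feed it to "opatch lspatches -bugs".
# BUNDLE matches the example above; OPATCH path is an assumption.

list_subpatches() {
  # print every numeric sub-directory of $1, one per line
  for p in "$1"/[0-9]*; do
    [ -d "$p" ] && echo "$p"
  done
}

BUNDLE=${BUNDLE:-./unzipped/28833531}
OPATCH=${ORACLE_HOME:-/u01/app/12.2.0.1/grid}/OPatch/opatch

if [ -x "$OPATCH" ]; then
  list_subpatches "$BUNDLE" | while read -r p; do
    echo "=== $p ==="
    "$OPATCH" lspatches -bugs "$p"
  done
fi
```

The numeric glob skips README files and other non-patch content that ships inside the bundle directory.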



Out of place GI upgrade on Exadata OVM



The client I am currently working for wanted to patch their Exadatas to the latest and greatest patchset, which came out 1.5 weeks ago.


This QFSDP (January 2019) upgrades the GI from 12.2.0.1.180116 to 12.2.0.1.190115.

We followed the Oracle recommendation to patch out of place. However, we first tried to use the same method as last time, when we went from 12.1 to 12.2, as indicated in note

12.2 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.3 and later on Oracle Linux (Doc ID 2111010.1)


That unfortunately didn't work, because we aren't doing an upgrade but just a patch.


My colleague tried to use opatchauto with -prepare-clone etc. but ran into issues.


After a while I found that there is a -switchGridHome option for gridSetup.sh.

So basically you execute that from your new home by specifying:


./gridSetup.sh -switchGridHome -silent



So these are the steps we followed:


  • Download the golden image via MOS note 888828.1.
  • Create a disk image and partition it on the Dom0.
  • Create the DomU-specific RefLink.
  • Mount the new device on the DomU.
  • Install the patched 12.2 GI software (executed as the GI owner):
      • Adapt the template response file (generated via interactive mode on the first node of the first DomU).
      • Set the environment correctly for the existing GI.
      • unset ORACLE_HOME ORACLE_BASE ORACLE_SID
      • cd /u01/app/12.2.0.1_190115/grid (which is the new GI home)
      • ./gridSetup.sh -silent -responseFile /home/grid/grid_install_12.2.0.1.190115.rsp
      • Execute the root.sh script as indicated on the screen (as root) on the local node only.
  • Repeat this procedure on the second node.
  • Perform the actual switch from the existing GI home to the new GI home (executed as the GI owner):
      • Check if an ASM rebalance is active. If so, wait and retry later.
      • unset ORACLE_HOME ORACLE_BASE ORACLE_SID
      • cd /u01/app/12.2.0.1_190115/grid (which is the new GI home)
      • ./gridSetup.sh -switchGridHome -silent
      • Check that the new binaries are relinked with RDS (if not, relink).
      • Execute the root.sh script as indicated on the screen (as root), first on the local node and then on the second node. ==> takes a while
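The "check if ASM rebalance is active" step can be scripted. This is a sketch, not our exact check: it assumes sqlplus is on the PATH with the ASM environment set, and uses the standard gv$asm_operation view, which lists in-flight ASM operations such as REBAL:

```shell
#!/bin/sh
# Sketch: wait until no ASM rebalance is running before switching homes.
# Assumes sqlplus is available with the ASM environment set.

has_rebal() {
  # read query output on stdin; succeed (0) if a REBAL row appears
  grep -q 'REBAL'
}

check_asm() {
  sqlplus -s "/ as sysasm" <<'EOF'
set heading off feedback off
select operation from gv$asm_operation;
EOF
}

if command -v sqlplus >/dev/null 2>&1; then
  while check_asm | has_rebal; do
    echo "ASM rebalance still running, waiting..."
    sleep 60
  done
  echo "no rebalance active, safe to switch"
fi
```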
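The RDS relink check can also be sketched in shell. On Exadata the skgxpinfo utility prints the interconnect protocol compiled into the binaries (it should report rds), and the ipc_rds make target relinks for RDS; the home path below comes from the steps above, the rest is an assumption about your environment:

```shell
#!/bin/sh
# Sketch: verify the new GI binaries use the RDS interconnect protocol
# (expected on Exadata) and relink if not. Run as the GI owner.
NEW_HOME=${NEW_HOME:-/u01/app/12.2.0.1_190115/grid}

needs_rds_relink() {
  # succeed (0) when the reported protocol is anything but rds
  [ "$1" != "rds" ]
}

if [ -x "$NEW_HOME/bin/skgxpinfo" ]; then
  proto=$("$NEW_HOME/bin/skgxpinfo")
  echo "interconnect protocol: $proto"
  if needs_rds_relink "$proto"; then
    # relink the binaries for RDS
    ( cd "$NEW_HOME/rdbms/lib" &&
      make -f ins_rdbms.mk ipc_rds ioracle ORACLE_HOME="$NEW_HOME" )
  fi
fi
```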