Wednesday, November 23, 2016

Part 3: Grid Infrastructure and Database Upgrade from 11gR2 (11.2.0.3) to 12c (12.1.0.2)

Upgrade Database to 12.1.0.2
Check Invalid Objects:
set pagesize 500
set linesize 100
select substr(comp_name,1,40) comp_name, status, substr(version,1,10) version from dba_registry order by comp_name;

select substr(object_name,1,40) object_name,substr(owner,1,15) owner,object_type from dba_objects where status='INVALID' order by owner,object_type;

select owner,object_type,count(*) from dba_objects where status='INVALID' group by owner,object_type order by owner,object_type ;

To validate (recompile) the invalid objects:

$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus "/ as sysdba"
SQL> @utlrp.sql
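A quick way to confirm the recompile pass helped is to compare invalid-object counts before and after running utlrp.sql. The helper below is a hypothetical sketch that sums the counts from a spooled copy of the GROUP BY query above; the spool file name and format are assumptions, not part of the original procedure:

```shell
# Hypothetical helper: sum the COUNT(*) column of a spooled copy of the
# "group by owner, object_type" query above. Assumes each data line ends
# with a numeric count; headers and underline rows are skipped automatically.
count_invalid() {
  awk '$NF ~ /^[0-9]+$/ { total += $NF } END { print total + 0 }' "$1"
}

# Example with an illustrative spool file (format assumed):
cat > /tmp/invalid_before.txt <<'EOF'
OWNER      OBJECT_TYPE     COUNT(*)
---------- --------------- --------
SYS        PACKAGE BODY    3
SYSMAN     VIEW            2
EOF
count_invalid /tmp/invalid_before.txt   # prints 5
```

Running the same helper against a spool taken after utlrp.sql makes it easy to see whether the invalid count dropped to zero.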

Ref : Complete Checklist for Upgrading to Oracle Database 12c Release 1 using DBUA (Doc ID 1516557.1)

Collect DB Upgrade/Migrate Diagnostic Information:
cd <location of the script>
$ sqlplus / as sysdba
sql> alter session set nls_language='American';
sql> @dbupgdiag.sql
sql> exit

Ref: Script to Collect DB Upgrade/Migrate Diagnostic Information (dbupgdiag.sql) (Doc ID 556610.1)

Check which schemas have stale statistics and should have statistics gathered:
$ sqlplus / as sysdba

SQL> @check_stale_stats.sql

Example:

EXEC DBMS_STATS.GATHER_DICTIONARY_STATS('SYS',OPTIONS=>'GATHER', ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE, METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO', CASCADE => TRUE);


Disable Oracle Database Vault & Oracle Label Security
Ref : Complete Checklist for Upgrading to Oracle Database 12c Release 1 using DBUA (Doc ID 1516557.1)

Drop EM DB Console Repository from 11gR2 DB

[oracle@ol6-112-rac1 ~]$ emca -deconfig dbcontrol db -repos drop

STARTED EMCA at Nov 7, 2016 4:09:00 PM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.

Enter the following information:
Database SID: RAC1
Listener port number: 1521
Password for SYS user:
Password for SYSMAN user:

----------------------------------------------------------------------
WARNING : While repository is dropped the database will be put in quiesce mode.
----------------------------------------------------------------------
Do you wish to continue? [yes(Y)/no(N)]: y
Nov 7, 2016 4:10:45 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /u01/app/oracle/cfgtoollogs/emca/RAC/emca_2016_11_07_16_09_00.log.
Nov 7, 2016 4:10:46 PM oracle.sysman.emcp.EMDBPreConfig performDeconfiguration
WARNING: EM is not configured for this database. No EM-specific actions can be performed. Some of the possible reasons may be:
 1) EM is configured with different hostname then physical host. Set environment variable ORACLE_HOSTNAME=<hostname> and re-run EMCA script
 2) ORACLE_HOSTNAME is set. Unset it and re-run EMCA script
Nov 7, 2016 4:10:46 PM oracle.sysman.emcp.EMReposConfig invoke
INFO: Dropping the EM repository (this may take a while) ...
Nov 7, 2016 4:14:24 PM oracle.sysman.emcp.EMReposConfig invoke
INFO: Repository successfully dropped
Enterprise Manager configuration completed successfully
FINISHED EMCA at Nov 7, 2016 4:14:24 PM
[oracle@ol6-112-rac1 ~]$

For an alternative approach, see:
Ref : Complete Checklist for Upgrading to Oracle Database 12c Release 1 using DBUA (Doc ID 1516557.1)

Verify cluster status
Perform an extra check on the status of the Grid Infrastructure post upgrade by executing the following command from one of the compute nodes:

(root)# /u01/app/12.1.0.2/grid/bin/crsctl check cluster -all

Change Custom Scripts and environment variables to Reference the 12.1.0.2 Grid Home
Start all primary instances in restricted mode
DBUA requires all RAC instances to be running from the current database home before starting the upgrade. To prevent an application from accidentally connecting to the primary database and performing work (which would cause a standby to fall behind), start the primary database in restricted mode, as follows:
(oracle)$ srvctl start database -d PRIM -o restrict

Upgrade the Database with Database Upgrade Assistant (DBUA)

NOTE: Before starting the Database Upgrade Assistant, you must change the preference for 'concurrent statistics gathering' on the current release if it is not already set to 'FALSE'.

First, while still on the 11.2 release, obtain the current setting:
SQL> SELECT dbms_stats.get_prefs('CONCURRENT') from dual;
If 'concurrent statistics gathering' is not set to 'FALSE', change the value to 'FALSE' before the upgrade:
BEGIN
DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','FALSE');
END;
/


NOTE: Before starting the Database Upgrade Assistant, any database being upgraded that has services configured with PRECONNECT as the 'TAF policy specification' must have those services stopped and disabled; not disabling them will cause the upgrade to fail. Once the upgrade completes, the services can be enabled and brought back online.

For each database being upgraded use the srvctl command to determine if a 'TAF policy specification' with 'PRECONNECT' is defined. Example as follows:

(oracle)$ srvctl config service -d <db_unique_name> | grep -i preconnect | wc -l

For each database being upgraded, the output of the above command should be 0. If it is not, identify the specific service(s) for which PRECONNECT is defined. Example as follows:
(oracle)$ srvctl config service -d <db_unique_name> -s <service_name>

Any such services must be stopped and disabled before proceeding with the upgrade. Example as follows:
(oracle)$ srvctl stop service -d <db_unique_name> -s "<service_name_list>"
(oracle)$ srvctl disable service -d <db_unique_name> -s "<service_name_list>"
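When several databases share the cluster, the PRECONNECT check can be scripted against captured `srvctl config service` output. The helper below is a hedged sketch: the exact output format (a `Service name:` line followed by a `TAF policy specification:` line) is an assumption and may differ between releases, so verify it against your own srvctl output first:

```shell
# Hypothetical helper: given a file containing the text of
# `srvctl config service -d <db_unique_name>`, print the names of services
# whose TAF policy specification is PRECONNECT. Output format is assumed.
preconnect_services() {
  awk '
    /^Service name:/                            { svc = $3 }   # remember current service
    /TAF policy specification:/ && /PRECONNECT/ { print svc }  # flag it if PRECONNECT
  ' "$1"
}

# Example against illustrative captured output (format assumed):
cat > /tmp/svc.txt <<'EOF'
Service name: oltp_svc
TAF policy specification: PRECONNECT
Service name: batch_svc
TAF policy specification: NONE
EOF
preconnect_services /tmp/svc.txt   # prints oltp_svc
```

Each printed service would then be stopped and disabled with the srvctl commands shown above.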


NOTE: Prior to running DBUA and upgrading the primary database, remove DIAG_ADR_ENABLED=off from sqlnet.ora, or set the sqlnet.ora parameter "DIAG_ADR_ENABLED" to ON (the default value). This is further described in Document 1969684.1 "DBCA & DBUA Does not Identify Disk Groups When DIAG_ADR_ENABLED=OFF In sqlnet.ora".


Oracle recommends removing the value of the init.ora parameter 'listener_networks' before starting DBUA and restoring it after DBUA completes. Be sure to record the original value before removing it, as follows:

SYS@PRIM1> set lines 200
SYS@PRIM1> select name, value from v$parameter where name='listener_networks';

If listener_networks was set, remove the value as follows:

SYS@PRIM1> alter system set listener_networks='' sid='*' scope=both;
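To make restoring the original setting after DBUA foolproof, the recorded value can be written into a small restore script up front. A minimal sketch, assuming the value returned by the v$parameter query was captured into LN_VALUE (the value shown here is purely illustrative, not from the original system):

```shell
# LN_VALUE stands in for the value returned by the v$parameter query above.
# The value below is illustrative only.
LN_VALUE='((NAME=network2)(LOCAL_LISTENER=LISTENER_IB))'

# Generate a restore script to be run as SYSDBA once DBUA has finished.
cat > /tmp/restore_listener_networks.sql <<EOF
alter system set listener_networks='${LN_VALUE}' sid='*' scope=both;
EOF

cat /tmp/restore_listener_networks.sql
```

After the upgrade, running the generated script in SQL*Plus as SYSDBA reinstates the original listener_networks value.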



Run DBUA from the new 12.1.0.2 ORACLE_HOME as follows:
[oracle@ol6-112-rac1 ~]$ which dbua
/u01/app/oracle/product/12.1.0.2/db_1/bin/dbua
[oracle@ol6-112-rac1 ~]$ dbua

Part 2: Grid Infrastructure and Database Upgrade from 11gR2 (11.2.0.3) to 12c (12.1.0.2)

Install Database 12.1.0.2 Software
Prepare Installation Software
Unzip the 12.1.0.2 database software. Run the following commands on the database server where the software is staged:
(oracle)$ unzip -q /u01/app/oracle/patchdepot/linuxamd64_12c_database_1of2.zip -d /u01/app/oracle/patchdepot
(oracle)$ unzip -q /u01/app/oracle/patchdepot/linuxamd64_12c_database_2of2.zip -d /u01/app/oracle/patchdepot

Create the new Oracle DB Home directory on all database server nodes
Node 1 and Node 2:
mkdir -p /u01/app/oracle/product/12.1.0.2/db_1

Perform 12.1.0.2 Database Software Installation with the Oracle Universal Installer (OUI)
Note: For OUI installations or execution of important scripts, it is recommended to use VNC to avoid problems in case the connection to the server is lost.

[oracle@ol6-112-rac1 database]$ ./runInstaller


[root@ol6-112-rac1 ~]# /u01/app/oracle/product/12.1.0.2/db_1/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.1.0.2/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@ol6-112-rac1 ~]#

[root@ol6-112-rac2 ~]# /u01/app/oracle/product/12.1.0.2/db_1/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.1.0.2/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@ol6-112-rac2 ~]#



Installation of the 12.1.0.2 database software is now complete.
The next step, upgrading the database from 11gR2 to 12c, is covered in Part 3.




Thursday, November 17, 2016

Part 1: Grid Infrastructure and Database Upgrade from 11gR2 11.2.0.3 to 12C 12.1.0.2



Oracle Clusterware upgrades can be rolling upgrades, in which a subset of nodes is brought down and upgraded while other nodes remain active. Oracle ASM 12c Release 1 (12.1) upgrades can also be rolling upgrades. If you upgrade a subset of nodes, then a software-only installation is performed on the existing cluster nodes that you do not select for upgrade.

Three things need to be done:
1. Upgrade Grid Infrastructure from 11gR2 (11.2.0.3) to 12c (12.1.0.2)
2. Install the 12c (12.1.0.2) database software
3. Upgrade the database from 11gR2 to 12c

In this article I will show you how to upgrade Grid Infrastructure from 11gR2 (11.2.0.3.15) to 12c (12.1.0.2). The rest will be covered in separate articles.


Upgrade Grid Infrastructure


On both nodes, as the root user:

mkdir -p /u01/app/12.1.0.2/grid
chown -R root:oinstall /u01/app/12.1.0.2/

Check Grid Software Version:
Grid Home:
$ crsctl query crs softwareversion ol6-112-rac1
Oracle Clusterware version on node [ol6-112-rac1] is [11.2.0.3.0]

$ crsctl query crs softwareversion ol6-112-rac2
Oracle Clusterware version on node [ol6-112-rac2] is [11.2.0.3.0]


Run the pre-upgrade cluster verification using runcluvfy.sh from the new 12.1.0.2 installation media:

$ /media/sf_Linux_x86-64/12.1.0.2.0/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0.3/grid -dest_crshome /u01/app/12.1.0.2/grid -dest_version 12.1.0.2.0


Backup Software from Node 1:
# cd /u01/app
# tar -cv oracle | gzip > /media/sf_Linux_x86-64/bkvm/oracle.tar.gz
# tar -cv 11.2.0.3 | gzip > /media/sf_Linux_x86-64/bkvm/11.2.0.3.tar.gz
# tar -cv oraInventory | gzip > /media/sf_Linux_x86-64/bkvm/oraInventory.tar.gz

Backup Software from Node 2:
# cd /u01/app
# tar -cv oracle | gzip > /media/sf_Linux_x86-64/bkvm/oracle_node2.tar.gz
# tar -cv 11.2.0.3 | gzip > /media/sf_Linux_x86-64/bkvm/11.2.0.3_node2.tar.gz
# tar -cv oraInventory | gzip > /media/sf_Linux_x86-64/bkvm/oraInventory_node2.tar.gz
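Before relying on these backups, it is worth confirming each archive is intact. Below is a small self-contained demonstration of the same tar-through-gzip pattern plus two integrity checks; the paths are illustrative, and the explicit `-f -` is used so the archive reliably goes to stdout regardless of tar's compiled-in default device:

```shell
# Build a tiny demo tree and archive it the same way as the backups above.
workdir=$(mktemp -d)
mkdir -p "$workdir/app/demo"
echo "data" > "$workdir/app/demo/file.txt"

cd "$workdir/app"
tar -cvf - demo | gzip > "$workdir/demo.tar.gz"

# Integrity checks: first the gzip stream, then the archive listing.
gzip -t "$workdir/demo.tar.gz" && echo "gzip OK"
tar -tzf "$workdir/demo.tar.gz"   # lists demo/ and demo/file.txt
```

Running `gzip -t` and `tar -tzf` against each of the real oracle, 11.2.0.3, and oraInventory archives gives reasonable confidence they can actually be restored.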

To restore a backup later if needed (example):
# cd /media/sf_Linux_x86-64/resvm
# gunzip < /media/sf_Linux_x86-64/bkvm/oracle.tar.gz | tar -xvf -

Run ORAchk in pre-upgrade mode:
$ chmod 755 orachk
$ ./orachk -u -o pre

[oracle@ol6-112-rac1 grid]$ pwd
/media/sf_Linux_x86-64/12.1.0.2.0/grid

./runInstaller

Choose "Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management", then click "Next".

Click "SSH Connectivity"

Click "Test"

Click "Ok" then click "Next"

Click "Next"

Choose group "oinstall" and click "Next"

Click "Yes"

Specify Oracle base: /u01/app/oracle and Software location: /u01/app/12.1.0.2/grid, then click "Next".

Click "Next"

The following are only warnings; after reviewing the details they can be ignored. Click "Ignore All", then click "Next".

Review the summary (Grid Home, selected nodes to be upgraded), then click "Install".

Wait until the installation completes.

Run the following script (rootupgrade.sh) on node 1 first, then on node 2, as the root user.

Node 1:

[root@ol6-112-rac1 oracle]# /u01/app/12.1.0.2/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2016/11/07 10:45:08 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2016/11/07 10:45:40 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2016/11/07 10:45:45 CLSRSC-464: Starting retrieval of the cluster configuration data

2016/11/07 10:45:55 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2016/11/07 10:45:55 CLSRSC-363: User ignored prerequisites during installation

2016/11/07 10:46:04 CLSRSC-515: Starting OCR manual backup.

2016/11/07 10:46:07 CLSRSC-516: OCR manual backup successful.

2016/11/07 10:46:11 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode

2016/11/07 10:46:11 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/11.2.0.3/grid -oldCRSVersion 11.2.0.3.0 -nodeNumber 1 -firstNode true -startRolling true'


ASM configuration upgraded in local node successfully.

2016/11/07 10:46:21 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode

2016/11/07 10:46:21 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2016/11/07 10:48:20 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2016/11/07 10:51:32 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2016/11/07 10:55:25 CLSRSC-472: Attempting to export the OCR

2016/11/07 10:55:25 CLSRSC-482: Running command: 'ocrconfig -upgrade oracle oinstall'

2016/11/07 10:55:42 CLSRSC-473: Successfully exported the OCR

2016/11/07 10:55:48 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.

2016/11/07 10:55:48 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.

2016/11/07 10:55:48 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.

2016/11/07 10:55:48 CLSRSC-543:
 3. The downgrade command must be run on the node ol6-112-rac1 with the '-lastnode' option to restore global configuration data.
2016/11/07 10:56:18 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2016/11/07 10:56:44 CLSRSC-474: Initiating upgrade of resource types

2016/11/07 10:58:38 CLSRSC-482: Running command: 'upgrade model  -s 11.2.0.3.0 -d 12.1.0.2.0 -p first'

2016/11/07 10:58:38 CLSRSC-475: Upgrade of resource types successfully initiated.

2016/11/07 10:58:43 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@ol6-112-rac1 oracle]#



Node 2:



[root@ol6-112-rac2 ~]# /u01/app/12.1.0.2/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2016/11/07 11:02:09 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2016/11/07 11:02:41 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2016/11/07 11:02:42 CLSRSC-464: Starting retrieval of the cluster configuration data

2016/11/07 11:02:47 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2016/11/07 11:02:47 CLSRSC-363: User ignored prerequisites during installation


ASM configuration upgraded in local node successfully.

2016/11/07 11:02:58 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2016/11/07 11:04:59 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2016/11/07 11:05:30 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2016/11/07 11:08:21 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
2016/11/07 11:08:43 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded

2016/11/07 11:08:43 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/crsctl set crs activeversion'

Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2016/11/07 11:10:25 CLSRSC-479: Successfully set Oracle Clusterware active version

2016/11/07 11:10:29 CLSRSC-476: Finishing upgrade of resource types

2016/11/07 11:10:41 CLSRSC-482: Running command: 'upgrade model  -s 11.2.0.3.0 -d 12.1.0.2.0 -p last'

2016/11/07 11:10:41 CLSRSC-477: Successfully completed upgrade of resource types

2016/11/07 11:11:09 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@ol6-112-rac2 ~]#



After running the scripts on both nodes, click "Ok".

After clicking "Ok", a message pops up as below; it can be ignored. Click "Ok".

Then click "Skip".

Click “Next”

Click “Yes”

Click “Close”

[oracle@ol6-112-rac2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[oracle@ol6-112-rac2 ~]$
[oracle@ol6-112-rac2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [ol6-112-rac2] is [12.1.0.2.0]
[oracle@ol6-112-rac2 ~]$
[oracle@ol6-112-rac2 ~]$ crsctl check cluster -all
**************************************************************
ol6-112-rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
ol6-112-rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@ol6-112-rac2 ~]$
[oracle@ol6-112-rac2 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       8956
         Available space (kbytes) :     400612
         ID                       :  953791745
         Device/File Name         :       +OCR
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

[oracle@ol6-112-rac2 ~]$
[oracle@ol6-112-rac2 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   831ee4a036364f53bff1bb16bf40ac79 (ORCL:ASM1) [OCR]
Located 1 voting disk(s).
[oracle@ol6-112-rac2 ~]$




Verify cluster status
Perform an extra check on the status of the Grid Infrastructure post upgrade by executing the following command from one of the compute nodes:

(root)# /u01/app/12.1.0.2/grid/bin/crsctl check cluster -all

Change Custom Scripts and environment variables to Reference the 12.1.0.2 Grid Home
The Grid Infrastructure upgrade is now complete.
The next step, installing the 12.1.0.2 database software, is covered in Part 2.
