Installing the Oracle Certification Environment Software for Oracle RAC
The OCE Certification Kit required to certify the system for Oracle RAC 11g Release 1 (11.1) is available for download only. The Single Instance certification tests should be completed prior to installing the OCE kit for Oracle RAC. Refer to the previous section if necessary. Once Single Instance testing has successfully completed, the single instance OCE installations must be archived to allow OCE installations for Oracle RAC to succeed. The OCE kit for Oracle RAC should be installed separately on each node of the cluster. If the ORACLE_HOME
is located on a shared disk, multiple installations of OCE will not be possible. In that case, it will not be possible to run OCE tests simultaneously, and the time required to complete certification will be greatly increased. To install the OCE Kit:
1. Create a directory to hold the OCE archives, for example /tmp/oce, and copy the archives into it.
2. Extract the archives. The OCE archives are either CPIO archives or compressed CPIO archives (a scripted version of this step appears after this list):
   gunzip -c OCE_ARCHIVE | cpio -idmv
   or, for an uncompressed archive:
   cpio -idmv < OCE_ARCHIVE
   where OCE_ARCHIVE is the name of the archive.
3. Run the OCE installation script:
   $ archive_location/oce_install.sh
4. When the script completes, check the $ORACLE_HOME/OCEinstallRAC.log file and verify that there are no errors.
5. Run the OCE executables installation script:
   $ /tmp/oce/oce_exes_install.sh
6. When the script completes, check $ORACLE_HOME/OCE/install_log.txt and verify that the installation was successful. The tests are installed in the $ORACLE_HOME/oce directory.
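As an illustration of step 2, the archive extraction can be scripted. This is a minimal sketch, assuming the archives were copied into /tmp/oce and use .cpio or .cpio.gz extensions (the exact archive names depend on the kit you downloaded):

    #!/bin/sh
    # Minimal sketch: extract every OCE archive found in /tmp/oce.
    # Assumes archives end in .cpio (plain) or .cpio.gz (compressed).
    cd /tmp/oce || exit 1

    for f in *.cpio.gz; do
        [ -e "$f" ] || continue         # skip if no compressed archives
        gunzip -c "$f" | cpio -idmv     # decompress and extract in one pass
    done

    for f in *.cpio; do
        [ -e "$f" ] || continue         # skip if no plain archives
        cpio -idmv < "$f"               # extract a plain CPIO archive
    done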
- If you are using raw devices or logical volumes for shared storage, perform the following steps:
Note: In this example, the OCE user is oracle, which is a member of the dba group; there are 4 nodes in the cluster; and the OCE logical volumes are located in the /dev/ocevg/ directory.
- Set up the devices or logical volumes required by the tests.
- Ensure the raw devices are accessible and writable by the OCE user on all nodes:
- # chown -R oracle:dba /dev/ocevg
- # chmod -R og+w /dev/ocevg
- Export $ORACLE_HOME/oce/work, $ORACLE_HOME/dbs, and $ORACLE_HOME/network/admin from node 1 to all other nodes in the cluster.
- # exportfs -i -o rw <node2>:$ORACLE_HOME/oce/work
- # exportfs -i -o rw <node2>:$ORACLE_HOME/dbs
- # exportfs -i -o rw <node2>:$ORACLE_HOME/network/admin
- $ORACLE_HOME/oce/work, $ORACLE_HOME/network/admin, and $ORACLE_HOME/dbs must be mounted on all secondary nodes from the primary (exporting) node; a scripted version of these export and mount steps appears after this list.
- # mkdir -p $ORACLE_HOME/oce/work
- # chown oracle:dba $ORACLE_HOME/oce/work
- # mount <node1>:$ORACLE_HOME/oce/work $ORACLE_HOME/oce/work
- # mount <node1>:$ORACLE_HOME/dbs $ORACLE_HOME/dbs
- # mount <node1>:$ORACLE_HOME/network/admin $ORACLE_HOME/network/admin
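The export and mount commands above must be repeated for each directory and each secondary node. A minimal sketch of scripting this, assuming ORACLE_HOME is set, node1 is the primary node, and node2 through node4 are placeholder secondary host names; run the export loop as root on the primary and the mount section as root on each secondary:

    #!/bin/sh
    # Minimal sketch: export the shared OCE directories from the primary
    # node, then mount them on a secondary. node1..node4 are placeholders.
    DIRS="$ORACLE_HOME/oce/work $ORACLE_HOME/dbs $ORACLE_HOME/network/admin"

    # On the primary node (node1): export each directory to each secondary.
    for node in node2 node3 node4; do
        for dir in $DIRS; do
            exportfs -i -o rw "${node}:${dir}"
        done
    done

    # On each secondary node: create the work directory, then mount each
    # directory from node1 ($ORACLE_HOME/dbs and network/admin already
    # exist in a standard Oracle home).
    mkdir -p "$ORACLE_HOME/oce/work"
    chown oracle:dba "$ORACLE_HOME/oce/work"
    for dir in $DIRS; do
        mount "node1:${dir}" "$dir"
    done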
- If you are using OCFS, NAS, or a vendor clustered file system (CFS), and the ORACLE_HOME directory is not located on a shared partition, then perform the following steps:
Note: If you are using NAS, ensure that the appropriate mount options are employed when mounting the NAS partition. Oracle requires specific mount options; consult your NAS filer documentation for further details.
- Symbolically link $ORACLE_HOME/dbs to the OCFS/CFS/NAS partition on all nodes (in this example, the OCFS/CFS/NAS partition is at /sharedfs).
- mkdir /sharedfs/dbs
- chown oracle:dba /sharedfs
- mv $ORACLE_HOME/dbs $ORACLE_HOME/dbs.BAK
- ln -s /sharedfs/dbs $ORACLE_HOME/dbs
- Export $ORACLE_HOME/oce/work and $ORACLE_HOME/network/admin from the primary node.
- # exportfs -i -o rw <node2>:$ORACLE_HOME/oce/work
- # exportfs -i -o rw <node2>:$ORACLE_HOME/network/admin
- $ORACLE_HOME/oce/work and $ORACLE_HOME/network/admin must be mounted on all secondary nodes from the primary (exporting) node. Default mount options will suffice.
- # mkdir -p $ORACLE_HOME/oce/work
- # chown oracle:dba $ORACLE_HOME/oce/work
- # mount <node1>:$ORACLE_HOME/oce/work $ORACLE_HOME/oce/work
- # mount <node1>:$ORACLE_HOME/network/admin $ORACLE_HOME/network/admin
- If you are using OCFS or CFS to access a shared Oracle home directory, no setup is required.
- Ensure that no databases are running.
- Ensure that the DISPLAY environment variable is set appropriately for your system. To verify that it is, try starting xclock. If you do not see the clock, or you receive errors, DISPLAY is not set appropriately; you must correct any errors before proceeding. (A scripted version of this check appears after this list.)
- Launch OCE Test Manager by starting Test Manager as described in Starting Test Manager.
- From the OCE – Main Menu window, double click Utilities.
- Run the bmchk test by selecting it and clicking Execute.
- When the test completes, click Results in the Test Manager window to check the outcome. If the test fails, you must analyze the output ($INST_HOME/work/bmchk) and resolve any issues. Do not proceed with testing until bmchk executes successfully.
- Run sdbck (the Seed Database Verification utility) test by selecting it and clicking Execute.
- When the test completes, click Results in the Test Manager window to check the outcome. If the test fails, you must analyze the output ($INST_HOME/work/sdbck) and resolve any issues. Do not proceed with testing until sdbck executes successfully.
- Run cssck (the CSS Daemon Verification utility) test by selecting it and clicking Execute.
- When the test completes, click Results in the Test Manager window to check the outcome. If the test fails, you must analyze the output ($INST_HOME/work/cssck) and resolve any issues. Do not proceed with testing until cssck executes successfully.
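The DISPLAY check described above can be scripted. This is a minimal sketch; it uses `xset q` as a non-interactive stand-in for the xclock test, which assumes the xset utility is available on the system:

    #!/bin/sh
    # Minimal sketch: verify that DISPLAY is set and that an X client
    # can actually connect before launching Test Manager.
    if [ -z "$DISPLAY" ]; then
        echo "DISPLAY is not set; set it before proceeding." >&2
        exit 1
    fi
    # xset q exits nonzero if it cannot reach the X server in DISPLAY.
    if ! xset q >/dev/null 2>&1; then
        echo "Cannot connect to X display $DISPLAY; fix before proceeding." >&2
        exit 1
    fi
    echo "DISPLAY=$DISPLAY looks usable."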
Running the OCE Test Suites
The OCE release consists of a set of test suites that you run from Test Manager. Each test suite consists of one or more individual tests. To complete the certification, run each of the Test Suites in the kit for the product for which you are certifying your system.
The following are the test plans for Oracle Clusterware Compatibility (Destructive) Testing (Category: Oracle High Availability Features).
Each test plan entry below gives the test code and clusterware test category, the action and its target, the configuration, the detailed test execution (preconditions, steps, and variants), and the expected test outcome. The Actual Test Outcome column of the original plan is intentionally blank; it is filled in when the tests are run.
[D] Oracle HA Features
Action: Run multiple cluvfy operations during the Oracle Clusterware and RAC install.
Target: All RAC hosts.
Configuration:
GNS: GNS with DHCP (1), GNS without DHCP (2), or without GNS (3). Prefer option 1; if not applicable, option 2; if still not applicable, option 3.
ASM: Flex ASM (1) or standard ASM (2). Prefer option 1; if not applicable, option 2.
DB: CDB.
Preconditions:
· Type `cluvfy` to see all available command syntax and options.
Steps:
1. Run the cluvfy precondition check.
2. Perform the next install step.
3. Run the cluvfy post-condition check (`cluvfy comp software -n node_list`) to check the file permissions.
There is no need to collect CRS/RDBMS logs for this test; submit the cluvfy output (a scripted version of this pattern appears after this entry).
Expected outcome:
Vendor Clusterware: same as RAC.
RAC: correct cluster verification checks given the state of the cluster hardware and software. Please provide the CVU-related logs under $CRS_HOME/cv/log.
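A minimal sketch of the pre-check / install-step / post-check pattern above. The node list is a placeholder, install_step.sh is a hypothetical stand-in for the actual install action, and the `-pre crsinst` stage is illustrative (run `cluvfy` to see which stages apply to your step):

    #!/bin/sh
    # Minimal sketch: wrap an install step with cluvfy pre/post checks
    # and keep the output for submission. NODES and install_step.sh are
    # placeholders; substitute your node list and install action.
    NODES="node1,node2,node3,node4"
    LOG=/tmp/cluvfy_run_$$.log

    # 1. Precondition check (illustrative stage).
    cluvfy stage -pre crsinst -n "$NODES" | tee -a "$LOG"

    # 2. Perform the next install step (placeholder).
    ./install_step.sh

    # 3. Post-condition check of installed file permissions (from the plan).
    cluvfy comp software -n "$NODES" | tee -a "$LOG"

    echo "Submit $LOG as the cluvfy output for this test."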
[HW-CW-10]
Action: Run concurrent `crsctl start/stop crs` commands to stop or start Oracle Clusterware in planned mode.
Target: All RAC hosts.
Configuration:
GNS: GNS with DHCP (1), GNS without DHCP (2), or without GNS (3). Prefer option 1; if not applicable, option 2; if still not applicable, option 3.
ASM: Flex ASM (1) or standard ASM (2). Prefer option 1; if not applicable, option 2.
DB: CDB.
Preconditions:
· Initiate all workloads.
· Identify both the CSS and CRS master nodes.
· Type `crsctl` as root to see all available command syntax and options.
Steps:
1. As the root user, run `crsctl stop crs` concurrently on more than one RAC host to stop the resident Oracle Clusterware stack.
2. Wait until the target Oracle Clusterware stack is fully stopped (verify via the `ps` command).
3. As the root user, run `crsctl start crs -wait` concurrently on more than one RAC host to start the resident Oracle Clusterware stack.
Expected outcome:
Vendor Clusterware: N/A.
RAC:
- Stop: all Oracle Clusterware daemons stop without leaving open ports or zombie processes.
- Start: all Oracle Clusterware daemons start without error messages in stdout or in any of the CRS, CSS, or EVM traces.
- Start: all registered HA resource states match the "target" states, as per `crsctl stat res -t`.
- For 12cR1, collect `crsctl stat res -t` output in a 60-second loop from the beginning to the end of the run, and attach the output for auditing (see the sketch after this entry).
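A minimal sketch of the 60-second collection loop called for above; the output path is a placeholder. Start it in the background on each node (for example `./collect_stat.sh &`) before step 1 and kill it after step 3:

    #!/bin/sh
    # Minimal sketch: record `crsctl stat res -t` every 60 seconds for
    # the duration of the run, for auditing.
    OUT=/tmp/crsctl_stat_res_$(hostname).log
    while true; do
        date >> "$OUT"                    # timestamp each sample
        crsctl stat res -t >> "$OUT" 2>&1
        sleep 60
    done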
[HW-CW-11]
Action: Run other concurrent crsctl commands, such as `crsctl check crs`.
Target: All RAC hosts.
Configuration:
GNS: GNS with DHCP (1), GNS without DHCP (2), or without GNS (3). Prefer option 1; if not applicable, option 2; if still not applicable, option 3.
ASM: Flex ASM (1) or standard ASM (2). Prefer option 1; if not applicable, option 2.
DB: CDB.
Preconditions:
· Initiate all workloads.
· Identify both the CSS and CRS master nodes.
· Type `crsctl` as root to see all available command syntax and options.
Steps:
1. As the root user, run `crsctl check crs` concurrently on all nodes.
2. As the root user, run `crsctl check cluster -all` concurrently on all nodes.
Expected outcome:
Vendor Clusterware: same as RAC.
RAC:
- Both `crsctl check crs` and `crsctl check cluster -all` produce the appropriate, useful output, without any error messages.
- Collect the output for step 1 and step 2 (see the sketch after this entry).
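A minimal sketch of running both checks concurrently on all nodes and capturing per-node output. It assumes passwordless root ssh between nodes and placeholder host names node1 through node4; substitute your own remote-execution mechanism and node names:

    #!/bin/sh
    # Minimal sketch: run the crsctl health checks concurrently on every
    # node and capture the output per node for submission.
    NODES="node1 node2 node3 node4"

    for node in $NODES; do
        ssh "root@$node" "crsctl check crs" > "/tmp/check_crs_$node.log" 2>&1 &
    done
    wait    # let all concurrent step-1 checks finish

    for node in $NODES; do
        ssh "root@$node" "crsctl check cluster -all" > "/tmp/check_cluster_$node.log" 2>&1 &
    done
    wait    # let all concurrent step-2 checks finish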
[HW-CW-12]
Action: Voting disk and OCR operations.
Configuration:
GNS: GNS with DHCP (1), GNS without DHCP (2), or without GNS (3). Prefer option 1; if not applicable, option 2; if still not applicable, option 3.
ASM: Flex ASM (1) or standard ASM (2). Prefer option 1; if not applicable, option 2.
DB: CDB.
Preconditions:
· Make sure the voting disk is on an ASM disk group.
· Make sure ASM OCR files are used.
· Make sure at least one normal-redundancy ASM disk group with three failure groups is created and its "compatible.asm" attribute is set to "11.2".
Steps:
1. Make sure the CRS stack is running on all nodes.
2. Run `crsctl query css votedisk` to check the configured voting files.
3. Run `crsctl replace votedisk +{ASM_DG_NAME}` (as the CRS user or the root user).
4. Run `crsctl query css votedisk` to get the new voting file list.
5. Run `ocrconfig -add +{ASM_DG_NAME}` as the root user.
6. Run `ocrcheck` to verify the OCR files.
7. Restart the CRS stack and verify the voting files and OCR after it comes back.
(Steps 2 through 6 are collected into a sketch after this entry.)
Variants:
1. Add up to 5 OCR files and restart the CRS stack.
Expected outcome:
RAC:
- In 12cR1, up to 5 OCRs are supported.
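For convenience, here are steps 2 through 6 above collected into one script. A minimal sketch, run as root, with +OCRVD as a placeholder for your normal-redundancy ASM disk group name:

    #!/bin/sh
    # Minimal sketch: relocate the voting files to an ASM disk group and
    # add an OCR copy there. +OCRVD is a placeholder disk group name.
    DG=+OCRVD

    crsctl query css votedisk          # step 2: list current voting files
    crsctl replace votedisk "$DG"      # step 3: move voting files to ASM
    crsctl query css votedisk          # step 4: confirm the new list
    ocrconfig -add "$DG"               # step 5: add an OCR copy (as root)
    ocrcheck                           # step 6: verify the OCR files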
[HW-CW-13]
Action: Use crsctl commands to manage the Oracle Clusterware stack.
Configuration:
GNS: GNS with DHCP (1), GNS without DHCP (2), or without GNS (3). Prefer option 1; if not applicable, option 2; if still not applicable, option 3.
ASM: Flex ASM (1) or standard ASM (2). Prefer option 1; if not applicable, option 2.
DB: CDB.
Preconditions:
· The CRS stack is up and running on all nodes.
Steps:
1. Run `crsctl check cluster -all` to get the stack status on all cluster nodes. Make sure the stack status of every cluster node is correct.
2. Run `crsctl stop cluster -all` to stop the CRS resources (CSSD/CRSD/EVMD) along with the application resources.
3. Run `crsctl status cluster -all` to make sure the CRS resources are OFFLINE.
4. Run `crsctl start cluster -all` to bring the whole cluster stack back.
Expected outcome:
RAC:
- After running `crsctl stop cluster -all`, make sure all ocssd/evmd/crsd processes are stopped on all cluster nodes via `ps -ef` (see the sketch after this entry).
- For 12cR1, collect `crsctl stat res -t` output in a 60-second loop from the beginning to the end of the run (as in the sketch for [HW-CW-10]), and attach the output for auditing.
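A minimal sketch of the `ps` verification above; run it on every cluster node after the stop command:

    #!/bin/sh
    # Minimal sketch: after `crsctl stop cluster -all`, confirm that no
    # clusterware daemons remain on this node.
    LEFT=$(ps -ef | grep -E 'ocssd|evmd|crsd' | grep -v grep)
    if [ -n "$LEFT" ]; then
        echo "Clusterware processes still running:" >&2
        echo "$LEFT" >&2
        exit 1
    fi
    echo "No ocssd/evmd/crsd processes found; the stack is down."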
[HW-CW-14]
Action: With the OCR stored in an ASM disk group, kill an ASM fatal process.
Configuration:
GNS: GNS with DHCP (1), GNS without DHCP (2), or without GNS (3). Prefer option 1; if not applicable, option 2; if still not applicable, option 3.
ASM: Flex ASM (1) or standard ASM (2). Prefer option 1; if not applicable, option 2.
DB: CDB.
Preconditions:
· Initiate workloads.
Steps:
1. Make sure only ASM OCR files are used, via `ocrcheck -config`.
2. Kill the ASM pmon process on the CRSD PE master node (see the sketch after this entry).
Variants:
Repeat the same test on a non-OCR-master node.
Expected outcome:
Clusterware:
- Because the OCR is stored in ASM, if ASM fails or is brought down on the CRSD PE master, the CRSD PE master will exit and a new CRSD PE master will be selected.
- ASM and CRSD will be restarted automatically.
- In a Flex ASM environment, RDBMS instances should connect to another available ASM instance.
- After the CRSD restart, resource states should not change (CRSD should recover the resources' previous states).
- The new CRSD PE master node should be the old CRSD PE standby master, and a new CRSD PE standby master should be elected on another node.
- For 12cR1, collect `crsctl stat res -t` output in a 60-second loop from the beginning to the end of the run (as in the sketch for [HW-CW-10]), and attach the output for auditing.
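A minimal sketch of the destructive action in step 2, run as root on the CRSD PE master node. It assumes the standard ASM instance naming (+ASMn), so the target process is named asm_pmon_+ASMn; run this only on a certification test cluster:

    #!/bin/sh
    # Minimal sketch: locate and kill the ASM pmon process on this node.
    # DESTRUCTIVE: intended only for this certification test.
    PMON_PID=$(ps -ef | grep '[a]sm_pmon' | awk '{print $2}')
    if [ -z "$PMON_PID" ]; then
        echo "No ASM pmon process found on this node." >&2
        exit 1
    fi
    echo "Killing ASM pmon (pid $PMON_PID)"
    kill -9 "$PMON_PID"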