Jagjeet's Oracle Blog!

December 20, 2013

SQL Dashboard – V2

Filed under: Exadata, RAC — Jagjeet Singh @ 12:58 am

Quick post – I had some fun last weekend with SQL Dashboard and thought it deserved a post. As you can see in the attached video, the dashboard now supports a wait graph in colors, directly from SQL*Plus  🙂  wait a minute…

Well, we all know SQL*Plus does not support colors. However, I could simulate them using iTerm2 (a Mac terminal client) by assigning colors to specific legends.

I have received several emails asking me to make the code compatible with 10g; unfortunately, that is not possible due to its dependency on 11g dictionary views. I have fixed several bugs and am publishing 11g R2 and 11g R1 versions.

The only noticeable enhancement is the graph of wait events.

May 13, 2012

Sqlplus Dashboard For RAC

I have been passionate about working with the CLI because of its flexibility, light weight, and easy access. Long ago I had the idea of writing a SQL*Plus script to provide a dashboard-style utility, but I could never figure out the auto-refresh functionality until Tanel & Adrian provided MOATS.

That gave me the idea and the solution for auto-refresh in SQL*Plus – big thanks to the great Tanel & Adrian. I spend the majority of my time benchmarking on Exadata, so I decided to write a SQL*Plus dashboard-style utility for RAC. The idea was to get some critical metrics for performance benchmarks on one screen, with an automatic refresh capability (similar to the EM “Top Activity” page).

It took me almost 3 weeks to write this utility from scratch. It reports on Exadata smart scans, storage index savings, flash cache, CPU usage, top SQLs, waits, cluster interconnect traffic, top segments by GC* waits, and a lot more information that can be useful for monitoring system performance. With the current formatting, it can easily display information for up to an 8-node cluster.
I tried my best to keep this utility lightweight by reading data from the gv$ views once and storing it in PL/SQL collections for subsequent requests; I could not even find the session running this utility in the EM Top Activity page. As of today, it will only work with 11g RAC instances, or a single instance with the gv$* views created [catclust.sql].
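
If you are curious how the "read the gv$ views once, reuse everywhere" part can work from plain SQL*Plus, here is a minimal sketch of the idea. This is only an illustration of the technique, not the dashboard's actual code, and it assumes nothing beyond the standard gv$sysstat view:

-- Minimal sketch only: snapshot gv$sysstat once into a PL/SQL collection,
-- then derive whatever output sections are needed from the in-memory copy.
SET SERVEROUTPUT ON
DECLARE
  TYPE stat_tab IS TABLE OF gv$sysstat%ROWTYPE;
  l_stats stat_tab;
BEGIN
  SELECT * BULK COLLECT INTO l_stats FROM gv$sysstat;   -- single round trip

  FOR i IN 1 .. l_stats.COUNT LOOP
    IF l_stats(i).name = 'user commits' THEN
      DBMS_OUTPUT.PUT_LINE('inst ' || l_stats(i).inst_id ||
                           ' user commits = ' || l_stats(i).value);
    END IF;
  END LOOP;
END;
/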

Here is the video link… the video quality isn’t that great, so it is best viewed in HD mode.

I have tried to explain the metrics below.


############################################
-- The section below shows instance-wise metrics
############################################


+Inst-------+CPUIDL%--IO%-USR%--SYS%+--Tprse/s--+Hprse/s+--PhyWIO/s-+-PhyWMB/s-+--PhyRIO/s--+-PhyRMB/s-+-SessLIO/s--+---Exec/s-+RedoMB/s+Commit/s+-ExSSMB/s-+-ExSIMB/s+-ExFCRh/s+
|AIA1       | 67.8     0  31.2    .7|       2714|      0|       2623|        21|        1183|         9|      370001|      8471|      11|    2001|         0|        0|      980|
|AIA2       | 68.2     0  31.1    .6|       2709|      0|       2491|        22|         928|         7|      340040|      8609|      11|    2004|         0|        0|      703|
+-----------+-----------------------+-----------+-------+-----------+----------+------------+----------+------------+----------+--------+--------+----------+---------+---------+


+Inst-------+ <-- Instance name     +CPUIDL%--IO%-USR%--SYS%+ 
|AIA1       |                       | 67.8     0  31.2    .7| -- This section shows CPU usage: IDLE%, IO%, USER%, SYS%
|AIA2       |                       | 68.2     0  31.1    .6| -- On Exadata we may not see IO usage, as grid disks are exposed to ASM only
+-----------+                       +-----------------------+
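
As a rough illustration of where these CPU numbers can come from (my assumption about the underlying view, not necessarily the script's exact query): the raw counters are in gv$osstat, and the percentages are derived from the deltas between two refreshes.

-- Illustrative only: cumulative OS time counters per instance.
-- e.g. USR% over a sample is roughly d(USER_TIME) / d(total time) * 100.
SELECT inst_id,
       MAX(DECODE(stat_name, 'IDLE_TIME',   value)) AS idle_time,
       MAX(DECODE(stat_name, 'IOWAIT_TIME', value)) AS iowait_time,
       MAX(DECODE(stat_name, 'USER_TIME',   value)) AS user_time,
       MAX(DECODE(stat_name, 'SYS_TIME',    value)) AS sys_time
FROM   gv$osstat
WHERE  stat_name IN ('IDLE_TIME','IOWAIT_TIME','USER_TIME','SYS_TIME')
GROUP  BY inst_id
ORDER  BY inst_id;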




+--Tprse/s--+Hprse/s+--PhyWIO/s-+-PhyWMB/s-+--PhyRIO/s--+-PhyRMB/s-+-SessLIO/s--+
|       2714|      0|       2623|        21|        1183|         9|      370001|
|       2709|      0|       2491|        22|         928|         7|      340040|
+-----------+-------+-----------+----------+------------+----------+------------+

--  Tprse/s   : Total parse per second
--  Hprse/s   : Hard parse per second
--  PhyWIO/s  : Physical Write IO per second
--  PhyWMB/s  : Physical Write MB per second                                                         
--  PhyRIO/s  : Physical Read IO per second								
--  PhyRMB/s  : Physical Read MB per second								
--  SessLIO/s : Session Logical IO per second								
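
For reference, these columns correspond to ordinary gv$sysstat counters; the mapping below is my assumption (the script may use slightly different statistic names). The values are cumulative, so the dashboard diffs two samples to get per-second rates.

-- Assumed gv$sysstat statistics behind the per-second columns above.
SELECT inst_id, name, value
FROM   gv$sysstat
WHERE  name IN ('parse count (total)',              -- Tprse
                'parse count (hard)',               -- Hprse
                'physical write total IO requests', -- PhyWIO
                'physical write total bytes',       -- PhyWMB
                'physical read total IO requests',  -- PhyRIO
                'physical read total bytes',        -- PhyRMB
                'session logical reads')            -- SessLIO
ORDER  BY inst_id, name;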
		


+---Exec/s-+RedoMB/s+Commit/s+
|      8471|      11|    2001|
|      8609|      11|    2004|
+----------+--------+--------+

-- Exec/s   : Total Executions for the Instance per second
-- RedoMB/s : Redo Generation MB per second 
-- Commit/s : User commit per second
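
The rate calculation itself is simple: take two snapshots of the cumulative counters and divide the difference by the sample interval. Below is an illustrative, hand-rolled version for three of the columns (not the dashboard's code); it assumes EXECUTE on DBMS_LOCK and SERVEROUTPUT ON, and sums across instances for brevity (the dashboard reports per instance).

-- Illustrative sketch: per-second rates from two gv$sysstat snapshots.
DECLARE
  l_interval CONSTANT NUMBER := 6;                      -- sample time in seconds
  TYPE snap_t IS RECORD (execs NUMBER, redo NUMBER, commits NUMBER);
  s1 snap_t;
  s2 snap_t;

  PROCEDURE take_snap(p OUT snap_t) IS
  BEGIN
    SELECT SUM(DECODE(name, 'execute count', value, 0)),
           SUM(DECODE(name, 'redo size',     value, 0)),
           SUM(DECODE(name, 'user commits',  value, 0))
      INTO p.execs, p.redo, p.commits
      FROM gv$sysstat
     WHERE name IN ('execute count', 'redo size', 'user commits');
  END;
BEGIN
  take_snap(s1);
  DBMS_LOCK.SLEEP(l_interval);
  take_snap(s2);

  DBMS_OUTPUT.PUT_LINE('Exec/s   : ' || ROUND((s2.execs   - s1.execs)   / l_interval));
  DBMS_OUTPUT.PUT_LINE('RedoMB/s : ' || ROUND((s2.redo    - s1.redo)    / l_interval / 1024 / 1024, 1));
  DBMS_OUTPUT.PUT_LINE('Commit/s : ' || ROUND((s2.commits - s1.commits) / l_interval));
END;
/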






-- Exadata Related Stats
+-ExSSMB/s-+-ExSIMB/s+-ExFCRh/s+
|         0|        0|      980|
|         0|        0|      703|
+----------+---------+---------+

-- ExSSMB/s  : cell physical IO interconnect bytes returned by smart scan MB per second 
-- ExSIMB/s  : cell physical IO bytes saved by storage index MB per second
-- ExFCRh/s  : cell flash cache read hits per second 

--     Note : The above snapshot was taken for an OLTP-style workload, hence no
--            smart scan/storage index activity is in the picture, only Flash Cache
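
These three Exadata columns are plain gv$sysstat statistics as well (the statistic names are exactly the ones listed above); the dashboard applies the same delta-per-interval treatment to them.

-- Raw cumulative values behind the Exadata columns, per instance.
SELECT inst_id, name, value
FROM   gv$sysstat
WHERE  name IN ('cell physical IO interconnect bytes returned by smart scan',
                'cell physical IO bytes saved by storage index',
                'cell flash cache read hits')
ORDER  BY inst_id, name;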

 
#########################################
-- The section below shows cluster-level info
#########################################

+IMPACT%-+--TOP WAIT EVENTS-----------------------+-WAIT CLASS--------+ 
|  51.2% | transaction                            | Other             | 
|  28.2% | enq: TX - index contention             | Concurrency       | 
|   8.7% | gc buffer busy acquire                 | Cluster           | 
|     5% | cell single block physical read        | User I/O          | 
|   2.2% | gc current grant busy                  | Cluster           | 
+--------+----------------------------------------+-------------------+ 

 The above section shows the TOP 5 wait events for the last sample [ default: 6 seconds ]

-- IMPACT%    :  Impact is calculated using the AWR formula: time spent on an individual wait event / total wait time * 100
-- WAIT EVENT :  Name of the Wait Event 
-- WAIT CLASS :  Name of the Wait Class
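
Since the utility is ASH-based (see the note at the end of this post), an impact calculation along the following lines is one way to build this section. This is my own hedged reconstruction, not necessarily the script's query.

-- Illustrative: top 5 wait events over the last 6 seconds, by share of wait samples.
SELECT *
FROM (
  SELECT event,
         wait_class,
         ROUND(COUNT(*) * 100 / SUM(COUNT(*)) OVER (), 1) AS impact_pct
  FROM   gv$active_session_history
  WHERE  sample_time   > SYSTIMESTAMP - INTERVAL '6' SECOND
  AND    session_state = 'WAITING'
  GROUP  BY event, wait_class
  ORDER  BY impact_pct DESC
)
WHERE ROWNUM <= 5;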

  
 * * Cluster Level info * *
+IMPACT%-+ TOP SQLS -(child)-+-TOP SESSIONS-------INST:SID----------------------------------------+
|  26.5% | bk2mk4dh179y7(2)  | 1:1043, 1:1174, 1:1239, 1:1304, ......                             |
|  17.6% | bk2mk4dh179y7(0)  | 1:1045, 1:1109, 1:1110, 1:1174, .......                            |
|  11.8% | 7jjp97nb9h2up(1)  | 2:1048, 2:1109, 2:1112, 2:1174,                                    |
|   7.5% | 0r5xv5d42p3p6(2)  | 1:653, 1:979, 2:1047, 2:1501, 2:588, 2:68  .....                   |
|   7.3% | apgu0yan9pb6m(5)  | 1:653, 1:979, 2:1047, 2:1501, 2:588, 2:68  ......                  |
+--------+-------------------+--------------------------------------------------------------------+

 The above section shows the TOP 5 SQLs for the last sample [ default: 6 seconds ]

-- IMPACT%    :  Impact is calculated using the AWR formula: time spent on an individual SQL / total wait time * 100
-- TOP SQL - (child) : SQL ID along with the child number
-- INST:SID   :  Instance ID and SID for each session running the SQL; due to space limitations, only a few sessions are shown
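
The same ASH window can drive this section, including the INST:SID list (LISTAGG is available from 11g R2). Again, this is an illustrative sketch rather than the dashboard's actual query.

-- Illustrative: top SQL_IDs by ASH samples in the last window, with the
-- inst_id:sid pairs of the sessions that were running them.
WITH ash AS (
  SELECT sql_id, sql_child_number, inst_id, session_id
  FROM   gv$active_session_history
  WHERE  sample_time > SYSTIMESTAMP - INTERVAL '6' SECOND
  AND    sql_id IS NOT NULL
),
impact AS (
  SELECT sql_id, sql_child_number,
         ROUND(COUNT(*) * 100 / SUM(COUNT(*)) OVER (), 1) AS impact_pct
  FROM   ash
  GROUP  BY sql_id, sql_child_number
),
sess AS (
  SELECT sql_id, sql_child_number,
         LISTAGG(inst_id || ':' || session_id, ', ')
           WITHIN GROUP (ORDER BY inst_id, session_id) AS inst_sid_list
  FROM   (SELECT DISTINCT sql_id, sql_child_number, inst_id, session_id FROM ash)
  GROUP  BY sql_id, sql_child_number
)
SELECT i.sql_id, i.sql_child_number, i.impact_pct, s.inst_sid_list
FROM   impact i
JOIN   sess   s ON s.sql_id = i.sql_id AND s.sql_child_number = i.sql_child_number
ORDER  BY i.impact_pct DESC;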






   * * Cluster Level info * *

           -------+                                 ACTIVE SESSIONS GRAPH                              +----
     Active   375 |                                                                                    | 375
   Sessions   349 |                                                                                    | 349
              322 |                            #                    #                                  | 322
              295 |                            #   # ##       #   # #                                  | 295
              268 |                            #   # ##       #   # #                  #          #_288| 268
              242 |                            #   # ###  #   #   # #                 ### #       #    | 242
              215 |                            #   # ###  #   #   # #                 #####       #    | 215
              188 |                            #   # ###  #   #   # #                 #####  #    #    | 188
              161 |                           ##################  #########     ##    #####  ######    | 161
              134 |                           #############################     #### ##############    | 134
              109 |                           ##############################    ###################    | 109
               81 |                           ##############################  #####################    | 81
               24 |             -MAX CPUs [24]-####################################################    | 24
                1 |                           #####################################################    | 1
                0 +------------------------------------------------------------------------------------+ 0
		  ^ 01:42:22                     01:49:02 ^                             01:55:42 ^



-- The section above shows the total active sessions across the cluster; the time window is calculated from the sample time [ 6 seconds by default ].
-- A total of 80 samples are printed in the graph, one line per sample. The active-session range changes automatically
-- based on the number of active sessions, and the graph adjusts accordingly. The idea is to show a trend of the recent active session history.
-- MAX CPUs is also shown, to give some idea of active sessions vs. available CPUs.

-- Note : If a value falls between two range buckets, it is shown next to the lower value. For example, in the screenshot above
--        the current active session count was 288, but it is displayed next to 268; the same applies to the MAX CPUs marker.
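
The data behind the graph is simply a count of active sessions per sample. An ASH-based approximation (illustrative only) of what feeds it:

-- Illustrative: cluster-wide active session count per second for the last sample window.
SELECT CAST(sample_time AS DATE) AS sample_sec,
       COUNT(*)                  AS active_sessions
FROM   gv$active_session_history
WHERE  sample_time > SYSTIMESTAMP - INTERVAL '6' SECOND
GROUP  BY CAST(sample_time AS DATE)
ORDER  BY sample_sec;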
 

   * * Instance level info * *

    |  Global  |  Global  | Estd.     | 
    |  Cache   |  Cache   | Intercnt  | 
Inst|  Blocks  |  Blocks  | Traffic   | 
  ID|  Sent/s  |  Rcvd/s  | MB/s      | 
+---+----------+----------+-----------+ 
  1 |     10831|     10188|      173.3| 
  2 |     10650|     10586|        175| 
                                        

-- The section above shows Global Cache info per instance

-- Inst ID : Instance ID number
-- GC Blocks Sent/s : Global cache blocks sent by the instance per second
-- GC Blocks Rcvd/s : Global cache blocks received by the instance per second
-- Estd. Interconnect Traffic MB/s : Estimated interconnect traffic in MB per second for the instance
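
The block counts come from the 'gc ... blocks served/received' statistics in gv$sysstat (my assumption about the source), diffed per sample like everything else; the MB/s figure is an estimate derived from block and message counts, in the spirit of the AWR interconnect-traffic estimate.

-- Illustrative: raw global cache block counters per instance.
SELECT inst_id,
       SUM(DECODE(name, 'gc cr blocks served',      value,
                        'gc current blocks served', value, 0)) AS gc_blocks_sent,
       SUM(DECODE(name, 'gc cr blocks received',      value,
                        'gc current blocks received', value, 0)) AS gc_blocks_rcvd
FROM   gv$sysstat
WHERE  name IN ('gc cr blocks served', 'gc current blocks served',
                'gc cr blocks received', 'gc current blocks received')
GROUP  BY inst_id
ORDER  BY inst_id;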





 * * Cluster Level info * *
                             
 +------------------------------+
 | TOP Segments by GC* Waits    |
 | IMPACT% [Type:Segment]       |
 +------------------------------+
 | 40.8% IND:REFERENCE_INSTANCE |
 | 23.3% IND:MEDIATOR_INSTANCE  |
 | 17.4% IND:REFERENCE_INSTANCE |
 |  5.8% IND:S_DOCK_TXN_LOG_P1  |
 |  4.2% IND:COMPOSITE_INSTANCE |
 +------------------------------+

-- TOP segments based on all global cache (GC*) wait events. Impact is calculated as GC* waits on the individual
-- segment / total GC* waits * 100
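
gv$segment_statistics keeps per-segment counters for the 'gc ...' statistics, so a ranking along these lines can produce the list above (illustrative sketch; note these counters are cumulative since instance startup, whereas the dashboard works on deltas):

-- Illustrative: segments ranked by their share of global cache (gc*) activity.
SELECT *
FROM (
  SELECT object_type,
         object_name,
         ROUND(SUM(value) * 100 / SUM(SUM(value)) OVER (), 1) AS impact_pct
  FROM   gv$segment_statistics
  WHERE  statistic_name LIKE 'gc%'
  GROUP  BY object_type, object_name
  ORDER  BY impact_pct DESC
)
WHERE ROWNUM <= 5;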





  * * Cluster level info * * 

+--SqlID--------+--SqlText----------------------------------------------------------------------------+-LongstDur-+-InstCnt-+-Cnt-+--CPU%--+--CONC%--+--CLUS%--+-IO%-+---ReadMb-+
| 0r5xv5d42p3p6 | INSERT INTO REFERENCE_INSTANCE (ID, BINDING_TYPE, REFERENCE_NAME, UPDATED_TIME, PRO |           |    2    |  12 |      0 |      0  |      0  |    0|         0|
+---------------+-------------------------------------------------------------------------------------+-----------+---------+-----+--------+---------+---------+-----+----------+
| bk2mk4dh179y7 | select VALUE from XREF_DATA  where XREF_COLUMN_NAME = :1  and XREF_TABLE_NAME = :2  | 00:00:09  |    2    |  46 |    59% |     3%  |      0  |   4%|       227|
+---------------+-------------------------------------------------------------------------------------+-----------+---------+-----+--------+---------+---------+-----+----------+
| 7jjp97nb9h2up | INSERT INTO COMPOSITE_INSTANCE (ID, COMPOSITE_DN, CONVERSATION_ID, UPDATED_BY, PARE |           |    2    |  14 |      0 |      0  |      0  |    0|         0|
+---------------+-------------------------------------------------------------------------------------+-----------+---------+-----+--------+---------+---------+-----+----------+
| apgu0yan9pb6m | insert into  XREF_DATA  ( XREF_TABLE_NAME, XREF_COLUMN_NAME, ROW_NUMBER, VALUE, IS  |           |    2    |  14 |      0 |      0  |      0  |    0|         0|
+---------------+-------------------------------------------------------------------------------------+-----------+---------+-----+--------+---------+---------+-----+----------+

-- The section above shows cumulative info for the TOP SQLs

-- LongstDur : If multiple sessions are running the same SQL, this column shows the duration of the longest-running (first-started) one.
-- InstCnt   : Total number of instances running the same SQL at the time of the sample [ including parallel and non-parallel queries ]
-- Cnt       : Total number of sessions running the same query across all instances [ including parallel and non-parallel ]

-- CPU%  : % of time spent on CPU
-- CONC% : % of time spent on concurrency waits
-- CLUS% : % of time spent on cluster waits
-- IO%   : % of time spent on I/O

-- ReadMB : Total physical read MB for the query, cumulative across all running executions in the cluster

-- Note: Most of these columns are fetched from gv$sql_monitor and will only be populated if the query has been running for more than 6 seconds on CPU/IO
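
The wait-class percentages map naturally onto gv$sql_monitor's time columns. A hedged approximation of this section (not the script's exact query; it only covers statements that SQL Monitoring has picked up):

-- Illustrative: per-SQL time breakdown for monitored statements across the cluster.
SELECT sql_id,
       COUNT(DISTINCT inst_id)                                                AS inst_cnt,
       COUNT(*)                                                               AS cnt,
       ROUND(SUM(cpu_time)              * 100 / NULLIF(SUM(elapsed_time), 0)) AS cpu_pct,
       ROUND(SUM(concurrency_wait_time) * 100 / NULLIF(SUM(elapsed_time), 0)) AS conc_pct,
       ROUND(SUM(cluster_wait_time)     * 100 / NULLIF(SUM(elapsed_time), 0)) AS clus_pct,
       ROUND(SUM(user_io_wait_time)     * 100 / NULLIF(SUM(elapsed_time), 0)) AS io_pct,
       ROUND(SUM(physical_read_bytes) / 1024 / 1024)                          AS read_mb
FROM   gv$sql_monitor
WHERE  status = 'EXECUTING'
GROUP  BY sql_id
ORDER  BY SUM(elapsed_time) DESC;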


Code can be downloaded here.

Please do not forget to share your suggestions/feedback. Thanks –

Note: This utility uses ASH and v$sql_monitor, which require the Diagnostics and Tuning Packs!

June 30, 2010

Recover Corrupt OCR with no backup

Filed under: RAC — Jagjeet Singh @ 7:43 am

While working with OCR backup and recovery scenarios, one question came to my mind: what if my OCR gets corrupted and there is no backup available to recover from? This is rare, since Oracle takes an OCR backup every 4 hours, but it still happens. I wanted to explore whether there is any other way to get the business back up and running without installing the whole Clusterware, which could be a lengthy process.

If you are on 10g R2 or a later version, this can be done without re-installing Clusterware (provided you have a backup of root.sh, or it has not been overwritten by any subsequent patch). I simulated this on my test machines (Oracle VM).

Current Voting Disk

[root@rac1 ~]# crsctl query css votedisk
 0. 0 /OCFS/VOT

located 1 votedisk(s).

 

Current OCR Files location

[root@rac1 ~]# ocrcheck

Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262144
Used space (kbytes) : 4344
Available space (kbytes) : 257800
ID : 601339441

Device/File Name : /OCFS/OCR
Device/File integrity check succeeded

Device/File Name : /OCFS/OCR2
Device/File integrity check succeeded

Cluster registry integrity check succeeded

 

Output of CRS_STAT

[root@rac1 ~]# crs_stat -t

Name           Type         Target    State     Host
------------------------------------------------------------
ora....SM1.asm application  ONLINE    ONLINE    rac1
ora....C1.lsnr application  ONLINE    ONLINE    rac1
ora.rac1.gsd   application  ONLINE    ONLINE    rac1
ora.rac1.ons   application  ONLINE    ONLINE    rac1
ora.rac1.vip   application  ONLINE    ONLINE    rac1
ora....SM2.asm application  ONLINE    ONLINE    rac2
ora....C2.lsnr application  ONLINE    ONLINE    rac2
ora.rac2.gsd   application  ONLINE    ONLINE    rac2
ora.rac2.ons   application  ONLINE    ONLINE    rac2
ora.rac2.vip   application  ONLINE    ONLINE    rac2
ora.test.AP.cs application  ONLINE    ONLINE    rac1
ora....st1.srv application  ONLINE    ONLINE    rac1
ora.test.db    application  ONLINE    ONLINE    rac2
ora....t1.inst application  ONLINE    ONLINE    rac1
ora....t2.inst application  ONLINE    ONLINE    rac2

I stopped the Clusterware on both nodes and removed the OCR and voting disk files.

[root@rac1 ~]# ls -lrt /OCFS/*

-rw-r--r-- 1 root root 399507456 Jun 29 14:05 /OCFS/OCR2
-rw-r----- 1 root oinstall 10485760 Jun 29 14:05 /OCFS/OCR
-rw-r--r-- 1 oracle oinstall 10240000 Jun 29 14:05 /OCFS/VOT

[root@rac1 ~]# rm -fr /OCFS/*

I tried again to start the Clusterware.

[root@rac1 ~]# crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly

 

Clusterware could not start up.

[root@rac1 ~]# crsctl check crs

Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM

The following errors about the OCR were thrown in the /tmp/crsct.* file:

[root@rac1 ~]# cat /tmp/crsc*

OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage Operating System error [No such file or directory] [2]
OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage Operating System error [No such file or directory] [2]
OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage Operating System error [No such file or directory] [2]

 

At this point, I had lost all of my OCR and voting disk files.

The procedure below can be used for recovery:

1) Execute the rootdelete.sh script on all nodes.
2) Execute rootdeinstall.sh on the primary node.
3) Run root.sh on the primary node.
4) Run root.sh on all remaining nodes.
5) Complete the remaining configuration (ONS, netca, register the required resources).

1) Execute rootdelete.sh on all nodes; this script can be found under $ORA_CRS_HOME/install/

[root@rac1 ~]# /u01/app/oracle/product/crs/install/rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage Operating System error [No such file or directory] [2]
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down…
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'

[root@rac2 ~]# /u01/app/oracle/product/crs/install/rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
OCR initialization failed accessing OCR device: PROC-26: Error while accessing the physical storage Operating System error [No such file or directory] [2]
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down…
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'

The OCR initialization error above can be safely ignored.

2) Execute rootdeinstall.sh on the primary node; this script can also be found under $ORA_CRS_HOME/install

[root@rac1 ~]# /u01/app/oracle/product/crs/install/rootdeinstall.sh
Removing contents from OCR mirror device
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.031627 seconds, 332 MB/s
Removing contents from OCR device
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.029947 seconds, 350 MB/s

3) Run root.sh on the primary node; this will create the voting disk and OCR files.

[root@rac1 ~]# $ORA_CRS_HOME/root.sh
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
"/OCFS/VOT" does not exist. Create it before proceeding.
Make sure that this file is shared across cluster nodes.
1

I had to touch this file to proceed:

[root@rac1 ~]# touch /OCFS/VOT
[root@rac1 ~]# $ORA_CRS_HOME/root.sh
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /OCFS/VOT
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
CSS is inactive on these nodes.
rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

4) Run root.sh from all remaining nodes.

[root@rac2 crs]# ./root.sh
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps

Creating VIP application resource on (2) nodes…
Creating GSD application resource on (2) nodes…
Creating ONS application resource on (2) nodes…
Starting VIP application resource on (2) nodes…
Starting GSD application resource on (2) nodes…
Starting ONS application resource on (2) nodes…

Done.

Clusterware is up and running

[root@rac2 crs]# crs_stat -t

Name           Type         Target    State     Host
------------------------------------------------------------
ora.rac1.gsd   application  ONLINE    ONLINE    rac1
ora.rac1.ons   application  ONLINE    ONLINE    rac1
ora.rac1.vip   application  ONLINE    ONLINE    rac2
ora.rac2.gsd   application  ONLINE    ONLINE    rac2
ora.rac2.ons   application  ONLINE    ONLINE    rac2
ora.rac2.vip   application  ONLINE    ONLINE    rac2

5) Remaining Configuration

       a) Configuring Server side ONS

[root@rac1 crs]# $ORA_CRS_HOME/bin/racgons add_config rac1:6200 rac2:6200

         b) Listener configuration using netca

You might want to remove listener.ora from both nodes, as entries may already exist. Take a backup of the original listener.ora, then use netca to configure the listener and
register it with the OCR. Up to 10g, we cannot register a listener using srvctl.

Renaming the original listener.ora:

[oracle@rac1 ~]$ mv $ORACLE_HOME/network/admin/listener.ora $ORACLE_HOME/network/admin/listener.ora.orig
[oracle@rac1 ~]$ ssh rac2 mv $ORACLE_HOME/network/admin/listener.ora $ORACLE_HOME/network/admin/listener.ora.orig

          c) Adding ASM, instances, and the database

[oracle@rac1 ~]$ srvctl add asm -i +ASM1 -n rac1 -o /u01/app/oracle/product/10.2.0/db_1
[oracle@rac1 ~]$ srvctl add asm -i +ASM2 -n rac2 -o /u01/app/oracle/product/10.2.0/db_1
[oracle@rac1 ~]$ srvctl add database -d test -o /u01/app/oracle/product/10.2.0/db_1
[oracle@rac1 ~]$ srvctl add instance -d test -i test1 -n rac1
[oracle@rac1 ~]$ srvctl add instance -d test -i test2 -n rac2

I restarted both nodes and got everything back. Yes, the services can be re-created.

[oracle@rac1 ~]$ crs_stat -t

Name           Type         Target    State     Host
------------------------------------------------------------
ora....SM1.asm application  ONLINE    ONLINE    rac1
ora....C1.lsnr application  ONLINE    ONLINE    rac1
ora.rac1.gsd   application  ONLINE    ONLINE    rac1
ora.rac1.ons   application  ONLINE    ONLINE    rac1
ora.rac1.vip   application  ONLINE    ONLINE    rac1
ora....SM2.asm application  ONLINE    ONLINE    rac2
ora....C2.lsnr application  ONLINE    ONLINE    rac2
ora.rac2.gsd   application  ONLINE    ONLINE    rac2
ora.rac2.ons   application  ONLINE    ONLINE    rac2
ora.rac2.vip   application  ONLINE    ONLINE    rac2
ora.test.db    application  ONLINE    ONLINE    rac2
ora....t1.inst application  ONLINE    ONLINE    rac1
ora....t2.inst application  ONLINE    ONLINE    rac2

Note: This is why it’s recommended to take a backup of root.sh after a fresh install; subsequent patches can overwrite the root.sh script.

This is described in Metalink Note 399482.1.
