Jagjeet's Oracle Blog!

April 26, 2010

ASM Migration Project

Filed under: ASM — Tags: — Jagjeet Singh @ 5:00 am


For the last month I have been working on an ASM migration project for one of our clients in the United States.


The client wanted to move a couple of environments (including Production, QA, and Dev/Patch) to ASM. The critical production environment runs E-Business Suite 11.5.10.2 on raw devices utilizing 5.1 TB of storage; the other instances use cooked file systems.


I really enjoy working with ASM; from the day it was released I have been passionate about this feature and have been playing with it in my lab. My manager took my interest in ASM into account and assigned me to this project (thanks to him). I am very excited about this project; let's see how it goes.


This project includes the OATM (Oracle Applications Tablespace Model) conversion, a rewrite of the cluster failover solution (HP Serviceguard is configured for cluster failover), DR implemented using EMC MirrorView, and the BCV and backup methodology.


I would like to share my experience after the first migration to ASM. This migration was done using fndtsmig.pl (an Oracle-supplied utility to migrate segments). This utility migrates (moves/rebuilds) segments based on pre-defined rules to choose the appropriate tablespaces.
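As a quick sanity check after the run (this is just standard dictionary SQL, not part of fndtsmig.pl itself), something like the sketch below shows where the segments ended up:

-- Segment count and size per tablespace after the migration.
SELECT tablespace_name,
       COUNT(*)                                  AS segments,
       ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) AS size_gb
FROM   dba_segments
GROUP  BY tablespace_name
ORDER  BY size_gb DESC;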


We just finished our first iteration, and I was most interested to see an equal distribution of I/O across the LUNs. I pulled the report below from 10g Grid Control for the % usage of one diskgroup.

[Figure: Disk Usage % per LUN in the diskgroup]


It shows a well-balanced, equal usage distribution (it's awesome, isn't it?).
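For anyone who wants to verify this without Grid Control, here is a sketch of the same check against the standard V$ASM views (run from the ASM instance; the diskgroup name is the one from my environment):

-- Space usage per disk in one diskgroup; with ASM striping these
-- percentages should come out nearly identical across all disks.
SELECT d.name                                                        AS disk,
       d.total_mb,
       d.total_mb - d.free_mb                                        AS used_mb,
       ROUND((d.total_mb - d.free_mb) / NULLIF(d.total_mb, 0) * 100, 2) AS pct_used
FROM   v$asm_disk d
       JOIN v$asm_diskgroup g ON g.group_number = d.group_number
WHERE  g.name = 'DGOAPSFC'
ORDER  BY d.name;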


Let's have a look at the IO usage.



As soon as I saw this page, I noticed three things:


1- The IO distribution is quite close to equal for all the LUNs of DGOAPSFC, and the other columns, Total Read Calls and Total Write Calls, are almost equal for all the LUNs as well (the sketch after this list pulls the same counts from V$ASM_DISK). This capability is why I like ASM.


2- BUT this is not the case with the DGOLTPFC diskgroup; it is really strange that 4 out of the 10 LUNs got about 40% higher I/O.


3- I noticed there is a huge difference in IO response time for 2 disks in DGOLTPFC.
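For anyone without Grid Control handy, a minimal sketch for pulling the same per-LUN IO counts straight from the V$ASM views (the diskgroup names are the ones from my environment, and the query assumes it is run against the ASM instance):

-- Cumulative IO calls and volume per disk (counters reset when the
-- ASM instance restarts).
SELECT g.name   AS diskgroup,
       d.name   AS disk,
       d.reads  AS total_read_calls,
       d.writes AS total_write_calls,
       d.bytes_read,
       d.bytes_written
FROM   v$asm_disk d
       JOIN v$asm_diskgroup g ON g.group_number = d.group_number
WHERE  g.name IN ('DGOAPSFC', 'DGOLTPFC')
ORDER  BY g.name, d.name;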


Point no. 1 is quite obvious and expected. For the second point, I tried to find the cause, but it is still an open issue for me. I am working on it and will post more if I find something.


For the 3rd point, I immediately called my storage administrator and asked whether something weird was going on at the storage end specifically for those 2 disks, as there is a huge difference in IO response time. The other disks' avg. response time is near 5.5 ms, whereas for LUN 43 and LUN 46 it's more than 9 ms.
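Those averages can be approximated from V$ASM_DISK as well. A rough sketch; note that the unit of READ_TIME/WRITE_TIME has varied between releases, so the ms conversion below is an assumption to verify for your version, and these columns may need TIMED_STATISTICS enabled to be populated:

-- Approximate avg response time per disk in DGOLTPFC.
SELECT d.name AS disk,
       d.reads,
       ROUND(d.read_time  / NULLIF(d.reads, 0)  * 1000, 2) AS avg_read_ms,
       ROUND(d.write_time / NULLIF(d.writes, 0) * 1000, 2) AS avg_write_ms
FROM   v$asm_disk d
       JOIN v$asm_diskgroup g ON g.group_number = d.group_number
WHERE  g.name = 'DGOLTPFC'
ORDER  BY avg_read_ms DESC NULLS LAST;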


The storage guy asked me to give him some time; after a while he told me that because of a space constraint he could not give me all FC disks, so those 2 disks are SATA. :)


For the final implementation we will be provided FC and SSD disks only, so we started using a descriptive naming convention.
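The idea is simply to encode the storage tier in the ASM disk name so it is obvious at a glance. A purely illustrative sketch (the paths, disk names, and redundancy level are hypothetical, not our actual commands):

-- Hypothetical example: disk names carry the storage tier, so a stray
-- SATA LUN can never hide behind a generic name.
CREATE DISKGROUP dgoltpfc EXTERNAL REDUNDANCY
  DISK '/dev/rdisk/disk41' NAME dgoltpfc_fc_0001,
       '/dev/rdisk/disk42' NAME dgoltpfc_fc_0002,
       '/dev/rdisk/disk43' NAME dgoltpfc_ssd_0001;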


Again, I am thrilled to work on this project and will share my experience.


1 Comment »

  1. Hello
    Did you find the reason why number 2 was happening (some disks doing more I/O)? Was it because of unequally sized LUNs?

    Comment by sarveswara — May 14, 2012 @ 9:22 am

