Cloning A Standby Database Part II

I know I am way behind and wanted to complete this second part months ago, but life got busy and I had some unexpected problems with my VM server, so I had to rebuild my VM and this time went with Fedora 16. Since it is based on the 3.1 kernel, it has several post-2.6.32 btrfs fixes that I am not sure RHEL has. A quick recap since part 1: I have a Fedora 16 VM running with Oracle 11.2.0.2 installed, and I created a standard dbca database using a btrfs volume mounted at /oradata. The next steps are as follows.

Create a copy of the s/pfile in your $ORACLE_HOME/dbs and rename it for your clone db.
In my case the source db is prod and the clone name is cprod.
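With those names (prod and cprod are mine; substitute your own), the copy is a one-liner, roughly:

```shell
# Copy the source pfile to one named for the clone instance.
# prod -> cprod are my instance names; adjust for your environment.
cd $ORACLE_HOME/dbs
cp initprod.ora initcprod.ora
```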

For the next part you have two choices: you can either stop your source database or put it in hot backup mode.
Next, snap the btrfs volume you are using for your database. If you are using more than one volume, you will need to snap all the volumes that hold your control, redo and data files.
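If you go the hot backup route, since 10g you can put the whole database in backup mode in one statement rather than tablespace by tablespace; a sketch:

```sql
-- Put the entire database in hot backup mode before snapping the volume...
alter database begin backup;
-- ...take the btrfs snapshot here, then take it back out:
alter database end backup;
```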

As root run
[root@single2]# btrfs subvolume snapshot /oradata /oradata/clone
Create a snapshot of '/oradata' in '/oradata/clone'
With btrfs, all snapshots have to live within the same filesystem. Since no data blocks are being copied, the snap should complete within seconds.

Now you have a thin copy of your database, regardless of size. The next few steps are basically recovering the database and renaming it.
Since you can't have two databases with the same name running on the same server, you will need to stop your source database for this next part.
The init file for your clone database still points to your source control file, and your control file still points to your source datafiles and redo files. The next step is to rename the redo and data files to the cloned locations and to update the clone init file with the location of your clone control file.

In my case it was a simple matter of changing '/oradata/PROD/controlfile/o1…' to '/oradata/clone/PROD/controlfile/o1…'. After that you should be able to startup mount your clone database. Make sure you change your ORACLE_SID to your new instance name.
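In other words, the control_files parameter in the clone's init file ends up pointing at the snapped location (initcprod.ora is my clone's file name; the trailing ellipsis is my OMF file name, which I've left abbreviated):

```
*.control_files='/oradata/clone/PROD/controlfile/o1…'
```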

Renaming the files is the part I like the least; I really should write a script that generates the SQL, since it is a pain to do manually. Basically, rename all the files returned from select name from v$datafile, select name from v$tempfile and select member from v$logfile to your clone location. Since I used the hot backup approach the database will need to be recovered, so after the rename is done a simple

SQL> recover database;
Media recovery complete.
SQL> alter database open;

Database altered.
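The rename step above can be scripted. Here is a minimal sketch that turns a list of file paths (the output of those v$datafile/v$tempfile/v$logfile queries, fed in one per line) into rename statements; the /oradata to /oradata/clone mapping and the example datafile name are assumptions from my layout:

```shell
# gen_renames: read file paths on stdin, emit one
# "alter database rename file ... to ...;" per path, remapping
# /oradata/ to /oradata/clone/ (adjust the sed pattern to your layout).
gen_renames() {
  while read -r f; do
    printf "alter database rename file '%s' to '%s';\n" \
      "$f" "$(printf '%s' "$f" | sed 's|^/oradata/|/oradata/clone/|')"
  done
}

# Example with a hypothetical datafile path:
printf '%s\n' /oradata/PROD/datafile/system01.dbf | gen_renames
```

Run the clone instance in mount mode, spool the generated statements from SQL*Plus (or pipe the paths through this), and execute them.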

Great, we now have an exact copy of our source db. Next, shut down the database, start up in mount mode, and let's rename it! The nid command-line tool lets us rename the database; it handles changing the name in the control file and datafiles. After it is done we need to update our cloned init file to reflect the new db_name. This is the riskiest part: if you missed a file in the rename process, the source datafile/control file will be updated! After that, startup mount and then open resetlogs, and you have a thin-cloned database with a new database name!
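A sketch of the nid run, assuming the clone instance is started in mount mode under its new SID (cprod is my name; nid prompts for confirmation before rewriting the headers):

```shell
# Rename the mounted clone with the DBNEWID utility.
export ORACLE_SID=cprod
nid target=sys dbname=cprod
```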

So this is cool, but does it have any real purpose? Yes. If you need to make many copies of your prod database for non-prod use, this can be a money, time and disk saving dream. Yes, high-end SANs and NetApp can do the same, but they cost money and you may not have those tools. The poor man's way is completely free minus some disk space, which, by the way, can be from any disk vendor or just plain old hard drives.

To avoid shutting down your production database for the clone, taking the risk of messing it up during the clone, and running your non-prod dbs on your production server, I would recommend the following. Set up a standby database on another server/VM that supports btrfs. Stop the apply process on the standby and create a snap for each clone you want. Rename the clones as shown above. You can then either run the clones off the same server/VM as your standby, or use NFS to mount the btrfs volume to other servers/VMs.

Let's say you have a weekly or even nightly refresh schedule. The process would be to shut down all the clone databases and delete the snapshots using btrfs subvolume delete. Fire up the standby database and let it catch back up to production. Once in sync, either shut it down or use hot backup to start the clone process all over again. Depending on your refresh schedule and your I/O subsystem, it would be possible to clone a 1 TB database dozens of times in an hour or two.
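Put together, a refresh cycle might look like this sketch (the clone SIDs and snapshot paths are hypothetical names from my setup; adjust to yours):

```shell
# Hypothetical nightly refresh: stop the clones, drop their snapshots,
# then let the standby catch up before snapping again.
for sid in cprod1 cprod2; do
  ORACLE_SID=$sid sqlplus -S / as sysdba <<'EOF'
shutdown immediate
EOF
done
btrfs subvolume delete /oradata/clone1
btrfs subvolume delete /oradata/clone2
# Start the standby, let redo apply catch up to production, then stop
# apply (or use hot backup mode) and take fresh snapshots for the next round.
```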
