Tuesday, September 8, 2009

Oracle backup replication with Oracle Streams, Data Guard & RAC

As the world’s leading database, Oracle offers a wealth of options for replicating geographically distributed systems. Each method has its own strengths, and it is the job of the Oracle professional to choose the one that makes sense for their business requirements:

  • Data Guard – Data Guard provides standby databases for failover in systems that do not have a high downtime cost. In a Data Guard failover, the last redo log must be applied (taking up to 20 minutes to re-open the standby database), and if the last redo cannot be recovered from the crashed instance, the most recent updates may be lost.

  • Multi-Master Replication – Uses advanced queuing to cross-pollinate many databases with updates. It has an advantage over Data Guard in that the standby database can accept updates, and a disadvantage in that replication is not instantaneous and update collisions may occur.

  • Oracle Streams – See the comprehensive book on Oracle Streams technology by Madhu Tumma. Ready for production use in Oracle 10g, Oracle Streams provides near real-time replication and supports master-to-master replication. Oracle Streams has no licensing costs (RAC costs extra), and it is less complex to configure than a RAC database. The disadvantages relative to RAC are that there is no continuous availability, update collisions may occur, replication is not instant, and application failover is usually done manually.

  • Real Application Clusters – The Cadillac of Oracle continuous-availability tools, RAC allows for Transparent Application Failover (TAF) and is de rigueur for systems with a high cost of downtime that require continuous availability.

Most shops will combine these technologies to provide replication that is easy to manage and reliable. Let’s explore such a scenario.

Why Replicate?

Oracle has sophisticated mechanisms to manage data concurrency, and replication is normally used when we have geographically distributed systems with unreliable network connectivity between them. In a perfect world, databases would not be replicated, but the nature of worldwide networking mandates that global eCommerce systems use replicated databases to provide fast response times worldwide.

Because the Oracle Streams product is new, Oracle professionals are only now recognizing how Streams can be used in a master-master replication architecture. Let’s take a closer look.

Oracle Streams for master-master replication

Oracle Streams is an ideal solution for systems that are geographically distributed and have a high-speed connection (e.g., a T1 line) between servers. As long as the server interconnect can keep up with the changes, you can implement systems that provide failover and disaster recovery simply and reliably. However, Oracle Streams replication is not for every database:

  • High-update systems – Busy databases may spend excessive resources synchronizing the distributed databases.

  • Real-time replication required – If you require a two-phase commit (where the changes are committed on each database at exactly the same time), then RAC or on-commit replication is a better choice. For example, a banking system would require that all databases update each other simultaneously, and Oracle Streams might not be an ideal solution.

On the other hand, Oracle Streams is perfect for geographically distributed applications where real-time synchronization is not required. Let’s explore master-master replication with a hypothetical example. Assume we have an online database serving the US Federal government, deployed using APEX so that it is accessible anywhere in the world from an internet-enabled web browser. Federal agencies connect to either the California or the Texas server, based on their APEX shortcut URL. The two systems are cross-fed updates via Oracle Streams, and the Texas server uses Data Guard to replicate to a standby database in Kittrell, North Carolina (Figure 1).

Figure 1 – Combining Oracle Streams and Oracle Data Guard

Oracle Streams provides near real-time replication of important information; in the event of a server outage, updates are automatically stored in queues and applied automatically when service is restored to the crashed Oracle server.

Providing reliable connectivity during network outages is very challenging, and Oracle has several tools that aid in this goal. In case of any type of outage, we have alternative connection options:

  • Interconnect is down – All end-users continue to operate independently, and when the interconnect is restored, the update queues synchronize each database.

  • California or Texas server down – The end-user has three icons on their desktop; a California end-user simply clicks the URL for the Texas database and continues working.

  • Major Disaster – In this case the Kittrell NC server will be synchronized using Data Guard (with a possible loss of the most recent redo log updates) and end-users with access to the internet can continue to work on the standby server. Upon restoration of service, the Kittrell server will flush all updates back to the main California and Texas databases.

Establishing the replication mechanisms is only part of the task. We also need detailed checks that alert the DBA staff when an outage has occurred. End-users have it easier; they simply log in to the surviving Oracle database. Let’s look at the replication alert mechanisms.

Streams replication failure alerts

Alerting is a critical component of the replication architecture, so that Oracle staff are aware when there has been a server or network failure. The alerts include checks from Kittrell, checks between Texas and California, and checks of the replication queues. Because the connectivity checks must be done from outside Oracle, they can be driven by Linux cron entries.

  • Ping from the standby server – The Kittrell standby server will check every 15 minutes for connectivity to all servers with a “ping” command. If a server fails to respond, a pager alert is sent to support staff.

  • California and Texas connectivity checks – Each server will cross-check each other, first checking with tnsping, then ping, and sending appropriate alerts in cases of network failure.
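The cross-checks above can be sketched as a small script driven from cron. This is only a sketch: the host names and the idea of handing the ALERT line to a paging system are illustrative assumptions, not from the original post.

```shell
#!/bin/sh
# Cron-driven connectivity check: try tnsping (listener-level) first,
# fall back to ping (network-level), and emit an alert line when both fail.

check_host() {
    host=$1
    if tnsping "$host" >/dev/null 2>&1; then
        echo "OK tns $host"
    elif ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
        echo "WARN listener-down $host"    # network is up but the listener is not
    else
        echo "ALERT unreachable $host"     # hand this line to the paging system
    fi
}

# Example cron entry (every 15 minutes):
#   */15 * * * * /u01/scripts/check_peers.sh
for peer in california-db.example.com texas-db.example.com; do
    check_host "$peer"
done
```

The tnsping-then-ping order distinguishes a dead listener (database problem) from a dead network link (interconnect problem), so the alert text tells the on-call DBA which runbook to open.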

Creating a self-synchronizing replication architecture

Once Oracle Streams, Data Guard and the alerts are in place, the only remaining challenge for the Oracle DBA is ensuring that the Streams replication queues have enough room to hold updates on the surviving Oracle instance.
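A sketch of such a check, using the 10g dynamic view V$BUFFERED_QUEUES (the 100,000-message threshold is an illustrative assumption, not a recommendation):

```sql
-- Watch the Streams buffered queues for growth while a remote site is down
SELECT queue_schema,
       queue_name,
       num_msgs,     -- messages currently buffered in memory
       spill_msgs    -- messages spilled to disk under memory pressure
  FROM v$buffered_queues
 WHERE num_msgs + spill_msgs > 100000;
```

Any rows returned by this query could feed the same pager-alert mechanism as the connectivity checks above.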

Monday, September 7, 2009

Some things about EXPLAIN PLAN

1. Create the plan table:
@?/rdbms/admin/utlxplan.sql

2. Explain an SQL statement:

EXPLAIN PLAN FOR
SELECT ..............................statement....

3. View the EXPLAIN PLAN output:

set pagesize 0
set linesize 130
@?/rdbms/admin/utlxpls.sql

The above script shows the plan table output for serial processing.

@?/rdbms/admin/utlxplp.sql

The above script shows the output with parallel execution columns.
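As an alternative to the utlxpls and utlxplp scripts, releases from 9.2 onward also ship the DBMS_XPLAN package; a minimal sketch (the emp table and predicate are illustrative, not from the original post):

```sql
EXPLAIN PLAN FOR
SELECT ename FROM emp WHERE deptno = 10;

-- Format the most recent plan from the plan table
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```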

Delete Operation Explain Plan

DELETE FROM EPDBA01.t_bo_sec_new
WHERE ROWID IN (SELECT tbo.ROWID
                  FROM EPDBA01.t_bo_sec_new tbo, EPDBA01.t_plan tp
                 WHERE tbo.plan_id = tp.plan_id
                   AND tbo.person_id NOT IN (SELECT person_id
                                               FROM EPDBA01.t_sys_admin)
                   AND tbo.role_id = 2000
                   AND tp.plan_access = 'S')
Plan hash value: 4027940639


Step Explanation:

1. This plan step retrieves all rows from table T_PLAN.
2. This plan step retrieves all rows from table T_BO_SEC_NEW.
3. This plan step accepts two sets of rows, each from a different table.
A hash table is built using the rows returned by the first child.
Each row returned by the second child is then used to probe the hash table to find row pairs
which satisfy a condition specified in the query's WHERE clause.
Note: The Oracle cost-based optimizer builds the hash table using what it thinks is the smaller of the two tables.
It uses statistics to determine which is smaller, so out-of-date statistics could cause the optimizer to make
the wrong choice.
4. This plan step retrieves all ROWIDs from the B*-tree index PK_T_SYS_ADMIN by walking the index starting with its smallest key.
5. This plan step accepts multiple sets of rows. Rows from the first set are eliminated using the data found in the second through n sets.
6. This plan step accepts a row set (its only child) and sorts it in order to identify and eliminate duplicates.
7. This plan step represents the execution plan for the subquery defined by the view VW_NSO_1.
8. This plan step retrieves rows from table T_BO_SEC_NEW through ROWID(s) specified in the WHERE clause of the statement.
9. This plan step joins two sets of rows by iterating over the driving, or outer, row set (the first child of the join) and, for each row, carrying out the steps of the inner row set (the second child). Corresponding pairs of rows are tested against the join condition specified in the query's WHERE clause.
10. This plan step deletes rows from table T_BO_SEC_NEW which satisfy the WHERE clause of the DELETE statement.
11. This plan step designates this statement as a DELETE statement.
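Given the note in step 3 about out-of-date statistics, it is prudent to refresh them before trusting this plan; a sketch using DBMS_STATS (schema and table names taken from the DELETE statement above):

```sql
-- Refresh optimizer statistics so the hash join builds on the truly smaller input
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'EPDBA01', tabname => 'T_PLAN');
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'EPDBA01', tabname => 'T_BO_SEC_NEW');
END;
/
```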

Oracle powers world's largest commercial database

Oracle has set a new record by powering a 100-terabyte data warehouse. Oracle customers run four of the top 10 largest decision support system databases and nine of the top 10 UNIX OLTP systems. The survey determined that the largest transaction processing system on UNIX runs an Oracle Database containing over 16 TB of data, and that three of the 10 largest UNIX data warehouses run Oracle.

http://www.oracle.com/corporate/press/2005_sep/091305_wintertopten_finalsite.html

The article says it's the first time Linux had its own categories for DSS and OLTP:

"This year's survey marks the first time that the Linux operating system had its own categories for DSS and OLTP. In total, 12 systems (eight DSS and four OLTP systems) running Linux were measured, all of them running Oracle Database. Two of these Linux data warehouse systems cracked the TopTen All Environments in the database size category. The world's largest Linux data warehouse measures 24.8 TB, good enough for sixth overall in the top 10 largest data warehouse category. Oracle is the only database vendor whose customers posted top results on Linux, UNIX and Windows platforms."

World's largest commercial Linux data warehouse runs Oracle Database

Oracle is the leading database for decision support and online transaction processing in real-world customer environments, according to the Winter Corporation 2005 Top Ten Program survey. Only Oracle Database produces leading results on Linux, UNIX, and Windows.


The survey, which identifies the world's largest and most heavily used databases, found that the largest commercial data warehouse in the world runs a 100 terabyte Oracle Database. That's more than triple the size of the largest database in the previous TopTen Program survey, which was also powered by Oracle.

The survey's newest category covering Linux DSS and OLTP databases was completely dominated by Oracle customers. Overall, the survey found that Oracle is the only database vendor whose customers posted top results on Linux, UNIX, and Windows platforms.

Other survey findings:


- The world's largest commercial database runs Oracle Database, with 100TB of data.
- The world's largest commercial data warehouse runs Oracle Database.
- The world's largest commercial UNIX data warehouse runs Oracle Database.
- The world's largest commercial Linux data warehouse runs Oracle Database.
- The world's largest scientific database runs Oracle Database.
- Oracle Database powers nine of the world's top 10 UNIX OLTP systems.
- Oracle Database powers 100 percent of all Linux DSS and OLTP measured in the Winter Corporation 2005 TopTen program.
- Oracle customers represent 58 percent of all validated participants in the Winter Corporation 2005 TopTen program.

The Winter Corporation 2005 TopTen Program surveyed customers from 20 countries on five continents, in 11 industries from government to healthcare to retail to telecommunications, among others.

The Program identifies the world's leading database implementations based on Database Size and Most Rows/Records. As part of the rigorous survey process, respondents must have a validated database size that meets the survey's requirements.

Source: http://www.ameinfo.com/72074.html
Largest Oracle database tops 100 trillion bytes: This article notes that the world's largest data warehouses are running Oracle with Linux.

Database Failover Steps (Oracle Dataguard / Physical Standby Database)

1. Sync the last archived logfile.

2. Activate the standby database by using the following statements:

* alter database recover managed standby database cancel;
* alter database activate standby database;
* shutdown immediate;
* startup;

3. Take a hot backup immediately.

4. Build a new standby database.
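After step 2, it is worth confirming the role change before letting users back on; a minimal sanity check against v$database:

```sql
-- After activation, database_role should report PRIMARY and open_mode READ WRITE
SELECT name, database_role, open_mode
  FROM v$database;
```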

Removing and reinstalling the XML & Java components on an Oracle database

Shut down the instance, then create and run one of the following SQL scripts from a new SQL*Plus session (full_rmjvm.sql removes the JVM and XML components; full_jvminst.sql installs them):

-- Start of File full_rmjvm.sql
spool full_rmjvm.log
set echo on
connect / as sysdba
startup mount
alter system set "_system_trig_enabled" = false scope=memory;
alter system enable restricted session;
alter database open;
@?/rdbms/admin/catnoexf.sql
@?/rdbms/admin/catnojav.sql
@?/xdk/admin/rmxml.sql
@?/javavm/install/rmjvm.sql
truncate table java$jvm$status;
select * from obj$ where obj#=0 and type#=0;
delete from obj$ where obj#=0 and type#=0;
commit;
select owner, count(*) from all_objects
where object_type like '%JAVA%' group by owner;
select obj#, name from obj$
where type#=28 or type#=29 or type#=30 or namespace=32;
select o1.name from obj$ o1,obj$ o2
where o1.type#=5 and o1.owner#=1 and o1.name=o2.name and o2.type#=29;
shutdown immediate
set echo off
spool off
exit
-- End of File full_rmjvm.sql

OR

-- Start of File full_jvminst.sql
spool full_jvminst.log;
set echo on
connect / as sysdba
startup mount
alter system set "_system_trig_enabled" = false scope=memory;
alter database open;
select obj#, name from obj$
where type#=28 or type#=29 or type#=30 or namespace=32;
@?/javavm/install/initjvm.sql
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
@?/xdk/admin/initxml.sql
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
@?/xdk/admin/xmlja.sql
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
@?/rdbms/admin/catjava.sql
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
@?/rdbms/admin/catexf.sql
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
shutdown immediate
set echo off
spool off
exit
-- End of File full_jvminst.sql


You can also get the same tips from original Oracle metalink notes:
Subject: How to Reload the JVM in 10.1.0.X and 10.2.0.X
Doc ID: Note:276554.1
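After either script completes and the database is restarted, a quick look at the component registry confirms the result; a sketch (the comp_name values shown are the registry names used in 10g):

```sql
-- Check JVM and XML component status after the reload
SELECT comp_name, status, version
  FROM dba_registry
 WHERE comp_name IN ('JServer JAVA Virtual Machine', 'Oracle XDK');
```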

How to deinstall and install Oracle XML database (XMLDB/XDB)?

If you are on database release 10.1.x or 10.2.x, the XDB feature is mandatory in order to use any of the member functions of XMLTYPE. This is true even if you are not using the repository or registered-schema aspects of the XDB feature. Before we begin to install it, let's review the steps for removing it manually.

====> Removal of XML DB

1) Shutdown and restart the database

2) Connect as SYSDBA and run the catnoqm.sql script.
connect / as sysdba
@?/rdbms/admin/catnoqm.sql
drop trigger sys.xdb_installation_trigger;
drop trigger sys.dropped_xdb_instll_trigger;
drop table dropped_xdb_instll_tab;

3) Modify parameter values in init.ora or the spfile.
shared_pool_size=150M # Or a larger value
java_pool_size=150M # Or a larger value

====> Installation of XML DB
Create an XDB tablespace as the XML DB repository storage, and make sure it has 150MB of free space. Restart the database so that the parameters take effect.

Now we are ready to install a new XDB:

1) Connect as SYSDBA and run the catqm.sql script.
set echo on
spool xdb_install.log
@?/rdbms/admin/catqm.sql xdb_user_pass xdb_tbs temp_tbs

2) If you are using Oracle 9.2, reconnect as SYSDBA and run the catxdbj.sql script. Oracle 10g also ships this script, but there it does nothing.
@?/rdbms/admin/catxdbj.sql

3) Change database system parameters in init.ora or spfile.

a) Non-RAC

dispatchers="(PROTOCOL=TCP)(SERVICE=XDB)"

b) RAC

inst1.dispatchers="(PROTOCOL=TCP)(SERVICE=XDB)"
inst2.dispatchers="(PROTOCOL=TCP)(SERVICE=XDB)"

4) Make sure there are no invalid objects in the XDB schema, and check the XML DB status in DBA_REGISTRY.

select count(*) from dba_objects
where owner='XDB' and status='INVALID';

select comp_name, status, version from DBA_REGISTRY
where comp_name= 'Oracle XML Database';

5) Bounce the database to enable the XMLDB protocol.

Manual Installation of XML DB

To manually install Oracle XML DB without using DBCA, perform the following steps:

After the database installation, connect as SYS and create a new tablespace for the Oracle XML DB repository. Then run the catqm.sql script in the ORACLE_HOME/rdbms/admin directory to create the tables and views needed for XML DB:

catqm.sql XDB_password XDB_Tablespace_Name TEMP_Tablespace


For Example:
XDB_password = 123456
XDB_Tablespace_Name = XDBTS
TEMP_Tablespace = TEMP


So now the syntax will look something like this:


SQL> @catqm.sql 123456 XDBTS TEMP