Tuesday, September 8, 2009

Oracle backup replication with Oracle Streams, Data Guard & RAC

As the world’s leading database, Oracle offers a wealth of options to replicate geographically distributed systems. Each method has its own features and it is the job of the Oracle professional to choose the one that makes sense for their business requirements:

  • Data Guard – Data Guard provides standby databases for failover in systems that do not have a high downtime cost. In a Data Guard failover, the last redo log must be applied (it can take up to 20 minutes to re-open the standby database), and if the last redo cannot be recovered from the crashed instance, the most recent updates may be lost.

  • Multi-Master Replication – See the excellent book on implementing multi-master replication. It uses advanced queuing to cross-pollinate many databases with updates. It has an advantage over Data Guard in that the standby database can be used for updates, and a disadvantage in that replication is not instantaneous and update collisions may occur.

  • Oracle Streams – See the comprehensive book on Oracle Streams technology by Madhu Tumma. Ready for production use in Oracle 10g, Oracle Streams allows near real-time replication and supports master-to-master replication. Oracle Streams has no licensing costs (RAC costs extra) and is less complex to configure than a RAC database. The disadvantages compared to RAC are that there is no continuous availability, update collisions may occur, replication is not instant, and application failover is usually done manually.

  • Real Application Clusters – The Cadillac of Oracle continuous availability tools, RAC allows for Transparent Application Failover (TAF) and is de rigueur for systems with a high cost of downtime that require continuous availability.

Most shops will combine these technologies to provide replication that is easy to manage and reliable. Let’s explore such a scenario.

Why Replicate?

Oracle has sophisticated mechanisms to manage data concurrency, and replication is normally used when we have geographically distributed systems with unreliable network connectivity between them. In a perfect world, databases would not be replicated, but the nature of worldwide networking mandates that global eCommerce systems have replicated databases to provide fast response times worldwide.

Because the Oracle Streams product is new, Oracle professionals are only now recognizing how Streams can be used in a master-master replication architecture. Let's take a closer look.

Oracle Streams for master-master replication

Oracle Streams is an ideal solution for systems that are geographically distributed and have a high-speed connection (e.g. a T1 line) between servers. As long as the server interconnect can keep up with the changes, you can implement systems that provide failover and disaster recovery simply and reliably. However, Oracle Streams replication is not for every database:

  • High update systems – Busy update databases may spend excessive resources synchronizing the distributed databases.

  • Real-time replication required – If you require a two-phase commit (where the changes are committed on each database at exactly the same time), then RAC or on-commit replication are a better choice. For example, a banking system would require that all databases update each other simultaneously, and Oracle Streams might not be an ideal solution.

On the other hand, Oracle Streams is perfect for geographically distributed applications where real-time synchronization is not required. Let's explore master-master replication with a hypothetical example. Assume that we have an online database serving the US Federal government, deployed using APEX and therefore accessible anywhere in the world with an internet-enabled web browser. Federal agencies will connect to either the California or the Texas server, based on their APEX shortcut URL. The two systems cross-feed updates via Oracle Streams, and the Texas server will use Data Guard to replicate to a standby database in Kittrell, North Carolina (Figure 1).

Figure 1 – Combining Oracle Streams and Oracle Data Guard

Oracle Streams will provide near real-time replication of important information; in case of a server outage, updates are automatically stored in update queues and applied when service is restored to the crashed Oracle server.

Providing reliable connectivity during network outages is challenging, and Oracle has several tools that will aid TCI in these goals. In case of any type of outage we see alternative connection options:

  • Interconnect is down – All end-users will continue to operate independently, and when the interconnect is restored the update queues will synchronize each database.

  • California or Texas Server down – The end-user will have three icons on their desktop, and a California end-user simply clicks on the URL for the Texas database and continues working.

  • Major Disaster – In this case the Kittrell NC server will be synchronized using Data Guard (with a possible loss of the most recent redo log updates) and end-users with access to the internet can continue to work on the standby server. Upon restoration of service, the Kittrell server will flush all updates back to the main California and Texas databases.

Establishing the replication mechanisms is only part of the task. We also need detailed checks to alert the DBA staff when an outage has occurred. The end-users have it easier: they simply need to know to log in to the surviving Oracle database. Let's look at the replication alert mechanisms.

Streams replication failure alerts

This is a critical component of the replication, so that the Oracle staff can be aware when there has been a server or network failure. The alerts will include checks from the Kittrell server, checks between Texas and California, and checks of the replication queues. Because the connectivity checks must be done outside of Oracle, they can be implemented as Linux cron entries (see the sketch after the list below).

  • Ping from standby server - The Kittrell standby server will check every 15 minutes for connectivity to all servers with a “ping” command. If a server fails to respond, a pager alert will be sent to support staff.

  • California and Texas connectivity checks – The California and Texas servers will cross-check each other, first with tnsping, then with ping, sending appropriate alerts in case of network failure.
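
A minimal sketch of such a cron-driven check (the host names, TNS aliases, e-mail address and script path are hypothetical; a real alert would page the support staff rather than just send mail):

#!/bin/sh
# repl_check.sh - hypothetical connectivity check for the replicated servers
# crontab entry (every 15 minutes):  */15 * * * * /home/oracle/scripts/repl_check.sh

for host in ca-oradb01 tx-oradb01; do
   if ! ping -c 3 $host > /dev/null 2>&1; then
      echo "$host is not answering ping" | mailx -s "Oracle replication alert" oncall@example.com
      continue
   fi
   # Host is reachable - now check that the listener answers
   if ! tnsping ${host}_svc > /dev/null 2>&1; then
      echo "Listener on $host is not answering tnsping" | mailx -s "Oracle replication alert" oncall@example.com
   fi
done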

Creating a self-synchronizing replication architecture

Once the Oracle Streams, Data Guard and alerting pieces are in place, the only challenge for the Oracle DBA is ensuring that the Streams replication queues have enough room to hold updates on the surviving Oracle instance.
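
A rough way to keep an eye on that (a sketch, assuming 10g-style buffered queues; the thresholds you alert on are site-specific):

-- Backlog in the Streams buffered queues (spilled messages indicate memory pressure)
SELECT queue_schema, queue_name, num_msgs, spill_msgs
  FROM v$buffered_queues;

-- Current Streams pool consumption
SELECT pool, name, ROUND(bytes/1024/1024) AS mb
  FROM v$sgastat
 WHERE pool = 'streams pool';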

For more information on using Oracle Streams and Multimaster Replication, we recommend these books and links:

Streams Replication Links:

Monday, September 7, 2009

Some things about EXPLAIN PLAN

1. Creation of plan table:
@?/rdbms/admin/utlxplan.sql

2. Explaining a SQL statement:

EXPLAIN PLAN FOR
SELECT ..............................statement....

3. View the EXPLAIN PLAN output:

set pagesize 0
set linesize 130
@?/rdbms/admin/utlxpls.sql

The above script shows the plan table output for serial processing.

@?/rdbms/admin/utlxplp.sql

The above script shows the output with parallel execution columns.
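
On 9.2 and later you can also display the plan with the DBMS_XPLAN package instead of the utlxpls/utlxplp scripts. A minimal example (the EMP query is just a placeholder for your own statement):

EXPLAIN PLAN FOR
SELECT * FROM emp WHERE deptno = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);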

Delete Operation Explain Plan

DELETE FROM EPDBA01.t_bo_sec_new
 WHERE ROWID IN (SELECT tbo.ROWID
                   FROM EPDBA01.t_bo_sec_new tbo, EPDBA01.t_plan tp
                  WHERE tbo.plan_id = tp.plan_id
                    AND NOT tbo.person_id IN (SELECT person_id
                                                FROM EPDBA01.t_sys_admin)
                    AND tbo.role_id = 2000
                    AND tp.plan_access = 'S')
Plan hash value: 4027940639


Step Explanation:

1. This plan step retrieves all rows from table T_PLAN.
2. This plan step retrieves all rows from table T_BO_SEC_NEW.
3. This plan step accepts two sets of rows, each from a different table.
A hash table is built using the rows returned by the first child.
Each row returned by the second child is then used to probe the hash table to find row pairs
which satisfy a condition specified in the query's WHERE clause.
Note: The Oracle cost-based optimizer will build the hash table using what it thinks is the smaller of the two tables.
It uses the statistics to determine which is smaller, so out of date statistics could cause the optimizer to make
the wrong choice.
4. This plan step retrieves all ROWIDs from the B*-tree index PK_T_SYS_ADMIN by walking the index starting with its smallest key.
5. This plan step accepts multiple sets of rows. Rows from the first set are eliminated using the data found in the second through n sets.
6. This plan step accepts a row set (its only child) and sorts it in order to identify and eliminate duplicates.
7. This plan step represents the execution plan for the subquery defined by the view VW_NSO_1.
8. This plan step retrieves rows from table T_BO_SEC_NEW through ROWID(s) specified in the WHERE clause of the statement.
9. This plan step joins two sets of rows by iterating over the driving, or outer, row set (the first child of the join) and, for each row, carrying out the steps of the inner row set (the second child). Corresponding pairs of rows are tested against the join condition specified in the query's WHERE clause.
10. This plan step deletes rows from table T_BO_SEC_NEW which satisfy the WHERE clause of the DELETE statement.
11. This plan step designates this statement as a DELETE statement.

Oracle powers world's largest commercial database

Oracle has set a new record by powering a 100-terabyte data warehouse, the world's largest. Oracle customers run four of the 10 largest decision support system databases, and Oracle powers nine of the top 10 UNIX OLTP systems. The survey determined that the largest transaction processing system on UNIX runs an Oracle Database containing over 16 TB of data, and Oracle is used by three of the 10 largest UNIX data warehouses.

http://www.oracle.com/corporate/press/2005_sep/091305_wintertopten_finalsite.html

The article says it's the first time Linux had its own categories for DSS and OLTP:

"This year's survey marks the first time that the Linux operating system had its own categories for DSS and OLTP. In total, 12 systems (eight DSS and four OLTP systems) running Linux were measured, all of them running Oracle Database. Two of these Linux data warehouse systems cracked the TopTen All Environments in the database size category. The world's largest Linux data warehouse measures 24.8 TB, good enough for sixth overall in the top 10 largest data warehouse category. Oracle is the only database vendor whose customers posted top results on Linux, UNIX and Windows platforms."

World's largest commercial Linux data warehouse runs Oracle Database

Oracle is the leading database for decision support and online transaction processing in real-world customer environments, according to the Winter Corporation 2005 Top Ten Program survey. Only Oracle Database produces leading results on Linux, UNIX, and Windows.


The survey, which identifies the world's largest and most heavily used databases, found that the largest commercial data warehouse in the world runs a 100 terabyte Oracle Database. That's more than triple the size of the largest database in the previous TopTen Program survey, which was also powered by Oracle.

The survey's newest category covering Linux DSS and OLTP databases was completely dominated by Oracle customers. Overall, the survey found that Oracle is the only database vendor whose customers posted top results on Linux, UNIX, and Windows platforms.

Other survey findings:


- The world's largest commercial database runs Oracle Database, with 100TB of data.
- The world's largest commercial data warehouse runs Oracle Database.
- The world's largest commercial UNIX data warehouse runs Oracle Database.
- The world's largest commercial Linux data warehouse runs Oracle Database.
- The world's largest scientific database runs Oracle Database.
- Oracle Database powers nine of the world's top 10 UNIX OLTP systems.
- Oracle Database powers 100 percent of all Linux DSS and OLTP measured in the Winter Corporation 2005 TopTen program.
- Oracle customers represent 58 percent of all validated participants in the Winter Corporation 2005 TopTen program.

The Winter Corporation 2005 TopTen Program surveyed customers from 20 countries on five continents, in 11 industries from government to healthcare to retail to telecommunications, among others.

The Program identifies the world's leading database implementations based on Database Size and Most Rows/Records. As part of the rigorous survey process, respondents must have a validated database size that meets the survey's requirements.

Source: http://www.ameinfo.com/72074.html
Largest Oracle database tops 100 trillion bytes: This article notes that the world's largest data warehouses are running Oracle with Linux.

Database Failover Steps (Oracle Dataguard / Physical Standby Database)

1. Sync last archive logfile.

2. Activate standby database by using the following statements:

* Recover managed standby database cancel;
* alter database activate standby database;
* shutdown immediate;
* startup;

3. Hot backup immediately.

4. Build a new standby database (a combined sketch of steps 1-3 follows below).
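
A combined sketch of steps 1-3, run on the standby in SQL*Plus as SYSDBA (the V$ARCHIVE_GAP query is only an optional pre-check for missing archived logs):

-- Optional: check for archived log gaps before failing over
SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;

-- Stop managed recovery and activate the standby
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE ACTIVATE STANDBY DATABASE;

-- Bounce the database so it opens as the new primary
SHUTDOWN IMMEDIATE
STARTUP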

Installing the XML & JAVA Components on Oracle Database

Shut down the instance, then create and run the following SQL script (full_rmjvm.sql, which removes the JVM and XML components) from a new SQL*Plus session:

-- Start of File full_rmjvm.sql
spool full_rmjvm.log
set echo on
connect / as sysdba
startup mount
alter system set "_system_trig_enabled" = false scope=memory;
alter system enable restricted session;
alter database open;
@?/rdbms/admin/catnoexf.sql
@?/rdbms/admin/catnojav.sql
@?/xdk/admin/rmxml.sql
@?/javavm/install/rmjvm.sql
truncate table java$jvm$status;
select * from obj$ where obj#=0 and type#=0;
delete from obj$ where obj#=0 and type#=0;
commit;
select owner, count(*) from all_objects
where object_type like '%JAVA%' group by owner;
select obj#, name from obj$
where type#=28 or type#=29 or type#=30 or namespace=32;
select o1.name from obj$ o1,obj$ o2
where o1.type#=5 and o1.owner#=1 and o1.name=o2.name and o2.type#=29;
shutdown immediate
set echo off
spool off
exit
-- End of File full_rmjvm.sql

OR, to install (reload) the JVM and XML components:

-- Start of File full_jvminst.sql
spool full_jvminst.log;
set echo on
connect / as sysdba
startup mount
alter system set "_system_trig_enabled" = false scope=memory;
alter database open;
select obj#, name from obj$
where type#=28 or type#=29 or type#=30 or namespace=32;
@?/javavm/install/initjvm.sql
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
@?/xdk/admin/initxml.sql
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
@?/xdk/admin/xmlja.sql
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
@?/rdbms/admin/catjava.sql
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
@?/rdbms/admin/catexf.sql
select count(*), object_type from all_objects
where object_type like '%JAVA%' group by object_type;
shutdown immediate
set echo off
spool off
exit
-- End of File full_jvminst.sql
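
After the reinstall, a quick sanity check (a sketch; the exact COMP_NAME values in DBA_REGISTRY vary slightly between releases):

-- No Java objects should be left INVALID
select count(*) from dba_objects
where object_type like '%JAVA%' and status = 'INVALID';

-- Registry status of the JVM-related components
select comp_name, version, status from dba_registry
where comp_name like '%JAVA%';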


You can also get the same tips from the original Oracle Metalink note:
Subject: How to Reload the JVM in 10.1.0.X and 10.2.0.X
Doc ID: Note:276554.1

How to deinstall and install Oracle XML database (XMLDB/XDB)?

If you are on database release 10.1.x or 10.2.x, the XDB feature is mandatory in order to use any of the member functions of XMLTYPE. This is true even if you are not using the repository or registered-schema aspects of the XDB feature. Before we begin to install it, let's check the steps for removing it manually.

====> Removal of XML DB

1) Shutdown and restart the database

2) Connect as sysdba and run catnoqm.sql script.
connect / as sysdba
@?/rdbms/admin/catnoqm.sql
drop trigger sys.xdb_installation_trigger;
drop trigger sys.dropped_xdb_instll_trigger;
drop table dropped_xdb_instll_tab;

3) Modify parameter values in init.ora or spfile:
shared_pool_size=150M   # or a larger value
java_pool_size=150M     # or a larger value

====> Installation of XML DB
Create an XDB tablespace as the XML DB repository storage and make sure it has 150MB of free space, for example as sketched below. Restart the database to make the parameters take effect.
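
For example, something along these lines (the tablespace name, datafile path and size are illustrative):

CREATE TABLESPACE xdbts
  DATAFILE '/u01/app/oracle/oradata/mydb/xdbts01.dbf' SIZE 200M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;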

Now we are ready to install a new XDB:

1) Connect as sysdba and run catqm.sql script.
set echo on
spool xdb_install.log
@?/rdbms/admin/catqm.sql xdb_user_pass xdb_tbs temp_tbs

2) If you are using Oracle 9.2, reconnect as SYSDBA and run the catxdbj.sql script. Oracle 10g also has this script, but it does nothing there.
@?/rdbms/admin/catxdbj.sql

3) Change database system parameters in init.ora or spfile.

a) Non-RAC

dispatchers="(PROTOCOL=TCP)(SERVICE=XDB)"

b) RAC

inst1.dispatchers="(PROTOCOL=TCP)(SERVICE=XDB)"
inst2.dispatchers="(PROTOCOL=TCP)(SERVICE=XDB)"

4) Make sure there are no invalid objects in the XDB schema, and check the XML DB status in DBA_REGISTRY.

select count(*) from dba_objects
where owner='XDB' and status='INVALID';

select comp_name, status, version from DBA_REGISTRY
where comp_name= 'Oracle XML Database';

5) Bounce the database to enable the XMLDB protocol.

Manual Installation of XML DB

To manually install Oracle XML DB without using DBCA, perform the following steps:

After the database installation, connect as SYS and create a new tablespace for the Oracle XML DB repository. Run the catqm.sql script in the ORACLE_HOME/rdbms/admin directory to create the tables and views needed for XML DB:

catqm.sql XDB_password XDB_Tablespace_Name TEMP_Tablespace


For Example:
XDB_password = 123456
XDB_Tablespace_Name = XDBTS
TEMP_Tablespace = TEMP


So now the syntax will look something like this:


SQL> @?/rdbms/admin/catqm.sql 123456 XDBTS TEMP

Wednesday, March 18, 2009

How can one improve Import/ Export performance?

EXPORT:

1) Set the BUFFER parameter to a high value (e.g. 2MB -- entered as an integer "2000000")
2) Set the RECORDLENGTH parameter to a high value (e.g. 64KB -- entered as an integer "64000")
3) Use DIRECT=yes (direct mode export)
4) Stop unnecessary applications to free-up resources for your job.
5) If you run multiple export sessions, ensure they write to different physical disks.
6) DO NOT export to an NFS mounted filesystem. It will take forever.

IMPORT:

1) Create an indexfile so that you can create indexes AFTER you have imported data. Do this by setting INDEXFILE to a filename and then import. No data will be imported but a file containing index definitions will be created. You must edit this file afterwards and supply the passwords for the schemas on all CONNECT statements.
2) Place the file to be imported on a separate physical disk from the oracle data files
3) Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the init$SID.ora file
4) Set the LOG_BUFFER to a big value and restart oracle.
5) Stop redo log archiving if it is running (ALTER DATABASE NOARCHIVELOG;)
6) Create a BIG tablespace with a BIG rollback segment inside. Set all other rollback segments offline (except the SYSTEM rollback segment of course). The rollback segment must be as big as your biggest table (I think?)
7) Use COMMIT=N in the import parameter file if you can afford it
8) Use STATISTICS=NONE in the import parameter file to avoid the time-consuming import of statistics
9) Remember to run the indexfile created previously (see the sketch after this list)
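
Putting several of these tips together, a rough sketch of the export and import command lines (the dump file names and connect strings are illustrative; note that BUFFER only applies to conventional-path exports):

# Export with direct path and a large record length
exp system/manager FULL=Y FILE=full_db.dmp LOG=exp_full.log DIRECT=Y RECORDLENGTH=64000 BUFFER=2000000

# Pass 1: write the index DDL to a file; no data is imported
imp system/manager FULL=Y FILE=full_db.dmp LOG=imp_ixf.log INDEXFILE=create_indexes.sql

# Pass 2: import the data without indexes or statistics
imp system/manager FULL=Y FILE=full_db.dmp LOG=imp_full.log INDEXES=N COMMIT=N STATISTICS=NONE

# Finally, run the (edited) create_indexes.sql script from SQL*Plus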

Common Import/ Export Problems

ORA-00001: Unique constraint (...) violated
You are importing duplicate rows. Use IGNORE=YES to skip tables that already exist (otherwise imp will give an error when the object is re-created).

ORA-01555: Snapshot too old
Ask your users to STOP working while you are exporting or try using parameter CONSISTENT=NO

ORA-01562: Failed to extend rollback segment
Create bigger rollback segments or set parameter COMMIT=Y while importing

IMP-00015: Statement failed ... object already exists...
Use the IGNORE=Y import parameter to ignore these errors, but be careful as you might end up with duplicate rows.

Wednesday, March 11, 2009

Enabling ARCHIVELOG Mode

This is a good review of enabling archivelog mode - an article from cuddletech (http://www.cuddletech.com/articles/oracle/node58.html)

Most of the High Availability features of Oracle require you to enable ARCHIVELOG mode for your database. When you enable this mode, redo logs will be archived instead of overwritten. The archive logs are stored in a separate place and can usually be backed up regularly by your standard filesystem backup system (NetBackup or whatever). Archive logs are utilized by RMAN, Data Guard, Flashback and many others.

If you're going to enable archivelog mode on a real database that's important to you, I would recommend shutting down the database and doing a cold backup just in case. Keeping a “final noarchivelog mode backup” seems to be a good and accepted practice.

Enabling archive mode is simple: just connect to your database in mounted but closed mode (startup mount) and alter the database. But if you don't tune a little you'll run into problems down the road, so let's specify some parameters too. Namely, consider LOG_ARCHIVE_DEST.

Let's start by checking the current archive mode.

SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;
LOG_MODE
------------
NOARCHIVELOG


So we're in NOARCHIVELOG mode and we need to change it. We could use an ALTER SYSTEM statement, but that won't be permanent, so let's just update the pfile directly. The pfile should be in either $ORACLE_BASE/admin/SID/pfile or $ORACLE_HOME/admin/SID/pfile. I'll add the following lines to the end of the file:

############################
# Archive Log Destinations -benr(10/15/04)
############################
log_archive_dest_1='location=/u02/oradata/cuddle/archive'
log_archive_start=TRUE


Note that we're not actually required to specify the location of the log destination, but if you don't, the logs will end up in strange places (in my test they went to $ORACLE_HOME/dbs, making a mess). You can specify as many as 10 different archive log destinations by using the parameters log_archive_dest_1 through log_archive_dest_10. Remember, if you run out of space in your archive log destination the database will shut down!
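
If your database uses an spfile rather than a pfile, the equivalent change (a sketch, using the same illustrative path) can be made before the restart with:

SQL> alter system set log_archive_dest_1='location=/u02/oradata/cuddle/archive' scope=spfile;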

Now we can startup the database in mount mode and put it in archivelog mode.

[oracle@vixen pfile]$sqlplus sys/passwd as sysdba;
SQL*Plus: Release 10.1.0.2.0 - Production on Fri Oct 15 16:00:58 2004
Copyright (c) 1982, 2004, Oracle. All rights reserved.
Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 184549376 bytes
Fixed Size 1300928 bytes
Variable Size 157820480 bytes
Database Buffers 25165824 bytes
Redo Buffers 262144 bytes
Database mounted.

SQL> alter database archivelog;
Database altered.

SQL> alter database open;
Database altered.

You can see here that we put the database in ARCHIVELOG mode by using the SQL statement “alter database archivelog”, but Oracle won't let us do this unless the instance is mounted but not open. To make the change we shut down the instance and then start it again, this time with the “mount” option, which mounts the database without opening it. Then we can enable ARCHIVELOG mode and open the database fully with the “alter database open” statement.

There are several system views that can provide us with information regarding archiving, such as:

V$DATABASE - Identifies whether the database is in ARCHIVELOG or NOARCHIVELOG mode and whether MANUAL (archiving mode) has been specified.
V$ARCHIVED_LOG - Displays historical archived log information from the control file. If you use a recovery catalog, the RC_ARCHIVED_LOG view contains similar information.
V$ARCHIVE_DEST - Describes the current instance, all archive destinations, and the current value, mode, and status of these destinations.
V$ARCHIVE_PROCESSES - Displays information about the state of the various archive processes for an instance.
V$BACKUP_REDOLOG - Contains information about any backups of archived logs. If you use a recovery catalog, the RC_BACKUP_REDOLOG contains similar information.
V$LOG - Displays all redo log groups for the database and indicates which need to be archived.
V$LOG_HISTORY - Contains log history information such as which logs have been archived and the SCN range for each archived log.

Using these views we can verify that we are in fact in ARCHIVELOG mode:

SQL> select log_mode from v$database;
LOG_MODE
------------
ARCHIVELOG

SQL> select DEST_NAME,STATUS,DESTINATION from V$ARCHIVE_DEST;

Archivelog ORA-16014 log sequence not archived…

One common mistake in 10g when enabling archivelog mode is to use the flash recovery area default settings as the repository for the archived redo logs. The consequences show up 20-30 days after archivelog mode has been enabled.
By default the Oracle 10g Database Configuration Assistant (DBCA) sets the flash recovery area size to 2GB (the db_recovery_file_dest_size parameter), and when we enable archivelog mode the database will use the default db_recovery_file_dest parameter as the destination for our archivelogs unless we set some other directory as their destination.


A common small 10g database generates an average of 100MB of archivelog files daily, sometimes even on weekend days when most application DML is almost zero. Why does this happen? The reason is the Automatic Workload Repository (AWR) and the Automatic Database Diagnostic Monitor (ADDM). AWR collects detailed performance-related metrics from the database at regular intervals, known as snapshots. After each snapshot is taken, ADDM is invoked to thoroughly analyze the data and metrics deriving from the difference between snapshots, and then recommend necessary actions. This generates lots of DML and consequently lots of redo data.

So, after some days, your flash recovery area will reach the default 2GB because the database, with its default settings, is archiving there. When this happens the database will be unable to archive because the flash recovery area is full. This happens even if we have lots of free disk space!

In our alert log file we’ll see something like this:

ORA-16038: log one sequence 3144 cannot be archived
ORA-19809: limit exceeded for recovery files
ORA-00312: online log string thread string: '/u01/app/oracle/oradata/mydb/redo01.log'
Sat Oct 11 10:43:56 2008
ARCH: Archival stopped, error occurred. Will continue retrying
Sat Oct 11 10:43:56 2008
ORACLE Instance neo - Archival Error


Oracle saves all the information about what we place in the flash recovery area in the RMAN repository/controlfile. If it determines that there is not sufficient space in the recovery file destination, as set by db_recovery_file_dest_size, then archiving will fail. Just deleting the old backups and archive logs from disk is not sufficient, as it is the RMAN repository/controlfile that holds the space-used information.

How to fix this? If we google it we see lots of places saying to execute “delete archivelog all” in RMAN. This is the fast and easy, but dirty, solution, as we do not want to delete all our precious archivelogs and endanger the ability to do future database recovery. The fix can be done with any of the following solutions:

Solution 1 - Delete unwanted archive log files at the OS level, then crosscheck the archivelogs so that the controlfile is marked to show they have been deleted, and delete the expired ones.
[oracle@app oracle]$ cd /u01/app/oracle/flash_recovery_area/mydb/
[oracle@app oracle]$ rm archivelogs*
[oracle@app oracle]$ rman target /
connected to target database: MYDB (DBID=1649570311)
RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;


Solution 2 - Connect to RMAN to back up and then delete your archivelogs. This is a much better solution.
[oracle@app oracle]$ rman target /
connected to target database: MYDB (DBID=1649570311)
RMAN> backup archivelog until logseq <n> delete all input;
or
RMAN> backup archivelog until time 'sysdate-15' delete all input;


Solution 3 - Dynamically increase the db_recovery_file_dest_size parameter (no database shutdown/startup required).
[oracle@app oracle]$ sqlplus "/ as sysdba"
SQL> alter system set db_recovery_file_dest_size=4G;


Then, if needed (usually with solutions 1 and 2), just shut down and start up your database and it will be back up again. You may have to shutdown abort if the database does not shut down normally.

To avoid the same problem in the future when the archivelog volume reaches db_recovery_file_dest_size, we should set the following parameters to define an alternate place to archive the redo logs:

log_archive_dest_1='LOCATION=use_db_recovery_file_dest NOREOPEN ALTERNATE=LOG_ARCHIVE_DEST_2'
log_archive_dest_2='LOCATION=/other_destination_for_archiving'
log_archive_dest_state_1='enable'
log_archive_dest_state_2='alternate'
db_recovery_file_dest='/u01/app/oracle/flash_recovery_area/mydb/'
db_recovery_file_dest_size=2G
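
To keep an eye on how full the flash recovery area is getting, you can query the recovery area views (a sketch; V$FLASH_RECOVERY_AREA_USAGE appears in 10gR2):

-- Overall recovery area quota and usage
select name, space_limit/1024/1024 as limit_mb,
       space_used/1024/1024 as used_mb,
       space_reclaimable/1024/1024 as reclaimable_mb
from v$recovery_file_dest;

-- Breakdown by file type (archive logs, backup pieces, flashback logs, ...)
select file_type, percent_space_used, percent_space_reclaimable
from v$flash_recovery_area_usage;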


Tuesday, March 10, 2009

Setup Oracle Database Archivelog mode

A short musical video tutorial showing how easily you can configure Oracle Database to use archivelog mode. Using archivelog mode is highly recommended in any production database and allows RMAN hot backups (online backups). You can also use RMAN without archivelog mode, but you'll have to shut down the database to back it up (cold backup).

Let’s system of a down :) Beware… highly advisable to TURN OFF sound if you don’t like hard rock :P



It can be done easily with the following statements:

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;


If your database version is prior to 10g, then after the shutdown and before opening the database you must set log_archive_start = TRUE in your init.ora and/or spfile. After that, log in as "sys" and you can check it with:

SQL> archive log list;
SQL> select log_mode from v$database;


Note: In this case we need to do a quick restart of the database to enable "Archive Log Mode", so please ensure you have the required outage window.