Creating an RMAN Recovery Catalog

Need to create an RMAN recovery catalog? Check the Creating a Recovery Catalog section of the Backup and Recovery User's Guide.

Steps listed there are:
1. Create a tablespace for the catalog
2. Create a user that will own the schema and assign to it the newly created tablespace.
3. Grant the RECOVERY_CATALOG_OWNER role to the user, and if 11g or higher, grant CONNECT as well.

SQL> create tablespace TOOLS datafile 'C:\ORADATA\RCATDB\TOOLS_01.DBF' size 100M;

Tablespace created.

SQL> create user RCAT identified by RCAT default tablespace TOOLS temporary tablespace TEMP quota unlimited on TOOLS;

User created.

SQL> grant recovery_catalog_owner to RCAT;

Grant succeeded.

SQL> grant connect to RCAT;

Grant succeeded.

4. Using the RMAN utility, log in with the CONNECT CATALOG syntax as the recovery catalog schema owner and run CREATE CATALOG to build the schema. You can specify a tablespace if desired.

C:\Users\emcadmin> rman

Recovery Manager: Release - Production on Wed Jan 2 18:30:33 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect catalog RCAT/RCAT

connected to recovery catalog database

RMAN> create catalog tablespace tools;

recovery catalog created

At this point one can begin registering the databases to be backed up within the catalog.

RMAN> connect target;

RMAN> register database; 
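Once a target is registered, a couple of quick RMAN commands will confirm the catalog knows about it (a sketch; the output will vary by database):

```sql
RMAN> report schema;       -- lists the tablespaces and datafiles of the connected target
RMAN> list incarnation;    -- shows the database incarnation(s) recorded in the catalog
```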

SQL for checking RMAN catalog or control file for backup information

Do you need a quick query to check the local control file to see if backups succeeded? Use the query below if there is no RMAN catalog involved.

Query to use when storing metadata in the control file.
The SID column will be 0 unless working with Oracle RAC.
select sid, object_type, status, 
round((end_time - start_time) * 24 * 60, 2) duration_minutes,  
to_char(start_time, 'mm/dd/yyyy hh:mi:ss') start_time, 
to_char(end_time, 'mm/dd/yyyy hh:mi:ss') end_time,
round((input_bytes/(1024*1024*1024)),2) input_gb, 
round((output_bytes/(1024*1024*1024)),2) output_gb
from v$rman_status
where operation = 'BACKUP';

If you are using the RMAN catalog, then you can run this query instead.

Query to use when storing metadata in the RMAN catalog.
select db_name, object_type, status, 
round((end_time - start_time) * 24 * 60, 2) duration_minutes,  
to_char(start_time, 'mm/dd/yyyy hh:mi:ss') start_time, 
to_char(end_time, 'mm/dd/yyyy hh:mi:ss') end_time,
round((input_bytes/(1024*1024*1024)),2) input_gb, 
round((output_bytes/(1024*1024*1024)),2) output_gb
from rc_rman_status
where operation = 'BACKUP'
and end_time > (SYSDATE - 31);

Oracle 11G: Encryption Everywhere

The following is another entry I wrote in 2010 that I’ve refrained from publishing. It’s a follow-on to this overview of Oracle 11g security.

Keeping data safe in today’s computing environment demands encryption. In fact, the law often requires it. Several states in the USA now have laws stating that companies must safeguard personal data belonging to consumers. If the data gets stolen, and if the data is not encrypted, then the company that suffered the data breach must compensate consumers for their loss of privacy. So naturally, companies want to encrypt personal information stored within their databases. Oracle 11G makes such encryption straightforward. This article summarizes older encryption features and details the new encryption features in 11G releases 1 and 2.

Here’s a quick rundown on what’s been available for encryption for the last few releases. The ability to encrypt data as it travels across the network to and from the database has been available for several releases. Called sqlnet encryption, it requires purchasing the extra cost Advanced Security option and dates back to at least Oracle 8i. Note that sqlnet encryption does not encrypt the data once it is inside the database residing on disk, or at rest, as the catch phrase goes. Another feature available from Oracle 8i onward is a set of PL/SQL functions to encrypt and decrypt a piece of data. These functions can be used with custom programming to ensure the encryption of key data in the database. However, they demand custom programming and make it impossible to index the columns storing such data.

Oracle 10g announced the arrival of Transparent Data Encryption (TDE), enabling the database to encrypt all data in a column such that a programmer did not have to alter a SQL statement to encrypt and decrypt it. Users needed only to set up the encryption wallet for the Oracle instance and then run an ALTER TABLE statement to modify the column to be encrypted. Provided a user had SELECT privileges on the table with encrypted columns, that user could transparently access the data using exactly the same SQL statement required if the column were not encrypted. And a hacker with unauthorized access to the underlying data file would not be able to see the data. Version 10g also enhanced the RMAN backup utility so it could create encrypted output files.

Despite the TDE feature, encryption in 10g had serious flaws. You had to specify individual columns and encrypt them one at a time. Certain column types could not be encrypted, most notably BLOB columns. If you created a new table or added columns with sensitive data, you had to remember to encrypt them.

Oracle 11G solves these shortcomings. The new tablespace encryption feature enables an entire tablespace to be encrypted. Any new table created in that tablespace will be encrypted. 11G also provides encryption for the export Data Pump (expdp) utility. And BLOB columns can now be encrypted using an 11G feature called SecureFiles. Please note that all three of these features require purchase of the extra cost Advanced Security option, which can only be used with the Enterprise Edition of the Oracle database.

Here’s how to implement tablespace encryption. As in previous releases, you start by creating the Wallet, which contains the master encryption key for the entire database.

The default location for the wallet is $ORACLE_BASE/admin/$ORACLE_SID/wallet. You can override this in the $TNS_ADMIN/sqlnet.ora file by adding an ENCRYPTION_WALLET_LOCATION entry.
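As a sketch, such a sqlnet.ora entry might look like the following; the directory path here is only an illustration, so adjust it for your environment:

```
ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/admin/TEST/wallet)))
```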


You must then create a master encryption key, using an ALTER SYSTEM command.
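A minimal sketch of that command follows; the wallet password shown is a placeholder, and this statement also creates the wallet file if one does not yet exist:

```sql
SQL> alter system set encryption key identified by "Wallet_Pwd#1";
```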


A note for release 11G R1 versus 11G R2. In 11G R1, you can only create an encryption key once. In 11G R2, you can reset the encryption key by running the above command again. Doing so will not invalidate already encrypted data. This feature is to provide support for environments where regulations require periodic changing of the encryption key. I have not tested what the performance or overhead penalty is for resetting the encryption key.

Whenever you open the database, you must open the wallet.
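A sketch of the open command, again with a placeholder wallet password:

```sql
SQL> alter system set encryption wallet open identified by "Wallet_Pwd#1";
```

If the wallet is not open, queries against encrypted data will fail with an error rather than return cleartext.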


With the wallet in place, you can now create an encrypted tablespace. Note that you cannot ALTER an existing tablespace to become encrypted. One can move an existing table from an unencrypted tablespace to an encrypted one, however. The following is sample code for creating an encrypted tablespace, then creating a table inside it using the CREATE TABLE AS SELECT method.

-- Make unencrypted tablespace
create tablespace foo
datafile '/u02/oradata/TEST/foo_1.dbf' size 100M ;

-- Make encrypted tablespace
create tablespace foo_enc 
datafile '/u02/oradata/TEST/foo_enc_01.dbf' size 100M 
encryption using 'AES256' 
default storage (encrypt);

-- Put a table in the unencrypted tablespace. Query the all_objects view to make table.
16:55:06 SQL> create table decrypt_object tablespace foo as select * from all_objects;
Table created.
Elapsed: 00:00:08.08

-- Put a table in the encrypted tablespace. Query the all_objects view to make table.
16:54:50 SQL> create table enc_object tablespace foo_enc as select * from all_objects;
Table created.
Elapsed: 00:00:11.23
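Since an existing tablespace cannot be altered to become encrypted, the practical route for existing data is to move tables into an encrypted tablespace. A sketch, using the tablespace and table names from the example above; note that a move invalidates the table's indexes, which then need rebuilding:

```sql
-- Move the existing table into the encrypted tablespace.
SQL> alter table decrypt_object move tablespace foo_enc;

-- Any indexes on the table are now UNUSABLE and must be rebuilt, e.g.:
-- SQL> alter index decrypt_object_ix rebuild;
```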

Finally, an illustration with unix commands shows that the underlying datafiles for the tablespace are indeed encrypted. The first command shows the grep utility scanning for the string value “ALL_TABLES” in the unencrypted data file. It finds a match for a SYNONYM and a VIEW.

$ grep ALL_TABLES foo_1.dbf

But when the same command is run against the data file for the encrypted tablespace, no matches are found.

$ grep ALL_TABLES foo_enc_01.dbf

LOB columns could not be encrypted with Oracle 10G encryption. In 11G, you have two ways to encrypt LOB columns. Tablespace encryption will work. I verified this by loading BLOB files into tables residing in the FOO and FOO_ENC tablespaces created above and then repeating the test of scanning the underlying data files in each for strings in the LOB files. You can also encrypt LOB columns with a new feature called SecureFiles. I’m not sure what the performance impact is of using SecureFiles vs tablespace encryption for encrypting LOBs.

SecureFiles is a new option to store BLOB and CLOB columns, and the old way is now called BasicFiles. SecureFiles offers performance enhancements over BasicFiles. Oracle recommends using SecureFiles for LOB storage over the older BasicFiles. Caching, locking, the write mechanism and logging are all enhanced in SecureFiles. And if you pay for the extra cost Advanced Security option, you can encrypt your SecureFile columns. Note that there is also another extra cost option called Advanced Compression, which enables you to use the compression and deduplication features of SecureFiles. These won’t be discussed here.

Before discussing the encryption feature for SecureFiles, I’ll provide some background information on deploying SecureFiles in general.

A SecureFile LOB column must be created in a tablespace using Automatic Segment Space Management (ASSM). To convert an existing pre-11G LOB column to a SecureFile column, you have the following options.

  • CREATE TABLE AS SELECT. Use a CTAS statement to insert the data from the existing column into a SecureFile column in a new table.
  • INSERT INTO using SELECT. Pre-create the target table with the SecureFile column and then run an INSERT INTO statement.
  • Online table redefinition.
  • Export/Import. You can use the expdp and impdp utilities to load the data. Note that the old exp and imp do not support encryption, so if you plan to import into an encrypted securefile, you will need expdp and impdp.
  • Create a new column, update the new column with the values in the original column, then drop the old column.
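For instance, the CTAS approach might look like this sketch, assuming an existing table old_docs with a BasicFile column blob_content (the table and column names are illustrative):

```sql
-- Create a new table whose LOB column is stored as a SecureFile,
-- populated from the existing BasicFile table.
create table new_docs
  lob (blob_content) store as securefile
  as select * from old_docs;
```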

The syntax for creating a SecureFile column looks like this:

create table company_docs
(doc_id number not null primary key,
name varchar2(255) not null,
blob_content blob)
tablespace company_docs_data
lob (blob_content) store as securefile;

For details on the SecureFiles storage clause, see the Oracle documentation.

If you would like to create the SecureFile column in encrypted format, you need to have the Oracle Wallet set up first, as described above. Once that is in place, the syntax for creating a SecureFile LOB column with encryption turned on is the following.

CREATE TABLE encrypt_a_lot 
(id NUMBER,  document  BLOB) 
LOB (document) STORE AS SECUREFILE
(ENCRYPT [ USING 'encrypt_algorithm' ] [ IDENTIFIED BY password ]);

All records in the LOB column get encrypted, and that includes records across all partitions if the LOB column is spread over partitions. Note that in the above example, the ‘encrypt_algorithm’ indicates the name of the encryption algorithm. Valid algorithms are:

  • 3DES168
  • AES128
  • AES192 (default)
  • AES256

The last encryption feature to be covered here is support for encrypting the output of Oracle’s export utility, expdp, also known as Data Pump. The old export utility exp does NOT support encryption.

The command line for datapump now has four parameters that govern the encryption feature.

  • ENCRYPTION will encrypt part or all of a dump file. Valid keyword values are: ALL, DATA_ONLY, ENCRYPTED_COLUMNS_ONLY, METADATA_ONLY and NONE.
  • ENCRYPTION_ALGORITHM specifies how encryption should be done. Valid keyword values are: AES128, AES192 and AES256. The default choice is AES128.
  • ENCRYPTION_MODE indicates the method of generating the encryption key. Valid keyword values are: DUAL, PASSWORD and TRANSPARENT. The default choice is TRANSPARENT.
  • ENCRYPTION_PASSWORD indicates the password key for creating encrypted data within a dump file. It is required only when using a password-based encryption mode.

Here’s an illustration of using datapump to create encrypted output. First is a sample of running the utility with no encryption to create an output file called cleartext.dmp.

C:\oracle\admin\TESTDB\dpdump>expdp system/pwd directory=data_pump_dir dumpfile=cleartext.dmp

Here the command is run again with encryption set to all.

C:\oracle\admin\TESTDB\dpdump>expdp system/pwd directory=data_pump_dir dumpfile=crypto.dmp tables=dmg.demo_states encryption=all

The next sample reveals that the unencrypted output is vulnerable to inspection with simple-to-use tools like findstr.

C:\oracle\admin\TESTDB\dpdump>findstr /C:"NEW JERSEY" cleartext.dmp

However the encrypted output file crypto.dmp cannot be parsed with text tools. No match is returned.

C:\oracle\admin\TESTDB\dpdump>findstr /C:"NEW JERSEY" crypto.dmp
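The examples above rely on the wallet (TRANSPARENT mode). If no wallet is configured, a password-based encryption mode can be used instead. A sketch; the password value is a placeholder:

```shell
expdp system/pwd directory=data_pump_dir dumpfile=crypto_pwd.dmp encryption=all encryption_mode=password encryption_password=MySecret1
```

The same password must then be supplied to impdp when loading the dump file back in.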

In summary, Oracle 11G has significant new features to support encrypting data. You’ll want to plan carefully what you encrypt. There’s probably no reason to encrypt data unless it is privacy-oriented, also known as personally identifiable information or PII. Such data can be used to uniquely identify an individual, such as a social security number. You’ll want to identify precisely which columns contain PII and then take measures to encrypt those columns or place the tables containing those columns in an encrypted tablespace. If your database does contain PII, you may also want to explore encrypting the data as it is transmitted using https and encrypted SQL*NET traffic, and investigate ensuring backups are encrypted by running RMAN encryption or using encrypted tape or disk devices. Oracle 11G enables you to encrypt your data at just about every level.

Oracle 11G: More Secure Than Before?

I wrote this entry in 2010 and never posted it. However, with Oracle 12c about to be released, I figured I’d post it now. It will serve as a point of comparison to the security features that will come with Oracle 12c.

Pressure is increasing to make computer deployments more secure. Ten to fifteen years ago, corporate computing security focused far away from the database layer. Implementing firewalls for web servers and the https protocol was enough to satisfy many security requirements in those days. Now, security specialists routinely examine database deployments to see whether security best practices are in place. Oracle 11G brings several notable improvements to the Oracle security feature set. This article will focus on non-encryption security enhancements. A subsequent posting will describe the new encryption features available in 11G R1 and R2.

Several of the new 11G security features are available with the core Oracle database license and do not require purchasing the extra cost Advanced Security option. Five features are implemented as init.ora parameters. Another feature is implemented via a new mechanism to govern who gets access to sensitive packages that can be used to hack into the database.

Want your passwords to be case-sensitive? Just set the init.ora parameter SEC_CASE_SENSITIVE_LOGON to true. Here are sql statements run in SQL*PLUS illustrating how this works. Note that this parameter affects the entire instance.

-- Connect as system, examine password case sensitivity setting, make a new user.
SQL> connect system/password@TEST;
SQL> show parameter sec_case_sensitive_logon
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sec_case_sensitive_logon             boolean     TRUE
SQL> create user freddy identified by Freddy;
SQL> grant create session to freddy;

-- Get an error, then connect OK.
SQL> connect freddy/freddy@TEST
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.
SQL> connect freddy/Freddy@TEST

A word of caution when using database links with case-sensitive passwords turned on. When connecting from a pre-11G database to an 11G database with case sensitivity turned on, you need to make the password within the 11G system upper case so that the older database can successfully log in via the link to the 11G system. And while on the topic of passwords and database links, know that the password hash formerly visible in the DBA_USERS.PASSWORD column can no longer be viewed. What about viewing the password provided when creating a database link? Didn’t you use to be able to see that in the password column of USER_DB_LINKS? As of 10G R2 (not 11G, but the release before), passwords for database links are stored in encrypted format, and even the encrypted value is only visible to users with access to the view SYS.LINK$.

What else can you do with the new init.ora security parameters in 11G? The parameter SEC_MAX_FAILED_LOGIN_ATTEMPTS governs how many times a user can attempt to log in with an incorrect password before getting their account locked. The default value is 10. SEC_PROTOCOL_ERROR_TRACE_ACTION determines what type of notification should occur when the database receives bad packets, as might happen during a denial of service attack. You can specify that no notification be sent (NONE), that a short message be written to the alert log (LOG), that a detailed trace file be generated (TRACE), or that a short alert log message plus a notification to OEM be sent (ALERT). The default value is TRACE. SEC_PROTOCOL_ERROR_FURTHER_ACTION specifies the fate of connections that send bad packets. They can CONTINUE, get DROPped or be DELAYed. The default value is CONTINUE. Lastly, SEC_RETURN_SERVER_RELEASE_BANNER allows the DBA to avoid sending the complete database version information to an incoming session, making it harder for hackers to figure out the exact version the database is running. The default value is FALSE, sending only minimal information about the database version.
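These parameters are set like any other init.ora parameter. A sketch, with illustrative values; both of these particular parameters are static, so the change takes effect at the next instance restart:

```sql
SQL> alter system set sec_max_failed_login_attempts = 5 scope=spfile;
SQL> alter system set sec_return_server_release_banner = false scope=spfile;
```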

The last security feature to be discussed here is the new method of enabling users to invoke the system packages UTL_HTTP, UTL_MAIL, UTL_SMTP and UTL_TCP. Hackers target these packages since they make it possible for database sessions to send email, communicate via HTTP protocol and via TCP/IP. In previous database releases, the Oracle PUBLIC user had EXECUTE privileges on these packages by default, thereby making it possible for any user to use them. In 11G, the PUBLIC user still has EXECUTE privileges on them. However, now more is needed for database users to successfully invoke the procedures that belong to these packages. In addition to the EXECUTE privileges, users must also have privileges from an access control list (ACL) that gets stored in XML format in the Oracle XDB. DBAs can administer this ACL using the DBMS_NETWORK_ACL_ADMIN and the DBMS_NETWORK_ACL_UTILITY package, which naturally are new to 11G.
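A sketch of granting such network access via an ACL follows; the user APP_USER, the ACL file name, and the SMTP host are all hypothetical placeholders:

```sql
BEGIN
  -- Create an ACL and grant the 'connect' privilege to a database user.
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 'mail_access.xml',
    description => 'Allow APP_USER to reach the mail server',
    principal   => 'APP_USER',
    is_grant    => TRUE,
    privilege   => 'connect');

  -- Tie the ACL to a specific host and port range.
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl        => 'mail_access.xml',
    host       => 'smtp.example.com',
    lower_port => 25,
    upper_port => 25);

  COMMIT;
END;
/
```

With this in place, APP_USER can call UTL_SMTP against smtp.example.com; without it, the call fails with an ORA-24247 network access error even if EXECUTE on the package is granted.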

In summary, Oracle has improved security in 11G. Implementing several features via init.ora parameters departs from the approach of using the PROFILE feature, where several similar security controls were already in place and remain. A problem with implementing features like locking a user after invalid login attempts via profiles is that if a user doesn’t belong to the correct profile, that user can attempt an unlimited number of incorrect passwords. Using init.ora parameters provides a blanket mechanism to cover all users. Want to know more? I recommend going straight to the source: the 11G Release 1 Security Guide and the 11G Release 2 Security Guide.

Explaining it better: dbms_xplan

The dbms_xplan PL/SQL package arrived with Oracle 9i. It had just one function, display(). Provided you have a plan table (created with $OH/rdbms/admin/utlxplan.sql) in your schema, you can use it like this in a sqlplus session:

explain plan for
SELECT e.*, d.dept_name
FROM emp e, dept d
WHERE e.deptno = d.deptno
  AND e.ename='benoit';

Rem Display the plan using the DBMS_XPLAN.DISPLAY() table function
SELECT * FROM table(DBMS_XPLAN.DISPLAY);

Why use it? It provides explain plans formatted much better than the older pre-9i methods of displaying query explain plans. DBMS_XPLAN also provides more of the information needed for performance tuning. I thought I was pretty slick having this at my fingertips since 9i came out.

What I didn’t realize is that DBMS_XPLAN has been enhanced in 10G and further in 11G.

The comments for this Jonathan Lewis blog post tipped me off to the new functions. They led to Rob van Wijk’s detailed post on the new display_cursor function, as well as another Jonathan Lewis post on the function.

Needless to say, these enhancements have been around for 5 plus years and I’m just getting to them. Humbling!

By the way, the plan table supports keeping data for multiple statements at the same time. Handy if multiple developers need to run explain simultaneously. Here’s how: provide arguments to the dbms_xplan.display function.

-- Begin spooling to file.
spool my_file_name.txt

-- Capture database name and current user.
select * from global_name;
select user from dual;

-- Run query.
explain plan 
SET STATEMENT_ID = 'developer_01_sql_01' 
INTO plan_table for
SELECT e.*, d.dept_name
FROM emp e, dept d
WHERE e.deptno = d.deptno
  AND e.ename='benoit';

-- Display the plan for the named statement.
select * from table(dbms_xplan.display('plan_table', 'developer_01_sql_01'));


spool off

Backup and Recovery with 11g Database Control

11G Oracle Enterprise Manager Database Control now has an extensive GUI-based interface to RMAN. You can view RMAN backups, run RMAN backups, check on the status of jobs, and schedule RMAN jobs, all within the OEM db control GUI.

But wait, there’s more!

There is also something called Data Recovery Advisor (DRA), which will AUTOMATICALLY detect database failures that require recovery, and will suggest multiple recovery approaches. It can then automatically create the needed RMAN script and then run it for you if you wish! Of course, you need to be in archive log mode and you need to have a viable RMAN backup complete.

This tool (DRA) can also be accessed from the RMAN prompt with commands like LIST FAILURE, ADVISE FAILURE, and REPAIR FAILURE.
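A typical DRA session from the RMAN prompt might look like this sketch:

```sql
RMAN> list failure;             -- show failures detected by DRA
RMAN> advise failure;           -- get recovery options and a generated repair script
RMAN> repair failure preview;   -- review the repair script without running it
RMAN> repair failure;           -- execute the repair
```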

PLEASE NOTE: OEM 11g Database Control is different from OEM Grid Control. Database Control is the standalone, single-database version of OEM. The features discussed here are only available in the 11G standalone Database Control, not the 10g Grid Control. Fortunately, you can have OEM 10g Grid Control monitoring your 11g database PLUS run the 11g Database Control. At least it worked for me. I’d like to double-check this with Oracle.

How do you get OEM 11g database control working? Use the emca tool.
If you don’t have it working already, these 10g instructions work for 11g:
Recreate Database Control Repository, by Jeff Hunter (thanks Jeff!)
You might hit this problem, which I did in my test db:
Metalink doc ID (733710.1): EMCA on 11g Db Fails With: Listener is Not Up

How do you stop and start database control?
emctl start dbconsole
emctl stop dbconsole
emctl status dbconsole

How do you access database control once running?

How do you see DRA within OEM 11g Database Control?
Once you are in 11g Database Control, go to the “Availability” link to see all the backup and recovery options.
DRA can be accessed via the Availability : Perform Recovery : Oracle Advised Recovery section
It can also be launched from “Advisor Central”, a link found in the “Related Links” section on the bottom of many pages.

More links on DRA
Tim Hall’s great DRA review
Using DRA from 2 Day DBA guide
Repairing Data Corruptions from DBA guide

ASSM: A shark in the water?

My team came across the following earlier this month. There’s a shark in the water, waiting to byte users running through

One evening, a table with 10 million rows got several thousand rows updated and inserted via a nightly batch job. But the table grew from 500 MB to over 4 GB!

Well, how could that be? The tablespace where the table resides had recently been re-organized so that it was now locally managed, with uniform extent sizes and automatic segment space management (ASSM) turned on. All best practices. Oracle version was on Solaris.

We looked at the possibility of chained rows. There were none. We checked how the batch job worked. It was a Peoplesoft SQR job. It did not use parallelism. It only inserted and updated rows. It did not do deletes or truncate the table before inserts and updates.

I remembered how the high water mark can be driven up high under certain circumstances, so I checked that using dbms_space.unused_space. The high water mark seemed fine. But that led me to dbms_space.space_usage. This procedure indicated that there were thousands of “unformatted blocks”. This was something new for me. I forwarded this off to Oracle support, and sure enough, there is a bug affecting, and also 10g versions through. See Metalink Note 469985.1: Sudden Increase in Unformatted Blocks.
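For reference, here is a sketch of calling dbms_space.space_usage to check for unformatted blocks in an ASSM segment; the owner and table name are placeholders for your own segment:

```sql
set serveroutput on
declare
  l_unformatted_blocks number; l_unformatted_bytes number;
  l_fs1_blocks number; l_fs1_bytes number;
  l_fs2_blocks number; l_fs2_bytes number;
  l_fs3_blocks number; l_fs3_bytes number;
  l_fs4_blocks number; l_fs4_bytes number;
  l_full_blocks number; l_full_bytes number;
begin
  dbms_space.space_usage(
    segment_owner      => 'SCOTT',        -- placeholder owner
    segment_name       => 'BIG_TABLE',    -- placeholder table
    segment_type       => 'TABLE',
    unformatted_blocks => l_unformatted_blocks,
    unformatted_bytes  => l_unformatted_bytes,
    fs1_blocks => l_fs1_blocks, fs1_bytes => l_fs1_bytes,
    fs2_blocks => l_fs2_blocks, fs2_bytes => l_fs2_bytes,
    fs3_blocks => l_fs3_blocks, fs3_bytes => l_fs3_bytes,
    fs4_blocks => l_fs4_blocks, fs4_bytes => l_fs4_bytes,
    full_blocks => l_full_blocks, full_bytes => l_full_bytes);
  dbms_output.put_line('Unformatted blocks: ' || l_unformatted_blocks);
  dbms_output.put_line('Full blocks: ' || l_full_blocks);
end;
/
```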

The write up on this one is a little scary, just like a shark in the water. So be careful out there.

“Unexplained increases in unformatted blocks in a tablespace. Very few of the unformatted blocks ever get used. More space is added whenever a new block is requested resulting in even more unformatted blocks. The number of unformatted blocks varies from one database to another. Some databases only add a few thousand unformatted blocks at one time. Others add millions of unformatted blocks all at once. Once the problem starts the only way to prevent new unformatted blocks from being added is to rebuild the tablespace.”

11g: Alert Log is dead, long live ADR

I’m studying Oracle 11g. Gots to keep up on the latest…

The venerable oracle alert log, long housed in the bdump directory, has undergone a major facelift and is now part of Automatic Diagnostic Repository (ADR). Oracle now has wayyy too many 3 and 4 letter abbreviations. ADR imposes a unified set of directories across all Oracle products, not just database. The xml file format is consistent across products, and so is the tool set needed to parse the xml file. A new init.ora parameter called diagnostic_dest controls the location of these directories. Then a whole mess of directories get created beneath that according to the pattern: ADR_base/diag/product_type/product_id/instance_id — so if you had installed the Oracle database software and had a RAC database with two instances, and the ADR_base was /u01/app/oracle, you would get

/u01/app/oracle/diag/rdbms/racname/node1 AND /u01/app/oracle/diag/rdbms/racname/node2.

And beneath each node directory you would find:

alert cdump hm incident incpkg ir lck metadata stage sweep trace

There’s new stuff getting tracked: incidents and problems. A critical error is a problem, and each occurrence of it is an incident.

You view all this stuff with a new command line tool, adrci. This command was made to be scripted, so probably most DBAs will end up making a few helper shell scripts to reduce typing long adrci commands. Or maybe not. Viewing the alert log is a matter of doing the following:

$ adrci

ADRCI: Release – Beta on Mon Jun 2 22:38:36 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

ADR base = “/u01/app/oracle”
adrci> show alert
Choose the alert log from the following homes to view:

1: diag/clients/user_unknown/host_411310321_11
2: diag/clients/user_oracle/host_61728193_11
3: diag/rdbms/orcl/orcl
4: diag/tnslsnr/localhost/listener
Q: to quit

Then you type 3… of course, you need to know that option 3, the rdbms entry, is your database alert log…

Oddly, the text-only alert log is still around, but it’s now in the trace directory within the diag sub-directory tree.

Lastly, adrci has a set of commands starting with “ips” which handle incident packaging. From the adrci prompt, you can issue ips commands to create zip files stuffed with trace data to ship off to Oracle support when bad things happen. This promises to streamline the information gathering process many a DBA has suffered through.
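For example, a packaging session might look like this sketch; the problem ID and output path are illustrative:

```shell
adrci> show problem                       # list problems and their IDs
adrci> ips create package problem 1       # bundle diagnostics for problem 1
adrci> ips generate package 1 in /tmp     # write the zip file to /tmp
```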