SQL for checking RMAN catalog or control file for backup information

Do you need a quick query to check the local control file to see whether backups succeeded? Use the query below if there is no RMAN catalog involved.

/*
Query to use when backup metadata is stored in the control file.
SID column will be 0 unless working with Oracle RAC.
*/
select sid, object_type, status, 
round((end_time - start_time) * 24 * 60, 2) duration_minutes,  
to_char(start_time, 'mm/dd/yyyy hh:mi:ss') start_time, 
to_char(end_time, 'mm/dd/yyyy hh:mi:ss') end_time,
round((input_bytes/(1024*1024*1024)),2) input_gb, 
round((output_bytes/(1024*1024*1024)),2) output_gb
from v$rman_status
where operation = 'BACKUP';

If you are using the RMAN catalog, then you can run this query instead.

/*
Query to use when backup metadata is stored in the RMAN catalog.
*/
select db_name, object_type, status, 
round((end_time - start_time) * 24 * 60, 2) duration_minutes,  
to_char(start_time, 'mm/dd/yyyy hh:mi:ss') start_time, 
to_char(end_time, 'mm/dd/yyyy hh:mi:ss') end_time,
round((input_bytes/(1024*1024*1024)),2) input_gb, 
round((output_bytes/(1024*1024*1024)),2) output_gb
from rc_rman_status
where operation = 'BACKUP';
/*
Add the following predicate to limit the rows returned to the last 31 days:
and end_time > (SYSDATE - 31)
*/
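
Putting that together, a sketch of the catalog query limited to the last month, using the same columns as above:

```sql
select db_name, object_type, status,
       round((end_time - start_time) * 24 * 60, 2) duration_minutes
from rc_rman_status
where operation = 'BACKUP'
  and end_time > (SYSDATE - 31);
```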

quick peek at postgres post install on ubuntu

I’m checking out how postgres works.

I installed it on 32-bit ubuntu version 12.04. Googling led to multiple places for the appropriate apt-get commands.

The install created a user, postgres, that runs the postgres binaries. However, root owns the binary files.

A default instance got created. I haven’t learned yet if people often run multiple postgres instances per host.

The following processes run with the default installation:
/usr/lib/postgresql/9.1/bin/postgres
postgres: writer process
postgres: WAL process
postgres: autovacuum launcher
postgres: stats collector process

The first process is the main postgres process, and it was launched with the -D parameter pointing to the data directory and the -c parameter pointing to the full path of the postgresql.conf file.

The writer process, I surmise, must write to the data files. The WAL process, I’ve read elsewhere, handles the Write Ahead Log, similar to the redo log writer in Oracle. The autovacuum launcher governs the ability to automatically run the VACUUM command, which Postgres needs periodically. And I’m fairly sure the stats collector updates query optimization statistics, but I’ll have to check.

There’s a command, psql, that is the equivalent of sqlplus. I’ll explore psql in a follow-up post.
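
For a first look around, a couple of stock queries that should work in any psql session (nothing here is specific to this install):

```sql
-- Confirm which database and server version the session is connected to.
SELECT current_database(), version();

-- List user tables through the standard information_schema.
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public';
```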

Documentation for postgres can be found at postgresql.org.

Having worked as a SQL Server and Oracle DBA, I consider keeping track of database storage important. Documentation for those two products describes early on how each system places all objects into datafiles. A datafile can contain tables, indexes, stored procedures, views and everything else.

Postgres on the other hand relegates discussion of physical storage to a location fairly deep in the documentation. Each table gets its own datafile. A master directory tree contains all the objects in the postgres database, with most objects getting their own separate file. And postgres dictates the directory structure, although perhaps in more advanced deployments users can control some aspects. The filenames have a number which is automatically generated by postgres. My instance installed to /var/lib/postgresql/9.1/main. There are multiple sub-directories below that.
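
If you want to see the file-per-table layout for yourself, PostgreSQL exposes it through SQL. This is a sketch; my_table is a hypothetical table name:

```sql
-- Show the root of the directory tree (e.g. /var/lib/postgresql/9.1/main).
SHOW data_directory;

-- Show the numbered datafile backing a given table,
-- relative to the data directory.
SELECT pg_relation_filepath('my_table');
```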

Done writing for now, but I’m going to create some tables, bang around with psql, and try out the GUI admin tool pgAdmin III.

learning postgres

I work at EMC in the Backup and Recovery Services (BRS) division, and we use postgres. It powers our backup software catalog for Avamar. We use it as a database repository for Data Protection Advisor (DPA). And it was the first database to be virtualized automatically by VMware’s vFabric Data Director.

In the Big Data landscape, postgres pops up all the time. Greenplum uses it. Netezza uses it. Hadapt, a newcomer to the big data space uses it. I think maybe Platfora uses it but by this point my head is spinning and I can’t even remember where I read that. And Cloudera uses it to store management data.

And probably a bazillion other pieces of software use it.

I’m interested in EMC and in big data so I’m going to start learning postgres.

I’ll finish up with the “What about mysql?” question. In general, I’d always read that mysql is easy to learn, fast by default and widely deployed for small web apps, and that postgres is slower but more reliable and feature rich. Some recent browsing reminded me that MySQL has corporate backing, first from its original corporate owner, then from Sun, and now from Oracle. Postgres remains 100% open source. Finally, mysql allows users to pick their storage engine, while Postgres provides just one.

mysql vs postgres links for reference:
One on wikivs.com
A stackoverflow question with responses
An ancient databasejournal.com article still getting traffic
A blog posting by Chris Travers

SQL Server calling Oracle, Can you hear me?

This is the last post from my 2010 archives. I’m glad to pop these up onto my blog, if only to be able to reference them quickly if I need to mess around with any of these features. Now I’ve got to get cracking and generate some new content!

This is the 2nd blog posting addressing querying SQL Server from Oracle and vice versa. This post covers the vice versa part, from SQL Server to Oracle.

Assuming that SQL Server is working fine on your windows host, the first step is to install the Oracle client on the SQL Server host. Be sure to choose the Windows components, specifically Oracle OLEDB. That’s critical! Then configure the Oracle networking files appropriately for your target system. By this I mean adjusting the tnsnames.ora file if you are hard coding the Oracle database location, or ensuring that you have the correct LDAP.ORA setting if you use OID to look up Oracle locations. Here are some links for help if you have not configured Oracle client network files: 11G R2 docs on adjusting tnsnames using Oracle GUI tools, Ora FAQ on editing tnsnames.ora directly, Ora FAQ on editing ldap.ora directly.

Once the Oracle client is correctly configured, try a quick test with TNSPING from the command line to ensure that Oracle client can find the database with the tnsnames.ora or ldap.ora changes you made.

If things are OK at the Oracle client level, you must now restart SQL Server. This is because the Oracle client install updates the system PATH environment variable. The SQL Server service only checks this at startup time, so it needs a restart to get the updated value of PATH in order to find the Oracle client files.

The rest of this article addresses how to enable SQL queries executing within SQL Server to contact the Oracle database using something called a Linked Server. However, know that people frequently make Oracle data available in SQL Server by means of an SSIS job that queries Oracle and loads the data into a table. By creating a linked server instead, any SQL session with appropriate permissions can use the linked server.

You can create the linked server in T-SQL like this:

EXEC master.dbo.sp_addlinkedserver @server = 'ORA_LINKED_S', @srvproduct='Any string', @provider='OraOLEDB.Oracle', @datasrc='ORA_SID', @provstr='XXX'

In my testing, for Oracle data sources, only the provider argument had to be precise, in order to specify the OLEDB Oracle driver, and the datasrc argument had to match the Oracle name as listed in TNSNAMES.ORA or your LDAP server. The other values are merely descriptive for Oracle sources.

Then you need to provide the Oracle login information:

EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname='ORA_LINKED_S',@useself='False',@locallogin=NULL,@rmtuser='joe_user',@rmtpassword='joe_pwd'

You can also configure a Linked Server via the SQL Server Management Studio GUI. To do that, connect to your instance with Management Studio, then expand the Instance and then the Server Objects icon. Right click on the Linked Servers icon and select the New Linked Server choice. This screen allows you to input the values needed. Choose the Oracle Provider for OLEDB. You also need to provide the username and password for the user connecting to Oracle.

With the linked server in place, you can now query Oracle from within SQL Server.

Here again you have choices.

To query using native T-SQL, use this syntax:

select * from ORA_LINKED_S..SCHEMA.TABLE_NAME  

Yes, that’s two dots before the word SCHEMA. In my testing, I had to use all upper case for the query to run. If the remote Oracle objects were created with mixed case using quoted identifiers, you would likely have to match that case.

The other choice is to do pass through queries, which enable you to use Oracle SQL syntax that gets parsed at the remote Oracle database. You use the OPENQUERY function to do that.

select * from openquery(ORA_LINKED_S, 'select * from table_name where rownum < 10')

Note that in the above example the case was not important, provided the objects were not created with quotation marks and specific case in Oracle. Also, the above uses an Oracle specific syntax: rownum < 10. That rownum clause won't work in native T-SQL.

Here's a gotcha we encountered in my shop when running 64-bit SQL Server 2008 on 64-bit windows with SSIS jobs that connected to a remote Oracle database. The solution to getting everything working was to install both the 32-bit and 64-bit Oracle clients on the same machine. See this write-up from sqlblogcasts.com for more details.

Oracle calling SQL Server, do you read me?

Also from the 2010 archives: two posts on heterogeneous queries using SQL Server and Oracle. I’d hoped to do a 3rd on general issues that arise when keeping data in a remote system, but haven’t produced it yet…

Querying SQL Server from within Oracle and vice versa is a messy little corner of the database world. This post will address the setup needed to query SQL Server from Oracle. The next will cover querying Oracle from SQL Server.

Notes on configuring an Oracle database to query a remote SQL Server database follow. For more detailed information, see the Heterogeneous Connectivity section of the Oracle database documentation. It’s important to know that you can either use the ODBC support that comes with the basic Oracle database license for connecting to 3rd party databases, or pay extra for a specific gateway tailored to your target 3rd party database. The Oracle Database Gateway for ODBC User’s Guide discusses exactly what support the default no-extra-cost gateway provides. This article discusses the no-extra-cost ODBC gateway.

Note that the no extra cost option that ships with 11G is called DG4ODBC (database gateway for ODBC). It replaces the HSODBC (heterogeneous services ODBC) program that shipped with 10G and earlier. Also note that I was unable to get DG4ODBC working with Oracle DB release 11.1.0.6.0 on Windows. When I upgraded to 11.1.0.7.0, DG4ODBC worked fine.

Connecting to SQL Server from an Oracle database is easier when the Oracle system runs on Windows. That’s because SQL Server client software must be installed on the Oracle host, as well as an ODBC driver for SQL Server. Installing both is straightforward on Windows. Simply install the SQL Server client using the file sqlncli.msi, or run the SQL Server database installer and just select the SQL client choices. Doing either takes care of both the native connectivity and the ODBC driver. On unix you will either have to purchase a commercial ODBC driver for SQL Server, such as DataDirect, which comes bundled with the native connectivity, or use open source offerings such as FreeTDS for the native connectivity and unixODBC for the ODBC layer.

After you put the SQL Server connectivity software in place, you then need to create an ODBC data source to your remote SQL Server. On windows, this is done via Control Panel – Administrative Tools – Data Sources (ODBC). On unix, you will be configuring text files specific to the odbc driver software you have installed.

At this point, you can verify if connectivity from your host to SQL Server is working without using Oracle. For example on Windows, click the Test Data Source button within the ODBC DSN set up screen. And on unix, there likely will be a command line utility with which you can test connectivity.

With connectivity in place, the next steps are about describing to Oracle your ODBC data source. These steps will be the same on unix and windows since they take place at the Oracle level.

The first part of describing to Oracle your ODBC source is specifying a database gateway. As stated above, we will illustrate using the no extra cost ODBC gateway. You create an initDG4ODBC.ora file that gets put in the $ORACLE_HOME/hs/admin directory. You’ll need these 3 parameters at a minimum:

# Substitute MYDATASOURCE with whatever you called the SQL Server in your ODBC entry.
HS_FDS_CONNECT_INFO = MYDATASOURCE
HS_FDS_TRACE_LEVEL = off
HS_FDS_SUPPORT_STATISTICS=FALSE

There is another step to telling Oracle about your remote SQL Server. You must create an entry in the listener.ora file. It will reference the initDG4ODBC.ora file. The program name will be DG4ODBC.

(SID_DESC =
(PROGRAM = DG4ODBC)
(SID_NAME = DG4ODBC)
(ORACLE_HOME = C:\oracle\product\11.1.0\db_1)
)

Once you’ve installed the SQL Server client and ODBC driver and configured the $ORACLE_HOME/hs/admin/initDG4ODBC.ora and listener.ora files, you are ready to create a database link that connects to the remote SQL Server. This link can then be referenced by SQL statements to get data.

CREATE public DATABASE LINK MYDATASOURCE
CONNECT TO "user" IDENTIFIED BY "pwd" USING 'DG4ODBC';

Querying against this link immediately creates case-sensitivity issues. SQL Server will need the correct case for the fields and tables that you query. For example the query below generates an error.

SQL> select name from sysobjects@MYDATASOURCE;
select name from sysobjects@MYDATASOURCE
       *
ERROR at line 1:
ORA-00904: "NAME": invalid identifier

But this gets the case correct:

select "name" from sysobjects

You can use Oracle SQL functionality provided DG4ODBC can convert it successfully.

For example, this will work:

SQL> select column_name from all_tab_columns@MYDATASOURCE where table_name = 'TBLSTATS';

Oracle has mapped its data dictionary view to the native SQL Server one.

You can also send SQL statements directly to the remote SQL Server without having them checked in Oracle first using the DBMS_HS_PASSTHROUGH pl/sql API. This means your syntax can be in SQL Server T-SQL syntax. You can execute a subset of DDL statements with the DBMS_HS_PASSTHROUGH.EXECUTE_IMMEDIATE command.
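
As a sketch of the pass-through API, a T-SQL DDL statement could be sent straight to the remote SQL Server like this, assuming the MYDATASOURCE link created earlier; the table being created is made up:

```sql
-- Send T-SQL DDL directly to the remote SQL Server over the
-- MYDATASOURCE link. EXECUTE_IMMEDIATE returns the rows processed.
DECLARE
  l_rows INTEGER;
BEGIN
  l_rows := DBMS_HS_PASSTHROUGH.EXECUTE_IMMEDIATE@MYDATASOURCE(
              'CREATE TABLE tblScratch (id int, note varchar(100))');
END;
/
```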

Oracle 11G: Encryption Everywhere

The following is another entry I wrote in 2010 that I’ve refrained from publishing. It’s a follow on to this overview of Oracle 11g security.

Keeping data safe in today’s computing environment demands encryption. In fact, the law often requires it. Several states in the USA now have laws stating that companies must safeguard personal data belonging to consumers. If the data gets stolen and was not encrypted, the company that suffered the breach must compensate consumers for their loss of privacy. So naturally, companies want to encrypt personal information stored within their databases. Oracle 11G makes such encryption straightforward. This article summarizes older encryption features and details the new encryption features in 11G releases 1 and 2.

Here’s a quick rundown on what’s been available for encryption for the last few releases. The ability to encrypt data as it travels across the network to and from the database has been available for several releases. Called sqlnet encryption, it requires purchasing the extra cost option Advanced Security and dates back to at least Oracle 8i. Note that sqlnet encryption does not encrypt the data once it is inside the database residing on disk, or at rest, as the catch phrase goes.

Another feature available in Oracle 8i and onward is a set of PL/SQL functions to encrypt and decrypt a piece of data. These functions could be used with custom programming to ensure the encryption of key data in the database. However, such functions demand custom programming and make it impossible to index the columns storing such data.

Oracle 10g announced the arrival of Transparent Data Encryption (TDE), enabling the database to encrypt all data in a column such that a programmer did not have to alter a SQL statement to encrypt and decrypt it. Users needed only to set up the encryption wallet for the Oracle instance and then run an ALTER TABLE statement to modify the column to be encrypted. Provided a user had SELECT privileges on the table with encrypted columns, that user could transparently access the data using exactly the same SQL statement required if the column were not encrypted. And a hacker with unauthorized access to the underlying data file would not be able to see the data. Version 10g also enhanced the RMAN backup utility so it could create encrypted output files.
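
For reference, the 10g column-level TDE step mentioned above looks roughly like this; the table and column names are made up:

```sql
-- Encrypt a single sensitive column with 10g column-level TDE
-- (the wallet must already be created and open).
ALTER TABLE customers MODIFY (ssn ENCRYPT USING 'AES192');
```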

Despite the TDE feature, encryption in 10g had serious flaws. You had to specify individual columns and encrypt them one at a time. Certain column types could not be encrypted, most notably BLOB columns. If you created a new table or added columns with sensitive data, you had to remember to encrypt them.

Oracle 11G solves these shortcomings. The new tablespace encryption feature enables an entire tablespace to be encrypted. Any new table created in that tablespace will get encrypted. 11G also provides encryption for the export data pump (expdp) utility. And BLOB columns can now be encrypted by means of an 11G feature called SecureFiles. Please note that all three of these features require purchase of the extra cost option Advanced Security, which can only be used with the Enterprise Edition of Oracle database.

Here’s how to implement tablespace encryption. As in previous releases, you start by creating the Wallet, which contains the master encryption key for the entire database.

The default location for the wallet is $ORACLE_BASE/admin/$ORACLE_SID/wallet. You can override this in the $TNS_ADMIN/sqlnet.ora file by adding an ENCRYPTION_WALLET_LOCATION entry.

ENCRYPTION_WALLET_LOCATION=
  (SOURCE=(METHOD=FILE)(METHOD_DATA=
    (DIRECTORY=/u01/app/oracle/admin/TEST/encryption_wallet/)))

You must then create an encryption key, using an ALTER SYSTEM command.

ALTER SYSTEM SET ENCRYPTION KEY AUTHENTICATED BY "str0ngphr4se!";

A note for release 11G R1 versus 11G R2. In 11G R1, you can only create an encryption key once. In 11G R2, you can reset the encryption key by running the above command again. Doing so will not invalidate already encrypted data. This feature is to provide support for environments where regulations require periodic changing of the encryption key. I have not tested what the performance or overhead penalty is for resetting the encryption key.

Whenever you open the database, you must open the wallet.

ALTER SYSTEM SET WALLET OPEN IDENTIFIED BY "str0ngphr4se!";

With the wallet in place, you can now create an encrypted tablespace. Note that you cannot ALTER an existing tablespace to become encrypted. One can move an existing table from a decrypted tablespace to an encrypted one, however. The following is sample code for creating an encrypted tablespace, then creating a table inside it using the CREATE TABLE AS SELECT method.

-- Make unencrypted tablespace
create tablespace foo
datafile '/u02/oradata/TEST/foo_1.dbf' size 100M ;

-- Make encrypted tablespace
create tablespace foo_enc 
datafile '/u02/oradata/TEST/foo_enc_01.dbf' size 100M 
encryption using 'AES256' 
default storage (encrypt);

-- Put a table in the unencrypted tablespace. Query the all_objects view to make table.
16:55:06 SQL> create table decrypt_object tablespace foo as select * from all_objects;
Table created.
Elapsed: 00:00:08.08

-- Put a table in the encrypted tablespace. Query the all_objects view to make table.
16:54:50 SQL> create table enc_object tablespace foo_enc as select * from all_objects;
Table created.
Elapsed: 00:00:11.23
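
And to move an existing table into the encrypted tablespace, as mentioned above, something like this should work:

```sql
-- Relocate the unencrypted table's segment into the encrypted tablespace.
alter table decrypt_object move tablespace foo_enc;
```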

Finally, an illustration with unix commands shows that the underlying datafiles for the tablespace are indeed encrypted. The first command shows the grep utility scanning for the string value “ALL_TABLES” in the unencrypted data files. It finds a match for a SYNONYM and a VIEW.

$ grep ALL_TABLES foo_1.dbf
ALL_TABLESÿÂQÿSYNONYMxm
ALL_TABLESÿÂPÿVIEWxm
$

But when the same command is run against the data file for the encrypted tablespace, no matches are found.

$ grep ALL_TABLES foo_enc_01.dbf
$

LOB columns could not be encrypted with Oracle 10G encryption. In 11G, you have 2 ways to encrypt LOB columns. Tablespace encryption will work: I verified this by loading BLOB files into tables residing in the FOO and FOO_ENC tablespaces created above and then repeating the test of scanning the underlying data files for strings from the LOB files. You can also encrypt LOB columns with a new feature called SecureFiles. I’m not sure what the performance impact is of using SecureFiles vs tablespace encryption for encrypting LOBs.

SecureFiles is a new option to store BLOB and CLOB columns, and the old way is now called BasicFiles. SecureFiles offers performance enhancements over BasicFiles. Oracle recommends using SecureFiles for LOB storage over the older BasicFiles. Caching, locking, the write mechanism and logging are all enhanced in Securefiles. And if you pay for the extra cost Advanced Security Option, you can encrypt your SecureFile columns. Note that there is also another extra cost option called Advanced Compression, which enables you to use the compression and deduplication features of SecureFiles. These won’t be discussed here.

Before discussing the encryption feature for SecureFiles, I’ll provide some background information on deploying SecureFiles in general.

A SecureFile LOB column must be created in a tablespace using Automatic Segment Storage Management (ASSM). To convert an existing pre-11G LOB column to a SecureFile column, you have the following options.

  • CREATE TABLE AS SELECT. Use a CTAS statement to insert the data from the existing column into a SecureFile column in a new table.
  • INSERT INTO using SELECT. Pre-create the target table with the SecureFile column and then run an INSERT INTO statement.
  • Online table redefinition.
  • Export/Import. You can use the expdp and impdp utilities to load the data. Note that the old exp and imp do not support encryption, so if you plan to import into an encrypted securefile, you will need expdp and impdp.
  • Create a new column, update the new column with the values in the original column, then drop the old column.
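
As a rough sketch of the CTAS option (old_docs and its blob_content column are hypothetical names):

```sql
-- Copy rows from a pre-11G table into a new table whose LOB column
-- is stored as a SecureFile (the tablespace must use ASSM).
create table new_docs
tablespace company_docs_data
lob (blob_content) store as securefile
as select * from old_docs;
```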

The syntax for creating a SecureFile column looks like this:

create table company_docs
(doc_id number not null primary key,
name varchar2(255) not null,
blob_content blob)
tablespace company_docs_data
lob (blob_content) store as securefile 
(OPTIONAL DETAILED STORAGE CLAUSE GOES HERE);

For details on the detailed SecureFiles storage clause, see the oracle documentation.

If you would like to create the SecureFile column in encrypted format, you need to have the Oracle Wallet set up first, as described above. Once that is in place, the syntax for creating a SecureFile LOB column with encryption turned on is the following.

CREATE TABLE encrypt_a_lot 
(id NUMBER,  document  BLOB) 
LOB (document) 
STORE AS SECUREFILE obscure_it
(ENCRYPT [ USING 'encrypt_algorithm' ] [ IDENTIFIED BY password ])
(OPTIONAL DETAILED STORAGE CLAUSE GOES HERE);

All records in the LOB column get encrypted, and that includes records across all partitions if the LOB column is spread over partitions. Note that in the above example, the ‘encrypt_algorithm’ indicates the name of the encryption algorithm. Valid algorithms are:

  • 3DES168
  • AES128
  • AES192 (default)
  • AES256

The last encryption feature to be covered here is support for encrypting the output of Oracle’s export utility, expdp, alternately known as datapump. The old export utility exp does NOT support encryption.

The command line for datapump now has four parameters that govern the encryption feature.

  • ENCRYPTION will encrypt part or all of a dump file. Valid keyword values are: ALL, DATA_ONLY, ENCRYPTED_COLUMNS_ONLY, METADATA_ONLY and NONE.
  • ENCRYPTION_ALGORITHM specifies how encryption should be done. Valid keyword values are: AES128, AES192 and AES256. The default choice is AES128.
  • ENCRYPTION_MODE indicates the method of generating the encryption key. Valid keyword values are: DUAL, PASSWORD and TRANSPARENT. The default choice is TRANSPARENT.
  • ENCRYPTION_PASSWORD specifies the password key used to create encrypted data within a dump file. It is required only when the encryption mode is PASSWORD or DUAL.

Here’s an illustration of using datapump to create encrypted output. First is a sample of running the utility with no encryption to create an output file called cleartext.dmp.

C:\oracle\admin\TESTDB\dpdump>expdp system/pwd directory=data_pump_dir dumpfile=cleartext.dmp
tables=dmg.demo_states

Here the command is run again with encryption set to all.

C:\oracle\admin\TESTDB\dpdump>expdp system/pwd directory=data_pump_dir dumpfile=crypto.dmp 
tables=dmg.demo_states encryption = all
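
If regulations require a passphrase rather than the wallet, a password-mode run might look like this (the passphrase is made up):

```
C:\oracle\admin\TESTDB\dpdump>expdp system/pwd directory=data_pump_dir dumpfile=crypto_pwd.dmp
tables=dmg.demo_states encryption=all encryption_mode=password encryption_password=Secret123
```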

The next sample reveals that the unencrypted output is vulnerable to inspection with simple to use tools like findstr.

C:\oracle\admin\TESTDB\dpdump>findstr /C:"NEW JERSEY" cleartext.dmp
NEW JERSEY< ???NM

However the encrypted output file crypto.dmp cannot be parsed with text tools. No match is returned.

C:\oracle\admin\TESTDB\dpdump>findstr /C:"NEW JERSEY" crypto.dmp
C:\oracle\admin\TESTDB\dpdump>

In summary, Oracle 11G has significant new features to support encrypting data. You’ll want to plan carefully what you encrypt. There’s probably no reason to encrypt data unless it contains privacy oriented data, also known as personally identifiable information or PII. Such data can be used to uniquely identify an individual, such as a social security number. You’ll want to identify precisely which columns contain PII and then take measures to encrypt those columns or place the tables containing those columns in an encrypted tablespace. If your database does contain PII, you may also want to explore encrypting the data as it is transmitted using https and encrypted SQL*NET traffic, and investigate ensuring backups are encrypted by running RMAN encryption or using encrypted tape or disk devices. Oracle 11G enables you to encrypt your data at just about every level.

Oracle 11G: More Secure Than Before?

I wrote this entry in 2010 and never posted it. However, with Oracle 12c about to be released, I figured I’d post it now. It will serve as a point of comparison to the security features that will come with Oracle 12c.

Pressure is increasing to make computer deployments more secure. Ten to fifteen years ago, corporate computing security focused far away from the database layer. Implementing firewalls for web servers and the https protocol was enough to satisfy many security requirements in those days. Now, security specialists routinely examine database deployments to see whether security best practices are in place. Oracle 11G brings several notable improvements to the Oracle security feature set. This article will focus on non-encryption security enhancements. A subsequent posting will describe the encryption new features available in 11G R1 and R2.

Several of the new 11G security features are available with the core Oracle database license and do not require purchasing the extra cost Advanced Security option. Five features are implemented as init.ora parameters. Another feature is implemented via a new mechanism to govern who gets access to sensitive packages that can be used to hack into the database.

Want your passwords to be case-sensitive? Just set the init.ora parameter SEC_CASE_SENSITIVE_LOGON to true. Here are sql statements run in SQL*PLUS illustrating how this works. Note that this parameter affects the entire instance.

-- Connect as system, examine password case sensitivity setting, make a new user.
SQL> connect system/password@TEST;
SQL> show parameter sec_case_sensitive_logon
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sec_case_sensitive_logon             boolean     TRUE
SQL> create user freddy identified by Freddy;
SQL> grant create session to freddy;

-- Get an error, then connect OK.
SQL> connect freddy/freddy@TEST
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.
SQL> connect freddy/Freddy@TEST
Connected.

A word of caution when using database links with case sensitive passwords turned on: when connecting from a pre-11G database to an 11G database with case sensitivity turned on, you need to make the password within the 11G system upper case so that the older database can successfully log in via the link. And while on the topic of passwords and database links, know that the password hash formerly visible in the DBA_USERS.PASSWORD column can no longer be viewed. What about the password provided when creating a database link? Didn’t you used to be able to see that in the PASSWORD column of USER_DB_LINKS? As of 10G R2 (not 11G, but the release before), passwords for database links are stored in encrypted format, and even the encrypted value is only visible to users with access to the view SYS.LINK$.

What else can you do with new init.ora parameters for security in 11G? The parameter SEC_MAX_FAILED_LOGIN_ATTEMPTS governs how many times a client can attempt to login with an incorrect password on one connection before the server drops the connection. The default value is 10. SEC_PROTOCOL_ERROR_TRACE_ACTION determines what type of notification should occur when the database receives bad packets, as might happen during a denial of service attack. You can specify that no notification be sent (NONE), a short message be written to the alert log (LOG), a detailed trace file be generated (TRACE), or a short alert log message plus a notification to OEM be sent (ALERT). The default value is TRACE. SEC_PROTOCOL_ERROR_FURTHER_ACTION specifies the fate of connections that send bad packets: they can CONTINUE, get DROPped or be DELAYed. The default value is CONTINUE. Lastly, SEC_RETURN_SERVER_RELEASE_BANNER allows the DBA to avoid sending the complete database version information to an incoming session, making it harder for hackers to figure out the exact version the database is running. The default value is FALSE, sending only minimal information about the database version.
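
A sketch of how these parameters might be set; the values here are illustrative, not recommendations:

```sql
-- Allow 5 failed login attempts before the connection is dropped.
ALTER SYSTEM SET sec_max_failed_login_attempts = 5 SCOPE = SPFILE;

-- Report bad packets with a short alert log message only.
ALTER SYSTEM SET sec_protocol_error_trace_action = 'LOG' SCOPE = SPFILE;

-- Withhold the full version banner from incoming sessions.
ALTER SYSTEM SET sec_return_server_release_banner = FALSE SCOPE = SPFILE;
```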

The last security feature to be discussed here is the new method of enabling users to invoke the system packages UTL_HTTP, UTL_MAIL, UTL_SMTP and UTL_TCP. Hackers target these packages since they make it possible for database sessions to send email and communicate via HTTP and TCP/IP. In previous database releases, the Oracle PUBLIC user had EXECUTE privileges on these packages by default, thereby making it possible for any user to use them. In 11G, the PUBLIC user still has EXECUTE privileges on them. However, now more is needed for database users to successfully invoke the procedures that belong to these packages. In addition to the EXECUTE privileges, users must also have privileges from an access control list (ACL) that gets stored in XML format in the Oracle XDB. DBAs can administer this ACL using the DBMS_NETWORK_ACL_ADMIN and DBMS_NETWORK_ACL_UTILITY packages, which naturally are new to 11G.
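
A minimal sketch of granting one user network access via the new ACL API; the user, ACL file name and host are hypothetical:

```sql
BEGIN
  -- Create an ACL granting APP_USER the 'connect' privilege.
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 'mail_access.xml',
    description => 'Let APP_USER reach the mail relay',
    principal   => 'APP_USER',
    is_grant    => TRUE,
    privilege   => 'connect');

  -- Tie the ACL to a specific host and port range.
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl        => 'mail_access.xml',
    host       => 'smtp.example.com',
    lower_port => 25,
    upper_port => 25);

  COMMIT;
END;
/
```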

In summary, Oracle has improved security in 11G. Implementing several features via init.ora parameters departs from the approach of using the PROFILE feature, where several similar security settings were already in place and remain. A problem with implementing features like locking a user after invalid login attempts via profiles is that if a user doesn’t belong to the correct profile, that user can attempt an unlimited number of incorrect passwords. Using init.ora parameters provides a blanket mechanism covering all users. Want to know more? I recommend going straight to the source: the 11G Release 1 Security Guide and the 11G Release 2 Security Guide.
