Oracle Logbook

April 30, 2012

Oracle Heap Tables or SQL Server Clustered Indexes?

Filed under: Uncategorized — heliodias @ 4:51 pm

Jonathan Lewis and Grant Fritchey Live Debate

Thursday, 7 June 2012: 17:00 CEST / 16:00 BST / 11:00 EDT / 10:00 CDT / 08:00 PDT

Register your place

Oracle and SQL Server may both share a common language, but certain things are handled quite differently. Jonathan Lewis (OakTable Network, Oracle Ace Director) is used to seeing heap tables (almost) everywhere, but Grant Fritchey (Microsoft SQL Server MVP) is used to seeing clustered indexes (almost) everywhere. But which arrangement performs better? And is comparative performance even the right thing to measure?

In this live discussion, these two heavyweights in their respective technology areas will debate the pros and cons of Oracle Heap Tables and SQL Clustered Indexes. Jonathan and Grant may even play a few unexpected cards during the discussion. Be prepared for a lively exchange which will not only entertain, but will teach you key concepts on Oracle and SQL Server.

Please note: this webinar is offered free of charge, and places are strictly limited and offered on a first come, first served basis. Register your place now.


April 27, 2012

Avoiding redolog delays using commit_wait and commit_logging

Filed under: Uncategorized — heliodias @ 11:44 pm

Abstract:

This post shows how to substantially improve redo log write performance, with, as a trade-off, a tiny chance of losing committed transactions in the event of a crash.

Those who cannot afford to lose a committed transaction (e.g. banks, financial corporations, hospitals) should disregard what I am going to present.

 

By changing two parameters, Oracle can group redo operations together instead of writing them out as fast as it can.

For this experiment, I advise opening two sessions: one to control the parameters, the other to run the operations.

session 1
create table l3_t1 (id number);
session 2
select name,value from v$statname t1,v$mystat t2 where t1.statistic#=t2.statistic#
and value>0
and (name like 'redo%' or name='messages sent')
order by 1
/
set linesize 60
set pagesize 0
set feedback off
spool insert1.sql
select 'insert into l3_t1 values(1); commit;' from dba_objects
where rownum <10001;
spool off
set feedback on

@insert1
select name,value from v$statname t1,v$mystat t2 where t1.statistic#=t2.statistic#
and value>0
and (name like 'redo%' or name='messages sent')
order by 1
/

/* After the first 10,000 inserts, note the roughly 10,000 messages sent and about 550 (centiseconds) of redo synch time */

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
messages sent                                                         10002
redo entries                                                          10079
redo size                                                           5150552
redo synch time                                                         552
redo synch writes                                                     10002

@insert1
select name,value from v$statname t1,v$mystat t2 where t1.statistic#=t2.statistic#
and value>0
and (name like 'redo%' or name='messages sent')
order by 1
/

/* After the second round of another 10,000 inserts, note the roughly 20,000 messages and about 920 of synch time */
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
messages sent                                                         20003
redo entries                                                          20166
redo size                                                          10299892
redo synch time                                                         920
redo synch writes                                                     20003

==============
Session 1

SQL> alter system set commit_logging=batch scope=both;

System altered.

SQL> alter system set commit_wait=nowait scope=both;

System altered.
===============

Session 2

@insert1

select name,value from v$statname t1,v$mystat t2 where t1.statistic#=t2.statistic#
and value>0
and (name like 'redo%' or name='messages sent')
order by 1
/

/* Now the magic: almost no additional messages sent and no additional synch time, even after running another 10,000 inserts and commits */
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
messages sent                                                         20008
redo entries                                                          34497
redo size                                                          15107148
redo synch time                                                         921
redo synch writes                                                     20005

 

I said earlier that the chance is "tiny" because, to actually lose information, a crash would have to happen while Oracle still held grouped redo in the buffer, which is unlikely, but not impossible.

Once again, do not use this unless losing a transaction is not a big deal for you.

Just to make myself clear, the stakes are, in the worst case, losing the last few seconds of transactions. If that would cost you nothing, and you are suffering from the log file sync wait event, this could be a solution.
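For completeness, once the test is over you can restore the default commit behavior; a minimal sketch, assuming the parameters were changed with scope=both as above:

```sql
-- Back to the defaults: write redo immediately and wait for LGWR on commit
alter system set commit_logging = immediate scope = both;
alter system set commit_wait = wait scope = both;
```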

April 20, 2012

IOT Candidates – Which tables should be considered for Index-Organized Tables

Filed under: Uncategorized — heliodias @ 9:32 pm

Abstract: This post is an introduction to where the investigation should start.

The very first step is to check for tables with just one index:

SELECT COUNT(*) FROM (
select OWNER,TABLE_NAME,count(*) from dba_indexes
WHERE OWNER NOT IN ('SYS','SYSMAN','XDB','SYSTEM','CTXSYS','OLAPSYS')
group by OWNER,TABLE_NAME
HAVING COUNT(*)=1);

  COUNT(*)
———-
      2008

In this sample we can see 2,008 tables that deserve further investigation.

The next step is to check v$segment_statistics to see which among them have the least DML; those will be the perfect candidates.
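As a starting point, such a check might be sketched like this, using the 'db block changes' statistic as a rough proxy for DML volume (the owner list mirrors the query above; the exact statistic to rank by is a judgment call):

```sql
-- Tables whose segments show little DML activity since instance startup
select owner, object_name, value as block_changes
  from v$segment_statistics
 where statistic_name = 'db block changes'
   and object_type = 'TABLE'
   and owner not in ('SYS','SYSMAN','XDB','SYSTEM','CTXSYS','OLAPSYS')
 order by value;
```

Cross-referencing the lowest-changing tables here with the single-index list narrows the field quickly.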

Important advice: before changing a table's structure to an IOT, test it, and be sure that you will not need lots of secondary indexes in the future.
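For reference, converting a candidate means recreating it with ORGANIZATION INDEX; a minimal sketch with a hypothetical lookup table:

```sql
-- Index-organized table: the rows live in the primary key index itself,
-- so the single index and the table become one segment
create table country_codes (
  code varchar2(3) primary key,
  name varchar2(60)
) organization index;
```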

April 17, 2012

11 Reasons to upgrade to Oracle 11g.

Filed under: Uncategorized — heliodias @ 4:28 pm

1 – Considering hardware/software investments to speed up the database?
Looking for a real-time database, or even an Exadata solution?
Maybe Oracle 11g with the Smart Flash Cache option can already speed up the database
the way you need: Smart Flash Cache lets you use flash memory
to create a second cache layer (the first layer is main memory).
Oracle 11g R2 has many other new features created specifically to boost your database.
Before going any further with hardware changes, check with a performance expert
what the root cause of your performance issue is,
and whether 11g R2 could address it properly.
2 – General slowness

A. With STS (SQL Tuning Set) you can create a set of queries to be tuned automatically.

B. Very often our performance issues are due to shortcomings in the statistics. Histograms were a very
good solution, but they do not go to the heart of some problems: statistics on compound columns. E.g., two columns
with 12 distinct values each will lead Oracle to estimate that there are 144 distinct combinations,
but often there are not, because the data in the two columns have an intrinsic connection (months and astrological signs):
you actually have just 24 combinations, not 144, which misguides the Oracle optimizer.
Now you can teach Oracle even this subtle connection among columns.
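The feature behind this is extended statistics on column groups, created through DBMS_STATS; a sketch, with hypothetical table and column names:

```sql
-- Create a column-group extension so the optimizer learns the correlation
select dbms_stats.create_extended_stats(
         ownname   => user,
         tabname   => 'PEOPLE',
         extension => '(BIRTH_MONTH, ASTRO_SIGN)')
  from dual;

-- Regather statistics so the column group gets its own distinct-value count
exec dbms_stats.gather_table_stats(user, 'PEOPLE')
```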

C. Cursor sharing was always a hard decision to make ("do I accept bad execution plans due to overly aggressive cursor sharing,
or do I prefer shared pool issues due to repeated similar statements in my library cache?").
Oracle 11g now has adaptive cursor sharing, which is smart enough to address all of this.
3 – Serious degradation due to heavy backups.
A. In 11g R2 you now have the ZLIB compression algorithm, faster and lighter.
B. Now only the in-use UNDO is backed up (before 11g, all undo was backed up regardless
of whether it was in use or not).
4 – Mission-critical downtime due to application upgrades.
You need to add a simple column to the most heavily used table in the system; with Oracle 11g it is a piece of cake.
Just use DDL wait and your ALTER TABLE will wait for that brief moment in which you can alter the table.
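The parameter behind DDL wait is DDL_LOCK_TIMEOUT; a sketch (table and column names hypothetical):

```sql
-- Wait up to 60 seconds for DML locks to clear instead of failing
-- immediately with ORA-00054
alter session set ddl_lock_timeout = 60;

alter table orders add (loyalty_points number);
```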
5 – Security reasons.
If you are suffering attempts to hack into your database,
Oracle 11g has a lot of new features, among which I highlight the failed-logon delay:
every time someone tries to connect with a wrong password, Oracle adds a delay that
makes brute-force attacks against your database impractical.
6 – Information safety.
Have you ever wondered what would happen if your tape backups fell into malicious hands?
With TDE (Transparent Data Encryption) you do not have to worry anymore:
all data within a given tablespace will be automatically encrypted.
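In 11g this is declared at tablespace creation time; a sketch (names and file path illustrative, and a configured encryption wallet is assumed):

```sql
-- Every segment created in this tablespace is transparently encrypted
create tablespace secure_data
  datafile '/u01/app/oradata/secure01.dbf' size 100m
  encryption using 'AES256'
  default storage (encrypt);
```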
7 – You need to partition a child table based on its parent table, and you cannot replicate the parent's partitioning columns into the child.
This is possible with reference partitioning: you can partition the child table using just its FK.
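A sketch of reference partitioning with hypothetical tables; note the child never stores ORDER_DATE, yet inherits the parent's partitioning through the foreign key (which must be NOT NULL):

```sql
-- Parent partitioned by range on the order date
create table orders (
  order_id   number primary key,
  order_date date not null
)
partition by range (order_date) (
  partition p2011 values less than (date '2012-01-01'),
  partition p2012 values less than (date '2013-01-01')
);

-- Child partitioned by reference to the parent's FK constraint
create table order_items (
  item_id  number primary key,
  order_id number not null,
  constraint fk_items_orders
    foreign key (order_id) references orders
)
partition by reference (fk_items_orders);
```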
8 – Bugs in your Oracle 10g R1 or older.
Why keep struggling to make an obsolete version work, knowing that eventually you are going to face
a problem without a patch? Don't go to 10g R2; go straight to 11g R2.
Depending on your version, you may even capture the load on your database and
check how it will behave on 11g using RAT (Real Application Testing).
9 – Slow reports that slow down the whole system.
You have a perfectly tuned database for your OLTP workload, but once in a while that DW report shows up in the
middle of the day and you need to triple the PGA for a few hours.
The MEMORY_TARGET parameter lets you dynamically shift memory between the SGA and the PGA, decreasing the usage of
temporary tablespaces.
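A sketch of enabling automatic memory management (sizes are illustrative; MEMORY_MAX_TARGET only takes effect after an instance restart):

```sql
-- Upper bound for combined SGA + PGA; requires a restart to take effect
alter system set memory_max_target = 8g scope = spfile;

-- Combined target; Oracle then shifts memory between SGA and PGA as needed
alter system set memory_target = 6g scope = spfile;
```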
10 – The need to test the application with production data.
You refreshed the test database just a couple of hours ago and the developers are already complaining that the data are too old.
You actually have a window to test in the production environment, but then what?
With Oracle 11g you may test in the production database: all you need is to create a snapshot of the data,
and after the tests finish you can discard the snapshot, and production will be as if you had never been there.
11 – Your database is getting slow because of a large number of triggers created to track changes on some tables.
Now you can use Flashback Data Archive, which does this automatically, with less resource usage and in a cleaner way.
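A sketch of Flashback Data Archive in action (archive, tablespace, and table names hypothetical):

```sql
-- An archive that keeps row history for one year in its own tablespace
create flashback archive audit_fda
  tablespace fda_ts
  retention 1 year;

-- Start tracking changes on a table, with no triggers involved
alter table customers flashback archive audit_fda;

-- Historical rows can then be queried with AS OF
select * from customers
 as of timestamp (systimestamp - interval '1' day);
```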

 

It is very important to note that most of these features exist only in Enterprise Edition,
and some of them are separately licensed options (RAT, Partitioning, Tuning Pack).

April 11, 2012

Semijoins Presentation

Filed under: Uncategorized — heliodias @ 7:33 pm

Semijoins Presentation

Must Watch
