Oracle Logbook

September 10, 2012

Exadata Certified Specialist – Achieved last week

Filed under: Uncategorized — heliodias @ 2:43 pm


June 22, 2012

A quick way to check whether a query has a performance problem

Filed under: Uncategorized — heliodias @ 4:55 pm

This article is for those who are beginning in query performance tuning.

(If and when you become experienced, start thinking about events 10046 and 10053 and wait events.)

For now, your best start is to check whether the amount of work the query does makes sense.

To accomplish that, set autotrace on and run your query again.
Let's pick a query that visits two tables of 1,000 blocks each, where the result set is 10 rows.
The best case would be the query visiting 2 blocks (1 from each table); that is a dream scenario and you will rarely see it.
Visiting 20 blocks would still be very good (ten blocks from each table).
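In SQL*Plus the check looks roughly like this (t1, t2 and the predicate are hypothetical; the number to compare against your estimate is "consistent gets"):

```sql
SET AUTOTRACE TRACEONLY STATISTICS

-- Hypothetical query visiting the two 1000-block tables
SELECT t1.col_a, t2.col_b
  FROM t1
  JOIN t2 ON t2.t1_id = t1.id
 WHERE t1.col_a = :val;

-- In the statistics output, "consistent gets" is (roughly)
-- the number of blocks the query visited.
```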

OK, but let's suppose your query is visiting 1,500 blocks. Is that good?
IT DEPENDS: how many rows do you really need from each table? 500 rows? Then it seems reasonable. (It's not wonderful, since those 500 rows could fit within a few blocks; apparently in this case the data is spread among many blocks.)
The way to improve it usually has to do with the physical organization of the data, e.g. the lack of an index-organized table.

What about 5 rows? NO WAY: you are probably missing an index or statistics.

What if you are visiting 10,000 blocks? Big problem: you may have a Cartesian join (in very few cases it makes sense, but usually it doesn't). The clue here is a DISTINCT clause, which is often evidence that you didn't create the perfect join among the tables. (A perfect join of 2 rows in each table produces 2, 1 or zero rows; 3 or 4 rows is usually a clear indication of a problem.)

Formula of a wonderful query:
(rows needed from table1 / avg rows per block of table1) + (rows needed from table2 / avg rows per block of table2)
E.g. your predicate (filter) over the first table returns 200 rows.
Each block of table1 usually holds 50 rows, so you need to visit 4 blocks from table1.
The same scenario repeats over the second table, and you end up visiting 8 blocks. This is very, very good.

Formula of a very good query:
(rows needed from table1) + (rows needed from table2)
In this case you end up visiting 400 blocks, one block per needed row.
Your query seems good, but your data access/organization doesn't.

Formula of a bad query:
(rows needed from table1) * (rows needed from table2)
In this case you end up visiting 40,000 blocks: the join is multiplying rows instead of matching them, so the plan itself, not just the data organization, is a problem.

Formula of a very bad query:
(total rows in table1) * (total rows in table2)
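Using the 200-row example above, the formulas can be sanity-checked with a quick calculation on DUAL (the numbers are illustrative):

```sql
SELECT 200/50 + 200/50 AS wonderful_blocks,  -- rows / avg rows per block, per table
       200 + 200       AS very_good_blocks,  -- one block per needed row
       200 * 200       AS bad_blocks         -- needed rows multiplied by the join
  FROM dual;
```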

June 20, 2012

The simplest query for checking what's happening in an Oracle database

Filed under: Uncategorized — heliodias @ 8:08 pm

Often we are in a hurry and need just a rough idea of what's going on in our database right now.

Tanel Poder, one of the first OCMs in the world, wrote a very clear explanation, which I summarize with his query:

COLUMN sw_event FORMAT a20

select
     count(*),
     CASE WHEN state != 'WAITING' THEN 'WORKING'
          ELSE 'WAITING'
     END AS state,
     CASE WHEN state != 'WAITING' THEN 'On CPU / runqueue'
          ELSE event
     END AS sw_event
  FROM
     v$session
  WHERE
      type = 'USER'
  AND status = 'ACTIVE'
  GROUP BY
     CASE WHEN state != 'WAITING' THEN 'WORKING'
          ELSE 'WAITING'
     END,
     CASE WHEN state != 'WAITING' THEN 'On CPU / runqueue'
          ELSE event
     END
  ORDER BY
     1 DESC, 2 DESC;

Tanel's article is available at:

http://blog.tanelpoder.com/2008/08/07/the-simplest-query-for-checking-whats-happening-in-a-database/

AWR is not free and requires a license – DBA_FEATURE_USAGE_STATISTICS

Filed under: Uncategorized — heliodias @ 3:18 pm

Be aware that if you use this feature, Oracle will log it, and you will fail a license audit.

Even if you just select from some DBA_HIST_* views, Oracle will log that access into DBA_FEATURE_USAGE_STATISTICS and eventually you will have to pay.

DBA_FEATURE_USAGE_STATISTICS holds information such as how many times you used the feature and when the last time was.
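You can check the logged usage yourself; for example, to see whether AWR (a Diagnostics Pack feature) shows up:

```sql
-- Check whether AWR usage has been detected on this database
SELECT name, version, detected_usages, last_usage_date
  FROM dba_feature_usage_statistics
 WHERE name LIKE '%AWR%';
```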

June 18, 2012

The biggest difference between ANALYZE and DBMS_STATS

Filed under: Uncategorized — heliodias @ 5:46 pm

ANALYZE collects statistics from the indexes associated with a table, and DBMS_STATS doesn't.

With DBMS_STATS you should add the parameter cascade=>true to accomplish the same result.
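As a minimal sketch (SCOTT.EMP is just the classic demo schema):

```sql
-- ANALYZE gathers index statistics along with the table:
ANALYZE TABLE scott.emp COMPUTE STATISTICS;

-- With DBMS_STATS, ask for the same behaviour explicitly:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
      ownname => 'SCOTT',
      tabname => 'EMP',
      cascade => TRUE);  -- also gather stats on EMP's indexes
END;
/
```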

June 15, 2012

A small list of bugs that lead Oracle to choose the wrong path

Filed under: Uncategorized — heliodias @ 4:59 pm

I saw this at OTN, and it's quite useful for checking over time whether your Oracle version still suffers from these bugs.

CBO does not consider cheaper NL-Plan without hints

June 14, 2012

Improving performance with foreign keys

Filed under: Uncategorized — heliodias @ 4:21 pm

Oracle statistics are single-table only (maybe version 12 will come with relationship statistics).

When you have a subquery with FKs between the tables, you may know that for each row in the child table there is exactly one row in the parent table, but Oracle is blind in this matter.

So Oracle does some math to guess what the selectivity on the main table could be, and it usually goes wrong, sometimes very wrong.

So the only way to give Oracle the full picture is to create the foreign key (yes, that foreign key you avoided creating because it "never will be violated"). Then Oracle will give you a much better plan.
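A minimal sketch, with hypothetical parent/child tables:

```sql
-- The FK tells the optimizer that every child row
-- matches exactly one parent row
ALTER TABLE child_table
  ADD CONSTRAINT child_parent_fk
  FOREIGN KEY (parent_id)
  REFERENCES parent_table (id);
```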

I decided to write about it after answering this post at OTN: Different Execution paths for similar query.

June 12, 2012

How to control the speed of archive generation during an Oracle import

Filed under: Uncategorized — heliodias @ 5:19 pm

The scenario: Oracle 10g, importing a table from a 35 GB dump file into a test instance with a small archive area, while a simultaneous delete runs on the same table (therefore we must use commit=y).

Another huge constraint is that the archive backup runs every other hour and can't be changed.

The bottom line is that I need something like a parameter MAXIMUM_ROWS_IMP_HOUR=2000000; the issue is that no such parameter exists.

The solution:

Wrap the lock table, the delete, a sleep (big), the commit and another sleep (small) inside a PL/SQL block.

The "sleep" will let the import load some rows, and the lock will slow down the pace of the whole process.
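A sketch of that block (big_table, the purge condition and the timings are assumptions; tune the sleeps to your archive budget):

```sql
BEGIN
  LOOP
    LOCK TABLE big_table IN EXCLUSIVE MODE;
    DELETE FROM big_table
     WHERE created < SYSDATE - 30;  -- hypothetical purge condition
    EXIT WHEN SQL%ROWCOUNT = 0;     -- nothing left to purge
    DBMS_LOCK.SLEEP(600);  -- big sleep: hold the lock, stall the import
    COMMIT;                -- releases the lock; the import resumes
    DBMS_LOCK.SLEEP(30);   -- small sleep: let a batch of rows through
  END LOOP;
  COMMIT;
END;
/
```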

June 11, 2012

The best way to add NOT NULL columns to a million-row table

Filed under: Uncategorized — heliodias @ 10:12 pm

I saw this question today at OTN:

Massive updates of new columns on tables

DBMS_REDEFINITION is usually the best way to go.

You benefit from direct-path technology, plus a controlled operation, avoiding all those worries about disabling triggers, FKs, indexes, and so on.
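A sketch of the flow (the APP schema and the ORDERS tables are hypothetical; the interim table is pre-created with the new NOT NULL column already in place):

```sql
DECLARE
  l_errors PLS_INTEGER;
BEGIN
  -- Verify the table qualifies for online redefinition
  DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'ORDERS');

  -- Start copying rows into the interim table (direct path),
  -- filling the new column with its default value
  DBMS_REDEFINITION.START_REDEF_TABLE(
      uname       => 'APP',
      orig_table  => 'ORDERS',
      int_table   => 'ORDERS_INTERIM',
      col_mapping => 'order_id order_id, amount amount, ''N'' new_flag');

  -- Recreate triggers, indexes, constraints and grants
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
      uname      => 'APP',
      orig_table => 'ORDERS',
      int_table  => 'ORDERS_INTERIM',
      num_errors => l_errors);

  -- Swap the tables
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_INTERIM');
END;
/
```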


May 25, 2012

Why tables without indexes should have their PCTFREE set to zero or one

Filed under: Uncategorized — heliodias @ 5:26 pm

If your table doesn't have any indexes, it is most likely some sort of audit or log table; no further updates will happen there, so any PCTFREE value above one is a complete waste of room.
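For example (audit_log and its columns are hypothetical):

```sql
-- Rows are inserted once and never updated, so no free space
-- needs to be reserved in each block
CREATE TABLE audit_log (
  logged_at DATE,
  username  VARCHAR2(30),
  action    VARCHAR2(200)
) PCTFREE 0;

-- For an existing table, this only affects blocks formatted from now on:
ALTER TABLE audit_log PCTFREE 0;
```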

