Oracle Logbook

May 13, 2013

Benefits of Oracle Block Change Tracking

Filed under: Uncategorized — Helio Dias @ 8:53 pm

Depending on the amount of block changes and the size of your database, you can see a huge decrease in incremental backup time.

To activate:

alter database enable block change tracking using file '/u02/oracle/bct/bct.log';

To monitor it:

SELECT filename, status, bytes
FROM v$block_change_tracking;
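To see whether the tracking file is really paying off, compare blocks read versus datafile size for your incremental backups. The sketch below uses v$backup_datafile; the column list is from memory, so treat it as an assumption to verify on your version:

SELECT file#,
       AVG(datafile_blocks) AS datafile_blocks,
       AVG(blocks_read) AS blocks_read,
       ROUND(AVG(blocks_read / datafile_blocks) * 100, 2) AS pct_read
FROM v$backup_datafile
WHERE used_change_tracking = 'YES'
  AND incremental_level > 0
GROUP BY file#
ORDER BY file#;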

March 27, 2013

Query to check if your Oracle redo is switching too frequently

Filed under: Uncategorized — Helio Dias @ 7:12 pm

It is easy to find papers everywhere about how frequently the redo logs should fill up.

Personally I like a switch every half hour; it doesn't put a lot of pressure on checkpoints, giving us a good balance between recovery time and redo sync waits.

The query below will show whether any switch during working hours (7 AM to 7 PM) is happening more often than twice an hour.

(It checks just the last ten days.)

select to_number(to_char(t1.first_time,'ddmm')||t1.thread#||t1.sequence#) sequencia,
to_number(to_char(t1.first_time,'hh24'))*60 + to_number(to_char(t1.first_time,'mi')) time,
to_number(to_char(t2.first_time,'hh24'))*60 + to_number(to_char(t2.first_time,'mi')) time2
from v$archived_log t1, v$archived_log t2
where
to_number(to_char(t1.first_time,'ddmm')||t1.thread#||t1.sequence#) = to_number(to_char(t2.first_time,'ddmm')||t2.thread#||t2.sequence#-1) and
(to_number(to_char(t2.first_time,'hh24'))*60 + to_number(to_char(t2.first_time,'mi')))
-
(to_number(to_char(t1.first_time,'hh24'))*60 + to_number(to_char(t1.first_time,'mi')))
< 30  -- redo switching more than twice per hour
/* beginning of time validation,
aiming to avoid batch operations */
and to_number(to_char(t1.first_time,'hh24')) between 7 and 19 and
to_number(to_char(t2.first_time,'hh24')) between 7 and 19
-- end of time validation
and t1.first_time > sysdate-10
and t2.first_time > sysdate-10
;
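If the query does flag frequent switches during business hours, the usual remedy is to increase the online redo log size. A minimal sketch for checking the current group sizes (the sizing decision itself depends on your workload):

SELECT thread#, group#, bytes/1024/1024 AS size_mb, status
FROM v$log
ORDER BY thread#, group#;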

March 21, 2013

Monitoring backups through the alert log

Filed under: Uncategorized — Helio Dias @ 4:21 pm

This procedure will write into the alert.log when the environment doesn't have online backups working properly.


CREATE OR REPLACE PROCEDURE SYS.L3_BACKUP_CHECK
AS
  STATUS_ONLINE VARCHAR2(10);
BEGIN
  -- FAIL when no database (non-archivelog) backup has run in the last 36 hours
  SELECT DECODE(COUNT(*), 0, 'FAIL', 'OK') INTO STATUS_ONLINE
  FROM V$RMAN_BACKUP_JOB_DETAILS
  WHERE START_TIME > SYSDATE - 36/24
    AND INPUT_TYPE NOT LIKE '%ARCHIVE%';
  IF STATUS_ONLINE = 'FAIL' THEN
    -- destination 2 = alert.log; the ORA-00600 prefix makes alert-log monitors raise it
    dbms_system.ksdwrt(2, 'ORA-00600: ONLINE BACKUP FAIL, NOT BACKING UP   +36 HOURS');
  END IF;
END;
/
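To run the check automatically, something like the DBMS_SCHEDULER sketch below could be used; the job name and the hourly frequency are just assumptions for illustration:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'SYS.L3_BACKUP_CHECK_JOB',  -- hypothetical job name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'SYS.L3_BACKUP_CHECK',
    repeat_interval => 'FREQ=HOURLY',
    enabled         => TRUE);
END;
/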


March 8, 2013

Why Oracle automatic memory management is often a bad idea

Filed under: Uncategorized — Helio Dias @ 2:11 pm

Oracle probably created this feature for those who don't have a DBA on site and need a database that can operate by itself.

The use of AMM is not free of cost: a peak of usage in one area often leads to a later decrease in performance in another area.


E.g. if a bad query loads tons of blocks into the buffer cache and Oracle deallocates memory from the shared pool to accommodate the request, eventually the whole database pays a huge price in performance because of the information lost from the shared pool.
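When this is happening you can usually see the constant shrink/grow churn in the resize history. A minimal sketch against v$sga_resize_ops (with full AMM there is also a v$memory_resize_ops counterpart); the column list is from memory, so verify it on your version:

SELECT component, oper_type,
       initial_size/1024/1024 AS initial_mb,
       final_size/1024/1024 AS final_mb,
       start_time
FROM v$sga_resize_ops
ORDER BY start_time DESC;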


So the bottom line is: if your database has someone watching it closely, don't use AMM.

February 25, 2013

The best advance in Oracle database 12c

Filed under: Uncategorized — Helio Dias @ 8:11 pm

Adaptive execution plans.

This new feature by itself will justify many upgrades around the globe.

Most database operations are repetitive, and everywhere there is always some "bad execution plan" caused by a lack of precise statistics; this used to repeat over and over.

But now the database will behave better and better.

I can imagine improvements of more than 90% on the hardest operations.

Decreasing CPU and memory demands.
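On 12c you can spot which cursors ended up with an adaptive plan; a minimal sketch, using the v$sql column and DBMS_XPLAN format modifier as I recall them in 12.1 (the sql_id substitution is a placeholder):

SELECT sql_id, child_number, is_resolved_adaptive_plan
FROM v$sql
WHERE is_resolved_adaptive_plan = 'Y';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL +ADAPTIVE'));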


February 14, 2013

How to measure IO performance inside Oracle Database

Filed under: Uncategorized — Helio Dias @ 8:36 pm

The challenge was to create a baseline, from the Oracle database viewpoint, which would allow us to spot whether performance changed after a storage migration.

I created a "magical number" called Whole MBytes per second: the total read and write volume of the whole database's operations, including background processes, redo writes and of course the foreground processes, divided by the accumulated I/O wait time in seconds.

-- 'user I/O wait time' is reported in centiseconds, hence the division by 100
select total_mb/trunc(value/100) whole_mb_per_second
from v$sysstat,
(select trunc(sum(value)/1024/1024) total_mb from v$sysstat where name in
('physical read total bytes','physical write total bytes'))
where name='user I/O wait time';
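One caveat: v$sysstat is cumulative since instance startup, so for a fair before/after comparison you would capture the raw counters on each side of the migration and work with the deltas. A minimal sketch (the table name is just an example):

CREATE TABLE io_baseline AS
SELECT sysdate AS sample_time, name, value
FROM v$sysstat
WHERE name IN ('physical read total bytes',
               'physical write total bytes',
               'user I/O wait time');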

September 10, 2012

Exadata Certified Specialist – Achieved last week

Filed under: Uncategorized — Helio Dias @ 2:43 pm

June 22, 2012

Quick way to check whether a query has a performance problem or not

Filed under: Uncategorized — Helio Dias @ 4:55 pm

This article is for those who are beginning in query performance tuning.

(If and when you become experienced, start thinking about the 10046 and 10053 traces and wait events.)

For now, your best start would be to check whether the amount of work the query does makes sense.

To accomplish this, you will need to set autotrace on and run your query again, as shown in the sketch below.
Let's pick a query that visits two tables, each with 1,000 blocks, where the result set is 10 rows.
The best case would be if your query visits 2 blocks (1 from each table); that is just a dream situation and you will rarely see it.
20 blocks would still be very good (ten blocks from each table).
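A minimal SQL*Plus sketch (the tables and the query are just examples); the "consistent gets" figure in the statistics output, plus any "physical reads", is the number of block visits this article is talking about:

SET AUTOTRACE TRACEONLY STATISTICS
SELECT e.ename, d.dname
FROM emp e JOIN dept d ON d.deptno = e.deptno
WHERE e.empno = 7788;
SET AUTOTRACE OFF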

OK, but let's suppose your query is visiting 1,500 blocks; is that good?
IT DEPENDS: how many rows do you really need from each table? 500 rows? Then it seems reasonable. (It is not wonderful, because those 500 rows could sit within a few blocks, but apparently in this case the data is spread among many blocks.)
The way to improve it usually has to do with the physical organization of the data, e.g. the lack of an index-organized table.

What about 5 rows? NO WAY; you probably lack an index or statistics.

What if you are visiting 10,000 blocks? Big problem: you may have a Cartesian join (in a very few cases it makes sense, but usually it doesn't). The clue here is the DISTINCT clause; it is often evidence that you didn't create the perfect join between the tables. (A perfect join would be 2 rows in each table producing 2, 1 or zero rows; 3 or 4 rows is usually a clear indication of a problem.)

Formula of a wonderful query:
(rows needed from table1 / avg rows per block for that table) + (rows needed from table2 / avg rows per block for that table)
E.g. your predicate (filter) on the first table will get 200 rows;
each block of table1 usually holds 50 rows, so you need to visit 4 blocks from table1.
Then the same scenario repeats on the second table, and you end up visiting 8 blocks. This is just very, very good.
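The average rows per block used above can be estimated from the optimizer statistics; a minimal sketch (the table names are placeholders, and it assumes statistics are reasonably fresh):

SELECT table_name, num_rows, blocks,
       ROUND(num_rows / NULLIF(blocks, 0)) AS avg_rows_per_block
FROM user_tables
WHERE table_name IN ('TABLE1', 'TABLE2');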

Formula of a very good query:
(rows needed from table1) + (rows needed from table2)
In this case you end up visiting 400 blocks;
your query seems good but your data access/organization doesn't.

Formula of a good query:
(rows needed from table1) * (rows needed from table2)
In this case you end up with 200 * 200 = 40,000 block visits (many of them revisits of cached blocks);
your query still returns the rows you need, but your join and data organization really don't work well.

Formula of a very bad query:
(total rows of table1) * (total rows of table2), i.e. a full Cartesian product.

June 20, 2012

The simplest query for checking what's happening in an Oracle database

Filed under: Uncategorized — Helio Dias @ 8:08 pm

Often we are in a hurry and need just a rough idea of what's going on in the database right now.

Tanel Poder, one of the first OCMs in the world, wrote a very easy explanation, which I summarize with his query:

Column sw_event format a20

select
     count(*),
     CASE WHEN state != 'WAITING' THEN 'WORKING'
          ELSE 'WAITING'
     END AS state,
     CASE WHEN state != 'WAITING' THEN 'On CPU / runqueue'
          ELSE event
     END AS sw_event
  FROM
     v$session
  WHERE
      type = 'USER'
  AND status = 'ACTIVE'
  GROUP BY
     CASE WHEN state != 'WAITING' THEN 'WORKING'
          ELSE 'WAITING'
     END,
     CASE WHEN state != 'WAITING' THEN 'On CPU / runqueue'
          ELSE event
     END
  ORDER BY
     1 DESC, 2 DESC;

Tanel's article is available at:

http://blog.tanelpoder.com/2008/08/07/the-simplest-query-for-checking-whats-happening-in-a-database/

AWR is not free and requires a license – DBA_FEATURE_USAGE_STATISTICS

Filed under: Uncategorized — Helio Dias @ 3:18 pm

Be aware that if you use this feature, Oracle will log it, and you can fail a license audit.

Even if you just select from some of the AWR history (DBA_HIST_*) views, Oracle will log this access into DBA_FEATURE_USAGE_STATISTICS and eventually you will have to pay.

DBA_FEATURE_USAGE_STATISTICS holds information on how many times you used the feature and when it was last used.
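A quick way to check what has already been recorded against you; the LIKE patterns are assumptions, since the feature names vary by version:

SELECT name, version, detected_usages, currently_used, last_usage_date
FROM dba_feature_usage_statistics
WHERE name LIKE '%AWR%' OR name LIKE '%ADDM%' OR name LIKE '%Diagnostic%';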
