Oracle Logbook

January 7, 2010

Best way to delete millions of rows from a table with hundreds of millions of rows

Filed under: Uncategorized — Helio Dias @ 2:48 pm

The best way to do it, with low impact on the users and a good elapsed time, is:

1 – If possible, drop the indexes on the table (it's not mandatory, it will just save time); a sketch is shown right below this step.
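A minimal sketch of that step, assuming a hypothetical index name big_table_ix1 (list the real names in user_indexes first):

select index_name from user_indexes where table_name = 'BIG_TABLE';

drop index big_table_ix1;

-- after the delete finishes, recreate it:
create index big_table_ix1 on big_table (filter_column);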

2 – Run the delete using BULK COLLECT, as in the example below

declare
  -- rowids of the rows to be deleted
  cursor crow is
    select rowid rid
      from big_table
     where filter_column = 'OPTION';
  type brecord is table of rowid index by binary_integer;
  brec brecord;
begin
  open crow;
  for vqtd in 1 .. 500 loop                          -- cap on the number of batches per run
    fetch crow bulk collect into brec limit 20000;   -- fetch one batch of rowids
    exit when brec.count = 0;                        -- nothing left to delete
    forall vloop in 1 .. brec.count
      delete from big_table where rowid = brec(vloop);
    commit;                                          -- release undo and locks after each batch
    dbms_lock.sleep(5);                              -- pause to lower the impact on other sessions
  end loop;
  close crow;
end;
/

3 – Adjust the values of vqtd, limit and sleep to best fit your case.

In my case, using these values (vqtd = 500, limit = 20000 and sleep = 5), each whole execution took an hour (500 iterations of up to 20,000 rows is up to 10 million rows per run, and the 5-second sleeps alone account for roughly 42 minutes of that).

Try beginning with vqtd = 10, then check your redo/archive generation and v$system_event to tune the values for your environment.
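For example, two simple checks, shown here only as a sketch: the redo generated by your own session, and the top waits in v$system_event.

select sn.name, ms.value
  from v$mystat ms, v$statname sn
 where sn.statistic# = ms.statistic#
   and sn.name = 'redo size';

select *
  from (select event, total_waits, time_waited
          from v$system_event
         order by time_waited desc)
 where rownum <= 10;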

Bulk delete gives you a lot of control over undo usage.
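To watch that in practice, a query like the one below (run from another session while the delete block is executing) shows the undo blocks and undo records used by each active transaction:

select s.sid, s.username, t.used_ublk, t.used_urec
  from v$transaction t, v$session s
 where s.taddr = t.addr;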
