Oracle Logbook

January 7, 2010

Best way to delete millions of rows from a table with hundreds of millions of rows

Filed under: Uncategorized — Helio Dias @ 2:48 pm

The best way to do it, with low impact on the users and a very good elapsed time, is:

1 – If possible, drop the indexes, as sketched just below (it's not mandatory; it will just save time)
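
A minimal sketch of step 1, assuming a hypothetical secondary index big_table_ix1. Keep any index the cursor in step 2 needs to find the rows, for example one on filter_column:

-- index and column names here are hypothetical
drop index big_table_ix1;

-- ... run the bulk delete from step 2 ...

-- recreate the index afterwards
create index big_table_ix1 on big_table (some_column);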

2 – Run the delete using bulk collection, as in the example below

declare
    -- collect the rowids of the rows to delete
    cursor crow is
        select rowid rid
        from big_table
        where filter_column = 'OPTION';
    type brecord is table of rowid index by binary_integer;
    brec brecord;
begin
    open crow;
    for vqtd in 1 .. 500 loop
        -- fetch the next batch of rowids (20,000 at a time)
        fetch crow bulk collect into brec limit 20000;
        forall vloop in 1 .. brec.count
            delete from big_table where rowid = brec(vloop);
        commit;                   -- commit each batch to keep undo usage small
        exit when crow%notfound;  -- the last (possibly partial) batch is done
        dbms_lock.sleep(5);       -- pause to reduce the impact on other sessions
    end loop;
    close crow;
end;
/

3 – Adjust the values vqtd, limit, and sleep to best fit your case.

In my case, using these values (vqtd := 500, limit := 20000, and sleep := 5), each whole execution took an hour; note that one run deletes at most vqtd × limit = 10 million rows.

To begin, try vqtd = 10, then check your redo/archive generation and v$system_event to adjust the numbers for your environment.
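
A sketch of that check, using the standard v$ views; sample the redo figure before and after a run:

-- total redo generated by the instance so far, in bytes
select name, value from v$sysstat where name = 'redo size';

-- top waits, to see whether the delete is hurting other sessions
select * from (
    select event, total_waits, time_waited
    from v$system_event
    order by time_waited desc
) where rownum <= 10;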

Batched bulk deletes give you a lot of control over undo usage, since each commit releases the undo held by the previous batch.
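
To watch that in action while the block runs, one option is to query v$transaction from another session; a sketch:

-- undo blocks (used_ublk) and undo records (used_urec) held by active transactions
select s.sid, s.username, t.used_ublk, t.used_urec
from v$transaction t, v$session s
where s.taddr = t.addr
order by t.used_ublk desc;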

7 Comments »

  1. Heliodias, this is a nice one! I was thinking along exactly these lines for the past few days, but was unsure whether my idea was a good option. After looking at your post, I'm sure it is. We are both in the same boat! Using ROWID while bulk deleting gives improved performance, since looking up a record by ROWID is more efficient than going through an index. The code below is exactly like yours, with some simple modifications:
    — removed the fixed vqtd loop count (plain LOOP until no rows remain)
    — handled bulk exceptions (FORALL ... SAVE EXCEPTIONS)

    DECLARE
    CURSOR crow IS
    SELECT rowid
    FROM big_table WHERE filter_column = 'OPTION';
    TYPE brecord IS TABLE OF rowid INDEX BY BINARY_INTEGER;
    brec brecord;
    v_limit NUMBER := 20000; -- adjust this value according to your env
    errors NUMBER;
    dml_errors EXCEPTION;
    PRAGMA exception_init(dml_errors, -24381);

    BEGIN
    OPEN crow;
    LOOP
    FETCH crow BULK COLLECT INTO brec LIMIT v_limit;
    IF brec.COUNT = 0 THEN
    DBMS_OUTPUT.PUT_LINE('Bulk deletion completed');
    COMMIT;
    CLOSE crow;
    EXIT;
    ELSE
    FORALL vloop IN brec.FIRST .. brec.LAST SAVE EXCEPTIONS
    DELETE FROM big_table WHERE rowid = brec(vloop);
    COMMIT;
    END IF;
    END LOOP;

    EXCEPTION
    WHEN dml_errors THEN
    errors := SQL%BULK_EXCEPTIONS.COUNT;
    dbms_output.put_line('Number of errors is ' || errors);
    FOR i IN 1..errors LOOP
    dbms_output.put_line('Error ' || i || ' occurred during iteration ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
    dbms_output.put_line('Oracle error is ' || SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
    END LOOP;
    COMMIT;
    IF crow%ISOPEN THEN
    CLOSE crow;
    END IF;
    END;
    /

    Comment by metallicatony — April 5, 2010 @ 10:43 pm | Reply

  2. +1

    Comment by Wendi Ferner — July 1, 2010 @ 11:46 pm | Reply

  3. Wonderful article about the fastest way to delete from a huge table. Thanks a lot for sharing.

    Comment by Jack Nicholson — September 23, 2010 @ 9:02 pm | Reply

  4. Thanks for the great post. It helped me.
    If an insert is done the same way, will it get a similar performance improvement?

    Can this approach be used for the kind of huge insert query below, or do you see a more efficient alternative for this kind of insert?

    INSERT INTO huge_table_mmdd
    SELECT * FROM huge_table;

    Comment by Sucheta — May 9, 2012 @ 4:22 am | Reply

    • Hi Sucheta,
      To speed up an INSERT ... SELECT, use the APPEND hint (a direct-path insert) and, if your recovery strategy allows it, disable redo logging on the table for the operation.
      (With the APPEND hint)
      insert /*+ append */ into huge_dest
      (select * from main_huge);
      1000000 rows created.

      Elapsed: 00:01:00.30

      (With the APPEND hint and redo logging disabled. Note that NOLOGGING is a table attribute set with ALTER TABLE; written inside the INSERT statement it is silently parsed as a table alias and has no effect.)
      alter table huge_dest nologging;
      insert /*+ append */ into huge_dest
      (select * from main_huge);

      Elapsed: 00:00:31.02

      After using NOLOGGING you must take a backup of your datafiles, because the unlogged changes cannot be recovered from the redo stream.
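
      Putting the reply together, a minimal sketch of the steps around the direct-path insert (same example table names as above):

      alter table huge_dest nologging;

      insert /*+ append */ into huge_dest
      select * from main_huge;

      -- a direct-path insert must be committed before this session
      -- can read huge_dest again (otherwise ORA-12838 is raised)
      commit;

      -- restore normal logging, then take a fresh backup,
      -- since the direct-path changes were not written to redo
      alter table huge_dest logging;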

      Comment by heliodias — May 9, 2012 @ 4:46 pm | Reply

  5. Thanks a lot for spending time to publish “Best way to delete millions rows from hundred millions table | Oracle Logbook”. Thanks yet again -Maisie

    Comment by http://tinyurl.com/slasbiern26187 — February 3, 2013 @ 10:46 am | Reply

  6. Hello, I know the thread is old. I have a similar setup to delete hundreds of millions of rows, but it always fills up the undo tablespace (UNDOTS). I do not have the dbms_lock.sleep(5) line.
    Does this line affect the filling of the undo tablespace? The other parameter is limit = 5000.
    The row counts in the tables are around 10M to 700M.

    Comment by jsixface — March 27, 2014 @ 8:48 pm | Reply

