Using bulk insert to quickly insert large volumes of data

In an Oracle database, when a large amount of data needs to be inserted into a table, say millions of rows in one operation, the usual approach is to add hints such as nologging and append to speed up the insert.
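For reference, a minimal sketch of that conventional approach (it assumes the test table t1 created below; the statements are standard Oracle syntax but are not taken from the original post):

-- reduce redo generation for direct-path loads on t1 (unless the database forces logging)
alter table t1 nologging;
-- the append hint requests a direct-path insert: new blocks are written above the high-water mark
insert /*+ append */ into t1 select * from dba_objects;
commit;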

But there is another method that changes the way rows are inserted at a more fundamental level: bulk insert (PL/SQL bulk binding with BULK COLLECT and FORALL).

Below is a simple comparison of this method against the ordinary insert method, focusing on performance. What happens at the data-block level will be analyzed in a later post.

(Original post: http://mikixiyou.iteye.com/blog/1532688 )

First, create an ordinary table t1 (CREATE TABLE t1 AS SELECT * FROM dba_objects).
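The setup statement as it would be typed in SQL*Plus; the WHERE 1 = 0 variant is an optional alternative (my assumption, not in the original post) if you prefer the test table to start empty:

-- as in the original post: copy both structure and data from dba_objects
CREATE TABLE t1 AS SELECT * FROM dba_objects;

-- optional alternative (assumption): copy only the structure so t1 starts empty
-- CREATE TABLE t1 AS SELECT * FROM dba_objects WHERE 1 = 0;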

Next, write a stored procedure that simulates the ordinary row-by-row insert method.

create or replace procedure test_proc is
begin
  -- row-by-row insert: one INSERT ... VALUES statement per cursor row
  for x in (select * from dba_objects) loop
    insert into t1
      (owner,
       object_name,
       subobject_name,
       object_id,
       data_object_id,
       object_type,
       created,
       last_ddl_time,
       timestamp,
       status,
       temporary,
       generated,
       secondary)
    values
      (x.owner,
       x.object_name,
       x.subobject_name,
       x.object_id,
       x.data_object_id,
       x.object_type,
       x.created,
       x.last_ddl_time,
       x.timestamp,
       x.status,
       x.temporary,
       x.generated,
       x.secondary);
  end loop;
  commit;
end test_proc;
/
 

 

Now a procedure that simulates the bulk insert method:

create or replace procedure test_proc2(p_array_size in pls_integer default 100) is
  type array is table of dba_objects%rowtype;
  l_data array;
  cursor c is
    select * from dba_objects;
begin
  open c;
  loop
    -- fetch up to p_array_size rows into the collection in one round trip
    fetch c bulk collect
      into l_data limit p_array_size;
    -- insert the whole batch with a single bulk-bound FORALL statement
    forall i in 1 .. l_data.count
      insert into t1 values l_data (i);
    -- exit only after the (possibly partial) last batch has been inserted
    exit when c%notfound;
  end loop;
  close c;
  commit;
end test_proc2;
/
 

In this procedure, rows are processed in batches of 100 (the default value of p_array_size), as shown in the usage sketch below.
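A minimal usage sketch in SQL*Plus (the 1000-row batch size is just an illustrative value, not from the original post):

exec test_proc2;        -- default batch size: 100 rows per FORALL insert
exec test_proc2(1000);  -- larger batches: fewer fetch/insert round trips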

Now run the test. The steps are as follows.

Running the ordinary method once requires 23,947 consistent gets (23970 - 23):

SQL> select b.name,a.* from v$mystat a,v$statname b where a.STATISTIC#=b.STATISTIC# and b.name in ('consistent gets','physical reads')
  2  ;

NAME                                                                    SID STATISTIC#      VALUE
---------------------------------------------------------------- ---------- ---------- ----------
consistent gets                                                         395         50         23
physical reads                                                          395         54          0

SQL> execute test_proc;

PL/SQL procedure successfully completed

SQL> select b.name,a.* from v$mystat a,v$statname b where a.STATISTIC#=b.STATISTIC# and b.name in ('consistent gets','physical reads');

NAME                                                                    SID STATISTIC#      VALUE
---------------------------------------------------------------- ---------- ---------- ----------
consistent gets                                                         395         50      23970
physical reads                                                          395         54          0

SQL>

Running the bulk insert method once requires 6,710 consistent gets (6729 - 19):

 

SQL> select b.name,a.* from v$mystat a,v$statname b where a.STATISTIC#=b.STATISTIC# and b.name in ('consistent gets','physical reads');

NAME                                                                    SID STATISTIC#      VALUE
---------------------------------------------------------------- ---------- ---------- ----------
consistent gets                                                         420         50         19
physical reads                                                          420         54          0

SQL> execute test_proc2;

PL/SQL procedure successfully completed

SQL> select b.name,a.* from v$mystat a,v$statname b where a.STATISTIC#=b.STATISTIC# and b.name in ('consistent gets','physical reads');

NAME                                                                    SID STATISTIC#      VALUE
---------------------------------------------------------------- ---------- ---------- ----------
consistent gets                                                         420         50       6729
physical reads                                                          420         54          0

The test results confirm that the difference in logical reads (consistent gets) is substantial: 23,947 for the row-by-row method versus 6,710 for bulk insert. With array fetching, each data block is visited roughly once per batch rather than once per row, which is why the consistent gets drop so sharply.

Therefore, for large-volume insert operations, the bulk insert method is worth adopting.
