Insert a row with the insert method
Use the insert method to insert a row into the table.
$dbi->insert({id => 1, title => 'Perl'}, table => 'book');
In the first argument, specify the data you want to insert as a hash reference. Specify the target table with the table option. The following SQL is issued:
insert into book (id, title) values (?, ?);
The corresponding values are bound to the placeholders.
Insert constant
If you want to embed constants in parameters instead of placeholders, you can specify a scalar reference.
$dbi->insert({id => 1, ctime => \"now()"}, table => 'book');
The following SQL is executed:
insert into book (id, ctime) values (?, now());
insert method options
The insert method accepts all the options of the execute method. For those options, refer to the options section of the execute method.
In addition, you can use the following options:
Setting the current time at insertion (creation time) ctime
Use the ctime option to set the current time for the specified column when you perform an insert.
ctime => 'created_time'
The default format is "YYYY-mm-dd HH:MM:SS". You can change this by setting the now attribute.
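For illustration, here is a minimal sketch of a full call using the ctime option; created_time is an assumed column name.
# Sketch: the created_time column is filled with the current time
$dbi->insert(
  {id => 1, title => 'Perl'},
  table => 'book',
  ctime => 'created_time'
);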
ID specification id
You can specify the id in the same way as the execute method. The primary_key must be set.
id => 4
id => [4, 5]
The data to be inserted is created using the column names set in primary_key.
$dbi->insert(
  {title => 'Perl', author => 'Ken'},
  primary_key => ['id1', 'id2'],
  id => [4, 5],
  table => 'book'
);
It has the same meaning as the following insert statement.
$dbi->insert(
  {id1 => 4, id2 => 5, title => 'Perl', author => 'Ken'},
  table => 'book'
);
Prefix prefix
You can use the prefix option to add any string between "insert" and "into table name" in the insert statement.
prefix => 'or replace'
The following SQL will be created.
insert or replace into book
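Putting it together, a minimal sketch of the full call (SQLite's "insert or replace" syntax is assumed here):
# Sketch: insert or replace via the prefix option
$dbi->insert(
  {id => 1, title => 'Perl'},
  table  => 'book',
  prefix => 'or replace'
);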
Table name table
Specify the table name with the table option. The table name is required.
table => 'book'
Setting the current time at insertion (modification time) mtime
This option works the same as the mtime option of the update method: the current time is set for the specified column. The time format can be changed with the now attribute.
mtime => 'modified_time'
At insert time, the creation time is also the first modification time, so the same current time is inserted into both the column set by ctime and the column set by mtime.
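A minimal sketch combining both options; created_time and modified_time are assumed column names.
# Sketch: set creation and modification time columns in one insert
$dbi->insert(
  {id => 1, title => 'Perl'},
  table => 'book',
  ctime => 'created_time',    # assumed column name
  mtime => 'modified_time'    # assumed column name
);
# Both columns receive the same current time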
Placeholder wrapping wrap
If you need to apply a database function to a value in the values part of an insert, you can use the wrap option. The placeholder for the specified column is converted by the given code reference.
wrap => {price => sub { "$_[0] + 5" }}
Suppose you execute an insert statement like the one below.
$dbi->insert(
  {id => 1, price => 100},
  table => 'book',
  wrap => {price => sub { "$_[0] + 5" }}
);
Then the placeholder part of price will be replaced using the code reference specified by wrap.
insert into book (id, price) values (?, ? + 5);
How to speed up a large number of inserts
DBIx::Custom is a convenient wrapper around DBI, but when doing a large number of inserts in succession its performance is significantly worse than raw DBI.
In such a case, insert multiple rows at once. The insert method of DBIx::Custom can accept multiple rows of data in a single call.
my @params = ({id => 1, title => 'Perl'}, {id => 2, title => 'Ruby'});
$dbi->insert(\@params, table => 'book');
Compared to raw DBI, the performance degradation when doing a large number of inserts this way is small.
The limitation of multiple inserts is that the id option cannot be used. Also, if you do not use the filter, type rule, and bind type options, inserts become even faster. If a type rule has already been set, you can disable it by setting type_rule_off to 1.
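For example, a sketch of a multiple insert with the type rule disabled (type_rule_off is one of the execute options that insert also accepts):
# Sketch: multiple insert with type rules disabled for extra speed
my @params = ({id => 1, title => 'Perl'}, {id => 2, title => 'Ruby'});
$dbi->insert(\@params, table => 'book', type_rule_off => 1);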
Note that the columns of the first hash in the parameter list determine the generated SQL; the remaining parameters must contain the same columns as the first.
Below are the benchmark script and the results. SQLite was used as the database.
use strict;
use warnings;

use Benchmark qw/timethese cmpthese/;
use DBIx::Custom;

my $dbi = DBIx::Custom->connect(
  dsn => 'dbi:SQLite:dbname=:memory:',
);

$dbi->execute("create table book (id, title)");

# Performance comparison
my $result = timethese(100, {
  # Normal insert
  normal => sub {
    for my $i (1 .. 100) {
      my $param = {id => $i, title => 'Perl'};
      $dbi->insert($param, table => 'book');
    }
  },
  # Multiple inserts at once
  multiple => sub {
    my @params;
    for my $i (1 .. 100) {
      push @params, {id => $i, title => 'Perl'};
    }
    $dbi->insert(\@params, table => 'book');
  },
  # Reuse the statement handle in raw DBI
  raw => sub {
    my $sth;
    for my $i (1 .. 100) {
      $sth ||= $dbi->dbh->prepare('insert into book (id, title) values (?, ?)');
      my $title = 'Perl';
      $sth->execute($i, $title);
    }
  }
});

cmpthese($result);
The benchmark results for multiple inserts are only slightly slower than raw DBI. Compared to a normal insert there is a difference of nearly 10 times, so multiple inserts are worth using when performing a large number of inserts at once.
           Rate   normal multiple      raw
normal   22.3/s       --     -90%     -92%
multiple  213/s     855%       --     -23%
raw       278/s    1147%      31%       --
Speed up with bulk insert
If your database supports bulk inserts, you can use them; currently this is available for MySQL and PostgreSQL. Because the database itself inserts multiple rows at once, you can expect even higher speed.
$dbi->insert($params, table => 'book', bulk_insert => 1);
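For reference, a minimal sketch with the parameters spelled out; the generated SQL is presumably a single multi-row statement, though the exact form may differ by driver.
# Sketch: bulk insert of two rows (assumes MySQL or PostgreSQL)
my @params = ({id => 1, title => 'Perl'}, {id => 2, title => 'Ruby'});
$dbi->insert(\@params, table => 'book', bulk_insert => 1);
# Presumably generates SQL roughly like:
# insert into book (id, title) values (?, ?), (?, ?);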
Benchmark with bulk_insert in MySQL
Below is the benchmark script.
use strict;
use warnings;

use Benchmark qw/timethese cmpthese/;
use DBIx::Custom;

my $dbi = DBIx::Custom->connect(
  dsn => 'dbi:mysql:database=usertest',
  user => 'root',
  connector => 1
);

# Performance comparison
my $result = timethese(100, {
  normal => sub {
    $dbi->connector->txn(sub {
      $dbi->delete_all(table => 'book');
      for my $i (1 .. 100) {
        my $param = {id => $i, title => 'Perl'};
        $dbi->insert($param, table => 'book');
      }
    });
  },
  multiple => sub {
    $dbi->connector->txn(sub {
      $dbi->delete_all(table => 'book');
      my @params;
      for my $i (1 .. 100) {
        push @params, {id => $i, title => 'Perl'};
      }
      $dbi->insert(\@params, table => 'book');
    });
  },
  bulk_insert => sub {
    $dbi->connector->txn(sub {
      $dbi->delete_all(table => 'book');
      my @params;
      for my $i (1 .. 100) {
        push @params, {id => $i, title => 'Perl'};
      }
      $dbi->insert(\@params, table => 'book', bulk_insert => 1);
    });
  },
  raw => sub {
    $dbi->connector->txn(sub {
      $dbi->delete_all(table => 'book');
      my $sth;
      for my $i (1 .. 100) {
        $sth ||= $dbi->dbh->prepare('insert into book (id, title) values (?, ?)');
        my $title = 'Perl';
        $sth->execute($i, $title);
      }
    });
  }
});

cmpthese($result);
The benchmark results show that bulk_insert gives roughly five to six times the performance of a normal insert.
               Rate normal multiple  raw bulk_insert
normal       24.0/s     --     -72% -76%        -83%
multiple     86.2/s   259%       -- -16%        -40%
raw           102/s   326%      18%   --        -29%
bulk_insert   143/s   496%      66%  40%          --
If bulk inserts do not perform well, first check whether the database has enough memory available for inserts. If that memory area is too small, swapping will occur and inserts become very slow.