performance - Why is MySQL InnoDB insert so slow?

I am using large random numbers as keys (coming in from another system). Inserts and updates on fairly small tables (a few million rows) are taking much longer than I think is reasonable.

I have distilled a very simple test to illustrate the problem. In the test I've made the table as simple as possible; my real code does not have such a simple layout and has relations and additional indices. However, even this simpler setup shows equivalent performance.

Here are the results:

creating the MyISAM table took 0.000 seconds
creating 1024000 rows of test data took 1.243 seconds
inserting the test data took 6.335 seconds
selecting 1023742 rows of test data took 1.435 seconds
fetching 1023742 batches of test data took 0.037 seconds
dropping the table took 0.089 seconds
creating the InnoDB table took 0.276 seconds
creating 1024000 rows of test data took 1.165 seconds
inserting the test data took 3433.268 seconds
selecting 1023748 rows of test data took 4.220 seconds
fetching 1023748 batches of test data took 0.037 seconds
dropping the table took 0.288 seconds

Inserting 1M rows into MyISAM takes 6 seconds; into InnoDB takes 3433 seconds!

What am I doing wrong? What is misconfigured? (MySQL is a normal Ubuntu installation with defaults)

Here's the test code:

import sys, time, random
import MySQLdb as db

# usage: python script db_username db_password database_name

# connect and grab a cursor; note this rebinds the name `db` from the module to the cursor
db = db.connect(host="127.0.0.1",port=3306,user=sys.argv[1],passwd=sys.argv[2],db=sys.argv[3]).cursor()

def test(engine):

    start = time.time() # fine for this purpose
    db.execute("""
CREATE TEMPORARY TABLE Testing123 (
k INTEGER PRIMARY KEY NOT NULL,
v VARCHAR(255) NOT NULL
) ENGINE=%s;"""%engine)
    duration = time.time()-start
    print "creating the %s table took %0.3f seconds"%(engine,duration)

    start = time.time()
    # 1,024,000 rows in 100 chunks of 10,240; each chunk is a flat [k,v,k,v,...] list
    data = [[(str(random.getrandbits(48)) if a&1 else int(random.getrandbits(31))) for a in xrange(10*1024*2)] for b in xrange(100)]
    duration = time.time()-start
    print "creating %d rows of test data took %0.3f seconds"%(sum(len(rows)/2 for rows in data),duration)

    sql = "REPLACE INTO Testing123 (k,v) VALUES %s;"%("(%s,%s),"*(10*1024))[:-1]
    start = time.time()
    for rows in data:
        db.execute(sql,rows)
    duration = time.time()-start
    print "inserting the test data took %0.3f seconds"%duration

    # execute the query
    start = time.time()
    query = db.execute("SELECT k,v FROM Testing123;")
    duration = time.time()-start
    print "selecting %d rows of test data took %0.3f seconds"%(query,duration)

    # get the rows in chunks of 10K
    rows = 0
    start = time.time()
    while query:
        batch = min(query,10*1024)
        query -= batch
        rows += len(db.fetchmany(batch))
    duration = time.time()-start
    print "fetching %d batches of test data took %0.3f seconds"%(rows,duration)

    # drop the table
    start = time.time()
    db.execute("DROP TABLE Testing123;")
    duration = time.time()-start
    print "dropping the table took %0.3f seconds"%duration


test("MyISAM")
test("InnoDB")


1 Answer


InnoDB has transaction support. Since you're not using explicit transactions, InnoDB has to commit after every statement ("performs a log flush to disk for every insert").

Execute this command before your loop:

START TRANSACTION

and this one after your loop:

COMMIT
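
As a minimal sketch (assuming the same MySQLdb cursor, which the question's script names `db`, and the same per-chunk `sql` and `data`), the insert loop would become:

    # wrap all the chunked REPLACE statements in one explicit transaction so
    # InnoDB flushes its log to disk once at COMMIT instead of once per statement
    start = time.time()
    db.execute("START TRANSACTION;")
    for rows in data:
        db.execute(sql,rows)
    db.execute("COMMIT;")
    duration = time.time()-start
    print "inserting the test data took %0.3f seconds"%duration

The REPLACE statements themselves are unchanged; only the commit (and its log flush to disk) moves from once per statement to once at the end of the batch.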
