Wednesday, April 26, 2017

MariaDB - Speeding Up Records Migration

I started testing MariaDB as my new backend, and while the switch itself is simple enough, there are some things I need to adjust in my VFP code. Among those are the field types: for logical fields I have to use TINYINT (for MySQL ODBC 5.1 or later) or BIT (for MySQL ODBC 3.51); for dates, it does not accept {} for an empty date; etc.

However, this post is not about that. It is about the speed of the migration approach. Bear in mind I am new to this (less than a week), but regardless, I think this can help those who might be using MariaDB or MySQL as a backend.

One way to connect from VFP to MariaDB is via SPT (SQL Pass-Through), with code like this:

If SQLExec(gomyapp.DbBaseConn,'INSERT INTO MyTable (MyField) VALUES (?m.lcMyValue)') <= 0

That creates a new record in the MariaDB table from VFP.
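For comparison outside VFP, here is a minimal Python sketch of the same per-record pattern, using the stdlib sqlite3 module as a stand-in backend (with MariaDB you would go through an ODBC or MySQL connector instead; the table and column names are just illustrations):

```python
import sqlite3

# Stand-in backend; the point is the one-round-trip-per-record pattern,
# analogous to calling SQLExec() once per row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyTable (MyField TEXT)")

rows = ["alpha", "beta", "gamma"]
for value in rows:
    # Parameterized, like ?m.lcMyValue in the SPT call above
    conn.execute("INSERT INTO MyTable (MyField) VALUES (?)", (value,))
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM MyTable").fetchone()[0]
print(count)  # 3
```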

But as you can see, it is done on a per-record basis. Testing the migration of one of my smaller VFP tables to MariaDB, containing only 589 records, using a SCAN...ENDSCAN approach took .5333 seconds. That means the transfer speed on my end is .000905 of a second per record. Take note that speed is also affected by the number of fields to transfer.

Since I also have to migrate another table, which contains 1,237,000 records and has more fields than the per-record SPT example above, we can safely presume, based on that .000905-of-a-second transfer rate per record, that it will take an estimated 18.67 minutes (about 18 minutes and 40 seconds) to completely migrate this bigger table. And damn, that is a very long time to wait.
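The arithmetic behind that estimate, spelled out with the measured numbers above:

```python
# 589 records took 0.5333 seconds; project that rate onto the
# 1,237,000-record table.
per_record = 0.5333 / 589                       # about 0.000905 s per record
projected_minutes = 1_237_000 * per_record / 60
print(round(per_record, 6))                     # 0.000905
print(round(projected_minutes, 2))              # 18.67
```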

So I sought another way and found out about the LOOP statement of MariaDB. But we are in VFP, so I would prefer something we are already familiar with. Still using SCAN...ENDSCAN, I decided to create a batch of records with the help of TEXT...ENDTEXT, and I was satisfied with the result. However, there is one caveat: CONCATENATION. A variable can only hold so many characters before it eventually fails. I tested transferring 80,000 records in one go and it was comparably faster. But per my tests (and depending on the number of characters in the resulting SQL INSERT INTO statement), when I tried 100K records, it failed due to concatenation.
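The batching fix can be sketched in Python (a hypothetical single-column vouchers table; the idea is simply to cap the length of any one INSERT string by emitting one multi-row statement per batch):

```python
def batched_inserts(values, batch_size=5000):
    """Yield one multi-row INSERT statement per batch of rows so that
    no single SQL string grows without bound."""
    top = "INSERT INTO vouchers (serialno) VALUES "
    for i in range(0, len(values), batch_size):
        chunk = values[i:i + batch_size]
        yield top + ",".join("('%s')" % v for v in chunk) + ";"

stmts = list(batched_inserts([str(n) for n in range(12000)]))
print(len(stmts))             # 3 statements: 5000 + 5000 + 2000 rows
print(stmts[-1].count("('"))  # 2000 rows in the last, short batch
```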

The solution: do it in batches. Here now is my preferred approach for transferring huge numbers of records. I decided to do a batch transfer of 5,000 records per SPT. The overall result: from that calculated 18.67 minutes, it was slashed down to just 6.0167 minutes. Here is the migration code I used:

Local lcSetDate, lcBatch, lcValues, lcSQL, lcTop, ldStart, lnRecords
Close Databases All
Use vouchers Shared
lnRecords = Reccount()
ldStart = Datetime()
lcSetDate = Set("Date")
Set Date To YMD
lcTop = 'Insert Into vouchers (serialno, dsent,usersentfk,customerfk,userusedfk,storeredeemfk,'+;
      'dused,dtransfer, denomfk, storagelocfk,minvfk, mtfk, qrcode) VALUES '
lcBatch = ''
lcSQL = ''
Scan
      Text To lcValues Noshow Textmerge
('<<serialno>>', '<<IIF(dsent={},[0000-00-00],dsent)>>',<<sentfk>>,<<storefk>>,<<redeemfk>>,<<storeredfk>>,
'<<IIF(dused={},[0000-00-00],dused)>>','<<IIF(dtransfer={},[0000-00-00],dtransfer)>>',
<<denomfk>>, <<storagefk>>,<<minvfk>>, <<mtfk>>, '<<barcode>>'),
      Endtext
      && Strip the CR/LF pairs TEXT adds so each batch ends with a trailing comma
      lcBatch = m.lcBatch + Chrtran(m.lcValues, Chr(13)+Chr(10), '')
      If Mod(Recno(),5000) = 0 Or Recno() >= m.lnRecords
            lcSQL = lcTop + m.lcBatch
            lcSQL = Left(m.lcSQL, Len(m.lcSQL)-1)+';'   && replace trailing comma with ;
            If SQLExec(gomyapp.DbBaseConn, m.lcSQL) > 0
                  Wait Window 'Writing up to '+Transform(Recno()) Nowait
            Endif
            lcSQL = ''
            lcBatch = ''
      Endif
Endscan

Set Date &lcSetDate
Close Databases All
Messagebox('Started: '+Ttoc(m.ldStart,2)+Chr(13)+'Ended: '+;
      TTOC(Datetime(),2)+Chr(13)+'Elapsed: '+Transform((Datetime()-m.ldStart)/60))
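For readers outside VFP, the same batch approach can be sketched in Python against the stdlib sqlite3 module (a stand-in backend; the table and column are simplified, and real code should escape or parameterize values rather than splice strings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vouchers (serialno TEXT)")

records = ["SN%07d" % n for n in range(12345)]
BATCH = 5000
for i in range(0, len(records), BATCH):
    # One multi-row INSERT per 5,000-record batch, like one SPT call above
    body = ",".join("('%s')" % sn for sn in records[i:i + BATCH])
    conn.execute("INSERT INTO vouchers (serialno) VALUES " + body + ";")
conn.commit()

total = conn.execute("SELECT COUNT(*) FROM vouchers").fetchone()[0]
print(total)  # 12345
```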

Well then, you can expect some more tricks here in the future on VFP to MariaDB.
