I recently developed a script that spawns multiple processes to import tables in parallel using mysqlimport and a `--tab`-style mysqldump export. On the development server it works very well: compared to a standard `mysql db_name < backup.sql` import, it cuts the time from around 15 minutes to 4 or 5 minutes.
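The spawn-and-wait pattern described above can be sketched as a small shell function. The dump directory, database name, and the overridable `IMPORT_CMD` variable are assumptions for illustration, not the asker's actual script:

```shell
#!/bin/sh
# Sketch of the parallel-import pattern described above.
# parallel_import DIR DB: start one mysqlimport process per .txt file
# from a `mysqldump --tab` export, then wait for all of them.
# IMPORT_CMD can be overridden (e.g. with `echo`) for a dry run.
parallel_import() {
    for f in "$1"/*.txt; do
        [ -e "$f" ] || continue                       # glob matched nothing
        ${IMPORT_CMD:-mysqlimport} --local "$2" "$f" &
    done
    wait                              # block until every import finishes
}

# Real use (hypothetical path and database name):
# parallel_import /backups/dev_db dev_db
```

Because each table lives in its own file under `--tab`, the imports are independent and can safely run concurrently.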
The problem is on our production server this script seems to be locking tables system wide. That is to say, I’m importing a backup to a completely different database but our live application tables still end up locked. A SHOW PROCESSLIST confirms that tables on our live db are indeed locked but no INSERT or UPDATE queries are running on any tables in that database.
Why is this happening? Is there a configuration variable / setting that I can adjust to prevent this lock from happening?
How to solve:
If you start mysqlimport with `--lock-tables=0`, then there will be no locks. You could also use `LOAD DATA` rather than mysqlimport.
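For example (the paths and database name are hypothetical, and the commands need a running server, so treat this as an illustrative fragment):

```shell
# Import every --tab export file without issuing LOCK TABLES
# (hypothetical paths and database name):
mysqlimport --local --lock-tables=0 dev_db /backups/dev_db/*.txt

# Equivalent per-file approach from inside the mysql client:
#   LOAD DATA LOCAL INFILE '/backups/dev_db/mytable.txt' INTO TABLE mytable;
```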
(The question is about mysqlimport, and has been answered. This answer is about the underlying goal, which I will restate.)
How to rapidly clone a single database (to a database with another name) on a single instance of MySQL?
One thought is to have `dev_db` created and empty, then do something like:

mysqldump prod_db | mysql dev_db
- Make sure the dump does (or does not) include `CREATE DATABASE` / `USE` statements, as appropriate for the target database name.
- For copying two databases, consider running them in parallel.
- For multiple tables, consider splitting up the tables across different dump|load pairs.
- But watch out for overloading the server (CPU, I/O, connections) if too many pairs run at once.
By using a pipe, the intermediate file does not need to touch the disk (at least on *nix systems).