Import multiple .sql dump files into a MySQL database from the shell

I have a directory with a bunch of .sql files that are mysqldumps of each database on my server.

e.g.

database1-2011-01-15.sql
database2-2011-01-15.sql
...

There are quite a lot of them actually.

I need to create a shell script (or probably a one-liner) that will import each database.

I’m running on a Linux Debian machine.

I’m thinking there is some way to pipe the results of an ls into some find command or something..

any help and education is much appreciated.

EDIT

So ultimately I want to automatically import one file at a time into the database.

E.g. if I did it manually on one it would be:

mysql -u root -ppassword < database1-2011-01-15.sql

How to solve:

Method 1

cat *.sql | mysql? Do you need them in any specific order?

If you have too many to handle this way, then try something like:

find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch

This also gets around some problems with passing script input through a pipeline, though you shouldn’t have any problems with pipeline processing under Linux. The nice thing about this approach is that the mysql utility reads in each file instead of having it read from stdin.
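
For example, a hedged sketch of the same idea with an explicit sort order and login options (the sort step and the -u root / interactive -p prompt are assumptions, not part of the original answer):

# Feed every dump to one mysql session as "source" commands; sort fixes the import order.
find . -name '*.sql' | sort | awk '{ print "source",$0 }' | mysql --batch -u root -p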

Method 2

A one-liner that reads in all .sql files and imports them:

for SQL in *.sql; do DB=${SQL/\.sql/}; echo "importing $DB"; mysql "$DB" < "$SQL"; done

The only trick is the bash substring replacement to strip out the .sql to get the database name.
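
A slightly hardened sketch of the same loop, assuming you also want each database created if it doesn’t exist yet (the CREATE DATABASE step and the root credentials are additions of mine, not part of the original one-liner):

for SQL in *.sql; do
  DB=${SQL%.sql}          # strips only a trailing .sql to get the database name
  echo "importing $DB"
  # repeated -p prompts can be avoided by putting credentials in ~/.my.cnf
  mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS \`$DB\`"
  mysql -u root -p "$DB" < "$SQL"
done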

Method 3

There is a superb little script at http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script which will take a huge mysqldump file and split it into a single file for each table. Then you can run this very simple script to load the database from those files:

for i in *.sql
do
  echo "file=$i"
  mysql -u admin_privileged_user --password=whatever your_database_here < "$i"
done

mydumpsplitter even works on .gz files, but it is much, much slower than gunzipping first, then running it on the uncompressed file.

I say huge, but I guess everything is relative. It took about 6-8 minutes to split a 2000-table, 200MB dump file for me.
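
If the per-table dumps are gzipped, a sketch of the “gunzip first, then load” approach mentioned above might look like this (the user, password and database name are the same placeholders as in the loop above):

# Decompress each .sql.gz once, then load the resulting .sql files.
for gz in *.sql.gz; do
  gunzip -c "$gz" > "${gz%.gz}"
done
for i in *.sql; do
  echo "file=$i"
  mysql -u admin_privileged_user --password=whatever your_database_here < "$i"
done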

Method 4

I don’t remember the exact mysql syntax, but it will be something like this:

find . -name '*.sql' | xargs mysql ...
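
Note that the mysql client takes a database name, not a dump file, as its positional argument, so each dump has to be redirected on stdin. A sketch of a working xargs variant (credentials are assumed to come from ~/.my.cnf here):

# Run one mysql invocation per file, feeding each dump on stdin.
find . -name '*.sql' -print0 | xargs -0 -I{} sh -c 'echo "importing $1"; mysql < "$1"' _ {}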

Method 5

I created a script some time ago to do precisely this, which I called (completely uncreatively) “myload”. It loads SQL files into MySQL.

Here it is on GitHub

It’s simple and straightforward; it allows you to specify mysql connection parameters, and will decompress gzipped SQL files on the fly. It assumes you have a file per database, and the base of the filename is the desired database name.

So:

myload foo.sql bar.sql.gz

This will create the databases “foo” and “bar” (if they don’t already exist) and import the SQL file into each.

For the other side of the process, I wrote this script (mydumpall) which creates the corresponding sql (or sql.gz) files for each database (or some subset specified either by name or regex).
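
I haven’t copied the script itself here, but a minimal sketch of the behaviour described above might look like the following (the ~/.my.cnf credentials and the exact option handling are assumptions of mine, not myload’s real interface):

#!/bin/sh
# Derive the database name from the file's base name, gunzip .sql.gz on the fly,
# create the database if needed, then import. Credentials come from ~/.my.cnf here.
for f in "$@"; do
  case "$f" in
    *.sql.gz) db=$(basename "$f" .sql.gz); reader="gunzip -c" ;;
    *.sql)    db=$(basename "$f" .sql);    reader="cat" ;;
    *)        echo "skipping $f" >&2; continue ;;
  esac
  mysql -e "CREATE DATABASE IF NOT EXISTS \`$db\`"
  $reader "$f" | mysql "$db"
done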

All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0 and CC BY-SA 4.0.
