I wanted to change my MySQL databases to use innodb_file_per_table. The problem is that even if you add this setting to your my.cnf, it only affects newly created tables.
In order to convert the existing ones I needed to run a bunch of ALTER TABLE statements with ENGINE=InnoDB. Because I have heaps of databases and even more tables, this was proving to be a huge time sink, and my system was only using 1/16 of its resources since MySQL only uses one thread per query.
To use multi-thread power I needed to be able to run multiple queries at the same time, but not too many, otherwise I'd hit the maximum connection limit or the maximum allowed number of forks.
Here’s the small script I wrote to overcome this problem:
# Read lines from ~/tmp/convert_tables.txt
while read command; do
    # Echo which command we are running (I love stdout)
    echo "$command"
    # Run the command in the background
    bash -c "$command" &
    # This is where the magic happens:
    # we don't want to have more than 10 jobs running,
    # so with a simple while loop counting the running jobs we
    # can put the script "on hold" until other jobs finish
    while [ "$(jobs -r | wc -l)" -gt 10 ]; do
        sleep 1
    done
done < ~/tmp/convert_tables.txt
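As a side note, if your xargs supports the -P flag (GNU and BSD versions do), it can cap the number of parallel workers for you, which is a shorter way to get the same throttling; a sketch, assuming the same command file:

```shell
# Run up to 10 commands from the file in parallel; xargs blocks
# until a worker slot frees up, just like the while loop above.
xargs -P 10 -I{} bash -c '{}' < ~/tmp/convert_tables.txt
```

The trade-off is that you lose the per-command echo, though you could add it inside the bash -c string.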
Inside convert_tables.txt I just have the commands I want to run; here's an example set:
mysql -u root -e "alter table db1.keyvalue ENGINE=InnoDB;"
mysql -u root -e "alter table db2.acl_role ENGINE=InnoDB;"
mysql -u root -e "alter table db2.acl_role_has_privilege ENGINE=InnoDB;"
...
Although this post is about the threaded "jobs", if you're wondering how to generate the ALTER TABLE commands, I followed this example: innodb_file_per_table – Converting to Per Table Data for InnoDB
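If you'd rather generate the command file yourself, a query against information_schema along these lines should work (a hedged sketch; the user, output path, and schema filter are assumptions, so adjust them for your setup):

```sql
-- Emit one shell command per InnoDB table, skipping the system schemas.
-- Run it with something like:
--   mysql -u root -N -e "<query>" > ~/tmp/convert_tables.txt
SELECT CONCAT('mysql -u root -e "alter table ',
              table_schema, '.', table_name, ' ENGINE=InnoDB;"')
FROM information_schema.tables
WHERE engine = 'InnoDB'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');
```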