Earlier today, I noticed that the robservatory database was massive—over 1GB in size, which it shouldn’t be anywhere near (it’s usually around 100MB). This put it over the database size limit at my host, so I couldn’t add any new content (nor could visitors comment, create accounts, etc.)
My host offers phpMyAdmin access, so I connected to the database to try to figure out what was going on. Using phpMyAdmin, you can browse tables, perform SQL commands, and export and import data—it’s a must-have tool for managing remote databases.
The first challenge was to figure out which table was causing the problem. To help with that, I wanted to see which WordPress tables were the largest, as that should be a good hint. A web search found lots of possible solutions, but I liked this one the best. Within that thread, I slightly modified one of the queries to do what I wanted:
SELECT
    table_name AS "Table",
    ROUND(((data_length + index_length) / 1024 / 1024), 2) AS "Size (MB)"
FROM information_schema.TABLES
WHERE table_schema = 'YOUR_DATABASE_NAME_HERE'
ORDER BY (data_length + index_length) DESC;
The output will look something like the image at right—unfortunately, I didn’t screen grab the run that identified the problem; I snapped this one after the cleanup. (To use the above script, just replace YOUR_DATABASE_NAME_HERE with the actual name of your database.)
So what was the problem? In my case, it was 12,000,000+ rows (yes, 12 million!) in the wp_commentmeta table, all courtesy of a ROT13 plug-in I used in one post (and enabled in comments). Whoops!
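If you hit a similar situation, a quick way to see what’s filling a bloated table is to group its rows by meta key. This is a sketch of that idea, not the exact query I ran; it assumes the standard WordPress wp_commentmeta schema:

```sql
-- Count wp_commentmeta rows per meta_key, largest offenders first
SELECT meta_key, COUNT(*) AS row_count
FROM wp_commentmeta
GROUP BY meta_key
ORDER BY row_count DESC;
```

Whatever key tops that list is your prime suspect for a runaway plug-in.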
I disabled the plug-in, then set about cleaning up the table, using this SQL command:
DELETE FROM wp_commentmeta WHERE meta_key LIKE '%is_rot13ed%';
This deleted all but 6,000 rows from the table. Unfortunately, that didn’t quite fix the problem, as the table still reserved all the space the deleted rows had occupied. To fix that, I ended up exporting the table, then dropping it, waiting 30 seconds or so, then importing the data. Whew; problem resolved.
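For what it’s worth, MySQL can also rebuild a table in place to reclaim space freed by deletes. I went the export/drop/import route, so treat this as an untested alternative sketch:

```sql
-- Rebuild wp_commentmeta and release the space left behind by deleted rows
OPTIMIZE TABLE wp_commentmeta;
```

On InnoDB tables this maps to a table rebuild, which is why it can take a while on a large table.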
And now, on to find a replacement for that ROT13 plug-in.