Sometime around 10 PM Friday, the MySQL server at my hosting provider took a walk. The hosting sysop blamed it on my site and disabled the database that serves it by making the directory containing the MySQL files unreadable. MySQL didn’t seem to handle that condition well, and since MaisonBisson kept piling up queries looking for the content in the DB, things continued downhill.
My involvement started around 11 PM Friday night (yes, I’m that dorky). Not knowing what was going on, I opened a support ticket with the host. The sysop replied fairly quickly with some MySQL logs that showed nothing, then didn’t answer any further questions on the ticket until about 11 AM Saturday. That answer was vague; I replied quickly and again got nothing back.
Then, around 2 PM Saturday, the host’s abuse department sent me a message saying they’d disabled my account (it had actually been disabled for 15 hours or more by then) because the site was causing problems with the DB server. The scripts on the site were still looking for the missing database content, MySQL doesn’t gracefully handle having its DB files made inaccessible on the filesystem, and I was still getting traffic (though the traffic wasn’t getting content), so things weren’t going well.
Again my replies to support went unanswered, and it was about 10 AM Sunday before I got another response from the hosting provider. Even then it took until 1 PM before I could get the sysop to restore my access to the site content (on the condition that no public access be allowed until they approved it).
Around 4 PM, after about 42 hours of downtime, MaisonBisson went live with a new provider.
Things I’ll admit:
- I haven’t paid serious attention to optimizing my code for heavy traffic (but I don’t really know what “heavy traffic” is)
- My site was getting a lot of page loads Friday (though not a record number), and weekend nights can be especially busy (look at the list of popular incoming searches in the sidebar during a weekend and you’ll understand)
- I don’t know how many queries per second I was issuing at the time of the DB crash (or how many were running overall), or any other detailed statistics about the CPU load I was generating
- HostGator sucks.
Lessons learned:
- Keep backups up to date (I actually had a good DB backup from early Friday, but I didn’t have filesystem backup)
- Perhaps I oughtta look at some ways to optimize all this.
- HostGator sucks.
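For what it’s worth, the backup half of that lesson is cheap to automate. A crontab fragment along these lines (hypothetical paths, database name, and credentials; I’m assuming shell access with `mysqldump` and `tar` available) would have covered both the DB dump and the filesystem backup I was missing:

```shell
# Hypothetical crontab entries -- adjust paths, credentials, and schedule.
# Note: % must be escaped as \% inside crontab lines.

# Nightly MySQL dump at 3:15 AM, gzipped and dated so older copies are kept.
15 3 * * * mysqldump --user=backup --password='...' maisonbisson | gzip > /home/backups/db-$(date +\%F).sql.gz

# Weekly filesystem backup of the web root, 4:00 AM Sunday.
0 4 * * 0 tar -czf /home/backups/site-$(date +\%F).tar.gz /home/public_html
```

Of course, backups sitting on the same host that just locked me out aren’t worth much; copying them off-site (even just scp’ing them to a home machine) is the part that actually saves you.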
Tips For Hosting Providers:
- Don’t treat customers like they want to crash your servers.
- Remember that customers can’t see the system performance stats you can. Involve them early — as problems are developing, but before things crash — so they can start working on solutions before anybody loses uptime.
As late as sometime mid-morning Saturday, HostGator could have kept me as a customer — and I was still considering upgrading my hosting account! — but customer service that was both slow and indifferent soured me.