high performance mysql

MySQL performance tips from around the web

Gospel: use InnoDB, never MyISAM

It seems everybody on StackExchange is singing from the same gospel: “[How can I] prevent queries from waiting for table level lock?” Answer: use InnoDB. The major advantages of InnoDB over MyISAM. “Even in a read-intensive system, just one DELETE or UPDATE statement will quickly nullify whatever benefits MyISAM has.” […] » about 400 words
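
If you’re wondering which of your own tables are still on MyISAM, a quick check and conversion look roughly like this (wp_posts is just an example table name, and the ALTER will rebuild the whole table while it runs):

`

-- See which storage engine each table in the current database uses
SHOW TABLE STATUS;

-- Convert a MyISAM table to InnoDB to get row-level locking
-- (wp_posts is only an example; expect a full table rebuild)
ALTER TABLE wp_posts ENGINE = InnoDB;

`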

Speeding up MySQL joins on tables with TEXT columns, maybe

The thing about WordPress’ DB schema is that TEXT and VARCHAR content is mixed in the posts table (to say nothing of the frustrations of DATETIME columns). That’s not such a problem for a blog with a few hundred posts, but it’s a different matter when you have a few hundred thousand posts. And it wouldn’t even […] » about 500 words
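
One common workaround, sketched here with stock WordPress table names as placeholders, is to resolve the join against narrow, indexed columns first and only then fetch the wide TEXT columns for the few rows you actually need, so the big columns never get dragged through a temporary table:

`

-- First pass: find matching IDs using only narrow, indexed columns
SELECT p.ID
  FROM wp_posts AS p
  JOIN wp_postmeta AS m ON m.post_id = p.ID
 WHERE m.meta_key = 'some_key'
 ORDER BY p.post_date DESC
 LIMIT 10;

-- Second pass: pull the wide TEXT/VARCHAR columns for just those rows
SELECT ID, post_title, post_content
  FROM wp_posts
 WHERE ID IN (1, 2, 3);  -- the IDs returned by the first query

`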

MySQL Error 28: Temp Tables And Running Out of Disk Space

Bam: MySQL error 28, and suddenly my queries came to a stop. Error 28 is about disk space, usually the disk space for temp tables. The first thing to do is figure out what filesystem(s) the tables are on. SHOW VARIABLES LIKE "%dir%" will return a number of results, but the ones that matter are […] » about 300 words
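
For reference, the checks look something like this (the status counters show how often temp tables are spilling to disk):

`

-- Where MySQL keeps its data, temp tables, and logs
SHOW VARIABLES LIKE '%dir%';

-- How many temp tables have been created, and how many went to disk
SHOW STATUS LIKE 'Created_tmp%';

`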

MySQL Fulltext Tips

Peter Gulutzan, author of SQL Performance Tuning, writes in The Full-Text Stuff That We Didn’t Put In The Manual about the particulars of word boundaries, index structure, boolean searching, exact phrase searching, and stopwords, as well as offering a few articles for further reading (Ian Gilfillan’s “Using Fulltext Index in MySQL”, Sergei Golubchik’s “MySQL Fulltext Search”, Joe Stump’s “MySQL FULLTEXT Searching”). It’s one of a number of articles in the MySQL Tech Resources collection.
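
For anyone who hasn’t used MySQL fulltext before, it comes down to a FULLTEXT index plus MATCH … AGAINST; here’s a rough sketch using WordPress-ish table and column names as stand-ins:

`

-- Fulltext index on a MyISAM table (table and column names are just examples)
ALTER TABLE wp_posts ADD FULLTEXT (post_title, post_content);

-- Natural-language search
SELECT ID, post_title
  FROM wp_posts
 WHERE MATCH (post_title, post_content) AGAINST ('grizzly bears');

-- Boolean mode: require a word, exclude a word, match an exact phrase
SELECT ID, post_title
  FROM wp_posts
 WHERE MATCH (post_title, post_content)
       AGAINST ('+grizzly -zoo "bear lovers"' IN BOOLEAN MODE);

`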

WordPress Baseline Changes To Support WPopac

I’ve whittled things down to the point where the only baseline change from WordPress 2.0.2 is in the next_posts_link function of the wp-includes/template-functions-links.php file. The change is necessary because WPopac rewrites the SQL search queries in a way that’s incompatible with a piece of this function, but necessary for performance reasons.

Here’s how my version reads:

`

function next_posts_link($label='Next Page »', $max_page=0) {
	global $paged, $result, $request, $posts_per_page, $wpdb, $max_num_pages;
	if ( !$max_page ) {
			if ( isset($max_num_pages) ) {
				$max_page = $max_num_pages;
			} else {
				preg_match('#FROM\s(.*)\sGROUP BY#siU', $request, $matches);
				$fromwhere = isset($matches[1]) ? $matches[1] : '';

				// added April 5 2006 by Casey Bisson to support WPopac
				// necessary because the preg_match above fails with some queries
				if(!$fromwhere)
					$fromwhere = $wpdb->posts;
				
				// changed April 5 2006 by Casey Bisson to speed the query by eliminating
				// the slow DISTINCT clause
				//$numposts = $wpdb->get_var("SELECT COUNT(DISTINCT ID) FROM $fromwhere");
				$numposts = $wpdb->get_var("SELECT COUNT(*) FROM $fromwhere");
				$max_page = $max_num_pages = ceil($numposts / $posts_per_page);
			}
	}
	if ( !$paged )
		$paged = 1;
	$nextpage = intval($paged) + 1;
	if ( (! is_single()) && (empty($paged) || $nextpage <= $max_page) ) {
		echo '<a href="';
		next_posts($max_page);
		echo '">'. preg_replace('/&([^#])(?![a-z]{1,8};)/', '&#038;$1', $label) .'</a>';
	}
}

`

MySQL’s Slow Query Log

Zach suggested it last week, but it’s only now that I’ve gotten around to setting up MySQL’s slow query log.

It’s easy enough; you’ve just got to put a couple lines like this in your my.cnf (which is in /etc on my server):

log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time = 10

This should get most people running, but this story in Database Journal offers a few more details. Potentially more useful is this guide to query and index optimization (though it’s probably a little out of date).
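
Once queries start landing in the log, the usual next move is to run EXPLAIN on them to see whether they’re using an index at all; a quick sketch (the query itself is just an example):

`

-- EXPLAIN shows the join order, the indexes MySQL chooses (if any),
-- and roughly how many rows it expects to examine
EXPLAIN SELECT ID, post_title
          FROM wp_posts
         WHERE post_status = 'publish'
         ORDER BY post_date DESC
         LIMIT 10;

`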

Performance Optimization

A couple notes from the past few days of tweaks and fixes:

  • Hyper-threading has a huge effect on LAMP performance.
     
  • From now on, I’ll have bad dreams about running MySQL without Query Caching in the way that I used to have nightmares about going to school wearing only my underwear. The difference is that big. (There’s a quick way to check your query cache settings below this list.)
     
  • WordPress rocks, but it has some queries that will kill large databases. Fixing ’em means diverging from baseline WordPress code, but it’s worth it.
     
  • Being highly ranked for searches related to bears and bear lovers on the weekend that Grizzly Man airs on TV is both good and bad: it doubled my previous high for daily page-loads, but killed my server and taught me some lessons about sloppy coding.
     
  • The support techs at Lunarpages continue to pretty much rock.
     
  • Having friends who cut their teeth on high-performance PHP/MySQL and are willing to spend the night workshopping it with you is indispensable.
     
  • It’s hard to beat the calming beauty of driving home through a snowfall on back roads at 2 AM carried by your life’s soundtrack.
     
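For what it’s worth, checking whether the query cache is actually on, and whether it’s earning its keep, looks something like this:

`

-- Is the query cache enabled, and how much memory does it have?
SHOW VARIABLES LIKE 'query_cache%';

-- Hits, inserts, and free memory; compare Qcache_hits to Com_select
SHOW STATUS LIKE 'Qcache%';

`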

Learning: MySQL Optimization

I have over 1000 posts here at MaisonBisson, but even so, the table with all those posts is under 3MB. Now I’ve got a project with 150,000 posts — yes, 150,000 posts! — and the table is about 500MB. An associated table, structured sort of like WP’s postmeta, has over 1.5 million records and weighs in at over 100MB (not including the 150MB of indexes).

Up to now I’ve been a “throw more hardware at it” sort of guy — and in a server with only 1GB of RAM, that’s probably the best solution — but I also think it’s time I learned some MySQL optimization tricks. Zach‘s been pushing me to get mytop installed for quite a while, and I finally got around to it.

mytop is a console-based (non-gui) tool for monitoring the threads and overall performance of a MySQL 3.22.x, 3.23.x, and 4.x server. It runs on most Unix systems (including Mac OS X) which have Perl, DBI, and Term::ReadKey installed. And with Term::ANSIColor installed you even get color. If you install Time::HiRes, you’ll get good real-time queries/second stats.
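
Roughly speaking, mytop is polling the same information you can get by hand from the MySQL console, so even before it’s installed you can watch the server with:

`

-- What every connection is doing right now (mytop's thread view)
SHOW FULL PROCESSLIST;

-- Running counters: questions, slow queries, open tables, threads, etc.
SHOW STATUS;

`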

Introducing bsuite_speedcache

I wrote bsuite_speedcache to reduce the number of database queries I was executing per page load. By implementing it on some of the content in my sidebar, I dropped 35 queries for each cache hit. That might not seem like much, but it should average about 525 queries per minute that my host server […] » about 300 words