
Grooveshark is Hiring (Part 1 – PHP edition)

16 May

Grooveshark is looking for talented web developers. We are looking to fill a wide variety of web dev positions. The next few posts I make will be job descriptions for each of the major positions we are looking to fill. For part 1, I’m listing our backend PHP position. If your skillset is more front-end leaning but you feel like you would be a good fit for Grooveshark, by all means apply now rather than waiting for me to post the job description. :)

 

Grooveshark is seeking awesome PHP developers.

Must be willing to relocate to Gainesville, FL and legally work in the US. Relocation assistance is available.
Responsibilities:
Maintaining existing backend code & APIs, creating new features and improving existing ones
Writing good, clean, fast, secure code on tight deadlines
Identifying and eliminating bottlenecks
Writing and optimizing queries for high-concurrency workloads in SQL, MongoDB, memcached, etc
Identifying and implementing new technologies and strategies to help us scale to the next level

Desired Qualities:
Enjoy writing high quality, easy to read, self-documenting code
A passion for learning about new technologies and pushing yourself
Attention to detail
A high LOC/bug ratio
Able to follow coding standards
Good written and verbal communication skills
Well versed in best practices & security concerns for web development
Ability to work independently and on teams, with little guidance and with occasional micromanagement
More pragmatic than idealistic

Experience:
Experience developing on the LAMP stack (able to set up a LAMP install with multiple vhosts on your own)
Extensive experience with PHP
Extensive experience with SQL
Some experience with JavaScript, HTML & CSS, though you won’t be required to write it
Some experience with lower level languages such as C/C++
Experience with version control software (especially dvcs)

Bonus points for:
Well read in Software Engineering practices
Experience with a SQL database and optimizing queries for high concurrency on large data sets.
Experience with noSQL databases like MongoDB, Redis, memcached.
Experience with Nginx
Experience creating APIs
Knowledge of Linux internals
Experience working on large scale systems with high volume of traffic
Useful contributions to the open source community
Fluency in lots of different programming languages
Experience with browser compatibility weirdness
Experience with Smarty or other templating systems
BS or higher in Computer Science or related field
Experience with Gearman, RabbitMQ, ActiveMQ or some other job distribution/message passing system for distributing work
A passion for music and a desire to revolutionize the industry

Who we don’t want:
Architecture astronauts
Trolls
Complete n00bs (apply for internship or enroll in Grooveshark University instead!)
People who want to work a 9-5 job
People who would rather pretend to know everything than actually learn
Religious adherents to The Right Way To Do Software Development
Anyone who loves SOAP

Send us your:
Resume
Code samples you love
Code samples you hate
Favorite reading materials on Software Engineering (e.g. books, blogs)
Tell us when you would use a framework, and when you would avoid using a framework
ORM: Pros, cons?
Unit testing: pros, cons?
Magic: pros, cons?
When/why would you denormalize?
Thoughts on SOAP vs REST

If you want a job: jay at groovesharkdotcom

If you want an internship: [email protected]

 

How to Jailbreak Chrome and Install Grooveshark

15 Apr

As you may already know, Grooveshark’s Android app was recently pulled from the Android Market (http://news.cnet.com/8301-31001_3-20051156-261.htm) due to either label pressure, or concerns from its competing music service that is soon to be launched (or both). That event has been covered in great detail all over the net, so I won’t rehash it here; that’s not what this blog post is about.

 

Yesterday we discovered that Grooveshark has also been pulled from the Chrome Web Store. Fortunately, if you already installed Grooveshark, Google does not appear to be revoking the app from users’ Chrome home pages.

Users who had not yet installed Grooveshark are out of luck as far as the web store is concerned. However, we have discovered that it is possible to “jail break” your Chrome installation to get the full Grooveshark + Chrome experience.

Step 1: Visit http://grooveshark.com in your Chrome browser.

Step 2: Close the Grooveshark tab.

Step 3: Open the New Tab tab by typing CTRL+T (CMD+T on OS X), or by clicking the small + icon next to the rightmost tab.

Step 4: If Grooveshark appears under “Most visited,” simply drag it to the top-left position, hover over the site preview for a second, and click on the pin icon that appears. Skip to step 7.

Step 5: If Grooveshark does not appear under “Most visited,” scroll down to “Recently closed,” and click on Grooveshark.

Step 6: Proceed to step 1.

Step 7: Collapse the Apps section, or remove it entirely.

It may take several thousand iterations of this loop to bring Grooveshark into your Most visited section depending on how obsessively you are checking the 8 websites that appear there for you, but we feel this added effort on your part is well worth it. When you are done Chrome should look something like this:

P.S.: Grooveshark was ranked #8, right behind YouTube, an interesting juxtaposition since these services are both powered by user-contributed content, and are both legal, but one is owned by Google and the other is not.

P.P.S.: For those of you who are really bad at detecting sarcasm, I am being facetious. As with every blog post I make here, I also am not speaking for Grooveshark in any sort of official capacity.

 
 

How Grooveshark Uses Gearman

27 Mar

At Grooveshark, Gearman is an integral part of our backend technology stack.

30 Second Introduction to Gearman

  • Gearman is a simple, fast job queuing server.
  • Gearman is an anagram for manager. Gearman does not do any work itself, it just distributes jobs to workers.
  • Clients, workers and servers can all be on different boxes.
  • Jobs can be synchronous or asynchronous.
  • Jobs can have priorities.

To learn more about gearman, visit gearman.org, and peruse the presentations. I got started with this intro (or one much like it), but there may be better presentations available now; I haven’t checked.
The rest of this article will assume that you understand the basics of Gearman, including the terminology, and that you are looking for some use cases involving a real, live, high-traffic website.
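
As a quick orientation, here is a minimal sketch of a client and a worker using the pecl gearman extension. The job name and payload are made up for illustration; the pattern of firing an asynchronous job over localhost is the one we lean on most.

<?php
//client.php (runs on a FEN): fire an asynchronous job and get back to the request.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730); //job server running on this FEN

$client->doBackground('log_song_play', json_encode(array(
    'songID' => 42,
    'userID' => 1337,
)));
?>

<?php
//worker.php (separate process, possibly on another box): grab jobs and process them.
$worker = new GearmanWorker();
$worker->addServer('fen1.example.com', 4730); //hypothetical FEN hostname

function logSongPlay(GearmanJob $job) {
    $data = json_decode($job->workload(), true);
    //...hand the data off to whatever stores it...
}
$worker->addFunction('log_song_play', 'logSongPlay');

while ($worker->work()); //process jobs one at a time, forever
?>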

Architecture

With some exceptions, our architecture with respect to Gearman is a bit unconventional. In a typical deployment, you would have a large set of Apache + PHP servers (at Grooveshark we call these “front end nodes” or FENs for short) communicating with a smaller set of Gearman job servers, and a set of workers that are each connected to all of the job servers. In our setup, we have a gearman job server running on each FEN, and jobs are submitted over localhost. That’s because most of the jobs we submit are asynchronous, and we want the latency to be as low as possible so the FENs can fire off a job and get back to handling the user’s request.

We then have workers running on other boxes which connect to the gearman servers on the FENs and process the jobs. Where the workers run depends on their purpose; for example, workers that insert data into a data store usually live on the same box as the data store, which again cuts down on network latency. This architecture means that in general, each FEN is isolated from the rest of the FENs, and Gearman servers are not another potential point of failure or even slowdowns. The only way a Gearman server is unavailable is if the FEN itself is out of commission. The only way a Gearman server is running slow is if the whole FEN is running slow.

Rate Limiting

One of the things that is really neat about this gearman architecture, especially when used asynchronously, is that jobs that need to happen eventually but not necessarily immediately can be easily rate limited by simply controlling the number of workers that are running. For example, we recently migrated Playlists from MySQL to MongoDB. Because many playlists have been abandoned over the years, we didn’t want to just blindly import all playlists into mongo. Instead we import them from MySQL as they are accessed. Once the data is in MongoDB, it is no longer needed in MySQL, so we would like to be able to delete that data to free up some memory. Deleting that data is by no means an urgent task, and we know that deletes of this nature cannot run in parallel; running more than one at a time just results in extra lock contention.

Our solution is to insert a job into the gearman queue to delete a given playlist from MySQL. We then have a single worker connecting to all of the FENs asking for playlist deletion jobs and then running the deletes one at a time from the MySQL server. Not surprisingly, when we flipped the switch, deletion jobs came in much faster than they could be processed; at the peak we had a backlog of 800,000 deletion jobs waiting to be processed, and it took us about 2.5 weeks to get that number down to zero. During that time we had no DB hiccups, and server load was kept low.
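
The worker side of that setup looks roughly like the sketch below. Hostnames, the job name, and the schema are all hypothetical; the point is that because only one worker process registers the function, the deletes are naturally serialized.

<?php
//Single deletion worker: registers with the Gearman server on every FEN
//and runs the MySQL deletes one job at a time. Names are hypothetical.
$worker = new GearmanWorker();
foreach (array('fen1', 'fen2', 'fen3') as $fen) {
    $worker->addServer($fen, 4730);
}

$db = new PDO('mysql:host=localhost;dbname=grooveshark', 'user', 'pass');
$deleteStmt = $db->prepare('DELETE FROM Playlists WHERE PlaylistID = ?');

function deletePlaylist(GearmanJob $job) {
    global $deleteStmt;
    $deleteStmt->execute(array((int)$job->workload()));
}
$worker->addFunction('delete_playlist', 'deletePlaylist');

//One process, one job at a time: the backlog drains only as fast as MySQL allows.
while ($worker->work());
?>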

Data Logging

We have certain high volume data that must be logged, such as song plays for accounting purposes, and searches performed so we can make our search algorithm better. We need to be able to log this high volume data in real time, without affecting the responsiveness of the site. Logging jobs are submitted asynchronously to Gearman over localhost. On our hadoop cluster, we have multiple workers per FEN collecting and processing jobs as quickly as possible. Each worker only connects to one FEN — in fact, each FEN has about 12 workers just for processing logging jobs. For a more in depth explanation for why we went with this setup, see lessons learned.

Backend API

We have some disjointed systems written in various languages that need to be able to interface with our core library, which is written in PHP. We considered making a simple API over HTTP much like the one that powers the website, but decided that it was silly to pay the cost of all the overhead of HTTP for an internal API. Instead, a set of PHP workers handle the incoming messages from these systems and respond accordingly. This also provides some level of rate limiting or control over how parallelized we want the processing to be. If a well-meaning junior developer writes some crazy piece of software with 2048 processes all trying to look up song information at once, we can rest assured that the database won’t actually be swamped with that much concurrency, because at most it will be limited to the number of workers that we have running.
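
A rough sketch of what such an internal API looks like on both sides, using a synchronous job this time (the function name and the Song lookup are invented for illustration):

<?php
//worker.php: exposes a piece of the PHP core library to other systems.
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);

function getSongInfo(GearmanJob $job) {
    $songID = (int)$job->workload();
    $song = Song::getByID($songID); //hypothetical core library call
    return json_encode($song); //return value is sent back as the job result
}
$worker->addFunction('getSongInfo', 'getSongInfo');
while ($worker->work());
?>

<?php
//client.php: any language with a Gearman client library can do this; in PHP
//a synchronous call blocks until some worker returns a result.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$info = json_decode($client->doNormal('getSongInfo', '42'), true); //do() on older pecl gearman
?>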

Lessons Learned/Caveats

No technology is perfect, especially when you are using it in a way other than how it was intended to be used.
We found that gearman workers (at least the pecl gearman extension’s implementation) connect to and process jobs on gearman servers in a round-robin fashion, draining all jobs from one server before moving to the next. That creates a few different headaches for us:

  • If one server has a large backlog of jobs and workers get to it first, they will process those jobs exclusively until they are all done, leaving the other servers to end up with a huge backlog
  • If one server is unreachable, workers will wait however long the timeout is configured for every time they run through the round-robin list. Even if the timeout is as low as 1 second, that is 1 second out of 20 that the worker cannot be processing any jobs. In a high volume logging situation, those jobs can add up quickly
  • Gearman doesn’t give memory that was used for long queues back to the OS when it’s done with it. It will reuse this memory, but if your normal gearman memory needs are 60MB and an epic backlog caused by these interactions leads it to use 2GB of memory, you won’t get that memory back until Gearman is restarted.

Our solution to these issues is, unless there is a strong need to rate limit the work, to just configure a separate worker for each FEN so that if one FEN is having weird issues, it won’t affect the others.
Our architecture, combined with the fact that each request from a user will go to a different FEN, means that we can’t take advantage of one really cool gearman feature: unique jobs. Unique jobs means that we could fire asynchronous jobs to prefetch data we know the client is going to ask for, and if the client asks for it before it is ready, we could have a synchronous request hook into the same job, waiting for the response.
Talking to a Gearman server over localhost is not the fastest thing in the world. We considered using Gearman to handle geolocation lookups by IP address so we can provide localized concert recommendations, since those jobs could be asynchronous, but we found that submitting an asynchronous job to Gearman was an order of magnitude slower than doing the lookup directly with the geoip PHP extension once we compiled it with mmap support. Gearman was still insanely fast, but this goes to show that not everything is better served being processed through Gearman.

Wish List

From reading the above you can probably guess what our wish list is:

  • Gearman should return memory to the OS when it is no longer needed. The argument here is that if you don’t want Gearman to use 2GB of memory, you can set a ulimit or take other measures to make sure you don’t ever get that far behind. That’s fine but in our case we would usually rather allow Gearman to use 2GB when it is needed, but we’d still like to have it back when it’s done!
  • Workers should be better at balancing. Even if one server is far behind it should not be able to monopolize a worker to the detriment of all other job servers.
  • Workers should be more aware of timeouts. Workers should have a way to remember when they failed to connect to a server and not try again for a configurable number of seconds. Or connections should be established to all job servers in a non-blocking manner, so that one timing out doesn’t affect the others.
  • Servers should be capable of replication/aggregation. This is more of a want than a need, but sometimes it would be nice if one job server could be configured to pull jobs off of other job servers and pool them. That way jobs could be submitted over localhost on each FEN, but aggregated elsewhere so that one worker could process them in serial if rate limiting is desired, without potentially being slowed down by a malfunctioning FEN.
  • Reduce latency for submitting asynchronous jobs locally. Submitting asynchronous jobs over localhost is fast, but it could probably be even faster. For example, I’d love to see how it could perform using unix sockets.

Even with these niggles and wants, Gearman has been a great, reliable, performant product that we can count on to help keep the site fast and responsive for our users.

Supervisord

When talking about Gearman, I would be remiss if I did not mention Supervisord, which we use to babysit all of our workers. Supervisord is a nice little Python utility you can use to daemonize any process for you, and it will handle things like redirecting stdout to a log file, auto-restarting the process if it fails, starting as many instances of the process as you specify, and automatically backing off if your process fails to start a specified number of times in a row. It also has an RPC interface so you can manage it remotely; for instance, if you notice a backlog of jobs piling up on one of your gearman servers, you can tell supervisord to fire up another 20 workers.
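
For reference, a supervisord program section for a pool of logging workers might look something like this (paths, names, and counts are invented; see the supervisord docs for the full option list):

[program:logging_worker]
command=php /data/workers/logging_worker.php
numprocs=12
process_name=%(program_name)s_%(process_num)02d
autostart=true
autorestart=true
startretries=3
redirect_stderr=true
stdout_logfile=/var/log/workers/logging_worker.log

Bumping numprocs and reloading supervisord is then all it takes to add more workers.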

 
 

Grooveshark Playlists now in MongoDB

06 Mar

As of about 5:30am last night (this morning?) Grooveshark is now using MongoDB to house playlist information.

Until now playlists have lived in MySQL, but there were some big problems that occasionally led to data loss due (mostly) to deadlocks. Needless to say, users don’t like it when you lose their data. Moving to Mongo should resolve all of these issues.

Grooveshark has been using MongoDB for sessions and feed data for a while now, so we are comfortable with the technology and know that it is capable of handling massive amounts of traffic. While it’s certainly not perfect, we are confident that it will be easy to scale out to maintain reliability as our user base continues to grow rapidly.

 
 

Grooveshark IE bug

29 Jan

I hate the idea that this blog might be turning into nothing but a journal of all the things at Grooveshark that have ever broken, but some of the most interesting challenges we face are when things go terribly awry, so I’m not going to avoid talking about it just because it involves something breaking at Grooveshark, again.

What happened was that, out of the blue, IE8 could no longer run the site. Users were getting a message about making sure they did not have flash block enabled, which means the swf was failing in some way. We determined that the swf was in fact loading, so why was it lying to us? There is one file that swfs need in order to talk to other domains: crossdomain.xml. If that file fails to load, the swf isn’t going to work. I suspected that was happening in this case, so I loaded up http://cowbell.grooveshark.com/crossdomain.xml and IE complained that it wasn’t valid XML. View source showed me that IE was right. It was in fact what looked like binary garbage. Loading the same file in Firefox and Chrome worked perfectly fine, but IE8 on 4 different computers all showed the invalid XML.

Some months ago, we switched from serving pages up directly from Apache, to running Nginx in front of Apache as a reverse proxy with caching. The difference that made on our front end servers in terms of memory usage and CPU load is phenomenal. Although Nginx serves 30% of requests from cache now, the drop in server load was much more than 30%. Nginx is truly a wonderful addition to our http stack…but as you’ve guessed by now, it played a key role in the latest breakage at Grooveshark.

Force clearing the cache in IE8 and in nginx would sometimes fix the file, but not always. I then turned to wget and found the same thing: whenever the file was broken for IE8, it was identical in wget. wget was showing the exact same file size that Firebug was showing, which was the biggest clue: Firefox received the file gzipped because it supports deflate, but wget also received the file gzipped even though it doesn’t support deflate. My theory, which proved correct, was that IE8 was for some reason asking for the non-gzipped version, but receiving the gzipped version and barfing.

Why would that happen? Well, remember that we are using nginx as a reverse proxy cache. It turns out that we just recently added some auto-gzipping for certain file types to apache. What was happening was, nginx would get a request for a file not in its cache, and forward along the request (with all headers intact) to Apache. If this request came from a client that supports deflate, Apache would respond with a gzipped file. Nginx would store that gzipped file in its cache, and the next request that came in asking for that file, with or without deflate support, would get the gzipped version served up.

The fix was relatively simple: add a variable in nginx conf tracking whether or not the current client supports deflate. Append the value of that variable to the proxy key, meaning that gzipped and non-gzipped versions of the files will be cached separately, and served appropriately depending on what the client supports.
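
In nginx configuration terms, the fix looks roughly like this (placement and variable names simplified; treat it as a sketch of the idea rather than our exact config):

# http block: derive a cache key component from the client's Accept-Encoding.
map $http_accept_encoding $gzip_key {
    default          "";
    ~*(gzip|deflate) "gzip";
}

# proxying location: include that component so gzipped and plain responses
# are cached under separate keys.
proxy_cache_key "$scheme$host$request_uri$gzip_key";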

What’s not clear to me at this time is why IE8 would refuse to accept gzipped content for that file, and whether that applies to all .xml files in IE8…but at least it helped us catch what would have otherwise been an extremely obscure issue!

 
 

Controlling Arduino via Serial USB + PHP

28 Dec

Paloma bought me an Arduino for Christmas, and I’ve been having lots of fun with it, first following some tutorials to get familiar with the platform, but of course the real excitement for me is being able to control it from my computer, so I’ve started playing around with serial communication over the USB port.

I messed around with Processing a bit, and while Processing seems pretty cool and pretty powerful for drawing and making a quick interface, I want to be able to get up and running fast communicating with all of our existing systems, so I thought PHP would be ideal.

I’ve never used PHP to communicate over a serial port before, so I did some digging and actually came across an example of using PHP to communicate with an Arduino, but it wasn’t working properly for me. My LED was blinking twice every time, and if I changed it to a value comparison (i.e. sending 1 in PHP and checking for 1 on the Arduino) it was never matching. It turns out that, for some reason, by default the COM port wasn’t running in the right mode from PHP. The solution is a simple additional line:

exec("mode com3: BAUD=9600 PARITY=N data=8 stop=1 xon=off");

…but make sure you adjust to match the settings appropriate for your computer.

Of course I had to write my own little demo once I got it working. This PHP script takes command-line arguments and passes them along to the Arduino, which looks for ones and zeroes to turn the LEDs off or on. Unlike the example I linked to, I’m working with a shift register to drive 8 LEDs, but you’ll get the idea, and it should be obvious how to convert it to work with a single LED.

<?php
ini_set('display_errors', 'On');
exec("mode com3: BAUD=9600 PARITY=N data=8 stop=1 xon=off");
$fp = fopen("COM3", "w");
 
foreach ($argv as $i => $arg) {
    if ($i > 0) { //skip the first arg since it's the name of the file
        print "writing " . $arg . "\n";
        $arg = chr($arg); //fwrite takes a string, so convert
        fwrite($fp, $arg);
        sleep(1);
    }
}
print "closing\n";
fclose($fp);
?>

And for the Arduino:

//Pins connected to the 74HC595 shift register
const int latchPin = 4; //latch pin (ST_CP)
const int clockPin = 3; //clock pin (SH_CP)
const int dataPin = 2;  //data pin (DS)
void setup() {
    pinMode(latchPin, OUTPUT);
    pinMode(dataPin, OUTPUT);  
    pinMode(clockPin, OUTPUT);
    Serial.begin(9600);
    //reset LEDs to all off
    digitalWrite(latchPin, LOW);
    shiftOut(dataPin, clockPin, MSBFIRST, B00000000);
    digitalWrite(latchPin, HIGH);
}

void loop() {
    byte val = B11111111;
    if (Serial.available() > 0) {
        byte x = Serial.read();
        if (x == 1) {
            val = B11111111;
        } else {
            val = B00000000;
        }

        digitalWrite(latchPin, LOW);
        shiftOut(dataPin, clockPin, MSBFIRST, val);
        digitalWrite(latchPin, HIGH);
        delay(500);
    }
}

For bonus debugging points, if you’re using a shift register like this, send x instead of val to shiftOut, and you can actually see the byte that was received displayed on the LEDs, very handy if you’re not getting what you are expecting (like I was)!
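
To try it out, save the PHP script as something like leds.php (the name is arbitrary) and drive the Arduino straight from the command line:

php leds.php 1 0 1 1 0

Each argument is written to the serial port a second apart, so you can watch the LEDs switch on and off as the values arrive.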

 

 

Old sphinx bug

21 Nov

I’m posting this mostly as a note to myself.

When trying to build the pecl sphinx extension, I ran into problems because I could not build libsphinxclient.

It looks like the bug causing that problem has been around for at least a year, but at least the fix is simple:
change line 280 from void sock_close ( int sock ); to static void sock_close ( int sock );

Note: this applies to sphinx 0.9.9, the last “stable” release at the time of this writing.

 
 

Google AJAX Support: Awesome but Disappointing

18 Oct

Google has added support for crawling AJAX URLs. This is great news for us and any other site that makes heavy use of AJAX or is more of a web app than a collection of individual pages.

We have long worked around the issue of AJAX URLs not being crawl-able by having two versions of our URLs, with and without the hash. Users who are actually using the site will obviously get AJAX URLs like http://listen.grooveshark.com/#/user/jay/42, but if a crawler goes to http://listen.grooveshark.com/user/jay/42 they will get content for that page as well, while real users will be automatically redirected to the proper URL with the hash. Crawlers aren’t smart enough to go there on their own of course, but we provide a sitemap, and all links we present to crawlers are absent of the hash. Likewise, when users post links to Facebook via the app, we automatically give them the URL without the hash so they can put up a pretty preview of the link in the user’s news feed. The problem is, users also like to share by copying URLs from the URL bar. If users post those links anywhere, crawlers don’t know how to crawl them, so they either don’t or they just count it as a link to http://listen.grooveshark.com/ which isn’t great for us and is lousy for users too.

Google’s solution is for sites like ours to switch from using # to using #! and then opting in to having those URLs crawled. The crawler will take everything after the #! and convert the “pretty” URL into an ugly one. For example, /#!/user/jay/42 presumably becomes something like /?_escaped_fragment_=%2Fuser%2Fjay%2F42 when the crawler sends the request to us.
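
On our end, supporting the scheme mostly means translating that ugly parameter back into the hashless URL we already know how to serve. A minimal sketch in PHP (the handler function is hypothetical):

<?php
//If the crawler requests /?_escaped_fragment_=%2Fuser%2Fjay%2F42, serve the
//same content we already serve at /user/jay/42.
if (isset($_GET['_escaped_fragment_'])) {
    $path = $_GET['_escaped_fragment_']; //PHP has already urldecoded this to "/user/jay/42"
    renderHashlessPage($path); //hypothetical: the existing crawler-friendly handler
    exit;
}
?>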

This is annoying and frustrating for several reasons:

  1. All our URLs have to change
    We have to change all URLs to have a #! instead of just a #. This not only requires developer effort but makes our URLs slightly uglier.
  2. All links that users have shared in the past will continue to be non-crawlable forever.
  3. We now have to support 3 URL formats instead of 2
  4. One of those URL formats no human will ever see; we are building a feature solely for the benefit of a robot.

Again, we greatly appreciate that Google is making an effort to crawl AJAX URLs; it’s a huge step forward. It’s just not as elegant as it could be. It seems like we could accomplish the same goals more simply by:

  1. Requiring opt-in just like the current system
  2. Using robots.txt to dictate which AJAX URLs should not be crawled
  3. Allowing webmasters to specify what the # should be replaced with
    In our case it would be replaced by nothing, just stripped out. For less sophisticated URL schemes that just use #x=y, webmasters could specify that the # should be replaced by a ?

That solution would have the same benefits of the current one, with the additional benefits of allowing all crawling permissions to be specified in the same place (robots.txt), automatically making links already in the wild crawl-able, without requiring the addition of support for yet another URL format.

 
 

Improve Code by Removing It

17 Oct

I’ve started going through O’Reilly’s 97 Things Every Programmer Should Know, and I plan to post the best ones here at random intervals: the ones I think we do a great job of following at Grooveshark, and the ones I wish we did better.

The first one is Improve Code by Removing It.

It should come as no surprise that the fastest code is code that never has to execute. Put another way, you can usually only go faster by doing less. And of course, code that never runs also exhibits no bugs. :)

Since I started at Grooveshark, I’ve deleted a lot of code. 20% more code than I’ve ever written. More code than all but one of our developers has contributed. Despite that, I think we’ve only actually ever removed one feature, and we’ve added many more.

One of the things I see in the old code is an overenthusiastic attempt to make everything extremely flexible and adaptive. The original authors obviously made a noble effort to imagine every possible future scenario that the code might some day need to handle and then come up with an abstraction that could handle all of those cases. The problem is, those scenarios almost never come up. Instead, different features are requested which do not fit the goals of the original abstraction at all, so you end up having to work around it in weird ways that make the code more difficult to understand and less efficient.

Let me try to provide a more concrete example so you can see what I’m talking about. We have an Auth class that really only needs to handle 3 things:

  • Let users log in (something like Auth->login(username, password))
  • Let users log out (something like Auth->logout())
  • Find out who the logged-in user is, if a user is logged in (Auth->getUser())

Should be extremely straightforward, right? Well, the original author decided that the class should allow for multiple authentication strategies over various protocols in some scenarios that could never possibly arise (such as not having access to sessions), even though at the time only one was needed. Instead of ~100 lines of code to just get the job done and nothing else, we ended up with 1,176 lines spanning 5 files. The vast majority of that code was useless; our libraries sit behind other front-facing code, so the protocol and “strategy” for authenticating are handled one level higher up, and we always use sessions so that no matter how a user logged in, they are logged in to all Grooveshark properties.

When we finally did add support for a truly new way to log in (via Facebook Connect), none of that code was useful at all, because Facebook Connect works in a way the author could never have anticipated 2 years ago. Finally, because the original author anticipated a scenario that cannot possibly arise (that we might know the user’s username but not their user ID), fetching the logged-in User’s information from the database required a less efficient lookup by username rather than a primary key lookup by ID.
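
For contrast, a minimal session-backed Auth covering the three requirements above might look something like the sketch below. The User methods and session key are invented; the point is the size of the class and the primary key lookup.

<?php
class Auth
{
    //Log a user in; returns true on success. (Password handling simplified.)
    public static function login($username, $password)
    {
        $user = User::getByUsername($username); //hypothetical lookup
        if ($user && $user->checkPassword($password)) {
            $_SESSION['userID'] = $user->userID;
            return true;
        }
        return false;
    }

    //Log the current user out.
    public static function logout()
    {
        unset($_SESSION['userID']);
    }

    //Return the logged-in User (fetched by primary key), or null if nobody is logged in.
    public static function getUser()
    {
        if (empty($_SESSION['userID'])) {
            return null;
        }
        return User::getByID($_SESSION['userID']); //primary key lookup
    }
}
?>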

Let’s step back a moment and pretend that the author had in fact been able to anticipate how we were going to incorporate Facebook Connect and made the class just flexible enough and just abstract enough in just the right ways to accommodate that feature that we just now got around to implementing. What would have been the benefit? Well, most of the effort of implementing that feature is handling all of the Facebook specific things, so that part would still need to be written. I’d say at best it could have saved me from having to write about 10 lines of code. In the meantime, we would have still been carrying around all of that extra code for no reason for a whole two years before it was finally needed!

Let’s apply YAGNI whenever possible, and pay the cost of adding features when they actually need to be added.

Edit:
Closely related: Beauty is in Simplicity.

 

PHP Autoload: Put the error where the error is

13 Oct

At Grooveshark we use PHP’s __autoload function to handle automatically loading files for us when we instantiate objects. In the docs they have an example autoload method:

function __autoload($class_name) {
    require_once $class_name . '.php';
}

Until recently Grooveshark’s autoload worked this way too, but there are two issues with this code:

  1. It throws a fatal error inside the method.

    If you make a typo when instantiating an object and ask it to load something that doesn’t exist, your error logs will say something like this: require_once() [function.require]: Failed opening required 'FakeClass.php' (include_path='.') in conf.php on line 83, which gives you no clues at all about what part of your code is trying to instantiate FakeClass. Not remotely helpful.

  2. It’s not as efficient as it could be.

    include_once and require_once have slightly more overhead than include and require because they have to do an extra check to make sure the file hasn’t already been included. It’s not much extra overhead, but it’s completely unnecessary because __autoload will only trigger if a class hasn’t already been defined. If you’re inside your __autoload function, you haven’t already included the file, or the class would already be defined.

The better way:
function __autoload($class_name) {
    include $class_name . '.php';
}

Now if you make a typo, your logs will look like this:
PHP Warning: include() [function.include]: Failed opening 'FakeClass.php' for inclusion (include_path='.') in conf.php on line 83
PHP Fatal error: Class 'FakeClass' not found in stupidErrorPage.php on line 24

Isn’t that better?

Edit 2010-10-19: Hot Bananas! The docs have already been updated. (at least in svn)