The tale of the ‘ubiquitous’ Stanford Bunny
In the early 1990s, Stanford professor Marc Levoy and his postdoc, Greg Turk, created the world’s first seamless 3D computer model of a complex object using a range-finding laser scanner. Their object of choice, a terra cotta garden decoration, is now known worldwide as “The Stanford Bunny,” and computer graphics researchers still use it today. With Easter approaching, we talked to Levoy about what has kept his bunny going three decades later.
How did the Stanford Bunny come to be?
I was a new professor at Stanford. Three-dimensional scanning seemed like an interesting problem: trying to get the shape of an object accurately recorded in a computer. There were scanners at the time that could do some modest range scanning – like the front of your face, or one that could go completely around you – and build a crude model of the surface of an object, but not a complete one, with a top and a bottom and interstices and all that.

My postdoc at the time, Greg Turk, and some grad students started to work on this. It’s hard to scan an object all at once, so you do it in parts – little meshes, like pieces of fabric, from all around the object. Greg created an algorithm called Zipper that could take a bunch of these meshes and stitch them together to create a seamless, three-dimensional model.

We needed a test object. It was around Eastertime, and Greg was out shopping and saw this bunny on a store shelf – a terra cotta garden sculpture. It was about the right size. So we scanned it. It was the first model of its kind, and it became a popular computer graphics model for people to experiment on – to add fur, or melt it, or break it, or whatever else you can imagine doing to a 3D object.
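For readers curious what “stitching meshes” looks like in practice, here is a heavily simplified sketch in Python. It is illustrative only – not Turk’s actual Zipper algorithm, which trims and re-triangulates the overlap region – and simply merges two overlapping triangle meshes by snapping together vertices that fall within a small tolerance of each other.

```python
import numpy as np

# A heavily simplified illustration of the "zippering" idea -- not Turk's
# actual algorithm. Each vertex of mesh B is snapped onto a vertex of the
# merged mesh when the two lie within a small tolerance; otherwise it is
# kept as new geometry. B's triangles are then re-indexed accordingly.
def zipper(verts_a, tris_a, verts_b, tris_b, tol=1e-3):
    merged_verts = [tuple(v) for v in verts_a]
    merged_tris = [tuple(t) for t in tris_a]
    remap = []
    for v in verts_b:
        # Distance from this vertex of B to every vertex already merged.
        dists = np.linalg.norm(np.asarray(merged_verts) - np.asarray(v), axis=1)
        nearest = int(dists.argmin())
        if dists[nearest] < tol:
            remap.append(nearest)            # snap: reuse the existing vertex
        else:
            remap.append(len(merged_verts))  # keep: genuinely new geometry
            merged_verts.append(tuple(v))
    merged_tris += [tuple(remap[i] for i in tri) for tri in tris_b]
    return np.asarray(merged_verts), np.asarray(merged_tris)

# Two single-triangle "meshes" sharing an edge at x = 1.
a_verts = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0]])
b_verts = np.array([[1., 0, 0], [2, 0, 0], [1, 1, 0]])
verts, tris = zipper(a_verts, np.array([[0, 1, 2]]),
                     b_verts, np.array([[0, 1, 2]]))
print(len(verts), len(tris))  # 4 vertices, 2 triangles -- the seam was fused
```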
Why do you think it caught on?
It was the first detailed polygon mesh model of a very complicated object with lots of little nooks, crannies, and crevices. There had been models of complicated stuff before, but it was all built by hand – airplanes and things like that. The idea of just taking an object, sticking it on a platter, and being able to scan it more or less automatically and produce a model was new. We put the data online and made it freely available. It just became ubiquitous – one of the standard models for computer graphics researchers to practice on. If you page through the proceedings of any computer graphics conference today, maybe a third of the papers use the Stanford Bunny. Just Google “Stanford Bunny.” You’ll see.
The bunny replaced the Utah Teapot as the iconic computer graphics model. The Computer History Museum approached me a while ago wanting to put the bunny in their collection. So there’s a spot for it next to the Utah Teapot, but not yet – the original is still in my office at Stanford. One of the reasons the bunny is popular is that there’s no IP associated with it. It was just a simple garden decoration, and anybody could do anything with it. It’s also kind of innocuous – not a religious icon that people could desecrate. And it’s just the right size to do a lot of experiments on. It just became really popular, and it’s fun to see the stuff people have done with it: adding fur, the breakages, the textures.
How does the scanning technology work?
A lot of people think it’s LIDAR – the technology used in autonomous vehicles – which measures distance using time of flight: how quickly laser light bounces back to a sensor after striking an object. The technology we used was different. The laser and the sensor are separate, allowing you to triangulate distance, like a range finder. The problem was that, at the time, you could only do one side of an object or a specific area, not an entire object. What we brought to the table were the algorithms for combining multiple scans to make a seamless object. We scanned the bunny from all around and created mathematical triangles between the measurements. Greg’s algorithm was able to match up the edges of these meshes and stitch them together – or “zip” them, as he called it, hence the name Zipper. Those triangles represent the surface of the object, and animators can use the data to render pictures. There are exactly 69,451 triangles making up the Stanford Bunny.
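To make the triangulation geometry concrete, here is a minimal sketch under an idealized textbook setup – not the specifics of the scanner Levoy’s group used. Given a known baseline between the laser and the sensor, and the angle each makes to the lit surface point, the point’s depth follows from the law of sines.

```python
import math

# Idealized laser triangulation (an assumed two-angle geometry, not the
# exact scanner used for the bunny). The laser and the sensor sit a known
# baseline apart; each measures the angle between the baseline and its
# sightline to the lit surface point.
def triangulate_depth(baseline_m, laser_angle_rad, sensor_angle_rad):
    # The two instruments and the surface point form a triangle; the angle
    # at the surface point is whatever remains of 180 degrees.
    apex = math.pi - laser_angle_rad - sensor_angle_rad
    # Law of sines: the side opposite the laser angle is the
    # sensor-to-point range.
    range_from_sensor = baseline_m * math.sin(laser_angle_rad) / math.sin(apex)
    # Project that range onto the direction perpendicular to the baseline.
    return range_from_sensor * math.sin(sensor_angle_rad)

# Example: a 30 cm baseline with both sightlines at 75 degrees puts the
# surface point about 0.56 m from the baseline.
print(round(triangulate_depth(0.30, math.radians(75), math.radians(75)), 2))
```

A stripe scanner sweeps the laser across the object and repeats this calculation for every lit point, producing one range image per viewpoint – the partial meshes that Zipper then stitches together.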
Did you continue to hone the technology?
Yes, there were several projects after that, and some very sophisticated models of very complicated objects. The best known is probably the Digital Michelangelo Project, where we traveled to Florence to scan Michelangelo’s David in situ. Our model of the David has a billion polygons – which is a lot. That required many, many sweeps, and used a more complicated laser scanner.

Our approach – not the bunny per se – was used quite a bit in the entertainment industry. We made models for Industrial Light & Magic in the making of Star Wars: Episode I – The Phantom Menace. We scanned their models in our lab. That was kind of a dramatic day at Stanford, when they came in this van with these sealed boxes. They didn’t want anyone getting a sneak peek. We had to disconnect our computers in the graphics lab from the internet. We had curtains across all the doorways. They put burly guards at the entrances. And, for a whole day, we scanned their models of various objects and creatures for The Phantom Menace. Then we gave them a copy of the disk, and they watched while we wrote zeros over our data, and that was that.