Google’s labeling game is based on the ESP Game created by Luis von Ahn at Carnegie Mellon University. Two randomly chosen players are paired online, shown the same image, and asked to label it. The moment both type the same term, Google’s system connects that term with that image. The reasoning is that if two people independently use the same term to describe an image, it is likely that others will as well. The gaming element comes into play because each match earns the players points and a ranking. Top pairs and all-time top contributors are recognized on the Google Image Labeler home page.
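The matching rule at the heart of the game is simple enough to sketch in a few lines of Python. This is an illustrative, offline version (the function name and guess lists are invented); the live game matches entries as players type them:

```python
from itertools import zip_longest

def first_agreed_label(guesses_a, guesses_b):
    """Return the first term both players typed for an image, or None.

    Terms are normalized to lowercase, and each player's guesses are
    consumed in the order they were entered, mimicking the live match.
    """
    seen_a, seen_b = set(), set()
    for a, b in zip_longest(guesses_a, guesses_b):
        if a:
            a = a.strip().lower()
            if a in seen_b:
                return a          # player B already typed this term
            seen_a.add(a)
        if b:
            b = b.strip().lower()
            if b in seen_a:
                return b          # player A already typed this term
            seen_b.add(b)
    return None                   # no agreement; the image goes unlabeled

# The pair agrees on "dog", which becomes a label for the image.
print(first_agreed_label(["puppy", "dog", "cute"], ["animal", "dog"]))
```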
Other initiatives forgo play for pay
Labeling its foray into crowdsourcing "artificial artificial intelligence," Amazon.com has established the Mechanical Turk project, essentially "piece work" for a knowledge-based economy.
With Mechanical Turk, companies create HITs (Human Intelligence Tasks), and everyday people accept those tasks in exchange for small sums of money. HITs are tasks that humans can accomplish quickly and easily but that would take many hours of programming for computers to carry out -- such as examining a scanned receipt and pulling out specific pieces of data. Mechanical Turk helps companies get essential but routine business questions answered, and the folks answering those questions get paid.
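To make the workflow concrete, here is a minimal sketch of posting a receipt-transcription HIT through Amazon’s Mechanical Turk API using the boto3 Python SDK. The task page URL, reward, and other values are hypothetical, and the sandbox endpoint is used so nothing is actually paid out:

```python
import boto3

# Sandbox endpoint, so the sketch can run without spending real money.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion shows workers our (hypothetical) task page in a frame.
question = """\
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/receipt-task</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Transcribe the total from a scanned receipt",
    Description="Look at one receipt image and type the total amount.",
    Keywords="data entry, receipts, transcription",
    Reward="0.05",                    # USD paid per completed assignment
    MaxAssignments=3,                 # ask three workers, then compare answers
    LifetimeInSeconds=60 * 60 * 24,   # the HIT stays listed for one day
    AssignmentDurationInSeconds=300,  # each worker gets five minutes
    Question=question,
)
print("Posted HIT:", hit["HIT"]["HITId"])
```

Asking several workers the same question and comparing their answers is the usual quality check: agreement plays the same role here that matching terms play in the labeling game.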
Michael Dell raised eyebrows with the release of Dell IdeaStorm. One of the questions Dell put to this crowdsourcing market was, "What product do you really want us to create?" The overwhelming response: a laptop offered with either no operating system or Linux preinstalled. Dell listened to its market, with significant results: Reports state that Dell has sold more than 40,000 laptops running the Linux-based Ubuntu OS.
Group crowdsourcing is also cropping up. InnoCentive describes its community as “open innovation.” It is an alliance of companies, called Seekers, designed to tap into the creativity and collective intelligence of a global network of subject-matter experts, called Solvers. In InnoCentive’s Open Innovation Marketplace, Solvers are handsomely rewarded, with prizes reaching $100,000 for their solutions.
Ensuring optimal results
Crowdsourcing success very much depends on the quality and quantity of participation. The best way to maximize participation is to have a firm grasp on the desired outcome and to reward people for contributing. If the desired outcome is to get as much information about a particular topic as possible, then the rules of the "game" need to reward participants who deliver quality information about that topic and penalize those who don’t.
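One way to picture such rules is a toy scoring function in which peer-confirmed contributions earn points and rejected ones cost more than a confirmed one earns, so flooding the system with low-quality answers is a losing strategy. The point values and names below are invented for illustration:

```python
from collections import defaultdict

# Illustrative values only; a real system would tune these carefully.
REWARD, PENALTY = 10, -25

scores = defaultdict(int)

def record_contribution(user, confirmed_by_peers):
    """Reward confirmed contributions and penalize rejected ones.

    Because the penalty outweighs the reward, a participant cannot
    profit on average by submitting careless or spammy answers.
    """
    scores[user] += REWARD if confirmed_by_peers else PENALTY
    return scores[user]

record_contribution("alice", True)    # +10: peers confirmed her answer
record_contribution("bob", False)     # -25: low-quality answer rejected
print(dict(scores))                   # {'alice': 10, 'bob': -25}
```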
It’s easy to see how crowdsourcing, by its nature, can quickly get out of hand. It helps to staff your crowdsourcing team with people who can think like both computer scientists and economists -- able to see 10 moves ahead. The ability to look for unintended as well as intended outcomes, and to manage both for the betterment of the project, is a critical skill.