Queuing as a Coordination Mechanism

In some countries queuing has become a social norm.  In other countries queuing is not so established.  Here are some images of queues in China on so-called "queuing" days, days when queuing is enforced by the government.  You can read about why these "queuing days" are necessary here and how they were introduced here, here and here and the perils of getting in a taxi here.

Queues in China

A comment posted with the photo stated:

Queuing is never in China's vocabulary and cutting a queue is perceived as normal. You could encounter this act almost anywhere, such as restaurants, banks, toilets or ATMs. Once I experienced this in a super-mart. A customer shouted at the counter girl that since he bought only one small item he should be served first and it would be ridiculous for him to go to the end of the queue for paying. Surprisingly, the counter girl gave in.

The picture here depicts a scene where the Chinese are forced to queue up for purchasing train tickets before their Chinese New Year. Notice that they are so worried that people may cut into their queue that they have to hug or arm-lock one another.

Here's a normal day getting onto public transport in China.

[Images: queueing for public transport in China]

Of course, the Chinese reaction to queuing, or lack thereof, can be viewed in a completely rational manner.  Time spent in a queue is time wasted.  It is likely that a society wastes a huge amount of resources when people stand in queues.  This time could be spent more productively instead of waiting in a queue.  Is there a more efficient way of organising queues as a coordination and allocation mechanism?

Steven Landsburg, the armchair economist, has a theory on this: a foolproof method to shorten queues.

You spend too much time waiting in lines. "Too much" isn't some vague value judgment—it's a precise economic calculation. A good place in line is a valuable commodity, but it's not ordinarily traded in the marketplace. And this "missing market" inevitably produces inefficient outcomes.

Under the current rules, line formation suffers from economic inefficiencies because we enter lines without regard to the interests of later arrivals who queue behind us. How to make line formation more efficient? Change the rules so that new arrivals go to the front of the line instead of the back. Then the addition of a new person in line would impose no costs at all on those who come later. With that simple reform, lines would be a lot shorter. People who got pushed back beyond a certain point would give up and go home. (Well, actually they'd leave the line and try to re-enter as newcomers, but let's suppose for the moment that we can effectively prohibit that behavior.) On average, we'd spend less time waiting, and we'd be happier.
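Landsburg's last-come, first-served idea is easy to probe with a toy simulation. The sketch below is illustrative only: the arrival and service probabilities are invented, and the "give up and go home" cutoff is a made-up parameter. It compares an ordinary back-of-the-line queue with one where newcomers cut to the front and anyone pushed past the cutoff leaves:

```python
import random

def simulate(steps, join_front, give_up_at=None, seed=0):
    """Toy single-server queue. Each tick a customer may arrive and the
    person at the front may finish being served. All rates are invented."""
    rng = random.Random(seed)
    queue, served, max_len = [], 0, 0
    for t in range(steps):
        if rng.random() < 0.5:                 # a new customer shows up
            if join_front:
                queue.insert(0, t)             # Landsburg's rule: go to the front
            else:
                queue.append(t)                # the usual rule: join the back
        if queue and rng.random() < 0.4:       # the front customer is served
            queue.pop(0)
            served += 1
        if give_up_at is not None:             # anyone pushed too far back goes home
            queue = queue[:give_up_at]
        max_len = max(max_len, len(queue))
    return served, max_len

fifo_served, fifo_max = simulate(10_000, join_front=False)
lifo_served, lifo_max = simulate(10_000, join_front=True, give_up_at=5)
```

By construction the front-entry queue can never grow past the give-up threshold, while the ordinary queue, with arrivals slightly outpacing service here, grows without bound — which is the mechanism behind Landsburg's claim that the reform shortens lines.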

Follow the link above to see how he proposes that this can work.  You can read more about this idea here and why queuing is bad for business here.

Chapter Five, Part VII

In January 1956, the economist Vernon L. Smith decided to use his classroom as a laboratory to answer that exact question. Today this would hardly be surprising. Economists routinely use classroom experiments to test out economic hypotheses and to try to understand how human behavior affects the way markets work. But fifty years ago, the idea was a radical one. Economics was a matter of proving mathematical theorems or of analyzing real-world markets. The assumption was that lab tests could tell you nothing interesting about the real world. In fact, in all the economic literature, there were hardly any accounts of classroom experiments. The most famous had been written by Harvard professor Edward Chamberlin, who every year set up a simulated market that allowed his students to trade among themselves. One of those students, as it happened, was Vernon Smith.

The experiment Smith set up was, by modern standards, uncomplicated. He took a group of twenty-two students, and made half of them buyers and half of them sellers. Then he gave each seller a card that indicated the lowest price at which she’d be willing to sell, and gave each buyer a card that indicated the highest price at which she’d be willing to buy. In other words, if you were a seller and you got a card that said $25, you’d be willing to accept any offer of $25 or more. You’d look for a higher price, since the difference would be your profit. But if you had to, you’d be willing to sell for $25. The reverse was true for buyers. A buyer with a card that said $20 would try to pay as little as possible, but if necessary she’d be willing to shell out the double sawbuck. With that information, Smith was able to construct the class’s supply-and-demand curves (or “schedules”) and to figure out therefore at what price they would meet.
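Smith's bookkeeping step — reading the equilibrium off the cards — can be reproduced directly. The function below is a sketch: the card values are invented, not Smith's actual numbers, and the price band it reports is the simplified range pinned down by the marginal buyer and seller:

```python
def clearing(buyer_values, seller_costs):
    """Find the equilibrium quantity, a band of market-clearing prices, and
    the total gains from trade, given each buyer's card (maximum willingness
    to pay) and each seller's card (minimum acceptable price)."""
    buyers = sorted(buyer_values, reverse=True)   # demand schedule, best first
    sellers = sorted(seller_costs)                # supply schedule, cheapest first
    q = 0
    while q < min(len(buyers), len(sellers)) and buyers[q] >= sellers[q]:
        q += 1                                    # count the profitable trades
    surplus = sum(buyers[i] - sellers[i] for i in range(q))
    # The marginal trade pins the price between the last seller's cost
    # and the last buyer's value (a simplification of the full band).
    band = (sellers[q - 1], buyers[q - 1]) if q else None
    return q, band, surplus

# Invented cards for illustration:
q, band, surplus = clearing([35, 30, 25, 20, 15], [10, 15, 20, 25, 30])
```

With these example cards, three trades are profitable, any price between $20 and $25 clears the market, and the group's total gain from trade is $45 — the benchmark against which Smith could measure his students' actual trading.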

Once all the students had their cards and the rules had been explained, Smith let them start trading among themselves. The market Smith set up was what’s called a double auction, which is much like a typical stock market. Buyers and sellers called out bids and asks publicly, and anyone who wanted to accept a bid or ask would shout out his response. The successful trades were recorded on a blackboard at the front of the room. If you were a buyer whose card said $35, you might start bidding by shouting out “Six dollars!” If no one accepted the bid, then you’d presumably raise it until you were able to find someone to accept your price.

Smith was doing this experiment for a simple reason. Economic theory predicts that if you let buyers and sellers trade with each other, the bids and asks will quickly converge on a single price, which is the price where supply and demand meet, or what economists call the “market-clearing price.” What Smith wanted to find out was whether economic theory fit reality.

It did. The offers in the experimental market quickly converged on one price. They did so even though none of the students wanted this result (buyers wanted prices to be lower, sellers wanted prices to be higher), and even though the students didn’t know anything except the prices on their cards. Smith also found that the student market maximized the group’s total gain from trading. In other words, the students couldn’t have done any better had someone with perfect knowledge told them what to do.

In one sense these results could be thought of as unsurprising. In fact, when Smith submitted a paper based on his experiment to the Journal of Political Economy, an ardently pro-market academic journal which was run by economists at the University of Chicago, the paper was rejected at first, because from the editors’ perspective all Smith had done was prove that the sun rose in the east. (The journal eventually did publish the paper, even though four referee judgments on it had come back negative.) After all, ever since Adam Smith economists had been arguing that markets did an excellent job of allocating resources. And in the 1950s, the economists Kenneth J. Arrow and Gerard Debreu had proved that, under certain conditions, the workings of the free market actually led to an optimal allocation of resources. So why were Smith’s experiments so important?

They were important because they demonstrated that markets could work well even when real people were trading in them. Arrow and Debreu’s proof of the efficiency of markets—which is called the general equilibrium theorem—was beautiful in its perfection. It depicted an economy in which every part fit together and in which there was no possibility of error. The problem with the proof was that no real market could fulfil its conditions. In the Arrow-Debreu world, every buyer and seller has complete information, meaning that every one of them knows what all the other buyers and sellers are willing to pay or to sell for, and they know that everyone else knows that they know. All the buyers and sellers are perfectly rational, meaning that they have a clear sense of how to maximize their own self-interest. And every buyer and seller has access to a complete set of contracts that cover every conceivable state of the world, which means that they can insure themselves against any eventuality.

But no market is like this. Human beings don’t have complete information. They have private, limited information. It may be valuable information and it may be accurate (or it may be useless and false), but it is always partial. Human beings aren’t perfectly rational either. They may want, for the most part, to maximize their self-interest, but they aren’t always sure how to do that, and they’re often willing to settle for less-than-perfect outcomes. And contracts are woefully incomplete. So while Arrow-Debreu was an invaluable tool—in part because it provided a way of measuring what an ideal outcome would look like—as a demonstration of the wisdom of markets, it didn’t prove that real-world markets could be efficient.

Smith’s experiment showed that they could, that even imperfect markets populated by imperfect people could still produce near-ideal results. The people in Smith’s experiments weren’t always exactly sure of what was going on. Many of them saw the experience of trading as chaotic and confusing. And they described their own decisions not as the result of a careful search for just the right choice but rather as the best decisions they could come up with at the time. Yet while relying only on their private information, they found their way to the right outcome.

In the four decades since Smith published the results of that first experiment, they have been replicated hundreds, if not thousands, of times, in ever more complex variations. But the essential conclusion of those early tests—that, under the right conditions, imperfect humans can produce near-perfect results—has not been challenged.

Does this mean that markets always lead to the ideal outcome? No. First of all, even though Smith’s students were far from ideal decision makers, the classroom was free of the imperfections that characterize most markets in the real world (and which, of course, make business a lot more interesting than it is in economics textbooks). Second, Smith’s experiments show that there’s a real difference between the way people behave in consumer markets (like, say, the market for televisions) and the way people behave in asset markets (like, say, the market for stocks). When they’re buying and selling “televisions,” the students arrive at the right solution very quickly. When they’re buying and selling “stocks,” the results are much more volatile and erratic. Third, Smith’s experiments— like the Arrow-Debreu equations—can’t tell us anything about whether or not markets produce socially, as opposed to economically, optimal outcomes. If wealth is unevenly distributed before people start to trade in a market, it’s not going to be any more evenly distributed afterward. A well-functioning market will make everyone better off than they were when trading began—but better off compared to what they were, not compared to anyone else. On the other hand, better off is better off.

Regardless, what’s really important about the work of Smith and his peers is that it demonstrates that people who can be, as he calls them, “naïve, unsophisticated agents,” can coordinate themselves to achieve complex, mutually beneficial ends even if they’re not really sure, at the start, what those ends are or what it will take to accomplish them. As individuals, they don’t know where they’re going. But as part of a market, they’re suddenly able to get there, and fast.

Chapter Five, Part VI

A giant flock of starlings moves purposefully through the African sky keeping its shape and speed while sweeping smoothly around a tree. From above, a bird of prey dives into the flock. As the starlings scatter, the flock seems to explode around the predator, but it quickly reassembles itself. As the frustrated predator dives again and again, the flock breaks up, re-forms, breaks up, re-forms, its motion creating an indecipherable but beautiful pattern. In the process, the hawk becomes disoriented, since no individual starling ever stays in the same place, even though the flock as a whole is never divided for long.

From the outside, the flock’s movements appear to be the result of the workings of one mind, guiding the flock to protect itself. At the very least, the starlings appear to be acting in concert with each other, pursuing an agreed-upon strategy that gives each of them a better chance to survive. But neither of these is true. Each starling is acting on its own, following four rules: 1) stay as close to the middle as possible; 2) stay two to three body lengths away from your neighbor; 3) do not bump into any other starling; and 4) if a hawk dives at you, get out of the way. No starling knows what the other birds are going to do. No starling can command another bird to do anything. The rules alone allow the flock to keep moving in the right direction, to resist predators and to regroup when divided.
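The four rules translate almost line for line into the "boids"-style flocking models popularized by Craig Reynolds. The step function below is a minimal sketch, not a validated biological model; the weights and interaction radii are invented:

```python
import math

def step(birds, predator, r_sep=1.0, r_flee=3.0):
    """One tick of the four rules. birds is a list of [x, y] positions;
    predator is an [x, y] position. Weights and radii are invented."""
    cx = sum(b[0] for b in birds) / len(birds)    # centre of the flock
    cy = sum(b[1] for b in birds) / len(birds)
    new = []
    for x, y in birds:
        vx = 0.05 * (cx - x)                      # rule 1: head for the middle
        vy = 0.05 * (cy - y)
        for ox, oy in birds:                      # rules 2 and 3: keep clear
            d = math.hypot(x - ox, y - oy)
            if 0 < d < r_sep:
                vx += (x - ox) / d
                vy += (y - oy) / d
        d = math.hypot(x - predator[0], y - predator[1])
        if d < r_flee:                            # rule 4: get out of the way
            vx += 2.0 * (x - predator[0]) / d
            vy += 2.0 * (y - predator[1]) / d
        new.append([x + vx, y + vy])
    return new
```

Iterating this step produces the flock-level behaviour described above — cohesion, spacing and scattering around a predator — even though each bird computes only from its own local view.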

It’s safe to say that anyone who’s interested in group behavior is enamored of flocking birds. Of all the hundreds of books published in the past decade on how groups self-organize without direction from above, few have omitted a discussion of bird flocks (or schools of fish). The reason is obvious: a flock is a wonderful example of a social organization that accomplishes its goals and solves problems in a bottom-up fashion, without leaders and without having to follow complex algorithms or complicated rules. Watching a flock move through the air, you get a sense of what the economist Friedrich Hayek liked to term “spontaneous order.” It’s a biologically programmed spontaneity—starlings don’t decide to follow these rules, they just do. But it is spontaneity for all that. No plans are made. The flock just moves.

You can see something similar—albeit much less beautiful—the next time you go to your local supermarket looking for a carton of orange juice. When you get there, the juice will be waiting, though you didn’t tell the grocer you would be coming. And there will probably be, over the next few days, as much orange juice in the freezer as the store’s customers want, even though none of them told the grocer they were coming, either. The juice you buy will have been packaged days earlier, after it was made from oranges that were picked weeks earlier, by people who don’t even know you exist. The players in that chain— shopper, grocer, wholesaler, packager, grower—may not be acting on the basis of formal rules, like the starlings, but they are using local knowledge, like the starlings, and they are making decisions not on the basis of what’s good for everyone but rather on the basis of what’s good for themselves. And yet, without anyone leading them or directing them, people—most of them not especially rational or farsighted—are able to coordinate their economic activities.

Or so we hope. At its core, after all, what is the free market? It’s a mechanism designed to solve a coordination problem, arguably the most important coordination problem: getting resources to the right places at the right cost. If the market is working well, products and services go from the people who can produce them most cheaply to the people who want them most fervently. What’s mysterious is that this is supposed to happen without any one person seeing the whole picture of what the market is doing, and without anyone knowing in advance what a good answer will look like. (Even the presence of big corporations in the market doesn’t change the fact that everyone in a market has only a partial picture of what’s going on.) So can this work? Can people with only partial knowledge and limited calculating abilities actually get resources to the right place at the right price, just by buying and selling?

Chapter Five, Part V

Convention may play an important role in everyday social life. But in theory it should be irrelevant to economic life and to the way companies do business. Corporations, after all, are supposed to be maximizing their profits. That means their business practices and their strategic choices should be rationally determined, not shaped by history or by unwritten cultural rules. And yet the odd thing is that convention has a profound effect on economic life and on the way companies do business. Convention helps explain why companies rarely cut wages during a recession (it violates workers’ expectations and hurts morale), preferring instead to lay people off. It explains why the vast majority of sharecropping contracts split the proceeds from the farm fifty-fifty, even though it would be logical to tailor the split to the quality of the farm and the soil. Convention has, as we’ve already seen, a profound effect on strategy and on player evaluation in professional sports. And it helps explain why every major car company releases its new models for the year in September, even though there would presumably be less competition if each company released its cars in different months.

Convention is especially powerful, in fact, in the one part of the economy where you might expect it to have little sway: pricing. Prices are, after all, the main vehicle by which information gets transmitted from buyers to sellers and vice versa, so you’d think companies would want prices to be as rational and as responsive to consumer demand as possible. More practically, getting the price right (at least for companies that aren’t in pure competitive markets) is obviously key to maximizing profits. But while some companies—like American Airlines, which it’s been said changes prices 500,000 times a day, and Wal-Mart, which has made steady price-cutting into a religion—have made intelligent pricing key to their businesses, many companies are positively cavalier about prices, setting them via guesswork or by following simple rules of thumb. In a fascinating study of the pricing history of thirty-five major American industries between 1958 and 1992, for instance, the economist Robert Hall found that there was essentially no connection between increases in demand and increases in price, which suggests that companies decided on the price they were going to charge and charged that price regardless of what happened. Clothing retailers, for instance, generally apply a simple mark-up rule: charge 50 percent more than the wholesale price (and then discount like mad if the items don’t sell). And until recently, the record industry blithely insisted that consumers were actually indifferent to prices, insisting that it sold as many CDs while charging $17 per disk as it would if it charged $12 or $13 a disk.

One of the more perplexing examples of the triumph of convention over rationality is movie theaters, where it costs you as much to see a total dog that’s limping its way through its last week of release as it does to see a hugely popular film on opening night. Most of us can’t remember when it was done differently, so the practice seems only natural. But from an economic perspective, it makes little sense. In any given week, some movies will be playing to packed houses, while others will be playing to vacant theaters. Typically, when demand is high and supply is low, companies should raise prices, and when demand is low and supply is high, they should lower prices. But movie theaters just keep charging the same price for all of their products, no matter how popular or unpopular.

Now, there’s a good reason for theaters not to charge more for popular movies. Theaters actually make most of their money on concessions, so they want as many people as possible coming through the door. The extra couple of dollars they’d make by charging $12.50 instead of $10 for the opening weekend of Spider-Man 2 is probably not worth the risk of forgoing a sellout, especially since in the first few weeks of a movie’s run the theaters get to keep only 25 percent or so of the box-office revenue. (The movie studios claim the rest.) But the same can’t be said for charging less for movies that are less popular. After all, if theaters make most of their money on concessions, and their real imperative is to get people into the theater, then there’s no logic to charging someone $10 to see Cuba Gooding Jr. in Snow Dogs in its fifth week of release. Just as retail stores mark down inventory to move it, theaters could mark down movies to lure more customers.

So why don’t they? Theaters offer a host of excuses. First, they insist (as the music industry once did) that moviegoers don’t care about price, so that slashing prices on less-popular films won’t bring in any more business. This is something you hear about cultural products in general but that is, on its face, untrue. It’s an especially strange argument to make about the movies, when we know that millions of Americans who won’t shell out $8 to see a not-so-great flick in the theater will happily spend $3 or $4 to watch the same movie on their twenty-seven-inch TV. In 2002, Americans spent $1 billion more on video rentals than on movies in the theaters. That year, the most popular video rental in the country was Don’t Say a Word, a Michael Douglas thriller that earned a mediocre $55 million at the box office. Clearly, there were lots of people who thought Don’t Say a Word wasn’t worth $9 but was worth $4, which suggests that there is a lot of cash being spent at Blockbuster that theater owners could be claiming instead.

Theater owners also worry that marking down movies would confuse customers and alienate the movie studios, which don’t want their products priced as if they’re second-rate. Since theaters have to cut separate deals every time they want to show a movie, keeping the studios happy is important. But whether a studio is willing to admit that its movie is second-rate has no impact on its second-rateness. And if annoying a few studio execs is the price of innovation, one would think theater chains would be willing to pay it. After all, fashion designers are presumably annoyed when they see their suits and dresses marked down 50 percent during a Saks Fifth Avenue sale. But Saks still does it, as do Nordstrom and Barneys, and the designers still do business with them.

In the end, though, economic arguments may not be enough to get the theaters to abandon the one-price-fits-all model—a model that the theaters themselves discard when it comes to the difference between showing a movie during the day and seeing one at night (matinees are cheaper than evening shows), but that they cling to when it comes to the difference between Finding Nemo and Gigli (for which they charge the same price). The theaters’ unwillingness to change is less a well-considered approach to profit maximization and more a testament to the power of custom and convention. Prices are uniform today because that’s how they were done back in the days when Hollywood made two different kinds of movies: top-of-the-line features and B movies. Those films played in different kinds of theaters at different times, and where people lived and when they saw a movie affected how much they paid. But tickets to all A-list movies cost the same (with the occasional exception, actually, of a big event film, like My Fair Lady, which played in theaters with reserved seating and cost more). Today, there are no B movies. Every film a studio puts out is considered top-of-the-line, so they’re all priced the same. It is true that this ensures customers remain unconfused. But as the economists Liran Einav and Barak Orbach have written, it also means that movie theaters “deny the law of supply and demand.” They’ve uncoordinated themselves with moviegoers.

Chapter Five, Part IV

Culture also enables coordination in a different way, by establishing norms and conventions that regulate behavior. Some of these norms are explicit and bear the force of law. We drive on the right-hand side of the road because it’s easier to have a rule that everyone follows rather than to have to play the guessing game with oncoming drivers. Bumping into a fellow pedestrian at the crosswalk is annoying, but smashing into an oncoming Mercedes-Benz is quite another thing. Most norms are longstanding, but it also seems possible to create new forms of behavior quickly, particularly if doing so solves a problem. The journalist Jonathan Rauch, for instance, relates this story about an experience Schelling had while teaching at Harvard: “Years ago, when he taught in a second-floor classroom at Harvard, he noticed that both of the building’s two narrow stairwells—one at the front of the building, the other at the rear—were jammed during breaks with students laboriously jostling past one another in both directions. As an experiment, one day he asked his 10:00 AM class to begin taking the front stairway up and the back one down. ‘It took about three days,’ Schelling told me, ‘before the nine o’clock class learned you should always come up the front stairs and the eleven o’clock class always came down the back stairs’—without, so far as Schelling knew, any explicit instruction from the ten o’clock class. ‘I think they just forced the accommodation by changing the traffic pattern,’ Schelling said.” Here again, someone could have ordered the students to change their behavior, but a slight tweak allowed them to reach the good solution on their own, without forcing anyone to do anything.

Conventions obviously maintain order and stability. Just as important, though, they reduce the amount of cognitive work you have to put in to get through the day. Conventions allow us to deal with certain situations without thinking much about them, and when it comes to coordination problems in particular, they allow groups of disparate, unconnected people to organize themselves with relative ease and an absence of conflict.

Consider a practice that’s so basic that we don’t even think of it as a convention: first-come, first-served seating in public places. Whether on the subway or a bus or in a movie theater, we assume that the appropriate way to distribute seats is according to when people arrive. A seat belongs, in some sense, to the person occupying it. (In fact, in some places—like movie theaters—as long as a person has established his or her ownership of a seat, he or she can leave it, at least for a little while, and be relatively sure no one will take it.)

This is not necessarily the best way to distribute seats. It takes no account, for instance, of how much a person wants to sit down. It doesn’t ensure that people who would like to sit together will be able to. And it makes no allowances—in its hard and fast form—for mitigating factors like age or illness. (In practice, of course, people do make allowances for these factors, but only in some places. People will give up a seat on the subway to an elderly person, but they’re unlikely to do the same with a choice seat in a movie theater, or with a nice spot on the beach.) We could, in theory, take all these different preferences into account. But the amount of work it would require to figure out any ideal seating arrangement would far outweigh whatever benefit we would derive from a smarter allocation of seats. And, in any case, flawed as the first-come, first-served rule may be, it has a couple of advantages. To begin with, it’s easy. When you get on a subway, you don’t have to think strategically or worry about what anyone else is thinking. If there’s an open seat and you want to sit down, you take it. Otherwise you stand. Coordination happens almost without anyone thinking about it. And the convention allows people to concentrate on other, presumably more important things. The rule doesn’t need coercion to work, either. And since people get on and off the train randomly, everyone has as good a chance of finding a seat as anyone else.

Still, if sitting down really matters to you, there’s no law preventing you from trying to circumvent the convention by, for instance, asking someone to give up his seat. So in the 1980s, the social psychologist Stanley Milgram decided to find out what would happen if you did just that. Milgram suggested to a class of graduate students that they ride the subway and simply ask people, in a courteous but direct manner, if they could have their seats. The students laughed the suggestion away, saying things like, “A person could get killed that way.” But one student agreed to be the guinea pig. Remarkably, he found that half of the people he asked gave up their seats, even though he provided no reason for his request.

This was so surprising that a whole team of students fanned out on the subway, and Milgram himself joined in. They all reported similar results: about half the time, just asking convinced people to give up their seat. But they also discovered something else: the hard part of the process wasn’t convincing the people, it was mustering the courage to ask them in the first place. The graduate students said that when they were standing in front of a subject, “they felt anxious, tense, and embarrassed.” Much of the time, they couldn’t even bring themselves to ask the question and they just moved on. Milgram himself described the whole experience as “wrenching.” The norm of first-come, first-served was so ingrained that violating it required real labor.

The point of Milgram’s experiment, in a sense, was that the most successful norms are not just externally established and maintained. The most successful norms are internalized. A person who has a seat on the subway doesn’t have to defend it or assert her right to the seat because, for the people standing, it would be more arduous to contest that right.

Even if internalization is crucial to the smooth workings of conventions, it’s also the case that external sanctions are often needed. Sometimes, as in the case of traffic rules, those sanctions are legal. But usually the sanctions are more informal, as Milgram discovered when he studied what happened when people tried to cut into a long waiting line. Once again, Milgram sent his intrepid graduate students out into the world, this time with instructions to jump lines at offtrack betting parlors and ticket counters. About half the time the students were able to cut the line without any problems. But in contrast to the subway—where people who refused to give up their seat generally just said no or even refused to answer—when people did try to stop the line cutting, their reaction was more vehement. Ten percent of the time they took some kind of physical action, sometimes going so far as to shove the intruder out of the way (though usually they just tapped or pulled on his shoulder). About 25 percent of the time they verbally protested and refused to let the jumper in. And 15 percent of the time the intruder just got dirty looks and hostile stares.

Interestingly, the responsibility for dealing with the intruder fell clearly on the shoulders of the person in front of whom the intruder had stepped. Everyone in line behind the intruder suffered when he cut the line, and people who were two or three places behind him would sometimes speak up, but in general the person who was expected to act was the one who was closest to the newcomer. (Closest, but behind: people in front of the intruder rarely said anything.) Again, this was not a formal rule, but it made a kind of intuitive sense. Not only did the person immediately behind the intruder suffer most from the intrusion, but it was also easiest for him to make a fuss without disrupting the line as a whole.

That fear of disruption, it turns out, has a lot to do with why it’s easier to cut a line, even in New York, than you might expect. Milgram, for one, argued that the biggest impediment to acting against line jumpers was the fear of losing one’s place in line. The line is, like the first-come, first-served rule, a simple but effective mechanism for coordinating people, but its success depends upon everyone’s willingness to respect the line’s order. Paradoxically, this sometimes means letting people jump in front rather than risk wrecking the whole queue. That’s why Milgram saw an ability to tolerate line jumpers as a sign of the resilience of a queue, rather than of its weakness.

A queue is, in fact, a good way of coordinating the behavior of individuals who have gathered in a single location in search of goods or a service. The best queues assemble everyone who’s waiting into a single line, with the person at the head of the line being served first. The phalanx, which you often see in supermarkets, with each checkout counter having its own line, is by contrast a recipe for frustration. Not only do the other lines always seem shorter than the one you’re in—which there’s a good chance they are, since the fact that you’re in this line, and not that one, makes it likely that this one is longer—but studies of the way people perceive traffic speed suggest that you’re likely to do a bad job of estimating how fast your line is moving relative to everyone else’s. The phalanx also makes people feel responsible for the speed with which they check out, since it’s possible that if they’d picked a different line, they would have done better. As with strategizing about the subway seat, this is too much work relative to the payoff. The single-file queue does have the one disadvantage of being visually more intimidating than the phalanx (since everyone’s packed into a single line), but on average everyone will be served faster in a single queue. If there’s an intelligent way to wait in line, that’s it. (One change to convention that would make sense would be to allow people to sell their places in line, since that would let the placeholders trade their time for money—a good trade for them—and people with busy jobs to trade money for time—also a good trade. But this would violate the egalitarian ethos that governs the queue.)
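The single-queue advantage is easy to check numerically. Here is a minimal sketch, assuming Poisson arrivals and exponential service times (standard queueing-theory assumptions, not anything specified in the text, and the parameter values are illustrative), that compares one shared line feeding several servers against the "phalanx" arrangement where each customer picks a per-server line at random:

```python
import random

def simulate(n=20000, servers=3, arrival_rate=2.7, service_rate=1.0, seed=0):
    """Compare mean waiting time: one shared queue vs. one line per server."""
    rng = random.Random(seed)

    # Poisson arrivals and exponential service times (hypothetical parameters:
    # three servers at roughly 90 percent utilization)
    t, arrivals = 0.0, []
    for _ in range(n):
        t += rng.expovariate(arrival_rate)
        arrivals.append(t)
    services = [rng.expovariate(service_rate) for _ in range(n)]

    def run(pick_server):
        free = [0.0] * servers          # time at which each server next frees up
        total_wait = 0.0
        for a, s in zip(arrivals, services):
            k = pick_server(free)
            start = max(a, free[k])     # wait until your chosen server is free
            total_wait += start - a
            free[k] = start + s
        return total_wait / n

    single = run(lambda free: free.index(min(free)))    # one shared line
    phalanx = run(lambda free: rng.randrange(servers))  # pick a line at random
    return single, phalanx
```

In runs with these assumed parameters the shared queue's average wait comes out substantially shorter than the random-line arrangement, which matches the chapter's claim that on average everyone is served faster in a single queue.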

At the beginning of this chapter, I suggested that in liberal societies authority had only limited reach over the way citizens dealt with each other. In authority’s stead, certain conventions—voluntarily enforced, as Milgram showed, by ordinary people—play an essential role in helping large groups of people to coordinate their behavior with each other without coercion, and without requiring too much thought or labor. It would seem strange to deny that there is a wisdom in that accomplishment, too.

Chapter Five, Part III

In 1958, the social scientist Thomas C. Schelling ran an experiment with a group of law students from New Haven, Connecticut. He asked the students to imagine this scenario: You have to meet someone in New York City. You don’t know where you’re supposed to meet, and there’s no way to talk to the other person ahead of time. Where would you go?

This seems like an impossible question to answer well. New York is a very big city, with lots of places to meet. And yet a majority of the students chose the very same meeting place: the information booth at Grand Central Station. Then Schelling complicated the problem a bit. You know the date you’re supposed to meet the other person, he said. But you don’t know what time you’re supposed to meet. When will you show up at the information booth? Here the results were even more striking. Just about all the students said they would show up at the stroke of noon. In other words, if you dropped two law students at either end of the biggest city in the world and told them to find each other, there was a very good chance that they’d end up having lunch together.

Schelling replicated this outcome in a series of experiments in which an individual’s success depended on how well he coordinated his response with those of others. For instance, Schelling paired people up and asked them to name either “heads” or “tails,” with the goal being to match what their partners said. Thirty-six of forty-two people named “heads.” He set up a box of sixteen squares, and asked people to check one box (you got paid if everyone in the group checked the same box). Sixty percent checked the top left box. Even when the choices were seemingly infinite, people did a pretty good job of coordinating themselves. For instance, when asked the question: “Name a positive number,” 40 percent of the students chose “one.”

How were the students able to do this? Schelling suggested that in many situations, there were salient landmarks or “focal points” upon which people’s expectations would converge. (Today these are known as “Schelling points.”) Schelling points are important for a couple of reasons. First, they show that people can find their way to collectively beneficial results not only without centralized direction but also without even talking to each other. As Schelling wrote, “People can often concert their intentions and expectations with others if each knows that the other is trying to do the same.” This is a good thing because conversation isn’t always possible, and with large groups of people in particular it can be difficult or inefficient. (Howard Rheingold’s book Smart Mobs, though, makes a convincing case that new mobile technologies—from cell phones to mobile computing—make it much easier for large collections of people to communicate with each other and so coordinate their activities.) Second, the existence of Schelling points suggests that people’s experiences of the world are often surprisingly similar, which makes successful coordination easier. After all, it would not be possible for two people to meet at Grand Central Station unless Grand Central represented roughly the same thing to both of them. The same is obviously true of the choice between “heads” and “tails.” The reality Schelling’s students shared was, of course, cultural. If you put pairs of people from Manchuria down in the middle of New York City and told them to meet each other, it’s unlikely any of them would manage to meet. But the fact that the shared reality is cultural makes it no less real.

Chapter Five, Part II

Consider, to begin with, this problem. There’s a local bar that you like. Actually, it’s a bar that lots of people like. The problem with the bar is that when it’s crowded, no one has a good time. You’re planning on going to the bar Friday night. But you don’t want to go if it’s going to be too crowded. What do you do?

To answer the question, you need to assume, if only for the sake of argument, that everyone feels the way you do. In other words, the bar is fun when it’s not crowded, but miserable when it is. As a result, if everyone thinks the bar will be crowded on Friday night, then few people will go. The bar, therefore, will be empty, and anyone who goes will have a good time. On the other hand, if everyone thinks the bar won’t be crowded, everyone will go. Then the bar will be packed, and no one will have a good time. (This problem was captured perfectly, of course, by Yogi Berra, when he said of Toots Shor’s nightclub: “No one goes there anymore. It’s too crowded.”) The trick, of course, is striking the right balance, so that every week enough—but not too many—people go.

There is, of course, an easy solution to this problem: just invent an all-powerful central planner—a kind of uber-doorman—who tells people when they can go to the bar. Every week the central planner would issue his dictate, banning some, allowing others in, thereby ensuring that the bar was full but never crowded. Although this solution makes sense in theory, it would be intolerable in practice. Even if central planning of this sort were possible, it would represent too great an interference with freedom of choice. We want people to be able to go to a bar if they want, even if it means that they’ll have a bad time. Any solution worth talking about has to respect people’s right to choose their own course of action, which means that it has to emerge out of the collective mix of all the potential bargoers’ individual choices.

In the early 1990s, the economist Brian Arthur tried to figure out whether there really was a satisfying solution to this problem. He called the problem the “El Farol problem,” after a local bar in Santa Fe that sometimes got too crowded on nights when it featured Irish music. Arthur set up the problem this way: If El Farol is less than 60 percent full on any night, everyone there will have fun. If it’s more than 60 percent full, no one will have fun. Therefore, people will go only if they think the bar will be less than 60 percent full; otherwise, they stay home.

How does each person decide what to do on any given Friday? Arthur’s suggestion was that since there was no obvious answer, no solution you could deduce mathematically, different people would rely on different strategies. Some would just assume that the same number of people would show up at El Farol this Friday as showed up last Friday. Some would look at how many people showed up the last time they’d actually been in the bar. (Arthur assumed that even if you didn’t go yourself, you could find out how many people had been in the bar.) Some would use an average of the last few weeks. And some would assume that this week’s attendance would be the opposite of last week’s (if it was empty last week, it’ll be full this week).

What Arthur did next was run a series of computer experiments designed to simulate attendance at El Farol over the period of one hundred weeks. (Essentially, he created a group of computer agents, equipped them with the different strategies, and let them go to work.) Because the agents followed different strategies, Arthur found, the number who ended up at the bar fluctuated sharply from week to week. The fluctuations weren’t regular, but were random, so that there was no obvious pattern. Sometimes the bar was more than 60 percent full three or four weeks in a row, while other times it was less than 60 percent full four out of five weeks. As a result, there was no one strategy that a person could follow and be sure of making the right decision. Instead, strategies worked for a while and then had to be tossed away.

The fluctuations in attendance meant that on some Friday nights El Farol was too crowded for anyone to have fun, while on other Fridays people stayed home who, had they gone to the bar, would have had a good time. What was remarkable about the experiment, though, was this: during those one hundred weeks, the bar was—on average—exactly 60 percent full, which is precisely what the group as a whole wanted it to be. (When the bar is 60 percent full, the maximum number of people possible are having a good time, and no one is having a bad time.) In other words, even in a case where people’s individual strategies depend on each other’s behavior, the group’s collective judgment can be good.
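Arthur's setup is straightforward to reproduce in miniature. The sketch below is not his original code but a simplified reconstruction: the predictor pool, the two-predictors-per-agent rule, and all parameter values are illustrative assumptions. Each agent goes to the bar whenever its currently best-scoring predictor forecasts attendance under the threshold:

```python
import random

AGENTS, THRESHOLD, WEEKS = 100, 60, 100
rng = random.Random(1)

# A small pool of naive attendance predictors (illustrative, not Arthur's exact set)
pool = [
    lambda h: h[-1],                                 # same as last week
    lambda h: sum(h[-3:]) / len(h[-3:]),             # average of recent weeks
    lambda h: AGENTS - h[-1],                        # mirror image of last week
    lambda h: h[-2] if len(h) >= 2 else h[-1],       # same as two weeks ago
    lambda h: 2 * h[-1] - (h[-2] if len(h) >= 2 else h[-1]),  # extrapolate trend
]

# Each agent draws its own pair of predictors and tracks their cumulative error
agent_preds = [rng.sample(range(len(pool)), 2) for _ in range(AGENTS)]
errors = [{j: 0.0 for j in ps} for ps in agent_preds]

history = [rng.randrange(AGENTS)]  # seed week
attendance = []
for week in range(WEEKS):
    forecasts = [p(history) for p in pool]
    going = 0
    for i in range(AGENTS):
        best = min(agent_preds[i], key=lambda j: errors[i][j])
        if forecasts[best] < THRESHOLD:
            going += 1
    attendance.append(going)
    for i in range(AGENTS):        # score each agent's predictors by their error
        for j in agent_preds[i]:
            errors[i][j] += abs(forecasts[j] - going)
    history.append(going)
```

In runs like this the week-to-week attendance typically jumps around irregularly, with no predictor staying on top for long, which is the qualitative behavior Arthur reported.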

A few years after Arthur first formulated the El Farol problem, engineers Ann M. Bell and William A. Sethares took a different approach to solving it. Arthur had assumed that the would-be bargoers would adopt diverse strategies in trying to anticipate the crowd’s behavior. Bell and Sethares’s bargoers, though, all followed the same strategy: if their recent experiences at the bar had been good, they went. If their recent experiences had been bad, they didn’t.

Bell and Sethares’s bargoers were therefore much less sophisticated than Arthur’s. They didn’t worry much about what the other bargoers might be thinking, and they did not know—as Arthur’s bargoers did—how many people were at El Farol on the nights when they didn’t show up. All they really knew was whether they’d recently enjoyed themselves at El Farol or not. If they’d had a good time, they wanted to go back. If they’d had a bad time, they didn’t. You might say, in fact, that they weren’t worrying about coordinating their behavior with the other bargoers at all. They were just relying on their feelings about El Farol.

Unsophisticated or not, this group of bargoers produced a different solution to the problem than Arthur’s bargoers did. After a certain amount of time had passed—giving each bargoer the experience he needed to decide whether to go back to El Farol—the group’s weekly attendance settled in at just below 60 percent of the bar’s capacity, just a little bit worse than that ideal central planner would have done. In looking only to their own experience, and not worrying about what everyone else was going to do, the bargoers came up with a collectively intelligent answer, which suggests that even when it comes to coordination problems, independent thinking may be valuable.
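The experience-only rule is even simpler to sketch. The toy version below is a reconstruction under assumed parameters, not Bell and Sethares's actual model: each agent keeps a single inclination score, a good night nudges it up, a crowded night knocks it down, and agents who stay home drift slowly back toward trying again:

```python
import random

AGENTS, THRESHOLD, WEEKS = 100, 60, 250
GOOD, BAD, CURIOSITY = 0.05, -0.05, 0.005   # illustrative step sizes
rng = random.Random(2)

# Each agent's inclination to go; positive means "go this week"
score = [rng.uniform(-0.5, 0.5) for _ in range(AGENTS)]

attendance = []
for week in range(WEEKS):
    goers = {i for i in range(AGENTS) if score[i] > 0}
    crowded = len(goers) > THRESHOLD
    for i in range(AGENTS):
        if i in goers:
            score[i] += BAD if crowded else GOOD  # learn only from own experience
        else:
            score[i] += CURIOSITY                 # slowly tempted to try again
    attendance.append(len(goers))
```

With these assumed parameters, attendance tends to settle close to the threshold after an initial transient, and the population splits into high-score regulars and occasional visitors who get knocked back whenever they show up on a crowded night, roughly the division the text describes.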

There was, though, a catch to the experiment. The reason the group’s weekly attendance was so stable was that the group quickly divided itself into people who were regulars at El Farol and people who went only rarely. In other words, El Farol started to look a lot like Cheers. Now, this wasn’t a bad solution. In fact, from a utilitarian perspective (assuming everyone derived equal pleasure from going to the bar on any given night), it was a perfectly good one. More than half the people got to go to El Farol nearly every week, and they had a good time while they were there (since the bar was only rarely crowded). And yet it’d be hard to say that it was an ideal solution, since a sizable chunk of the group rarely went to the bar and usually had a bad time when they did.

The truth is that it’s not really obvious (at least not to me) which solution—Arthur’s or Sethares and Bell’s—is better, though both of them seem surprisingly good. This is the nature of coordination problems: they are very hard to solve, and coming up with any good answer is a triumph. When what people want to do depends on what everyone else wants to do, every decision affects every other decision, and there is no outside reference point that can stop the self-reflexive spiral. When Francis Galton’s fairgoers made their guesses about the ox’s weight, they were trying to evaluate a reality that existed outside the group. When Arthur’s computer agents made their guesses about El Farol, though, they were trying to evaluate a reality that their own decisions would help construct. Given those circumstances, getting even the average attendance right seems miraculous.

Chapter Five, Part I

No one has ever paid more attention to the streets and sidewalks of New York City than William H. Whyte. In 1969, Whyte—the author of the sociological classic The Organization Man—got a grant to run what came to be known as the Street Life Project, and spent much of the next sixteen years simply watching what New Yorkers did as they moved through the city. Using time-lapse cameras and notebooks, Whyte and his group of young research assistants compiled a remarkable archive of material that helped explain how people used parks, how they walked on busy sidewalks, and how they handled heavy traffic. Whyte’s work, which was eventually published in his book City, was full of fascinating ideas about architecture, urban design, and the importance to a city of keeping street life vibrant. It was also a paean to the urban pedestrian. “The pedestrian is a social being,” Whyte wrote. “He is also a transportation unit, and a marvelously complex and efficient one.” Pedestrians, Whyte showed, were able, even on crowded sidewalks, to move surprisingly fast without colliding with their neighbors; in fact, they were often at their best when the crowds were at their biggest. “The good pedestrian,” Whyte wrote, “usually walks slightly to one side, so that he is looking over the shoulder of the person ahead. In this position he has the maximum choice and the person ahead is in a sense running interference for him.”

New Yorkers mastered arts like “the simple pass,” which involved slowing ever so slightly in order to avoid a collision with an oncoming pedestrian. They platooned at crosswalks as a protection against traffic. In general, Whyte wrote, “They walk fast and they walk adroitly. They give and they take, at once aggressive and accommodating. With the subtlest of motions they signal their intentions to one another.” The result was that “At eye level, the scene comes alive with movement and color—people walking quickly, walking slowly, skipping up steps, weaving in and out in crossing patterns, accelerating and retarding to match the moves of others. There is a beauty that is beguiling to watch.”

What Whyte saw—and made us see—was the beauty of a well-coordinated crowd, in which lots of small, subtle adjustments in pace and stride and direction add up to a relatively smooth and efficient flow. Pedestrians are constantly anticipating each other’s behavior. No one tells them where or when or how to walk. Instead, they all decide for themselves what they’ll do based on their best guess of what everyone else will do. And somehow it usually works out well. There is a kind of collective genius at work here.

It is, though, a different kind of genius from the one represented by the NFL point spread or Google. The problem that a crowd of pedestrians is “solving” is fundamentally different from a problem like “Who will win the Giants—Rams game, and by how much?” The pedestrian problem is an example of what are usually called coordination problems. Coordination problems are ubiquitous in everyday life. What time should you leave for work? Where do we want to eat tonight? How do we meet our friends? How do we allocate seats on the subway? These are all coordination problems. So, too, are many of the fundamental questions that any economic system has to answer: Who will work where? How much should my factory produce? How can we make sure that people get the goods and services they want? What defines a coordination problem is that to solve it, a person has to think not only about what he believes the right answer is but also about what other people think the right answer is. And that’s because what each person does affects and depends on what everyone else will do, and vice versa.

One obvious way of coordinating people’s actions is via authority or coercion. An army goose-stepping in a parade is, after all, very well-coordinated. So, too, are the movements of workers on an old-fashioned assembly line. But in a liberal society, authority (which includes laws or formal rules) has only limited reach over the dealings of private citizens, and that seems to be how most Americans like it. As a result many coordination problems require bottom-up, not top-down, solutions. And at the heart of all of them is the same question: How can people voluntarily—that is, without anyone telling them what to do—make their actions fit together in an efficient and orderly way?

It’s a question without an easy answer, though this does not mean that no answer exists. What is true is that coordination problems are less amenable to clear, definitive solutions than are many of the problems we’ve already considered. Answers, when they can be found, are often good rather than optimal. And those answers also often involve institutions, norms, and history, factors that both shape a crowd’s behavior and are also shaped by it. When it comes to coordination problems, independent decision making (that is, decision making which doesn’t take the opinions of others into account) is pointless—since what I’m willing to do depends on what I think you’re going to do, and vice versa. As a result, there’s no guarantee that groups will come up with smart solutions. What’s striking, though, is just how often they do.

The Man on the Spot

Nobel Laureate Friedrich von Hayek was a strong advocate of the importance of tacit knowledge. This is best explained by this short extract from his 1945 article The Use of Knowledge in Society:
This is, perhaps, also the point where I should briefly mention the fact that the sort of knowledge with which I have been concerned is knowledge of the kind which by its nature cannot enter into statistics and therefore cannot be conveyed to any central authority in statistical form. The statistics which such a central authority would have to use would have to be arrived at precisely by abstracting from minor differences between the things, by lumping together, as resources of one kind, items which differ as regards location, quality, and other particulars, in a way which may be very significant for the specific decision. It follows from this that central planning based on statistical information by its nature cannot take direct account of these circumstances of time and place and that the central planner will have to find some way or other in which the decisions depending on them can be left to the "man on the spot."
Times have changed since Hayek wrote this, and most of the centrally planned economies of which he speaks have failed. However, in a sense we have swapped one kind of planned coordination for another. In the middle of the 20th century whole countries were organised using central planning. Now we have corporations acting as coordination mechanisms, with most of their activity controlled and planned by central management. Of course, central management is likely to fall foul of the same knowledge problem that central planners faced.
Central planning has failed as a way to organise economies, but a central structure has not stopped corporations from getting larger and larger. If Wal-mart were a country it would be in the top 30 economies in the world ranked by GDP, ahead of countries such as Austria, Argentina and Indonesia, and it would rank as China's 8th largest trading partner. Central planning didn't work out too well for countries but doesn't seem to be doing too badly for companies. In 2006 Wal-mart reported profits of $12 billion on sales of $350 billion.

How has Wal-mart been so successful while avoiding the problems of statistics that "lump together items which differ as regards location, quality, and other particulars, in a way which may be very significant for the specific decision"? A recent post by journalist Charles Platt on the boingboing.net blog provides a great insight.

Platt took a minimum wage job at Wal-mart to see if the commonly held beliefs about the company were true. He found that they were largely untrue and that the reputation was unwarranted. That is not our focus. His piece gives us this gem on tacit knowledge and "the man on the spot".
My standard equipment included a handheld bar-code scanner which revealed the in-store stock and nearest warehouse stock of every item on the shelves, and its profit margin. At the branch where I worked, all the lowest-level employees were allowed this information and were encouraged to make individual decisions about inventory. One of the secrets to Wal-Mart’s success is that it delegates many judgment calls to the sales-floor level, where employees know first-hand what sells, what doesn’t, and (most important) what customers are asking for.
That sums it up perfectly.

Ask the Audience



A good analysis of the Ask the Audience lifeline from Who Wants to be a Millionaire is available here, while on the other hand we have a slight sceptic here.

Chapter Four, Part I

In April 1946, at a forum organized by the New York Herald-Tribune, General Wild Bill Donovan gave a speech entitled “Our Foreign Policy Needs a Central Intelligence Agency.” During World War II, Donovan had been the head of the Office of Strategic Services, the United States’ chief wartime intelligence organization, and once the war ended he became a loud public advocate for the creation of a more powerful peacetime version of the OSS. Before the war, the United States had divided intelligence-gathering responsibilities among the different military services. But the failure of any of those services to anticipate the attack on Pearl Harbor—despite what seemed, in retrospect, to be ample evidence that a major Japanese strike was in the works—had pointed up the system’s limitations and suggested the need for a more comprehensive approach to intelligence gathering. So, too, did the prospect of conflict with the Soviet Union, which even in 1946 loomed as a real possibility, and the advent of new technologies—Donovan cited “the rocket, the atomic bomb, bacteriological warfare”—that made America’s borders seem far from impregnable. In his April speech, Donovan hit on all of these themes, arguing that what the United States needed was “a centralized, impartial, independent agency” to take charge of all of the country’s intelligence operations.

Donovan’s public speaking didn’t do much for his own career, since his sharp criticisms alienated the intelligence community and probably doomed his chances of returning to government service. Nonetheless, in 1947, Congress passed the National Security Act and created the Central Intelligence Agency. As historian Michael Warner has put it, the goal of the law was to “implement the principles of unity of command and unity of intelligence.” Fragmentation and division had left the United States vulnerable to surprise attack. Centralization and unity would keep it safe in the future.

In fact, though, the centralization of intelligence never happened. Although the CIA was initially the key player in the postwar period, as time passed the intelligence community became more fragmented than ever, divided into a kind of alphabet soup of agencies with overlapping responsibilities and missions, including not just the CIA but also the National Security Agency, the National Imagery and Mapping Agency, the National Reconnaissance Office, the Defense Intelligence Agency, and the intelligence arms of each of the three major military services. In theory, the director of the CIA was in charge of the U.S. intelligence community as a whole, but in practice he exercised very little supervision over these agencies, and most of the money for intelligence operations came from the Department of Defense. In addition, the FBI—which was responsible for domestic law enforcement—operated almost completely outside the orbit of this intelligence community, even though information about foreign terrorists operating inside the United States would obviously be of interest to the CIA. In place of the centralized repository of information and analysis that Donovan had envisioned, the U.S. intelligence community evolved into a collection of virtually autonomous, decentralized groups, all working toward the same broad goal—keeping the United States safe from attack—but in very different ways.

Until September 11, 2001, the flaws of this system were overlooked. The intelligence community had failed to anticipate the 1993 bombing of the World Trade Center, the 1998 bombings of the U.S. embassies in Kenya and Tanzania, and the 2000 attack on the USS Cole in Yemen. But not until September 11 did the failure of U.S. intelligence gathering come to seem undeniable. The Congressional Joint Inquiry into the attacks found that the U.S. intelligence community had “failed to capitalize on both the individual and collective significance of available information that appears relevant to the events of September 11.” Intelligence agencies “missed opportunities to disrupt the September 11th plot,” and allowed information to pass by unnoticed that, if appreciated, would have “greatly enhanced its chances of uncovering and preventing” the attacks. It was, in other words, Pearl Harbor all over again.

The congressional inquiry was unquestionably a classic example of Monday-morning quarterbacking. Given the sheer volume of information that intelligence agencies process, it’s hardly surprising that a retrospective look at the data they had on hand at the time of the attack would uncover material that seemed relevant to what happened on September 11. That doesn’t necessarily mean the agencies could have been realistically expected to recognize the relevance of the material beforehand. In her classic account of the intelligence failures at Pearl Harbor, Warning and Decision, Roberta Wohlstetter shows how many signals there were of an impending Japanese attack, but suggests that it was still unreasonable to expect human beings to have picked the right signals out from “the buzzing and blooming confusion” that accompanied them. Strategic surprise, Wohlstetter suggests, is an intractable problem to solve. And if a massive Japanese naval attack comprising hundreds of planes and ships and thousands of men was difficult to foresee, how much harder would it have been to predict a terrorist attack involving just nineteen men?

And yet one has to wonder. Given the almost complete failure of the intelligence community to anticipate any of the four major terrorist attacks from 1993 through 2001, is it not possible that organizing the intelligence community differently would have, at the very least, improved its chances of recognizing what the Joint Inquiry called “the collective significance” of the data it had on hand? Predicting the actual attacks on the World Trade Center and the Pentagon may have been impossible. But coming up with a reasonable, concrete estimate of the likelihood of such an attack may not have been.

That, at least, was the conclusion that Congress reached: better processes would have produced a better result. In particular, they stressed the lack of “information sharing” between the various agencies. Instead of producing a coherent picture of the threats the United States faced, the various agencies produced a lot of localized snapshots. The sharpest critic of the agencies’ work, Senator Richard Shelby, argued that the FBI in particular was crippled by its “decentralized organizational structure,” which “left information-holdings fragmented into largely independent fiefdoms.” And the intelligence community as a whole was hurt by a failure to put the right information in the hands of the right people. What needed to be done, Shelby suggested, was to abolish the fiefdoms and return to the idea for which Bill Donovan had argued half a century ago. One agency, which could stand “above and independent from the disputatious bureaucracies,” needed to be put in charge of U.S. intelligence. Decentralization had led the United States astray. Centralization would put things right.

Chapter Four, Part II

In challenging the virtues of decentralization, Shelby was challenging an idea that in the past fifteen years has seized the imagination of businessmen, academics, scientists, and technologists everywhere. In business, management theories like reengineering advocated replacing supervisors and managers with self-managed teams that were responsible for solving most problems on their own, while more utopian thinkers deemed the corporation itself outmoded. In physics and biology scientists paid increasing attention to self-organizing, decentralized systems—like ant colonies or beehives—which, even without a center, proved robust and adaptable. And social scientists placed renewed emphasis on the importance of social networks, which allow people to connect and coordinate with each other without a single person being in charge. Most important, of course, was the rise of the Internet—in some respects, the most visible decentralized system in the world—and of corollary technologies like peer-to-peer file sharing (exemplified by Napster), which offered a clear demonstration of the possibilities (economic, organizational, and more) that decentralization had to offer.

The idea of the wisdom of crowds also takes decentralization as a given and a good, since it implies that if you set a crowd of self- interested, independent people to work in a decentralized way on the same problem, instead of trying to direct their efforts from the top down, their collective solution is likely to be better than any other solution you could come up with. American intelligence agents and analysts were self-interested, independent people working in a decentralized way on roughly the same problem (keeping the country safe). So what went wrong? Why did those agents not produce a better forecast? Was decentralization really the problem?


BEFORE WE ANSWER THAT question, we need to answer a simpler one first: What do we mean by “decentralization,” anyway? It’s a capacious term, and in the past few years it’s been tossed around more freely than ever. Flocks of birds, free-market economies, cities, peer-to-peer computer networks: these are all considered examples of decentralization. Yet so, too, in other contexts, are the American public-school system and the modern corporation. These systems are dramatically different from each other, but they do have this in common: in each, power does not fully reside in one central location, and many of the important decisions are made by individuals based on their own local and specific knowledge rather than by an omniscient or farseeing planner.

In terms of decision making and problem solving, there are a couple of things about decentralization that really matter. It fosters, and in turn is fed by, specialization—of labor, interest, attention, or what have you. Specialization, as we’ve known since Adam Smith, tends to make people more productive and efficient. And it increases the scope and the diversity of the opinions and information in the system (even if each individual person’s interests become more narrow).

Decentralization is also crucial to what the economist Friedrich Hayek described as tacit knowledge. Tacit knowledge is knowledge that can’t be easily summarized or conveyed to others, because it is specific to a particular place or job or experience, but it is nonetheless tremendously valuable. (In fact, figuring out how to take advantage of individuals’ tacit knowledge is a central challenge for any group or organization.) Connected with this is the assumption that is at the heart of decentralization, namely that the closer a person is to a problem, the more likely he or she is to have a good solution to it. This practice dates back to ancient Athens, where decisions about local festivals were left up to the demes, as opposed to the Athenian assembly, and regional magistrates handled most nonserious crimes. It can also be seen in Exodus, where Moses’ father-in-law counseled him to judge only in “great matter[s]” and to leave all other decisions to local rulers.

Decentralization’s great strength is that it encourages independence and specialization on the one hand while still allowing people to coordinate their activities and solve difficult problems on the other. Decentralization’s great weakness is that there’s no guarantee that valuable information which is uncovered in one part of the system will find its way through the rest of the system. Sometimes valuable information never gets disseminated, making it less useful than it otherwise would be. What you’d like is a way for individuals to specialize and to acquire local knowledge—which increases the total amount of information available in the system— while also being able to aggregate that local knowledge and private information into a collective whole, much as Google relies on the local knowledge of millions of Web-page operators to make Google searches ever-smarter and ever-quicker. To accomplish this, any “crowd”—whether it be a market, a corporation, or an intelligence agency—needs to find the right balance between the two imperatives: making individual knowledge globally and collectively useful (as we know it can be), while still allowing it to remain resolutely specific and local.

Chapter Four, Part III

In 1991, Finnish hacker Linus Torvalds created his own version of the Unix operating system, dubbing it Linux. He then released the source code he had written to the public, so everyone out there—well, everyone who understood computer code—could see what he had done. More important, he attached a note that read, “If your efforts are freely distributable, I’d like to hear from you, so I can add them to the system.” It was a propitious decision. As one history of Linux points out: “Of the first ten people to download Linux, five sent back bug fixes, code improvements, and new features.” Over time, this improvement process became institutionalized, as thousands of programmers, working for free, contributed thousands of minor and major fixes to the operating system, making Linux ever-more reliable and robust.

Unlike Windows, which is owned by Microsoft and worked on only by Microsoft employees, Linux is owned by no one. When a problem arises with the way Linux works, it only gets fixed if someone, on his own, offers a good solution. There are no bosses ordering people around, no organizational charts dictating people’s responsibilities. Instead, people work on what they’re interested in
and ignore the rest. This seems like—in fact, it is—a rather haphazard way to solve problems. But so far at least, it has been remarkably effective, making Linux the single most important challenger to Microsoft.

Linux is clearly a decentralized system, since it has no formal organization and its contributors come from all over the world. What decentralization offers Linux is diversity. In the traditional corporate model, top management hires the best employees it can, pays them to work full-time, generally gives them some direction about what problems to work on, and hopes for the best. That is not a bad model. It has the great virtue of making it easy to mobilize people to work on a particular problem, and it also allows companies to get very good at doing the things they know how to do. But it also necessarily limits the number of possible solutions that a corporation can come up with, both because of mathematical reality (a company has only so many workers, and they have only so much time) and because of the reality of organizational and bureaucratic politics. Linux, practically speaking, doesn’t worry much about either. Surprisingly, there seems to be a huge supply of programmers willing to contribute their efforts to make the system better. That guarantees that the field of possible solutions will be immense. There’s enough variety among programmers, and there are enough programmers, that no matter what the bug is, someone is going to come up with a fix for it. And there’s enough diversity that someone will recognize bugs when they appear. In the words of open-source guru Eric Raymond, “Given enough eyeballs, all bugs are shallow.”

In the way it operates, in fact, Linux is not all that different from a market, as we saw in Chapter 2 on diversity. Like a bee colony, it sends out lots of foragers and assumes that one of them will find the best route to the flower fields. This is, without a doubt, less efficient than simply trying to define the best route to the field or even picking the smartest forager and letting him go. After all, if hundreds or thousands of programmers are spending their time trying to come up with a solution that only a few of them are going to find, that’s many hours wasted that could be spent doing something else. And yet, just as the free market’s ability to generate lots of alternatives and then winnow them down is central to its continued growth, Linux’s seeming wastefulness is a kind of strength (a kind of strength that for-profit companies cannot, fortunately or unfortunately, rely on). You can let a thousand flowers bloom and then pick the one that smells the sweetest.

Chapter Four, Part IV

So who picks the sweetest-smelling one? Ideally, the crowd would. But here’s where striking a balance between the local and the global is essential: a decentralized system can only produce genuinely intelligent results if there’s a means of aggregating the information of everyone in the system. Without such a means, there’s no reason to think that decentralization will produce a smart result. In the case of the experiment with which this book opened, that aggregating mechanism was just Francis Galton counting the votes. In the case of the free market, that aggregating mechanism is obviously price. The price of a good reflects, imperfectly but effectively, the actions of buyers and sellers everywhere, and provides the necessary incentive to push the economy where the buyers and sellers want it to go. The price of a stock reflects, imperfectly but effectively, investors’ judgment of how much a company is worth. In the case of Linux, it is the small number of coders, including Torvalds himself, who vet every potential change to the operating-system source code. There are would-be Linux programmers all over the world, but eventually all roads lead to Linus.

Now, it’s not clear that the decision about what goes into Linux’s code needs to be or should be in the hands of such a small group of people. If my argument in this book is right, a large group of programmers, even if they weren’t as skilled as Torvalds and his
lieutenants, would do an excellent job of evaluating which code was worth keeping. But set that aside. The important point here is that if the decision were not being made by someone, Linux itself would not be as successful as it is. If a group of autonomous individuals tries to solve a problem without any means of putting their judgments together, then the best solution they can hope for is the solution that the smartest person in the group produces, and there’s no guarantee they’ll get that. If that same group, though, has a means of aggregating all those different opinions, the group’s collective solution may well be smarter than even the smartest person’s solution. Aggregation—which could be seen as a curious form of centralization—is therefore paradoxically important to the success of decentralization. If this seems dubious, it may be because when we hear centralization we think “central planners,” as in the old Soviet Union, and imagine a small group of men—or perhaps just a single man—deciding how many shoes will be made today. But in fact there’s no reason to confuse the two. It’s possible, and desirable, to have collective decisions made by decentralized agents.

Understanding when decentralization is a recipe for collective wisdom matters because in recent years the fetish for decentralization has sometimes made it seem like the ideal solution for every problem. Obviously, given the premise of this book, I think decentralized ways of organizing human effort are, more often than not, likely to produce better results than centralized ways. But decentralization works well under some conditions and not very well under others. In the past decade, it’s been easy to believe that if a system is decentralized, then it must work well. But all you need to do is look at a traffic jam—or, for that matter, at the U.S. intelligence community—to recognize that getting rid of a central authority is not a panacea. Similarly, people have become enamored of the idea that decentralization is somehow natural or automatic, perhaps because so many of our pictures of what decentralization looks like come from biology. Ants, after all, don’t need to do anything special to form an ant colony. Forming ant colonies is inherent in their biology. The same is not, however, true of human beings. It’s hard to make real decentralization work, and hard to keep it going, and easy for decentralization to become disorganization.

A good example of this was the performance of the Iraqi military during the U.S.–Iraq war in 2003. In the early days of the war, when Iraqi fedayeen paramilitaries had surprised U.S. and British troops with the intensity of their resistance, the fedayeen were held up as an example of a successful decentralized group, which was able to flourish in the absence of any top-down control. In fact, one newspaper columnist compared the fedayeen to ants in an ant colony, finding their way to a “good” solution while communicating only with the soldiers right next to them. But after a few days, the idea that the fedayeen were mounting a meaningful, organized resistance vanished, as it became clear that their attacks were little more than random, uncoordinated assaults that had no connection to what was happening elsewhere in the country. As one British commander remarked, it was all tactics and no strategy. To put it differently, the individual actions of the fedayeen fighters never added up to anything bigger, precisely because there was no method of aggregating their local wisdom. The fedayeen were much like ants—following local rules. But where ants who follow their local rules actually end up fostering the well-being of the colony, soldiers who followed their local rules ended up dead. (It may be, though, that once the actual war was over, and the conflict shifted to a clash between the occupying U.S. military and guerrillas using hit-and-run terrorist tactics, the absence of aggregation became less important, since the goal was not to defeat the United States in battle, but simply to inflict enough damage to make staying seem no longer worth it. In that context, tactics may have been enough.)

The irony is that the true decentralized military in the U.S.–Iraq war was the U.S. Army. American troops have always been given significantly more initiative in the field than other armies, as the military has run itself on the “local knowledge is good” theory. But in recent years, the army has dramatically reinvented itself. Today, local commanders have considerably greater latitude to act, and sophisticated communications systems mean that collectively wise strategies can emerge from local tactics. Commanders at the top are not isolated from what’s happening in the field, and their decisions will inevitably reflect, in a deep sense, the local knowledge that field commanders are acquiring. In the case of the invasion of Baghdad, for instance, the U.S. strategy adapted quickly to the reality of Iraq’s lack of strength, once local commanders reported little or no resistance. This is not to say, as some have suggested, that the military has become a true bottom-up organization. The chain of command remains essential to the way the military works, and all battlefield action takes place within a framework defined by what’s known as the Commander’s Intent, which essentially lays out a campaign’s objectives. But increasingly, successful campaigns may depend as much on the fast aggregation of information from the field as on preexisting, top-down strategies.

Chapter Four, Part V

When it comes to the problems of the U.S. intelligence community before September 11, the problem was not decentralization. The problem was the kind of decentralization that the intelligence community was practicing. On the face of it, the division of labor between the different agencies makes a good deal of sense. Specialization allows for a more fine-grained appreciation of information and greater expertise in analysis. And everything we know about decision making suggests that the more diverse the available perspectives on a problem, the more likely it is that the final decision will be smart. Acting Defense Intelligence Agency director Lowell Jacoby suggested precisely this in written testimony before Congress, writing, “Information considered irrelevant noise by one set of analysts may provide critical clues or reveal significant relationships when subjected to analytic scrutiny by another.”

What was missing in the intelligence community, though, was any real means of aggregating not just information but also judgments. In other words, there was no mechanism to tap into the collective wisdom of National Security Agency nerds, CIA spooks, and FBI agents. There was decentralization but no aggregation, and therefore no organization. Richard Shelby’s solution to the problem—creating a truly central intelligence agency—would solve the organization problem, and would make it easier for at least one agency to be in charge of all the information. But it would also forgo all the benefits—diversity, local knowledge, independence—that decentralization brings. Shelby was right that information needed to be shared. But he assumed that someone—or a small group of someones—needed to be at the center, sifting through the information, figuring out what was important and what was not. But everything we know about cognition suggests that a small group of people, no matter how intelligent, simply will not be smarter than the larger group. And the best tool for appreciating the collective significance of the information that the intelligence community had gathered was the collective wisdom of the intelligence community. Centralization is not the answer. But aggregation is.

There were and are a number of paths the intelligence community could follow to aggregate information without adopting a traditional top-down organization. To begin with, simply linking the computer databases of the various agencies could facilitate the flow of information while still allowing the agencies to retain their autonomy. Remarkably, two years after September 11, the government still did not have a single unified “watch list” that drew on data from all parts of the intelligence community. In some sense, quite simple, almost mechanical steps would have allowed the intelligence community to be significantly smarter.

Other, more far-reaching possibilities were available, too, and in fact some within the intelligence community tried to investigate them. The most important of these, arguably, was the FutureMAP program, an abortive plan to set up decision markets—much like those of the IEM—that would have, in theory, allowed analysts from different agencies and bureaucracies to buy and sell futures contracts based on their expectations of what might happen in the Middle East and elsewhere. FutureMAP, which got its funding from the Defense Advanced Research Projects Agency (DARPA), had two elements. The first was a set of internal markets, which would have been quite small (perhaps limited to twenty or thirty people), and open only to intelligence analysts and perhaps a small number of outside experts. These markets might actually have tried to predict the probability of specific events (like, presumably, terrorist attacks), since the traders in them would have been able to rely on, among other things, classified information and hard intelligence data in reaching their conclusions. The hope was that an internal market would help circumvent the internal politics and bureaucratic wrangling that have indisputably had a negative effect on American intelligence gathering, in no small part by shaping the kinds of conclusions analysts feel comfortable reaching. In theory, at least, an internal market would have placed a premium not on keeping one’s boss or one’s agency happy (or on satisfying the White House) but rather on offering the most accurate forecast. And since it would have been open to people from different agencies, it might have offered the kind of collective judgment that the intelligence community has found difficult to make in the past decade.
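The mechanics of such a decision market can be sketched with Hanson's logarithmic market scoring rule (LMSR), a standard design for thin markets with few traders, like the internal markets described above. Whether FutureMAP itself would have used LMSR is an assumption here, and the liquidity parameter `b` and trade sizes below are purely illustrative:

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function over the outstanding share quantities q."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, i, b=100.0):
    """Instantaneous price of outcome i: the market's implied probability."""
    denom = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / denom

def buy(q, i, amount, b=100.0):
    """Cost to buy `amount` shares of outcome i; returns (cost, new quantities)."""
    new_q = list(q)
    new_q[i] += amount
    return lmsr_cost(new_q, b) - lmsr_cost(q, b), new_q

# A two-outcome market: "event happens" vs. "event doesn't happen".
q = [0.0, 0.0]
print(lmsr_price(q, 0))   # 0.5 -- no trades yet, so even odds
cost, q = buy(q, 0, 50)   # an analyst who thinks the event is likely buys "yes"
print(lmsr_price(q, 0))   # price rises above 0.5, encoding that analyst's belief
```

The point of the design is the one the paragraph makes: the only way for a trader to profit is to move the price toward what they actually believe, so the price itself becomes the aggregated forecast.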

The second part of FutureMAP was the so-called Policy Analysis Market (PAM), which in the summer of 2003 became the object of a firestorm of criticism from appalled politicians. The idea behind PAM was a simple one (and similar to the idea behind the internal markets): just as the IEM does a good job of forecasting election results and other markets seem to do a good job of forecasting the future, a market centered on the Middle East might provide intelligence that otherwise would be missed.

What distinguished PAM from the internal market was that it was going to be open to the public, and that it seemed to offer the possibility of ordinary people profiting from terrible things happening. Senators Ron Wyden and Byron Dorgan, who were the leaders of the effort to kill PAM, denounced it as “harebrained,” “offensive,” and “useless.” The public, at least those who heard about PAM before it was unceremoniously killed, seemed equally appalled.

Given the thesis of this book, it will not surprise you to learn that I think PAM was potentially a very good idea. The fact that the market was going to be open to the public did not mean that its forecasts would be more inaccurate. On the contrary, we’ve seen that even when traders are not necessarily experts, their collective judgment is often remarkably good. More to the point, opening the market to the public was a way of getting people whom the American intelligence community might not normally hear from— whether because of patriotism, fear, or resentment—to offer up information they might have about conditions in the Middle East.

From the perspective of Shelby’s attack on the intelligence community, PAM, like the internal markets, would have helped break down the institutional barriers that keep information from being aggregated in a single place. Again, since traders in a market have no incentive other than making the right prediction—that is, there are no bureaucratic or political factors influencing their decisions—and since they have that incentive to be right, they are more likely to offer honest evaluations instead of tailoring their opinions to fit the political climate or satisfy institutional demands.

Senator Wyden dismissed PAM as a “fairy tale” and suggested that DARPA would be better off putting its money into “real world” intelligence. But the dichotomy was a false one. No one suggested replacing traditional intelligence gathering with a market. PAM was intended to be simply another way of collecting information. And in any case, if PAM had, in fact, been a “fairy tale,” we would have known it soon enough. Killing the project ensured only that we would have no idea whether decision markets might have something to add to our current intelligence efforts.

The hostility toward PAM, in any case, had little to do with how effective it would or would not be. The real problem with it, Wyden and Dorgan made clear, was that it was “offensive” and “morally wrong” to wager on potential catastrophes. Let’s admit there’s something viscerally ghoulish about betting on an assassination attempt. But let’s also admit that U.S. government analysts ask themselves every day the exact same questions that PAM traders would have been asking: How stable is the government of Jordan? How likely is it the House of Saud will fall? Who will be the head of the Palestinian Authority in 2005? If it isn’t immoral for the U.S. government to ask these questions, it’s hard to see how it’s immoral for people outside the U.S. government to ask them.

Nor should we have shied from the prospect of people profiting from predicting catastrophe. CIA analysts, after all, don’t volunteer their services. We pay them to predict catastrophes, as we pay informants for valuable information. Or consider our regular economy. The entire business of a life-insurance company is based on betting on when people are going to die (with a traditional life-insurance policy, the company is betting you’ll die later than you think you will, while with an annuity it’s betting you’ll die sooner). There may be something viscerally unappealing about this, but most of us understand that it’s necessary. This is, in some sense, what markets often do: harness amorality to improve the collective good. If the price of better intelligence was simply having our sensibilities bruised, that doesn’t seem like too high a price to have paid. And surely letting people wager on the future was less morally problematic than many of the things our intelligence agencies have done and continue to do to get information. If PAM would actually have made America’s national security stronger, it would have been morally wrong not to use it.

There were serious problems that the market would have had to overcome. Most notably, if the market was accurate, and the Department of Defense acted on its predictions to stop, say, a coup in Jordan, that action would make the traders’ predictions false and thereby destroy the incentives to make good predictions. A well-designed market would probably have to account for such U.S. interventions, presumably by making the wagers conditional on U.S. action (or, alternatively, traders would start to factor the possibility of U.S. action into their prices). But this would be a problem only if the market was in fact making good predictions. Had PAM ever become a fully liquid market, it would probably also have had the same problems other markets sometimes have, like bubbles and gaming. But it is not necessary to believe that markets work perfectly to believe that they work well.

More important, although most of the attention paid to PAM focused on the prospect of people betting on things like the assassination of Arafat, the vast majority of the “wagers” that PAM traders would have been making would have been on more mundane questions, such as the future economic growth of Jordan or how strong Syria’s military was. At its core, PAM was not meant to tell us what Hamas was going to do next week or to stop the next September 11. Instead, it was meant to give us a better sense of the economic health, the civil stability, and the military readiness of Middle Eastern nations, with an eye on what that might mean for U.S. interests in the region. That seems like something about which the aggregated judgment of policy analysts, would-be Middle Eastern experts, and businessmen and academics from the Middle East itself (the kind of people who would likely have been trading on PAM) would have had something valuable to say.

We may yet find out if they do, because in the fall of 2003, NetExchange, the company that had been responsible for setting up PAM, announced that in 2004, a new, revised Policy Analysis Market (this one without government involvement of any sort) would be opened to the public. NetExchange was careful to make clear that the goal of the market would not be to predict terrorist incidents but rather to forecast broader economic, social, and military trends in the region. So perhaps the promise of PAM will actually get tested against reality, instead of being dismissed out of hand. It also seems plausible, and even likely, that the U.S. intelligence community will eventually return to the idea of using internal prediction markets—limited to analysts and experts—as a means of aggregating dispersed pieces of information and turning them into coherent forecasts and policy recommendations. Perhaps that would mean that the CIA would be running what Senators Wyden and Dorgan scornfully called “a betting parlor.” But we know one thing about betting markets: they’re very good at predicting the future.

Chapter Three, Part V

What makes information cascades interesting is that they are a form of aggregating information, just like a voting system or a market. And the truth is that they don’t do a terrible job of aggregation. In classroom experiments, where cascades are easy to start and observe, cascading groups pick the better alternative about 80 percent of the time, which is better than any individual in the groups can do. The fundamental problem with cascades is that people’s choices are made sequentially, instead of all at once. There are good reasons for this—some people are more cautious than others, some are more willing to experiment, some have more money than others. But roughly speaking, all of the problems that cascades can cause are the result of the fact that some people make their decisions before others. If you want to improve an organization’s or an economy’s decision making, one of the best things you can do is make sure, as much as possible, that decisions are made simultaneously (or close to it) rather than one after the other.

An interesting proof of this can be found in one of those very classroom experiments I just mentioned. This one was devised by economists Angela Hung and Charles Plott, and it involved the time-honored technique of having students draw colored marbles from urns. In this case, there were two urns. Urn A contained twice as many light marbles as dark ones. Urn B contained twice as many dark marbles as light ones. At the beginning of the experiment, the people in charge chose one of the two urns from which, in sequence, each volunteer drew a marble. The question the participants in the experiment had to answer was: Which urn was being used? A correct answer earned them a couple of dollars.

To answer that question, the participants could rely on two sources of information. First, they had the marble they had drawn from the urn. If they drew a light marble, chances were that it was from Urn A. If they drew a dark marble, chances were that it was from Urn B. This was their “private information,” because no one was allowed to reveal what color marble they had drawn. All people revealed was their guess as to which urn was being used. This was the second source of information, and it created a potential conflict. If three people in front of you had guessed Urn B, but you drew a light marble, would you still guess Urn A even though the group thought otherwise?
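A quick Bayesian sketch shows why going along with the crowd here can be rational. Treating each earlier guess of Urn B as if it revealed a dark private draw is a simplifying assumption (in a full cascade, later guesses carry no new information), but it gives the flavor:

```python
from fractions import Fraction

# Urn compositions from the Hung-Plott setup described above:
# Urn A is 2/3 light marbles, Urn B is 2/3 dark marbles.
P_LIGHT = {"A": Fraction(2, 3), "B": Fraction(1, 3)}

def posterior_urn_a(n_light, n_dark):
    """P(urn A | observed draws), with a 50/50 prior over the two urns."""
    like_a = P_LIGHT["A"] ** n_light * (1 - P_LIGHT["A"]) ** n_dark
    like_b = P_LIGHT["B"] ** n_light * (1 - P_LIGHT["B"]) ** n_dark
    return like_a / (like_a + like_b)

# Your own draw is light; suppose the three Urn B guesses ahead of you
# each reflect a dark draw (the simplifying assumption noted above).
print(posterior_urn_a(n_light=1, n_dark=3))   # 1/5 -- Urn B is the rational guess
```

Even though your private marble points to Urn A (on its own it implies a 2/3 chance of A), the weight of the earlier guesses swamps it, which is exactly how a cascade gets going.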

Most of the time the student in that situation guessed Urn B, which was the rational thing to do. And in 78 percent of the trials, information cascades started. This was as expected. But then Hung and Plott changed the rules. The students still drew their marbles from the urn and made their decisions in order. But this time, instead of being paid for picking the correct answer, the students got paid based on whether the group’s collective answer—as decided by majority vote—was the right one. The students’ task shifted from trying to do the best they could individually to trying to make the group as smart as it could be.

This meant one thing had to happen: each student had to pay more attention to his private information and less attention to everyone else’s. (Collective decisions are only wise, remember, when they incorporate lots of different information.) People’s private information, though, was imperfect. So by paying attention to only his own information, a student was more likely to make a wrong guess. But the group was more likely to be collectively right. Encouraging people to make incorrect guesses actually made the group as a whole smarter. And when it was the group’s collective accuracy that counted, people listened to their private information. The group’s collective judgment became, not surprisingly, significantly more accurate than the judgments of the cascading groups.
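The effect Hung and Plott measured can be reproduced in a toy Monte Carlo simulation. The cascade rule below (follow your own signal unless the earlier guesses lean two or more toward one side) is a textbook simplification, and the group size, signal accuracy, and trial count are illustrative, not taken from the actual experiment:

```python
import random

def run_trial(n_agents=9, p_correct=2/3, rng=random):
    """One trial. Signals and guesses are coded relative to the true urn
    (True = correct); the cascade rule is symmetric, so this loses no
    generality. Returns whether each majority vote picked the true urn."""
    signals = [rng.random() < p_correct for _ in range(n_agents)]
    guesses = []
    for s in signals:
        lead = sum(1 if g else -1 for g in guesses)
        if lead >= 2:          # earlier guesses lean strongly one way:
            guesses.append(True)    # imitate, ignoring the private signal
        elif lead <= -2:
            guesses.append(False)
        else:
            guesses.append(s)       # otherwise follow the private signal
    cascade_right = sum(guesses) > n_agents / 2
    independent_right = sum(signals) > n_agents / 2
    return cascade_right, independent_right

rng = random.Random(0)
n = 10_000
casc_acc = indep_acc = 0
for _ in range(n):
    c, i = run_trial(rng=rng)
    casc_acc += c
    indep_acc += i
print(casc_acc / n, indep_acc / n)   # the independent majority is more accurate
```

Running this, the majority vote over private signals beats the majority vote over sequential, imitative guesses, mirroring the experiment's finding that the group is smarter when each member leans on his own information.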

Effectively what Hung and Plott did in their experiment was remove (or at least reduce) the sequential element in the way people made decisions, by making previous choices less important to the decision makers. That’s obviously not something that an economy as a whole can do very easily—we don’t want companies to have to wait to launch products until the public at large has voted yea or nay. Organizations, on the other hand, clearly can and should have people offer their judgments simultaneously, rather than one after the other. On a deeper level, the success of the Hung and Plott experiment—which effectively forced the people in the group to make themselves independent—underscores the value and the difficulty of autonomy. One key to successful group decisions is getting people to pay much less attention to what everyone else is saying.

Chapter Three, Part IV

So should we just lock ourselves up in our rooms and stop paying attention to what others are doing? Not exactly (although it is true that we would make better collective decisions if we all stopped taking only our friends’ advice). Much of the time imitation works. At least in a society like America’s, where things generally work pretty well without much top-down control, taking your cues from everyone else’s behavior is an easy and useful rule of thumb. Instead of having to undertake complicated calculations before every action, we let others guide us. Take a couple of everyday examples from city life. On a cloudy day, if I’m unsure of whether or not to take an umbrella when I leave my apartment, the easiest solution—easier, even, than turning on the Weather Channel—is to pause a moment on the doorstep to see if the people on the street are carrying umbrellas. If most of them are, I do, too, and it’s the rare time when this tactic doesn’t work. Similarly, I live in Brooklyn, and I have a car, which I park on the street. Twice a week, I have to move the car by 11 AM because of street cleaning, and routinely, by 10:45 or so, every car on the street that’s being cleaned has been moved. Occasionally, though, I’ll come out of the house at 10:40 and find that all the cars are still on the street, and I’ll know that that day street cleaning has been suspended, and I won’t move my car. Now, it’s possible that every other driver on the street has kept close track of the days on which street cleaning will be suspended. But I suspect that most drivers are like me: piggybacking, as it were, on the wisdom of others.

In a sense, imitation is a kind of rational response to our own cognitive limits. Each person can't know everything. With imitation, people can specialize, and the benefits of their investment in uncovering information can be spread widely when others mimic them. Imitation also requires little top-down direction. The relevant information percolates quickly through the system, even in the absence of any central authority. And people's willingness to imitate is not, of course, unconditional. If I get a couple of tickets because of bad information, I'll soon make sure I know when I have to move my car. And although I don't think Milgram and his colleagues ever followed up with the people in their experiment who had stopped to look at the sky, one suspects that the next time they walked by a guy with his head craned upward, they didn't stop to see what he was looking at. In the long run, imitation has to be effective for people to keep doing it.

Mimicry is so central to the way we live that economist Herbert Simon speculated that humans were genetically predisposed to be imitation machines. And imitation seems to be a key to the transmission of valuable practices even among nonhumans. The most famous example is that of the macaque monkeys on the island of Koshima in Japan. In the early 1950s, a one-year-old female macaque named Imo somehow hit upon the idea of washing her sweet potatoes in a creek before eating them. Soon it was hard to find a Koshima macaque who wasn't careful to wash off her sweet potato before eating it. A few years later, Imo introduced another innovation. Researchers on the island occasionally gave the monkeys wheat (in addition to sweet potatoes). But the wheat was given to them on the beach, where it quickly became mixed with sand. Imo, though, realized that if you threw a handful of wheat and sand into the ocean, the sand would sink and the wheat would float. Again, within a few years most of her fellow macaques were hurling wheat and sand into the sea and reaping the benefits.

The Imo stories are interesting because they seem to be in stark contrast to the argument of this book. This was one special monkey who hit on the right answer and basically changed macaque “society.” How, then, was the crowd wise?

The wisdom was in the decision to imitate Imo. As I suggested in the last chapter, groups are better at deciding between possible solutions to a problem than they are at coming up with them. Invention may still be an individual enterprise (although, as we'll see, invention has an inescapably collective dimension), but
selecting among inventions is a collective one. Used well, imitation is a powerful tool for spreading good ideas fast—whether they be in culture, business, sports, or the art of wheat eating. At its best, you can see it as a way of speeding up the evolutionary process— the community can become more fit without the usual need for multiple generations of genetic winnowing. Scientists Robert Boyd and Peter J. Richerson have pioneered the study of the transmission of social norms, trying to understand how groups arrive at collectively beneficial conclusions. They’ve run a series of computerized simulations looking at the behavior of agents who are trying to discover which of two different behaviors is best suited to the environment they’re living in. In the simulation, each agent can try out a behavior for himself and see what happens, but he can also observe the behavior of someone else who’s already made a decision about which behavior is best. Boyd and Richerson found that under these circumstances, everyone benefits when a sizable percentage of the population imitates. But this is only true as long as people are willing to stop imitating and learn for themselves when the benefits of doing so become high enough. In other words, if people just keep following the lead of others regardless of what happens, the well-being of the group suffers. Intelligent imitation can help the group—by making it easier for good ideas to spread quickly—but slavish imitation hurts.
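The dynamic Boyd and Richerson describe can be sketched in a few lines of code. Everything below (the population size, the 70 percent accuracy of individual learning, the sequential structure) is my own illustrative choice, not their actual model:

```python
import random

def run_group(n=1000, p_imitate=0.5, accuracy=0.7, rng=None):
    """Toy simulation in the spirit of Boyd and Richerson's agent models.
    Behavior 1 is the one actually suited to the environment. Each agent
    in turn either learns individually (a private trial that identifies
    the better behavior with probability `accuracy`) or imitates a
    randomly chosen agent who has already decided."""
    rng = rng or random.Random()
    choices = []
    for _ in range(n):
        if choices and rng.random() < p_imitate:
            choices.append(rng.choice(choices))  # imitate an earlier agent
        else:
            # learn for yourself: a noisy but informative private trial
            choices.append(1 if rng.random() < accuracy else 0)
    return sum(choices) / n  # fraction adopting the better behavior

def wrong_rate(p_imitate, trials=300, seed=0):
    """How often the group ends up mostly on the worse behavior."""
    rng = random.Random(seed)
    return sum(run_group(p_imitate=p_imitate, rng=rng) < 0.5
               for _ in range(trials)) / trials

print("moderate imitation:", wrong_rate(0.3))
print("pure imitation:   ", wrong_rate(1.0))
```

With moderate imitation the group almost never goes wrong, because fresh individual learning keeps feeding accurate information into the pool. With pure imitation every agent ultimately inherits the very first agent's guess, so the whole group is wrong about 30 percent of the time: slavish imitation hurts.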

Distinguishing between the two kinds of imitation is, of course, not easy, since few people will admit that they’re mindlessly conforming or herding. But it does seem clear that intelligent imitation depends on a couple of things: first, an initially wide array of options and information; and second, the willingness of at least some people to put their own judgment ahead of the group’s, even when it’s not sensible to do so.

Do such people exist? Actually, they're a lot more common than you'd expect. One reason is that people are, in general, overconfident. They overestimate their ability, their level of knowledge, and their decision-making prowess. And people are more overconfident when facing difficult problems than when facing easy ones. This is not good for the overconfident decision makers themselves, since it means that they're more likely to choose badly. But it is good for society as a whole, because overconfident people are less likely to get sucked into a negative information cascade, and, in the right circumstances, are even able to break cascades. Remember that a cascade is kept going by people valuing public information more highly than their private information. Overconfident people don't do that. They tend to ignore public information and go on their gut. When they do so, they disrupt the signal that everyone else is getting. They make the public information seem less certain. And that encourages others to rely on themselves rather than just follow everyone else.
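A toy simulation makes the cascade-breaking mechanism concrete. The model below is my own sketch, not any study cited here: conforming agents copy the crowd whenever the last two choices agree, a minority of "overconfident" agents always follow their private signal, and the population starts inside a wrong cascade.

```python
import random

def herd_run(n=100, q=0.6, p_over=0.0, rng=None):
    """Toy cascade model (an illustrative sketch, not a cited experiment).
    Option A is correct; a private signal points to A with probability q.
    Conformists copy the crowd whenever the last two choices agree;
    overconfident agents always go on their gut. The run starts inside
    a wrong cascade: two B choices."""
    rng = rng or random.Random()
    choices = ['B', 'B']
    for _ in range(n):
        signal = 'A' if rng.random() < q else 'B'
        if rng.random() < p_over:
            choices.append(signal)        # ignore public information
        elif choices[-1] == choices[-2]:
            choices.append(choices[-1])   # cascade: copy the crowd
        else:
            choices.append(signal)        # mixed evidence: use own signal
    return choices[2:].count('A') / n     # fraction choosing the truth

def avg_correct(p_over, trials=200, seed=0):
    rng = random.Random(seed)
    return sum(herd_run(p_over=p_over, rng=rng) for _ in range(trials)) / trials

print("all conformists:   ", avg_correct(0.0))  # the wrong cascade never breaks
print("30% overconfident: ", avg_correct(0.3))
```

With no overconfident agents the group stays wrong forever: every choice confirms the cascade, so nobody's private signal ever surfaces. A minority who ignore the crowd break the run of identical choices, which frees the conformists behind them to use their own signals again.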

At the same time, even risk-averse people do not, for the most part, slavishly fall in line. For instance, in 1943 the sociologists Bryce Ryan and Neal Gross published a study of the way Iowa farmers adopted a new, more productive hybrid seed corn. In their study, which became the most influential study of innovation in history, Ryan and Gross found that most farmers didn’t investigate the corn independently as soon as they heard about it, even though there was good information available that showed it increased yields by 20 percent. They waited until other farmers had success with it and then followed their example. So that suggests that a cascade was at work. But in fact, even after witnessing the success of their neighbors, the farmers did not seed their entire fields with the hybrid corn. Instead, they set aside a small part of a field and tested the corn for themselves first. Only after they were personally satisfied with it did they start using the corn exclusively. And it took nine years from the time the first farmer planted his field with the new corn to the time half of the farmers in the region were using it, which does not suggest a rash decision-making process.

Similarly, in a fascinating study of how farmers in India decided whether or not to adopt new high-yielding-variety crop strains during the Green Revolution of the late 1960s, Kaivan Munshi shows that rice farmers and wheat farmers made their decisions about new crops in very different ways. In the wheat-growing regions Munshi looked at, land conditions were relatively uniform, and the performance of a crop did not vary much from farm to farm. So if you were a wheat farmer and you saw that the new seeds substantially improved your neighbor's crop, then you could be confident that it would improve your crop as well. As a result, wheat farmers paid a great deal of attention to their neighbors, and made decisions based on their performance. In rice-growing regions, on the other hand, land conditions varied considerably, and there were substantial differences in how crops did from farm to farm. So if you were a rice farmer, the fact that your neighbor was doing well (or poorly) with the new crop didn't tell you much about what would happen on your land. As a result, rice farmers' decisions were not that influenced by their neighbors. Instead, rice farmers experimented far more with the new crop on their own land before deciding to adopt it. What's telling, too, is that even the wheat farmers did not use the new strains of wheat until after they could see how the early adopters' new crops did.
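Munshi's contrast can be captured in a small model. All the numbers below are invented for illustration: the new seed has an unknown true quality shared across a region, each farm adds idiosyncratic noise, and we ask how often copying a successful neighbor actually pays off on your own land.

```python
import random

def neighbor_value(farm_sd, trials=5000, seed=0):
    """Toy model of the wheat-versus-rice contrast (all parameters are
    invented for illustration, not Munshi's data). Each trial: a new
    seed has an unknown regional quality, and each farm's realized gain
    is that quality plus farm-specific noise of size `farm_sd`."""
    rng = rng_for = random.Random(seed)
    wins = follows = 0
    for _ in range(trials):
        quality = rng.gauss(1.0, 2.0)              # unknown regional quality
        theirs = quality + rng.gauss(0, farm_sd)   # neighbor's observed gain
        mine = quality + rng.gauss(0, farm_sd)     # what you'd get on your land
        if theirs > 0:                             # neighbor did well: you adopt
            follows += 1
            if mine > 0:
                wins += 1
    return wins / follows  # how often copying a successful neighbor paid off

print("uniform land (wheat): ", neighbor_value(0.5))
print("variable land (rice): ", neighbor_value(4.0))
```

When land is uniform, imitating a successful neighbor almost always pays, so watching the early adopters is a sound strategy. When land is highly variable, copying fails often enough that testing the seed on a small plot of your own first, as the rice farmers did, is the sensible move.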

For farmers, choosing the right variety of corn or wheat is the most important decision they can make, so it’s perhaps not surprising that they would make those decisions on their own, rather than simply mimicking those who came before them. And that suggests that certain products or problems are more susceptible to cascades than others. For instance, fashion and style are obviously driven by cascades, which we call fads, because when it comes to fashion, what you like and what everyone else likes are clearly wrapped up with each other. I like to dress a certain way, but it’s hard to imagine that the way I like to dress is disconnected from the kind of impression I want to make, which in turn must have something to do with what other people like. The same might also be said, though less definitively, about cultural products (like TV shows) where part of why we watch the show is to talk about it with our friends, or even restaurants, since no one likes to eat in an empty restaurant. No one buys an iPod because other people have them—the way they might, in fact, go to a movie because other people are going—but many technology companies insist that information cascades (of the good kind, they would say) are crucial to their success, as early adopters spread the word of a new product’s quality to those who come after. The banal but key point I’m trying to make is that the more important the decision, the less likely a cascade is to take hold. And that’s obviously a good thing, since it means that the more important the decision, the more likely it is that the group’s collective verdict will be right.