What is the most common form of communication? A greeting! Think of how many times you ask someone “How are you?” even though you know that the only socially acceptable response is, “Good. How are you?” On Facebook, the poke has become the virtual equivalent of a greeting. And like a greeting, it can take on many meanings in many different contexts (see the zen of poke for more). For better or worse, poking–and now super poking–has revolutionized how millions of people greet each other. Watch this 3 min 1 sec video to find out my take on this revolution:
I am finally back in the US. After three back-to-back sleepless nights as a global data cruncher, I have the final results of the first-ever Europe-wide deliberative poll. I’ll have more to say about deliberative polls and this project, but for now, here is a short summary of what we did and what the results are.
What we did:
For the first time ever, a scientific microcosm of Europe was gathered in a single place, the European Parliament building in Brussels, to deliberate in 22 languages about key issues facing the future of the EU and its member states. The participants became dramatically more informed and changed their views. Participants from the 12 new member countries had different starting points in their opinions but generally changed their opinions more, growing closer in their views to those from the older member states. Over a long weekend, the participants deliberated about economic, social and foreign policy, reflecting on “Europe in the World.”
* Participants were more likely to support sacrifices to sustain pensions after deliberation than before
* They were less likely to support enlargement, a shift driven mostly by participants from new member states learning the arguments against enlargement from old member states
* In general, participants from new member states changed more, and changed towards the positions of old member states
* There were very significant knowledge gains
* Participants were more educated than non-participants and in general showed small but statistically significant differences in attitudes and other measures
Here’s a great press release that has a lot of juicy details:
On that site, you can also find video and other details.
There has also been a fair amount of press. Here are two examples:
I claim that iTunes could earn over 40% more revenue than it currently does just by slashing its price from $1.00 to about $0.50. Why? Let me explain.
Digital music is a virtual good. As a v1.0 approximation, I assume:
- Marginal costs are essentially zero (e.g., hosting, processing and bandwidth)
- Customers have limits on the quantities that they’d consume even if the good were costless
- Demand is somewhat elastic because customers are price sensitive
- Each customer’s demand curve can be modeled with a classic Cobb-Douglas demand function
The cost side of the equation is simple: zero marginal cost. These things are just digits, so hosting, processing and bandwidth costs are negligible. Sure, Apple has to pay licensing fees for the music. But for the sake of argument, let’s say that these fees are revenue-sharing agreements, so Apple’s profits are still proportional to total revenue.
The action is on the revenue side. We can take a classic demand function, d = u^(1/a) × p^(1 − 1/a), where d is demand, u is utility, p is price and a is a measure of price elasticity. The supplier’s problem is to maximize profit, d × p (which equals revenue, since marginal cost is zero).
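To make the demand function concrete, here is a minimal Python sketch of my reconstruction of the model; the default parameter values a = 0.4 and u = 2.2 are just the guesses used later in this post:

```python
# Sketch of the Cobb-Douglas-style demand model above.
# Parameter values (a=0.4, u=2.2) are the post's guesses, not measured data.
def demand(p, u=2.2, a=0.4):
    """d = u^(1/a) * p^(1 - 1/a)"""
    return u ** (1 / a) * p ** (1 - 1 / a)

def revenue(p, u=2.2, a=0.4):
    # With zero marginal cost, revenue and profit coincide.
    return demand(p, u, a) * p

# Revenue is proportional to p^(2 - 1/a); for a < 0.5 that exponent is
# negative, so cutting the price raises revenue (until demand hits a ceiling):
for p in (1.00, 0.75, 0.50):
    print(f"p=${p:.2f}  demand={demand(p):5.1f}  revenue=${revenue(p):5.2f}")
```

At p = $1.00 this reproduces roughly 7 tracks a week, the figure used in the example below.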
It turns out that for products with relatively elastic demand, you want to set p as low as possible. So, the iTunes store should pick a low price for music because the lost revenue per track is more than made up for by the demand for more tracks. (Teenagers are extremely price sensitive!)
Conversely, for products with relatively inelastic demand, you want to set p as high as possible. I am not aware of any virtual products with this characteristic, but the math works out that the price increase more than makes up for the lost demand. Perhaps some unique goods in Second Life fit this category, although there is a cost, at least for players, in terms of the time needed to create some of the valuable objects.
How low should iTunes set its price? In the ideal case, the price tends towards zero. But in practice, consumers have limits. How many songs would you download per week if iTunes were free? 7? 14? 100? Demand wouldn’t really go towards infinity!
To solve the pricing problem, you need good estimates of the price elasticity, the utility and the maximum quantity demanded. In particular, you pick the highest price at which the consumer still demands her maximum amount.
For example, let us assume that digital music has an elasticity parameter a = 0.4 and that a teenager would download a maximum of 20 tracks a week and has u = 2.2 (which implies that she currently downloads 7 tracks a week at $1 each on iTunes). Note: all of these numbers are just my best guesses. The ideal price would then be $0.51, which would generate $10.10 in profit per week per customer rather than $7. That’s a 44% increase, and the kids would get nearly triple the tracks per week!
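The worked example can be checked directly: set demand equal to its ceiling d_max and solve d_max = u^(1/a) × p^(1 − 1/a) for p, giving the highest price at which the customer still buys her maximum. A short sketch (all parameter values are the guesses above):

```python
# Solve d_max = u^(1/a) * p^(1 - 1/a) for p.
def optimal_price(d_max, u, a):
    return (d_max / u ** (1 / a)) ** (1 / (1 - 1 / a))

a, u, d_max = 0.4, 2.2, 20   # the post's guessed parameters
p_star = optimal_price(d_max, u, a)
print(f"ideal price:   ${p_star:.2f}")           # ~$0.51
print(f"weekly profit: ${d_max * p_star:.2f}")   # ~$10.10, vs. ~$7 at $1.00
```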
Of course, iTunes has other considerations. It does have to share revenue per track, and I think that iTunes may have negotiated a flat rate rather than a percent of revenue. (A flat-fee deal structure leads to deadweight loss, where the labels, Apple and consumers all lose.) In addition, Apple claims to have other considerations like “simplicity” in pricing, and the labels are certainly wary of undermining their CD sales.
Finally, there are other constraints that I haven’t considered in this simple analysis. Teenagers have budgets, so perhaps $7 a week is the maximum that they can spend. (I doubt this.) Also, I totally guessed at 0.4 for the price elasticity. But actual iTunes data bounds the possibilities. We know that iTunes actually makes $7 in revenue, so values greater than 0.45 are implausible. We also know that consumers only buy 7 songs a week at $1, so values under 0.3 are also implausible.
This range, 0.3 to 0.45, is great to know if you are launching your own virtual good. Without additional data, you should use something in that range as your initial value. Similarly, you can use u = 2.2 as a starting point for estimating your utility parameter. How attractive does your virtual good seem in comparison to digital music? Ratchet that number up or down a little. And as data comes in, you can tweak the parameters and figure out the best price.
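That calibration step can also be sketched in code. Under the post's assumed functional form, one observed (price, demand) point pins down u for any assumed elasticity a, by inverting d = u^(1/a) × p^(1 − 1/a); the parameter range and demand ceiling here are the post's assumptions, not measured values:

```python
# Back out the utility parameter u from one observed (price, demand) point,
# given an assumed elasticity a, then reprice against a demand ceiling.
def calibrate_u(p_obs, d_obs, a):
    # Invert d = u^(1/a) * p^(1 - 1/a)  =>  u = d^a * p^(1 - a)
    return d_obs ** a * p_obs ** (1 - a)

def best_price(d_max, u, a):
    return (d_max / u ** (1 / a)) ** (1 / (1 - 1 / a))

a = 0.4                       # assumed elasticity, from the 0.3-0.45 range
u = calibrate_u(1.00, 7, a)   # observed: 7 downloads/week at $1.00
print(f"u ≈ {u:.2f}")                           # close to the guess of 2.2
print(f"best price ≈ ${best_price(20, u, a):.2f}")
```

As fresh sales data arrives, you would re-run the calibration with updated observations rather than trust the initial guesses.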
This is a simple but useful model. Some possibly important complications: How much should you raise prices to signal higher quality? What if you have a means of discriminating per customer, e.g., charging different prices for heavy and light users? What if you can discriminate by product, e.g., charging different prices for popular and unpopular products? I will examine some of these more advanced questions in the future because I suspect that iTunes (and other providers of virtual goods) could see an even greater increase in profits by intelligently using dynamic pricing.
Scott Reents sent me an email with just a single link, and man did it bum me out.
Lawrence Lessig is a famous constitutional law scholar who now teaches at Stanford Law School. For the last ten years, he has led the charge to redefine copyright in the Internet age as the founder of Creative Commons. Recently, all of us suffered a major setback when he lost a (literally) ‘Mickey Mouse’ Supreme Court case. And now this cause is losing its principal advocate, who has found a new windmill to tilt at: corruption. (He says: “I am 99.9% confident that the problem I turn to will continue to exist when this 10 year term is over. But the certainty of failure is sometimes a reason to try.“)
For someone who is so admired and successful, I found the inspirations Lessig cites for this decision in his announcement (“Required Reading“) really depressing. His three inspirations are (1) Obama; (2) Gore; and (3) an unnamed prominent Republican who called him a “shill” for Google.
Working backwards: first, please don’t accept the “shill” charge, Professor Lessig! Or tell me: how did you get duped into being an accomplice in corrupting the system? These charges have to really hurt an advocate who devoted a decade to an issue. I think that he should instead reconcile his views on the particular issue in question, network neutrality, and explain where his principled position does indeed differ from Google’s largely correct but somewhat self-interested position.
Second, Al Gore is valiantly pursuing the global warming issue. But he is a sad hero: don’t you think he’d much rather have been President? And for someone who has been a serious Washington insider for decades, it seems a little late and convenient to start blaming the system now. It’s especially disingenuous because the environment also has its own interest groups that prey on emotional responses.
Third, he says that Obama is running for the Presidency because of a ten-year “up or out” strategy. It’s a terrible parallel because Obama is riding at the top of his wave while Lessig is adrift after recent serious setbacks. So, claiming success (“we are going to prevail in these debates. Maybe not today, but soon.“) is decidedly weak. In fact, the weakness of this argument by such an admired thinker leads me to question whether my own support of Obama is foolish too!
That said, Lessig is an *amazing* person, so I wish him all the luck in the world in this new endeavor, as well as those who will remain in the copyright arena. Lessig, here’s a message for you in the unlikely event you read this post: please understand that it is written with the same honest candor with which your post was written and that I have the utmost respect for everything that you’ve accomplished. Corruption is a terrific new windmill to tilt at….good luck!
I have recently become a big fan of StumbleUpon. With a single click, you can be transported to wild and wonderful things on the Internet based on recommendations from other web surfers. Using this service, I stumbled on this amazing video on human computation. After watching the video, I realized that StumbleUpon itself is an example of human computation. Here’s Professor Luis von Ahn‘s roughly 60-minute presentation:
What if we just think about humans as a very specialized kind of processor? Human computation is a novel take on artificial intelligence (AI) problems like classification and ontologies. Traditionally, AI would use edge detection or some other image-processing technique to classify a picture. But no one has been successful in using these techniques to find common objects like cars, celebrities, etc. Human computation asks: why ask computers to do what people can do automatically and instantly? Is there some way to harness the 9 billion man-hours wasted playing games like Tetris to accomplish something productive?
Professor von Ahn’s first game is called the ESP Game. You sign up and are teamed up with someone else from across the Internet. Simultaneously, you enter words to describe a random image. If you both type in the same word, you get points for a “match.” The most common words used to describe an image become “taboo” as the game advances, and the players have to search for more subtle words that describe the image.
So far, hundreds of millions of images have been classified by hundreds of thousands of people. He has two other notable games: Peekaboom, a game for locating objects within an image, and Verbosity, a game for collecting common-sense facts about the world.
I actually like the idea of human computation because it applies to an even wider class of Internet services than the games that von Ahn has created. For example, Google’s search engine is really human computation: it relies on links from sites like this blog, chosen by real people, to determine the order of the search results. Similarly, both Amazon’s recommendation feature (“people who bought this also bought…”) and even its user reviews can be considered human computation. And tagging services like del.icio.us and my music service Last.fm fit into the category as well, as does StumbleUpon, where I originally discovered this idea.
Today, Facebook made an amazing announcement: it will be giving away $10 million in small grants to individuals and start-ups in order to fund the development of applications for its site. Facebook is a social networking site that started life for college students only but has been continuously opening up. A few months ago it allowed third-party developers to create applications that can interact with the profiles and networks created by the users of Facebook.
I’ll be sure to write more about Facebook in the future, especially because I am going to take this cool course at Stanford this coming semester on how to develop Facebook applications. But now it turns out that there could be money involved in the course as well! What a great way for Facebook to use its enviable position to its advantage.
I have just finished Richard Dawkins’s “The Selfish Gene.” It was necessary reading, since I have argued the unpopular position that the limits of the theory of evolution are more significant than many care to acknowledge. (See how I got sucked into this debate here.) In a nutshell, I often find that arguments from evolution seem circular: Who survives? The fittest. Who are the fittest? Those who survive. So I read the book to find out: does Dawkins’ version of evolution avoid tautology?
Dawkins makes two impressive theoretical contributions. First, he argues that genes, not the living organisms that host them, are the proper unit of analysis. In other words, he has a definition of the “who” in “who survives.” Second, he argues that genes replicate themselves. Hence, he also offers an independent definition of “survives.”
Let’s just pause to consider how revolutionary these contributions are. As he himself says, “much of Darwinism is wrong” and Darwin “would scarcely recognize his original theory in this book.” (p.195) Darwin’s theory is about survival of the species! In Dawkins’ formulation, concepts like sexual selection and species are now explainable in more primitive, genetic terms. Indeed, the majority of the book is devoted to genetic explanations of a wide range of behaviors.
But limits do arise on both the “who” and the “survives” side. Are genes really the fundamental unit, or do we have to consider competition among alleles? (Alleles are the alternative variant forms of a gene that compete at the same position in the genome.) Is exact genetic copying really the only metric of survival, or are there other ways to consider similarity (e.g., some parts of the gene are more important than others)? I wonder if a future book called “The Selfish Allele” would say a gene is just an “allele survival machine,” analogous to the way Dawkins says plants and animals are just “gene survival machines.”
Another limit is in scope. Dawkins has a lengthy discussion of the “god meme” (p.192-200). Can genetics explain why celibacy persists in widespread, long-lasting religions? Religious organizations benefit from priests who devote all their attention without the distraction of family, but the genes of those priests clearly suffer. After much hand-wringing by Dawkins, I conclude that genetics plays little or no role in explaining why any gene machine would choose to be celibate.
Dawkins’s genetic version of evolution is not tautological; its inability to explain the “god meme” is the exception that proves the rule. The “selfish allele” is an attractive but unproven challenger to the genetic version of evolution. But as the book demonstrates, genetics is a formidable champion that explains a lot more than it misses. By placing a skeptical eye on evolution, I think I’ve come to a better understanding of the broad reach (and limits) of the genetic theory of evolution.
SLOPs were originally “self-selected listener opinion polls,” and the term now often refers to today’s ubiquitous “self-selected online polls.” As I found out in today’s roundup on techpresident.com, Ron Paul’s supporters are defrauding every SLOP that they can find. They’ve been excluded from one straw poll for their antics, and they succeeded in winning a different poll about the latest Republican debate based on text-message voting. At e-thePeople.org, we’ve had similar problems with freerepublic.com (“freeps!”) and other libertarians making sure that they were more than adequately represented in our polls.
Now, my advisor at Stanford, Jim Fishkin, loves to lampoon this old chestnut. He vigorously believes that researchers should control the public opinion process through techniques like random sampling and moderated discussion. I certainly think that there is a place for his deliberative polling enterprise, but I don’t find it in opposition to SLOPs.
I think SLOPs have an important but different role to play. When I presented at the SXSW conference, I asked for a show of hands from those who were Republican. Not one of the 100 or so people in the audience raised a hand. SLOPs can give a powerful sense of “who’s in the room,” but only when we have an adequate understanding of how the counting is done and what room we are talking about.
Which brings me back to the irony of Paul’s supporters and other free marketeers who abuse the rules and intentions of these SLOPs. Aren’t they just proving why rules and regulations are needed to avoid anarchy?