Opinion

Felix Salmon

The problems of HFT, Joe Stiglitz edition

Felix Salmon
Apr 16, 2014 00:04 UTC

Never mind Michael Lewis. The most interesting and provocative thing to be written of late about financial innovation in general, and high-frequency trading in particular, comes from Joe Stiglitz. The Nobel prize-winning economist delivered a wonderful and fascinating speech at the Atlanta Fed’s 2014 Financial Markets Conference today; here’s a shorter version of what Stiglitz is saying.

Markets can be — and usually are — too active, and too volatile.

This is an idea which goes back to Keynes, if not earlier. Stiglitz says that in the specific area of international capital flows, “there is now a broad consensus that unfettered markets are welfare decreasing” — and certainly you won’t get much argument on that front from, say, Iceland, or Malaysia, or even Spain. As Stiglitz explains:

When countries do not impose capital controls and allow exchange rates to vary freely, this can give rise to high levels of exchange rate volatility. The consequence can be high levels of economic volatility, imposing great costs on workers and firms throughout the economy. Even if they can lay off some of the risk, there is a cost to doing so. The very existence of this volatility affects the structure of the economy and overall economic performance.

The question is: does the same logic, that traders seeking profit can ultimately cause more harm than good, apply equally to high-frequency trading, and other domestic markets? Stiglitz says yes: there’s every reason to believe that it does.

HFT is a negative-sum game.

In the algobot vs algobot world of HFT, the game is to capture profits which would otherwise have gone to someone else. Michael Lewis’s complaint is that if there weren’t any algobots at all, then those profits would have gone to real-money investors, rather than high-frequency traders, and that the algorithms are taking advantage of unfair levels of market access to rip off the rest of the participants in the stock market. But even if you’re agnostic about whether trade profits go to investors or robots, there are undeniably real-world costs to HFT — costs like drilling through Pennsylvania mountains to lay straighter fiber-optic cable. As a result, the net effect of the algorithms is negative: they reduce profits, for everybody, rather than increasing them.

In theory, HFT could bring with it societal benefits which more than offset all the costs involved. In practice, however, that seems unlikely. To see why, we’ll have to look at the two areas where such benefits might be found.

HFT does not improve price discovery.

Price discovery is the idea that markets create value by putting a price on certain assets. When a company’s securities rise in price, that company finds it easier to raise funds at cheaper rates. That way, capital flows to the places where it can be put to best use. Without the price-discovery mechanism of markets, society would waste more money than it does.

But is faster price discovery better than slower price discovery? Let’s say good news comes out about a company, and its share price moves as a result — does it matter how fast it moves? Is any particular purpose served by seeing the price move within a fraction of a millisecond, rather than over the course of, say, half a minute? It’s hard to think of a societal benefit to faster price discovery which is remotely commensurate with the costs involved in delivering those faster price moves.

What’s more, faster price discovery is generally associated with higher volatility, and higher volatility is in general a bad thing, from the point of view of the total benefit that an economy gets from markets.

HFT sends the rewards of price discovery to the wrong people.

Markets reward people who find out information about the real economy. Armed with that information, they can buy certain securities, sell other securities, and make money. But if robots are front-running the people with the information, says Stiglitz, then the robots “can be thought of as stealing the information rents that otherwise would have gone to those who had invested in information” — with the result that “the market will become less informative”. Prices will do a very good job of reflecting ignorant flows, but a relatively bad job of reflecting underlying fundamentals.

HFT reduces the incentive to find important information.

The less money that you can make by trading the markets, the less incentive you have to obtain the kind of information which would make you money and increase the stock of knowledge about the world. Right now, the stock market has never been better at reacting to information about short-term orders and flows. There’s a good example in Michael Lewis’s book: the president of a big hedge fund uses his online brokerage account to put in an order to buy a small ETF — and immediately the price on the Bloomberg terminal jumps, before he even hits “execute”. The price of stocks is ultra-sensitive to information about orders and flows. But that doesn’t mean the price of stocks does a great job of reflecting everything the world knows, or could theoretically find out, about any given company. Indeed, if investors think they’re just going to end up getting front-run by robots, they’re going to be less likely to do the hard and thankless work of finding out that information. As Stiglitz puts it: “HFT discourages the acquisition of information which would make the market more informative in a relevant sense.”

HFT increases the amount of information in the markets, but decreases the amount of useful information in the markets.

If markets produce a transparent view of all the bids and offers on a certain security at a certain time, that’s valuable information — both for investors and for the economy as a whole. But with the advent of HFT, they don’t. Instead, much of the activity in the stock market happens in dark pools, or never reaches any exchange at all. Today, the markets are overwhelmed with quote-stuffing. Orders are mostly fake, designed to trick rival robots, rather than being real attempts to buy or sell investments. The work involved in trying to understand what is really going on, behind all the noise, “is socially wasteful”, says Stiglitz — and results in a harmful “loss of confidence in markets”.

HFT does not improve the important type of liquidity.

If you’re a small retail investor, you have access to more stock market liquidity than ever. Whatever stock you want to buy or sell, you can do so immediately, at the best market price. But that’s not the kind of liquidity which is most valuable, societally speaking. That kind of liquidity is what you see when market makers step in with relatively patient balance sheets, willing to take a position off somebody else’s book and wait until they find a counterparty with whom they can offset it. Those market makers may or may not have been important in the past, but they’re certainly few and far between today.

HFT also reduces natural liquidity.

Let’s say I do a lot of homework on a stock, and I determine that it’s a good buy at $35 per share. So I put in a large order at $35 per share. If the stock ever drops to that price, I’ll be willing to buy there. I’m providing natural liquidity to the market at the $35 level. In the age of HFT, however, it’s silly to just post a big order and keep it there, since your entire order is likely to be filled — within the blink of an eye, much faster than you can react — if and only if some information comes out which would be likely to change your fair-value calculation. As a result, you too end up posting your order for only a tiny fraction of a second. And in turn, the market becomes less liquid.
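To make that adverse-selection logic concrete, here is a minimal simulation sketch in Python. Everything in it (the $35 bid, the $36 fair value, the size of the news shocks) is an illustrative assumption, not anything from Stiglitz; the point is only that a resting order gets filled disproportionately when the news is bad for its owner.

```python
# Toy model of why a standing limit order gets adverse-selected.
# All of the numbers here are illustrative assumptions.
import random

random.seed(0)
FAIR_VALUE = 36.0   # your homework says the stock is worth this
RESTING_BID = 35.0  # the "natural liquidity" you post

def average_pnl_per_fill(trials=100_000):
    pnl, fills = 0.0, 0
    for _ in range(trials):
        # News arrives and moves the stock's true value by a random shock.
        new_value = FAIR_VALUE + random.gauss(0.0, 2.0)
        # Fast traders hit your stale bid only when the stock is now
        # worth less than you offered to pay for it.
        if new_value < RESTING_BID:
            pnl += new_value - RESTING_BID
            fills += 1
    return pnl / fills

print(f"Average P&L per fill: {average_pnl_per_fill():+.2f}")  # negative
```

The average P&L per fill comes out negative: a standing limit order is, in effect, a free option written to faster traders, which is exactly why nobody leaves one standing any more.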

It’s important to distinguish between socially useful markets and socially useless ones.

In general, just because somebody is winning and somebody else is losing doesn’t mean that society as a whole is benefiting in any way. Stiglitz demonstrates this by talking about an umbrella:

If there is one umbrella, and there is a 50/50 chance of rain, if neither of us has any information, the price will reflect that risk. One of us will get the umbrella. If it rains, that person will be the winner. If it does not, the other person will be the winner. Ex ante, each has the same expected utility. If, now, one person finds out whether it’s going to rain, then he is always the winner: he gets the umbrella if and only if it rains. If the other person does not fully understand what is going on, he is always the loser. There is a large redistributive effect associated with the information (in particular, with the information asymmetry), but no real social benefit. And if it cost anything to gather the information, then there is a net social cost.
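Stiglitz’s umbrella arithmetic is easy to check. Below is a minimal sketch in Python: the 50/50 rain probability and the two-person setup come from the quote above, while the payoff values and the cost of learning the weather are illustrative assumptions.

```python
# Stiglitz's umbrella example, worked through. The 50/50 probability and
# the two-person setup are from the quote above; the payoff values and
# the cost of learning the weather are illustrative assumptions.

P_RAIN = 0.5
VALUE_IF_RAIN = 10.0  # what the umbrella turns out to be worth if it rains
VALUE_IF_DRY = 0.0    # what it turns out to be worth if it stays dry

# With neither party informed, the umbrella trades at its fair price,
# and each side has the same expected payoff.
fair_price = P_RAIN * VALUE_IF_RAIN + (1 - P_RAIN) * VALUE_IF_DRY  # 5.0

# Now one party learns the weather in advance, at cost c, and buys at
# the fair price if and only if it is going to rain.
c = 1.0
informed_ev = P_RAIN * (VALUE_IF_RAIN - fair_price) - c    # +1.50
uninformed_ev = P_RAIN * (fair_price - VALUE_IF_RAIN)      # -2.50

print(f"Informed trader's expected gain: {informed_ev:+.2f}")
print(f"Counterparty's expected gain:    {uninformed_ev:+.2f}")
print(f"Net social gain:                 {informed_ev + uninformed_ev:+.2f}")
```

The winner’s gain and the loser’s loss net out to exactly minus the cost of gathering the information: a pure redistribution, less a real resource cost, which is Stiglitz’s point.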

HFT is socially useless; indeed, most of finance does more harm than good.

As finance has taken over a greater and greater share of the economy, growth rates have slowed, volatility has risen, we’ve had a massive global financial crisis, and far too much talented human capital has found itself sucked into the financial sector rather than the real economy. Insofar as people are making massive amounts of money through short-term trading, or avoiding losses attributable to short-term volatility, those people are not making money by creating long-term value. And, says Stiglitz, “successful growth has to be based on long term investments”.

So let’s do something about it.

HFT shouldn’t be banned, but it should be discouraged. The tax system can help: a small tax on transactions, or on orders, would reduce HFT sharply. “A plausible case can be made for tapping the brakes,” concludes Stiglitz. “Less active markets can not only be safer markets, they can better serve the societal functions that they are intended to serve.”
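To see why even a tiny levy bites so asymmetrically, consider turnover. The back-of-the-envelope sketch below is in Python; the two-basis-point rate and the turnover figures are illustrative assumptions, not numbers from Stiglitz’s speech.

```python
# Back-of-the-envelope look at why a tiny transaction tax hits HFT hard
# while barely touching long-term investors. All figures here are
# illustrative assumptions, not numbers from Stiglitz's speech.

def annual_tax_drag(tax_rate, annual_turnover):
    """Tax paid per year, as a fraction of the capital deployed."""
    return tax_rate * annual_turnover

TAX = 0.0002  # a levy of 2 basis points per transaction

# A buy-and-hold fund might turn its book over ~0.5x a year; an HFT
# strategy can recycle its capital a hundred times a day.
investor_drag = annual_tax_drag(TAX, annual_turnover=0.5)
hft_drag = annual_tax_drag(TAX, annual_turnover=100 * 250)  # 250 trading days

print(f"Long-term investor: {investor_drag:.4%} of capital per year")
print(f"HFT strategy:       {hft_drag:.0%} of capital per year")
```

A levy that costs a buy-and-hold fund a hundredth of a percent a year would consume an HFT book’s capital many times over, which is the sense in which a small tax “taps the brakes” almost exclusively on the most hyperactive trading.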

COMMENT

The strategies over the supply and demand curve are now more in the hands of the robots and a little less in the company’s. The algorithm writers are now the ones doing the game playing, guided of course by the companies’ forecasts and the real-money investors.

Unless there is a way to discount all these potential efficiency gains then I would leave it alone for now at least.

Posted by abnewallo | Report as abusive

The utility of switching lanes

Felix Salmon
Apr 11, 2014 05:42 UTC

Josiah Neeley has an evil, hour-long commute. But unlike most of us with traffic issues, he actually decided to do something constructive with it: according to the flip of a coin, he either commutes normally, switching lanes when doing so seems sensible, or else sticks religiously to the left-hand lane and just sweats it out, no matter how fast or slow it goes.

Neeley doesn’t have a statistically significant result yet, but initial indications are pretty much what you might expect, if you understand the psychology of traffic: if you just sit in a single lane, you spend no more time in traffic than if you aggressively switch lanes and try to go as fast as possible at all times.
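For a sense of what “statistically significant” would mean here, the sketch below shows the sort of test Neeley’s coin-flip design lends itself to: a two-sample t-test comparing commute times under the two conditions. The numbers are invented placeholders, not Neeley’s data.

```python
# A sketch of how Neeley's coin-flip experiment could be analyzed:
# a two-sample t-test comparing commute times on lane-switching days
# against stay-in-lane days. The times are invented placeholders.
from scipy import stats

switching_days = [61, 58, 65, 70, 59, 63, 66]    # minutes, hypothetical
single_lane_days = [63, 60, 64, 68, 62, 61, 67]  # minutes, hypothetical

t_stat, p_value = stats.ttest_ind(switching_days, single_lane_days)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")

# With only a handful of days per condition, p will usually sit well
# above 0.05: any true difference is still lost in the day-to-day noise.
```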

There are two possible conclusions to draw from this. The first is that, rationally, no one should switch lanes when they’re stuck in traffic. It doesn’t make them get to where they’re going any faster, but it does slow down the road as a whole.

The second, however, is the exact opposite. As Neeley says:

The hardest part of the experiment is just sticking to it. When it’s a left hand lane only day, it’s often quite difficult to keep to the plan when my lane is going forward at a crawl. But then I remind myself that this is for Science, and I soldier on. Perhaps more importantly, I’ve noticed that my subjective sense of how bad the traffic is on a particular day doesn’t necessarily line up with the objective data. On many a day I feel like my drive has gone on forever, only to find that it wasn’t any longer than on previous days where it felt like I was flying down the highway.

This is important: the really painful part of being stuck in traffic is not, really, the actual amount of time that it takes to get from Point A to Point B. Rather, it’s the “stuck” bit. As a result, the best way to minimize the suffering involved in a long commute is not, necessarily, to simply get to your destination as fast as you possibly can.

One of my lesser-value skills is that when my wife and I are stuck in highway traffic, and she’s driving, I’m quite good at looking at the live traffic maps, from Google and Apple, and finding a way to use surface streets to skip forward a couple of exits. My guess is that 90% of the time, when we do that, we don’t actually save time. But pretty much 100% of the time we both end up significantly happier than we were when we were crawling up the freeway. If you go a longer distance at a modest speed, but you’re not stuck in traffic on an ugly highway, you feel as though you’re getting somewhere.

The same is true when you stay on the highway, rather than leave it: the act of changing lanes, and thereby briefly overtaking the car which up until a moment ago was in front of you, makes you significantly happier than just sitting there like a passive schmuck. Which is why we all do it.

In other words, if you want to understand utility functions, don’t talk to an economist. The economist will find a proxy for utility — in this case, time — and then try to work out what kind of behavior optimizes for the proxy. If Neeley had discovered that changing lanes frequently only served to cause a significant increase in time spent commuting, he would probably just opt to sit in a single lane henceforth, even when doing so was difficult, and even when doing so increased the number of days where it felt like his drive had gone on forever.

The more interesting experiment, I think, would be to judge not actual time spent commuting, but some kind of subjective measure (say, on a four-point scale) of how brutal the commute was that day. The important thing isn’t whether you shave a minute here or there: it’s how you feel once you get to your destination. This is something I’ve noticed since switching almost exclusively to Citibike when I bike around New York — while the Citibikes are undoubtedly slower than my regular bike, that doesn’t make me more impatient in traffic. Quite the opposite, indeed: I feel as though I’ve become a more zen biker as a result of the switch.

If you look at subjective rather than objective measures, I’m pretty sure that all our lane switching will turn out to have a useful purpose after all: it makes us feel as though we’re in control of our own destiny. Which, ultimately, is more important than an extra minute’s commute.

COMMENT

I agree with Felix’s point about preferring to keep moving as well as Sanity-Monger’s point about reinforcement, which I’d extend more broadly.

It’s not surprising that human nature defaults to “get there faster by moving into the faster lane” because that is generally the correct decision-making process when in free-flowing, non-congested traffic. Stuck behind a left-turner? Hop over a lane to the right and go around them. Behind someone going 5 mph under the speed limit? Pass them and then get back in the same lane if need be. Congestion changes that situation, but people’s minds are conditioned to follow the more intuitive path of keeping moving, which does tend to work when traffic isn’t congested.

Posted by realist50 | Report as abusive

The Wu-Tang’s self-defeating unique album

Felix Salmon
Mar 28, 2014 17:11 UTC

I’ve had a couple of requests to write about the economics of The Wu – Once Upon A Time In Shaolin, the new album from the Wu-Tang Clan. The album is being released in a beautiful box, in an edition of exactly one:

Like the work of a master Impressionist, it will truly be one-of-a-kind. And similar to a Monet or a Degas, the price tag will be a multimillion-dollar figure.

Wu-Tang’s aim is to use the album as a springboard for the reconsideration of music as art, hoping the approach will help restore it to a place alongside great visual works – and create a shift in the music business.

This might be innovative, but it’s also more than a little bit peevish. Go to the official website for the album, and you’ll find a manifesto, of sorts, which basically boils down to art-market envy.

History demonstrates that great musicians such as Beethoven, Mozart and Bach are held in the same high esteem as figures like Picasso, Michelangelo and Van Gogh. However, the creative output of today’s artists such as The RZA, Kanye West or Dr. Dre, is not valued equally to that of artists like Andy Warhol, Damien Hirst or Jean-Michel Basquiat.

Is exclusivity versus mass replication really the 50 million dollar difference between a microphone and a paintbrush? Is contemporary art overvalued in an exclusive market, or are musicians undervalued in a profoundly saturated market? By adopting a 400 year old Renaissance-style approach to music, offering it as a commissioned commodity and allowing it to take a similar trajectory from creation to exhibition to sale, as any other contemporary art piece, we hope to inspire and intensify urgent debates about the future of music…

While we fully embrace the advancements in music technology, we feel it has contributed to the devaluation of music as an art form…

The music industry is in crisis. Creativity has become disposable and value has been stripped out.

Mass production and content saturation have devalued both our experience of music and our ability to establish its value.

Industrial production and digital reproduction have failed. The intrinsic value of music has been reduced to zero.

This is all rather misguided, on a number of levels.

Firstly, you shouldn’t aspire to be like Andy Warhol and Jean-Michel Basquiat, for the very good reason that Andy Warhol and Jean-Michel Basquiat are dead. Posthumous market success can make lots of money for dealers, collectors, and heirs — but neither Warhol nor Basquiat ever sold a painting for anything near $50 million during their lifetimes.

Secondly, the contemporary art market is in the midst of an unprecedented bubble right now. Different bubbles have different dynamics, but all of them are based, in one way or another, on price spirals. The general public needs to be able to see a given asset — tulips, dot-com stocks, houses, Richters, you name it — going up in price at an impressive clip. In order for any asset, or asset class, to become expensive, it first needs to start cheap, and work its way up. The Wu-Tang Clan not only want to create a whole new asset class; they also want that asset class to be valued at bubblicious levels right off the bat. Sorry, but markets don’t work that way.

Thirdly, the Wu can’t work out if they want their album to be treated as a piece of fine art or as a luxury good. They say that their approach “launches the private music branch as a new luxury business model for those able to commission musicians to create songs or albums for private collections”. It’s true that the distinction between art and luxury is eroding, with artists like Takashi Murakami and Damien Hirst at the forefront of that trend. But there is still a distinction, and the Wu-Tang Clan clearly want their album to be on the art side, rather than on the luxury side: they want to be artists, not artisans. At the same time, however, the ornate packaging of their album signifies an emphasis on artisanship, rather than art. I suspect that they would have been better off selling a simple USB thumb drive for a couple of million dollars, rather than trying to create a new class of luxury object.

Fourthly, there actually isn’t as much of an economic difference between the contemporary art market and the contemporary music market as the Wu would seem to think. Kanye West and Dr Dre, to use their own examples, are making just as much money as any contemporary fine artist you might care to mention: both markets have skewed themselves towards a winner-takes-all model where a very small number of people are making gobsmacking amounts of money, while everybody else struggles. That said, the gross size of the contemporary music market, however you measure it, is still orders of magnitude greater than the gross size of the market in contemporary art. And musicians can always tour — an option which isn’t even available to fine artists. All in all, while there are many struggling artists in both camps, the average professional musician is probably still going to be better off than the average professional artist.

Fifthly, industrial production and digital reproduction have not failed — they have succeeded enormously. While the profits of record labels have fallen, the global experience of music is broader and deeper than ever. Today, everybody has a music player in their pocket — and billions of hours of music are listened to every day. Music consumers have never had it so good, and musicians have never had access to a larger audience. The business being lost is just the business of selling physical objects on which music can be imprinted. But it’s silly to say that the value of those physical objects is, or ever was, the same as “the intrinsic value of music”. After all, if the price of a CD really was the intrinsic value of the music on that CD, then essentially all music would have identical value.

Lastly, and most importantly, the Wu-Tang Clan here are flying in the face of the very nature of music itself. Art and music are at two different ends of an important spectrum: art appreciation is fundamentally a solitary experience, which is one reason why people like to live with art in their own homes, and generally dislike overcrowded museums and galleries. Music, by contrast, is fundamentally a social experience. You might prefer small venues to large arenas — but you’d still rather go see a gig at a small venue than have a band play a set for you and you alone. That would be weird.

It’s true that recorded music is often enjoyed in a solitary manner, through headphones. But even then the shared experience is important: file-sharing sites exploded in popularity not only because they allowed free access to music but also because the first thing that you want to do, when you listen to something you love, is to share it with others. The world’s biggest recording artists, including the Wu-Tang Clan, don’t achieve success purely through the intrinsic value of their music; they achieve success through the way in which their music is loved and shared. The love of music is a fundamentally communal experience, in a way that the appreciation of fine art is not. To turn an album into a unique object, belonging to just one person, is to defeat the very nature of music and music-making. This model from the Wu-Tang Clan does nothing for the cause of “reviving music as a valuable art”, to use their words. Instead, it simply mummifies and fetishizes it. It’s silly, and I hope it doesn’t catch on.

COMMENT

Music will be art again when most listeners start to listen to good music again. The music industry died in 1980 with the election of Ronald Reagan and the increase in fascist brainwashing. The thing is, you can brainwash many people and they will follow you, but they are morons and not a useful army. They also don’t buy music that is thought provoking or increases introspection, which are exactly the best types of music. They also are not productive people and so unless you subsidize them, they have no money to buy music.

“Bazooko’s Circus is what the whole hep world would be doing Saturday nights if the Nazis had won the war. This was the Sixth Reich.” The Nazis lost the war, but they lived on and came to America in the form of corporatist bankers and the GOP. Their message is in the music and radio and television they work so hard to control. The message of consumption and conformity.

Posted by brotherkenny4 | Report as abusive

Mark Zuckerberg, the Warren Buffett of technology?

Felix Salmon
Mar 26, 2014 06:06 UTC

What does Mark Zuckerberg think he’s doing, spending $2 billion on Oculus? You could take him at his word — that he sees virtual reality as “a new communication platform” where “truly present” people “can share unbounded spaces and experiences”. Basically, virtual is the new mobile, and Zuckerberg wants to get in on the game early.

But note what Zuckerberg doesn’t say, as much as what he does. There’s no mention of “social”, no mention even of “Facebook”. Zuckerberg is one of the greatest product managers in history, but his legendary focus is nowhere to be seen here: it’s all big, vague, hand-waving futurism. And note too one of the quieter members of Zuckerberg’s board of directors: Donald Graham, the CEO of what used to be called the Washington Post Company, and an old friend of Warren Buffett.

Buffett, of course, is the classic conglomerator: he’ll buy any business, so long as it’s good. Graham is similar: he inherited a grand media property, and added on all manner of unrelated businesses. Eventually he sold the Washington Post to Jeff Bezos, for $250 million — and is still the CEO of a company, Graham Holdings, which is worth more than $5 billion.

Is it too early to declare that Zuckerberg has ambitions to become the Warren Buffett of technology? Look at his big purchases — Instagram, WhatsApp, Oculus. None of them are likely to be integrated into the core Facebook product any time soon; none of them really make it better in any visible way. I’m sure he promised something similar to Snapchat, too.

Zuckerberg knows how short-lived products can be, on the internet: he knows that if he wants to build a company which will last decades, it’s going to have to outlast Facebook as we currently conceive it. The trick is to use Facebook’s current awesome profitability and size to acquire a portfolio of companies; as one becomes passé, the next will take over. Probably none of them will ever be as big and dominant as Facebook is today, but that’s OK: together, they can be huge.

Zuckerberg is also striking while the iron is hot. Have you noticed how your Facebook news feed is filling up with a lot of ads these days? Zuckerberg is, finally, monetizing, and he’s doing it at scale: Facebook’s net income grew from $64 million in the fourth quarter of 2012 to $523 million in the fourth quarter of 2013. At the same time, his stock — which he is aggressively using to make acquisitions — is trading at a p/e of 100. If you’re going shopping with billions of dollars in earnings multiplied by a hundred, you can buy just about anything you like.
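The arithmetic of that acquisition currency is worth spelling out. Here is a rough sketch in Python: the fourth-quarter net-income figure comes from the paragraph above, while the simple run-rate annualization and the round p/e of 100 are simplifying assumptions.

```python
# Why a p/e of 100 makes stock-funded acquisitions cheap. The Q4 2013
# net income is quoted above; the run-rate annualization and the round
# multiple are simplifying assumptions.

q4_2013_net_income = 523e6
annualized_earnings = 4 * q4_2013_net_income  # naive run-rate: ~$2.1bn
pe_ratio = 100

implied_market_cap = annualized_earnings * pe_ratio  # ~$209bn
oculus_price = 2e9

print(f"Implied market cap: ${implied_market_cap / 1e9:.0f}bn")
print(f"Oculus as a share of it: {oculus_price / implied_market_cap:.1%}")
```

On those numbers, a $2 billion purchase costs about 1% of the company’s paper value: every dollar of earnings, multiplied by a hundred, buys a lot of acquisition power.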

Eventually, inevitably, Facebook (the product) will lose its current dominance. But by that point, Facebook (the company) will have so many fingers in so many pies that it might not matter. Zuckerberg, here, is hedging. Oculus might be valuable to Facebook if the social network grows. But it will be even more valuable to Facebook if the network shrinks. Zuckerberg has seen the astonishing speed with which products come and go online; he knows that his flagship won’t last forever. So he’s decided to build himself a flotilla.

Satoshi: Why Newsweek isn’t convincing

Felix Salmon
Mar 10, 2014 04:18 UTC

I had a 2-hour phone conversation with Leah McGrath Goodman yesterday. Goodman wrote the now-notorious Newsweek cover story about Dorian Nakamoto, which purported to out him as the inventor of bitcoin. At this point, it’s pretty obvious that the world is not convinced: in that sense, the story did not do its job.

As Anil Dash says, the geek world is the most skeptical. Almost all of the critiques and notations attempting to show that Dorian is not Satoshi are coming from geeks, which makes sense. If the world is what you perceive the world to be, then there is almost no overlap between the world of geeks in general, and bitcoin geeks in particular, on the one hand, and the world of a magazine editor like Jim Impoco, on the other hand. As a result, there’s a lot of mutual incomprehension going on here, which has resulted in an unnecessarily adversarial level of aggression.

As befits a debate which is centered on bitcoin, a lot of the incomprehension comes down to trust and faith. Bitcoin is a protocol which requires faith in no individual, institution, or state — all you need to believe in is cryptography. Dorian Nakamoto could have told Goodman explicitly that yes, he invented bitcoin — and still a lot of the bitcoin faithful would not be fully convinced unless and until Dorian proved that assertion cryptographically.

Goodman, on the other hand, is a proud journalist, who gets personally offended whenever anybody raises questions about her journalism, her techniques, or her reporting. In a reporter’s career, she says, “you check facts, you are building trust and building a reputation”. Goodman feels that her own personal reputation, combined with the institutional reputation of Newsweek, should count for something — that if Newsweek and Goodman stand behind a story, then the rest of us should assume that they have good reason to do so. There’s no doubt that a huge amount of work went into reporting this story, very little of which is actually visible in the magazine article itself.

In aggregate, says Goodman, an enormous amount of evidence, including evidence which is not public, persuaded her that Dorian Nakamoto was her man. Goodman has not decided whether or how she might publish that evidence. When she appeared on Bloomberg TV, she said that she would love for people to look at the “forensic research” and the public evidence in the case — but, talking to me, she made it clear that she didn’t consider it her job to help out other journalists by pointing them to that evidence. What’s more, she also made it clear that she was in possession of evidence which other journalists could not obtain.

In other words, Goodman spent two months following leads and gathering evidence, both public and private. Eventually — after confronting Dorian Nakamoto in person, and getting what she considered to be a confirmation from him — both she and her editors felt that she was able to say, on the front cover of Newsweek, that he was the guy. The article itself was the culmination of that process, but it did not — could not — contain every last piece of evidence, both positive and negative, public and private, about both Dorian Nakamoto and every other candidate she looked at. The result is not the process, and Goodman feels that she should be given the respect due a serious and reputable investigative journalist, working for a serious and reputable publication.

Newsweek, it’s fair to say, has not been getting that respect, although it has been getting a lot more attention than most purely-digital publications would have received had they published the same story. Jim Impoco, cornered at a SXSW party, said that he finds criticism of his story to be “phenomenally offensive”, and then went on to make the highly ill-advised remark that “we eliminated every other possible person”. But that’s really a messaging failure: he was on the back foot (SXSW is, after all, geek HQ this week, and the geeks are gunning for Impoco right now). Clearly, this was not the time or the place for a considered discussion of evidentiary standards.

That said, both Impoco and Goodman should have been smarter about how they talked about the story, post-publication. Both have been largely absent from Twitter and Reddit and RapGenius and other online places where the debate is playing out; instead, they have been giving interviews to mainstream media organizations, which are often unhelpful. TV interviews devolve into stupid fights; interviews with print or online journalists result in just a couple of quotes.

Goodman spent a lot of time, with me, walking me through her journalistic technique: she started, for instance, by trying to track down the person who initially registered the bitcoin.org domain name, and then followed various threads from there. And yes, she did consider and reject the individuals who are considered more likely candidates by the geek squad. Nick Szabo, for instance, might well look like a good candidate if you’re looking only at the original bitcoin paper, and asking who is most likely to have written such a thing. But when she looked at Szabo’s personal life, nothing lined up with what she knew about Satoshi Nakamoto and his communications. Instead, she found the Dorian Nakamoto lead — and didn’t think much of it, at first. But the more she kept trying to dismiss it, and failing to do so, the more she wondered whether Dorian’s very invisibility — “contextual silence”, she called it — might not be sending her a message.

Towards the end of Goodman’s investigation, when she was preparing to try to meet with Dorian Nakamoto in person, Goodman told Impoco that if it didn’t turn out to be Dorian, then “we’ve got nobody”. That’s what Impoco was most likely talking about, when he talked about eliminating people. Goodman — and Impoco, more recently — was just saying that this was her last open thread, and that if Dorian didn’t pan out as the guy, then they didn’t have a story.

From my perspective, then, there’s a big disconnect between what I now know about Goodman’s methodology, on the one hand, and how that methodology is generally perceived by the people talking about her story on the internet, on the other. With hindsight, I think that Goodman’s story would have elicited much less derision if she had framed it as a first-person narrative, telling the story of how she and her team found Dorian and were persuaded that he was their man. The story would surely have been more persuasive if she had gone into much more detail about the many dead ends she encountered along the way. The fateful quote would then have come at the end of the story, acting as a final datapoint confirming everything that the team had laboriously put together, rather than coming at the beginning, out of the blue.

That storytelling technique would not persuade everybody, of course: nothing would, or could. And, more importantly, it isn’t really what Impoco was looking for. Even the piece as it currently stands was cut back a few times: the final version was pared to its absolute essentials, and, like all longform magazine journalists, Goodman wishes that she might have had more space to tell a fuller story.

But here’s where one of the main areas of mutual incomprehension comes into play. Impoco and Goodman are mainstream-media journalists producing mainstream content for a mass audience; Goodman’s article was probably already pushing the limits of what Impoco felt comfortable with, given that he couldn’t reasonably assume that most of his readers had even heard of bitcoin. Impoco was interested in creating a splashy magazine article, for the print reincarnation of a storied mass-market newsweekly. Of course, seeing as how this is 2014, the article would appear online, and would reach the people who care a lot about bitcoin, who were sure to make a lot of noise about it. But they weren’t the main audience that Impoco was aiming for. Indeed, in early 2012, when Impoco was editing a much smaller-circulation magazine for Reuters, I sent him a draft of what ultimately became this article for Medium. He passed: it was too long, too geeky. Even if it would end up reaching a large audience online (it has had over 200,000 page views on Medium), it didn’t have broad enough appeal to make it into a magazine.

Similarly, while Goodman has done a lot of press around her article, most of it looks like a tactical attempt to reach the greatest number of people, and build the most buzz for her article. So she’s been talking to a lot of journalists, especially on TV, while engaging relatively little on a direct basis with her online critics. There’s no shortage of substantive criticism of Goodman’s article online, and of course there is no shortage of venues — including, but not limited to, Newsweek.com — where Goodman could respond to that criticism directly, were she so inclined. But instead she has decided in large part not to join the online debate, and instead is pondering whether or not to write a self-contained follow-up article which might address some of the criticism.

There’s a good chance that follow-up article will never come, and that Goodman will simply cede this story to others. And you can’t necessarily blame her, given how vicious and personal much of the criticism has been, and given how many of her critics seem to have made their minds up already, and will never be persuadable. Goodman has said her piece, and there are surely greatly diminishing returns to saying a great deal more.

Still, it’s just as easy to sympathize with the frustration being felt by the geeks. Appeals to authority don’t work well on this crowd — and neither should they. If the US government can lie about the evidence showing that there were weapons of mass destruction in Iraq, it’s hard to have much faith in an institution which, 18 months ago, slapped “HEAVEN IS REAL” all over its cover. (That story, interestingly enough, was demolished by another mass-market magazine, Esquire.)

Indeed, both sides here have good reason to feel superior to the other. From Newsweek’s point of view, a small amount of smart criticism online has been dwarfed by a wave of name-calling, inchoate anger, and terrifying threats of physical violence. And from what you might call the internet’s point of view, Newsweek is demonstrating a breathtaking arrogance in simply dropping this theory on the world and presenting it, tied up in a bow, as some kind of fait accompli.

The bitcoin community is just that — a community — and while there have been many theories as to the identity of Satoshi Nakamoto, those theories have always been tested in the first instance within the community. That community includes a lot of highly intelligent folks with extremely impressive resources, who can be extremely helpful in terms of testing out theories and either bolstering them or knocking them down. If Newsweek wanted the greatest chance of arriving at the truth, it would have conducted its investigation openly, with the help of many others. That would be the bloggy way of doing it, and I’m pretty sure that Goodman would have generated a lot of goodwill and credit for being transparent about her process and for being receptive to the help of others.

What’s more, a bloggy, iterative investigation would have automatically solved the biggest weakness with Goodman’s article. Goodman likes to talk about “forensic journalism”, which is not a well-defined phrase. Burrow far enough into its meaning, however, and you basically end up with an investigation which follows lots of leads in order to eventually arrive at the truth. Somehow, the final result should be able to withstand aggressive cross-examination.

At heart, then, forensic analysis is systematic, scientific: imagine an expert witness, armed with her detailed report, giving evidence in a court of law. Goodman’s Newsweek article is essentially the conclusion of such a report: it’s not the report itself, and it’s not replicable, in the way that anything scientific should be. If Goodman thinks of herself as doing the work of a forensic scientist, then she should be happy to share her research — or at least as much of it as isn’t confidential — with the rest of the world, and to allow the rest of the world to draw its own conclusions from the evidence which she has managed to put together.

A digital, conversational, real-time investigation into the identity of Satoshi Nakamoto, with dozens of people finding any number of primary sources and sharing them with everybody else — that would have been a truly pathbreaking story for Newsweek, and could still have ended up with an awesome cover story. But of course it would lack the element of surprise; Goodman would have to have worked with other journalists, employed by rival publications, and that alone would presumably suffice to scupper any such idea. (Impoco was not the only magazine editor to turn down my big bitcoin story: Vanity Fair also did so, when the New Yorker story came out, on some weird intra-Condé logic I never really bothered to understand. Competitiveness is in most magazine editors’ blood; they all want to be first to any story, even if their readers don’t care in the slightest.)

Instead, then, Newsweek published an article which even Goodman admits is not completely compelling on its own terms. “If I read my own story, it would not convince me,” she says. “I would have a lot of questions.” In other words, Goodman is convinced, but Goodman’s article is not going to convince all that many people — not within the congenitally skeptical journalistic and bitcoin communities, anyway.

Goodman is well aware of the epistemic territory here. She says things like “you have to be careful of confirmation bias”, and happily drops references to Russell’s teapot and Fooled by Randomness. As such, she has sympathy with people like me who read her story and aren’t convinced by it. But if there’s one lesson above all others that I’ve learned from Danny Kahneman, it’s that simply being aware of our biases doesn’t really help us overcome them. Unless and until Goodman can demonstrate in a systematic and analytically-convincing manner that her forensic techniques point to a high probability that Dorian is Satoshi, I’m going to remain skeptical.

Why Puerto Rico’s bonds are moving to New York

Felix Salmon
Mar 3, 2014 20:07 UTC

Puerto Rico, which is already junk-rated and which is facing yet another downgrade to its credit rating, is in no position to call any shots when it comes to raising new debt. If it wants to borrow new money — and it looks like it wants to borrow a hefty $3.5 billion in the next few weeks — then it’s going to have to make whatever concessions its lenders want. That means paying a very hefty interest rate in the 10% range, of course. But it also means changing the governing law of the bonds, from Puerto Rico to New York.

Notably, the Puerto Rican government was very careful to ensure that it would not waive its sovereign immunity, except as regards “legal proceedings with respect to such bonds”. The result is that Puerto Rico seems, on its face, to be setting itself up for a nasty, drawn-out stalemate à la Argentina, where bondholders sue a sovereign nation in New York, trying to claim all the principal and past-due interest they’re owed, while the sovereign in question responds that all of its assets are immune from attachment. That’s definitely not the kind of fight that the hedge funds lending Puerto Rico money would ever want. So why are they insisting on New York law?

The answer is that the hedge funds lending to Puerto Rico basically look at bond contracts, and New York law, in much the same way that Argentina does. In fact, they would be very upset if Puerto Rico treated its new debt in the way that Argentina’s opponents — and a number of New York federal judges — think sovereign debt ought to be treated.

At stake, of course, is the fate of Puerto Rico’s bondholders if and when the territory ever defaults on its obligations. Already, some Puerto Rican lawmakers are saying that’s exactly what should happen, and there’s no obvious fiscal track to debt sustainability, so default is a very real risk, and the main thing that lenders want to protect themselves against.

Historically, default protection has come mainly in the form of asset-backed bonds. Most of Puerto Rico’s debt is backed by some revenue stream or other: even if the government defaults, the state-owned utilities and the like will still have revenues which can be attached (at least in theory) by bondholders. But here’s the problem: if I held one of those revenue bonds today, I would not feel particularly confident in my ability to continue to get my coupon payments, even in the face of a government default.

The problem is precisely that so much of Puerto Rico’s debt is collateralized in this way. If we reach the point at which Puerto Rico needs to default in order to get its fiscal house in order, then it will have to restructure (which is to say, default on) its revenue bonds. If those are untouched, then the problem doesn’t get solved, and there’s really no point in defaulting in the first place. No one knows exactly how Puerto Rico would do such a thing, but legislation would probably be involved — if Greece can do it, then there’s a decent chance that Puerto Rico can, too. The idea is to pass a law which, in effect, makes it legal to default on your debts. And since those debts are issued under domestic law, there might not be much that bondholders can do about it.

Conversely, if Puerto Rico defaults with a relatively small quantity of New York-law debt outstanding, it’s probably easier for Puerto Rico to just continue to pay the coupons on that debt, rather than try to restructure it. Again, Greece is the precedent here: while debt under Athens law was restructured, debt under London law continues to be paid in full. Puerto Rico could default on its New York-law debt, of course — but doing so would severely cripple the island’s ability to do business with the mainland; would involve paying massive legal fees for years to come; and probably wouldn’t move the needle very much when it came to debt sustainability.

The point here is that the concept of seniority doesn’t really make a lot of sense when you’re not operating in the context of a formal bankruptcy regime. A bankruptcy judge can ensure that a debtor pays senior creditors first, and junior creditors last. But in a sovereign context — which includes Puerto Rico — there is no such thing as bankruptcy. In the Argentina case, the New York courts are trying to enforce an idiosyncratic reading of the formerly-obscure pari passu clause to try to bring back some semblance of seniority into the sovereign debt world, but it’s a knock-down, drag-out fight, and no one knows how it’s going to end. The overarching principle in sovereign debt remains the principle that has governed Argentina’s behavior ever since it defaulted well over a decade ago: a sovereign government can and will pay whomever it likes, whenever it likes, regardless of where its creditors think they stand on some theoretical seniority chart.

As a result, creditors in Puerto Rico aren’t looking for de jure seniority; they’re looking instead for de facto seniority. And the way to get that is to be part of a small group of bonds which is more trouble than it’s worth to restructure.

That strategy generally works very well. When most of Latin America was busy defaulting on its sovereign loans in the 1980s, for instance, the countries in question generally stayed current on their sovereign bonds — just because the loan stock was big, and mattered, while the bond stock was small, and didn’t. Similarly, when Russia defaulted on its debt in the late 1990s, it defaulted on its large stock of domestic bonds, but stayed current on its much smaller stock of international-law bonds, for exactly the same reason.

So when you see hedge funds demanding that their new Puerto Rico bonds be issued under New York law, don’t kid yourself that they particularly value the protections that New York law gives them, or that they think that New York courts will allow them to recover most of their money in the event of default. Rather, they’re just hoping that Puerto Rico won’t bother defaulting on those bonds in the first place. And they might well be right about that.

COMMENT

It is in PR’s interest to not only default but also to repudiate. When it happens, all holders will be offered the same deal, such as a bond worth ten or twenty cents on the dollar, a long maturity and a low interest rate. Puerto Rico is just another over-indebted Latin American country.

Posted by nixonfan | Report as abusive

Pension politics

Felix Salmon
Feb 13, 2014 00:12 UTC

David Sirota has a very important scoop today: the PBS series “Pension Peril” has secretly* been funded by John Arnold, a billionaire powerbroker with an aggressively anti-pensions political agenda. This looks very bad for PBS — but it’s also bad for Arnold, who generally gets glowing press, and who would seem to have no good reason to have insisted on secrecy when writing the $3.5 million check that made the series possible.

The PBS series in question seems to fall uncritically into line with the beliefs of Arnold and other Very Serious People — that pension liabilities are a huge problem, and that the only way to fix them is to reduce the amount that pensioners get paid. But of course it’s not nearly as simple as that.

The John Arnolds of this world tend to assume that three things are always true:

  • Defined-contribution pensions are better than defined-benefit pensions;
  • Funded pensions are better than unfunded pensions;
  • Individual pensions are better than group pensions.

It’s easy to see why people think this way. If there’s no money, then what assurance do you have — really — that you’ll be paid? If you have to share your pension with others, how can you be sure that they won’t end up with more than their fair share? Isn’t it better to just keep all your money for yourself, and make sure to save enough that you can live well in retirement?

This is a pretty libertarian, every-man-for-himself view of retirement: it makes few concessions to the idea that there’s a societal obligation to the elderly, or that groups can achieve more together than they can individually. At heart, it’s a view which benefits people like John Arnold, who pay a lot of taxes, at the expense of the poorest members of society, who might take out more than they put in. And, of course, it’s a view which benefits successful investors, like John Arnold, over schmucks who have no idea how to best invest their paltry 401(k) funds.

In reality, big pooled pension funds are much more efficient — and generate much higher returns — than anything an individual is likely to be able to manage. And in the specific realm of public finance, the case for group-funded defined-benefit schemes is even stronger. That’s because public servants — police officers, elementary school teachers, you name it — tend to have much longer tenure at their jobs than, say, hot-shot fund managers. They are also willing to work for relatively low salaries precisely because they know that their pension benefits are good: that they don’t need to worry about how they’re going to make ends meet in retirement. That peace of mind is hugely valuable, and rarely factors into the calculations of the pension opponents, who seem to think that worrying about your individual retirement investments is a good thing.
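To see the scale of efficiency gap at stake, here is a minimal compounding sketch in Python. Every number in it (the gross return, the fee levels, the contribution, the career length) is an illustrative assumption; the point is only how a cost gap compounds over decades.

```python
# How a fee gap compounds over a career. Every number here is an
# illustrative assumption; the point is the mechanism, not the figures.

def terminal_value(gross_return, annual_fee, years=30, contribution=10_000):
    """Value of steady annual contributions compounding at (return - fee)."""
    value = 0.0
    for _ in range(years):
        value = (value + contribution) * (1 + gross_return - annual_fee)
    return value

pooled = terminal_value(0.06, annual_fee=0.003)  # big pooled fund, ~30bp costs
retail = terminal_value(0.06, annual_fee=0.015)  # retail 401(k), ~150bp costs

print(f"Pooled fund:   ${pooled:,.0f}")
print(f"Retail 401(k): ${retail:,.0f} ({1 - retail / pooled:.0%} less)")
```

On these made-up numbers, the higher-cost vehicle ends a 30-year career roughly a fifth smaller, before even counting any return advantage from professional pooled management.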

Around the world, indeed, in places like Hungary and Poland, the roll-your-own pension plan model is being reversed, and governments are reverting to the “trust us” model. The process has been particularly drastic in Poland, where the government recently confiscated some 150bn zlotys (€36bn) of Polish government bonds and government-backed securities, seizing them from private pension-fund managers. The Poles then cancelled those bonds entirely, which had the effect of reducing Poland’s national debt overnight, by a substantial 8 percentage points. Given debt-ceiling rules, that gives the Polish government a lot more room to run deficits than it had before. In return, the Poles who were counting on the retirement income which was going to be generated by those bonds are just going to have to make do with a standard pay-as-you-go system, where they’ll receive a state pension which is paid for out of general tax revenues.
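As a quick sanity check on those figures, here is the arithmetic in Python. The 150bn zloty figure comes from the paragraph above; the GDP number is an outside assumption (Poland’s 2013 GDP was roughly 1.6 trillion zlotys).

```python
# Sanity-checking the Polish debt arithmetic. The seized-bond figure is
# from the paragraph above; the GDP figure is an outside assumption
# (Poland's 2013 GDP was roughly 1.6 trillion zlotys).

seized_bonds = 150e9    # zlotys, quoted above
gdp_estimate = 1.6e12   # zlotys, assumed

reduction_pp = seized_bonds / gdp_estimate * 100
print(f"Debt-to-GDP reduction: ~{reduction_pp:.0f} percentage points")
```

That works out to about 9 points, in line with the 8-point reduction quoted, given that only the government-bond portion of the seized assets was actually cancelled.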

This is not necessarily as dreadful as it looks at first blush. Governments can always find a way to reduce pensioners’ incomes, through taxes or any other means. And now, at least, those incomes will be less tied to the vagaries of market returns. Indeed, Poland isn’t all that far from the United States: although we do put a lot of government bonds into the Social Security trust fund, it’s entirely up to the government how much money pensioners take out of that fund. It can be less than the fund is earning, or more: the decision is political, and doesn’t bear much relation to the income being generated, or even whether the trust fund has any money in it at all.

Still, the Polish move is a pretty bad one. The pension funds still exist, but now they’ve lost most of their fixed-income component, so they’ve become a lot more volatile. The playing around with the national-debt figures is a silly, and dangerous, trick. And without strong domestic pension funds, Poland has now lost an important source of investment flows — the kind of money that helps to keep an economy innovative and productive.

So pension funds are, generally, a good thing. And when you have a pension fund, it’s a good idea to fund it well. But they’re not a panacea, and in general the answer to the problem of underfunded pensions is just to fund them better, rather than to start cutting benefits.

The John Arnolds of this world should remember one thing: it’s just as easy to tax retirement funds as it is to cut defined pension benefits. If America really needs to start taking money from future retirees, then maybe the politicians will start looking at a much juicier target — the massive tax expenditures being spent on things like IRAs and 401(k) plans. Those tax breaks are not fair — they benefit the rich much more than the poor. Maybe the sensible thing to do is to take those tax expenditures, and use them instead to shore up distressed public pension plans. If indeed those plans are in as much peril as John Arnold says they are.

*Update: Leila Walsh, the director of communications at the Laura and John Arnold Foundation (which also responded to Sirota’s article), emails:

You stated that the “PBS series ‘Pension Peril’ has secretly been funded by John Arnold.” This statement is entirely inaccurate. Information about the grant is available on the LJAF website. We have nothing to hide and have publicly disclosed the amount, term, and purpose of the grant.

WNET also issued a statement this afternoon that says, “The Arnold Foundation is a supporter of this initiative, which has been clearly disclosed on the three PBS NewsHour Weekend broadcasts (produced by WNET) that have included segments funded through this project.”

COMMENT

“poorest members of society, who might take out more than they put in”

It is dishonest to use a word like “might” in a sentence like this. You “might” be hit by lightning; or you “might” win $100 million in the lottery; or John Arnold “might” go and sell all his possessions and give the money to the poor (Matthew 19:21) – i.e. highly unlikely possibilities compared to the almost certainty that the poorest members of society will take out more than they put in. And then there is this: “the United States: although we do put a lot of government bonds into the Social Security trust fund”.
Here is what Social Security says about the SS trust funds that receive SS taxes:

http://www.ssa.gov/oact/progdata/fundFAQ.html

“How are the trust funds invested?”

“By law, income to the trust funds must be invested, on a daily basis, in securities guaranteed as to both principal and interest by the Federal government. All securities held by the trust funds are “special issues” of the United States Treasury. Such securities are available only to the trust funds.”

“What happens to the taxes that go into the trust funds?”

“Tax income is deposited on a daily basis and is invested in “special-issue” securities. The cash exchanged for the securities goes into the general fund of the Treasury and is indistinguishable from other cash in the general fund.”

In other words, SS Trust money is treated just like ordinary tax revenue except that the government promises to repay these monies plus interest to the SS as needed. These trust funds are government debt obligations and can only be repaid by raising taxes (unlikely) or increasing overall government deficits.

http://www.salon.com/life/life_stories/index.html?story=/mwt/feature/2011/04/02/late_in_life_excerpt

“the first recipient of Social Security, a bookkeeper named Ida May Fuller, started to collect her checks in 1940. She proceeded to live another thirty-five years, long enough to witness the ascent and disbanding of the Beatles and the landing of the man on the moon. For her total $24.75 contribution, she received $22,888.92 in benefits”

Posted by walstir

Content economics, part 5: news

Felix Salmon
Feb 11, 2014 16:53 UTC

Have you heard the news? Janet’s pregnant!

There’s a reason that the first thing you see when you log in to Facebook — the product around which everything else on Facebook revolves — is called the News Feed. And yet, only a relatively small proportion of what you see in your News Feed can really be considered journalism.

It is almost impossible to exaggerate the degree to which Facebook has changed the news business. For centuries, news was based on a broadcasting paradigm: a small group of journalists created a product — a self-contained news bundle — which was then consumed by a very large group of viewers or readers or listeners. Various bundles competed for your attention: you might get your news from the New York Times, or the Economist, or NPR, or the NBC Nightly News, or Newsweek, or any of a thousand other outlets. In any case, the atomic unit of news, from the consumer’s perspective, was the bundle, not the story. Any given individual would get her news from only a handful of outlets in any given week — quite frequently, only one or two.

The result was a mentality perfectly summed up in the NYT slogan of “all the news that’s fit to print”. News outlets could not assume that their readers were getting any news elsewhere, so they had to aspire to being comprehensive. They also had to appeal to a very broad audience: every story in a prime-time newscast, for instance, had to be understood by nearly everybody watching it.

Finally, the news had to be new. If you published a story yesterday, you couldn’t republish the same story today. The news was therefore incremental: the bundle informed you of how the world had changed in the past day, or week. The daily bundles were thus at their best covering events which happened over the course of a single day, and the weekly bundles were best at covering bigger events which happened over the course of a single week. Longer-duration stories — wars, for instance, or the civil rights movement — were harder to cover. On the one hand, you didn’t want to bore your readers with old news; on the other hand, you didn’t want to assume that they knew everything that had previously been reported on the subject. It was a hard line to walk. In general, the more heavily covered the story, the more the public was forced to piece together the big picture from a long series of incremental developments. If you hadn’t been following the story from the beginning, you would feel a bit like someone starting a TV series on season 3, episode 5.

In the early days of the web, these constraints were not serious handicaps. The web wasn’t (yet) replacing the old bundles as the main place where people got their news. And, in any case, the portals were recreating the bundle strategy of trying to be all things to all people. But then the dot-com bust arrived, and in its wake the web became atomized. Where once there were portals, now there was search — along with a new phenomenon called blogs.

Search and blogs, between them, helped to usher in a huge change in how we consume news, and turned the atomic unit of news from the bundle to the story. New outlets, like the Huffington Post, still aspired to bundle the news, but the bundling was no longer the top priority. Instead, such sites put enormous amounts of effort into ensuring that their stories — on an individual level — would get lots of traffic from Google. Later, outfits like Demand Media gave up on the bundle idea entirely, and just tried to manufacture the kind of stories that people were searching for. And all the while, blogs were acting as rebundlers, linking to the best content from all over the web. The special value of the bundle — the whole point of a news product, and something only a major media company could ever put together — was starting to die. Simultaneously, the value of the individual story, which might attract hundreds or even thousands of inbound links, started to rise substantially.

The rise of the blogs also meant the erosion of what Ezra Klein calls “the constraint of newness”. Some blogs, especially in the tech space, competed hard on speed. But almost all of them found a huge audience of people who wanted them to take content which had already been published elsewhere, and then republish it in their own voice, on their own site. A punchier headline here, a snarkier take there; often the copy proved more popular than the original.

With this change, the economics of the news business started shifting dramatically. Before, the locus of value creation was fundamentally corporate: only big media companies could hire hundreds of journalists and put their work together into a comprehensive and valuable bundle. But online, bundling is cheap. Any blogger can start finding and linking to the best content out there, and many did. The real value, now, started being pushed down a couple of levels, to the individuals who were writing the content which would garner those all-important inbound links. What’s more, as we saw in part 2 of this series, those individuals tend to command more reader loyalty than their corporate owners do.

It was never going to be easy for legacy media companies to adjust to these new realities. But then came social, which accelerated everything, and sent the whole news ecosystem spiraling out of the old publishers’ control.

The main reason the blogosphere never managed to overtake legacy media was that it required quite a lot of work on the part of readers, who had to put significant effort into seeking out the blogs, or the set of blogs, which best reflected their own interests. The most avid news consumers would do that work, and be well rewarded for doing so. But most people aren’t particularly avid news consumers, and so they never bothered.

Similarly, the “daily me” products which were occasionally launched by big media companies tended to need a lot of laborious personalization effort up front, in return for dubious benefits down the road. Even if you went to the trouble of customizing one of these sites so that it would deliver personalized content, you’d still end up being served only material produced by a single media company. After many years of trying, only one personalization product got any mass traction at all, and eventually that one too — iGoogle — got killed.

But then social media arrived. Twitter and Facebook take a very basic bloggy format — the reverse-chronological news feed — and serve it up in as many different flavors as they have users. Personalization isn’t a way of taking an existing product and refining it; it is the product. This is personal personalization, too. Rather than trying to refine what you see by specifying subject headings like “dance” or “Miami Dolphins” or the tickers in your stock portfolio, social websites are based in the first instance on the real-life human beings you care about the most.

In the era of blogs, if a certain blogger shared a news story you were interested in, that would help increase the attention you paid to the blogger. In the era of social media, if one of your friends shares a news story, that helps increase the attention you pay to the news story. People started caring more about the news, not because the news had suddenly become more interesting, but just because they saw that their friends cared about it, and it’s only human to care about what your friends care about.

Social media didn’t just create newly-engaged readers; it also created millions of newly-engaged aggregators. The most enjoyable part of blogging, in the early days, was putting things up on the internet and seeing people respond to them — by clicking on your links, or linking to you, or engaging you in the comments section. But it wasn’t easy. Twitter and Facebook — and Pinterest, for that matter, and the rest of the social media universe — did two important things. First, they made publishing incredibly easy; second, they rewarded publishing by giving contributors immediate likes and replies and favs and other evidence that people really cared about what they were publishing. It was the endorphin rush familiar to old-school bloggers, democratized and accelerated.

Now, everybody is a journalist, or at least a contributor to other people’s news feeds. There are still a few individuals whose links matter a lot — Matt Drudge, most obviously, or John Gruber. They have an ability to provide the kind of links that millions of people want to follow. But the traffic they drive is dwarfed by the aggregated power of Facebook, where millions of links, and other snippets of information, are shared every minute.

The result is that Twitter and Facebook have become the new indispensable bundles — and in doing so have changed the nature of what news is. Imagine opening up the New York Times and seeing pictures of your friend’s birthday party: that would be personalization. And that’s exactly what Facebook provides, with the help of millions of unpaid editors. Those editors might care a little bit about stuff being new — but they don’t care nearly as much as journalists do. They do care a lot about interests which have historically been too narrow for mass-media outlets to cover. And they also care about stuff which is silly, or cute, or funny, or all of the above.

This might come as depressing news to high-minded editors who extol the wonders of investigative journalism and who disdain cat videos as being beneath them. But most news bundles have always included their fair share of fluff, and in a disaggregated world, there’s no need for the investigative journalists to work for the same employer as the people curating cat videos. (Although they can.)

The new dominance of social media in the news business is not depressing at all: it’s excellent news. Just as most news consumers were never avid enough to seek out blogs, most Americans were never avid enough to seek out news at all. They didn’t buy newspapers; they didn’t watch the nightly news on TV; it just wasn’t something which interested them. But now the news comes at them directly, from their friends, which means that the total news audience has grown massively, even just within the relatively stagnant US population. Globally, of course, it’s growing faster still — the ubiquitous smartphone is a worldwide phenomenon.

We’re at an excitingly early stage in working out how to best produce and provide news in a social world. There are lots of business models that might work; there are also editorial models that look like they work until they don’t. But if you look at the news business as a whole, rather than at individual companies, it’s almost impossible not to be incredibly optimistic. Media used to be carved up along geographic grounds, because of the physical limitations of distributing newspapers or broadcasting TV signals. Now, there are thousands of communities and interest groups that gather together on Twitter and Facebook and share news with each other, which means there are thousands of new ways to build an audience.

Meanwhile, on the back end, technology is evolving fast, and giving individual journalists astonishing power to tell stories in compelling and highly visual ways. Posts like this one — wordy strings of paragraphs, without much structure or narrative — are inherently off-putting; there are much more efficient and effective ways for people like me to communicate what we want to say, and there are dozens of new-media companies devoted to giving us the tools to do just that.

Now is a particularly exciting time in the news business. One journalist recently told me that it has changed more in the past eight months than it changed in the previous five years, and I think he’s right about that. One big reason is that the technologists are getting involved: outfits like Vox Media and Medium and BuzzFeed and First Look Media are making multi-million-dollar bets that they can build the CMS of the future, and that they can use their software advantage to win the battle for consumer attention. David Carr says that it costs about $25 million these days to compete in the digital media space — that’s a lot lower than the $50 million cost of launching a magazine, or the $200 million cost of launching a cable network. And it’s lower still than the billions of dollars that newspaper companies — including the New York Times Company — spent on color printing presses. In other words, the barriers to entry have never been lower, while the potential rewards have never been higher.

Right now might also be a very brief window of opportunity for roll-up strategies. The idea behind such things is simple: if you have a powerful CMS, then it makes sense to take existing sites (like, say, the Curbed Network) and move them onto a more powerful system (like, say, Vox Media’s Chorus). Everybody wins. But as web technology becomes increasingly sophisticated, and sites start looking very different depending on the device used to view them, it becomes increasingly difficult to port an entire website over to a brand-new platform. You can’t just import the HTML and tweak the CSS any more. Up until very recently, there hasn’t been the money available to prosecute such a strategy; it won’t be all that long before such a strategy becomes technically much more difficult. (I can’t imagine, for instance, merging the Vox Media and BuzzFeed back ends without enormous headaches and difficulty.) So if you’re going to do it, then you should waste no time.

On the other hand, journalists themselves are becoming much more portable than they ever used to be. It used to be that if you left the NYT or WSJ or ABC News or some other storied news brand, you lost a lot of power and reach. But as the media universe fragments, that’s not nearly as true any more. Just in the past few months, Nate Silver and David Pogue have left the NYT; Walt Mossberg and Kara Swisher have left the WSJ; Katie Couric has left ABC; Ezra Klein has left the Washington Post; Glenn Greenwald has left the Guardian; and so on and so forth. All of those were high-profile, big-dollar deals, but there are lots of other journalists moving around right now too, and it has never been less obvious that if you get a job offer from a big legacy-media company then you should take it.

The reflected glory of the established brand is still there, to be sure — “I’m calling from the New York Times” still gets your calls returned a lot faster than “I’m calling from Medium”. But legacy companies tend to move more slowly, and have more cumbersome technology, and are less likely to win the technology arms race, if only because their editorial technology has to support not only digital publishing but also the old-media formats. The result is what you might call the journalist arb: a digital company can pay its journalists significantly more than (say) the NYT, while still having a significantly lower total editorial budget per journalist. The journalists get more money, more freedom, more tools to tell their story, and get to work for a more nimble employer which isn’t burdened with a massive legacy cost base. They might lose a certain amount of reputational capital, but the loss involved there has never been smaller, and is decreasing by the week.

So even if it’s soon going to be difficult for digital media companies to aggregate websites onto a single platform, it is only going to get easier to aggregate journalists. The most efficient platforms, with the greatest reach and the best tools, will have a natural advantage in terms of talent acquisition and retention. Which in turn is going to make life more difficult still for legacy media companies.

We’re only just beginning to get an idea of the kind of journalism — and the kind of news — these new platforms are going to produce. Certainly, the conception of what counts as news is going to get broader. It will include living articles of the kind that Klein is talking about; it will include personal stories of the kind that do so well on Medium; it will include discursive conversations and opinionated video; it will be fast, and slow, and funny, and serious, and personal, and universal, and hyper-local, and global, and everything in between. And, for the companies which get it right, it will be extremely profitable.

(This is part 5 of an irregular series; it comes in the wake of part 1: advertising, part 2: payments, part 3: costs, and part 4: scale.)

Viral math

Felix Salmon
Feb 2, 2014 15:23 UTC

This chart, from Newswhip via Derek Thompson, has been doing the rounds, and causing a bit of debate:

[Chart: average Facebook likes per article, by publisher (Newswhip, via Derek Thompson)]

The question: What on earth is Upworthy doing so right? How is it that Upworthy’s articles are shared a good order of magnitude more often than anybody else’s?

Part of the answer is that Upworthy simply doesn’t publish that many articles overall — a couple of hundred a month, each one carefully and laboriously optimized, through extensive A/B testing, to be as socially infectious as possible. But that doesn’t fully explain how Upworthy’s articles can be so much more viral. For that, Upworthy needs the help — whether deliberate or inadvertent — of Facebook.

Facebook is the monster in the publishing room: a traffic firehose which can be turned on or off at Mark Zuckerberg’s whim. Right now, it’s turned on, and while a lot of sites are feeling the love none is doing so more than Upworthy. (Except, maybe, ViralNova.) So, how does Facebook give Upworthy such a big boost?

Let’s start with the basic mathematics of virality. Take an article, any article; let’s stipulate that it gets 1,000 pageviews, naturally, just by dint of being published on a certain website. Now, let’s say that 1% of that article’s readers decide to share it with their friends, and that each reader has 100 friends. That means 10 people sharing, and 1,000 new people seeing the link. How many of those people will click the link? Let’s say it’s 10%. Which means that the article gets a boost of 100 new pageviews. Those extra pageviews cause their own viral loop, which generates an extra 10 pageviews, and that’s where the cycle pretty much peters out. Thanks to sharing, the article has been viewed about 110 extra times, over and above the original 1,000 pageviews.

This requires a formula. Call the basic strength of the website PP, for publisher power: that’s the number of pageviews you can expect to get when you publish an article on your website. You then multiply that by S, or shareability: the likelihood that a reader will share your article on Facebook. That in turn gets multiplied by F, or the number of friends per reader, and then by C, which is the clickbaitiness of the headline.

The key number here is S·F·C, or shareability times friends times clickbaitiness. In our model, that’s 0.01 * 100 * 0.1 = 10%. If you increase any of those numbers — if you make people more likely to share your article, or more likely to click on the headline — then you’re going to increase the virality of the piece. For instance, if you double the proportion of people sharing the article and also double the probability that someone is going to click on the headline after seeing it, then S·F·C becomes 0.02 * 100 * 0.2 = 40%. If you start with 1,000 pageviews, then you’ll get another 400 viral views, which in turn will generate another 160, and so on: your viral boost goes up from about 110 views to about 660 views.
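Since it’s easy to lose track of the compounding, here’s a minimal Python sketch of that loop (the viral_views function and its parameter names are my own illustrative framing of the model above, not anything a publisher or Facebook actually exposes):

```python
def viral_views(pp, s, f, c, rounds=100):
    """Simulate the viral loop: each round, a fraction s of the previous
    round's readers share, each share reaches f friends, and a fraction c
    of those friends click through."""
    r = s * f * c                 # per-round multiplier: the S·F·C above
    total, new = float(pp), float(pp)
    for _ in range(rounds):
        new *= r                  # views generated by the previous round's views
        if new < 1:               # the cascade has petered out
            break
        total += new
    return round(total)

# The baseline example: 1,000 seed views, 1% sharing, 100 friends, 10% clicks
print(viral_views(1000, 0.01, 100, 0.1))   # 1111: roughly the 110 extra views above

# Doubling S and C, so S·F·C goes from 0.1 to 0.4
print(viral_views(1000, 0.02, 100, 0.2))   # 1666: roughly the 660 extra views above
```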

You can see that a relatively small tweak to the variables in the S·F·C formula can make a very big difference to your total pageviews. Pretty soon you can double your initial pageviews, or treble them — and then, when S·F·C exceeds 1, you achieve escape velocity: your article just keeps getting shared more and more and more. Getting S·F·C > 1, then, is the goal of all would-be viral content, and it’s by no means impossible: if 5% of an article’s readers share it, and those readers have 200 friends each, and 25% of people who see the headline click on it — well, in that case, S·F·C is a whopping 2.5, or 250%.
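For what it’s worth, the cascade is just a geometric series, so there’s a closed form for the sub-critical case; this is standard algebra applied to the model above:

```latex
\text{viral boost} = PP\,(r + r^2 + r^3 + \cdots) = PP \cdot \frac{r}{1 - r},
\qquad r = S \cdot F \cdot C < 1
```

Plugging in the numbers above: r = 0.1 gives 1,000 × 0.1/0.9 ≈ 111 extra views (the rough 110), and r = 0.4 gives 1,000 × 0.4/0.6 ≈ 667 (the rough 660). The series diverges as soon as r ≥ 1, which is precisely the escape velocity just described.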

At those levels, it almost doesn’t matter what PP is — how many pageviews you seed your article with before it goes viral. PP still matters, however — which is why so many viral sites have pop-up boxes which try to harvest your email address. It turns out that emailing lots of people with links to new content is a great way to start the ball rolling.

But there’s a fly in the ointment here — something which makes achieving escape velocity much more difficult. Let’s call it FBT, for Facebook Throttle. If you share an article on Facebook, and you have 100 friends on Facebook, that does not mean that your 100 friends are all going to see that article in their newsfeed. Far from it. After you click “share”, Facebook decides whether the article you just shared is going to appear in your friends’ feeds or not. (This is a very big difference between Facebook and Twitter, which shows you everything your friends are sharing.)

As a result, the important formula isn’t S·F·C; rather it’s S·F·FBT·C, where FBT is the probability that the article you’re sharing is going to actually appear in your friends’ feeds. In recent months, Facebook has been taking its foot off the throttle quite dramatically — but no one knows how long that’s going to last.
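In terms of the earlier viral_views sketch, the throttle simply scales down the number of friends who actually see a share, so an assumed FBT can be folded into F. The 0.3 below is a made-up illustration; only Facebook knows the real number:

```python
# Doubled S and C as before, but assume Facebook shows each share to only
# 30% of your friends (FBT = 0.3): effective r = 0.02 * (100 * 0.3) * 0.2 = 0.12
print(viral_views(1000, 0.02, 100 * 0.3, 0.2))   # 1136: the ~660-view boost shrinks to ~136
```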

Which brings me to Upworthy. We know that Upworthy spends a lot of time optimizing for maximum S and maximum C. It more or less invented the “curiosity gap” headline, for instance, which turns out to be a great way to boost C. In other words, Upworthy is maximizing the variables under its own control.

What’s less well understood is that there seems to be a direct correlation between C and FBT. While Facebook controls its own throttle, it does so in response to user behavior: it wants to show its users more of what they want to see, and less of what they don’t want to see. And it’s easy to tell what Facebook’s users want to see: just look at what they’re clicking on. As a result, there’s a direct feedback loop between C and FBT: the higher your clickbaitiness (C), the less that Facebook will throttle you, and the more likely that your articles will be seen by your readers’ friends.

To put it another way: at the moment, Facebook assumes that people click on exactly the material that they want to click on, and that if it serves up a lot of clickbaity curiosity-gap headlines, then it’s giving its users what they want. Whereas in reality, those headlines are annoying. Curiosity-gap headlines are a bit like German sentences: you don’t know what they mean until you get to the end, which means that the only way to find out what your friend is saying is to click on the headline and serve up another pageview to Upworthy. (Or ViralNova, or Distractify, or whomever.) It’s basically a way of hacking real-world friendships for profit, and there’s no way Facebook is going to allow it to continue indefinitely.

All of which is to say that the massive advantage which Upworthy has, as seen in the chart at the top of this post, is certain to go away. It’s a temporary phenomenon, a function of the fact that Upworthy is better than anybody else at turbocharging virality by using artificially-optimized curiosity-gap headlines as a way of sending a (false) message to Facebook that those headlines are the stories its users really want to read. Upworthy’s formula will work until it doesn’t. Which is why I think that Dennis Mortenson is going to win his bet against James Gross.

COMMENT

AbeB makes a good point. Computing the average for an extremely skewed distribution is next to worthless.
I just want to point out one other thing… the data is not for Facebook shares! If you read the paragraph before the chart, it clearly says they took the number of Facebook Likes divided by the number of articles published. I don’t know how correlated FB Likes and FB Shares are, but I presume they’re different unless proven otherwise.

Posted by junkcharts