Adventures with primary documents, sustainability-analysis edition

Felix Salmon
Feb 21, 2012 18:40 UTC

[Chart: Greek GDP projections, from the European Commission's debt sustainability analysis]

This chart, from the European Commission’s debt sustainability analysis of Greece, has been doing the rounds today. I posted it last night, and it got picked up by Joe Weisenthal as his chart of the day; it’s a very striking visualization of the degree to which the Greece bailout plan lies somewhere between optimistic and delusional.

I’m happy to say that Reuters was the first organization to get its hands on this analysis. At 4:21 EST, we ran our exclusive story, by Jan Strupczewski in Brussels, with the headline that Greek debt would still be 160% of GDP in the European Commission’s downside scenario. Jan took the 2,800-word analysis, and its wealth of charts and tables, and boiled it down into a 965-word story which was avidly read by news consumers around the world.

Later on, the FT’s Peter Spiegel also got his hands on the Commission’s analysis. His story ran at 6:15pm EST, and was even shorter, at 566 words.

Necessarily, both stories cut out material from the original report, and as it happens neither of them mentioned the sharp uptick in future GDP growth. The first place I saw that was at Alphaville, with an anonymous post timestamped 02:44 GMT (someone was up late), which was 9:44pm EST. “The baseline scenario in the report forecasts 4.3 per cent decrease in GDP this year,” said Alphaville, “followed by flat growth in 2013 and 2.3 per cent growth in 2014.” Clearly they had the analysis themselves, since those figures weren’t being reported anywhere else at the time.

But it wasn’t until 10:47 EST that the report finally appeared online, at Zero Hedge, allowing me to write my post. The Zero Hedge post has now reached 20,000 pageviews — a very big story by ZH standards.

Once the analysis was freely available online, it started propagating elsewhere, too: Alphaville (which, remember, had had access to it six hours earlier) finally posted it at 08:49 GMT, or 3:49am EST.

The analysis is very well written, clearly by a native English speaker:

There is a fundamental tension between the program objectives of reducing debt and improving competitiveness, in that the internal devaluation needed to restore Greece’s competitiveness will inevitably lead to a higher debt to GDP ratio in the near term. In this context, a scenario of particular concern involves internal devaluation through deeper recession (due to continued delays with structural reforms and with fiscal policy and privatization implementation). This would result in a much higher debt trajectory, leaving debt as high as 160 percent of GDP in 2020. Given the risks, the Greek program may thus remain accident-prone, with questions about sustainability hanging over it.

As such, I think it’s fair to say that the analysis itself was intrinsically superior to any of the news reports written about it. Given the choice between reading the Reuters story, or the FT story, or the primary document, virtually all market participants would plump for the primary document.

And yet neither Reuters nor the FT posted the document when they got their hands on it, preferring instead to simply write it up in their own words. Zero Hedge got the document more than six hours after Reuters did, but still gets all the glory of being the first site to post it.

There are three lessons here. The first is that reporters still don’t like posting primary documents, for various possible reasons. They might be worried about being sued for copyright violation, or they might have promised their source they wouldn’t post the document. More generally, reporters tend not to want to give away to their competitors information which they worked very hard to obtain. And as such, they’re often perfectly happy to promise not to post such documents: because they didn’t really want to in the first place.

The second is that legacy news organizations like Reuters and the FT still don’t think digitally: the unit of news is always the story, rather than, say, a primary document in PDF form. So if you obtain a primary document, the first thing you do is turn it into a story, rather than simply letting that document be the story.

Finally, it’s silly to assume that reporters are going to be particularly expert at extracting all the germane information from a report when they write it up. Especially when you’ve had a very long day, it’s late at night, you’re under time pressure, and you are looking at the document from a particular, inside-baseball perspective. Sometimes, reporters can add value when they write up primary documents, by putting them in perspective and in plain English. Other times, they miss important things. Either way, posting the document itself along with the write-up can only make the news story richer and more valuable. Even if doing so also helps the competition.

COMMENT

News. What is it? When I read a newspaper or a blog, I ask myself “how many verifiable, actual FACTS are in this article?” Are the facts important? What are the important facts?

Now, I don’t just look at FRED (St Louis Federal Reserve graphs), because I often need some interpretation and insight into what the data mean. Also, with all the facts in the world, I need somebody to filter this avalanche of information and distill it down.

At some point, the news industry will figure out that it should always provide the raw data. There will be plenty of demand for distillation – just maybe not as much as there is now.

It used to be that you needed reporters to give you the synopsis of some Federal report or politician’s speech. Now you can see it for yourself.
Remember Trent Lott – and his comment at Strom Thurmond’s birthday party? An incident not particularly emphasized by the news media initially – but when the general public (or some members of it) saw it, what the media and the public thought noteworthy turned out to be considerably different.

We’re seeing it time and time again – politicians ability to taylor messages to certain groups is decreasing because of mobile phones and the ease of recording, as well as the ability for anyone to read the entire text of a speech on the internet. As well as the fact that the internet doesn’t forget…

Posted by fresnodan

Gawker Media jettisons its porn blog

Felix Salmon
Feb 17, 2012 18:47 UTC

Back in November, Nick Denton put Gawker Media’s Fleshbot up for sale. The official announcement, here, is NSFW due to the ads surrounding it — which pretty much explains why Fleshbot was being sold: its customers — porn sites — are very, very different from the brand advertisers who supply the money to all the other Gawker Media properties.

In the end, Fleshbot LLC was sold, or “sold”, on February 1, to Fleshbot’s editor, Lux Alptraum; if money changed hands I’m sure there wasn’t much of it. Jolie O’Dell, writing about the news of Fleshbot going up for sale, said that “the day a porn site can’t make money on the Internet is the day we all pack up and go home” — but in fact turning a profit on a porn blog is not easy at all. There’s a virtually infinite amount of competition, and the cost of porn online has basically gone to zero at this point, which means there’s even less money than there used to be for ad campaigns on sites like Fleshbot.

Fleshbot is certainly not the iconoclastic site that Denton aspired to create when it was launched in 2003 — rather than taking a fresh look at where porn and eroticism might be found in life and on the internet, it increasingly became a mouthpiece for, and captured by, the porn industry. To the point that when Gawker Media started looking at porn-industry scandals, that ended up happening on Gawker, rather than on Fleshbot.

Interestingly, the kind of site that Denton originally envisaged is nowadays very common on Tumblr, which has a thriving porn-reblogging community, based around as many different niches as there are porn specialities. (Which is to say, a lot.) Fleshbot tried to be all things to all porn consumers, both gay and straight, and that’s not how porn works: people tend to gravitate towards their own personal kinks, rather than going for the anything-and-everything approach.

So what’s going to happen now that Fleshbot is an independent entity? For one thing, it has already moved to the ubiquitous WordPress platform from Gawker’s custom publishing software, which makes serving up the porn industry’s advertisements significantly easier. “For a variety of reasons, Gawker was looking to completely separate itself from Fleshbot,” says Alptraum; “on our end, the restrictive nature of the Gawker CMS/layout wasn’t really conducive to our work. On our own, we’re more capable of focusing our layout/ad sales/tech strategies in ways that are optimized for an adult site, rather than trying to shoehorn Fleshbot into models designed for a broader, more mainstream stable of properties.”

Alptraum’s job is not an easy one. The guy who used to sell Fleshbot ads for Gawker Media is now the CFO of Fleshbot LLC; there’s a lot of work to do, and I don’t think the site has ever made money. And now, of course, they need to worry about things like health insurance and payroll and all the other burdens of being an independent company, which were previously picked up by Gawker Media’s operations crew.

Alptraum is optimistic: Fleshbot is “an incredibly valuable property that hasn’t been optimized,” she says, “and I’m excited about the possibilities for expansion.” That might mean more live events like the Fleshbot awards; it probably means even further alignment with the porn industry. “At its core, it’s a project about destigmatizing and celebrating sexuality,” says Alptraum. “I also think it’s played a powerful role in helping to mainstream the adult industry.”

Meanwhile, Gawker Media now runs on a single advertising platform, rather than having to make Fleshbot the exception to many rules. Nick Denton wanted to shake things up, with Fleshbot; in the end, he just created a headache for himself. He kept the blog much longer than most entrepreneurs would have done, and Alptraum only has good things to say about him. But the parting has been inevitable for a long time now, and both sides are surely happier now that it’s happened.

COMMENT

Engaging reading, rather like when I recently took out a loan online. Online loans are interesting in that they are hard to find, but the other day I found a link on Google, http://pozicky-online.net/ – online loans, and there I learned everything about online loans. Thank you for that!

Posted by Peterko

Target, Google, and privacy

Felix Salmon
Feb 16, 2012 18:38 UTC

The most interesting part of Charles Duhigg’s story about corporate “predictive analytics” is the reaction of Target’s PR department when they found out he was writing it.

When I approached Target to discuss Pole’s work, its representatives declined to speak with me… When I sent Target a complete summary of my reporting, the reply was more terse: “Almost all of your statements contain inaccurate information and publishing them would be misleading to the public. We do not intend to address each statement point by point.” The company declined to identify what was inaccurate. They did add, however, that Target “is in compliance with all federal and state laws, including those related to protected health information.”

When I offered to fly to Target’s headquarters to discuss its concerns, a spokeswoman e-mailed that no one would meet me. When I flew out anyway, I was told I was on a list of prohibited visitors. “I’ve been instructed not to give you access and to ask you to leave,” said a very nice security guard named Alex.

I’m sure that Target didn’t get its name from the way that it sends marketing materials and coupons customized to individual shoppers. But maybe the name is part of the reason why the company’s so wary about talking about the details of its marketing operations. A bigger part, though, is what I’ve called the uncanny valley of advertising — the way that we feel that we’re being spied on, when a big faceless corporation seems to know very intimate things about us. Like, for instance, the fact that we’re pregnant.

“If we send someone a catalog and say, ‘Congratulations on your first child!’ and they’ve never told us they’re pregnant, that’s going to make some people uncomfortable,” Pole told me. “We are very conservative about compliance with all privacy laws. But even if you’re following the law, you can do things where people get queasy.”

It’s incredibly important to Target that it have the ability to tell when you’re pregnant, before you have your child; Duhigg goes into great detail about why that’s the case, but it basically comes down to pregnancy being one of a very few opportunities for retailers to gain market share in the zero-sum game that is your basic household expenditure. As such, you can be sure that this kind of targeting is going to become increasingly commonplace, as retailers engage in a targeting war, trying harder and harder to capture the shopping dollars of new families and families-to-be.

And truth be told, it’s good for consumers to have lots of corporations falling over each other to offer us great prices and great, personalized, service. But while we love the prices and the service, we also like a little veneer which allows us to kid ourselves that we still have privacy:

“We have the capacity to send every customer an ad booklet, specifically designed for them, that says, ‘Here’s everything you bought last week and a coupon for it,’ ” one Target executive told me. “We do that for grocery products all the time.” But for pregnant women, Target’s goal was selling them baby items they didn’t even know they needed yet.

“With the pregnancy products, though, we learned that some women react badly,” the executive said. “Then we started mixing in all these ads for things we knew pregnant women would never buy, so the baby ads looked random. We’d put an ad for a lawn mower next to diapers. We’d put a coupon for wineglasses next to infant clothes. That way, it looked like all the products were chosen by chance.

“And we found out that as long as a pregnant woman thinks she hasn’t been spied on, she’ll use the coupons. She just assumes that everyone else on her block got the same mailer for diapers and cribs. As long as we don’t spook her, it works.”

This is going to be a very fine line, for years to come. So long as we don’t know that iPhone apps have access to our address book, everything’s fine — but then it’s revealed that Path has that information, and there’s a huge kerfuffle, and Apple ends up changing its policies.

Richard Falkenrath is pushing for a “right to be forgotten”, whereby individuals could ask companies to erase all the data and metadata that they possess about them. He says that it’s “essential to protect personal privacy in the age of pervasive social media and cloud computing”, but I think he’s importantly wrong. We’ve never really had personal privacy, we have less privacy than ever before, and extra legislation, while making a difference at the margins, is never going to return us to some mythical prelapsarian state where Big Brother knows nothing about us.

And indeed, few of us would want to return to that state. I get a steady stream of books and press releases here at Reuters, most of which I have very little interest in. But at least they’re a little bit targeted: they tend to be about business or finance, broadly. And some of them I actually like a lot, and end up in a blog post somehow. If I just got a random subset of all the books being published, or all the press releases being put out, my situation would be far worse than it is now. Because the people sending the books and releases know something about me, they attempt to send me only things I might conceivably be interested in. (At least in theory. Does anybody know how to unsubscribe from the TMZ mailing list?)

We’ve always lived in a world of personalization and targeting, from the maitre d’ who knows your name and favorite table at the fancy neighborhood restaurant, to the way in which corporations pay more money to advertise in the Wall Street Journal than they do to advertise in the New York Post, on the grounds that the Journal is more likely to reach rich professionals.

Nowadays, computers have made it increasingly possible to fine-tune personalization down to the individual level, where it can sometimes get “spooky”. (Although I’m convinced that spookiness increases with age: that in general young people are much less fazed by this kind of personalization than old people are.) If sophisticated corporations manage to make their marketing materials less spooky, I don’t think there’s going to be much popular opposition to continued targeting — at least not in this country. Germany is different: Germans care a lot about their privacy, and fight hard for it.

Here, however, I’ve never received a good answer to the “why should I care?” question — and certainly Falkenrath doesn’t provide one. All he does is hint at a vaguely dystopian scenario, and leave the rest to the reader’s imagination:

Picasa has a tagging feature that can tell Google where and when photographs were taken, and an advanced facial recognition feature that allows Google to identify individuals it has seen in one photo in any photo in the user’s digital library. Integrating just these three services with Google’s core search function could allow Google to locate individuals in virtually any digital photograph on the internet, and so derive where each user has been, when, with whom and doing what. Add YouTube to the mix, or Android smartphones, or whatever other database Google develops or buys – the implications are breathtaking.

If you’re pessimistically inclined, the breathtaking implications are negative. On the other hand, there are lots of positive potential implications, too. And the fact is that companies like Target and Google have no interest in becoming some kind of Hollywood corporate villain; that kind of behavior tends not to be nearly as profitable as screenwriters might think. So my feeling is that if they do become evil, we should cross that bridge when we come to it. As News International is discovering, genuine invasions of privacy can be fatal to any company. In the meantime, trying to legislate a “right to be forgotten” would probably cause much more harm than good.

COMMENT

I came home from work today and opened my mail, and there it was: “Congrats on saying I do” from Target. I’m fuming! Not only am I a 31-year-old single female, but I get enough hype about still being single, and then I have to come home to relax and get slapped in the face by Target. If anyone has names at Target who I can contact, I would love to know. I’m a marketing manager for a digital analytics company, so I know more than most about privacy and what information online companies have access to, but this is crossing the line when you offend people with your marketing materials!

Posted by tiz450s

Journalism’s welcome longevity

Felix Salmon
Feb 14, 2012 22:59 UTC

This week, Significance magazine (“statistics making sense”) reprinted a three-year-old article of mine about the Gaussian copula function. That story has had an impressive shelf life, and I’m incredibly happy that it will continue to be read for years to come. Sometimes it will be read in print publications, like Significance or book anthologies. But many, many more people will read it online. That’s great for Wired, both in terms of ongoing ad revenues (which are pretty small at this point) and in terms of its reputation for printing high-quality journalism with lasting value. (That value isn’t just journalistic, either: Wired’s parent, Conde Nast, is being very inventive in terms of monetizing old content.)

The fact that the internet has a long memory is wonderful for magazines online, as Tom Standage of the Economist recently noted.

After years of locking the search engines out, now suddenly their whole archive is available. A three-year-old article about Iran, he said, does just as good a job of advertising what they are about and why you should be reading them as the ones from this week. He said it was “crucial” that content could be “sampled and shared on social media.”

Matt Yglesias, however, sees a downside here, as more and more great magazine pieces are available online, for free, in perpetuity:

The existence of this deep back catalog is great for readers, but not necessarily as rewarding for the forward-looking production of longform pieces. Each day—each hour, even—all previous “newsy” items become obsolete and the demand for new newsy items is robust. But the existing stock of well-hewn blocks of substantial prose is already very large and it no longer depreciates the way it did in print.

I don’t buy it: the long-term upside, to any publication, of producing more well-hewn blocks of substantial prose is real, and, well, substantial. Meanwhile, Yglesias’s downside is that there’s already so much good stuff out there that it’s somehow satiating demand for such material.

In reality, of course, the supply of attention, when it comes to long magazine articles, is far from fixed. Nowadays especially, in the days of Instapaper and Longreads, people are reading more long-form journalism, from more outlets, than they ever did before. And there’s no indication that the rise in long-form consumption will level off any time soon. The more that magazines feed that demand, the more the demand will rise, in a virtuous cycle. Meanwhile, people will read less SEO-optimized crap from Demand Media. This too is a good thing.

One of Nick Denton’s less celebrated innovations was the creation, with Lifehacker, of a blog which lives more through its archives than through the new content that it puts up every day. Yes, Lifehacker has many loyal readers keeping an eye on its new posts. But the real value there is in what you might call the back catalogue — all those timeless posts which get steady pageviews for months or years.

It’s the difference between recording a throwaway pop song and recording a Beethoven symphony — the symphony is a much more laborious and expensive proposition, but it will sell for years to come. Orchestras don’t stop recording Beethoven symphonies just because lots of other orchestras have been there already. And the difference between new long-form journalism and old long-form journalism is a lot bigger than the difference between a new recording of Beethoven 5 and an old recording of Beethoven 5.

COMMENT

Yglesias appears to have fallen for the lump-of-paper fallacy.

Posted by djseattle

Quality vs quantity online

Felix Salmon
Feb 12, 2012 07:21 UTC

At about the same time that Michael Kinsley’s hilarious response to a blog post of mine hit the web, The Atlantic also uploaded to its website Kathleen McAuliffe’s excellent story about how parasites shape our behavior.

McAuliffe’s 5,873-word feature was written for the magazine, and went through multiple layers of commissioning, copy-editing, top-editing, fact-checking, and the like. It’s not, by any means, an easy read; it includes quite a few passages like this.

The neurotransmitter is known to be jacked up in people with schizophrenia—another one of those strange observations about the disease, like its tendency to erode gray matter, that have long puzzled medical researchers. Antipsychotic medicine designed to quell schizophrenic delusions apparently blocks the action of dopamine, which had suggested to Webster that what it might really be doing is thwarting the parasite.

And yet, within 36 hours of being uploaded to the Atlantic’s website, the story had already amassed half a million pageviews — and was “well on its way to becoming the most visited piece ever” in the history of the site, in the words of Alexis Madrigal.

Meanwhile, earlier in the week, Salon editor Kerry Lauerman had revealed some traffic stats of his own:

We ended 2011 on a remarkable high note, with over 7 million unique visitors for the first time, without any giant, viral hits that could be outliers. And now we’ve finished January in similar fashion, at 7.23 million.

There are concrete reasons for this… We’ve — completely against the trend — slowed down our process. We’ve tried to work longer on stories for greater impact, and publish fewer quick-takes that we know you can consume elsewhere. We’re actually publishing, on average, roughly one-third fewer posts on Salon than we were a year ago (from 848 to 572 in December; 943 to 602 in January). So: 33 percent fewer posts; 40 percent greater traffic.

All of which would seem to imply that Kinsley is right, and that there’s something amiss with my more-is-more thesis of online journalism. Have we really — finally — reached the point at which quality is asserting itself in the form of monster pageviews? Especially given the fact that the New York Observer, the subject of my original post, is getting fewer pageviews now than it was in its much more assiduously edited days at the end of 2007.

If we have reached that point — and I hope that we have — it’s a function of the way that the world of the web is moving from search to social. Companies like Demand Media were created to game search — to take what people are genuinely interested in, and then exploit those interests to get undeserved traffic and ad revenues. Gaming social media, by contrast, is much harder: people tend not to share things they don’t genuinely like.

The one thing that Kinsley got undeniably wrong in his piece was his assertion that I find the “more is more” formula to be “a wonderful development”. I don’t. Yes, I said that the Observer threw out the old and did something brave and new; I also said that I preferred things the way they were before.

What I do find to be a wonderful development is the way in which social discovery engines like Summify and Percolate surface much more relevant and much higher-quality content than search ever did. (Although I do worry, a lot, about the way in which Twitter seems to have bought Summify just to shut it down.) The more that we share stories and use such tools, the better the chance that great content will get an audience commensurate with its quality — even if it doesn’t have a web-friendly headline like “How Your Cat Is Making You Crazy”.

That said, the downside to publishing subpar content is certainly shrinking. Once upon a time, if you read a bad story in a certain publication, that would color your view of the whole enterprise. That’s no longer the case: in a world where websites are insatiable, there are precious few publications of consistent excellence. As Kinsley says, almost no one achieves good writing most of the time — but once upon a time, editors could and did simply spike material which wasn’t good enough. Nowadays, less-than-great copy tends to get published anyway, since websites have no space constraints and the old excuse about how “we ran out of pages” doesn’t hold water any more.

The real cost of publishing dull content is not that readers will be put off your brand. Instead, it’s an opportunity cost: rather than getting Norman to churn out ill-informed blog posts on ostrich farming and fracking, might it not be better to put him to work honing and editing the work of someone else, helping to create the next viral story about how your cat might be turning you into a schizophrenic?

The economics, however, still don’t add up. For reasons I don’t fully understand, high-quality edited journalism is not a little but rather a lot more expensive than more-is-more blogging. McAuliffe probably got paid somewhere in the region of $1.50 a word for her piece, which works out at $8,800; by the time you add in the cost of salary and benefits for everybody who worked on it, plus the expenses involved in flying her to Prague to report it, you’re talking enough money to get a thousand blog posts out of Norman. Ex post, McAuliffe’s article is worth it. But it takes a bold cash-strapped publisher indeed (and all publishers are cash-strapped, these days) to choose a single heavily-reported feature over a thousand blog posts.

We still live in a world where the brand value of a venerable print publication has clout on the web. McAuliffe’s piece would never have garnered 500,000 pageviews in 36 hours had she published it on her personal website; instead, it both benefited from and helped to burnish the reputation of the Atlantic more generally. That’s a nice virtuous circle. On the other hand, a boring blog post which would never get attention on a random blog can get a decent four-figure number of pageviews just by dint of being published on the website of a print publication like the New York Times or the New York Observer. As a result, such publications are faced with a constant temptation to put up as much content as they can and monetize those pageviews, even if doing so slowly erodes their brand. Immediate cashflows, these days, tend to trump impossible-to-measure concepts like the degree to which brand value might be going up or down.

My expectation, then, is that we’re likely to see a lot of more-is-more journalism from established names like the Observer, even as the most successful online franchises, such as the Atlantic, increasingly invest in expensive, high-quality content. It’s the difference between managing decline and managing for growth. In an industry which is undoubtedly in secular decline, the former makes a lot of sense. And the latter, if it doesn’t work, can be incredibly expensive.

So while I’m extremely happy to see high-quality journalism reach a very large audience online, I’m far from convinced that we’re about to enter a golden age where publishers get rewarded for spending lots of effort and money on commissioning, editing, and publishing extraordinary content. The web is still a mass medium, and cats-make-you-crazy stories are hard to scale, while commodity content is much easier to replicate. If you want to get to half a million pageviews, you’re always much more likely to get there with a thousand blog posts than you are with a single swing for the fences.

COMMENT

Content discovery engines will indeed promote quality content. And don’t worry, there are plenty of content discovery engines to take Summify’s place. Percolate is most similar, in that it only delivers daily summaries, but there are also discovery engines that provide a continuous stream of content. These are best for deeper dives into topics.

Perhaps the best known is Zite, which is available only as an iOS app. It is basically a smarter Flipboard. As you thumb articles up and down, it learns what you like. It works very well. If you need to follow specific topics from a computer, I recommend Trapit, the company I work for.

Trapit (http://trap.it/) is like Zite, only it allows you to follow any topic at all. You tell it which topics you want to follow, then it suggests relevant content. Thumb content up and down, and watch the recommendations improve.

I have compiled an overview of discovery engines here:
http://colemanfoley.com/post/17722454460/discovery-engine-roundup

Posted by ColemanFoley

Elizabeth Spiers and the reinvented New York Observer

Felix Salmon
Feb 6, 2012 05:11 UTC

There are three main reasons that I like entering into bets with people. The first is, simply, that it’s fun. The second is that I love to win bets. And the third is that I love to lose them. I don’t ever trade the markets: all of my investments are strictly buy-and-hold, with a time horizon measured in decades. That rule has saved me a lot of money over the years, not that I ever had much inclination to trade in the first place. But it has also prevented me from learning the kind of lessons that all traders learn early and often.

For pundits, it’s easy to be wrong: in many ways, it’s what we’re paid for. If what you want is facts and certitude, stick to old-school journalism. But it’s much harder for us to learn from our mistakes, precisely because the cost of being wrong is in many cases negative. So when I get the opportunity to express a conviction in the form of a wager, I tend to jump at it, partly because it’s one of the very few ways for me to be forced to admit that I was wrong about something, and to ask myself what the lessons are.

All of which is a very long-winded way of saying that I’ve gone and lost another bet, much to the delight of Elizabeth Spiers. She’s firmly ensconced at the helm of the New York Observer, a year after being given the job; I said she wouldn’t be. I didn’t think that she was going to prove herself good at running a newspaper, and — more to the point — I didn’t think that her boss, Jared Kushner, would stick by her.

In point of fact, Spiers has not been all that great at running a newspaper. Over the past year, I can barely remember a single time I’ve even so much as seen a physical copy of the Observer; I certainly haven’t read one, and neither has anybody I know. And on the rare occasions that I’ve read an Observer story online, it’s seemed under-edited and rather lightweight, for a newspaper which fancies itself the house organ of the elite.

But the point of hiring Spiers was never to get a great newspaper editor, some kind of heir to Peter Kaplan who would burnish its reputation as the paper slowly dwindled in relevance and lost a few million dollars a year. Instead, making a virtue of necessity, Kushner decided to go as webby as he possibly could, with the newspaper quite explicitly in the position of an afterthought — the legacy brand upon which the new business was going to be built.

And Spiers — to her credit — has absolutely executed on that strategy. The Observer is now, first and foremost, Observer.com. (It’s a hugely valuable domain name, which, by some freakish accident of history, wound up getting snaffled by a dilettantish New York weekly before it could be claimed by the venerable newspaper in England.) There’s a slew of verticals, running the gamut of New York interests — Wall Street, media, art, real estate — as well as a bold attempt to break into the tech blogosphere with BetaBeat. Page design is sophisticated and effective, with all sites linking generously to all other sites, with the emphasis on dynamic headlines rather than bland navbars.

The Observer’s inimitable voice is gone, replaced by a barrage of bloggish posts by a group of writers so young that many of them can’t even remember a time before Gawker. (Which was birthed, by Spiers, in 2003.) The old Observer was edited, on a story-by-story basis, in a way that the new online Observer isn’t — Spiers doesn’t have either the time or the money to have a layer of experienced journalists reworking her bloggers’ prose before it’s published.

And so, in the proud tradition of good blogs everywhere, readers are left with a highly variable product. The great is rare; the dull quite common. But — and this is the genius of the online format — that doesn’t matter, not any more, and certainly not half as much as it used to. When you’re working online, more is more. If you have the cojones to throw up everything, more or less regardless of quality, you’ll be rewarded for it — even the bad posts get some traffic, and it’s impossible ex ante to know which posts are going to end up getting massive pageviews. The less you worry about quality control at the low end, the more opportunities you get to print stories which will be shared or searched for or just hit some kind of nerve.

Add in a few linkbait listicles, and you’ve got a recipe for a successful website — which can only be helped by its association with an honest-to-goodness print newspaper which, still, has extremely good name recognition with most New Yorkers and which we generally think fondly of. There are even nods to the old Observer’s buttoned-down worldview, here and there, if you look hard enough. For instance, there’s the way in which striking photos and videos are largely notable by their absence. The Verge this is emphatically not; while gorgeous design has its place in the Observer media empire, for the time being it seems to be confined largely to glossy magazines. Even hyperlinks are generally confined to web-first content: when stories from the physical paper appear online, they rarely have any at all.

Spiers’s Observer is not the one that her predecessor Tom McGeveran dreamed of when she was hired — one which serves to remind the rich of themselves, on which manages “to speak the patois that is being developed at Le Cirque at the table with Michael Bloomberg”. That kind of thing would always be too precious, too nichey, to work in a medium where the table stakes, in terms of reach and scale, are rising very quickly indeed. Instead, the new Observer is carving out new audiences, is aggressively embracing social media, and has much more attitude in common with HuffPo than it does with, say, the New York Review of Books. That’s something that Spiers is good at, and it’s something Kushner is happy to encourage.

I’m happy that I was wrong about the NYT paywall, and I’m happy too that I was wrong about the Observer. My mistake in both cases was to be too conservative: to think that change was probably going to be a bad thing, even in the context of a broader media world where change is the only possible alternative to death.

Both the NYT and the Observer threw out the old and did something brave and new; there are many people, in both cases, who preferred things the way they were. Myself included, truth be told. But it’s profoundly fallacious to believe that what you want is what should be, in some kind of normative sense. Spiers has come up with a formula which works, in practice, significantly better than its immediate predecessors. In the world of professional journalism, that’s something to celebrate. So, if she wants to join me and John Carney for our forthcoming lunch, she’s more than welcome. It’s on me.

Update: I should also have included the Observer’s traffic figures, which haven’t noticeably improved since Spiers’s arrival.

COMMENT

Hi, I am an old fart who has subscribed to the physical paper for a few years. The paper has some interesting attributes, and is becoming less about stupid real estate transactions and more about the NYC Web scene, about which I am ignorant.
My only problem with the Observer is that some really good writers have gone missing – not Candace Bushnell, who has been gone for years, but Mike Thomas, the ex-Lehman curmudgeon, and Simon Doonan, the Barney’s window man. If the good writers all go (and so far they haven’t) then I will too. Jim Hanbury

Posted by toaster1941

NYT paywall datapoints of the day

Felix Salmon
Feb 2, 2012 22:39 UTC

Ken Doctor has a very smart and interesting take on the news that the NYT now has 390,000 paying digital subscribers — plus another 16,000 at the Boston Globe. It’s unambiguously good news, on many fronts.

First, and most importantly, digital ad revenues went up by 10% in the area of the business with the paywall, while plunging by 26% at About Group, which doesn’t have one. The big worry about the paywall was always that it would eat into ad revenues, and that really doesn’t seem to have happened. Of course, it’s impossible to know what the NYT’s digital ad revenues would have done sans paywall. But my gut feeling is that it’s a net positive: it allows for much more targeted advertising and therefore higher ad rates.

What’s more, the NYT still has massive reach outside the paywall: it has at least an order of magnitude more unique visitors each month than it has paying subscribers. The NYT can still sell those other visitors just as it always could; they certainly haven’t become less valuable since the paywall went up.

The only possible cloud in this picture is in overall traffic growth: the NYT doesn’t give pageview numbers, but sites like Quantcast and Compete say that they see no real growth in traffic to nytimes.com, and possibly a small decline. Again, the counterfactual is impossible to know: would traffic have been bigger had the paywall not been in place? I don’t think that the paywall has reduced traffic very much, but I do think that the amount of time and money and editorial effort which went into constructing the paywall might well have found its way into other innovations, had the paywall not happened, which would have made the NYT an even better and more popular product.

That said, the paywall has probably paid for itself already, and with luck some of the extra cashflow it throws off will be reinvested in more consumer-friendly innovations.

The other big news today is this:

Churn is less with digital than print customers: Skeptics opined that people might sign up, but then flee after sampling the paid digital product. The opposite appears true: Smurl says digital churn is less than print churn.

I didn’t expect this, but I believe it, and it’s really great for the NYT. It’s easy to cancel a NYT subscription, but by the same token it’s easy to keep one, too. And it seems that once you’ve taken the plunge and started paying for the NYT, you keep on paying — even more than with a print newspaper.

The result is that the NYT’s digital subscribers are a bit like a bank’s depositor base: although in theory they could leave at any time, in practice they’re an incredibly stable funding source. Much more stable, to be sure, than any advertiser.

But while I’m happy about this state of affairs, I still don’t really understand it. Here’s Doctor, again:

It took about 12 seconds for Times’ readers to figure out the new subscription math, when the company went digital-paid last year. When they did the math and saw they could get the four-pound Sunday paper and “all-digital-access” for $60 less than “all-digital-access” by itself, they took the newsprint. Which stabilized Sunday sales, and the Sunday ad base. Then the Times was able to announce a near-historic fact in October: Sunday home delivery subscriptions had actually increased year-over-year, a positive point in an industry used to parsing negatives. Now, Sunday is emerging as a key point of strategic planning.

This is great news for the Sunday newspaper, which is highly profitable for the NYT. But it also raises the obvious question: why are 390,000 NYT readers eschewing a Sunday paper they could get for less than nothing? Some are IHT subscribers who don’t have that option; others are naturally peripatetic. And the cheapest digital subscription is actually still cheaper than the Sunday-only delivery.

It seemed to me, when I entered into my ill-fated bet with John Gapper, that NYT readers would go for the free access bundled with the paper, rather than plump for digital-only access. But increasingly it seems that readers actively dislike having to manage a physical paper, and are willing to pay for making the whole experience virtual.

If that’s the case, then the least the NYT can do is to continue to invest in its iPad app. Right now the website is still superior to the app, except for offline reading. The app desperately needs search, and it needs to retain hyperlinks from the original articles, and it needs to somehow build in the sense of serendipity and of relative importance which newspaper readers love so much. It’s hard to tell what’s important, in the app, once you move off the front page. And it’s hard to have your eye caught by a great story you didn’t know you wanted to read. But those things will come, I’m sure. If only because there’s now a very healthy income stream — Doctor estimates it at more than $80 million per year — which can pay to help develop them.

COMMENT

Unfortunately, the Times doesn’t seem to have plowed even a dime of this windfall back into proofreading and copy editing.

Posted by NoSix

How sharing disrupts media

Felix Salmon
Jan 23, 2012 10:12 UTC

I’m at DLD in Munich, where David Karp of Tumblr and Samir Arora of Glam Media helped me understand the way that media and publishing are evolving these days, and the way in which creating, editing, and publishing are increasingly separate things which interact with each other in fertile and unpredictable ways.

There are lots of ways of publishing content onto the web, and if you look at the relative popularity of, say, WordPress vs Tumblr vs Twitter, then it’s easy to come to the conclusion that the easier you make it to publish, the more popular you’re going to be. But at Tumblr, at least, there’s something else very interesting going on: according to Karp, there are 9 curators for every creator on his site.

Reblogging, on Tumblr, is so easy that the vast majority of Tumblr sites actually create little or no original content: they just republish content from other people. That’s a wonderful thing, for two reasons. Firstly, it takes people who are shy about (or just not very good at) creating their own content, and gives them a great way to express themselves online. (As Arianna Huffington says, “self-expression is the new entertainment”.) And secondly, it acts as a natural amplifier for the people who do create original content — the average post on Tumblr gets reblogged nine times, and therefore reaches vastly more people than if it just sat on its original site waiting to be discovered by people visiting it directly.

Indeed, you don’t even need original content at all to become a reblogging monster. Pinterest is in many ways Tumblr without the original creators, just the curators, finding stuff online and reblogging it at incredibly high velocity. And it’s huge. Meanwhile, a lot of the impetus behind the way that Twitter is pushing its proprietary retweet functionality is the idea that it too might be able to build a community of retweeters, in much the way that Tumblr and Pinterest have built communities of rebloggers.

Journalists, I find, tend to come quite late to sites like Tumblr and Pinterest. For one thing, those sites are overwhelmingly visual: images nearly always do much better than words. And more generally, journalists are much better at writing than they are at reading — which means that they’re really bad at seeing the value added by curating and reblogging.

Technologists, on the other hand, intuitively understand the idea of “the stack”, which is the nerd version of “the platform” that all entrepreneurs and media gurus love to talk about incessantly. Essentially, they have spent their entire careers building things on other things. That happens in legacy media, too, sometimes: cable channels, for instance, live on a distribution platform owned by someone else. But print media in the US has historically been highly vertically integrated: the same company would create the content, edit it, print it, and distribute it directly to its customers’ front doors. Far from building things on other things, it owns everything from the copyright on the original content to the printing plants and even newspaper carriers.

Facebook and Google have become two of the biggest media companies in the world in extremely short amounts of time, precisely because they don’t have much interest in owning any content. Rupert Murdoch looks at Google and sees a pirate because he does everything: he both creates content (think 20th Century Fox) and distributes it (think Sky TV). His is a world of iron-clad contracts and tight control, while the social, digital world is one where the biggest media companies have a much lighter touch, and where the content creators with the broadest reach will be the ones who care the least about protecting their copyrights.

I suspect that we’re only in the very early days of seeing how this is going to disrupt just about every media organization built on the idea of hosting a website and selling ads, including highly socially-attuned ones like the Huffington Post. HuffPo is built on the idea that when stories are shared on Twitter or Facebook, that will drive traffic back to huffingtonpost.com, where it can then monetize that traffic by selling it to advertisers. But in future, the most viral stories are going to have a life of their own, being shared across many different platforms and being read by people who will never visit the original site on which they were published.

That was actually the original idea behind Buzzfeed — it would help brands create viral content which would then spread across the web. And then, somehow, buzzfeed.com became a destination site in its own right, which can and will make a lot of money by hosting and selling advertising. The old models still work. But the new, more distributed models are I think much more powerful. They’re great for brands, which just want to reach consumers directly, whatever the best way of doing that might be. But for content creators like Rupert Murdoch, they’re much scarier. Because when something goes viral, you don’t own it any more — it belongs to everyone, and no one.

COMMENT

Great post. Another way to look at the dislocation in the online publishing industry is the separation of content from discovery. It used to be that content was only discoverable at the “place” where it was produced. Now content discovery, of which sharing is a part, is distinct from content creation. Google, Facebook, Twitter, Flipboard, and many others don’t create content, but rather hold the enviable position of sitting between the consumer and the content they are looking for. We are experiencing a shift in value from content creators to content distributors (discovery). I suspect we are only in the 2nd inning of this profound change in the economics of content.

Gregg Freishtat
CEO VerticalAcuity

Posted by gfreishtat

Will fact-checking go the way of blogs?

Felix Salmon
Jan 18, 2012 16:01 UTC

Lucas Graves has by far the best and most sophisticated response to NYT ombudsman Arthur Brisbane’s silly question about “truth vigilantes”.

Graves makes the important point that Brisbane’s “objective and fair” formulation is itself problematic: as one of Brisbane’s commenters wrote, if a certain politician is objectively less truthful, less forthcoming, and less believable than others, then objectivity demands that reporting on what that politician’s saying be truthful — even if that comes across as unfair.

And this just about sums up the entire debate:

Pointing to a column in which Paul Krugman debunked Mitt Romney’s claim that the President travels the globe “apologizing for America,” Brisbane explains that,

As an Op-Ed columnist, Mr Krugman clearly has the freedom to call out what he thinks is a lie. My question for readers is: should news reporters do the same?

To anyone not steeped in the codes and practices of professional journalism, this sounds pretty odd: Testing facts is the province of opinion writers? What happens in the rest of the paper?

Graves’s main insight here, however, is to frame this debate in the context of what AJR has called the “fact-checking explosion” in American journalism — a movement which is roughly as old as the blogosphere, interestingly enough.

And like the blogosphere, the rise of fact-checking raises the obvious question:

It’s easy to declare, as Brook Gladstone did in a 2008 interview with Bill Moyers, that reporters should “Fact check incessantly. Whenever a false assertion is asserted, it has to be corrected in the same paragraph, not in a box of analysis on the side.” (I agree.) But why, exactly, don’t they do that today? Why has fact-checking evolved into a specialized form of journalism relegated to a sidebar or a separate site? Are there any good reasons for it to stay that way?

As I look around the blogosphere today, I see something which is clearly dying — it’s not as healthy or as vibrant as it used to be. But this is in some ways a good thing, since it’s a symptom of bloggish sensibilities making their way into the main news report. As we find more voice and attitude and context and external linking in news stories, the need for blogs decreases. (One reason why the blogosphere never took off in the UK to the same degree that it did in the US is that the UK press was always much bloggier, in this sense, than the US press was.)

With any luck, what’s happening to blogs will also happen to fact-checking. As fact-check columns proliferate and become impossible to ignore, reporters will start incorporating their conclusions in their reporting, and will eventually reach the (shocking!) point at which they habitually start comparing what politicians say with what the truth of the matter actually is. In other words, the greatest triumph of the fact-checking movement will come when it puts itself out of work, because journalists are doing its job for it as a matter of course.

That’s not going to happen any time soon, for reasons of what Graves calls “political risk aversion”. Fact-checking, says Graves, “is a deeply polarizing activity”, and mainstream media organizations have a reflective aversion to being polarizing. It’s certainly very difficult to be polarizing and fair at the same time. But a more honest and more polarizing press would be an improvement on what we’ve got now. And just as external links are slowly making their way out of the blog ghetto and into many news reports, let’s hope that facts make their way out of the fact-check ghetto too. It would certainly make a lot of political journalism much more interesting to read.

COMMENT

Also care to back up this claim?

“the reputation of the US suffered in the rest of the World under the last Republican Administration” – except amongst the left in Europe and mass-murderers in the Middle East?

Posted by Danny_Black