The economics of aggregation

By Felix Salmon
March 16, 2010

How geektastic is Mark Thoma’s economics of aggregation post? Here’s the question he asks:

If you run a website that depends upon advertising, what is the optimal number of aggregator sites (sites that run part of your original posts plus a link back to the original)? What is the optimal length of an excerpt?

And here’s his answer to the first part:

$$R_s N\left[s_C\bigl(C_r N(r_N A + r_A N)\bigr)N + s_N A\right] = -R_s N s_A N$$

In English, he concludes that aggregators are good for original content providers if they provide a lot of clickthroughs, and that longer excerpts can also be good for original content providers “if increasing the excerpt length has little detrimental effect on the number of clickthroughs”.
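
Just for fun, here's that tradeoff as a toy calculation. To be clear, this is a sketch under my own assumptions, not Thoma's actual model: I assume that interest in a post rises with excerpt length while the urge to click through falls with it, and every functional form and parameter value below is made up for illustration.

```python
# Toy model of the excerpt-length tradeoff. All functional forms and
# parameter values are illustrative assumptions, not Thoma's model.
import numpy as np

def expected_clickthroughs(excerpt_len, n_aggregators=20,
                           readers_per_aggregator=1000.0,
                           interest_rate=1.0, satiation_rate=0.5):
    """Expected visits sent to the original site for a given excerpt length."""
    exposures = n_aggregators * readers_per_aggregator
    # A longer excerpt hooks more readers on the post...
    p_interested = 1.0 - np.exp(-interest_rate * excerpt_len)
    # ...but also satisfies them, so fewer of the hooked readers click through.
    p_click = np.exp(-satiation_rate * excerpt_len)
    return exposures * p_interested * p_click

lengths = np.linspace(0.0, 6.0, 121)  # excerpt length, in paragraphs
visits = expected_clickthroughs(lengths)
print(f"visit-maximizing excerpt length: {lengths[np.argmax(visits)]:.2f}")
```

With these particular curves the optimum lands at about 1.1 paragraphs; make clickthroughs less sensitive to excerpt length (a smaller satiation_rate) and the optimal excerpt gets longer, which is exactly Thoma's conclusion about clickthrough sensitivity.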

This is all jolly good fun, if not particularly useful in the real world, but I fear that Mark, here, is missing something very important. The holy grail for content websites is loyal readers — people who come back to your site multiple times per week or even per day. Most visitors to a site, whether they come from Google or from social-media links or from aggregators, will simply read the article they came to read, and then go away. A small proportion of those visitors, however, will stick around for a while, look at the rest of the site, and then, possibly, decide to keep on coming back.

I’m not a great browser of websites, but I certainly do something similar: if I follow a link and find myself on an interesting new blog, I’ll subscribe to that blog, and then might well come across more great content there while looking at the stories in my feedreader. Alternatively, I won’t subscribe to a blog the first time I visit, but once I find myself visiting the same site three or four different times, I’ll get the picture and click on the RSS icon. (Of course, if the RSS feed turns out to be truncated, I’ll probably delete it immediately, but that’s another issue.)

So it seems to me that Mark’s model overemphasizes the importance of what you might call drive-by traffic, while underemphasizing the importance of building a loyal repeat audience. There are millions of potential loyal readers out there, and the most difficult thing to do online is to reach out and touch them, somehow, just so that you get onto their radar screen so there’s a chance they might become actual loyal readers. Aggregators are a wonderful way of doing that: they make it their job to find people posting original content, and to show it to lots of potential loyal readers. I think of them as free marketing.
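
To put some made-up numbers on that intuition: suppose aggregators send you 10,000 drive-by readers, and just 1% of them convert into loyal readers. The figures below are entirely hypothetical, but the shape of the arithmetic is the point:

```python
# Back-of-the-envelope version of the loyal-reader argument. Every number
# here is hypothetical; what matters is the shape of the arithmetic.
referrals = 10_000       # drive-by visitors sent over by aggregators
p_convert = 0.01         # assumed share who become loyal readers
visits_per_loyal = 150   # assumed repeat visits per loyal reader per year

drive_by_views = referrals                            # one pageview each
loyal_views = referrals * p_convert * visits_per_loyal

print(f"one-off pageviews from referrals: {drive_by_views:,}")
print(f"annual pageviews from converts:   {loyal_views:,.0f}")
```

Even at a 1% conversion rate, those 100 converts generate 15,000 pageviews a year, more than the 10,000 one-off visits that produced them, and the converts keep paying off for as long as they stay loyal.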

People like Rupert Murdoch, and his former employee Heidi Moore, think that what the aggregators are doing is stealing content and making money off it. But I like it when people make money from reading my blog! As far as I’m concerned, that’s a good thing. And Murdoch should similarly be very happy that the dynamics of the internet mean that there’s a strong incentive for people to read his websites assiduously, and then link back to hundreds of his stories, driving traffic and helping to build that crucial base of loyal readers. Historically, Murdoch got his readers by spending hundreds of millions of dollars printing physical newspapers and distributing them all over town; this is a much easier and much cheaper way of getting his brands into the consciousness of consumers.

So I don’t worry when people read my stuff on an aggregator and don’t click through to my website — and I wouldn’t worry about that even if I were supported solely by advertising. (Which, incidentally, is actually a very uncommon blog business model: most bloggers, insofar as they monetize, will do so through job offers, book deals, or even blog acquisition deals, rather than ad revenues.) If you make it easy for people to find you and to read your stuff, they will eventually become hugely valuable to you. If you make it hard, they will be worth nothing to you. The choice is easy.

Update: Heidi responds, in the comments, saying that “honest aggregators” are those who provide “only a brief summary”:

It is true that I am a former employee of the Wall Street Journal, which is owned by Rupert Murdoch, but it’s misleading to say I was an employee of Rupert Murdoch, is it not? And I did, after all, decide to leave the Journal. I have nothing against Rupert Murdoch, as he was nothing but polite the only time I met him, but for the sake of accuracy that should be clarified.

Similarly, I’m sad to say my views here have been misquoted. Unlike Rupert Murdoch, I do not see Google as a problem for journalism – I see Google as a boon to driving traffic and opening journalism up to more Web-surfing readers.

I believe, Felix, you are referring to a Twitter discussion we had about plagiarism, which is quite a different topic. In that conversation I said that SOME blogs, in the name of aggregation, had resorted to cutting-and-pasting the content of news sites or other blogs, adding no new analysis or reporting of their own, and distributing the content with a new headline as if it were their own work.

I made the point that this practice, if it were copied in print, would be considered plagiarism – and that it should be defined as plagiarism on the Web because it steals traffic by taking the most important parts of a story and corralling elsewhere the traffic due to the author and original publication.

To be clear, this has nothing to do with linking, or analyzing an article written elsewhere, which is the currency of the Web and its honest marketplace of ideas. My point was purely about the practice of re-headlining the work of others and then copying that work on such a scale that it would divert readers from clicking on the original. This is, I believe, plagiarism that can often be hidden in the name of “aggregation.”

It means nothing for honest aggregators, however, who provide readers with only a brief summary and direct credit and link to the original.
