What good is ‘crowd-sourcing’ when everyone needs help?

March 29, 2012

In a recent blog post I referred in passing to some of the hype surrounding “crowd-sourcing” projects in the aftermath of the Haiti earthquake.

That’s not to criticise the volunteers – mostly in the United States – who collectively devoted hundreds of hours to charting the needs of quake survivors on online maps, based on SMS texts sent from the disaster zone.

My point was that their gate-crashing of the relief response in Haiti posed a welcome challenge to the traditional humanitarian system – but also generated hyperbole about the effectiveness of crowd-sourcing in actually saving lives.

“There is, without question, a great deal of hype around technology,” BBC Media Action says in a new policy briefing on how communications are used in emergencies.

“Extravagant claims have been made in recent years for its ability to solve everything from election fraud to Urban Search and Rescue (USAR) management. Such claims are often based on little hard evidence, particularly on the practical use of communications technology in emergencies.”

This seems like a good time to ask: How useful is it to draw on the “wisdom of crowds” when mobilising a relief response after a big natural disaster?

The question matters because it’s now almost a matter of faith that crowd-sourcing platforms like Ushahidi will be key tools for aid agencies in future emergencies.

This belief feeds off two desires on the part of relief groups. One is to be more accountable to beneficiaries, which means listening to what disaster-hit communities are actually saying. The other is to embrace mobile technology in a world where even crisis zones tend to be “wired”.

Ushahidi, which means “testimony” in Swahili, started in 2008 as a way for Kenyans to report instances of post-election violence. Anybody could send a text to log an incident – rape, riot, looting, murder – and the system would plot it on a map.

Since then, the open-source platform has been rolled out all over the world. It has been used to map acts of war in Gaza, crime in Indonesia and socio-political developments in Egypt. It is now being used by U.S.-based activists to chart human rights violations in Syria.

All of which makes sense. You can look at those maps and get an idea of where trouble is brewing or hotspots developing. Individual reports are unverified, but like a pointillist painting, they add up to a useful picture.
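To make that “pointillist” idea concrete, here is a minimal sketch (in Python, with invented coordinates and categories, not Ushahidi’s actual code) of how unverified geo-tagged reports can be binned into a crude density picture of where trouble is concentrating:

```python
from collections import Counter

# Hypothetical incident reports: (latitude, longitude, category).
# Coordinates loosely evoke Port-au-Prince; every value is invented.
reports = [
    (18.54, -72.34, "looting"),
    (18.54, -72.34, "riot"),
    (18.55, -72.33, "looting"),
    (18.51, -72.29, "murder"),
]

def grid_cell(lat, lon, cell_deg=0.01):
    """Snap a coordinate to integer grid indices (~1 km cells at this latitude)."""
    return (int(lat // cell_deg), int(lon // cell_deg))

# No single report is verified, but counts per cell add up to a
# rough map of emerging hotspots.
density = Counter(grid_cell(lat, lon) for lat, lon, _ in reports)
hotspots = density.most_common(3)
```

The point of the sketch is simply that aggregation, not verification, is what makes such maps useful: the picture emerges from the density of dots.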

The Haiti earthquake was different.

For the first time, the idea of bearing witness became confused with humanitarian needs assessment and relief response. Big claims were made about this new twist on crowd-sourcing.

“The service was able to direct emergency response teams to save hundreds of people, and direct the first food, water and medicines to tens of thousands,” says the website of Mission 4636.

Mission 4636 was a coalition of “crisis-mappers” and volunteer translators who mobilised in the United States around text messages sent by survivors in Port-au-Prince (4636 was the SMS “short code” the coalition used).

Hundreds of people saved? Aid directed to tens of thousands? This is where the rhetoric starts to look like hyperbole.

It’s worth backtracking to January 2010, when the unprecedented information ecosystem that would become Mission 4636 was first evolving.

Thomson Reuters Foundation was part of that ecosystem. I declare an interest: I was in Port-au-Prince setting up AlertNet’s Emergency Information Service (EIS). EIS sent out, in Creole, critical information from the United Nations, NGOs and Haitian government. Anybody could register to receive the free SMS alerts by texting 4636 (we publicised the service mainly by radio).

Survivors could also text their needs and locations to 4636, and the messages flowed to a variety of online platforms via RSS feeds.

The EIS crowd-sourcing system

Early on, we ourselves had big ideas about crowd-sourcing. The EIS platform allowed us to map incoming messages and categorise data. The idea was that this could be useful information for the “Clusters”, the different sectors of the U.N.-led response – food, shelter, water and sanitation and so on. See the pictured screenshot.

Nice idea, but there were problems. For one thing, few people in the quake zone had the bandwidth to use the maps. For another, it set up false expectations on the part of survivors, who reasonably expected the information they sent to be acted upon, or at least responded to.

I can still feel the wincing pain of watching those heart-rending appeals for assistance roll in, knowing few people would receive replies.

On the radio, I did my best to explain that 4636 was not a hotline for help. Rather it was a means for people to say what they needed so aid workers could build up a rough picture. But inevitably, people turned to it as a lifeline.

It’s not that incoming messages fell into a void. In the United States, plenty of people were poring over them.

Scores of volunteers mobilised by crowd-sourcing organisation CrowdFlower and social enterprise Samasource were translating the Creole texts into English. They categorised and geo-tagged them too.

Then Ushahidi volunteers mashed the data onto online maps and performed a kind of triaging service, sending high-priority messages to the U.S. military.
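The steps described above (translate, categorise, geo-tag, triage) can be sketched roughly as follows. Everything here is invented for illustration, the message text, the keyword rules and the priority test alike; it is not Mission 4636’s or Ushahidi’s actual code.

```python
# Crude keyword-based triage over already-translated, geo-tagged messages.
URGENT_KEYWORDS = {"trapped", "bleeding", "no water"}

def categorise(text_en):
    """Tag a translated message with a rough keyword-based category."""
    lowered = text_en.lower()
    if any(keyword in lowered for keyword in URGENT_KEYWORDS):
        return "high-priority"
    return "general-need"

def triage(messages):
    """Split geo-tagged messages into a high-priority queue and the rest."""
    urgent, routine = [], []
    for msg in messages:
        if categorise(msg["text_en"]) == "high-priority":
            urgent.append(msg)   # would be forwarded to responders
        else:
            routine.append(msg)  # feeds the broader needs picture
    return urgent, routine

messages = [
    {"text_en": "Family trapped under rubble near the market",
     "loc": (18.54, -72.34)},
    {"text_en": "We need food and tents", "loc": (18.52, -72.30)},
]
urgent, routine = triage(messages)
```

Even this toy version shows why the human volunteers mattered: the translation, the geo-tagging and the judgment of what counts as “high priority” are exactly the parts a keyword filter does badly on its own.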

Anecdotal evidence suggests some of this information may have been acted upon by U.S. Southern Command, which by virtue of proximity was involved with early emergency operations.

A member of the Marine Corps wrote that crowd-sourced data “was saving lives every day”, without specifying how. And a member of the U.S. Federal Emergency Management Agency Task Force told Ushahidi: “No matter what anyone else tells you, don’t stop mapping. You are saving lives.”

An independent evaluation of the Ushahidi Haiti Project said the section of its work concerned with assessing the impact of the initiative was “supported with the weakest evidence base”. It lists a handful of “possible” and “probable” examples of people being rescued, taken to hospital or sent supplies.

Why isn’t there more evidence of impact?

One very simple reason – the crowd-sourcing efforts happening in living rooms in the United States were completely disconnected from the U.N.-led relief response in Haiti.

It wasn’t the U.S. military who coordinated the search-and-rescue teams piling into Port-au-Prince, or the many U.N. agencies and NGOs who followed in their wake. That was the job of the United Nations, led by the Office for the Coordination of Humanitarian Affairs (OCHA).

Even MapAction, OCHA’s official crisis-mapping partner, didn’t know what Ushahidi and the other crowd-sourcers were doing. Camped in their tents, first at the airport and then at the U.N. Logistics Base, they didn’t have the bandwidth either.

Of course, it’s possible to imagine a future in which things are more joined up, and much is being done to make sure that happens. For example, the interagency Communicating with Disaster-Affected Communities (CDAC) Network is trying to bridge the divide between the humanitarian sector and new info-tech players.

The real problem isn’t coordination. It’s a flaw in the model of response that crowd-sourcing seems to promise.

Think of this model as “retail relief”, in which specific services – food or water distributions, say – are delivered to individuals based on individual needs (assuming they have a mobile phone to text in those needs).

In fact, the vast majority of disaster relief is delivered wholesale. This makes sense, because in a crisis zone like Port-au-Prince after the quake, nearly everybody has needs.

If you looked at the Ushahidi maps of Port-au-Prince, or AlertNet’s EIS maps, you saw the same kinds of messages everywhere. All they told you was that the whole city was in trouble.

Getting food, water, shelter and medical care to hundreds of thousands of people in an earthquake or tsunami zone is not a series of surgical strikes, made on a first-call-first-served basis. It’s a grinding, blunt, often slow process, involving massive needs assessments over wide areas.

It means overcoming huge logistical hurdles. It involves coordinating hundreds of aid agencies and NGOs across a broad sweep of sectors.

That’s why the job falls to the United Nations – assuming the sovereign government of the country concerned calls for international help, as the Haitian government did.

Many traditional humanitarian players see crowd-sourcing as an unwelcome distraction at a time when they are already overwhelmed. They worry that the noise-to-signal ratio is just too high.

Committed crowd-sourcers counter that the solution is not less technology – but more.

Patrick Meier, director of crisis mapping at Ushahidi, writes that the Syria project is leading the way with a mixture of automated data mining and crowd-sourced human intelligence.

At a CDAC Network “media and technology fair” last week, John Crowley, a research fellow at the Harvard Humanitarian Initiative, said such approaches were the Holy Grail. The trick is to combine sophisticated algorithms with human feedback loops.

“That’s where the cutting edge is, and ultimately we’re not there yet,” he said.
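For what it’s worth, the loop Meier and Crowley describe can be illustrated in miniature: an automated filter flags messages, a human reviewer corrects its mistakes, and the corrections feed back into the filter. The keywords, messages and crude learning rule below are all invented; real systems are far more sophisticated.

```python
# A toy "algorithm plus human feedback loop". The filter starts with a
# small keyword set and learns, crudely, from human corrections.
flagged_keywords = {"trapped", "injured"}

def auto_flag(text):
    """Automated first pass: flag a message if it contains a known keyword."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in flagged_keywords)

def human_feedback(text, machine_says, human_says):
    """Fold a reviewer's correction back into the keyword set."""
    if human_says and not machine_says:
        # The machine missed an urgent message: learn its words.
        # (Crude on purpose - it will also pick up stopwords.)
        flagged_keywords.update(text.lower().split())

msg = "people buried under collapsed school"
machine = auto_flag(msg)            # False: no known keyword matches
human_feedback(msg, machine, True)  # a reviewer marks it urgent
# auto_flag(msg) now returns True: the filter has learned from the human.
```

The interesting part is the division of labour: the algorithm supplies scale, the humans supply judgment, and each pass through the loop narrows the gap between them.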

Until then, the jury is still out, at least for me. I’d be happy to be proved wrong, and would welcome real evidence of lives saved.

If aid agencies are to invest time and resources in handling torrents of crowd-sourced information in disaster zones, they should be confident it’s worth their while.

Picture credit: A man rents mobile phone chargers by the hour in downtown Port-au-Prince January 17, 2010. REUTERS/Eduardo Munoz
