MediaFile

More like a whisper than a bang

October 6, 2011

By John Abell
The opinions expressed are his own.

(This column was written hours before the tragic news of Steve Jobs’ passing. For my thoughts on him, please see ‘We All Called Him Steve …‘ and for my reaction to his stepping down as Apple CEO in August, ‘A World Without Steve Jobs‘.)

There was lots of digerati (and shareholder) angst over the release of the iPhone 4S — so much I won’t even bother linking to any of it. It’s all over Twitter and every other social network, and it’s the talk of suddenly all-knowing TV anchors.

Look: The iPhone is an experience delivery system. Hardware is perfect when it disappears, doesn’t get in the way, expedites. In and of itself a computer is a brick. The operating system and the apps — that’s where it’s at. That’s why the iPhone has such resonance, not because it looks cooler than anything else out there. It’s the same with tablets, which is the primary reason the iPad rules.

With Apple’s announcement of the not-quite iPhone 5 came news of a big upgrade to iOS for all iPhones, with special enhancements for 4S handsets. So let’s all get over the lack of change in form factor, or the fact that only the software, not the handset, includes a “5” in the name.

Also, just so you know, no amp really goes to 11. Sheesh.

The 4S (and iOS 5) does include a couple of odd retro features: the ability to send postcards via snail mail, and camera improvements that make it possible to make “perfect” 8×10 prints. Prints?

But the underwhelming part is in what could be the start of the most overwhelming new experience metaphor since the introduction of the iPhone four years ago: Instead of just talking with one, Apple wants you to start getting used to the idea of talking to one.

Siri, voice command software that began as a DARPA project, is a daring bid into an area that is not, in my view, so much a quest for the Holy Grail as it is for Moby Dick. There aren’t a ton of situations where talking at your phone is preferable to tapping it — heck, in most public settings people modulate their voices because they don’t want those nearby to eavesdrop on their awkward conversations, or simply because they’re trying to be polite. The unresponsiveness of voice tech in, say, customer support call centers is a modern-day cliché. So now we’ll want to ask what the time is in Paris?

As a personal assistant, Siri does have possibilities. It could replace those digital recording pens, letting you make audio Post-it notes on the fly that actually do remember things and poke you later, and it’s way more convenient than fumbling for pen and paper or typing something you’ll forget to look at later. Barking “Wake me at 8 am!” at an inanimate object is much healthier than issuing that order to your spouse, and it avoids one of those awkward conversations you don’t want anyone to overhear.

Voice control is an obvious must in the driver’s seat, but in most other contexts it’s a nice-to-have, at best. I can see spoken commands being a boon in the living room, as part of a truly universal remote. In the age of 500 channels, sometimes all you know is the program you want to see, not the time(s) or the channel it’s on. TV remotes tend not to have keyboards (which is a good thing, as Google TV recently helped to establish), and the hunt-and-peck method of narrowing down program possibilities is tedious. Being able to say “Mad Men” and have a list of times appear closes an enormous chasm.

I’ve had a lust for voice command for close to 30 years: In the 1980s I bought a voice-dialing phone that almost always dialed a wrong number, and I sent it back. A Compaq computer issued to me in the early ’90s had voice control and response built in. It was so clumsy and verbose that co-workers literally unplugged it one night in my absence. I lost tons of important data, and never used that functionality again. Google 411 was fun, and mostly worked, but I hardly used it. I bought an overpriced and underwhelming dictation program 20 years ago that I never actually used because it was going to take hours to train it to recognize its master’s voice — something the RCA dog had no trouble doing instinctively.

And that’s the point. We think fast. Faster than we realize. Ever see closed captioning of live events on TV? Painful. Even slowing down one’s patter, repeating oneself or having to rephrase is enough to drive some people crazy. Unlike on the bridge of the USS Enterprise, listening computers are outclassed every time. Well, almost: IBM’s Watson showed us that semantic interaction with a machine that is more knowledgeable than you is possible.

Now we have to shrink and price that down so everyone can talk to a computer when it makes sense — any computer, anywhere — and have the computer make sense of what it’s told.

“What we really want to do is talk to our device, and get a response,” Apple SVP Phil Schiller said of Siri at Tuesday’s event. “We don’t want to be told how to talk to it, we want to talk to it however we like.”

If Siri is a first step in that direction, that would be quite an achievement in the mass consumerization of this promising interface. But I suspect we’re still years away from anything that could be routinely used by anyone other than the angst-ridden digerati. Which is the real reason this iPhone update is more like a whisper than a bang.

Follow +John C Abell on Google+.

Photo: Philip Schiller, Apple’s senior vice president of Worldwide Product Marketing, speaks about Siri voice recognition and detection on the iPhone 4S at Apple headquarters in Cupertino, California October 4, 2011. REUTERS/Robert Galbraith
