A smartwatch isn’t a HUD, and a heads-up display isn’t a watch. They might fight for media attention, but they sure don’t fight for utility. Here’s how two beats one.
I’ve been wearing a Pebble and Glass together since January 2014, and it’s worked out great. The combination is powerful, more so than when I just wear one, and there’s way less overlap than you’d think.
Here’s why the smartwatch won’t kill Glass – in fact, here’s why it’ll get smartwatch wearers to adopt Glass. A linkbait topic gets a linkbait article, so here’s a top five.
Reason 5: Notifications Need Redundancy. The whole point of a notification is that it could be important right now. If it’s worth seeing a notification when your phone is in your pocket, it’s worth having fallbacks for different circumstances.
When my hands are on the wheel, my watch isn’t helpful and I want my notifications in front of me with Glass. If I’m in a conversation and I put my Glass away, I want my notifications on my watch in case of an emergency. There are intermediate cases, too – my watch hand is holding a cup of coffee so I check Glass, or Glass doesn’t support a notification that appears on my watch. Redundancy means I’m more robust to circumstances.
Things diverge when you want to act on the notification. You don’t have many options on watches – anything more intense than canned responses or opening the app on the phone is unusably clunky. HUDs have more flexibility, with more sensors, display area, and voice power. Glass’ voice and touch commands are fine for glancing at an emailed site or dictating a short reply to an SMS. Longer notifications are far easier on Glass, with a large readable area and a speaker for text-to-speech fallback.
Watches have an edge in social acceptability. It’s almost always appropriate to wear and check a watch, even when a momentary phone check is a major faux pas.
Reason 4: You Can’t NOT Look at Glass. I mean, the damn thing is always floating above your eye. Any app that relies on visual persistence is straight-up unsuitable for a watch. That means to-do lists, navigation apps, medical aids, accessibility tools, augmented reality, and note-taking excel on Glass and are clunky or unusable on a watch. I don’t want to take my hands off the wheel and my eyes off the road just to see that I’ve missed my turn. The persistent display is what makes these apps work.
That said, visual persistence is a double-edged sword. Constantly glancing at a static display is annoying, fatiguing, and possibly dangerous. Sometimes users should take Glass off and recover, but that’s no excuse to lose all of a wearable’s power. Watches can do a good “diet Glass” impression that’s less intense but still helpful.
Reason 3: Watch Actions are Physical. Well, most watches. This is my favorite part of the Pebble and my DIY SmarTwatCh – the combination of deterministic states and physical buttons means I can activate some features reliably without even looking at the watch. On the Pebble, three clicks starts the music. On the SmarTwatCh, flipping the switch and pushing a button turns on the flashlight. It’s insanely fast and immediately available.
Glass does this with the camera button, but the other features rely too deeply on soft, contextual functionality to create muscle memory. Mapping common one-shot actions to hard buttons creates a powerful UX that’s easy to use in the heat of (metaphorical) battle.
This could be a big problem with Android Wear. It relies on swiping through semi-deterministic cards and voice commands, which sacrifices the power of physical interfaces. Just like taking a picture became a bitch when phone designers cut the camera button, losing rapid actions hurts a smartwatch. Hell, a mundane watch lets you start a timer with one tap.
It’s rule #1 of UX: Whatever is easiest is done most often. If you can start music in three taps, you’re going to listen to more music.
Reason 2: Dividing Actions Among Hardware is Natural. Humans are good at spreading tasks and information across devices. In college, I placed my notebook next to a textbook. Now, I put datasheets on one monitor, an IDE on another, a to-do list on Glass, texts on my phone, and the time on my watch. Role assignments work.
It’s human nature to assign functionality to our tools. Even if your watch, phone, wearable, and tablet can all act as GPS navigators or cameras, you probably prefer one for each job. Each device can then fill its role at the same time, so I can get directions on Glass while I control music with my Pebble and talk on my Bluetooth headset.
When functionality can be distributed, there’s a one-to-one mapping between the number of devices you carry and the number of tools you can use simultaneously. More devices means more power.
Reason 1: Google Probably Isn’t Favoring Wear Over Glass. It sure seemed that way from I/O 2014, but neither project originated in Google proper. Google inherited Motorola Mobility’s long-running smartwatch program, and Glass is a Google[x] project that the company keeps at arm’s length. Both projects are under active development, even if one team built expensive hardware and the other gave away reference designs that became products.
A number of key people, especially in developer relations, are shared between the two teams. Word on the street is that Google is putting emphasis on unifying the platforms internally. The ball is just rolling too quickly on Glass for Google to abandon it while keeping their hands clean.
As for I/O, I suspect that Google just wanted to avoid the bad press of putting Glass onstage and to build hype for the Wear launch. They ran almost as many Glass sessions as Android Wear sessions. Sure, Glass wasn’t integrated into Google’s device-strategy charts, but Google[x] is barely integrated into Google proper. None of Google’s Android Auto demos were self-driving, and none of the cloud data was delivered by balloons. It’s the nature of skunk works.
In conclusion: A watch ain’t a HUD, and a HUD ain’t a watch. The smart user keeps their options open and picks the right tools for the job, and I believe that involves multiple wearables.