Microcosmographia by William Van Hecke

Visible UI Had Better Respond to Input Right Away

A while back I wrote a post about how unexpected UI shouldn’t respond to input so fast. That’s a sort of counterpart to my normal complaint, which is that visible UI had better respond to input right away. This is a more fundamental complaint — it affects how fast and reliable the entire system feels.

Maybe the most remarkable thing to me about the original iPhone, and the subsequent evolution of iOS, was how every single gesture you made was reflected instantaneously and realistically on the screen. There was no lagging, no obvious dropped frames, no hanging. Maybe the most iconic example of this dedication to responsiveness was the way Safari and Maps drew placeholder patterns when you scrolled beyond what had been loaded — rather than slowing down, the system would always go exactly where you wanted and then catch up.
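Here is a rough sketch of that placeholder trick in UIKit terms. This is my own illustration under stated assumptions, not anything Apple actually shipped: rows appear the instant you scroll to them, drawn as neutral tiles, and fill in once their (simulated) content arrives. Scrolling never waits on the load.

```swift
import UIKit

// Sketch of the placeholder idea (an assumption-laden illustration, not
// Apple's Safari/Maps code): the scroll view always goes where the finger
// says, and rows whose content hasn't loaded yet draw a neutral placeholder
// until the data catches up.
final class FeedViewController: UITableViewController {
    private var loadedRows: [Int: String] = [:]   // row index -> loaded content
    private let totalRows = 1000

    override func tableView(_ tableView: UITableView,
                            numberOfRowsInSection section: Int) -> Int {
        totalRows
    }

    override func tableView(_ tableView: UITableView,
                            cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "cell")
            ?? UITableViewCell(style: .default, reuseIdentifier: "cell")

        if let content = loadedRows[indexPath.row] {
            cell.textLabel?.text = content
            cell.contentView.backgroundColor = .systemBackground
        } else {
            // Placeholder: the row is on screen instantly, even though its
            // content isn't ready. Scrolling is never blocked.
            cell.textLabel?.text = nil
            cell.contentView.backgroundColor = .secondarySystemBackground
            load(row: indexPath.row)
        }
        return cell
    }

    private func load(row: Int) {
        // Simulated slow fetch standing in for network or disk I/O.
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
            self.loadedRows[row] = "Row \(row)"
            self.tableView.reloadRows(at: [IndexPath(row: row, section: 0)],
                                      with: .none)
        }
    }
}
```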

Any time you hit a button, its surface would darken or highlight that very instant. The resulting command might take a while to happen, but at least you knew it was coming. You didn’t have to wonder whether you’d missed and needed to tap again. In my book I elaborate on what happens when an interface fails to do this, and label it the Moment of Uncertainty.
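A minimal sketch of that pattern in UIKit, assuming a hypothetical performSlowCommand() standing in for whatever the button actually triggers: acknowledge the tap the instant it happens, then run the slow work off the main thread so the screen never stops responding.

```swift
import UIKit

// Minimal sketch, not from the post: give feedback immediately, defer the
// slow command. `performSlowCommand()` is a hypothetical stand-in.
final class CommandButtonController: UIViewController {
    private let button = UIButton(type: .system)
    private let spinner = UIActivityIndicatorView(style: .medium)

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBackground

        // UIButton already highlights on touch-down for free; the rest of the
        // pattern is acknowledging the tap right away and deferring the work.
        button.setTitle("Send", for: .normal)
        button.frame = CGRect(x: 40, y: 120, width: 120, height: 44)
        button.addTarget(self, action: #selector(didTap), for: .touchUpInside)

        spinner.frame = CGRect(x: 180, y: 120, width: 44, height: 44)

        view.addSubview(button)
        view.addSubview(spinner)
    }

    @objc private func didTap() {
        // Feedback first: the user knows the tap was registered.
        spinner.startAnimating()
        button.isEnabled = false

        // The slow part happens off the main thread; the screen stays live.
        DispatchQueue.global(qos: .userInitiated).async {
            performSlowCommand() // hypothetical long-running work
            DispatchQueue.main.async {
                self.spinner.stopAnimating()
                self.button.isEnabled = true
            }
        }
    }
}

// Hypothetical placeholder for whatever the button actually triggers.
func performSlowCommand() {
    Thread.sleep(forTimeInterval: 2)
}
```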

(Meanwhile, on Mac OS X, until recently the cursor had always responded instantly and precisely to mousing, no matter how busy the system was. Now it often stutters or fails to map directly to your input. You used to be able to start interacting with a sheet while it was still finishing up its animation, by, for instance, typing a filename into the Save sheet. Now those keystrokes are simply discarded.)

On a small touch screen, it’s crucial to reassure the user that the input they tried to make matches what the system understood. Touch interfaces have to feel as realistic and as physical as possible, or else the illusion is shattered and people realize they’re not directly operating a magical thing, but that they’re actually just wishing really hard that some pictures under glass will maybe do what they wanted. You know, kind of like the bad old days of inscrutable, unreliable desktop interfaces.

Somewhere around iOS 7, it seemed like this promise of reliably realistic interaction started to come false. There are now many situations where you can clearly see and recognize the UI element you want to interact with, and successfully complete a gesture on it, in the window between when the element becomes visible and when it is actually ready for input. It seems that there used to be a rule that putting a UI element on screen meant that it would respond to input. Now, many inputs seem to be ignored.
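I can’t say which change is to blame on any given screen, but here is a sketch of one mechanism that produces exactly this window: UIKit’s block-based animations ignore touches on the views being animated unless you pass .allowUserInteraction, so a control can be plainly visible for a third of a second before it will accept a tap. Opting in keeps the view live for the whole transition.

```swift
import UIKit

// Sketch of one possible culprit, not a diagnosis of any particular iOS
// screen: by default, touches on views that are mid-animation are ignored.
// Passing .allowUserInteraction keeps the panel tappable while it slides in,
// so a gesture landed during the animation isn't thrown away.
func slideIn(_ panel: UIView, over container: UIView) {
    panel.frame = container.bounds.offsetBy(dx: 0, dy: container.bounds.height)
    container.addSubview(panel)

    UIView.animate(
        withDuration: 0.35,
        delay: 0,
        options: [.allowUserInteraction, .curveEaseOut],
        animations: {
            panel.frame = container.bounds
        }
    )
}
```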

The result is that much of a user’s time in iOS is spent staring at the screen, wondering whether they need to try their input again. Maybe the thing wasn’t actually ready for input? Maybe it got the input just fine but didn’t give any feedback, and it’s just taking a while to respond? Maybe the entire system is hanging? Who knows?

You can look at these as performance bugs, or say that the animations are just too long. But what was so brilliant about early iOS was that it enforced UI design that made the system feel faster, even when it wasn’t actually fast. It never threw away your input. It waited to show you stuff until it could fulfill the promise of that stuff. So even when performance was slow, at least you could trust the system to let you know when it was ready, and to try to accept your input while it was working on something else.

If I were in charge, I’d make this a priority again. WebKit famously has a zero-tolerance policy for performance regressions. If a new feature or even a bug fix would make WebKit slower, it’s not accepted. I think iOS needs a policy along these lines, and badly. No screen, especially in a first-party app, should ever present UI that is not ready for input.