Weird input methods

Responsive designs need to take into account more than just screen size.

Our biggest challenge as web developers is a side effect of the very best thing about the web: its ubiquity. An iOS app developer has a much better idea of which devices, input paradigms and screen sizes their app will be used with, whereas our web builds have to work for everyone, on any device -- and there’s no way we can possibly test every device.

Not only this, but it’s not really possible for us to detect how a device is being interacted with. Your site is being viewed at a resolution of 1920x1080? Well, is that a desktop computer, or a smart TV, or a game console? Even if it is a desktop, does the user have a mouse, or a touchscreen, or are they using a Leap Motion or Myo armband for gesture-based controls? How far away from the screen are they? Users with games consoles could well be on the sofa across the room from their TVs - will your site remain legible at distance?

Edge cases, right?

  • 32% of 16-24 year olds use a games console to access the web, according to a recent Ofcom report, and smart TVs and set-top boxes like the Amazon Fire TV, which you can install Firefox on, are finding their way into more and more homes.
  • Tiny devices like the Apple Watch and Google Glass can view the web in a tiny way, with some very non-standard controls that mostly involve zooming and panning the page to keep it inside the tiny viewport.
  • Gesture controls are starting to hit the mainstream, with devices like the Leap Motion (and plugins for it like Leap Touch) allowing anyone with a plain old desktop or laptop to use gestures to interact with the web, and other devices (like the impressive-looking Myo armband) building further on this paradigm.
  • The Xbox One’s Kinect can not only be used for gesture controls; its voice recognition features also allow a user to activate links just by reading out the link text. How long before this comes to Siri or Cortana, too?

But it’s not only the cool new devices on the block that we need to consider. There’s one input device that’s been around since the 1870s, and that’s the humble keyboard. By testing with this, we cover some of these edge cases anyway…

Process

We’re doing our design and build on a traditional computer, and even when we are building mobile-first - with a slimmed-down browser window, keeping interactive elements large enough for fat fingers to hit - we’re still building actions that work on click and hover. There’s more to it than this. Modern multi-touch devices allow swipes and pinches, not just simple taps. While making sure your design works fine with a mouse (and therefore with simple touch) is a given, there’s often an extra mile you can go.
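For instance, layering basic swipe support on top of existing click handling only takes a handful of lines. Here’s a rough sketch - the #carousel id and the next()/prev() functions are placeholders for this example, not from any particular library:

    // A minimal swipe sketch layered on top of normal click handling (TypeScript).
    // The #carousel id and next()/prev() are placeholder names for this example.
    function next(): void { /* advance to the next slide */ }
    function prev(): void { /* go back to the previous slide */ }

    const carousel = document.getElementById('carousel')!;
    let startX = 0;

    carousel.addEventListener('touchstart', (e: TouchEvent) => {
      startX = e.changedTouches[0].clientX;
    }, { passive: true });

    carousel.addEventListener('touchend', (e: TouchEvent) => {
      const deltaX = e.changedTouches[0].clientX - startX;
      if (Math.abs(deltaX) < 50) return;   // too small to be a deliberate swipe
      if (deltaX < 0) next(); else prev(); // swipe left = next, swipe right = back
    });

Clicks and simple taps keep working exactly as before; the gesture is purely an enhancement on top.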

All too often, the humble keyboard is ignored as well. People use keyboards to access the web every day - some, due to accessibility concerns, have little choice in the matter. Making sure your content can be tabbed through, that you get good visual feedback on where the keyboard focus is, and that drop-down menus still function takes very little work, but for some people it’s the only way they can use your website.
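As a small, hypothetical example, a disclosure-style drop-down only needs a real button, an aria-expanded attribute and an Escape handler to behave sensibly from the keyboard (the element ids below are made up):

    // A drop-down made keyboard-friendly (TypeScript). Element ids are illustrative.
    const trigger = document.getElementById('menu-button') as HTMLButtonElement;
    const menu = document.getElementById('menu') as HTMLElement;

    function setOpen(open: boolean): void {
      trigger.setAttribute('aria-expanded', String(open));
      menu.hidden = !open;
    }

    // A real <button> already fires click on Enter/Space, so this covers keyboards too.
    trigger.addEventListener('click', () => setOpen(menu.hidden));

    // Escape closes the menu and hands focus back to the trigger.
    document.addEventListener('keydown', (e: KeyboardEvent) => {
      if (e.key === 'Escape' && !menu.hidden) {
        setOpen(false);
        trigger.focus();
      }
    });

Pair that with a visible :focus style in the CSS and keyboard users can actually see where they are on the page.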

Unexpected benefits

The real benefit of making these tweaks becomes more obvious when you think about those users with gesture controls, or using a game console to access the web.

Controls for newer input types are often modelled on what came before, to allow for a level of backwards-compatibility. Game controllers, for example, will often emulate the tab key on a keyboard to step through and select links; in much the same way, optimising for touch controls will often benefit users with gesture-based controllers too.

And what about an HCI method that may well become the next big thing - voice control? If a natural language parser can’t tie the commands it’s receiving back to the underlying HTML of a page, voice controls become useless very quickly. For example, the Xbox One’s Kinect can ‘click’ on links based on you reading out a portion of the linked text. Using an image or icon in between your anchor tags, rather than text, renders this functionality completely useless (another nail in the coffin for the hamburger menu?). But wait - isn’t this bad accessibility practice anyway, messing with screenreaders etc.? Turns out that testing with screenreaders has unexpected benefits, too.
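One rough way to spot trouble on an existing page - purely an illustrative console sketch, not something from the original piece - is to flag any links with no readable text for a voice engine or screenreader to latch onto:

    // Flag links that contain no text for voice control or screenreaders to use.
    document.querySelectorAll('a').forEach((link) => {
      const readableText =
        (link.textContent || '').trim() ||
        (link.getAttribute('aria-label') || '').trim();
      if (!readableText) {
        console.warn('Link with no readable text:', link);
      }
    });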

Weird and wonderful browsers

It's not just weird HCI methods we need to think about, but weird browsers with differing capabilities too. Consider that all game consoles these days ship with browsers, with varying degrees of Acid test compliance. There are even browsers on refrigerators!

It's been said a thousand times but bears repeating: practise progressive enhancement. There's no way you can test on everything, so building for the lowest common denominator and adding niceties for more capable systems is the only way to get your site working reliably everywhere. Often, I'll build the first iteration of a page in a VM running IE, before swapping to Chrome or Firefox to improve the design - rather than building in my preferred browser first and then spending hours cursing Microsoft's name.
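In practice that mostly means feature detection rather than guessing at browsers. A tiny, illustrative sketch (the element id and class name are invented for this example):

    // Progressive enhancement in miniature: the baseline markup works everywhere,
    // and richer behaviour is only switched on where the browser can support it.
    const slideshow = document.getElementById('slideshow');

    const capableBrowser =
      typeof document.addEventListener === 'function' &&
      'classList' in document.documentElement;

    if (slideshow && capableBrowser) {
      // Let the CSS and scripts know the enhanced experience can be enabled.
      slideshow.classList.add('enhanced');
    }
    // Otherwise nothing happens, and the plain, unenhanced page keeps working.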

Niceties

We can do more than ensure basic functionality works cross-device and cross-input-method; we can add those little niceties to this functionality in much the same way we add niceties to a design.

Consider a slideshow that opens in a lightbox, with arrows to progress through the slides and a close button up top. Ensuring that the cursor keys on a keyboard or swipes on a touchscreen also progress the slides, and that pressing Escape or tapping outside the lightbox closes it, instantly adds a level of polish to the feature as well as providing a completely necessary level of usability for visitors on consoles or with accessibility concerns.
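The keyboard and ‘tap outside’ parts might look something like this sketch - the element id and slide functions are hypothetical, and the swipe handling from earlier would slot in alongside:

    // Keyboard and backdrop-tap support for a hypothetical lightbox slideshow.
    const lightbox = document.getElementById('lightbox')!;

    function nextSlide(): void { /* advance the slideshow */ }
    function prevSlide(): void { /* go back a slide */ }
    function closeLightbox(): void { lightbox.hidden = true; }

    document.addEventListener('keydown', (e: KeyboardEvent) => {
      if (lightbox.hidden) return;
      if (e.key === 'ArrowRight') nextSlide();
      if (e.key === 'ArrowLeft') prevSlide();
      if (e.key === 'Escape') closeLightbox();
    });

    // Clicking or tapping the dimmed backdrop (but not the slide itself) closes it.
    lightbox.addEventListener('click', (e: MouseEvent) => {
      if (e.target === lightbox) closeLightbox();
    });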

Going the extra mile, we could use HTML5’s pushState to allow the browser’s back/forward controls to progress through slides or close the lightbox, thus allowing users with game controllers or TV remotes to use the browser’s shortcut buttons for navigation, often a single button tap. In an initial version of this article (in presentation format), I added the ability to progress through slides with a game controller plugged into the computer, or by gesturing to a device’s webcam. While these may be a bridge too far (depending on your target audience and use case), it goes to show what’s possible on the web these days.
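The history part boils down to a pushState call when a slide changes and a popstate listener to respond to back/forward - again a sketch with invented function names, not the code from the original demo:

    // Let the browser's back/forward buttons (or a controller's shortcut buttons)
    // drive the slideshow. Function names are illustrative.
    function showSlide(index: number): void { /* render slide `index` */ }

    function goToSlide(index: number): void {
      showSlide(index);
      // Record the slide in the session history without reloading the page.
      history.pushState({ slide: index }, '', '#slide-' + index);
    }

    // Back/forward fire popstate; restore whichever slide was recorded.
    window.addEventListener('popstate', (e: PopStateEvent) => {
      if (e.state && typeof e.state.slide === 'number') {
        showSlide(e.state.slide);
      }
    });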