The Future of UX & UI Innovations

There was a fantastic stream of thought on Twitter last week about UX & UI design; it all started thanks to this tweet by @kellabyte, a Canadian developer with an uncanny knack for stirring up the masses and making them think.

[tweeted]http://twitter.com/kellabyte/status/27254194610[/tweeted]

What followed was a torrent of ideas on how to improve user experience, user interfaces & user interaction in terms of both the software & the hardware we use on a daily basis. Apparently Crowdsourcing + Brainstorming = Crowdstorming, and a lot of the credit for this post goes to the following Twitter folk: @robertmclaws, @cromwellryan, @uliwitness, @Montagist, @DavidQMora, @BenPittoors, @kellabyte and others. A lot was covered over the course of the chat (preserved here as a CSV file for posterity), but a few key things rang true with me, so... stream of consciousness follows.

Stagnant UX


My company develops web app software for mobile devices, so we've seen a number of innovative solutions to user experience problems over the years. Simple concepts like swiping a mobile touch interface to scroll, which we already take for granted, came about because of very fast changes in form factor (touch interface, small dimensions, high resolution) and out of necessity (a scroll bar on a mobile device would be extremely finicky to use).
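As an aside, the mechanics behind swipe-to-scroll are simple enough to sketch. Here's a rough TypeScript example (the "content" element id is hypothetical, and real mobile browsers handle this natively; the point is just how finger movement maps to a scroll offset):

```typescript
// A minimal swipe-to-scroll sketch, assuming a scrollable element with
// the hypothetical id "content". Track the finger and translate its
// vertical movement into a scroll offset.
const content = document.getElementById("content")!;
let lastY = 0;

content.addEventListener("touchstart", (e: TouchEvent) => {
  lastY = e.touches[0].clientY;
});

content.addEventListener("touchmove", (e: TouchEvent) => {
  e.preventDefault(); // take over from the browser's native scrolling
  const y = e.touches[0].clientY;
  content.scrollTop += lastY - y; // finger moves up => content scrolls down
  lastY = y;
});
```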

However, in the desktop environment, the physical devices and the overall user experience have been fairly stagnant for quite a long time. Sure, we've gotten nice Aero Glass graphics and smarter launchy-boxes in our Start Menus. We've gotten scroll wheels & non-standard function buttons on our input devices. But there hasn't been any fundamental overhaul of how we interact with a computer in 20 years. Voice commands are still a novelty outside of accessibility circles and other niches. We still use the tried and true keyboard & mouse combo for 99% of our actions. And if you want to perform an innocuous task such as resizing a window, you still have to aim your mouse pointer at a 4px border and hope your aim is true. Necessity is the mother of invention, so perhaps part of the problem is that there isn't a necessity for massive change to the user experiences we've grown so accustomed to. But is there anything we can do to improve user experience?

  1. Make additive changes to the existing GUI software/OSes
  2. Make additive changes to the existing input devices
  3. Change how we use existing input devices with existing GUI software/OSes
  4. Fundamentally overhaul the entire user interaction stack (hardware, software, and behaviour)

Let's call #4 the Minority Report Option. It falls into the same "wouldn't that be cool" category as hoverboards in Back to the Future, and we will no doubt see those kinds of significant interactivity changes at some point in the near future (I recently saw an advertisement for Dell, who are now rolling out their all-in-one touch-screen home PCs). Regardless, hardware & software evolution has taken us in a particular direction. There are ~1 billion computer users in the world with keyboards & mice, and those Dell AIOs still ship with a keyboard & mouse. So can we do something with the existing hardware & software to improve usability & user experience?

Altering existing user experience is a tug of war

One example given was the idea of improving how we perform inaccurate or low-accuracy actions. Closing your screensaver with any mouse motion, or scrolling your browser with the mouse wheel while the cursor is anywhere in the window, are low-accuracy (and hence simple) actions. On the other hand, closing your Word document is a high-accuracy action. First you have to click a small "X" in the upper right-hand corner of your screen; most likely you also need to hit a "Save" button back in the centre of the screen. "KEYBOARD SHORTCUTS!!!", I hear you scream from the bleachers. Yes, true, anybody who's spent any significant amount of time in front of a computer probably knows their ALT-F4 & WIN-D combos to increase their productivity. But what about an action like resizing a window? It's a multi-step action: you need to move the pointer to a 5 or 6 pixel wide border, click, hold & drag. If the window is maximized, there's twice as much interaction needed. No doubt tools exist out there to allow you to perform this task using some finger gymnastics instead.

There are other, more innovative solutions to the inaccurate-actions issue. Take the Mac OS X Dock for example. Apple introduced a magnification feature so that as the mouse pointer approaches the Dock, the icons in closest proximity to the pointer increase in size. As well as being aesthetically pleasing, it alters one of the key inputs in Fitts's Law (the target width) and makes that action require less accuracy. Back to our window-resizing example: could we dynamically alter the size of that window border (or maybe alter the size of the "hit-zone" without altering its appearance) as its proximity to the pointer changes? Would this make it easier to perform that action? Probably, but then you run into a completely different problem around context, which I'll talk about below. My main issue with all of the above is that for every issue raised, you make a counter-proposal, which raises another problem, and each "innovation" feels more like a sticky-taped-on improvisation.
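Just to make the Fitts's Law point concrete, here's a rough TypeScript sketch using the common Shannon formulation of the law, ID = log2(D/W + 1), where D is the distance to the target and W is the target's width. The hitZoneWidth function and its numbers are entirely hypothetical, purely to illustrate the proximity-based border idea:

```typescript
// Fitts's index of difficulty (Shannon formulation): lower is easier.
function indexOfDifficulty(distance: number, width: number): number {
  return Math.log2(distance / width + 1);
}

// Hypothetical proximity-based hit-zone: a 4px window border whose
// clickable width grows towards maxWidth as the pointer approaches
// within `range` pixels of it.
function hitZoneWidth(
  pointerDistance: number,
  baseWidth = 4,
  maxWidth = 16,
  range = 100
): number {
  if (pointerDistance >= range) return baseWidth;
  const t = 1 - pointerDistance / range; // 0 = far away, 1 = on the border
  return baseWidth + (maxWidth - baseWidth) * t;
}

// A 200px mouse move to a plain 4px border vs. the expanded hit-zone:
console.log(indexOfDifficulty(200, 4));               // ~5.67 bits
console.log(indexOfDifficulty(200, hitZoneWidth(0))); // ~3.75 bits
```

That's the same trick the Dock's magnification pulls: leave the distance D alone and attack the width W instead.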

Context

What I mean by context is that the same repeatable behaviour can have completely different results depending on some other modifying factor. Take left-right mouse movement in Windows, for example:

  • If you place your mouse over a desktop icon & move it, the pointer moves.
  • If you repeat that motion, while left clicking, you move the icon.
  • If you repeat that motion, while left clicking and holding CTRL, you make a copy.
  • If you repeat that motion, while left clicking and holding SHIFT, you make a shortcut.

We already use these context-modifiers all the time: combinations of keys, clicks, holds & even mouse movement gestures to change how our user interface behaves. But, and this is my humble opinion, any additional "make-it-easier" context-modifier has to be in addition to the ones that already exist. You can't just replace or change them; the OS manufacturer can't suddenly decide that CTRL+Click doesn't copy icons anymore but instead increases the icon size to make them easier to click. Or people will complain. A lot. Which leaves us with what options? Add a new key combination, or add a completely new key to the keyboard (the Windows2 key). And if you're just adding another modifier, are you really innovating? Adding another "magic button" to the keyboard/mouse just feels like a kludge job to fix a bigger issue.
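For what it's worth, here's roughly what that existing modifier stack looks like in code: a hedged TypeScript sketch using the standard HTML5 drag-and-drop API, mirroring the Explorer conventions above (the element ids are made up for illustration):

```typescript
// Plain drag = move, CTRL + drag = copy, SHIFT + drag = shortcut/link.
// Assumes the icon element has draggable="true"; ids are hypothetical.
const icon = document.getElementById("desktop-icon")!;
const target = document.getElementById("drop-target")!;

icon.addEventListener("dragstart", (e: DragEvent) => {
  e.dataTransfer?.setData("text/plain", icon.id);
});

target.addEventListener("dragover", (e: DragEvent) => {
  e.preventDefault(); // mark this element as a valid drop target
  if (!e.dataTransfer) return;
  // The same mouse motion, re-interpreted by whichever modifier is held:
  if (e.ctrlKey) {
    e.dataTransfer.dropEffect = "copy"; // CTRL => copy
  } else if (e.shiftKey) {
    e.dataTransfer.dropEffect = "link"; // SHIFT => shortcut
  } else {
    e.dataTransfer.dropEffect = "move"; // no modifier => move
  }
});

target.addEventListener("drop", (e: DragEvent) => {
  e.preventDefault();
  const id = e.dataTransfer?.getData("text/plain");
  const effect = e.ctrlKey ? "copy" : e.shiftKey ? "link" : "move";
  console.log(`${effect}: '${id}' dropped onto ${target.id}`);
});
```

Note how every new behaviour has to squeeze into whichever modifier keys are still free; that's exactly the crowding problem.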

The Minority Report Option

Personally, I find it hard to believe that the UI/UX we've grown used to, especially in Microsoft Windows, will change significantly without a significant leap in both input device technology & the access people have to those new devices. Don't misunderstand me: I still believe there is a place for the mouse & keyboard; they are tried and true and aren't going anywhere. And it's well established that people like that tactile, clicky, physical interaction. However, UX advancements and innovations in mobile devices came about primarily because of the hardware (or the limitations of the hardware... T9 dictionaries, anyone?). So let's add to the desktop experience.

Your keyboard is now a touch interface. It still has tactile, pressable keys, but there's a surface skin over them that allows you to swipe across the keyboard as a whole. It's programmable, similar to the Optimus keyboard, so the symbols can change based on software context: if you change your font from Calibri to Wingdings, the keyboard changes the symbols to match.

Your OS now supports a WP7-style Panorama GUI, and you can use a left or right swipe of your keyboard surface to move from one Hub to another (or maybe to move focus from the application on Screen 1 to the app on Screen 2 in a multi-monitor setup).

Your mouse is now a touch interface too, similar to the new Apple Magic Mouse. It still retains all the existing functionality that a mouse provides, but at a twitch you can place your fingers on its "back" and perform pinch, zoom and finger-motion gestures which are either contextually related to the software you're currently using, or context-agnostic and related to some aspect of the OS in general. It also has a built-in gyro/altimeter so you can free it from the 2D plane of your desk. When using a Map/Earth application, you move the device forward for north, left for west, and you lift the mouse off the desk to zoom out, or apply additional pressure to it on the desk to zoom closer.

Your screen is now a large touch-surface interface. Sure, if you want to go old-school you can click the X to close that window, or click the borders to resize. But why not place five fingers on the centre of the monitor to move the window, use pinch gestures to resize it, and, to close it, simply throw it upwards, out of the screen?
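That pinch-to-resize idea is sketchable with the Pointer Events model. A rough, hypothetical example (the "app-window" id is invented, and a real window manager would sit well below the browser):

```typescript
// Pinch-to-resize sketch: two active pointers define a distance, and as
// that distance changes, the "window" scales with it. The element needs
// CSS touch-action: none so the browser doesn't swallow the gesture.
const win = document.getElementById("app-window")!;
const pointers = new Map<number, { x: number; y: number }>();
let startDistance = 0;
let startWidth = 0;

function pinchDistance(): number {
  const [a, b] = [...pointers.values()];
  return Math.hypot(a.x - b.x, a.y - b.y);
}

win.addEventListener("pointerdown", (e: PointerEvent) => {
  pointers.set(e.pointerId, { x: e.clientX, y: e.clientY });
  if (pointers.size === 2) {
    startDistance = pinchDistance();
    startWidth = win.clientWidth;
  }
});

win.addEventListener("pointermove", (e: PointerEvent) => {
  if (!pointers.has(e.pointerId)) return;
  pointers.set(e.pointerId, { x: e.clientX, y: e.clientY });
  if (pointers.size === 2) {
    // Pinch out => grow the window; pinch in => shrink it.
    const scale = pinchDistance() / startDistance;
    win.style.width = `${startWidth * scale}px`;
  }
});

win.addEventListener("pointerup", (e: PointerEvent) => {
  pointers.delete(e.pointerId);
});
```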

Unfortunately, in your living room a 40-inch touch-screen TV isn't feasible; who wants to get off the couch to "touch" the telly? Thankfully the 3D TV menus and the Kinect bar on top allow you to simply navigate the channel guide with a swish of your hand or a voice command.

Or am I crazy?

Eoin Campbell
