“I’m going to be the dumb user now. It took me a while to work out what this arrow key graphic was telling me” #polarb — @ajdf, Twitter
Keyboard shortcuts are still harder than you think. This UI isn’t available on Polar anymore (they’ve moved entirely to a publisher plug-in/app model), but it does a fine job of illustrating the problems with relying on keyboard shortcuts.
GoalControl’s system works with a set of 14 cameras, with seven arrayed in a semicircle on the catwalks above each goal. The cameras capture up to 500 frames per second from multiple vantage points to track the continuous position of the ball to within a centimeter or so. If the ball crosses the goal line, a digital watch worn by the referee, the two linesmen, and the fourth official at midfield flashes the word “GOAL” within a second.
A Googler from the ATAP team first alerted me to Project Prakash, a combined humanitarian and scientific effort to bring vision care to the poor and enrich our understanding of how human beings learn to see.
The results of the research are easy to understand, but will make you consider vision and visual stimuli in a whole new light.*
*‘Prakash’ translates to ‘light’, but no pun was intended.
FingerSense: A way to think with your hands
What if we stopped creating new gestures and started using more body parts? FingerSense from Qeexo can tell the difference between your finger, fingernail, and thumb, and can differentiate between the two ends of a stylus, without using Bluetooth the way Pencil does.
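To see what this buys you on the software side, here’s a toy dispatch table: one surface, several verbs, depending on what struck it. The touch labels and verb names below are mine for illustration, not Qeexo’s API; I’m assuming the platform hands the app a classification and the app only has to branch on it.

```python
# Toy sketch: branch on a FingerSense-style touch classification.
# The real classifier works from the vibro-acoustic signature of the
# impact; here we assume a label arrives and show the interaction-design
# payoff. Labels and verbs are invented for this example.

ACTIONS = {
    "finger":        "draw",    # pad of the finger: the default verb
    "thumb":         "scroll",
    "fingernail":    "erase",   # a nail tap flips to the eraser
    "stylus_tip":    "ink",
    "stylus_eraser": "erase",
}

def on_touch(touch_type: str) -> str:
    """Map a classified touch type to an interaction verb, falling back
    to the default verb for unknown classifications."""
    return ACTIONS.get(touch_type, "draw")
```

The design win is that no new gesture has to be learned: the same tap in the same place means different things because a different part of the hand made it.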
In dance, there is a practice called “marking”. When dancers mark, they execute a dance phrase in a simplified, schematic, or abstracted form. Based on our interviews with professional dancers in the classical, modern, and contemporary traditions, it is fair to assume that most dancers mark in the normal course of rehearsal and practice. When marking, dancers use their body-in-motion to represent some aspect of the full-out phrase they are thinking about. Their stated reasons for marking are that it saves energy, avoids strenuous movement such as jumps, and sometimes facilitates review of specific aspects of a phrase, such as tempo, movement sequence, or intention, all without the mental and physical complexity involved in creating a phrase full-out. It facilitates real-time reflection.
If you’re a designer of gestural interfaces, you likely spend a lot of time marking, using your body to think about the interactions you have with a screen or camera or any machine space. This paper can give you a framework for thinking about marking and argues for the importance of going beyond ‘imagining’ interaction.
The opportunity to reconceptualize things is something that mental simulation does not offer. It is a major reason “externalizing” what is in mind is a more powerful strategy than working with things in the mind alone.
Imagine interactive electronic books that never need batteries or charging.
… a new energy harvesting technology that generates electrical energy from a user’s interactions with paper-like materials. The energy harvesters are flexible, light, and inexpensive, and they utilize a user’s gestures such as tapping, touching, rubbing and sliding to generate energy. The harvested energy is then used to actuate LEDs, e-paper displays and other devices to create interactive applications for books and other printed media.
Paper Generators: Harvesting Energy from Touching, Rubbing & Sliding (by DisneyResearchHub)
Pay with your hand using vein scanning
How it works (from the customer perspective):
1. The cashier totals your purchase.
2. You look at the readout to confirm the price.
3. You enter your 4-digit PIN.
4. You put your hand on the scanner so it can read your vein structure.
5. Your purchase is added to your tab, and you’re billed twice a month by direct debit.
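The steps above can be sketched as a small state check, with the biometric and banking parts stubbed out. The class and method names are mine, not Quixter’s; only the flow order (total → confirm → PIN → vein scan → tab) and the period cap come from the description here.

```python
# Sketch of a Quixter-style checkout, with biometrics and billing mocked.
# Assumptions, flagged here because they are mine: the PIN check happens
# before the vein lookup, and a failed cap check simply rejects the sale.

PERIOD_CAP_SEK = 2500  # maximum spend per two-week billing period

class Tab:
    def __init__(self, phone_number: str):
        # The "PIN" is just the last four digits of the phone number.
        self.pin = phone_number[-4:]
        self.period_total = 0

    def purchase(self, amount_sek: int, entered_pin: str, vein_match: bool) -> bool:
        """Return True if the purchase is added to the tab."""
        if entered_pin != self.pin:
            return False  # wrong PIN: no vein lookup attempted
        if not vein_match:
            return False  # scanner requires a live, matching vein map
        if self.period_total + amount_sek > PERIOD_CAP_SEK:
            return False  # would exceed the 2,500 SEK period cap
        self.period_total += amount_sek
        return True
```

Note how little security the PIN itself carries; its real jobs are narrowing the vein-map search and forcing a deliberate pause at the readout.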
There are some limits on the service, like a maximum purchase limit of 2,500 Swedish kronor (about $375 USD) per 2-week billing period. Note that the PIN isn’t a secret set of numbers; it’s the last four digits of your phone number. It probably helps speed up the vein-map confirmation and, as noted in the video, gives the customer a chance to double-check the purchase amount.
This is a place where the interaction could be seamless without any involvement from the user (think near-field communication), but the moment of entering the PIN returns a feeling of control over the environment to the user. This reminds me of placebo interfaces, like fake thermostats in office buildings and elevator buttons that don’t work. Curious about those? Read more about the illusion of control.
Can someone chop off your hand to use it for payment? According to Quixter, the biometric tool only works if blood is pumping through your veins.
The Sichuan peppercorn that makes our mouths tingle activates the same neurons as when our foot falls asleep. Scientists are hoping the connection unlocks clues for how to turn those neurons off.
Before you think I’m abandoning touch experiences for taste-based interaction design, take a look at the subject of somatosensory systems, which “inform us about objects in our external environment through touch (i.e., physical contact with skin) and about the position and movement of our body parts (proprioception) through the stimulation of muscle and joints.” Sound familiar? Yep, this is the side of interaction design we don’t talk about a lot: how it feels to perform gestures or have a phone buzz against your leg.
The study referenced at the end of the story is especially interesting (“Food vibrations: Asian spice sets lips trembling,” by Nobuhiro Hagura, Harry Barber and Patrick Haggard). It makes me wonder how well we can sense different frequencies of vibration and whether our brains could easily interpret them as notifications or feedback mechanisms between computers and humans.
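Pure speculation inspired by that question: if skin can discriminate vibration frequencies, a device could encode the notification type in the frequency itself rather than in an on/off rhythm. The channel names and frequencies below are invented for illustration and are not from the paper; the only grounded fact is that human vibrotactile sensitivity spans very roughly 10–500 Hz, peaking near 250 Hz.

```python
# Speculative sketch: assign each notification channel its own vibration
# frequency and decode a felt frequency back to the nearest channel.
# All specific frequencies and channel names here are made up.

CHANNELS = {
    "message":  80,    # low flutter
    "calendar": 150,
    "urgent":   250,   # near peak vibrotactile sensitivity
}

def nearest_channel(felt_hz: float) -> str:
    """Decode a felt frequency to the nearest notification channel."""
    return min(CHANNELS, key=lambda name: abs(CHANNELS[name] - felt_hz))
```

Whether people could actually learn such a frequency alphabet is exactly the kind of thing the somatosensory research above might answer.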
Google Design Minutes — Search: The beauty of speed
Sometimes small changes make a big difference.
Jon Wiley, the lead designer on Search UX, has always pushed us to pursue both grand ideas and incremental changes. This video shows how those seemingly small updates add up to real improvements in the user experience. Plus, you get to see why voice UIs are so helpful, especially in the mobile-device age.
Testing Voice Search on Amazon’s FireTV set-top box
To search for a nature show, you need to:
1. Press and hold the microphone button on the remote.
2. Say the name of the show or a keyword you want to search for.
3. Select the transcribed phrase from a list.
4. Use the remote to navigate to the show you want to watch.
5. Select the show.
The good: You don’t have to type using a remote control.
Remote design: The built-in remote doesn’t have an affordance on the primary selection button. Using the mic to search saves some steps, but doesn’t feel like a game changer.
Voice Search: Holding down the microphone button is a different pattern than every other voice UI out there (see Siri, Cortana, Google Now, Chrome, Safari). My guess is that Amazon’s systems don’t have a good understanding of when the voice command has ended, thus the need for a physical control for listening rather than a single click or tap.
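That guess is easy to make concrete. With tap-to-talk the system must decide on its own when the utterance has ended, usually by watching for trailing silence; with hold-to-talk the button release *is* the endpoint. The two loops below are simplified pseudologic of my own over a stream of (is_speech, button_down) samples, not anything from Amazon’s implementation:

```python
# Sketch: why press-and-hold sidesteps endpoint detection.
# Each sample is a pair (is_speech, button_down); both loops return
# the captured speech/silence flags.

def hold_to_talk(samples):
    """Capture while the button is held; release ends the utterance."""
    captured = []
    for is_speech, button_down in samples:
        if not button_down:
            break
        captured.append(is_speech)
    return captured

def tap_to_talk(samples, max_trailing_silence=3):
    """Capture until a run of silent samples suggests the speaker is
    done. Picking that threshold well (not cutting people off, not
    hanging open) is the hard part press-and-hold avoids."""
    captured, silence_run = [], 0
    for is_speech, _ in samples:
        captured.append(is_speech)
        silence_run = 0 if is_speech else silence_run + 1
        if silence_run >= max_trailing_silence:
            break
    return captured
```

Siri-style single-tap UIs bet they can tune that silence threshold; Amazon’s remote pushes the decision back onto the user’s thumb.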
Query Completion: After completing a search you’re shown the options for your query. In cases where there are multiple interpretations (see my search at the very end of the video) this could be useful, but in cases where there’s only one interpretation, making a user click again rather than going straight to the results is disappointing.
All of that aside, I worry that we’re still training people to use voice UIs as though they’re speaking into dumb terminals, not interacting with a system that could one day show signs of intelligence.