Beyond the touchscreen: voice, gestures, mind control?

This article may contain personal views and opinions from the author.
Some say the touchscreen, as a direct contact method, is the last input paradigm mankind will reach in the quest for interaction with our gadgets, and, at least for the foreseeable future, that seems to be true, despite the leaps and bounds currently being made in voice and gesture navigation. Of course, we'd prefer an even more direct connection, like mind control, but more on that later.

Others note that the iPhone was invented almost by accident: Steve Jobs was actually looking for a more natural way for humans to interact with his Mac computers, without the keyboard and mouse as intermediaries, via what was essentially meant to be a largish tablet at the time. The touchscreen scenario looked promising, and when the engineers first showed him what they had done with capacitive multitouch, scrolling, pinching and the like, he immediately recognized it as a more suitable input method for his dream personal computer and communication gear you'd always carry with you, paving the way for the first iPhone.

As is now history, his bet on fingers and capacitive touch brought on the mobile revolution that is still shaping the world in directions nobody can fully predict, so a short history of touchscreen technology is warranted, in the form of the excellent infographic below. We won't throw facts and dates at you, so just skim through if you are interested, or scroll down to what we feel might be coming in the future.

[Infographic: a short history of the touchscreen. Source: Ars Technica]

Touchscreen technology itself has evolved to such unforeseen heights that flagship smartphones no longer carry even a fraction-of-a-hair-thick discrete touch layer in the screen package. It's rather embedded directly in the display panel itself, allowing for some really, really thin devices with bright and responsive screens, not to mention super-sensitive ones that let you use them with gloves on. There are also efforts towards a shape-shifting future for the venerable touchscreen, so we can finally have some real feedback when we type on a virtual keyboard, along with many other novel possibilities you can check out in the video below.

[Embedded video]

Heck, we can now even do touch without a touchscreen, on a regular piece of paper, no less. Just watch Fujitsu's Fingerlink Interaction System below: it can project the touch experience even onto curved surfaces with off-the-shelf components and some elaborate image-processing software, and it allows gesture manipulation of computer-animated objects, like in 3D games.

[Embedded video]
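Fujitsu hasn't published the internals of its system, of course, but the basic idea of watching fingers with a camera is simple enough to sketch. Here's a toy version in Python with OpenCV (4.x assumed), using rough skin-color segmentation to spot a fingertip in the camera feed; every threshold below is our own guess, not Fujitsu's algorithm:

```python
# Toy sketch of camera-based "touch on paper" detection, loosely in the
# spirit of systems like Fingerlink. This is NOT Fujitsu's actual pipeline.
# Assumes OpenCV 4.x and an off-the-shelf webcam pointed at the surface.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Segment skin-colored pixels in HSV space (thresholds are rough
    # guesses and would need tuning for real lighting conditions)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Take the largest skin blob as the hand; assuming the finger points
    # "up" in the frame, its topmost point is a crude fingertip estimate
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        if cv2.contourArea(hand) > 2000:  # ignore tiny noise blobs
            tip = hand[hand[:, :, 1].argmin()][0]
            fingertip = (int(tip[0]), int(tip[1]))
            cv2.circle(frame, fingertip, 8, (0, 0, 255), -1)

    cv2.imshow("paper-touch sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

The real trick in a product like this is everything we skipped: calibrating the projector to the camera, compensating for curved surfaces, and telling a hovering finger from a touching one.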


Yet with all its virtues in terms of direct interaction and interface navigation, touch input can't and shan't be the last input method we'll ever need. Not to rain on anyone's parade, but try typing your magnum opus on a phone or tablet's on-screen keyboard, or even a Bluetooth one, or try precision tasks like in-depth picture, video or spreadsheet editing, and you'll be longing for a regular keyboard and mouse faster than you can say "post-PC era". In short, the touchscreen, operated with your digits only, is suitable for basic computing tasks, light gaming and media consumption, but that's about it. What's next?


Stylus and air gestures

It was again Steve Jobs who didn't think tablets smaller than the iPad would offer a good touch experience, as you'd have to "sand down your fingers to a quarter of the current size" for them to work the same way. And he was largely correct, if we fixate on fingers only. Enter Siri, Google Now, Samsung's S Pen and gesture navigation, which, in their essence, are all partial solutions to the big-fingers/small-screens issue, but also efforts to move beyond the touchscreen paradigm.

Alright, we already have the precision of the S Pen for when our fingertips don't cut it, and we also have touchless navigation for scrolling, pausing videos, link selection, and so on. Soon we'll have support for gestures that do zooming and tapping without even touching the screen; just watch the demo below of the uMoove experience, which is coming to Android and iOS apps.

[Embedded video]
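uMoove keeps its head- and eye-tracking engine under wraps, but you can get a taste of touchless navigation with nothing more than stock face detection: track where the face sits between frames and turn the vertical drift into scroll commands. A bare-bones sketch, with thresholds and scroll mapping that are entirely our own assumptions:

```python
# Toy head-tracking "touchless scroll" sketch: not uMoove's actual engine.
# Uses OpenCV's bundled frontal-face Haar cascade on the front camera feed.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
prev_cy = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) > 0:
        # Track the biggest detected face (presumably the user)
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        cy = y + h // 2
        if prev_cy is not None:
            dy = cy - prev_cy
            if dy > 15:        # head drifted down between frames
                print("scroll down")
            elif dy < -15:     # head drifted up
                print("scroll up")
        prev_cy = cy
    cv2.imshow("head-tracking sketch", frame)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

The hard part in a shipping product is robustness: lighting changes, head rotation and bumpy bus rides, which is presumably where uMoove's secret sauce lives.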


Voice control and breaking the language barrier

Cool, eh? What would be even cooler is if you could talk to it, and it answered! Oops, we now have that, too. What Apple's marketing push with Siri did to Google was force it to unearth the excellent R&D work it had been quietly doing on voice recognition down in the labs, and implement it quickly in what is arguably the best voice-controlled assistant for mobile: Google Voice Search paired with Google Now.

We won't foam on about its virtues, as anyone with a Jelly Bean or newer phone in their pocket can attest to the precision and usefulness of the system. We'd just point out the push towards offline capabilities, which, in our view, will be another paradigm shift in the way you interact with your devices. You can already search for and run almost everything on the phone, interact with apps, type with your voice and so on, and further down the road we see on-device machine translation becoming the norm as well, given the way mobile processors are developing.
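To show how little code basic voice dictation takes from a developer's standpoint these days, here is a minimal sketch using the third-party Python SpeechRecognition package, which wraps Google's web speech API among others. The package choice and error handling are ours; this is not how Google Now itself is built:

```python
# Minimal voice-dictation sketch using the third-party "SpeechRecognition"
# package (pip install SpeechRecognition pyaudio). Not Google Now's stack.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for room noise
    print("Say something...")
    audio = recognizer.listen(source)

try:
    # Sends the captured audio to Google's web speech API for transcription
    text = recognizer.recognize_google(audio)
    print("You said:", text)
except sr.UnknownValueError:
    print("Could not understand the audio")
except sr.RequestError as err:
    print("Speech service unavailable:", err)
```

Note that this round-trips your audio to a server, which is exactly why the offline push we mentioned matters: cut the network out, and dictation works on a plane, in a tunnel, or anywhere your data plan gives up.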

Microsoft Research is the outfit making the biggest strides here, with something called Deep Neural Networks, patterned after how the human brain behaves; you can see a demo in the video below. It essentially converts your words on the fly into a different language, and in your own voice to boot. It is not perfect yet, getting one in eight words wrong on average, but just think about never having to learn a foreign language, while still practicing your best sarcastic remarks on the locals when they serve you food that still looks like it's moving.
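"Deep neural network", by the way, simply means several layers of learned transformations stacked on top of each other, each feeding the next. For a flavor of the math, here is a bare-bones forward pass in NumPy, standing in for the kind of acoustic model that maps audio features to speech sounds; the layer sizes are plausible but the weights are random, purely for illustration:

```python
# Bare-bones deep-neural-network forward pass in NumPy: a toy stand-in for
# the acoustic models Microsoft Research describes, not their actual system.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Three stacked layers: 39 audio features in (e.g. MFCCs), 42 phoneme-like
# classes out. Real models are far wider and deeper, and trained, not random.
sizes = [39, 256, 256, 42]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(features):
    h = features
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)                 # hidden layers
    logits = h @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max())       # softmax over speech sounds
    return e / e.sum()

frame = rng.normal(size=39)  # one fake audio frame's features (~10 ms)
probs = forward(frame)
print("most likely speech sound index:", probs.argmax())
```

Chain a recognizer like this into a translation model and a speech synthesizer tuned to your voice, and you get the three-stage pipeline the demo shows off.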

The software first has to be fed hours of recordings, so that it can analyze the subtleties of your vocal cords and then learn to speak Cantonese with them, much like the first voice-recognition efforts required you to talk to the computer for quite a while until it got your brand of crazy accent. Still, the sheer possibility shown in this demo is already incredible enough. The fun starts at about 7:10 into the video, but the whole piece is worth watching.

[Embedded video]

Alright, we've got precision input solutions and natural voice interaction, and even breaking the language barrier shouldn't be far behind on our quest for global interaction with, and aided by, our gadgets. Is there some other crazy stuff to look forward to?


Mind control

Yes, that's exactly what we mean: controlling your phone with your thoughts only, via the so-called brain-computer interface (BCI). Wait, what? Nothing, just watch this amazing story below from beginning to end; it speaks for itself, and there have been developments since:

[Embedded video]

OK, now this sounds far-fetched, as machines worth many millions of dollars have to be attached to you. Bear in mind these are lab trials, though, and the technology has evolved so much in just the last couple of months that we now have the first "wireless, implantable, rechargeable brain-computer interface," coming from the groundbreaking BrainGate research arm of Brown University.

The researchers there have managed to shrink all the electrodes to "pill size", and the equipment needed to convert your brain-cortex signals into digital data to a "miniature sardine can with a porthole"; we are quoting Brown's own press release here. The implantable BCI is packaged tightly in a titanium shell for biocompatibility, a lithium-ion battery inside provides power, and it is charged wirelessly via a simple copper coil, no different from the one you'd find in phones like the Lumia 920, for instance.


Funnily enough, the wireless charging was the only potential source of discomfort for the test pigs and monkeys after a team of neurosurgeons implanted the gear, as it warmed the implant up a little, so researchers had to pour some water on the scalp during charging. Other than that, the test animals spent more than a year emitting digital data signals directly from their brains, all while they were out and about doing their thing, which has huge implications for these implantable brain-readers reaching humans one day. Arto Nurmikko, professor of engineering at Brown, who mentored the invention, says: "This has features that are somewhat akin to a cell phone, except the conversation that is being sent out is the brain talking wirelessly."
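To demystify "converting brain-cortex signals into digital data" a bit: BCI decoders of this kind essentially map per-electrode firing rates to an intended movement, and BrainGate's published work uses trained, per-session filters (Kalman filters, for instance). Here is a drastically simplified linear stand-in, with weights and spike counts fabricated purely for illustration:

```python
# Drastically simplified neural-decoder sketch: map spike rates from a few
# electrodes to a 2D cursor velocity with a fixed linear model. Real systems
# like BrainGate's train their decoders per session; every number here is
# made up for illustration.
import numpy as np

# Fabricated decoding matrix: each electrode "votes" for x/y velocity
W = np.array([[ 0.8,  0.1],
              [-0.6,  0.2],
              [ 0.1,  0.9],
              [ 0.0, -0.7]])
baseline = np.array([10.0, 12.0, 8.0, 15.0])  # resting firing rates (Hz)

def decode(firing_rates_hz):
    """Turn per-electrode firing rates into a (vx, vy) cursor velocity."""
    modulation = np.asarray(firing_rates_hz) - baseline
    return modulation @ W

# One 100 ms "bin" of fabricated activity: electrode 3 fires above baseline,
# which this toy model reads as mostly upward cursor motion
velocity = decode([10.0, 12.0, 20.0, 15.0])
print("cursor velocity (vx, vy):", velocity)
```

The wireless implant's job, then, is to get those firing rates out of the skull and to a radio, reliably, for years, on a battery the size of a coin.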


Now, we are not saying that very soon we will be walking around, just thinking of tweeting the huge selection of top-shelf booze in front of us at the beach bar, and our phones will do it for us without us lifting a finger or uttering a word. That will be reserved for the flagships, of course.

We kid, but Samsung was apparently reading our minds when we were prepping this article, and showcased a brain-controlled Galaxy Tab 10.1 for the MIT Technology Review. Granted, the tasks that can be executed are still very basic, and the headgear is unwieldier than the implant we described above, but this seems to be just the beginning.


What we are saying is that a combination of alternative input methods has emerged, with more on the horizon, that goes beyond the touchscreen-only interaction paradigm of our mobile gadgets. This might change not only the specs and design requirements of our current smartphones, but the whole concept of what a personal communication device can be, and we are pretty excited about this next era of human-interface interaction. What do you think?
