At this year’s Consumer Electronics Show in Las Vegas, Nissan demonstrated its ‘Brain-to-Vehicle’ programme, which offers a direct link between driver and car. But this might just be the tip of the iceberg.
With deep apologies to Lee Marvin, John Wayne, Bud Spencer and Terence Hill, I think we can all agree that Clint Eastwood is the quintessential cowboy. Always playing the rugged loner, with an initially questionable but ultimately solid moral compass, he remained either on the edge of The Law, or wildly justified when outside of it. His mission? To kill the bad guy, avenge the fallen ally, guide the oppressed, and get the girl. And it was this, regardless of outcome, which meant we always rooted for him.
Now in his late 80s, he’s played the cowboy, the army man, the sensitive photographer, the circus owner, the piano-playing secret service has-been, the tank crewman, the mountain climber, the thief and the boxing coach. Unfortunately, nowhere was he less credible than as the Russian-thinking fighter pilot in Firefox, the tale of a Soviet plane that sensed brainwaves and, thus, could be piloted hands-free. Seriously, Warner Bros, what were you thinking?
Firstly, patience; you’ll strain yourself wondering what any of this has to do with Nissan. Secondly, this all links in with the Japanese marque’s new ‘Intelligent Mobility’ universe of products, the most interesting of which is a brain-sensing interface that enables any Nissan to anticipate what its driver is about to do.
Anyway, Nissan’s Brain-to-Vehicle technology assists the driver with specific actions, such as turning the wheel, applying the brakes or accelerating. It works via a headband or helmet that senses brainwaves and sends them to a processing unit in the car. Nissan claims the resulting actions are between 0.2 and 0.5 seconds faster than the driver’s own. Given that human reaction time can stretch to a full second, a reduction of that size is already significant.
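To put those numbers in perspective, here’s a back-of-the-envelope sketch (nothing to do with Nissan’s actual system) of how much stopping distance a 0.2–0.5 second head start on the brakes buys you. The speed figure is an illustrative assumption:

```python
# Illustrative arithmetic only: distance covered during the "head start"
# Nissan claims (0.2-0.5 s), i.e. stopping distance potentially saved.
# The 100 km/h cruising speed is an assumption for the sake of example.

def distance_saved(speed_kmh: float, head_start_s: float) -> float:
    """Metres travelled during the head start at a constant speed."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return speed_ms * head_start_s

for head_start in (0.2, 0.5):
    saved = distance_saved(100, head_start)
    print(f"At 100 km/h, a {head_start:.1f} s head start saves ~{saved:.1f} m")
```

At motorway speeds, even the lower 0.2-second figure is several metres of tarmac, which is the difference between a near miss and an insurance claim.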
And let’s not beat around the bush, General Vladimirov: this technology is truly revolutionary.
We’ve already seen that winning a game of rock-paper-scissors against a robot is no easy feat, as shown in this video from Tokyo University. And there’s more to it than simply lucking into two-out-of-three on countback. A camera scans the movement of its opponent’s fingers, allowing the robot to react to that movement faster than any human hand. Now imagine a robot that senses what you are thinking and reacts before your brain’s signals can reach your limbs. The increase in safety we could achieve with this alone would justify the predictably sizeable investment.
Think about it. The brain is an enigma, the study of which is not only difficult but extremely expensive. Devising a battery of tests, setting up complex machinery and then bringing in individual people to go through the process is ridiculously slow, inefficient and blind. Plus, for truly objective results, you’d need to assess people from different age groups, cultures and genders, all with varying skillsets and character traits. How can you do that without a budget as large as the GDP of Germany?
The concept behind this technology becomes yet more intricate when you realise Apple has already implemented something similar. Recently the iPhone specialists gave us an avatar that lets us send messages via ‘emotions’. How does it work? We’re back to the camera again, only this time the device submits data points to Apple, allowing the AI to associate a face with a particular mood by analysing expressions, tracking the eyes, reading the message and assessing whom it is addressed to (yes, they know that too).
Now, while that may sound scarily 1984-ish or Big Brother-y, it’s really not. It’s actually a great way to improve the product, not only technologically but also in the way it affects its users. Yup, trust Apple to come up with a product nobody thought they wanted, one that brings personal privacy into question but could well change the world as we know it.
Going back to Nissan, what the company could do with Brain-to-Vehicle technology – and I have no evidence this is what they are doing, by the way – is sell any car with the aforementioned features, for the purposes of safety enhancement. In doing so, the company could also analyse how certain brainwaves correlate with actions any driver would consider instinctive, be it braking, accelerating or casually glancing in the side mirrors. Not to diminish the complexity of configuring these ‘instincts’ into driver-assist protocols, but the largest cost is acquiring the data points to feed the AI that will, in turn, produce an ever-optimising algorithm. Imagine a car that could suggest the driver stop for a drink or a ‘break’, offer a back rub, or tint the windows without being prompted. Consider also that Mercedes-Benz already blinks a coffee cup onto its driver-information screen to remind drivers that frequent rest stops are essential, and you begin to realise how close we are to this becoming reality.
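The idea of correlating brainwaves with instinctive actions can be sketched very crudely. The toy below (emphatically not Nissan’s method, and with entirely made-up data) shows the basic shape of the problem: average the signal features seen for each action, then match an incoming signal to the nearest average:

```python
# Purely illustrative sketch: matching hypothetical "brainwave feature"
# vectors to driver actions with a nearest-centroid rule. All numbers,
# feature vectors and labels here are invented for the example.

from math import dist
from statistics import mean

# Hypothetical training samples: (feature vector, action the driver took)
samples = [
    ((0.9, 0.1), "brake"),
    ((0.8, 0.2), "brake"),
    ((0.1, 0.9), "accelerate"),
    ((0.2, 0.8), "accelerate"),
]

def centroids(data):
    """Average the feature vectors observed for each action."""
    by_action = {}
    for features, action in data:
        by_action.setdefault(action, []).append(features)
    return {a: tuple(mean(x) for x in zip(*vecs)) for a, vecs in by_action.items()}

def predict(features, cents):
    """Pick the action whose centroid lies closest to the incoming signal."""
    return min(cents, key=lambda a: dist(features, cents[a]))

cents = centroids(samples)
print(predict((0.85, 0.15), cents))  # a brake-like signal
```

The real engineering challenge, as noted above, is not this matching step but gathering enough data points, across enough drivers, for the patterns to mean anything.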
Kudos, Nissan: this Intelligent Mobility programme is becoming more interesting by the day, though it obviously still has a long way to go. Working out who the owner of such a vehicle will be; integrating communication with street furniture; implementing charging units or induction-charging roads; and setting up communication standards that protect the passenger are all unknowns that will take some time to resolve. But at this stage, the direction of development is already very attractive. Granted, Clint in Firefox, who is able to ‘think’ his destination into the Soviet plane’s navigation system, still has the edge for now.