Take a look at how cars learn to be human, and why they’ll be better at it than we are.
The world is precariously close to seeing a chapter of 20th-century science fiction become reality, as companies like Google, Mercedes, and Tesla develop technologies that promise to eventually offer consumers a car that can drive itself. It’s a dream that’s been around since at least the late 1930s, when the public imagination was sparked by the Futurama exhibit at the 1939 World’s Fair.
While the Googles of the world continue to stoke our sense of wonder at this future, pragmatists in the auto industry have recognized the value of eliminating human failings such as sleepy, distracted, and drunk driving, which promises significantly safer roads. In all likelihood, it is these pragmatic advances in safety and convenience that will ultimately usher in fully connected systems of sensors, driverless vehicles, and transportation networks.
Getting Computers to Act Like Humans
Paradoxically, while the safety gains of driverless vehicles will come from the fact that computers do not get distracted, tired, or intoxicated, the major challenge in creating a working driverless car today is getting computers to behave like humans. Although various levels of automation have been a reality in rail and aviation applications for decades, automation is only now maturing in cars because driving on the road requires computers to perceive, think, and act like human beings, something they are not inherently wired to do. Navigating rush hour in a city center or merging onto a freeway is governed more by social negotiation between drivers than by machine logic, presenting a remarkable challenge to sensor technology, software, computing, and infrastructure design.
In a presentation at IEEE’s IROS conference in 2011, Chris Urmson, who leads Google’s driverless cars program, discussed the challenges of getting computers to act like and interact with people on the road: “If you read the DMV handbook on how to traverse an intersection, it sounds awesome. . . of course, real driving isn’t like that,” he said. Instead, it’s held together by a lot of informal rules that rely heavily on driver communication. “What we’ve encoded in the system is the ability to follow the rules and then adapt to what’s going on on the road,” by giving the car the tools to interact with human drivers.
Sensors and Software
Advancements in software and computing are driving much of the progress toward making driverless cars a reality, but effective technology around sensors – the interface between the software and the environment – is critical. Ultrasonic, radar, video, infrared, and laser sensors are the conduits through which information about a vehicle’s environment is taken in and, in systems like Google’s, is combined with map and pre-prepared environmental data to give the vehicle an accurate picture of its location and surroundings.
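The fusion step can be sketched with a toy example. Everything below is hypothetical, a single one-dimensional inverse-variance update rather than anything resembling Google's actual pipeline, but it shows the basic idea: a trusted, map-matched laser estimate pulls a noisier GPS-style fix toward the truth.

```python
# Toy illustration of sensor fusion: combine two noisy 1-D position
# estimates by inverse-variance weighting (a single Kalman-style update).
# All numbers are hypothetical; real systems fuse many sensors in 3-D.

def fuse(est_a, var_a, est_b, var_b):
    """Return the fused estimate and its variance.

    The lower-variance (more trusted) source gets the larger weight.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# A coarse GPS fix (position 12.0 m, variance 4.0) fused with a precise
# map-matched laser estimate (position 10.0 m, variance 1.0):
position, variance = fuse(12.0, 4.0, 10.0, 1.0)
print(round(position, 2), round(variance, 2))  # 10.4 0.8
```

Note how the fused result lands much closer to the laser estimate, and how the combined variance is lower than either input's: adding sensors makes the picture sharper, not just broader.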
In a recent interview with Mouser Electronics for the company’s Empowering Innovation initiative around driverless cars, James Bates, senior vice president and general manager, analog and sensors, at Freescale Semiconductor, noted that “for the car to truly be able to understand itself, it has to be able to have all five senses. As we march forward over the next few years, they will be integrated, they will be smaller, and they will be cheaper.”
In other words, to operate on roads alongside human drivers, driverless cars will need rich sensor data from a variety of sources that both mimic human senses in significant ways and capitalize on the ways those senses can be transcended. Google’s driverless car employs the Velodyne HDL-64E LiDAR, which gives the car a 360-degree field of view: a human-like perception of obstacles with capabilities that go beyond those of human drivers.
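As a rough illustration of what a 360-degree scan gives the software, the sketch below (invented data, and a 2-D slice rather than the HDL-64E's actual dense 3-D output) converts polar laser returns into Cartesian points in the vehicle's frame and picks out the nearest obstacle in the forward sector.

```python
import math

# Toy sketch of processing a 360-degree laser scan: each return is an
# (angle in degrees, range in meters) pair, with 0 degrees straight ahead.

def to_cartesian(scan):
    """Convert polar returns to (x, y) points in the vehicle frame."""
    return [(r * math.cos(math.radians(a)), r * math.sin(math.radians(a)))
            for a, r in scan]

def nearest_ahead(scan, half_width_deg=30.0):
    """Range to the closest return inside the forward sector, or None."""
    ahead = [r for a, r in scan
             if min(a % 360.0, 360.0 - a % 360.0) <= half_width_deg]
    return min(ahead) if ahead else None

scan = [(0.0, 25.0), (20.0, 8.5), (90.0, 4.0), (180.0, 12.0), (350.0, 9.0)]
print(nearest_ahead(scan))  # 8.5
```

A human driver has nothing like the return at 180 degrees: the scan sees behind the car as clearly as ahead of it, all the time.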
Components like ON Semiconductor’s LUPA300 image sensor facilitate functions like machine and computer vision and motion tracking, which allow driverless vehicle software to read and interpret elements of the environment, like traffic signals and signs and moving obstacles, that are designed entirely for human vision and visual processing.
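One small piece of that visual processing can be sketched in a few lines. The thresholds and function below are purely illustrative assumptions, not ON Semiconductor's or anyone's production logic; a real pipeline would first locate the traffic signal in the image before testing its color.

```python
# Minimal sketch of one computer-vision subtask: deciding which lamp of a
# traffic signal is lit from the average RGB color of a detected lamp
# region. Channel thresholds (0-255) are illustrative only.

def classify_lamp(r, g, b):
    """Classify an average lamp color as a signal state."""
    if r > 150 and g > 150 and b < 100:
        return "yellow"          # red and green channels both strong -> amber
    if r > 150 and g < 100:
        return "red"
    if g > 150 and r < 100:
        return "green"
    return "unknown"

print(classify_lamp(220, 40, 30))    # red
print(classify_lamp(30, 210, 60))    # green
print(classify_lamp(230, 190, 40))   # yellow
```

The point is that traffic infrastructure encodes meaning entirely in color and shape for human eyes, so the software must translate pixels back into that meaning before it can act.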
Transcending Human Frailties
Processing the kind of information that allows a computer to negotiate with human drivers and navigate a four-way stop is a significant challenge, and it demands a particular cocktail of sensor data to work effectively. In some ways, though, the more distant future of autonomous vehicle technology looks simpler. Today, the infrastructure of roads and cities is built for people, but as autonomous driving technology becomes more prevalent and driverless cars become ubiquitous, the sensor data available to vehicles will change significantly.
RSM’s Simon software offers a glimpse of the genesis of connected infrastructure.
“Once an underlying connected infrastructure of sensors is established, self-drive cars will really just become a component of the larger system,” says Martin Mantalvanos, CEO of RSM, a Dublin- and San Francisco-based company that creates tools for traffic monitoring with an ultimate vision of fully connected transportation systems. Mantalvanos’ vision involves driverless cars communicating with sensors in their environment to improve safety and traffic flow. “Imagine a scenario where there is an obstacle around a blind corner. Instead of a person having to make an emergency stop, a connected self-drive vehicle will have access to sensor data and already be aware of the obstacle before rounding the corner.”
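Mantalvanos’ blind-corner scenario can be sketched as a simple advisory check. The decision rule and all numbers below are invented for illustration; the point is only that a connected vehicle can act on infrastructure data before the obstacle is visible.

```python
# Hypothetical sketch of the blind-corner scenario: a roadside sensor
# reports the distance to an obstacle, and an approaching connected
# vehicle checks whether it can stop comfortably before reaching it.

def braking_distance(speed_ms, decel_ms2=3.0, reaction_s=0.5):
    """Comfortable stopping distance: reaction travel plus braking travel."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2.0 * decel_ms2)

def advise(speed_ms, hazard_distance_m):
    """Return an advisory action given an infrastructure hazard report."""
    if braking_distance(speed_ms) >= hazard_distance_m:
        return "slow_down"
    return "proceed"

# Obstacle reported 40 m ahead, around a corner the car cannot yet see:
print(advise(speed_ms=20.0, hazard_distance_m=40.0))  # slow_down
print(advise(speed_ms=10.0, hazard_distance_m=40.0))  # proceed
```

Because the warning arrives before line of sight does, the vehicle sheds speed gradually instead of making the emergency stop a human driver would face.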
RSM’s current installations for intersection monitoring rely on highly accurate microwave radar units from German manufacturer Smartmicro. The company is bearing the cost of these installations itself, all in the name of seeding the connected infrastructure of the future with sources of sensor data.
ADAS and Safety: Evolution vs. Revolution
While the seeds of totally autonomous cars and fully connected transportation systems have been sown, in the near-future consumers can expect a continuous evolution of advanced driver assistance systems (ADAS), the semi-autonomous safety features that are already available in many vehicles. In a recent interview with Mouser, ON Semiconductor Vice President of Marketing Lance Williams observed, “We’re now beginning to see light vehicles with active safety options, systems that are used for the prevention of accidents such as adaptive cruise control, automatic braking, lane departure warning, and park assist. Moving forward, we’ll continue to invest in products such as imaging, power management, and vehicle networking, all of which will be key to supporting connected vehicles and autonomous driving. Items such as vehicle-to-vehicle and vehicle-to-infrastructure communications will offer greater convenience, improve traffic flow, and provide an overall safer driving experience.”
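To make one of the features Williams names concrete, here is a toy sketch of adaptive cruise control as a proportional controller: track the driver's set speed on a free road, but yield to a time-gap constraint behind a slower lead vehicle. The gains, time gap, and comfort limits are illustrative assumptions, not values from any production system.

```python
# Toy adaptive cruise control: a proportional controller with a time-gap
# following rule. All gains and limits are illustrative only.

def acc_command(own_speed, set_speed, gap_m, lead_speed,
                time_gap_s=2.0, kp_speed=0.5, kp_gap=0.3):
    """Return a bounded acceleration command in m/s^2."""
    desired_gap = time_gap_s * own_speed
    if gap_m < desired_gap:
        # Too close: blend toward the lead car's speed and reopen the gap.
        accel = (kp_speed * (lead_speed - own_speed)
                 + kp_gap * (gap_m - desired_gap))
    else:
        # Free road: track the driver's set speed.
        accel = kp_speed * (set_speed - own_speed)
    return max(-3.0, min(1.5, accel))  # comfort limits on accel/decel

# Cruising at 30 m/s with a slower car only 40 m ahead:
print(acc_command(own_speed=30.0, set_speed=30.0, gap_m=40.0,
                  lead_speed=25.0))  # -3.0 (brake at the comfort limit)
```

Even this crude controller never gets sleepy or distracted, which is precisely the evolutionary case for ADAS that the industry is making.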
So, fret not, science-fiction lovers. While pragmatism and safety are driving most of the innovation in consumer autonomous vehicle technology today, the sensor manufacturers supplying the components that make those features possible have the Futurama vision in the back of their minds as well.
Neil Shurtz is a contributor to Connector+Cable Assembly Supplier based in San Francisco. As a freelancer and in his work in public relations for high tech companies, he has written about technology in the oil and gas, aerospace, and manufacturing industries. Shurtz specializes in framing complex and niche technical topics in a broader social context.