After many years in development, Waymo has finally taken to the streets in Phoenix, Arizona, offering a limited, app-based ride-hailing service to members of its early rider program. It’s a giant leap for the Google self-driving project, but an infinite number of small steps remain before it becomes an everyday sight on our streets.
Industry insiders have been enthusiastically predicting the advent of driverless vehicles for some time, but the reality has been slower to arrive. While the tech and test results are there, multiple obstacles to real-world implementation still exist, including cost, connectivity and concerns about cybersecurity and safety.
And then there are the secondary challenges that only become evident once a human backup driver is no longer on board. Computers excel at respecting lane markings, safe distances and speed limits. However, there are other, more mundane but still important functions that we perform without even thinking but that require complex programming for a machine to replicate, such as communicating with passengers about changes to schedules or to pick-up and drop-off points. Sometimes, the simplest, spur-of-the-moment human interactions are the hardest for a computer to imitate.
Significant opportunities exist for on-demand, driverless shuttles, so Shotl is collaborating with companies like Sensible4 as part of the FABULOS program, providing technology for ride-booking, vehicle dispatch and dynamic routing. This and other similar projects help us gain insights into the challenges involved in achieving full automation.
For example, right now when a passenger takes a ride booked via our app, it’s the driver who registers the successful pick-up and drop-off in the system—as well as marking any no-shows. But how will a machine replicate this crucial task? And how will it verify that the person boarding has actually booked a ride?
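Under the hood, what the driver is really doing is moving each booking through a small set of states. A minimal sketch of that lifecycle, using a hypothetical `Ride` class (the names and states here are illustrative, not Shotl's actual data model):

```python
from enum import Enum


class RideStatus(Enum):
    BOOKED = "booked"
    PICKED_UP = "picked_up"
    DROPPED_OFF = "dropped_off"
    NO_SHOW = "no_show"


class Ride:
    """Tracks the booking lifecycle a driver currently manages by hand."""

    def __init__(self, ride_id: str):
        self.ride_id = ride_id
        self.status = RideStatus.BOOKED

    def register_pickup(self):
        if self.status is not RideStatus.BOOKED:
            raise ValueError(f"cannot pick up a ride in state {self.status}")
        self.status = RideStatus.PICKED_UP

    def register_dropoff(self):
        if self.status is not RideStatus.PICKED_UP:
            raise ValueError(f"cannot drop off a ride in state {self.status}")
        self.status = RideStatus.DROPPED_OFF

    def register_no_show(self):
        if self.status is not RideStatus.BOOKED:
            raise ValueError(f"cannot mark a no-show in state {self.status}")
        self.status = RideStatus.NO_SHOW
```

The state machine itself is trivial; the hard part is deciding which sensor or signal should trigger each transition once no driver is present to press the button.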
In test conditions, driverless vehicles have a human ‘safety agent’ on board who can perform these functions. However, we want to progress towards moving the agent out of the vehicle and into a control center, where they can remotely supervise several vehicles at once. Still further into the future, we are working towards achieving fully automated passenger recognition and registration. The question now is how to achieve this and whether currently available technology can provide the solution.
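The control-center model amounts to replacing one agent per vehicle with one agent per fleet, triaging whichever vehicles currently need a human. A rough sketch of that idea (the `ControlCenter` class and its fields are an assumption for illustration, not a real system):

```python
from dataclasses import dataclass


@dataclass
class VehicleStatus:
    """What a remote supervisor sees for one vehicle in the fleet."""
    vehicle_id: str
    needs_attention: bool = False
    note: str = ""


class ControlCenter:
    """One remote agent supervising many vehicles instead of riding in one."""

    def __init__(self):
        self.fleet = {}  # vehicle_id -> VehicleStatus

    def register_vehicle(self, vehicle_id: str):
        self.fleet[vehicle_id] = VehicleStatus(vehicle_id)

    def report(self, vehicle_id: str, needs_attention: bool, note: str = ""):
        """Vehicles push status updates; most report nothing unusual."""
        status = self.fleet[vehicle_id]
        status.needs_attention = needs_attention
        status.note = note

    def attention_queue(self):
        """Only the exceptions reach the human supervisor."""
        return [s for s in self.fleet.values() if s.needs_attention]
```

The design choice worth noting is that the human only sees exceptions: the economics of remote supervision depend on most vehicles, most of the time, needing no attention at all.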
One option could be a door-mounted sensor that counts passengers on and off. Another possibility is using Bluetooth or NFC sensors to detect the smartphone used to book the ride and thereby grant access to the vehicle. Still another could be facial recognition technology, already in use for event access or border control in some places.
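The Bluetooth/NFC option boils down to the phone presenting a credential that the vehicle can check against its expected bookings. A minimal sketch of one way that could work, using an HMAC-derived token; the shared secret, function names and single-use policy here are all assumptions for illustration:

```python
import hashlib
import hmac

# Hypothetical key shared between the booking backend and the vehicle.
SECRET = b"shared-app-secret"


def booking_token(booking_id: str) -> str:
    """Token the rider's phone might present over Bluetooth or NFC."""
    return hmac.new(SECRET, booking_id.encode(), hashlib.sha256).hexdigest()


class DoorController:
    """Grants access only to tokens matching bookings for this stop."""

    def __init__(self, expected_bookings: set):
        # Precompute the valid tokens for every booking assigned to this stop.
        self.valid_tokens = {booking_token(b): b for b in expected_bookings}

    def try_open(self, presented_token: str) -> bool:
        # Each token admits exactly one boarding, so a forwarded screenshot
        # or a second tap of the same phone is rejected.
        booking = self.valid_tokens.pop(presented_token, None)
        return booking is not None
```

Even in this toy form the gap the article points at is visible: the controller can decide whether to unlock the door, but nothing in the code stops three people walking through it once it opens.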
However, while some of this tech is available, what we don’t know is whether it’s up to the greatest challenge of all: predicting and accounting for human nature. For example, on small shuttles (where fitting barriers is impractical), how do we know each passenger will wait politely to be identified before boarding? Or will the entire line just board behind the first person to gain access? And how do we prevent boarding by those who haven’t even booked?
When we talk about driverless vehicles, it’s often the big, scary issues like safety that are foremost in people’s minds and which developers are focusing on. But as driverless becomes a reality, the pace of progress will likely be determined by our ability to solve a whole host of secondary issues related to replicating everyday human interactions. Perhaps, as with supermarket self-checkouts, a human will always be required to deal with those ‘unexpected items’ (aka other humans)?