As of the writing of this article in mid-2016, our society is not accustomed to reading about self driving car accidents. In fact, when Tesla Motors experienced the first known fatality involving a self driving car, it made international news headlines. I posit that we should prepare ourselves for these stories to become more commonplace, though dwindling in number over time. In other words, self driving car accidents are going to get worse before they get better. I also present some ways these accidents can be mitigated in the future.
Self Driving Car Accidents Are Cognitive
Before we can take a look at the causes of these accidents, we have to understand how self driving cars work to begin with. At its core, a self driving car has two components that make it drive on its own:
- A collection of sensors & hardware interfaces that both collect information from the environment around the car and then issue corresponding commands to the car components (like the steering, wheels, turn signals, etc).
- A collection of software components that receive information from these sensors and then make drivetrain and navigation decisions based on cognitive programming, firmly rooted in data science.
So what exactly does it mean to be cognitive? This is a term that will soon make its way into everyday conversation outside of technology circles. In effect, when writing a computer program, you can either write out the entire program yourself or write a program that can learn from its own experience. The latter is a cognitive approach.
If you were to program everything needed to make a car drive, it would be a very long program! You would have to teach the program about every possible street sign, every possible driver maneuver and every possible road condition. It would be a large program that you would constantly need to update because of changing road laws and environments.
But what if, instead, you just taught the car the rules of driving? You would then let the program take in those rules, experiment on its own, and learn and write more rules for itself. You could iterate through that process indefinitely. That process of iteration – the program learning from its own experience – is a cognitive approach in a nutshell. You don't write a massive program on how to drive; you start the car with some basic rules and it creates the rest of its own program as it goes along.
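To make the idea concrete, here is a minimal toy sketch of "start with a basic rule, then refine it from experience." The class name, numbers, and the near-miss feedback signal are all hypothetical illustrations, not anything from a real driving system:

```python
# Toy sketch (hypothetical): a program that starts with one basic rule
# and rewrites that rule for itself as it observes outcomes.

class BrakingRule:
    """Basic starting rule: brake when an obstacle is closer than
    `threshold` meters. Feedback from experience widens the rule."""

    def __init__(self, threshold=10.0, margin=1.0):
        self.threshold = threshold  # initial hand-written rule
        self.margin = margin        # safety cushion added when learning

    def decide(self, distance):
        return "brake" if distance < self.threshold else "drive"

    def learn(self, distance, outcome):
        # A near miss while the rule said "drive" means the rule was
        # too lax: push the braking distance past that observation.
        if outcome == "near_miss" and self.decide(distance) == "drive":
            self.threshold = distance + self.margin

rule = BrakingRule()
rule.learn(14.0, "near_miss")   # experience updates the rule
print(rule.decide(14.0))        # → brake
```

The programmer never wrote "brake at 15 meters" anywhere; that rule emerged from the feedback loop, which is the cognitive idea in miniature.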
But a cognitive approach is not without its pitfalls. In any learning process, a positive result is both encouraging and a good way to reinforce what you have learned. Cognitive programs, whether you realize it or not, are all around you. Perhaps the most common example, for most people, is the spam filter in your email inbox. Most spam filters take a cognitive approach to learn what is good – an email you do want to read – and what is bad – an email you don't want to read. As you correct its mistakes, the spam filter improves and your overall experience improves with it.
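The spam-filter idea can be sketched in a few lines. This is a toy word-count classifier built only from the standard library – an illustration of learning from labeled examples, not any real mail provider's filter:

```python
# Toy spam filter (hypothetical): it improves as you label examples.
from collections import Counter

class SpamFilter:
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, text, label):
        # Each corrected example updates the word statistics.
        words = text.lower().split()
        self.counts[label].update(words)
        self.totals[label] += len(words)

    def score(self, text, label):
        # Smoothed likelihood of the words under `label`.
        total = self.totals[label]
        s = 1.0
        for w in text.lower().split():
            s *= (self.counts[label][w] + 1) / (total + 2)
        return s

    def classify(self, text):
        spam, ham = self.score(text, "spam"), self.score(text, "ham")
        return "spam" if spam > ham else "ham"

f = SpamFilter()
f.train("win free money now", "spam")       # you flag this as junk
f.train("meeting notes attached", "ham")    # you keep this one
print(f.classify("free money"))             # → spam
```

Every time you mark a message as spam or rescue one from the junk folder, you are supplying another training example – exactly the correction loop described above.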
Therefore, we must accept that a cognitive approach, in anything we do, will have both positive and negative outcomes during the learning process. And this learning process never really stops. Your spam filter keeps on adapting; self driving cars will also continue adapting over time.
Of course, in this discussion I have minimized one important difference. If a spam email slips into your clean inbox, that is a negative result with very little physical consequence – you're going to be just fine. But in a car traveling at highway speeds, a negative result can have very real and fatal consequences. That is a big part of why the Tesla accident made international news – it wasn't just the first time someone died. It was the first time that the world of data science reached out from beyond the ether and had a fatal effect on a human life. Your cognitive spam filter can't reach out and touch you in any way; in a self driving car, the same cognitive technology can kill you.
Can Self Driving Car Accidents Be Mitigated?
There are definitely ways that self driving car accidents can be mitigated over the long term. Some of these include:
- Time and the improved cognition of existing models
- The introduction of new and/or modified use of cognitive models & sensor hardware to self driving cars
- Controlled environments
Improved Cognition of Existing Models
There is no doubt that cognitive models will get better over time; that's just their nature, it's what they do. Cognitive models are out there learning every day, and they will improve through both the passage of time and their increased adoption across more people and more driving/road conditions. Self driving cars will learn from these accidents and, in theory, not allow them to happen again. But make no mistake: this is a rather passive approach and requires things to get worse before they get better. It also means that accidents will continue to happen – they will just happen under increasingly nuanced conditions, conditions that cars don't face very often and can't learn from on a frequent basis.
Use of New Cognitive Models & Sensor Hardware
Most of the statistical theories and approaches in use today (the same ones at the foundation of data science) are based on long-standing mathematical principles and applications. We shouldn't expect new cognitive models to magically fall out of the sky. However, the world of data science is still open to new uses of existing models in cognitive applications. It is possible that the world's increased focus on data science will produce different uses of existing models that can be applied to self driving cars. In addition, it's also possible that new sensor hardware will come to the marketplace, providing more refined data or entirely new data for these models. Any cognitive model is only as good as the data it receives; improved sensors can reduce the overall number of self driving car accidents.
I have visions of GoogleLand or TeslaVille – large cities where everyone has a self driving car. Roads and signage are standardized, colors are chosen for their ability to be distinguished by sensors, and self driving cars can communicate with each other because, well, that's the only kind of car next to you. Most of the self driving car accidents that have happened (and will happen) are due to the nuances that exist on roads. The Tesla accident was due to a large, white, reflective flash from the side of a semi-truck that the car mistook for open road. What if you could control those elements? What if you gave the cars the ability to "talk" with each other to improve their location detection abilities? If you could control these environmental variables, you would almost certainly lower the rate of self driving car accidents. Of course, you'd have to do this at a city- or even county-wide level for it to be effective for a large number of people.
Make no mistake about it: data science is here to stay. We are shocked by the Tesla accident because this technology "reached out" and had a fatal effect on a human being for the first time. While undoubtedly tragic, we have to accept the cognitive cycle for what it is and be prepared for things to get worse before they get better. There may well be other accidents in the future, but over time, self driving car accidents can get better.