We are all somewhat numbed by the continual announcements about cyber-hackers that have broken into an online database and stolen our personal information, oftentimes doing so via attacking credit reporting agency databases, retailer databases, insurance company databases, financial and banking systems, and the like.
It seems like nearly every day a letter arrives in the mail notifying you that your personal identification has been compromised, urging you to take precautionary measures and be on the watch for someone falsely using your ID and masquerading as you. Those reprehensible uses can harm your credit rating, smear your reputation, and drain your savings or other monies that the hackers might be able to access.
It is the wild west out there in cyber-land.
Generally, your personal safety is not particularly threatened, though let’s be clear that losing the dough in your bank accounts is tantamount to a type of financial menace and livelihood threat that could lead to your becoming destitute or facing other costly repercussions.
As we find ourselves becoming increasingly reliant on computer-controlled physical systems that are within our midst (for my discussion about the Internet of Things, IoT, and impending cyber-threats, see this link here), and as those systems tend to be hooked-up online, the danger of being threatened with actual bodily harm will rise.
One such obvious and frequently cited example is the emergence of AI-based self-driving cars and other similar autonomous vehicles (including self-driving drones, self-driving trucks, and so on).
In short, there is a possibility that a cyber-hacker could intercede in an autonomous vehicle and in one manner or another cause difficulty or worse in terms of impacting the driving aspects of the vehicle.
This concern is one of the most commonly noted qualms when public surveys and polls are taken about the advent of self-driving cars (for insights into the valid points and also the misconceptions that those surveys seem to generate, see my analysis at this link here).
For those in the self-driving car industry, some hold an ongoing viewpoint that although there is a potentially minuscule chance of a cyber-hack into a self-driving car, and thus cyber-security is indeed paramount, there is nonetheless little incentive for cyber-hackers to target self-driving cars due to the (presumed) lack of money to be made.
In other words, the belief appears to be that since a self-driving car is not a bank, it is not a savings account, it is not a credit card, one must conclude ergo that cyber-hackers will not go out of their way to come after self-driving cars.
To such a perspective, I loudly (and politely) say balderdash, and plead that those promulgating such a stance reconsider the matter and forthwith cease and desist spreading a quite misleading and wholly unsound position.
Let’s be above-board, there are plenty of ways for cyber-hackers to make money off of self-driving cars.
In fact, the money-making potential is quite sizable and will indisputably be a crucial factor in why and how cyber-hackers sink their teeth into self-driving cars.
Anyone with a blind spot about this source of motivation will likely underestimate the severity of the threats that cyber-hackers are going to undertake in this realm.
Maybe this might help: Follow the money.
What this means is that if you are already willing to agree that safety is a key aspect of cyber-security, and that there is a chance (no matter how slim) that cyber-hackers might seek to undermine the safety of self-driving cars, then the money aspects are inextricably intertwined, I assure you.
I will lay out for you the numerous ways that cyber-hackers have an “opportunity” (dastardly so) to try and make a payday out of self-driving cars.
Before I share those insights, allow me a moment to bring up some related points.
First, whenever I write about cyber-security, there are some that right away complain that by doing so the indications proffered are allowing cyber-hackers to gauge what kinds of cyber protections are being devised and what kinds of cyber vulnerabilities exist.
The worry is that by writing about these topics, it helps the cyber-hackers, arming them accordingly.
Please realize that this is the now-classic head-in-the-sand posturing regarding discussing cyber-security and related matters.
Some believe that we should not talk about, write about, or in any manner even whisper the nature and avenues of cyber-security and cyber-hacking, since doing so tips a hand to the evildoers.
This is a misguided and ill-informed notion, though one can certainly sympathize with their logic.
Here’s the rub.
It is plainly the case that cyber-hackers are going to figure out these same facets one way or another. Trying to hide such discussions does little good, and it tends to undercut the preparations for, and awareness about, stopping and preventing cyber-hacking.
A head in the sand translates into getting kicked in the rear, as the old saying goes.
Meanwhile, there is another stated reason to not discuss such matters, namely that by doing so, it will cause mass hysteria.
Again, the logic for this is certainly understandable.
When those writing about cyber-security and cyber-hacking do so irresponsibly, attempting merely to fan the flames of angst, there is no question that such shoddy and perhaps even iniquitous efforts are sad, hurtful, and do not advance sensibly the battle between cyber-security and cyber-hacking.
It is vital that discussions about cyber-crime be taken seriously, somberly, factually, and portray matters in a balanced and rational way.
Okay, so having covered those caveats, let’s dive into some background and context of how cyber-security and cyber-hacking come to play related to self-driving cars.
After establishing that foundation, we can then take a close look at how money is an underlying motivator and something not to be ignored, trivialized, or falsely thought as inconsequential.
Speaking of foundations, not everyone knows what it means to refer to a “self-driving car” and so we ought to start there.
The Role of AI-Based Self-Driving Cars
True self-driving cars are ones in which the AI drives the car entirely on its own and there isn't any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
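The levels just described can be sketched as a simple lookup. This is a minimal illustration only; the level names follow the SAE J3016 taxonomy, and the helper function name is my own:

```python
# Illustrative mapping of SAE driving-automation levels.
# The short descriptions paraphrase the discussion above.
SAE_LEVELS = {
    0: "No automation: the human does all of the driving",
    1: "Driver assistance: a single automated feature assists the driver",
    2: "Partial automation: semi-autonomous, with ADAS add-ons; human co-shares",
    3: "Conditional automation: human must remain ready to take over",
    4: "High automation: true self-driving within a bounded operating domain",
    5: "Full automation: true self-driving everywhere, no human driver at all",
}

def requires_human_driver(level: int) -> bool:
    """Levels 0 through 3 require a human driver; Levels 4 and 5 do not."""
    return level <= 3

print(requires_human_driver(2))  # True: the human co-shares the driving task
print(requires_human_driver(5))  # False: the AI does all of the driving
```

The dividing line at Level 3 versus Level 4 is the crux of the hacking discussion that follows, since it determines whether a human is at the wheel at all.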
There is not yet a true self-driving car at Level 5, and we don't yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
Self-Driving Cars And Hacking Levels
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
Generally, most automakers anticipate entirely removing the human-accessible driving controls from Level 4 and Level 5 self-driving cars. They do not have to do so, since there is no across-the-board requirement, but it makes sense that they would likely want to.
Simply stated, if you believe that human drivers have driving foibles, which we know they do, and we know that for example there are right now about 40,000 annual deaths due to car crashes in the United States alone, along with approximately 2.3 million injuries, it seems prudent to take away the driving from humans.
And, if the AI can do the driving without any need for a human driver, it settles the matter to deny driving access to the human occupants (for more details, see my discussion at this link).
Before pursuing that aspect in the context of cyber-hacking and cyber-security, consider the Level 2 and Level 3 cars.
As mentioned, those are cars that involve the co-sharing of the driving task.
Keep in mind then the Level 4 and Level 5 will generally be minus driving controls for humans, while the Level 2 and Level 3 will have such controls and yet also involve the co-sharing of the driving with the automation of the car.
Some would say that the downside of Level 4 and Level 5 is that if a cyber-hacker were to take over the driving controls (which at this point in the discussion I'm not saying is likely or unlikely, though we ought to agree there is some chance of it, however much we might debate the probability), the human occupants have no ready or apparent means to wrest back control.
Those same critics sometimes believe that with a Level 2 or Level 3, the human driver either will not suffer at the hands of a cyber-hack, or that if they do, since the human driver is at the wheel, they can simply retake control.
I would not be so sanguine about Level 2 and Level 3.
If the steering suddenly and unexpectedly makes a wild veer to the right, and the car is going, say, 65 miles per hour, and there is a wall there, it seems mighty doubtful that the human driver will realize what is happening, and even if they do, it will likely be too late to react (for my explication about human driver reaction times, see this link here).
The point is that cyber-hacks can wreak havoc not just on Level 4 and Level 5, which is usually where all the attention and anguish goes, but just as readily on Level 2 and Level 3 cars, and a human sitting in the driver's seat does not especially bolster the chances of averting the hack (of course, it depends on what kind of hack is occurring).
Some will begrudgingly concur with the Level 2 and Level 3 qualms, but argue that the humans riding in a self-driving car are essentially sitting ducks, lacking any direct and immediate means to overcome a hack, while the human driver in the less-automated cars at least has a chance of taking action.
I would counter-argue that this sentiment is akin to rearranging the deckchairs on the Titanic: the human driver in a Level 2 or Level 3 is not likely to make a substantive difference when a significant hack occurs.
Suppose a cyber-hack causes a Level 2 or Level 3 car to veer slightly off-course, but the human driver panics and wildly over-corrects, possibly steering the vehicle into a calamity that otherwise might not have arisen.
Or, suppose that human drivers are aware of the chances of cyber-hacks, so they sit on edge, waiting for the day it might happen, and end up at times radically over-controlling their car even though no cyber-hack has actually been activated (it is a ghost implanted in their minds).
There are about 250 million licensed drivers in the United States today, and one blanches at the notion that those humans still driving cars will be on pins-and-needles, leading to some percentage of newly classified car crashes as ones that were prompted due to the human driver believing their car was under cyber-attack.
It could be a huge multiplier effect when applied across hundreds of millions of human drivers.
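As a back-of-the-envelope sketch of that multiplier effect: the 250 million figure comes from the discussion above, while the phantom-hack rate is entirely hypothetical, chosen only to illustrate how even a tiny per-driver probability scales up across the whole driving population.

```python
# Hypothetical back-of-envelope: crashes triggered by drivers who merely
# *believe* their car is being hacked. The rate below is an assumption
# for illustration, not a measured statistic.
licensed_drivers = 250_000_000      # approximate U.S. figure cited above
phantom_hack_rate = 1 / 1_000_000   # hypothetical: one-in-a-million per driver per year

phantom_crashes_per_year = licensed_drivers * phantom_hack_rate
print(int(phantom_crashes_per_year))  # 250 incidents per year, even at this tiny rate
```

Even a one-in-a-million jitter, multiplied across hundreds of millions of drivers, produces a steady stream of incidents, which is the multiplier effect in a nutshell.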
To recap, cyber-hacking will impact not just Level 4 and Level 5 but Level 2 and Level 3 too; cars that allow human driving will not be immune to hacks, and the presence of a human driver does not necessarily afford a heightened measure of safety (it could potentially be less safe).
Show Me The Money
I trust that you are now open-minded to the idea that cyber-hacks could impact Level 2, Level 3, Level 4, and Level 5 cars, and thus we can judiciously consider that there is vulnerability to go all around, no matter the level of the vehicle (other than ones that have essentially no automation, or that have no connectivity allowing for cyber-hacking, though even those could possibly be hacked one at a time via the OBD-II port; see my discussion at this link here).
Again, put aside for a moment the chances of those acts, since I realize that some will jump up and instantly claim that the odds of those occurring are slim. My focus is that they could happen and, as such, what that portends.
Time for the money.
We’ll start with the simplest variant.
Cyber-hackers are sometimes motivated by the notoriety that can be had via a highly visible and bone-chilling hack. Besides the self-aggrandizement, some of those cyber-hackers parlay their gained reputation into other acts.
In short, if one could hack a self-driving car, it could bolster their street cred, which in turn might bring them monetary offers of doing the same or other kinds of cyber-hacking, leading to a payday.
Essentially, they hire themselves out as a “proven” cyber-hacker, acting as a paid mercenary for other heinous cyber-hacking efforts. You won’t though be able to attract much dough if you do not have a calling card, as it were, and the potential of enormous publicity from a self-driving car hack is a whopper of a boost.
You might be carping that this seems somewhat indirect, but nonetheless it is a bona fide, real-world way of tying this kind of cyber-hacking to money.
Shift gears and consider the more direct routes to money.
Imagine a cyber-hacker that has concocted some nefarious exploit for a particular brand of a self-driving car.
They perhaps employ it on one or two such vehicles, showcasing what they can get away with. Then, they contact the fleet owner of the self-driving cars and/or the automakers, and undertake a ransom threat, seeking money to either undo the exploit or reveal how the exploit works, etc.
What will the fleet owners and automakers do?
Some of you might be bellowing that no fleet owner and no automaker would ever pay such a ransom.
If you are making such a declaration, you might want to look more closely at the massive size of the ransomware marketplace (see my discussion at this link here).
You might also want to contemplate the aspects of a nation-state that might be (reluctantly or overtly) willing to pay such a ransom (see my indication at this link here).
Consider another example of a money-making path, similar to the ransomware route.
A cyber-hacker with a self-made exploit for a self-driving car might decide to post the existence of the exploit for auction, seeking the highest bidder that might want to purchase it.
In this use case, the cyber-hacker is likely thinking that it is too risky to try and use the exploit themselves, so why not instead sell the thing and pocket the dough, secretly, without as much exposure, and then presumably start on their next sellable exploit.
Likely, for “proof” that the exploit is real and demonstrative, the cyber-hacker might use it on a vehicle and perhaps videotape the result or otherwise offer evidence to showcase that the exploit is not vaporware.
Overall, I believe you get the gist, which is that money ties to safety (hacking), and safety (hacking) ties to money.
Rest assured that there is a slew of additional ways to make money by cyber-hacking self-driving cars (that is a glum thought).
I won’t go into them all here.
There is a twist though that is worthwhile to consider.
Consider this stomach-wrenching use case. A despicable scammer contacts someone and tells them that there is a remotely operable hack associated with self-driving cars, and that whichever self-driving car the person uses for ride-sharing or any other purpose, the exploit is ready to be used against them. If the person will transfer funds, give up their credit card, or pay some bitcoins, the scammer assures them they will never be harmed by any such exploit.
Scammers will always exist and find new ways to scam, including in the case of self-driving cars, woefully so.
Not wanting to end this discussion on such a sour note: since we know that these are possibilities, along with the inarguable allure of money, we can attempt to mitigate these evildoers by bolstering cyber-security and by responsibly engaging the public in awareness of these matters.
And, as perhaps a silver lining, maybe we can get the bad-hat hackers to switch over to the good-hat side of hacking, offering them the altruistic notion of helping mankind while also making money, whether by earning bounties for the exploits they discover or by enlisting in the protection of self-driving cars for a steady paycheck and a bountiful peace of mind.
That seems to be the way that the wild west was won.
Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) with over 3 million amassed views of his AI columns. As a seasoned executive and high-tech entrepreneur, he combines practical industry experience with deep academic research to provide innovative insights about the present and future of AI and ML technologies and applications. Formerly a professor at USC and UCLA, and head of a pioneering AI lab, he frequently speaks at major AI industry events. Author of over 40 books, 500 articles, and 200 podcasts, he has made appearances on media outlets such as CNN and co-hosted the popular radio show Technotrends. He's been an adviser to Congress and other legislative bodies and has received numerous awards/honors. He serves on several boards, has worked as a Venture Capitalist, an angel investor, and a mentor to founder entrepreneurs and startups.