I am a heavy user of artificial intelligence (AI) systems. I often do not realize I am using them and, like most of us, never think about the things that can go wrong.

All day long, we use Siri as our voice-controlled personal assistant. We rely on systems like Waze for navigation, which not only help us find our destination but also tell us when to leave based on our commute behavior and traffic density. With self-driving cars, we place the responsibility for our lives' safety in the "hands" of an intelligent system. Our interactions with customer support centers are guided by AI assistants, and personal advice on legal, medical, and financial matters is often based on algorithms and machine learning. The finance industry is the furthest ahead here; day-trading companies rely 100% on autonomous stock trading driven by algorithms. We are growing towards a world where systems autonomously take important steps in our personal lives.
Legal liability of artificial intelligence systems
An important topic linked to the rapid, industry-wide introduction of artificial intelligence systems is liability for the damage caused by the decisions of these systems. This raises some interesting questions:
- Who is responsible when intelligent systems go crazy?
- Is the supplier of the system responsible or is it the operator of the system?
- Do intelligent systems need legal representation? Should we grant them human-like rights?
- Are autonomous systems intelligent enough to take responsibility for their actions?
Most of these questions are interesting from a philosophical standpoint but totally ludicrous once you know how these systems operate. Let me first share some background about machine learning and artificial intelligence.
What is machine learning, and how does it work?
Let me be clear: current artificial intelligence is not "intelligent". It has no IQ, and its operations are comparable to those of a trained monkey. The correct term is actually machine learning, not intelligence. Machine learning is a clever combination of a complex algorithm and a large set of sample data. The algorithm is fed these examples and learns from the desired outcomes. The system calculates (guesses) the correct outcome many times and is rewarded (bonus points) or penalized depending on whether it got it right. The goal is to maximize this score on a very specific, and probably complex, task within a clearly defined domain (for example, stock trading).
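To make the reward-and-penalty idea concrete, here is a minimal sketch in plain Python (a toy task with hypothetical names, not how any real trading system works) of a learner that guesses, gets rewarded or penalized, and adjusts:

```python
import random

# Toy task: learn which of two "trades" pays off more often.
# The learner guesses, is rewarded or penalized, and updates its
# estimate of each action's value (a tiny bandit-style learner).

TRUE_WIN_RATE = {"buy": 0.6, "sell": 0.4}  # hidden from the learner
value = {"buy": 0.0, "sell": 0.0}          # learned estimates
counts = {"buy": 0, "sell": 0}

for step in range(10_000):
    # Mostly exploit the best-looking action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(["buy", "sell"])
    else:
        action = max(value, key=value.get)

    # Reward (+1, "bonus points") or penalty (-1), depending on the outcome.
    reward = 1 if random.random() < TRUE_WIN_RATE[action] else -1

    # Nudge the estimate toward the observed reward (running average).
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]

print(value)  # "buy" ends up with the higher estimate
```

After enough guesses, the estimate for "buy" rises above "sell": the system has "learned" the task without understanding anything about trading, just like the trained monkey.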
The power of machine learning is that it is focused on a single specific task. And that is good. Humans are easily distracted and convinced that they are good at multitasking (at least I am good at multitasking!).
Consider the trained algorithm responsible for autonomous driving. This system is trained to maximize the safety of the passengers inside the vehicle. It is not concerned with the optimal route (that is the navigation algorithm) and does not care about the best playlist for your trip (we use Spotify for that). And this is why algorithms outperform humans at driving a vehicle, selecting playlists, and navigating. The average human does navigation, safe driving, music selection, admiring the landscape, and sharing it all on Instagram… at the same time… while texting… and more… wow! No wonder accidents happen.
So now you know: machine learning amounts to operating a trained system that is very good at one specific task. Now back to the main question:
Who is responsible when machine learning screws up?
Who is responsible when machine learning goes wrong and results in a fatal error? This question has three possible answers: the developer, the trainer, and the operator. Depending on the root cause of the error, we can hold one of these three groups accountable.
1. The creator of the algorithm
The creator of an algorithm can make an error in the actual algorithm. This can result in a systematic deviation or even highly unpredictable behavior. Some of these errors only become visible after prolonged usage and large amounts of test data. These errors often originate from a lack of knowledge about the business model and an inability to understand the business problem. Or the developer of the algorithm accidentally introduced a bug. In these cases, the creator of the algorithm is liable for the damage caused by the outcome of the machine learning, since the delivered product was obviously inadequate. Similarly, we hold the producer of a car responsible for a construction failure.
2. The supplier of the sample data
The next step, after creating the algorithm, is training the system. We supply the system with sample cases and their expected outcomes. The analysis of this sample data results in operating rules that predict the outcome of new cases. This means the set has to be large enough and statistically significant to make sense to the system and give reasonable, accurate, and predictable outcomes. A small or biased set of sample data will result in errors.
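To see why size matters, here is a minimal sketch using scikit-learn on synthetic data (my own illustration; the library choice, dataset, and numbers are assumptions, not taken from any real system). The same algorithm trained on a handful of cases generalizes much worse than one trained on thousands:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic "sample cases with expected outcomes".
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (20, 200, 5_000):  # tiny, small, and large training sets
    model = LogisticRegression(max_iter=1_000)
    model.fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>5} samples -> held-out accuracy {acc:.2f}")
```

The exact numbers vary, but the tiny training set reliably scores worst on held-out cases; that gap is the trainer's liability exposure.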

Let me give you an example: an algorithm for distinguishing male from female faces was trained with a huge set of sample pictures. The system was deemed sufficiently trained, able to determine with 70% accuracy whether a face was male or female. However, every image had a date and time printed in the lower left corner, and the most important factor in the classification turned out to be the number of even digits in this timestamp. The outcome was pure coincidence. Another, more shocking model was aimed at identifying violent people based on existing convictions. Based on this sample set, the system predicted the highest chance of violent behavior for young Black and Latino men; no other factor made a significant difference. So the bias in existing conviction cases was transferred straight into the behavior of the machine learning model.
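A sketch of how such a spurious signal can be caught: check which features the model actually relies on. Here the "leaky" column is my synthetic stand-in for the printed timestamp, and scikit-learn's permutation importance is one way to inspect it (everything below is illustrative, not the actual face model):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2_000
label = rng.integers(0, 2, size=n)                    # what we want to predict
real_feature = label + rng.normal(scale=2.0, size=n)  # weak genuine signal
# A spurious feature that accidentally encodes the label,
# like the even digits in the printed timestamp:
spurious = label + rng.normal(scale=0.1, size=n)

X = np.column_stack([real_feature, spurious])
model = RandomForestClassifier(random_state=0).fit(X, label)

result = permutation_importance(model, X, label, random_state=0)
for name, importance in zip(["real", "spurious"], result.importances_mean):
    print(f"{name:>8}: importance {importance:.2f}")
```

If the leaky column dominates the importances, the model learned the coincidence rather than the faces, and the fault lies with the training data.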

And finally, there is the example of the Microsoft chatbot Tay, which aimed to engage in casual and playful conversation. This chatbot was trained on real Twitter data. Within hours, the friendly chatbot mutated into a racist bastard, which also says something about the opinions of the average Twitter user. So good training is essential, and with insufficient training we can hold the trainer accountable. Similarly, we hold a driving instructor accountable for not delivering adequate driving instruction.
3. The person operating the AI

And finally, the person operating a trained system can make an incorrect judgment call when interpreting the conclusion made by the system. As an operator, you have to be able to follow the path that led to the conclusion and gain insight into the probability that the conclusion is correct. The operator can decide, for example, to automatically accept all outcomes with 99% accuracy or more.
Compare this to an operator who decides to ignore all warning signs and enforces a decision with 60% accuracy and an unrealistic decision path. In that case, you cannot blame the machine learning system for the outcome. This is similar to continuing to drive for a long time with the "low oil" light on your dashboard: you are still effectively moving towards your destination, but with a very high risk of ruining the engine of your car. Likewise, a driver who causes an accident while texting is still accountable, even though he was adequately trained and was operating a high-quality car.
So make sure there is always a warning sign in the system and still use your common sense while operating it.
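A minimal sketch of such an operator policy in plain Python (the threshold and function names are hypothetical, not from any real system): auto-accept only high-confidence outcomes and route everything else to a human, with the warning sign built in.

```python
AUTO_ACCEPT_THRESHOLD = 0.99  # hypothetical policy; tune per domain

def route_outcome(prediction: str, confidence: float) -> str:
    """Decide what the operator does with a model's conclusion."""
    if confidence >= AUTO_ACCEPT_THRESHOLD:
        return f"auto-accepted: {prediction}"
    # Below the threshold the warning sign goes up: a human must
    # review the decision path before acting on the outcome.
    return f"flagged for human review ({confidence:.0%} confident): {prediction}"

print(route_outcome("approve loan", 0.995))  # auto-accepted
print(route_outcome("approve loan", 0.60))   # needs a human
```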
Legal liability for artificial intelligence works the same as normal liability
So basically, the liability of artificial intelligence is similar to that in regular legal cases. Numerous lawsuits have been filed over the liability for electrical equipment, car accidents, and guns, where the manufacturer is held responsible for the harm inflicted by the device. In all these cases, the pattern is the same. In the case of a manufacturing defect, the producer is responsible. A lack of training makes the people responsible for education liable. And deliberate malicious actions by the operator make the end user responsible.
This means current legal procedures are still viable for processing legal liability cases involving artificial intelligence and machine learning. These cases could even be processed by machine learning algorithms to ensure a predictable outcome, but that is probably a subject for another article. To distinguish between the three cases, we have to be absolutely certain about how the underlying machine learning works, so we can trace the error back to its root cause. This is currently not the case.
We can make it safer with the OpenAI initiative
We face the danger of closed-source algorithms, since we do not know what is actually happening inside them. In these cases, we have no ability to study the origin of a decision or the path toward the calculated outcome; we cannot determine the underlying algorithm or the sample data sets used to train the system. We are only confronted with the outcome and have to hope the manufacturer knows what he is doing. In practice, we use Google Maps, Apple's Siri, and Facebook on a day-to-day basis without knowing how they decide what to communicate to us. A small group of companies builds most of the current AI systems, and we have no clue about the hidden functions and errors these systems may contain, and no means of verifying how they work.

The only way forward is to create a completely transparent and uniform platform for the development of machine learning. The OpenAI initiative aims to involve as many people as possible in the creation of AI, to make the work as transparent as possible. This is the only way toward controllable, accountable, and usable intelligent systems. In the meantime, Google, Microsoft, and Amazon, along with leading thinkers in the tech industry such as Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Elon Musk, and Peter Thiel, have started their own initiatives in AI development, leading to new closed and commercial platforms.
As consumers of intelligent systems, we have the right to know about the development process, the sample data, and the algorithms, since these systems influence and manage more and more of our day-to-day lives. They decide whom we call, what we see on Netflix and Facebook, how we drive our cars, and which route we follow… and adoption grows exponentially.
So my advice: stay positive, be critical, and always ask questions.
P.S. Update 2024:
Above, I presented OpenAI as the open, transparent, community-driven antidote to closed-source AI. That case collapsed years ago. OpenAI is now a closed, commercially driven company that has been publicly criticized, including by its own co-founders, for abandoning its original transparency mission. Elon Musk (listed above as a backer and leading thinker) sued OpenAI over exactly this.