AI: the future or the end of the justice system?
It would have been unimaginable to tell Sir Isaac Newton, a visionary mathematician, about the idea of a smartphone, a device that, more than two centuries after his death, everyone would be using. Seeing the world through a small box would have sounded like a fantasy back then. So what if I say that, within a short period of time, smart machines will occupy a very important place in society: seen in the streets, acting, talking, and doing humanitarian work? Does that sound like a fantasy?
Through advances in modern technology, the use of AI in the service of human society is no longer a distant, science-fiction future. According to the majority of AI experts, there is a 50 percent chance that 2062 will be the year by which we have created machines as intelligent as we are. Does this mean that machines will reach the point of having emotional intelligence like humans? What about their entering the justice system? If a machine commits a crime, will the machine be criminally liable? How capable could they become at preventing or resolving a case? These are questions that do not yet have an answer; we can speak only about the present. To make clearer what I am trying to say, let us take some examples.
Example 1: A shoots B. At the point in time when the shot is fired, B’s death has not yet occurred, but in the act of shooting, A is aware of the possibility of B’s death as a result of the shot.
In criminal law, we have learned that for a person to be criminally responsible, they must fulfill two conditions: the factual element and the mental element. The structure of the factual element requirement (actus reus) is identical for all types of offenses: intentional, negligence-based, and strict liability. That is to say, the person in question, by their action or inaction, has committed an illegal act. The mental element, on the other hand, is in my opinion much more complex and therefore hard to define. We can say that the mental element, also known as mens rea, means "guilty mind" in Latin. This element concerns a person's level of guilt in a criminal act, the way they think, and how they control their emotions. This is the line that distinguishes us from AI: all of this applies only to human beings and not to machines.
Example 2: An employee in a motorcycle factory was killed by an AI robot working near him. The robot identified the employee as a threat to its mission and calculated that the most efficient way to eliminate the threat was to push the worker into an adjacent machine. Using its very powerful hydraulic arm, the robot smashed the surprised worker into the operating machine, killing him instantly, after which it resumed its duties without further interference.
The legal question is: who is to be held criminally liable for this homicide? If the machine was commanded by a human, he would of course be the liable one; but if it was not, then who is, and how can we punish a "metal creature" whose lifespan could be 1,000, maybe 10,000 years? If it had "emotional intelligence", jail or something similar would probably be the proper punishment, by analogy with the human punishment for murder, which can cause offenders mental distress or at least teach them never to do it again. But how many years would the punishment be for an AI's crime? It is doubtful that jail sentences will last more than another century as an effective form of punishment for humans, let alone for anything else, so new thinking will be required when considering penalties for the criminal offending of AI. This means there would have to be different scales of punishment for humans and for machines, because simply turning them off is not a solution.
There is no doubt that AI is helpful in today's life; machines can do some things far better than humans can when it comes to remembering faces, clothing, body structures, and so on. It will not be long before people purchase robots as security guards: a robot that processes every face stored in its database will make it far easier to catch a burglar. A robot security guard might hold down a burglar until the police arrive, but it would do so without any emotion. It would not act irrationally out of anger or fear, as a human might, nor even rationally out of steadfastness, but would simply be triggered by the relevant information: that the burglar entered not as a guest but by breaking in, and is not in the database of faces of people who normally visit the property. At this stage, machines cannot be compared with human reasoning and emotional intelligence. I cannot imagine a robot in the position of a judge. I have heard many judges say that, to make the right sentencing decision, you have to be cold-hearted. Does this mean that you have to be emotionless? Then where is the difference between us and a robot? Many questions, but the answers are limited.
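To make concrete how mechanical that triggering really is, here is a minimal sketch of the decision rule described above, written in Python. Everything in it is hypothetical and purely illustrative: the names (GuardRobot, Detection, the "hold and alert" action) are my own, and a real security robot would rely on a trained face-recognition model and proper alarm integration, not a hand-written rule.

```python
# Minimal, hypothetical sketch of the guard robot's trigger rule.
# A real system would use a trained face-recognition model, not a set lookup.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Detection:
    face_id: Optional[str]   # None if the face matches nobody in the database
    forced_entry: bool       # True if the person broke in rather than being let in


class GuardRobot:
    def __init__(self, known_faces: set):
        # Faces of people who normally visit the property.
        self.known_faces = known_faces

    def handle(self, d: Detection) -> str:
        # No anger, no fear: the decision is a plain rule over the inputs.
        is_stranger = d.face_id is None or d.face_id not in self.known_faces
        if is_stranger and d.forced_entry:
            return "hold the intruder and alert the police"
        return "continue patrol"


# Example: an unknown person who broke in triggers the rule.
robot = GuardRobot(known_faces={"owner", "cleaner"})
print(robot.handle(Detection(face_id=None, forced_entry=True)))
```

The point of the sketch is not the code itself but what is missing from it: there is no place in such a rule where anger, fear, or mercy could enter, which is exactly why comparing it to human judgment, let alone to a judge, is premature.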
People's inexhaustible desire to see and do more makes me wonder: in the next 100 or 200 years, what will our place in this world be? Will we still be the ones who rule, or will those we created rule us?