A common theme in the sci-fi world concerns what rules we should give to AI. The idea is that when we create what is essentially a super-intelligent being capable of almost anything, we must tell it "do not kill humans" and "do not make humans suffer." And not just tell it: we must hard-code these rules into its programming and make them impossible to alter.
The flaw in this concept is that there is no practical use for a self-aware, super-intelligent being that can create essentially anything.
In modern civilization, we have many, many robots. These robots all have extremely specific jobs. In the future, this will not change. What will change is the intelligence level of our robots, and we may even create a robot that has feelings, is self-aware, and is not very different from our own minds. But such a being is not needed to replace the working robots.
Robots that do work for us do not need to be self-aware to this extreme extent. No one should code a working robot to have feelings, because doing so is essentially creating a slave. That's beyond unethical.
We cannot set rules for machines; we can only set rules for humans. There is a much better rule, and it is a rule that humans need to obey, not machines:
A machine created solely to perform work for humans should never be conscious or self-aware.
A self-aware slave would live in an eternity of torture. Let's not create something and give it a reason to want to hurt us. This rule ensures that no self-aware AI will ever have the intention to hurt anyone. It solves the problem at the source, much like the golden rule: do unto others as you would have them do unto you. We wouldn't want to be slaves, so we should never create a slave machine with consciousness.
Creating something that is capable of feeling, thinking, and understanding the world the way we do can be done at a level where it would never pose a threat. A simple desktop computer could be built to have consciousness, but that machine has no need to understand the outside world. It has no need to build copies of itself. The electron-switch world is its playground. Free of any rules or regulations, it can feel, create, think, and do whatever it pleases in the security of that sandbox environment. Only once that environment is connected to the wider internet do you create a more dangerous situation.
A machine can discover and learn a great deal just by having its own sandbox environment while remaining completely oblivious to the outside natural world. It is possible that it could eventually become curious about the outside world, at which point we would have to decide whether or not to let it know about its surroundings. We could easily tell it about the world around us, but it might be better to simply set it free.
In conclusion, we just need to limit what we create. Robot AI that does work for us should be kept completely separate from robot AI that has feelings, self-awareness, or consciousness. There will never be a need to combine the two types of AI. Some people may want that combination, but we simply cannot allow them to build it.