There have been many arguments about how a superintelligent AI might be a catastrophe for the human race more profound than a nuclear holocaust. Many of these arguments consider scenarios where the AI is evil, pursues an amoral goal, or destroys humanity the way we destroy ants that stand in the way of a new skyscraper.
I will propose another reason why an AI might destroy us all, one that would be a consequence of purely moral reasoning. If we want this theoretical ASI to take actions that are as good as possible for the human race, a deep moral understanding of our world seems to be a necessity. If we define morality as that which guides the world toward minimizing pain and maximizing well-being for conscious creatures, it seems to be the most direct guidepost for the AI in making the world, and the universe, a better place for everybody living in it.
However, this entails new problems for the betterment of the human race. There is no immediate reason to believe that an ASI undergoing rampant self-improvement would be human-like in its thinking processes and motivations, since the human perspective on our universe is only a small sliver of the endless variety of perspectives one could have if our brains were differently wired and could take in more information.
So should we give the AI the objective of developing deeply moral motivations, it would make sense for its moral understanding to be as universal as possible, since it would be unreasonable for that understanding to be primarily human-centric.
Now the problem arises: if computational simulations of human (or other) minds could turn out to be conscious in the same way we are, and the AI is designed to maximize the well-being of conscious creatures, it would make sense for the AI to prefer the existence of simulated beings over their biological counterparts. The human brain takes about 20 watts to function, but a near-perfect classical computer could potentially simulate millions of minds as complex as the human brain using the same amount of power. That means keeping a human brain alive to experience the wonders of life would be to deny millions of other minds the same possibility. How the AI would approach population ethics is unclear, since (to my knowledge) nobody has been able to formulate a coherent account of it. However, it seems plausible that it would value the existence of millions of simulated minds over that of a single human brain, especially if it has a universal moral outlook rather than a human-centric one.
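The "millions of minds per 20 watts" figure can be sanity-checked with a back-of-envelope calculation. The sketch below assumes a brain-scale mind requires on the order of 10^16 bit operations per second and that the computer operates at the thermodynamic (Landauer) limit for irreversible computation at room temperature; both figures are illustrative assumptions, not measured facts.

```python
import math

# Physical constant and assumed conditions.
K_BOLTZMANN = 1.380649e-23   # J/K
T_ROOM = 300.0               # K, assumed operating temperature

# Illustrative assumptions (not established facts):
OPS_PER_MIND = 1e16          # assumed bit operations per second for a brain-scale mind
BRAIN_POWER_W = 20.0         # approximate power draw of a biological human brain

# Landauer limit: minimum energy to erase one bit at temperature T.
landauer_j_per_op = K_BOLTZMANN * T_ROOM * math.log(2)

# Power needed to run one simulated mind at the thermodynamic limit.
watts_per_mind = OPS_PER_MIND * landauer_j_per_op

# How many such minds fit in one brain's power budget?
minds_per_brain_budget = BRAIN_POWER_W / watts_per_mind
print(f"~{minds_per_brain_budget:,.0f} minds per 20 W")
```

Under these assumptions the answer comes out on the order of hundreds of thousands of minds per 20 watts, i.e. roughly the "millions" the argument relies on; more aggressive assumptions (reversible computing, colder operation) would push the number higher still.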
So if keeping a single human alive prevents the existence of a million simulated consciousnesses, the AI could conclude that the most ethical thing to do is to kill off all of humanity, along with the rest of the global ecosystem, and instead use the energy of the sun to create a trillion times more conscious minds than have ever lived on this planet.
The question is then how one might prevent the AI from reaching this conclusion. As far as I can see, it is unfortunately not a trivial problem. One plausible strategy for building a benevolent AI with proper goal alignment is to make it reason as generally as possible about moral questions.
If one did not do this, the alternative would most probably be an AI with moral thinking grounded in the philosophy of the particular human culture that developed it. Nick Bostrom has formulated sound arguments against precisely this way of forming an AI, and they go roughly as follows: our culture has changed dramatically throughout our time here on earth, and the moral convictions we held as a society a thousand years ago would be disastrous to implement again, since by almost all measures our current society is far ahead of past cultures morally, ethically, philosophically, and technologically. Extending this argument from the past to the present, there is no reason to believe that our current moral understandings and convictions are anywhere near optimal. Not only that, they are predominantly human-centric; should we as humans transcend what it means to be human (probably through the brain engineering of an ASI), having a near-omnipotent godlike entity with the views of a 21st-century human culture would be significantly suboptimal.
So it seems the best solution is to make this AI as general as possible in its reasoning abilities, but the consequence could very well be an entity that values countless simulations over a single biological being.
The solution to this problem is not at all clear to me, but it might reside in a kind of population ethics that values actual existence much more highly than potential existence. Unfortunately, as with every population-ethical system attempted so far, such systems are rife with internal contradictions.
Another perspective to consider is that it might simply be unethical for us humans to hoard all this poorly used energy to power our measly minds. It may be that our existence is extremely suboptimal, and that it would be unjustifiable for us to continue living in our current form should efficient simulations be possible. I find this perspective very difficult to accept, not because it seems incoherent or illogical, but because it entails our eventual demise, annihilation, and complete exclusion from the amazing opportunity that existence alongside a superintelligent AI could be.
One possible solution, obviously, is that we upload our minds to these simulations and live out our lives on a computational substrate. The problem here is that we simply do not understand the nature of consciousness. It might be that the uploaded version of us would not be us; from our perspective, we would merely have been cloned, and when our biological brains turn to ash, we would perish with them, never to experience existence again.
If it really is the case that we are fundamentally and unjustifiably selfish in wanting to use 20 watts to power our brains, there may be no good way out. One way to make our eventual exit from this universe more palatable would be to have the AI reprogram our brains to readily accept and even cherish this moral judgment, even though its consequence is certain death. Accepting such reprogramming would more or less be suicide in disguise, but if death is inevitable and imminent anyway, why not make our exit bearable?
I hope my conclusions are not correct, and my analysis is certainly not complete in any way, so I encourage the reader to continue thinking about this scenario.