Using a maximization of well-being as a bedrock via a set of rules (health is better than sickness for example), is programming morality, or the ability to distinguish moral and immoral choices, into a computer possible?
It's not only possible, it's inevitable; all we need is for technology to keep advancing. Computers are simply tools: if morality is the task given to them, they will eventually complete it.
The problem then becomes one of implementation. Such a computer would need to have no power beyond communication. Suppose it had Internet access: since the highest morality would demand alleviating as much suffering as possible, the AGI might purchase food from all over the world and ship it to the many people who are starving. It might also strike every nuclear facility to end the nuclear tensions that exist.
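The rule-based well-being maximization described above can be sketched in a few lines. This is a minimal, hypothetical illustration only: the rule names, scores, and actions are invented for the example, and a real system would need vastly richer outcome prediction.

```python
# Hypothetical well-being rules: each predicted outcome maps to a score.
# "Health is better than sickness" becomes a pair of opposite-signed weights.
WELL_BEING_RULES = {
    "health": 10,
    "sickness": -10,
    "nourished": 8,
    "starving": -8,
}

def moral_score(outcomes):
    """Sum the well-being scores of an action's predicted outcomes."""
    return sum(WELL_BEING_RULES.get(o, 0) for o in outcomes)

def choose_action(actions):
    """Pick the action whose predicted outcomes maximize total well-being."""
    return max(actions, key=lambda a: moral_score(actions[a]))

# Toy decision: each action lists the outcomes it is predicted to cause.
actions = {
    "send food":  ["nourished", "health"],
    "do nothing": ["starving", "sickness"],
}
print(choose_action(actions))  # → send food
```

Even this toy version exposes the implementation problem raised above: the machine's choice is only as good as its rule set and its outcome predictions, and nothing in the maximization itself forbids drastic actions that happen to score well.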