Oxford Uehiro Prize in Practical Ethics: Should We Take Moral Advice From Our Computers?
This essay, written by University of Oxford student Mahmoud Ghanem, received an Honourable Mention in the undergraduate category of the Oxford Uehiro Prize in Practical Ethics.
The Case For Computer Assisted Ethics
In the interest of rigour, I will avoid using the phrase “Artificial Intelligence”, though many of the techniques I will discuss, namely statistical inference and automated theorem proving, underpin most of what is described as “AI” today.
Whether we believe that the goal of moral action ought to be to form good habits, to maximise some quality in the world, to follow the example of certain role models, or to adhere to some set of rules or guiding principles, a good case can be made for consulting a well-designed computer program when making our moral decisions. After all, successfully carrying out any of the above requires at least:
(1) Access to relevant and accurate data, and
(2) The ability to draw accurate conclusions by analysing such data.
Both are things that computers are very good at.
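By way of illustration (this sketch is not part of the original essay), point (2) can be made concrete in code: guiding principles encoded as predicates that a program checks a proposed action against. The `Action` fields and the two principles below are hypothetical examples, chosen only to show the shape of such a rule-checking program.

```python
# A minimal, hypothetical sketch of rule-based moral advice:
# principles are encoded as predicates over a proposed action,
# and the program reports which principles the action violates.
# The rules and Action fields are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    description: str
    harms_someone: bool
    breaks_promise: bool

# Each principle pairs a name with a predicate that flags a violation.
PRINCIPLES: list[tuple[str, Callable[[Action], bool]]] = [
    ("non-maleficence", lambda a: a.harms_someone),
    ("fidelity", lambda a: a.breaks_promise),
]

def advise(action: Action) -> list[str]:
    """Return the names of the principles the action would violate."""
    return [name for name, violated in PRINCIPLES if violated(action)]

if __name__ == "__main__":
    plan = Action("cancel on a friend to avoid traffic",
                  harms_someone=False, breaks_promise=True)
    print(advise(plan))  # ['fidelity']
```

Of course, a real system of the kind the essay envisages would need far richer data and inference than binary flags; the point of the sketch is only that, once principles and facts are made explicit, the checking itself is mechanical.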
Video Series: Walter Sinnott-Armstrong on Moral Artificial Intelligence
Professor Walter Sinnott-Armstrong (Duke University and Oxford Martin Visiting Fellow) plans to develop a computer system (and a phone app) that will help us gain knowledge about human moral judgment and that will make moral judgment better. But will this moral AI make us morally lazy? Will it be abused? Could this moral AI take over the world? Professor Sinnott-Armstrong explains…
Guest Post: ENHANCING WISDOM
Written by Darlei Dall’Agnol
Stephen Hawking has recently made two very strong declarations:
- Philosophy is dead;
- Artificial intelligence could spell the end of the human race.
I wonder whether there is a close connection between the two. In fact, I believe that the second will be true only if the first is. But philosophy is not dead, and it can undoubtedly help us to prevent the catastrophic consequences of misusing science and technology. Thus, I will argue that it is through the enhancement of our wisdom that we can hope to prevent artificial intelligence (AI) from causing the end of mankind.
Singularity Summit: How we’re predicting AI
When will we have proper AI? The literature is full of answers to this question, as confident as they are contradictory. In a talk given at the Singularity Institute in San Francisco, I analyse these predictions from a theoretical standpoint (should we even expect anyone to have good AI predictions at all?) and a practical one (do the predictions made look as if they have good information behind them?). I conclude that we should not put our trust in timeline predictions; some philosophical predictions seem surprisingly effective, but in all cases we should increase our uncertainties and our error bars. If someone predicts the arrival of AI at some date with great confidence, we have every reason to think they’re completely wrong.
But this doesn’t make our own opinions any better, of course: your gut feeling is as good as any expert’s, which is to say, not any good at all.
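To make the “increase our error bars” point concrete, here is a small sketch. The prediction dates in it are invented for the example, not taken from the actual dataset linked below; the point is only that the spread across confident forecasts can dwarf the precision any single forecast claims.

```python
# Hypothetical illustration only: a handful of invented "AI arrival"
# forecasts, showing that the spread across confident predictions
# is the informative quantity, not any single stated date.
import statistics

predicted_years = [2025, 2040, 2060, 2100, 2150]  # invented, not real data

mean = statistics.mean(predicted_years)
spread = statistics.stdev(predicted_years)

print(f"mean prediction: {mean:.0f}")   # 2075
print(f"std deviation:   {spread:.0f} years")  # ~50 years
# A forecast quoted to the nearest year, against a ~50-year spread,
# claims far more precision than the evidence supports.
```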
Many thanks to the Future of Humanity Institute, the Oxford Martin School, the Singularity Institute, and my co-author Kaj Sotala. More details of the approach can be found online at http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/ or at http://lesswrong.com/lw/e79/ai_timeline_prediction_data/