Things I’ve learned (so far) about how to do practical ethics
I had the opportunity, a few months back, to look through some old poems I’d written in high school. Some, I thought, were pretty good. Others I remembered thinking were good when I wrote them, but now they seem embarrassingly bad: pseudo-profound, full of clichés, marked by empty rhetoric instead of meaningful content. I’ve had a similar experience today with my collection of articles here at the Practical Ethics blog. And oh, the things I have learned!
Here are just a few of the lessons that have altered my thinking, or otherwise informed my views about “doing” practical ethics — particularly in a public-engagement context — since my very first blog post appeared in 2011:
When will we have proper AI? The literature is full of answers to this question, as confident as they are contradictory. In a talk given at the Singularity Institute in San Francisco, I analyse these predictions from a theoretical standpoint (should we even expect anyone to have good AI predictions at all?) and a practical one (do the predictions that have been made look as if they have good information behind them?). I conclude that we should not put our trust in timeline predictions, though some philosophical predictions seem surprisingly effective; in all cases, however, we should increase our uncertainties and widen our error bars. If someone predicts the arrival of AI at some date with great confidence, we have every reason to think they’re completely wrong.
But this doesn’t make our own opinions any better, of course: your gut feeling is as good as any expert’s, which is to say, not any good at all.
Many thanks to the Future of Humanity Institute, the Oxford Martin School, the Singularity Institute, and my co-author Kaj Sotala. More details of the approach can be found online at http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/ or at http://lesswrong.com/lw/e79/ai_timeline_prediction_data/.
In a previous post, I discussed how, as a philosopher, one should decide on a research area. I suggested that one method was to work out what are potentially the biggest problems the world faces, work out what the crucial normative considerations are, and then work on those areas. Call that the top-down method: starting with the problem, and working backwards to the actions one should take.
There’s a second method for high impact philosophy, however. Let’s call it the bottom-up method.
- Begin by asking: ‘What are the biggest decisions that one typically makes in life?’
- Then ask: ‘What are the crucial normative considerations that might affect how I should make those decisions?’
- Then figure out which of these crucial considerations is most likely to produce an action-relevant outcome given your marginal research time.
- Then work on that topic!
As in my previous post, I’ll go through each step in turn.