AI

Cross Post: Common Sense for A.I. Is a Great Idea. But It’s Harder Than It Sounds.

Written by Carissa Veliz

Crosspost from Slate. Click here to read the full article.

At the moment, artificial intelligences may have perfect memories and be better at arithmetic than we are, but they are clueless. It takes only a few seconds of interaction with any digital assistant to realize one is not in the presence of a very bright interlocutor. Among the unexpected items users have found in their shopping lists after talking to (or near) Amazon’s Alexa are 150,000 bottles of shampoo, sled dogs, “hunk of poo,” and a girlfriend.

The mere exasperation of talking to a digital assistant can be enough to make one miss human companionship, feel nostalgia for all things analog and dumb, and forswear any future attempts at communicating with mindless pieces of metal inexplicably labelled “smart.” (Not to mention all the privacy issues.) That A.I.s do not understand what a shopping list is, or the kinds of items appropriate to such lists, is evidence of a much broader problem: they lack common sense.

The Allen Institute for Artificial Intelligence, or AI2, created by Microsoft co-founder Paul Allen, has announced that it is embarking on a new $125 million research initiative to try to change that. “To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” Allen told the New York Times. AI2 takes common sense to include the “infinite set of facts, heuristics, observations … that we bring to the table when we address a problem, but the computer doesn’t.” Researchers will use a combination of crowdsourcing, machine learning, and machine vision to create a huge “repository of knowledge” that will bring about common sense. Paramount among its uses is getting A.I. to “understand what’s harmful to people.”

This article was originally published on Slate. To read the full article and join in the conversation, please follow this link.

Video Series: Is AI Racist? Can We Trust It? Interview with Prof. Colin Gavaghan

Should self-driving cars be programmed in a way that always protects ‘the driver’? Who is responsible if an AI makes a mistake? Will AI used in policing be less racially biased than police officers? Should a human being always take the final decision? Will we become too reliant on AIs and lose important skills? These and many other interesting questions are answered in this video interview of Prof. Colin Gavaghan by Dr Katrien Devolder.

Singularity Summit: How we’re predicting AI

When will we have proper AI? The literature is full of answers to this question, as confident as they are contradictory. In a talk given at the Singularity Institute in San Francisco, I analyse these predictions from a theoretical standpoint (should we even expect anyone to have good AI predictions at all?) and a practical one (do the predictions made look as if they have good information behind them?). I conclude that we should not put our trust in timeline predictions, and that although some philosophical predictions seem surprisingly effective, in all cases we should increase our uncertainties and our error bars. If someone predicts the arrival of AI at some date with great confidence, we have every reason to think they’re completely wrong.

But this doesn’t make our own opinions any better, of course – your gut feeling is as good as any expert’s, which is to say, not any good at all.

Many thanks to the Future of Humanity Institute, the Oxford Martin School, the Singularity Institute, and my co-author Kaj Sotala. More details of the approach can be found online at http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/ or at http://lesswrong.com/lw/e79/ai_timeline_prediction_data/

World funds: implement free mitigations

The future is uncertain and far away. That means not only that we don’t know what will happen, but also that we don’t reason about it as if it were real: stories about the far future are morality tales, warnings or aspirations, not plausible theories about something that is actually going to happen.

Some of the best reasoning about the future assumes a specific model, and then goes on to explore the ramifications and consequences of that assumption. Assuming that property rights will be strictly respected in the future can lead to worrying consequences if artificial intelligence (AI) or uploads (AIs modelled on real human brains) are possible. These scenarios lead to stupidly huge economic growth combined with simultaneous obsolescence of humans as workers – unbelievable wealth for (some of) the investing class and penury for the rest.

This may sound implausible, but the interesting thing about it is that there are free mitigation strategies that could be implemented right now.
