Video Series: Is AI Racist? Can We Trust it? Interview with Prof. Colin Gavaghan

Should self-driving cars be programmed in a way that always protects ‘the driver’? Who is responsible if an AI makes a mistake? Will AI used in policing be less racially biased than police officers? Should a human being always take the final decision? Will we become too reliant on AIs and lose important skills? These and many other interesting questions are answered in this video interview with Dr Katrien Devolder.


One Response to Video Series: Is AI Racist? Can We Trust it? Interview with Prof. Colin Gavaghan

  • Keith Tayler says:

    We should perhaps not be too surprised that AI is only repeating the issues that were raised during the 19th century with the use of probability and statistics. If the advocates of AI (including some so-called ethicists) were to read Dickens’s Hard Times, Marx’s Das Kapital, the papers of Francis Galton, etc., they might realise this is not a new problem.
    Ethicists like Colin Gavaghan and Mercedes-Benz are completely wrong if they think automated vehicles (AVs) should be able to prioritise the occupants of the vehicle over pedestrians. If you are in a two-tonne lump of metal, it is you and your passengers who must take the ‘hit’. Sorry, but that is the way it is, and if anyone does not understand that, they should not be making decisions about AVs.

    This is a completely false issue but a major problem for AI, because AVs must be programmed to stay on the road. They do not have the ‘option’ to crash into a tree, because they are closed-domain systems, i.e. they cannot correctly identify objects off the road. The problem of AV ethics is treated as a trolley problem because AVs, like all AI systems, are closed-domain systems, i.e. they are effectively moving on rails. Again, we should perhaps not be too surprised how easily ethicists and others have allowed their thinking to be railroaded down this ethical dead end by the AI advocates. But let us get it straight: the AV control problem is being treated as a trolley problem because of the limitations of the AI technology and the unethical belief by Mercedes that their rich customers should be able to kill pedestrians and other road users. Ethicists and philosophers of technology should not allow their thinking to be limited by machine intelligence and capitalism.
