Should self-driving cars be programmed in a way that always protects ‘the driver’? Who is responsible if an AI makes a mistake? Will AI used in policing be less racially biased than police officers? Should a human being always take the final decision? Will we become too reliant on AIs and lose important skills? Many interesting questions answered in this video interview with Dr Katrien Devolder.
We should perhaps not be too surprised that AI is only repeating the issues that were raised during the 19th century over the use of probability and statistics. If the advocates of AI (including some so-called ethicists) were to read Dickens’s Hard Times, Marx’s Das Kapital, the papers of Francis Galton, etc., they might realise that this is not a new problem.
Ethicists like Colin Gavaghan, and companies like Mercedes-Benz, are completely wrong if they think automated vehicles (AVs) should be able to prioritise the occupants of the vehicle over pedestrians. If you are in a two-tonne lump of metal, it is you and your passengers who must take the ‘hit’. Sorry, but that is the way it is, and anyone who does not understand that should not be making decisions about AVs. This is a completely false issue but a major problem for AI, because AVs must be programmed to stay on the road. They do not have the ‘option’ to crash into a tree because they are closed-domain systems, i.e. they cannot correctly identify objects off the road. The problem of AV ethics is treated as a trolley problem because AVs, like all AI systems, are closed-domain systems, i.e. they are effectively moving on rails. Again, we should perhaps not be too surprised at how easily ethicists and others have allowed their thinking to be railroaded down this ethical dead end by the AI advocates. But let us get it straight: the AV control problem is being treated as a trolley problem because of the limitations of the AI technology, and because of the unethical belief at Mercedes that their rich customers should be able to kill pedestrians and other road users. Ethicists and philosophers of technology should not allow their thinking to be limited by machine intelligence and capitalism.
This is a great example of why I have come to read AI as meaning Algorithmic Irresponsibility rather than Artificial Intelligence. Simply stated, any AI-controlled vehicle that would ride over an infant in its path rather than veer towards the nearest tree (or off a cliff, for that matter) is just a gadget designed by irresponsible so-called engineers who put financial gain ahead of accountability, claiming that an “autonomous” machine needs to answer to no one. Let us ask the head of Mercedes-Benz what he would prefer: to kill his infant child, or to have his car hit a tree? If he answers in any way that prioritizes the machine over the human, that will clearly show whose side he is on.
As I say, part of the problem is that AVs are programmed to stay on the road and do not have the ‘option’ to hit the tree. They might hit the tree because they fail to stay on the road, but deciding what is off the road, and whether they ‘should’ leave it, is not something they can do. The software engineers are trying to hide the limits of the system behind a fake ethical problem. Unfortunately, ethicists and the media have taken the bait, and we are now debating an artificial ethical problem.