World funds: implement free mitigations

The future is uncertain and far away. That means not only that we don’t know what will happen, but that we don’t reason about it as if it were real: stories about the far future are morality tales, warnings or aspirations, not plausible theories about something that will actually happen.

Some of the best reasoning about the future assumes a specific model, and then goes on to explore the ramifications and consequences of that assumption. Assuming that property rights will be strictly respected in the future can lead to worrying consequences if artificial intelligence (AI) or uploads (AIs modelled on real human brains) are possible. These scenarios lead to stupidly huge economic growth combined with simultaneous obsolescence of humans as workers – unbelievable wealth for (some of) the investing class and penury for the rest.

This may sound implausible, but the interesting thing about it is that there are free mitigation strategies that could be implemented right now.

Any company working on developing general AIs or uploads could set up a world fund. I’m envisaging this as something like a stock-holding that represents maybe 1% of the company, and whose ownership is evenly divided among every single living human being. We can imagine the world fund only being triggered when the company’s capitalisation reaches a trillion dollars or some similarly absurd number; this makes setting up the world fund essentially free from the company’s perspective. And such a world fund would solve the problem of mass absolute poverty (though not inequality) inside the property-rights-plus-AI model.
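
To make the numbers concrete, here is a minimal sketch (in Python, with entirely illustrative figures) of how the per-person value of such a fund scales with the company’s capitalisation. The 1% stake and the trillion-dollar trigger come from the proposal above; the population figure and the larger capitalisations are assumptions added for illustration only.

```python
# Minimal sketch (hypothetical numbers): per-person value of a world fund
# that holds a fixed fraction of one company, evaluated at several
# capitalisations. The 1% share and the $1 trillion trigger come from the
# post; the population figure and the larger capitalisations are
# illustrative assumptions.

WORLD_FUND_FRACTION = 0.01        # 1% of the company held by the fund
WORLD_POPULATION = 8_000_000_000  # roughly 8 billion people (assumed)
TRIGGER_CAP = 1e12                # fund only takes effect above this capitalisation

def per_person_share(company_cap: float) -> float:
    """Value of each person's stake in the world fund, in dollars."""
    if company_cap < TRIGGER_CAP:
        return 0.0  # fund not yet triggered
    return company_cap * WORLD_FUND_FRACTION / WORLD_POPULATION

for cap in (1e12, 1e14, 1e16):  # $1 trillion, $100 trillion, $10 quadrillion
    print(f"capitalisation ${cap:,.0f}: ~${per_person_share(cap):,.2f} per person")
```

At the trigger point itself each person’s stake is tiny; the claim about absolute poverty rests on the stupidly huge growth the model assumes, under which the same 1% becomes a very large per-capita sum.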

You may feel that the world fund has lots of practical problems that have to be thought through (e.g. what if a company splits or has spin-offs?). It is good that you feel that way. You may feel that it illustrates the absurdity of the underlying model. That is also good. I think world funds should be implemented, as should any free mitigation strategies available in other models of the future. This is because:

  1. They might well work.
  2. They get us in the habit of implementing solutions, rather than just talking about them.
  3. They make our thoughts about the future real and concrete.
  4. Any proposed strategy is vastly improved by going through the steps needed to implement it in practice. Strategies that are real, rather than hypothetical, get a lot of feedback and suggestions for improvements.
  5. And if we find the model plausible but the mitigation implausible (even though the mitigation should be plausible within the model), this exposes a tension in our thinking. Analysing exactly why will improve our models considerably.

6 Comments on this post

  1. "These scenarios lead to stupidly huge economic growth combined with simultaneous obsolescence of humans as workers – unbelievable wealth for (some of) the investing class and penury for the rest."

    This strikes me as absurd. Why couldn't 'the rest' simply go on and create their own new economy, ignoring the extremely wealthy few? Alternatively, if all jobs became obsolete, that would imply that food production was entirely automated. If this were the case, there would be no cost to its production, so people would not need money to obtain it. The same would be true of every other job and every other product. If all jobs were entirely automated/performed by AI, there would be zero cost beyond meeting the needs of the AIs (which wouldn't be very much at all, since it would not make sense to create or program a worker with high demands; that is the very reason why humans would be replaced by AIs).

    And it would *have* to be zero cost if no one could afford anything. The investing class could not charge anything for the production of food or any other product if no one had any way of paying for it. What would even be the *point* of 'unbelievable wealth' in a world where there is no cost to any product?

  2. >Alternatively, if all jobs became obsolete, that would imply that food production was entirely automated. If this was the case, there would be no cost to its production so people would not need money to obtain it. The same would be true with regards to every other job and every other product.

    The cost would not fall to zero – just to some tiny amount, which is still greater than the amount humans can generate with their own skills.

    Or, to put it otherwise, the cost of keeping a human alive is higher than the added value that human skills can generate.

    1. For a product to be sold, it has to be affordable. If humans cannot generate any wealth with their own skills, and therefore have no wealth, then there will be no-one to buy the product. If no-one buys the product, it is not going to generate wealth for its inventor/owner.

      People would just grow their own food, and trade with other people who had a product they wanted, creating their own form of money if necessary. They would still have the material products that had been created up to that point. They could still make their own products using non-AI machinery, surely? They would have to, if they couldn't afford the AI-created products.

      1. Robin Hanson's economic analysis, which Stuart refers to, does not assume people will become so poor that they will not be able to buy anything, just that wage competition will drive wages down enormously. And surely people can choose to opt out of this system just as people can opt out of current economies – it is just that the inconvenience and disadvantage will be even greater.

        1. The only reason it would be inconvenient and disadvantageous to opt out of current economies is that so few people would do so, since most people still benefit from capitalism. If there were a sizeable majority who wanted to drop out in the future, doing so could easily be beneficial compared to staying in. If it were a *clear* benefit to drop out, then many people would.

  3. Generally, people seem to have a limited amount of creativity or imagination to bring to bear on future problems (consider my response in the next post), and this means that quite often they "use up" their imagination on a scenario just seeing a potential problem, rather than on what the inhabitants of the scenario or their ancestors (us) could do to solve it.

    World funds seem to be an interesting insurance strategy. Essentially they are a way of telling society: "look, if this high-risk strategy works we will become insanely wealthy. To show that we are good guys and that you should not interfere with us, we also make these promises". It might simply be a form of corporate social responsibility. Or a way of whitewashing a business model, of course.

    The interesting thing about your proposal is that it doesn't seem to handicap the people pursuing these technologies while they are in the running: a company with a world fund will have neither an advantage nor a disadvantage in reaching brain emulation. It might have a tiny disadvantage in the competition afterwards, but that assumes it gains no benefit from the goodwill of the large group of people (who vote) who stand to benefit.
