- cross-posted to:
- [email protected]
There is one class of AI risk that is generally knowable in advance. These are risks stemming from misalignment between a company’s economic incentives to profit from its proprietary AI model in a particular way and society’s interests in how the AI model should be monetised and deployed. The surest way to ignore such misalignment is by focusing exclusively on technical questions about AI model capabilities, divorced from the socio-economic environment in which these models will operate and be designed for profit.
How about the risk of dumbass managers overestimating AI's ability to save on labor costs and firing so many workers that the business can't function?