One of the maddening things about reading #BrianChristian’s #TheAlignmentProblem is that it assumes creating systems without firm use cases (“general” AI) is both inevitable and desirable. It never questions who is doing this work, why, and for whose benefit.
What’s needed is a book that makes a positive case for narrowly scoped systems that multiply human potential while preserving human values: a kind of “Understanding Computers and Cognition” for today’s technology.