AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks

ABSTRACT:
Despite interest in communicating ethical problems and social contexts within the undergraduate curriculum to advance Public Interest Technology (PIT) goals, interventions at the graduate level remain largely unexplored. This may be due to the conflicting ways in which distinct Artificial Intelligence (AI) research tracks conceive of their interface with social contexts. In this paper, we track the historical emergence of sociotechnical inquiry in three distinct subfields of AI research: AI Safety, Fair Machine Learning (Fair ML), and Human-in-the-Loop (HIL) Autonomy. We show that for each subfield, perceptions of PIT stem from the particular dangers faced by past integration of technical systems within a normative social order. We further interrogate how these histories dictate the response of each subfield to conceptual traps, as defined in the Science and Technology Studies literature. Finally, through a comparative analysis of these currently siloed fields, we present a roadmap for a unified approach to sociotechnical graduate pedagogy in AI.

What you need to know:

    1. The risks posed by new AI systems emerge at problem-specific scales and require different tools (and a new morphology) to assess.
    2. New programs, such as an AI clinic, are needed to train engineers who better understand the social context of their work (much like clinical training in fields such as law and medicine).

    Citation

    @inproceedings{mckane2020aidev,
      title={AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks},
      author={Andrus, McKane and Dean, Sarah and Gilbert, Thomas and Lambert, Nathan and Zick, Tom},
      booktitle={IEEE International Symposium on Technology and Society (ISTAS)},
      year={2020}
    }