Examining Responsibility and Deliberation in AI Impact Statements and Ethics Reviews

Abstract

The artificial intelligence research community continues to grapple with the ethics of its work by encouraging researchers to discuss the potential positive and negative consequences of their research. Neural Information Processing Systems (NeurIPS), a top-tier conference for machine learning and artificial intelligence research, first required a statement of broader impact in 2020. In 2021, NeurIPS updated its call for papers such that 1) the impact statement focused on negative societal impacts and was encouraged rather than required, 2) a paper checklist and ethics guidelines were provided for authors, and 3) papers underwent ethics review and could be rejected on ethical grounds. In light of these changes, we contribute a qualitative analysis of 231 impact statements and all publicly available ethics reviews. We describe themes arising around the ways in which authors express agency (or lack thereof) in identifying or mitigating negative consequences and assign responsibility for mitigating negative societal impacts. We also characterize ethics reviews in terms of the types of issues raised by ethics reviewers (falling into categories of policy-oriented and non-policy-oriented), the recommendations ethics reviewers make to authors (e.g., adding or removing content), and the interaction between authors, ethics reviewers, and original reviewers (e.g., consistency between issues flagged by original reviewers and those discussed by ethics reviewers). Finally, based on our analysis, we make recommendations for how authors can be further supported in engaging with the ethical implications of their work when writing impact statements.

Publication
AAAI/ACM Conference on AI, Ethics, and Society (AIES)