ServiceNow Research

Societal Alignment Frameworks Can Improve LLM Alignment

Abstract

Recent progress in large language models (LLMs) has focused on producing responses that meet human expectations and align with shared values, a process termed alignment. However, aligning LLMs remains challenging due to the inherent disconnect between the complexity of human values and the narrow nature of the technological approaches designed to address them. Current alignment methods often lead to misspecified objectives, reflecting the broader issue of incomplete contracts: the impracticality of specifying a contract between a model developer and the model that accounts for every scenario in LLM alignment. In this paper, we argue that improving LLM alignment requires incorporating insights from societal alignment frameworks, including social, economic, and contractual alignment, and we discuss potential solutions drawn from these domains. Given the role of uncertainty in contract formalization within societal alignment frameworks, we investigate how it manifests in LLM alignment. We end our discussion by offering an alternative view on LLM alignment, framing the underspecified nature of its objectives as an opportunity rather than a flaw whose specification must be perfected. Beyond technical improvements in LLM alignment, we discuss the need for participatory alignment interface designs.

Publication
Workshop at the International Conference on Learning Representations (ICLR)
Jason Stanley
Head of AI Research Deployment

Head of AI Research Deployment, based in Montreal, QC, Canada.

Nicolas Chapados
VP of Research

VP of Research, AI Research Management, based in Montreal, QC, Canada.

Denis Therien
VP of Research Partnerships

VP of Research Partnerships, AI Research Partnerships & Ecosystem, based in Montreal, QC, Canada.

Siva Reddy
Research Scientist

Research Scientist, AI Research Partnerships & Ecosystem, based in Montreal, QC, Canada.