The advancement of generative AI (GenAI) promises to transform our world. Because we are only beginning to realize its vast potential, it is natural that many of the most pressing questions concern what the technology can do and how organizations can best put it to use. But we must not lose track of an equally important question: Who is building GenAI, and for whom are they building it?
Today, a few companies are driving the greatest advances, developing proprietary, closed large language models (LLMs) to power consumer-facing commercial applications and the next generation of AI-powered assistants and embodied AI. Such advances will redefine how we think about human augmentation and automation.
These major players are creating locked-down systems whose inner workings are closed to public scrutiny, placing their broader societal impact beyond independent review. This risks squandering the legacy of the internet, which arose not to secure proprietary advantages for private companies but as a scientific and governmental research and communication tool that, once opened to the world, became a new frontier of creativity and innovation.
Innovation thrives in an ecosystem of open scientific collaboration, peer review, and the sharing and transfer of knowledge, made safer through open access, independent audits, and risk mitigation. The development of leading-edge closed GenAI foundation models should concern not just computer scientists, policy experts, and AI researchers, but also the public at large, governments, regulators, and companies: essentially, anyone who wants the future to be open for safety, innovation, and fair competition.