
ARTICLE | April 8, 2024 | VOICES

The future needs open innovation in AI

Open and transparent development speeds up innovation, builds trust in the AI supply chain, and democratizes human-centered AI

By Sean Hughes, Workflow contributor


The advancement of generative AI (GenAI) promises to transform our world. Since we are only at the beginning of realizing its vast potential, it’s natural that many of the most pressing questions concern what it can do and how organizations can best harness GenAI use cases. But we must not lose track of an equally important question: Who is building GenAI, and for whom are they building it?

Today, a handful of companies are driving the greatest advancements, developing proprietary, closed large language models (LLMs) to power consumer-facing commercial applications and the next generation of AI-powered assistants and embodied AI. Such advances will redefine how we think about human augmentation and automation.

These major players are creating locked-down systems whose inner workings are closed to public review of their broader societal impact. This risks squandering the lesson of the internet, which arose not to secure proprietary advantages for private companies but as a scientific and governmental research and communication tool that, once opened to the world, became a new frontier of creativity and innovation.

Innovation thrives in an ecosystem of open scientific collaboration, peer review, sharing, and transfer of knowledge, made safer through open access, independent audits, and mitigation of risks. The development of leading-edge closed GenAI foundation models should be of concern not just to computer scientists, policy experts, and AI researchers, but to the public at large, governments, regulators, and companies—essentially anyone who wants the future to be open for safety, innovation, and fair competition.


Even for large technology companies, developing the next generation of advanced LLMs is difficult without vast resources and highly specialized expertise. For that reason, ServiceNow has embraced a hybrid approach, cultivating open innovation with alliances across academia, industry, independent researchers, and nonprofits while accelerating ServiceNow’s in-house GenAI roadmap. The company launched the BigCode project in September 2022 with Hugging Face to develop open-access code LLMs that can be fine-tuned to complete domain-specific knowledge worker tasks such as text summarization, code generation, and even workflow generation. ServiceNow’s fine-tuned LLMs from this project are already being used in commercially available products and reportedly have increased developer productivity and speed of innovation by 52%. NVIDIA recently joined BigCode to help train the next-generation models, dubbed StarCoder2. As of February 2024, there were more than 1,200 members of the community from more than 60 countries.
 

This open innovation paradigm is diametrically opposed to the closed approach, which hoards knowledge and keeps the AI supply chain secret. BigCode, by contrast, announces its plans to create new models, gives open-source developers the opportunity to opt out of having their code used to train models, and releases state-of-the-art open-access foundation models to the AI community under an OpenRAIL-M license, which allows royalty-free commercial use backed by enforceable restrictions aligned to responsible AI principles. With the BigCode project, all of this happens through open governance, with discussion forums that anybody can join to ask questions, collaborate, and contribute. The developer recipes, the code, everything is released back to the community through a mixture of open-source and responsible-use licenses. This in turn creates a flywheel effect for open innovation and responsible development.

In December 2023, ServiceNow became a founding member of the AI Alliance, a consortium founded by IBM and Meta with 50 universities, companies, and governmental and scientific agencies to champion transparent and open innovation in AI. The AI Alliance reported more than 20 additional members as of February. As a founding member, we aim to build on the best practices of BigCode and further support open scientific collaboration that advances safe and responsible AI rooted in open innovation.


Closed AI companies argue that open systems invite bad actors or misuse, yet their systems are susceptible to the same risks as open systems, such as bias and malware generation. It all comes down to transparency, auditability, and the guardrails built to address these risks, and in that regard, open systems win, hands down.

Do LLMs have bias? Yes, they do. Do LLMs have risk? Yes. So, what do we do about that? We provide open access and protect good-faith research, development, and governance to improve upon the state of the art in responsible AI. By sharing responsibility with the community and our customers for deploying human-centered AI, and by having open-governance policies and processes in place, we can operate with confidence, because when issues come up (and they will) there is a risk management framework ready. At each step of the generative AI pipeline, from user prompt to output, we employ purposeful governance, responsible development, and human-in-the-loop best practices to manage risks.

Open innovation is more than simply showing the inner workings of AI. It’s about an open ecosystem that enables co-creation and transparency. It enables auditability and the sharing of foundational datasets, tools, techniques, and technologies that build trust and safety and power innovations that fuel societal advancements and new economies.

If you value progress, then you should support open innovation in AI. It’s in everyone’s interest to ensure a future where all companies can participate in driving the development and shaping the characteristics of AI and the economic progress that it will bring.



Related articles

Yoshua Bengio on GenAI governance and organizational change

Yoshua Bengio won the 2018 A.M. Turing Award for his pioneering research on deep learning. In this exclusive Workflow interview, one of AI’s true OGs explains how organizations can put GenAI to work in a way that’s transparent, responsible, and auditable.

How the European Union plans to govern AI

The European Union’s pending Artificial Intelligence Act has already inspired other countries to follow suit.

AI will unlock developer productivity

The future of programming will pair human coders with AI assistants, reducing repetitive, boring tasks and maximizing creativity and problem-solving.

AI’s impact on the tech skills of tomorrow

AI is fundamentally altering our jobs and the tech skills we need to perform them. New research by ServiceNow and Pearson examines how AI will shape the evolution of workplace expertise over the next five years.

Author

Sean Hughes

Sean Hughes is ServiceNow’s AI ecosystem director. He also co-leads the BigCode Legal, Ethics, Governance Working Group; the AI Alliance Foundation Models Working Group; and the AI Alliance Community core team. Hughes previously held leadership roles at HP, Hewlett Packard Enterprise, and Intel.
