April 8, 2024 | 5 min read
The future needs open innovation in AI
Open and transparent development speeds up innovation, builds trust in the AI supply chain, and democratizes human-centered AI
AI Thought Leadership
Sean Hughes, AI Ecosystem Director, ServiceNow

The advancement of generative AI (GenAI) promises to transform our world. Since we are only at the beginning of realizing its vast potential, it’s only natural that many of the most pressing questions concern what it can do and how organizations can best harness GenAI use cases. But we must not lose track of an equally important question: Who is building GenAI, and for whom are they building it?

Today, a handful of companies are driving the greatest advances, developing proprietary, closed large language models (LLMs) to power consumer-facing commercial applications and the next generation of AI-powered assistants and embodied AI. Such advances will redefine how we think about human augmentation and automation.

These major players are creating locked-down systems whose inner workings are shielded from public review, leaving their broader societal impact impossible to scrutinize. This risks squandering the legacy of the internet, which arose not to secure proprietary advantages for private companies but as a scientific and governmental research and communication tool, one that, once opened to the world, became a new frontier of creativity and innovation.

Innovation thrives in an ecosystem of open scientific collaboration, peer review, sharing, and transfer of knowledge, made safer through open access, independent audits, and mitigation of risks. The development of leading-edge closed GenAI foundation models should be of concern not just to computer scientists, policy experts, and AI researchers, but to the public at large, governments, regulators, and companies—essentially anyone who wants the future to be open for safety, innovation, and fair competition.

Open innovation for AI

Even for large technology companies, developing the next generation of advanced LLMs is difficult without vast resources and highly specialized expertise.

For that reason, ServiceNow has embraced a hybrid approach, cultivating open innovation with alliances across academia, industry, independent researchers, and nonprofits while accelerating ServiceNow’s in-house GenAI roadmap.

The company launched the BigCode project in September 2022 with Hugging Face to develop open-access code LLMs that can be fine-tuned to complete domain-specific knowledge worker tasks such as text summarization, code generation, and even workflow generation.


ServiceNow’s fine-tuned LLMs from this project are already being used in commercially available products and have reportedly increased developer productivity and speed of innovation by 52%.

NVIDIA recently joined BigCode to help train the next-generation models, dubbed StarCoder2. As of February 2024, there were more than 1,200 members of the community from more than 60 countries.

This open innovation paradigm is diametrically opposed to the closed system, which hoards knowledge and keeps the AI supply chain secret. BigCode, by contrast, announces its plans to create new models, gives open-source developers the opportunity to opt out of having their code used to train models, and releases state-of-the-art open-access foundation models to the AI community under an OpenRAIL-M license, which permits royalty-free commercial use subject to enforceable restrictions aligned with responsible AI principles.

With the BigCode project, all of this happens through open governance, with discussion forums that anybody can join to ask questions, collaborate, and contribute to the project.

The developer recipes, the code—everything is released back to the community through a mixture of open-source and responsible-use licenses. This in turn creates a flywheel effect for open innovation and responsible development.

In December 2023, ServiceNow became a founding member of the AI Alliance, a consortium founded by IBM and Meta with 50 universities, companies, and governmental and scientific agencies to champion transparent and open innovation of AI.

The AI Alliance reported more than 20 additional members as of February 2024. As a founding member, we aim to build on the best practices of BigCode and further support open scientific collaboration that advances safe and responsible AI rooted in open innovation.


Open innovation leads to safer AI

Closed AI companies argue that open systems invite bad actors or misuse, yet their systems are susceptible to the same risks as open systems, such as bias and malware generation. It all comes down to transparency, auditability, and the subsequent guardrails built to address these risks, and in that regard, open systems win, hands down.

Do LLMs have bias? Yes, they do. Do LLMs have risk? Yes. So, what do we do about that? We provide open access and protect good-faith research, development, and governance to improve upon the state of the art in responsible AI.

By sharing responsibility with the community and our customers for deploying human-centered AI, and by keeping open-governance policies and processes in place, we can operate with confidence: when issues come up, and they will, a risk management framework is ready to address them.

At each step of the generative AI pipeline—from user prompt to output—we employ purposeful governance, responsible development, and human-in-the-loop best practices to manage risks.

Open innovation is more than simply showing the inner workings of AI. It’s about an open ecosystem that enables co-creation and transparency. It enables auditability and the sharing of foundational datasets, tools, techniques, and technologies that build trust and safety and power innovations that fuel societal advancements and new economies.

If you value progress, then you should support open innovation in AI. It’s in everyone’s interest to ensure a future where all companies can participate in driving the development and shaping the characteristics of AI and the economic progress that it will bring.

Find out how ServiceNow can help you put responsible AI to work.
