As generative AI becomes more deeply embedded in the way we design, build, and experience technology, the industry is facing an important question: Can we trust AI models to produce accessible content by default? For too long, accessibility was treated as a final check rather than a foundational requirement. With AI accelerating content creation at unprecedented scale, that mindset is no longer sustainable.
That’s why ServiceNow, in collaboration with the GAAD Foundation, introduced the AI Model Accessibility Checker (AIMAC), a first-of-its-kind, open-source tool designed to evaluate how effectively large language models generate accessible HTML without special instructions. Check out the AIMAC leaderboard now.

AIMAC works by sending neutral, nonspecific prompts (the kinds of requests everyday users might give) to an AI model, then analyzing the returned HTML with the axe-core accessibility engine. From there, it generates a weighted accessibility score based on the severity of any issues it finds; the lower the score, the more accessible the output. This gives organizations, developers, and researchers a transparent, consistent way to benchmark LLMs against real accessibility standards such as WCAG 2.2 AA.
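To make the scoring idea concrete, here is a minimal sketch of severity-weighted scoring over axe-core-style violation results. The impact weights and the exact aggregation are assumptions for illustration; AIMAC’s actual formula may differ.

```python
# Sketch of a severity-weighted accessibility score computed from
# axe-core-style results. The weights below are hypothetical, not
# AIMAC's actual values.

# axe-core labels each violation with an "impact" level.
IMPACT_WEIGHTS = {
    "critical": 10,
    "serious": 5,
    "moderate": 2,
    "minor": 1,
}

def accessibility_score(violations):
    """Sum severity weights over every affected node.

    `violations` mirrors the shape of axe-core's results.violations:
    a list of dicts, each with an "impact" level and the DOM "nodes"
    that triggered the rule. Lower scores mean more accessible output.
    """
    score = 0
    for v in violations:
        weight = IMPACT_WEIGHTS.get(v.get("impact"), 1)
        score += weight * len(v.get("nodes", []))
    return score

# Example: one serious violation hitting two nodes, one minor hitting one.
sample = [
    {"id": "color-contrast", "impact": "serious", "nodes": [{}, {}]},
    {"id": "region", "impact": "minor", "nodes": [{}]},
]
print(accessibility_score(sample))  # 5*2 + 1*1 = 11
```

Because the score sums per-node penalties, a model that emits one systemic mistake across many elements is penalized more than one with a single isolated slip, which matches the goal of rewarding accessible output by default.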
The tool also supports customizable prompts, allowing teams to test everything from layout structures to semantic patterns in a wide range of scenarios. But AIMAC is more than just a diagnostic tool—it’s a statement about the future we want to build with AI. As a community, we have the responsibility to ensure that innovation doesn’t widen accessibility gaps, but instead closes them. By embedding people with disabilities, accessibility practitioners, and engineering teams into the process from the very beginning, AIMAC models what responsible AI development should look like. It moves the industry from reactive fixes to proactive accountability, showing that inclusive outputs aren’t an “extra step” but a measurable, achievable standard.
The early response to AIMAC has been a powerful proof point. By launching it on Global Accessibility Awareness Day, we underscored its mission: raising the bar for accessible AI across the entire ecosystem. The upcoming open-source release on GitHub will give organizations of all sizes the ability to adopt, adapt, and build upon this framework. And as the AIMAC Accessibility Leaderboard continues to grow, it creates healthy pressure for AI providers to improve—not just in accuracy and speed, but in equity. AIMAC is a reminder that when accessibility is built into the foundation of AI, everyone benefits.
