9 min read

Chrome's AI Features Are Changing How We Make Things for the Web

Accessibility has always been something most of us intend to get right, but struggle to verify in practice. Traditional linting tools and automated audits catch a fraction of real-world issues, and the rest demands human judgment, specialist knowledge, and a lot of manual testing. Chrome's latest wave of AI-powered features is changing that, and I find it genuinely exciting, both for the people who rely on assistive technology every day and for those of us who make things for the web they navigate.

In this post I'll walk through what's new. I'll start with the browser-level features that improve the day-to-day experience for users with disabilities, then go deeper into the DevTools updates that I think have the most potential to change how we work.

First, the Browser: AI Accessibility Features for Real Users

Before getting into DevTools, I think it's worth pausing on the end-user side of Chrome's AI push. These features matter because they reflect what our users actually experience. Understanding them makes me more empathetic about what I put out into the world.

PDF Text Extraction (OCR): Chrome now uses AI-driven OCR to detect text in scanned PDFs. Documents that were previously just images, completely inaccessible to screen readers, can now be highlighted, searched, and read aloud. For people who rely on screen readers, this is a big deal.

Live Caption with Expressive Captions: Real-time captions for any audio or video content playing in the browser. A recent update goes further: the AI can detect emotional tone and reflect it in the caption output, not just what was said, but how it was said.

AI Image Descriptions: When an image has no alt attribute, Chrome can automatically generate a description for blind or low-vision users. It's a useful fallback, but I'd caution against treating it as a reason to skip writing proper alt text. The fallback is there for gaps, not as a substitute for good authoring.
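To make that authoring point concrete, here's the distinction in markup (filenames and text are illustrative):

```html
<!-- Informative image: author the alt text yourself; the AI fallback
     only kicks in when the attribute is missing entirely. -->
<img src="chart-q3.png" alt="Bar chart: Q3 revenue up 12% year over year">

<!-- Decorative image: an explicit empty alt tells screen readers to
     skip it. Omitting alt instead would invite an AI-generated
     description nobody needs. -->
<img src="divider.png" alt="">
```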

Reading Mode with Read Aloud: Strips away navigation, ads, and visual clutter, leaving just the article content. The Read Aloud feature uses natural-sounding voices across 29 languages, useful for users with cognitive disabilities or reading difficulties, and honestly useful for plenty of other people too.

URL Typo Detection: The address bar uses AI to catch URL typos and suggest the correct previously-visited site. Small feature, real benefit, especially for users with dyslexia.

These features do meaningful work for real users. But for those of us who build and publish things on the web, our leverage is on the authoring side, and that's where DevTools comes in.

DevTools Gets an AI Brain: The AI Assistance Panel

The headline DevTools addition is the AI Assistance panel, a persistent AI chat interface powered by the Gemini family of models, embedded directly inside DevTools.

What makes it actually useful rather than a gimmick is the context it has access to. The panel doesn't just see your prompt. It sees:

  • The live DOM and the full Accessibility Tree of the current page
  • A screenshot of the rendered viewport, giving it visual context
  • Your source files, if you've connected a workspace

That combination lets you ask questions that used to require either a specialist or a lot of painstaking manual digging.

Finally, the Accessibility Tree Becomes Conversational

The accessibility tree has always been available in DevTools, but let's be honest: reading it meaningfully takes practice, and cross-referencing it with your markup to diagnose a specific issue is genuinely tedious. Now that the AI has direct access to it, that changes.

Select any element in the Elements panel, right-click, and choose "Ask AI". You can ask questions in plain language and get answers that are actually grounded in what the tree says about that element:

  • "Why is this button being ignored by screen readers?"
  • "What is the computed accessible name for this element, and how was it derived?"
  • "Does this custom dropdown follow the ARIA design pattern for a listbox?"

Even if you know the accessibility tree well, this is useful. You're no longer manually tracing roles, states, and properties through the tree yourself. You ask, and the AI reasons through it, telling you not just whether an attribute is present but whether it's correct given the full context of that element. That's a meaningful shift in how quickly you can get to an answer.
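That "how was it derived" question is really about the accessible name computation, which follows a precedence order. A small illustration (the ids and labels are made up):

```html
<!-- Accessible name precedence, simplified:
     aria-labelledby beats aria-label, which beats text content. -->
<span id="delete-label">Delete draft</span>
<button aria-labelledby="delete-label" aria-label="Remove">🗑</button>
<!-- Announced as "Delete draft", not "Remove" or the icon glyph. -->
```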

Combining Visual and Semantic Analysis

One of the features I find most compelling is using the camera icon in the AI panel to take a screenshot, and then asking questions that compare what users see with what assistive technology hears.

For example:

  • "Does the visual heading hierarchy on this page match the <h1>–<h6> structure reported to screen readers?"
  • "This icon-only button shows a save icon visually. Is the accessible name sufficient for a screen reader user?"
  • "Is the color contrast on this rendered button sufficient for WCAG AA compliance?"

This bridges a gap that traditional linters simply can't close. They check attributes. They can't reason about whether the design intent aligns with the semantic structure.
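The contrast question, at least, is fully mechanical: it comes down to the WCAG 2.x contrast-ratio formula. Here's a minimal TypeScript sketch of that check (the function names are mine, not a Chrome API):

```typescript
type RGB = [number, number, number]; // 0-255 per channel

// Linearize one sRGB channel per the WCAG relative-luminance definition.
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance: weighted sum of linearized R, G, B.
function luminance([r, g, b]: RGB): number {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio = (L_lighter + 0.05) / (L_darker + 0.05), from 1 to 21.
function contrastRatio(fg: RGB, bg: RGB): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// WCAG AA thresholds: 4.5:1 for normal text, 3:1 for large text.
function passesAA(ratio: number, largeText = false): boolean {
  return ratio >= (largeText ? 3.0 : 4.5);
}
```

Black on white comes out to exactly 21:1, the maximum; a mid-gray on white often lands in the 3-4 range, which is why large-text thresholds exist.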

Reasoning About Complex Patterns

For trickier issues like focus traps, incorrect ARIA roles, or broken keyboard navigation, the AI can do a deeper analysis across multiple elements. Prompts like:

  • "Find accessibility issues in this navigation menu"
  • "How can I debug the focus order of this modal dialog?"
  • "Is the role on this custom component appropriate for its behavior?"

...produce structured, contextual explanations rather than just flagging a missing attribute. This is particularly useful for ARIA widget patterns like tabs, accordions, and comboboxes, where a correct implementation involves a precise combination of roles, properties, and keyboard interaction. Getting that right from memory is hard. Having something reason through it with you is genuinely helpful.
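As a taste of why those widget patterns are easy to get wrong, here's just the focus-movement arithmetic from the tabs pattern, sketched in TypeScript (names are illustrative; a full implementation also needs the roles, aria-selected state, and roving tabindex updates):

```typescript
// Which tab should receive focus after a keypress, per the ARIA tabs
// pattern: arrows move with wraparound, Home/End jump to the edges.
function nextTabIndex(current: number, key: string, count: number): number {
  switch (key) {
    case "ArrowRight": return (current + 1) % count;         // wrap forward
    case "ArrowLeft":  return (current - 1 + count) % count; // wrap backward
    case "Home":       return 0;
    case "End":        return count - 1;
    default:           return current; // other keys don't move focus
  }
}
```

Even this fragment hides a common bug: forgetting the `+ count` before the modulo, which sends ArrowLeft from the first tab to a negative index instead of wrapping to the last one.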

Console Error Explanations

When accessibility-related errors or warnings appear in the Console, you can click "Understand this error". The AI explains the underlying WCAG violation, traces it back to the specific markup responsible, and suggests a concrete fix, often with a code snippet ready to apply.

From Analysis to Fix: The Workspace Integration

Identifying an issue is half the battle. The AI Assistance panel closes the loop with a workspace integration that lets you go from diagnosis to a saved code fix without ever leaving DevTools.

Setting Up a Workspace

  1. Open the Sources panel in DevTools
  2. Click the Workspace (or Filesystem) tab in the left sidebar
  3. Click "Add folder to workspace" and select your project's root directory
  4. When Chrome prompts for permission to access the folder, click Allow
  5. Look for green dots next to filenames in the Sources panel. This confirms the files are mapped correctly to the URLs you're viewing.

If green dots don't appear, right-click a file and use "Map to network resource..." to link it manually.

Applying and Saving Fixes

Once the workspace is connected, the workflow is pretty smooth:

  1. Ask the AI for a fix: "Make this button accessible" or "Add appropriate ARIA attributes to this tab panel"
  2. Review the suggested change, then click "Apply the suggested change" to see it live in the browser
  3. In the AI panel, look for the "Unsaved changes" section, click "Apply to workspace" and then "Save all" to write those changes directly to your local files

No copy-pasting. No switching between your editor and the browser. The fix goes straight to disk.

Going Further: The Chrome DevTools MCP Server

For teams that want to pull browser inspection into AI-assisted workflows beyond the DevTools panel, Google has released the Chrome DevTools MCP (Model Context Protocol) server.

This lets external AI agents such as Claude, Gemini, Cursor, or Copilot connect to a live Chrome browser and use the full DevTools protocol programmatically. Use cases include:

  • Capturing full-page screenshots for visual regression checks
  • Verifying that code changes actually fixed a visual or structural bug
  • Automating accessibility audits as part of a CI-adjacent workflow
  • Running multi-agent workflows where different agents target specific browser tabs in parallel (now supported via pageId routing)

The DevTools MCP also ships with dedicated skills for memory leak detection and general DevTools usage, and the accessibility debugging skill has been refined to produce more robust output by better leveraging Lighthouse under the hood.

You can learn more at github.com/ChromeDevTools/chrome-devtools-mcp and Addy Osmani's write-up on the DevTools MCP.
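For reference, wiring the server into an MCP-capable client typically looks something like this (exact config keys and file location vary by client, so check your tool's documentation):

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```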

Why This Feels Different From Yet Another Linter

I've spent enough time with automated accessibility tools to appreciate what they're good at and where they fall short. Tools like Lighthouse, axe, and WAVE are genuinely valuable, but they cover a limited slice of real-world issues. The rest requires human judgment: understanding intent, evaluating context, and applying WCAG principles to specific design decisions rather than just checking whether an attribute exists.

What the AI Assistance panel does is raise that ceiling in a meaningful way. It won't replace a proper accessibility audit or real user testing, and I wouldn't want it to. But it does raise the floor, especially for those of us working on the web who have some accessibility knowledge but aren't specialists.

A few things it does well that linters can't:

  • Semantic reasoning: Suggesting that a <div> with a click handler should be a <button>, not because of a lint rule, but because of what it does
  • Architectural advice: Flagging that a menu role is wrong for a navigation list where a plain <nav> and <ul> would be more appropriate and better supported
  • Context-aware fixes: Generating correct ARIA markup based on the specific widget pattern in your code, not a generic template

Honestly, the closest analogy I have is having a senior colleague look over your shoulder and reason through problems with you, rather than a linter throwing flags and walking away.
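The <div>-with-a-click-handler case from that list is worth seeing side by side (the class and handler names here are illustrative):

```html
<!-- Before: not focusable, no role, no keyboard activation;
     a screen reader announces it as plain text, if at all. -->
<div class="save-btn" onclick="saveDraft()">Save</div>

<!-- After: focusability, Enter/Space activation, and the
     "button" role all come for free from the native element. -->
<button type="button" class="save-btn" onclick="saveDraft()">Save</button>
```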

Getting Started

If you want to try this today:

  1. Make sure you're on a recent version of Chrome
  2. Open DevTools (F12, or Cmd+Option+I on Mac)
  3. Navigate to Settings → Experiments and enable the AI assistance features if they're not already on
  4. Open the AI Assistance panel (the sparkle icon or the dedicated tab, depending on your Chrome version)
  5. Optionally connect a workspace via the Sources panel for full save-to-disk capability

My suggested first prompt: right-click any interactive element on a page you're working on, select "Ask AI", and ask "What accessibility issues does this element have, and how would a screen reader announce it?" The answer will tell you a lot about what the tool can do, and in my experience it'll probably surface at least one thing worth fixing.