HTML-in-Canvas API: Rendering Live DOM Elements as Canvas Textures
The HTML-in-Canvas API is an experimental web platform feature that bridges the HTML DOM and the Canvas element. It allows you to take a live, interactive HTML element and render its current visual state onto a 2D or WebGL canvas, where it can be used as a texture, distorted, animated, or composited into a scene.
The rendered element stays connected to the DOM. When its content updates, the canvas receives a paint event. When a user types into an input nested inside the canvas, that interaction is reflected in the canvas rendering.
Current status
As of this writing, HTML-in-Canvas is experimental and only available behind a flag in Chromium-based browsers. It is not a web standard and should not be used in production. To try it in Chrome Canary, navigate to chrome://flags, search for "HTML-in-Canvas", enable the flag, and relaunch.
The problem it solves
HTML and Canvas have historically been separate tools with complementary strengths. HTML handles structured layouts, text rendering, accessibility, and built-in interactivity. Canvas handles custom graphics, GPU-accelerated performance, and effects like shaders and distortions that CSS cannot achieve.
The traditional workarounds for mixing them are unsatisfying. Overlaying HTML elements on top of a canvas with absolute positioning creates z-index and interaction problems. Recreating UI components using canvas drawing commands produces poor text rendering, no accessibility, and large amounts of boilerplate.
HTML-in-Canvas removes the need for either workaround by letting you define UI with HTML and CSS and then use that element as source data for the canvas.
The layoutsubtree attribute
The layoutsubtree boolean attribute on a <canvas> element changes how its child elements are treated. Without it, children of <canvas> are treated as fallback content and only displayed if the canvas itself is unsupported.
With layoutsubtree:
- Child elements are not painted in normal DOM flow, so they do not appear on screen.
- They are still fully processed in the layout and accessibility tree. Their size and position are calculated, they can receive focus, and they are visible to screen readers.
This is the core mechanism. The browser handles layout and accessibility, but defers painting, allowing canvas code to capture and use the rendered state.
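A minimal markup sketch of this setup, assuming the attribute name described above (the element ids and form contents are placeholders for illustration):

```html
<!-- With layoutsubtree, the form below is laid out and accessible
     but not painted; canvas code can capture its rendered state. -->
<canvas id="scene" layoutsubtree width="800" height="600">
  <form id="login">
    <label>Email <input type="email" name="email"></label>
    <button>Sign in</button>
  </form>
</canvas>
```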
WebGL implementation
In a Three.js 3D scene, the HTML element becomes a WebGL texture that is updated via gl.texElementImage2D().
The final argument to gl.texElementImage2D() is a direct reference to the DOM node. The browser transfers the current visual state of that element, including all its text, styling, and children, to the GPU as a texture. The paint event fires whenever the source element changes, so the texture is only re-uploaded when needed rather than on every frame.
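A minimal sketch of this flow, using the experimental names described above (the layoutsubtree attribute, gl.texElementImage2D(), and the canvas paint event). The element id and the texImage2D-style argument order are assumptions for illustration, not confirmed API details:

```javascript
// Dirty-flag tracker: re-upload the texture only after a paint event,
// not on every frame.
function createUploadTracker() {
  let dirty = true; // start dirty so the first frame uploads
  return {
    markDirty() { dirty = true; },
    shouldUpload() { const d = dirty; dirty = false; return d; },
  };
}

// Browser-only wiring, guarded so the sketch also loads outside a browser.
if (typeof document !== 'undefined') {
  const canvas = document.querySelector('canvas[layoutsubtree]');
  const gl = canvas.getContext('webgl2');
  const panel = canvas.querySelector('#ui-panel'); // hypothetical child element

  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

  const tracker = createUploadTracker();
  canvas.addEventListener('paint', () => tracker.markDirty());

  function render() {
    if (tracker.shouldUpload()) {
      gl.bindTexture(gl.TEXTURE_2D, texture);
      // The final argument is a direct reference to the live DOM node;
      // the other arguments assume a texImage2D-like signature.
      gl.texElementImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA,
                           gl.UNSIGNED_BYTE, panel);
    }
    // ...draw the Three.js scene using the texture...
    requestAnimationFrame(render);
  }
  requestAnimationFrame(render);
}
```

The dirty-flag pattern matches the behavior described above: the texture upload happens only when a paint event signals that the source element actually changed.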
2D canvas implementation
For standard 2D canvases, the equivalent method is ctx.drawElementImage(). The API is simpler because there is no texture binding step.
The onpaint handler fires when the rendered element changes, such as when a user types into a nested input. ctx.drawElementImage() takes the element and an x/y coordinate pair. The result is a live, interactive form element rendered at an arbitrary position on the 2D canvas.
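A minimal sketch of the 2D path, using the experimental names described above (ctx.drawElementImage() and the canvas paint event). The form selector and coordinates are assumptions for illustration:

```javascript
// Clear the canvas and draw the live element at (x, y).
function drawForm(ctx, form, x, y) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.drawElementImage(form, x, y); // the element plus an x/y coordinate pair
}

// Browser-only wiring, guarded so the sketch also loads outside a browser.
if (typeof document !== 'undefined') {
  const canvas = document.querySelector('canvas[layoutsubtree]');
  const ctx = canvas.getContext('2d');
  const form = canvas.querySelector('form'); // a child of the canvas
  // Redraw whenever the source element repaints, e.g. as the user types.
  canvas.addEventListener('paint', () => drawForm(ctx, form, 20, 20));
  drawForm(ctx, form, 20, 20); // initial draw
}
```

Compared with the WebGL path, there is no texture object or binding step: each repaint is a single draw call against the live element.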
Privacy-preserving rendering
Any API that reads pixel data from rendered elements raises fingerprinting concerns. If system-specific rendering details like custom fonts, visited link colors, or OS themes were exposed through canvas pixel data, scripts could use that information to identify users.
The proposal addresses this with a privacy-preserving rendering mode that explicitly excludes: cross-origin content from <iframe> elements, system colors and themes, spelling and grammar markers, visited link appearance, and pending form autofill data. Elements rendered through this API omit these details, preventing fingerprinting while still exposing the structure and content of same-origin elements.
Known limitations
The proposal documents several open issues. There is a one-frame delay in updates when using drawElementImage. Certain CSS properties, including scrollbars within rendered elements, can cause crashes. Performance has not been fully optimized: uploading complex or frequently updated elements to the GPU has a cost, and the development team is actively working on this.
These are the categories of issues that get resolved during an experimental phase, but they mean the current implementation is not suitable for production use.
Final thoughts
HTML-in-Canvas removes a long-standing constraint in web development. The ability to use HTML and CSS for layout and accessibility while treating the result as a canvas-renderable resource opens up use cases that were previously only possible through fragile workarounds or by giving up either accessibility or graphical flexibility.
The experimental status means the API and its ergonomics will change before standardization. For developers exploring it now, the 2D canvas path with drawElementImage is the simplest entry point. The WebGL path via texElementImage2D is more involved but enables the full range of GPU effects on live HTML content.
The specification and open issues are tracked in the HTML-in-Canvas GitHub repository.