
OpenSCAD MCP server: AI with visual feedback

The OpenSCAD MCP server lets AI tools see what they're generating in real time. It closes the feedback loop that makes text-to-CAD actually iterative instead of blind.

Quick answer

The OpenSCAD MCP (Model Context Protocol) server connects AI assistants to OpenSCAD, allowing them to generate code, render previews, and iterate based on visual feedback. This creates a closed-loop text-to-CAD workflow where the AI can see and correct its output, significantly improving results over blind code generation.

The first time I used ChatGPT to generate OpenSCAD code, the workflow was: describe a part, copy the script, paste it into OpenSCAD, hit render, look at the result, go back to ChatGPT, describe what was wrong, get a new script, copy, paste, render, repeat. It worked. It also felt like giving driving directions to someone over the phone while they wore a blindfold. The AI was generating geometry it couldn't see. Every correction required me to be the AI's eyes, translating visual problems back into text: "the hole is on the wrong face," "the pocket is too shallow," "the mounting tabs are inside the enclosure instead of outside."

The OpenSCAD MCP server fixes this problem. It gives the AI eyes.

What MCP actually is

MCP stands for Model Context Protocol. It's a standard, originally developed by Anthropic, that lets AI assistants connect to external tools. Instead of the AI being limited to generating text and hoping you do something with it, MCP lets the AI call functions, read files, execute code, and receive results. Think of it as a way for the AI to use software the same way you do, by issuing commands and seeing what happens.

An OpenSCAD MCP server is a bridge between an AI assistant (Claude, ChatGPT via a compatible client, or any MCP-aware agent) and a local OpenSCAD installation. The server exposes OpenSCAD's capabilities as tools the AI can call: create a new script, modify code, render a preview, export to STL, analyze the geometry. The AI writes OpenSCAD code, tells the server to render it, receives back an image of the result, and decides what to change next. The whole loop happens without you copying and pasting anything.
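To make that concrete, here's a minimal sketch of what such a bridge can look like, written with the FastMCP helper from the official MCP Python SDK. The tool names, arguments, and file handling are illustrative placeholders, not the API of any of the projects below; a real server exposes richer versions of the same idea.

```python
# Minimal sketch of an OpenSCAD MCP bridge. Tool names and details are
# illustrative, not any particular project's real API.
import subprocess
import tempfile
from pathlib import Path

from mcp.server.fastmcp import FastMCP, Image

mcp = FastMCP("openscad-sketch")

@mcp.tool()
def render_preview(code: str) -> Image:
    """Render OpenSCAD code to a PNG preview the model can look at."""
    workdir = Path(tempfile.mkdtemp())
    scad = workdir / "model.scad"
    png = workdir / "preview.png"
    scad.write_text(code)
    # OpenSCAD's CLI renders headlessly; -o picks the output format by extension.
    subprocess.run(
        ["openscad", "-o", str(png), "--imgsize=800,600", str(scad)],
        check=True,
    )
    return Image(data=png.read_bytes(), format="png")

@mcp.tool()
def export_stl(code: str, out_path: str) -> str:
    """Compile OpenSCAD code straight to an STL file."""
    workdir = Path(tempfile.mkdtemp())
    scad = workdir / "model.scad"
    scad.write_text(code)
    subprocess.run(["openscad", "-o", out_path, str(scad)], check=True)
    return out_path

if __name__ == "__main__":
    mcp.run()  # stdio transport by default: everything stays local
```

The whole server is a thin shim over OpenSCAD's command line, which is exactly why this kind of bridge is so easy to build.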

The projects that exist

Several OpenSCAD MCP servers have appeared in the last year, each with a slightly different focus.

quellant/openscad-mcp is the most actively maintained as of early 2026, with about 63 GitHub stars and a v0.2.0 release from February 2026. Built with Python and FastMCP, it supports rendering from multiple perspectives, export to STL, 3MF, AMF, OFF, DXF, and SVG, model management, and geometry analysis. It works with Claude Desktop, Cursor, Windsurf, and VS Code. This is the one I've spent the most time with.

fboldo/openscad-mcp-server is a TypeScript implementation, also from early 2026, available on npm. It focuses on PNG preview rendering and STL export, with a design geared toward iterative agent-driven workflows. Lighter-weight than quellant's version, and the npm packaging makes setup straightforward if you're already in a Node.js environment.

petrijr/openscad-mcp is another Python-based server, released in January 2026, with validation, rendering, batch rendering, templates, and module support. It emphasizes local-first operation using stdio transport, meaning everything runs on your machine with no network calls.

jhacksman/OpenSCAD-MCP-Server is older, from early 2025, and takes a more ambitious approach: AI image generation, multi-view reconstruction, CUDA integration, and remote processing. It has around 139 stars and represents a different philosophy, using the MCP connection as part of a larger pipeline that goes beyond simple code generation and rendering.

All of these require a local OpenSCAD installation. They're bridges, not replacements. OpenSCAD does the actual rendering and geometry computation. The MCP server just translates between the AI's requests and OpenSCAD's command-line interface.

Why visual feedback changes everything

Here's the thing about AI generating CAD: the geometry is spatial. A text description of what's wrong with a 3D model is inherently lossy. When I tell ChatGPT "the hole is in the wrong place," the AI has to guess what I mean by "wrong place." Is it on the wrong face? At the wrong coordinates? The right coordinates but measured from the wrong datum? Rotated incorrectly? All of these produce different fixes, and without seeing the geometry, the AI is essentially guessing which correction to apply.

With an MCP server, the AI renders the model and receives an image. Modern LLMs with vision capabilities can look at that image and understand the geometry. They can see that a hole is on the top face when it should be on the side face. They can see that a pocket is off-center. They can see that two features overlap when they shouldn't. The correction is based on visual evidence, not a text translation of a visual problem.
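The loop itself is simple enough to sketch. Everything below is hypothetical pseudocode: ask_model stands in for whatever LLM client the agent uses, and render_preview is an MCP tool call like the one sketched earlier.

```python
# Hypothetical agent loop: ask_model is a stand-in, not a real API.
def iterate_on_part(description: str, max_rounds: int = 5) -> str:
    code = ask_model(f"Write OpenSCAD code for: {description}")
    for _ in range(max_rounds):
        image = render_preview(code)  # MCP tool call: render, get a PNG back
        critique = ask_model(
            f"Does this render match '{description}'? "
            "Answer OK, or describe what is wrong.",
            images=[image],           # the model inspects its own output
        )
        if critique.strip() == "OK":
            break
        # The fix is grounded in what the model saw, not a human's paraphrase
        code = ask_model(f"Fix this OpenSCAD code.\nProblem: {critique}\n{code}")
    return code
```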

In practice, this roughly doubles the success rate on first-iteration corrections. When I was doing the copy-paste workflow, I'd estimate about half my corrections produced the intended fix. The other half produced a different error, because the AI misinterpreted my text description of the problem. With the MCP workflow, the AI's corrections hit the target more consistently because it can see what it's fixing.

What the workflow feels like

I use quellant/openscad-mcp with Claude in Cursor. The setup took about fifteen minutes: install the Python package, configure the MCP connection in Cursor's settings, point it at my OpenSCAD installation. After that, it just works.

I describe a part in natural language. Claude generates an OpenSCAD script, saves it through the MCP server, and renders a preview. The preview image appears in the conversation. I can see the geometry. Claude can see the geometry. If something is wrong, I say "the mounting tabs should be on the outside of the box, not the inside" and Claude modifies the script, re-renders, and shows me the updated result. The iteration loop is fast, usually under ten seconds per cycle.

The multi-perspective rendering is useful for catching problems that a single-angle preview hides. A part that looks correct from the front might have a feature missing on the back. Rendering from three or four angles gives both me and the AI a complete picture without having to rotate the model manually.
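You can get the same effect outside an MCP session, too: OpenSCAD's CLI takes a --camera argument, so a server (or a plain script) can render a model from several eye positions in one pass. A rough sketch, assuming openscad is on your PATH and model.scad is your script:

```python
# Render the same model from four compass directions. The --camera values use
# OpenSCAD's eye/center form: eye position, then the point the camera looks at.
import subprocess

VIEWS = {
    "front": "0,-100,40,0,0,0",
    "back":  "0,100,40,0,0,0",
    "left":  "-100,0,40,0,0,0",
    "right": "100,0,40,0,0,0",
}

for name, camera in VIEWS.items():
    subprocess.run(
        ["openscad", "-o", f"view_{name}.png",
         f"--camera={camera}", "--imgsize=600,600", "model.scad"],
        check=True,
    )
```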

The export step is also handled through MCP. When the geometry looks right, I tell Claude to export STL and the file appears in my project directory. No menu navigation, no dialog boxes, no forgetting to set the right export resolution. The AI handles the export parameters because it knows what the model contains and can choose appropriate settings.
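Under the hood that export is one more CLI call, and the -D flag lets the server override variables like $fn for a finer final mesh without editing the script. Roughly (file names are placeholders):

```python
# Export to STL with a resolution override. -D sets an OpenSCAD variable
# before compilation, replacing the script's own default for $fn.
import subprocess

subprocess.run(
    ["openscad", "-o", "bracket.stl", "-D", "$fn=64", "bracket.scad"],
    check=True,
)
```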

What it doesn't solve

The MCP server doesn't make OpenSCAD better at things OpenSCAD is bad at. The limitations of OpenSCAD + AI remain: no STEP export, no organic surfaces, no assemblies, limited threading. The AI can see the geometry now, but it still can't generate a freeform surface in a language that doesn't support freeform surfaces.

The visual feedback also has limits. The AI sees a rendered image, not the actual geometry data. It can't measure distances in the rendering. It can't detect that a wall is 1.9mm thick when it should be 2mm by looking at the preview. Dimensional accuracy still requires you to check the code or export and measure in a slicer. The visual feedback catches structural and positional errors; it doesn't catch dimensional errors below the visual threshold.
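A cheap way to close part of that gap is to measure the exported mesh programmatically. For example, with the trimesh package (my choice here, not something any of these servers require; any mesh library works), you can at least verify bounding-box dimensions:

```python
# Spot-check overall dimensions of an exported STL with trimesh
# (pip install trimesh). This catches gross sizing errors the preview cannot.
import trimesh

mesh = trimesh.load("bracket.stl")
x, y, z = mesh.extents  # axis-aligned bounding box, in OpenSCAD units (conventionally mm)
print(f"bounding box: {x:.2f} x {y:.2f} x {z:.2f}")
assert abs(x - 40.0) < 0.1, "width drifted from the intended 40 mm"  # hypothetical target
```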

Complex geometry still confuses the AI, with or without visual feedback. If the model has many overlapping boolean operations, the rendered result might look wrong in ways the AI can't diagnose from the image alone. "Something looks weird about the bottom-left corner" is about the level of precision you get from visual analysis, and that's often not enough to identify a buried boolean error five levels deep in the script.

There's also a practical constraint: the render cycle adds time. Each iteration requires OpenSCAD to render the model and the server to capture the image. For simple parts this takes a second or two. For complex parts with many boolean operations or high $fn values, it can take ten to thirty seconds. That's still faster than the copy-paste workflow, but complex models make the loop feel sluggish.

Setting it up

The quickest path is quellant/openscad-mcp with Claude Desktop or Cursor:

  1. Install OpenSCAD if you don't have it.
  2. Install the MCP server: pip install openscad-mcp or clone the repo.
  3. Add the MCP server configuration to your AI tool's settings. For Cursor, this goes in the MCP configuration file. For Claude Desktop, it goes in the Claude settings JSON (an example follows this list).
  4. Verify the connection by asking the AI to generate and render a simple cube.
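For reference, a Claude Desktop entry in claude_desktop_config.json looks roughly like this. The command and args depend on how you installed the server, so treat them as placeholders and check the project's README:

```json
{
  "mcpServers": {
    "openscad": {
      "command": "python",
      "args": ["-m", "openscad_mcp"]
    }
  }
}
```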

The README for each project has specific setup instructions. The configuration differs slightly between AI clients, but the concept is the same: point the client at the MCP server, point the MCP server at OpenSCAD, and the pipeline connects.

If you prefer a TypeScript setup or are already using npm-based tools, fboldo's server installs with npm install openscad-mcp-server and has a similarly straightforward configuration.

Where this fits in the bigger picture

The MCP approach isn't unique to OpenSCAD. There are MCP servers for FreeCAD, Fusion 360, and other CAD tools. The text-to-CAD open source ecosystem is full of these bridges. What makes the OpenSCAD version particularly effective is that OpenSCAD's interface is already text-based. The MCP server doesn't need to simulate mouse clicks or navigate GUI menus. It writes a text file and calls a command-line renderer. The impedance mismatch between what the AI does naturally (generate text) and what the tool needs (receive text) is essentially zero.

For FreeCAD and Fusion 360 MCP servers, the AI generates Python scripts that manipulate a GUI application through an API. That's a more complex translation, with more things that can go wrong. The OpenSCAD MCP server is simple in architecture because OpenSCAD is simple in interface. That simplicity is, in a roundabout way, OpenSCAD's greatest strength for AI integration.

The text-to-CAD guide covers the full range of tools and approaches. If you're already using OpenSCAD and already using an AI assistant, an MCP server is the obvious next step. It turns a workable-but-clunky copy-paste workflow into something that feels like pair programming with someone who can actually see the screen. The AI is still not a CAD expert. But at least it's no longer a CAD expert working blindfolded.
