Using ChatGPT to write OpenSCAD code

ChatGPT can write OpenSCAD code that actually compiles most of the time. Here's how to use it, what to watch out for, and where it gets weirdly creative with geometry.

Quick answer

ChatGPT can generate valid OpenSCAD code from natural language prompts, producing parametric 3D models. Best practices: describe geometry with dimensions, ask for modules and parameters, verify the code compiles in OpenSCAD, and iterate on errors. Works well for simple parts; struggles with complex boolean operations and threading.

A few months ago I needed a simple sensor bracket for a project, the kind of thing with two mounting tabs, a pocket for the sensor body, and a slot for the cable. I could have modeled it in Fusion 360 in ten minutes. Instead, because I was already in a ChatGPT window answering a client email, I typed: "Write an OpenSCAD script for a sensor bracket, 40mm wide, 25mm tall, 3mm thick, with a 15mm x 10mm rectangular pocket centered on the face, two 3.5mm mounting holes 5mm from each end, and a 4mm wide slot from the pocket to the bottom edge for a cable."

ChatGPT gave me a 30-line script. I pasted it into OpenSCAD, hit F5, and the preview showed something recognizably bracket-shaped. The mounting holes were in the right place. The pocket was centered. The cable slot connected to the pocket and ran to the bottom edge. The wall around the pocket was a little thin, and the slot was 3mm wide instead of 4mm, but I changed two variables and had a printable part.
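For flavor, here's a hypothetical reconstruction of what that script looked like. This is not ChatGPT's actual output, just a sketch of the same shape with my own variable names, using the dimensions from the prompt:

```openscad
width      = 40;
height     = 25;
thickness  = 3;
pocket_w   = 15;
pocket_h   = 10;
hole_d     = 3.5;
hole_inset = 5;
slot_w     = 4;
eps        = 0.1;   // overcut so boolean faces aren't coplanar

$fn = 50;

difference() {
    cube([width, height, thickness]);

    // sensor pocket, centered on the face, cut all the way through
    translate([(width - pocket_w) / 2, (height - pocket_h) / 2, -eps])
        cube([pocket_w, pocket_h, thickness + 2 * eps]);

    // cable slot from the pocket to the bottom edge
    translate([(width - slot_w) / 2, -eps, -eps])
        cube([slot_w, (height - pocket_h) / 2 + 2 * eps, thickness + 2 * eps]);

    // mounting holes, 5mm from each end, vertically centered
    for (x = [hole_inset, width - hole_inset])
        translate([x, height / 2, -eps])
            cylinder(d = hole_d, h = thickness + 2 * eps);
}
```

The whole part is one `difference()` over a base cube, which is typical of what ChatGPT produces for prompts at this level of complexity.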

That was easier than it should have been. Here's what I've learned since about making it work consistently.

Why ChatGPT and OpenSCAD are a good match

The pairing works because OpenSCAD's language is small, well-documented, and heavily represented in ChatGPT's training data. OpenSCAD scripts show up in hundreds of blog posts, forum threads, Thingiverse descriptions, and tutorials dating back over a decade. ChatGPT has seen a lot of .scad files. It knows the syntax, the standard primitives, the boolean operations, and most of the common patterns.

Compare this to asking ChatGPT to write FreeCAD Python macros. FreeCAD's scripting API is large, inconsistent across workbenches, and documented unevenly. ChatGPT generates FreeCAD code that looks plausible but fails on execution because it invents method names, uses deprecated API patterns, or forgets a recompute() call. OpenSCAD's language is constrained enough that ChatGPT stays within the valid syntax almost every time.

The other advantage is that OpenSCAD scripts are self-contained. No imports, no dependencies, no environment setup. You paste the code, it either renders or it doesn't. There's no debugging a missing library path or a version mismatch. The feedback loop is immediate: code goes in, geometry comes out, you see the result in under a second for simple parts.

How to prompt for good results

The single most important thing is dimensions. Every number you leave out is a number ChatGPT invents, and its sense of proportion is unreliable. I've asked for "a small box" and gotten back a 200mm cube. I've asked for "a bracket" and received something the size of a dinner plate. Always include overall dimensions, wall thickness, hole diameters, feature positions, and spacing.

Use millimeters. ChatGPT handles metric better than imperial for OpenSCAD, probably because most OpenSCAD examples online use millimeters. If you're working in inches, convert before prompting. "25.4mm" will produce more consistent results than "1 inch."

Ask for parametric code explicitly. Say "use variables for all dimensions" or "put parameters at the top of the script." ChatGPT will often generate parametric code anyway, but asking for it ensures the output has named variables you can adjust instead of magic numbers scattered through the geometry.
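The difference shows up in the structure of the output. A parametric version of even a trivial part looks something like this (a minimal sketch with made-up names, not any model's actual output):

```openscad
// All dimensions as named variables at the top
plate_w = 60;
plate_h = 40;
plate_t = 4;
hole_d  = 4.2;

difference() {
    cube([plate_w, plate_h, plate_t]);
    // center hole; overcut 0.1mm each side for a clean boolean
    translate([plate_w / 2, plate_h / 2, -0.1])
        cylinder(d = hole_d, h = plate_t + 0.2, $fn = 50);
}
```

When the hole turns out to be the wrong size, you change `hole_d` once instead of hunting through the geometry for a magic number.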

Request modules when the part has repeated features. "Create a module for the mounting tab and use it twice" produces cleaner code than letting ChatGPT repeat the geometry inline. Modules also make it easier to modify the design later, because changing the module definition updates every instance.
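As a sketch of what that looks like (hypothetical module and parameter names):

```openscad
module mounting_tab(w = 12, h = 8, t = 3, hole_d = 3.5) {
    difference() {
        cube([w, h, t]);
        translate([w / 2, h / 2, -0.1])
            cylinder(d = hole_d, h = t + 0.2, $fn = 50);
    }
}

// one definition, two instances
mounting_tab();
translate([30, 0, 0]) mounting_tab();
```

Change the module body once and both tabs update, which is exactly the property you want when iterating with an AI.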

Name features using CAD vocabulary. "Counterbore" produces better geometry than "a hole with a wider hole on top." "Fillet" works better than "rounded edge." "Chamfer," "pocket," "boss," "slot," "keyway" all seem to trigger more accurate code generation. The prompt engineering guide goes deeper on this, but the principle is the same: precise vocabulary produces precise geometry.

A prompt that works

Here's a prompt I use regularly for simple enclosures:

"Write an OpenSCAD script for a rectangular electronics enclosure. Outer dimensions 80mm x 50mm x 30mm. Wall thickness 2mm. Open top. Four 4.2mm mounting holes in the corners of the open face, 5mm from each outer edge. Two M3 standoffs inside the box, 8mm tall, 6mm outer diameter, 3mm inner diameter, positioned 20mm from each short wall, centered on the long axis. Use variables for all key dimensions."

ChatGPT consistently generates a working script for this. The enclosure is a difference() of two cubes. The holes are cylinder() calls subtracted from the walls. The standoffs are cylinder() calls added inside the box. The variables are declared at the top. It compiles, it renders, the proportions are correct.
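For reference, a minimal sketch of the core of such a script (the shell and the standoffs; variable names are mine, and the corner mounting holes are left out because their placement is exactly the part that tends to need hand-tuning, as described below):

```openscad
outer       = [80, 50, 30];   // outer dimensions
wall        = 2;
standoff_h  = 8;
standoff_od = 6;
standoff_id = 3;
standoff_x  = 20;             // distance from each short wall
eps         = 0.1;

$fn = 50;

// open-top shell: outer cube minus inner cavity, cavity extended past the top
difference() {
    cube(outer);
    translate([wall, wall, wall])
        cube([outer[0] - 2 * wall, outer[1] - 2 * wall,
              outer[2] - wall + eps]);
}

// M3 standoffs on the floor, centered on the long axis
for (x = [standoff_x, outer[0] - standoff_x])
    translate([x, outer[1] / 2, wall])
        difference() {
            cylinder(d = standoff_od, h = standoff_h);
            translate([0, 0, -eps])
                cylinder(d = standoff_id, h = standoff_h + 2 * eps);
        }
```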

The details sometimes need adjustment. ChatGPT occasionally positions features from the wrong reference point (from the center when I meant from an edge, or vice versa). It sometimes forgets $fn on cylinders, giving you octagonal holes instead of round ones. It might put the standoffs on the wrong axis if the prompt is ambiguous about which wall is "short" and which is "long." These are all quick fixes, a matter of changing a number or adding a $fn=50 parameter.

Where ChatGPT gets creative in bad ways

Boolean operations are the most common failure. ChatGPT will generate a difference() where two faces are coplanar, a situation that makes OpenSCAD's renderer produce warnings or broken geometry. The classic case: subtracting a cube from a larger cube where the subtracted cube's face sits exactly on the larger cube's face. OpenSCAD handles this ambiguously. The fix is to extend the subtracted shape slightly past the surface, and ChatGPT doesn't always remember to do this. I've started adding "extend all cuts 0.1mm past the surface for clean booleans" to my prompts, which helps.
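The coplanar-face problem and its fix look like this in miniature:

```openscad
size = 20;
eps  = 0.1;

// Bad: the subtracted cube's top face lands exactly on the outer
// cube's top face at z = 20, so the result is ambiguous and the
// preview z-fights.
// difference() {
//     cube(size);
//     translate([5, 5, 10]) cube(10);
// }

// Good: extend the cut slightly past the surface
difference() {
    cube(size);
    translate([5, 5, 10]) cube([10, 10, 10 + eps]);
}
```

A 0.1mm overcut is invisible in the finished part but removes the ambiguity entirely.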

Nesting is another issue. ChatGPT sometimes generates deeply nested boolean operations that are hard to read and occasionally produce unexpected results. A difference() inside a union() inside another difference() can behave in ways that aren't intuitive, and if the AI gets the nesting order wrong, features appear or disappear in confusing ways. For complex parts, I ask ChatGPT to comment each section and use named modules for logical groupings. This produces longer code but fewer geometry surprises.

Circular geometry can go wrong when ChatGPT forgets the $fn parameter. OpenSCAD's default fragment settings approximate small circles and cylinders with very few segments, so a "round hole" might render as a hexagonal hole. I include "use $fn=50 for all circular features" in every prompt now. It's a small thing but it saves a debugging step every time.
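Side by side, the difference is obvious in the preview:

```openscad
// Coarse: at this diameter the default settings leave visible facets
cylinder(d = 5, h = 10);

// Smooth: 50 fragments is plenty for printed holes
translate([10, 0, 0])
    cylinder(d = 5, h = 10, $fn = 50);
```

Setting `$fn = 50;` once at the top of the script applies it globally, which is what the prompt instruction above produces.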

ChatGPT also has a tendency to generate geometry that's structurally valid but not printable. Thin walls that would collapse during printing, overhangs without support surfaces, bridges that are too long. The AI doesn't think about manufacturing. It thinks about geometry. If printability matters, you need to specify minimum wall thickness, maximum overhang angles, and bridge lengths in the prompt, or just check the result yourself with a slicer.

The iteration loop

The first script is rarely the final script. My typical workflow:

  1. Write a detailed prompt with all dimensions and features.
  2. Paste the script into OpenSCAD, render it.
  3. Identify what's wrong: wrong position, wrong size, missing feature, broken boolean.
  4. If it's a quick fix, edit the script directly.
  5. If the structure is wrong, go back to ChatGPT with a correction: "The cable slot should run along the Y axis, not the X axis" or "The mounting holes should be on the vertical face, not the horizontal face."

Two or three iterations usually gets me to a usable part for simple geometry. If the part is complex enough to need more than three rounds, I'm better off modeling it from scratch. The time investment tips over somewhere around the fourth revision, especially if the structural layout keeps coming out wrong.

ChatGPT maintains context within a conversation, so corrections build on previous output. "Make the walls thicker" works after the initial generation. "Add a snap-fit lip around the top edge" works if the enclosure is already generated. This conversational refinement is the real strength of the workflow: you're iterating on a design through natural language, with the AI maintaining the full script context.

ChatGPT vs Claude vs local models

I've tested this workflow with ChatGPT (GPT-4 and later), Claude, and a few local models running through Ollama.

ChatGPT produces the most consistently valid OpenSCAD across the widest range of complexity. It gets the syntax right almost every time, handles parametric variables well, and generates readable code. GPT-4 is better than GPT-3.5 for anything beyond simple primitives.

Claude generates clean, well-commented code and sometimes writes more elegant solutions than ChatGPT, using hull() operations and mathematical positioning that are genuinely clever. Claude also tends to generate code with better structure, separating concerns into modules more naturally. It occasionally produces scripts that are more complex than necessary, but the code quality is generally high.

Local models (I've tested with Llama 3 and DeepSeek Coder) work for simple parts but struggle with complex boolean operations and positioning. They're fine for generating a parametric box or a simple bracket. They're unreliable for anything with more than three or four features interacting. If you're running a local model, stick to simple geometry and be prepared to fix more errors.

For the OpenSCAD + AI workflow in general, any of these models work. The choice depends on whether you care about privacy (local models), cost (local models again), code elegance (Claude), or widest compatibility (ChatGPT).

What not to attempt

Don't ask ChatGPT to generate gears with correct involute tooth profiles. It'll produce something that looks like a gear from across the room but won't mesh with anything. Use the BOSL2 library's gear modules for that, or generate the gear profile in a dedicated tool and import the DXF.

Don't ask for thread geometry. ChatGPT will generate a cylinder and call it threaded, or produce a helical sweep that's cosmetically thread-shaped but dimensionally meaningless. For 3D printing, use BOSL2's threading modules. For manufacturing, threads belong in your machining setup, not your model.
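If I recall the BOSL2 API correctly, real thread geometry is a two-line affair, which is another reason not to make the AI reinvent it (this assumes BOSL2 is installed in OpenSCAD's library path):

```openscad
include <BOSL2/std.scad>
include <BOSL2/threading.scad>

// An M10x1.5 rod, 20mm long: actual thread geometry,
// not a cosmetic helix
threaded_rod(d = 10, l = 20, pitch = 1.5, $fn = 50);
```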

Don't ask for multi-part assemblies in one prompt. Generate each part separately. ChatGPT loses track of which geometry belongs to which body when you describe multiple interacting parts, and the resulting script is usually a tangled mess of boolean operations that produces a single fused solid instead of separate parts.

Don't trust the output without measuring it. Paste the script into OpenSCAD, render it, use the measurement tools or export and check in a slicer. ChatGPT gets dimensions wrong often enough that blind printing is a bad idea. I've had holes come out 0.5mm too small, walls too thin by a full millimeter, and features positioned from the wrong datum. Always verify.

The honest take

ChatGPT writing OpenSCAD code is the most practical text-to-CAD workflow I use regularly. Not the most impressive. Not the most powerful. The most practical. Because the input is text, the output is text, the edit cycle is fast, and I don't need to install anything beyond OpenSCAD, which I already have.

It works best for parts I could model myself in ten to fifteen minutes. For those parts, ChatGPT gets me to 80% in one minute and I spend five minutes fixing the rest. The time savings are real but modest. The bigger value is creative: I can iterate on design ideas faster by describing variations than by modeling each one. "Make it 5mm taller. Add a third mounting hole. Widen the pocket by 2mm." Each variation takes seconds to describe and the AI produces updated code instantly.

If you're already comfortable with OpenSCAD, adding ChatGPT to the workflow is trivial and immediately useful. If you've never used OpenSCAD, the combination is a genuinely good way to learn the language, because you can read what the AI generates and understand how OpenSCAD primitives compose into real parts. Either way, it's worth an afternoon of experimentation. The MCP server approach takes this further by closing the visual feedback loop, but even the basic copy-paste workflow produces results I'd actually use.
