
Text-to-CAD dimensional accuracy: I measured the output

I prompted five parts with specific dimensions, exported the output, and measured everything in Fusion 360. The results were educational, in the way that watching someone parallel park into a fire hydrant is educational.

Quick answer

Text-to-CAD dimensional accuracy varies by tool and geometry complexity. In testing, Zoo.dev hit specified dimensions within 5% for simple prismatic parts. Curved geometry and holes were less accurate (10-20% deviation). No tool consistently produced the exact dimensions requested. Complex parts with multiple interacting dimensions had the worst accuracy.

Text-to-CAD dimensional accuracy is inconsistent, tool-dependent, and worse on complex geometry than on simple prismatic shapes. I know this because I measured it. Five test parts, three tools, a STEP file importer, and the inspection tools in Fusion 360 that I normally use for checking supplier models. It took me most of a Saturday afternoon, which I could have spent doing something useful, but the numbers were worth it because nobody else seems to be publishing actual measurements.

The whole exercise started because I was tired of the vague claims. "Close enough for prototyping." "Usually within a few percent." "Good for concept work." These phrases float around text-to-CAD conversations without anyone attaching numbers. I wanted numbers. Specific, reproducible, measured-in-CAD numbers that I could point at when someone asks whether the output dimensions can be trusted. The answer, backed by data from my Saturday of clicking "Measure" repeatedly: sometimes yes, often no, and never with the confidence you'd need for anything beyond rough prototyping.

The methodology

I designed five test parts on paper with specific, unambiguous dimensions. No organic shapes. No vague descriptions. Every feature had an exact dimension specified in the prompt. The point was to test the AI's ability to produce what was asked for, not to test how creatively it interprets vague prompts.

The five test parts:

Part 1: A flat rectangular plate, 80mm by 50mm by 5mm, with four 4.2mm holes on a 60mm by 30mm bolt pattern, centered on the plate. Six critical dimensions: length, width, thickness, hole diameter, and the two bolt pattern spacings.

Part 2: A cylindrical standoff, 25mm outer diameter, 12mm inner bore, 30mm tall, with a 2mm chamfer on both ends of the outer edge. Three critical dimensions plus the chamfer at each end: five checks in all.

Part 3: An L-bracket with 3mm wall thickness, 40mm by 40mm legs, with a 15mm radius fillet at the inside corner and two 5mm holes per leg on a 25mm spacing. Nine critical dimensions including the fillet.

Part 4: A U-channel, 60mm long, 30mm wide, 20mm tall, 2mm wall thickness, open top. Four critical dimensions, but the wall thickness uniformity is what I was really watching.

Part 5: A flanged cylinder. A 20mm diameter, 30mm tall cylinder sitting centered on a 50mm by 50mm by 4mm square flange, with a 10mm through-bore. Six critical dimensions and a concentricity relationship between the bore and the outer cylinder.

I ran each prompt through three tools: Zoo.dev, AdamCAD, and CADScribe. Exported the STEP files. Imported every one into Fusion 360. Measured every critical dimension using the Inspect tool. Wrote everything down in a spreadsheet that's still open on my desktop because I haven't had the heart to close it.
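The spreadsheet arithmetic is nothing fancy, but for the record, here's the shape of it. A minimal sketch (the helper is mine, not part of any tool; the sample numbers are the Part 1 bolt pattern result from Zoo.dev, reported below):

```python
def deviation(spec_mm: float, measured_mm: float) -> tuple[float, float]:
    """Signed error in mm and unsigned percent error against the specified value."""
    err_mm = measured_mm - spec_mm
    return err_mm, abs(err_mm) / spec_mm * 100

# Part 1, Zoo.dev: the 60mm bolt pattern spacing came back as 59.4mm
err_mm, pct = deviation(60.0, 59.4)
print(f"{err_mm:+.1f} mm ({pct:.1f}%)")  # -0.6 mm (1.0%)
```

Every cell in the spreadsheet is one call to something like this: spec in one column, measurement in the next, error in the third.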

Zoo.dev results

Zoo.dev performed best on the simple parts and worst on the complex ones, which was roughly what I expected given my earlier accuracy testing.

Part 1 (rectangular plate): Length 80.0mm, width 50.0mm, thickness 5.0mm. Good. Holes measured 4.2mm. Also good. Bolt pattern: 59.4mm by 29.7mm instead of 60mm by 30mm. Close, but a 0.6mm error on a bolt pattern means the holes are shifted. On an M4 clearance hole you'd probably still get the bolts through, but it's not what I asked for. Overall: the gross dimensions are right, the feature positions have sub-millimeter drift.

Part 2 (cylindrical standoff): OD 25.0mm, bore 12.0mm, height 30.0mm. The chamfers were 1.8mm instead of 2mm. Honestly, pretty good. Cylinders are Zoo.dev's comfort zone. Simple rotational geometry with clear dimensions.

Part 3 (L-bracket): This is where things got interesting. Wall thickness measured 3.0mm on one leg and 2.8mm on the other. Leg lengths were 40.0mm and 39.4mm. The fillet radius at the inside corner was 13mm instead of 15mm, a 13% error on a feature that would affect stress distribution if this were a structural part. Hole diameters were 4.9mm and 5.1mm instead of 5.0mm. Hole spacing was 24.2mm on one leg and 25.3mm on the other. The symmetry I described in the prompt didn't carry through to the output. Both legs should have been identical. They weren't.

Part 4 (U-channel): External dimensions were close: 60.1mm long, 30.0mm wide, 20.0mm tall. But the wall thickness varied from 1.7mm to 2.3mm around the channel. I specified 2mm uniform. The AI got the outer shell right and let the inner cavity drift. This is the same pattern I noticed in my earlier enclosure testing: external dimensions are more reliable than internal features.

Part 5 (flanged cylinder): Flange was 49.6mm by 49.7mm by 4.0mm. Cylinder OD 19.5mm, height 29mm, bore 10.0mm. The cylinder was 0.5mm too small in diameter and 1mm too short. The bore was centered on the cylinder, but the cylinder was not perfectly centered on the flange: offset by about 0.4mm from center. Concentricity, a geometric relationship, was approximate rather than exact.

AdamCAD results

AdamCAD generates OpenSCAD code from prompts, so the output is parametric and in theory should match the specified dimensions exactly, since the code explicitly sets dimension values.

Part 1 (rectangular plate): All box dimensions correct at 80, 50, and 5mm. Holes at 4.2mm. Bolt pattern spacing correct at 60mm by 30mm. AdamCAD's code-generation approach means the top-level dimensions are literally typed into the code. Where it gets less reliable is in features that require geometric calculation rather than direct dimension input.

Part 2 (cylindrical standoff): Dimensions correct. Chamfers were generated as 45-degree cuts at 2mm. Accurate. Simple OpenSCAD geometry.

Part 3 (L-bracket): This is where code generation got tricky. The wall thickness was 3mm as specified, and the leg lengths were correct at 40mm. But the fillet was implemented as a cylinder subtracted from the corner rather than a proper fillet: the radius measured 15mm as specified, but the cut didn't blend smoothly with the leg surfaces, leaving a visible seam in the STEP export. Holes were 5.0mm and spacing was 25.0mm. Dimensionally accurate but geometrically rough.

Part 4 (U-channel): All dimensions correct because it's a simple Boolean operation in OpenSCAD. Wall thickness uniform at 2mm. This is AdamCAD's strength: straightforward parametric geometry where every dimension is a variable in the code.

Part 5 (flanged cylinder): Dimensions correct. Concentricity exact because the code uses the same center coordinate for both features. AdamCAD's code-based approach eliminates the positional drift that Zoo.dev showed.

The trade-off: AdamCAD is dimensionally more accurate for parts that can be described with OpenSCAD primitives and Booleans, but the geometry quality (surface smoothness, fillet quality, edge treatment) is rougher. You get the right numbers in a less refined package.

CADScribe results

CADScribe generates Fusion 360 commands, so the output should have native feature history and good geometry quality.

Part 1 (rectangular plate): Length 80.0mm, width 50.0mm, thickness 5.0mm. Holes 4.2mm. Bolt pattern 60.0mm by 30.0mm. Fully accurate. The Fusion 360 sketch constraints held the pattern precisely.

Part 2 (cylindrical standoff): All dimensions correct. Chamfers at 2mm. Clean native Fusion geometry.

Part 3 (L-bracket): Wall thickness 3.0mm. Legs 40.0mm. Fillet radius 15.0mm, properly blended. Holes 5.0mm, spacing 25.0mm. The Fusion 360 native features handle this geometry cleanly. The sketch constraints and feature operations produce exact results.

Part 4 (U-channel): Correct dimensions. Uniform 2mm wall. The Shell feature in Fusion 360 produced clean, consistent walls.

Part 5 (flanged cylinder): All dimensions correct. Concentricity exact. The Fusion 360 construction geometry (center points, axes) ensures features align precisely.

CADScribe's results were the most accurate across all five parts. The catch: CADScribe's accuracy depends on the AI correctly translating the prompt into Fusion 360 operations. When it works, the dimensions are exact because Fusion 360's geometric kernel enforces them. When the translation fails, which happens with more complex prompts, you get an error rather than wrong geometry. It fails loudly rather than silently, and a loud failure is preferable to dimensional drift you only catch by measuring.

Where dimensions break down

Across all three tools, I noticed consistent patterns about what kinds of features are most and least accurate.

Most accurate: overall bounding dimensions (length, width, height), hole diameters specified explicitly, features that map directly to a single CAD operation (a hole, an extrusion, a chamfer with a single dimension).

Least accurate: features that reference other features (bolt patterns, hole positions relative to edges), fillet radii (often approximate rather than exact), wall thicknesses on parts generated with Boolean operations rather than Shell features, and concentricity or symmetry relationships between features.

The pattern makes sense if you think about how these tools work. A dimension that maps to a single number in a CAD operation (extrude 5mm, hole diameter 4.2mm) tends to be accurate because the AI just needs to put the right number in the right field. A dimension that requires calculating a position relative to other features (hole center is 10mm from an edge that's at a certain position based on the overall part width) introduces compounding errors. Each reference in the chain can drift slightly, and the errors accumulate.
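A toy simulation makes the accumulation visible. Suppose each reference in a chain drifts independently by up to ±0.3mm (an invented figure for illustration, not something I measured): the average final position error grows with the length of the chain.

```python
import random

def mean_position_error(n_refs: int, drift_mm: float = 0.3, trials: int = 20_000) -> float:
    """Average |final position error| when n_refs chained references
    each drift uniformly within +/- drift_mm."""
    total = 0.0
    for _ in range(trials):
        total += abs(sum(random.uniform(-drift_mm, drift_mm) for _ in range(n_refs)))
    return total / trials

random.seed(1)
for n in (1, 2, 4):
    print(f"{n} reference(s): ~{mean_position_error(n):.2f} mm mean error")
```

Independent drifts partially cancel, so the error grows roughly with the square root of the chain length rather than linearly. But it grows, which is the point: every extra feature-to-feature reference is another roll of the dice.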

For a deeper look at this accuracy problem, my earlier testing on Zoo.dev covers the pattern in more detail. This round of testing confirmed the same trends across multiple tools.

The "close enough" threshold

Whether text-to-CAD accuracy is acceptable depends entirely on what you're doing with the output.

For concept visualization and design reviews: a part that's within 5% on all dimensions is fine. You're evaluating form and proportion, not building to spec. All three tools are acceptable for this use case.

For FDM prototyping of non-functional parts: within 1-2mm is usually workable. Zoo.dev and CADScribe are reliable here for simple parts. Complex parts need verification.

For FDM prototyping of functional parts (test fits, assembly checks): you need sub-millimeter accuracy on critical interfaces. Only CADScribe consistently delivered this, and only on prompts it successfully translated. Zoo.dev is hit-or-miss. AdamCAD is dimensionally precise but geometrically rough.

For CNC machining or any production process: no tool is consistently accurate enough. Always verify every dimension in your CAD tool before sending anything to manufacturing. The limitations of AI-generated geometry for real work go beyond dimensional accuracy, but dimensional accuracy is the first thing a machinist will notice.
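If you want the thresholds above as a checklist, they reduce to something like this (my own rule-of-thumb encoding of my results, not any standard):

```python
def acceptable_uses(max_pct_error: float, max_abs_error_mm: float) -> list[str]:
    """Map a part's worst-case deviations to the use cases discussed above."""
    uses = []
    if max_pct_error <= 5.0:
        uses.append("concept visualization / design review")
    if max_abs_error_mm <= 2.0:
        uses.append("non-functional FDM prototype")
    if max_abs_error_mm < 1.0:
        uses.append("functional FDM prototype (verify critical interfaces first)")
    # CNC or any production process: never without full verification,
    # so it is deliberately absent from this list.
    return uses

# Part 1 from Zoo.dev: worst error was 0.6mm, or 1.0%
print(acceptable_uses(1.0, 0.6))
```

The empty-list case is the important one: a part that clears none of the thresholds is a picture, not a part.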

What this means for prompt engineering

The accuracy data suggests some practical prompt-writing strategies.

State every dimension explicitly. Don't say "small hole." Say "5mm diameter hole." The AI performs best when dimensions are numbers, not adjectives.

Keep feature count low. Each additional feature is another opportunity for positional drift. A plate with two holes is more accurately generated than a plate with eight holes in a complex pattern.

Specify relationships explicitly. Instead of "holes near the corners," say "holes centered 10mm from each edge." Instead of "a fillet at the corner," say "a 5mm radius fillet at the inside corner." The more specific the prompt, the less the AI has to guess, and guessing is where the errors come from.

Verify before using. I know this sounds obvious, but the number of people who generate a part and send it to a printer without measuring a single dimension is higher than it should be. Open the STEP file in your CAD tool. Use the measure tool. Check the critical dimensions. It takes two minutes and it's the difference between a usable prototype and a confusing waste of filament.
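Once the measurements are typed in, the check itself can be a ten-line script. A sketch, assuming you've recorded spec and measured values by hand (no STEP parsing here; the feature names and the blanket tolerance are made up for illustration):

```python
SPEC_MM     = {"length": 80.0, "width": 50.0, "hole_dia": 4.2, "pattern_x": 60.0}
MEASURED_MM = {"length": 80.0, "width": 50.0, "hole_dia": 4.2, "pattern_x": 59.4}
TOL_MM = 0.2  # blanket tolerance; real parts deserve per-feature values

# Keep only the dimensions whose error exceeds the tolerance
out_of_spec = {name: MEASURED_MM[name]
               for name, spec in SPEC_MM.items()
               if abs(MEASURED_MM[name] - spec) > TOL_MM}

for name, measured in out_of_spec.items():
    print(f"OUT OF SPEC: {name} wanted {SPEC_MM[name]} mm, got {measured} mm")
```

The manual measuring is still the slow part; the comparison is free once the numbers exist.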

The text-to-CAD guide has more on prompt strategies. But no amount of prompt engineering eliminates the need to verify. The accuracy is good enough to be useful and inconsistent enough to require checking. That's where the technology is. My Saturday of measuring confirmed it.

The honest verdict

Text-to-CAD dimensional accuracy is better than I expected on simple parts and worse than the demos imply on anything with feature relationships. Zoo.dev gets you in the ballpark. AdamCAD gets you the exact numbers in rough geometry. CADScribe gets you the exact numbers in clean geometry, when it works. No tool is reliable enough to skip the verification step.

My spreadsheet has fifty-odd measurements in it now, and the story they tell is consistent: text-to-CAD is a first draft, not a specification. Treat the output accordingly. Measure what matters. Fix what's wrong. And keep the best text-to-CAD tools in perspective: they're impressive for what they are, and insufficient for what a lot of people want them to be. My Saturday afternoon confirmed both halves of that sentence.
