
Is text-to-CAD accurate enough for real parts?

I measured text-to-CAD output with calipers (after printing) and compared it to what I asked for. The answer is: sometimes close, sometimes not, and never with the confidence you'd want for production.

Quick answer

Text-to-CAD accuracy varies by tool and geometry complexity. Simple dimensions can be within 1-2mm of the prompt specification, but tolerances, hole positions, and complex features are unreliable. No current text-to-CAD tool produces output accurate enough for production manufacturing without manual verification and editing.

I printed five parts from text-to-CAD output last month. Same tool, same printer, same material. Before slicing, I measured each STEP file in Fusion 360. After printing, I measured each part with calipers on my desk, the cheap digital ones I keep next to a coffee mug that's survived more near-misses than it deserves. The dimensions I asked for, the dimensions in the CAD file, and the dimensions on the physical part were three different numbers. Every single time.

That's not unusual in manufacturing. Printers have their own accuracy issues. But the interesting part was the gap between what I asked for and what the AI generated, before any printing happened. That gap is what this post is about. Not printer calibration. Not slicer settings. The accuracy of the geometry that the AI produces from a text prompt, and whether you can trust it.

The short answer: you can't. Not for production. Not without checking every dimension yourself.

What "accurate" means in this context

Accuracy in CAD has layers, and people mix them up constantly. There's the nominal dimension: is the 50mm feature actually 50mm in the model? There's the tolerance: how much variation is acceptable? There's the feature relationship: are two holes really 30mm apart, center to center? There's the geometric accuracy: is a circular hole actually circular, or slightly oval? And there's the manufacturing accuracy: does the physical part match the digital model?

Text-to-CAD only touches the first layer, and it doesn't always nail it. The tools produce nominal geometry with no tolerance data, no GD&T, and no concept of which dimensions are critical and which ones are free. The AI doesn't know that a bearing bore needs to be within 0.01mm and a cosmetic radius can be off by a full millimeter and nobody cares. It treats every dimension with the same indifference.
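To make the gap concrete, here's a minimal sketch of what a dimension with design intent carries versus the bare nominal a text-to-CAD tool outputs. The class, field names, and tolerance values are hypothetical illustrations, not anything a current tool produces:

```python
from dataclasses import dataclass

# Hypothetical illustration: a dimension with intent attached, versus the
# bare nominal number that is all a text-to-CAD STEP file actually carries.
@dataclass
class ToleratedDim:
    nominal: float  # mm, the only thing text-to-CAD gives you
    plus: float     # allowable upper deviation, mm
    minus: float    # allowable lower deviation, mm
    critical: bool  # does this dimension locate or fit anything?

    def in_spec(self, measured: float) -> bool:
        return (self.nominal - self.minus) <= measured <= (self.nominal + self.plus)

# A bearing bore and a cosmetic radius are very different dimensions,
# even when both nominals are "correct" in the model.
bore = ToleratedDim(nominal=10.0, plus=0.01, minus=0.0, critical=True)
radius = ToleratedDim(nominal=3.0, plus=1.0, minus=1.0, critical=False)

print(bore.in_spec(10.005))  # inside the tight band
print(radius.in_spec(3.8))   # 0.8mm off and nobody cares
print(bore.in_spec(10.05))   # too far off for a press-fit bearing
```

All of the second, third, and fourth fields are exactly what's missing from AI output: the model has no idea which band is tight and which is loose.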

The test I ran

I used Zoo.dev for this because it outputs STEP files with real B-Rep geometry, which means I can import the files and measure actual faces and edges in Fusion 360 rather than trying to measure triangulated mesh data, which is like measuring a brick wall by counting individual bricks.

I wrote five prompts with specific, unambiguous dimensions:

A rectangular plate, 80mm by 50mm by 5mm, with four 4.2mm holes on a 60mm by 30mm bolt pattern centered on the plate.

A cylindrical standoff, 20mm outer diameter, 10mm inner bore, 15mm tall.

An L-bracket, 3mm thick, 40mm legs, with two 5mm holes per leg spaced 25mm apart, 10mm from the edge.

A simple box enclosure, 100mm by 60mm by 30mm, 2mm wall thickness, open top.

A flanged plate, 100mm by 70mm by 4mm, with a 30mm circular boss centered on one face, 15mm tall.

I ran each prompt once and measured the result. No cherry-picking, no re-rolling for a better result.

What I found

The rectangular plate was close. Width came in at 79.6mm instead of 80mm. Length was 50.1mm. Thickness was 5.0mm. The bolt pattern was the problem: one hole was shifted about 0.8mm from where it should have been. If you're using clearance holes with M4 bolts, you'd probably still get the bolts through. If you're using dowel pins for alignment, forget it.

The cylindrical standoff was the best result. Outer diameter 20.0mm, bore 10.0mm, height 15.0mm. Simple geometry with simple dimensions. This is where text-to-CAD currently lives comfortably.

The L-bracket was mixed. Thickness was 3.0mm. Leg lengths were 39.5mm and 40.2mm, so the two legs didn't even match each other. Hole spacing measured 24.3mm instead of 25mm, which is close but not what I asked for. On a 3D print, you'd never notice. On a machined part bolting to something with fixed holes, you'd notice immediately.

The box enclosure had the right external dimensions within half a millimeter, but the wall thickness varied between 1.8mm and 2.3mm around the perimeter. A consistent 2mm wall was part of the prompt. The AI got the outer box right and let the inner cavity wander a bit. This is the kind of error that's invisible in a viewport rotation and obvious the moment you section the model.

The flanged plate was the worst result. The plate was close to spec, but the circular boss came in at 28mm diameter instead of 30mm, and the height was 14mm instead of 15mm. Both off by enough to matter if the boss is supposed to locate into a mating hole or clear a specific component.
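Tabulating the numbers above makes the pattern easier to see. The values are the measurements reported in this post; the script is just a convenience for computing each part's worst model-vs-prompt deviation:

```python
# Model-vs-prompt deviations from the five tests above (all values in mm).
# Each entry: (nominal as prompted, measured in the STEP file).
results = {
    "plate":    [(80.0, 79.6), (50.0, 50.1), (5.0, 5.0)],
    "standoff": [(20.0, 20.0), (10.0, 10.0), (15.0, 15.0)],
    "bracket":  [(40.0, 39.5), (40.0, 40.2), (25.0, 24.3)],
    "box":      [(2.0, 1.8), (2.0, 2.3)],  # wall thickness extremes
    "boss":     [(30.0, 28.0), (15.0, 14.0)],
}

for part, dims in results.items():
    worst = max(abs(measured - nominal) for nominal, measured in dims)
    print(f"{part:8s} worst deviation: {worst:.1f} mm")
```

The standoff comes out perfect, the boss is off by 2mm, and everything else lands somewhere in between.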

The pattern I noticed

Simple, symmetric geometry with few features tends to be accurate. A cylinder with one bore is an easy problem. The AI nails it.

As you add features, especially features that reference other features, accuracy drifts. A bolt pattern requires holes positioned relative to edges and relative to each other. That's a constraint problem, and text-to-CAD tools don't really reason about constraints. They predict positions based on what similar parts in the training data looked like, not based on the relationships you described. The difference between "holes on a 60mm by 30mm pattern" and "holes approximately where you'd expect them based on similar parts" is small in language and large in manufacturing.
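Checking a bolt pattern against the prompt is a mechanical exercise once you've measured the hole centers in your CAD tool. A minimal sketch, assuming the 60mm by 30mm pattern from the first test with the plate origin at its center; the measured coordinates and the 0.2mm positional tolerance are illustrative values chosen to match the roughly 0.8mm shift I found:

```python
import math

# Nominal hole centers for a 60mm x 30mm bolt pattern centered at the origin.
NOMINAL = [(-30.0, -15.0), (30.0, -15.0), (30.0, 15.0), (-30.0, 15.0)]

# Hypothetical measured centers, consistent with one hole shifted ~0.8mm.
MEASURED = [(-30.0, -15.0), (30.0, -15.0), (30.8, 15.1), (-30.0, 15.0)]

POSITION_TOL = 0.2  # mm radial shift allowed per hole (assumed for this sketch)

for (nx, ny), (mx, my) in zip(NOMINAL, MEASURED):
    shift = math.hypot(mx - nx, my - ny)
    status = "OK" if shift <= POSITION_TOL else "OUT"
    print(f"hole at ({nx:6.1f}, {ny:6.1f}): shifted {shift:.2f} mm -> {status}")
```

The point isn't the script; it's that this check has to happen at all, because the tool never reasoned about the pattern as a pattern.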

Features that require precise relationships (concentric circles, symmetric patterns, features referenced to datums) tend to be less accurate than features that stand alone. Which is unfortunate, because referenced features are exactly the ones that matter most in real assemblies.

How this compares to manual CAD#

In Fusion 360 or SolidWorks, if I dimension a hole at 4.2mm, it's 4.2mm. Not 4.15mm. Not 4.3mm. Exactly 4.2mm. The software does what I tell it, no more, no less. The accuracy of the model is limited only by the precision of the geometric kernel, which for practical purposes is perfect. If the dimension is wrong, it's because I typed the wrong number, which is a different class of problem and at least one I can fix by correcting a single value.

Text-to-CAD introduces a layer of interpretation between what you ask for and what you get. That interpretation layer is sometimes very good and sometimes off by enough to matter. There's no way to predict in advance which outcome you'll get for a given prompt, which means you have to check every time.

For comparison: I don't measure every feature of a model I built myself in Fusion 360. I trust the software to put things where I told it to. I measure every feature of a text-to-CAD model before I'd do anything with it. That trust gap is the real accuracy problem, not any individual dimension being wrong.

The caliper test (after printing)

I printed the plate and the L-bracket on an FDM printer. Prusa MK4, PLA, standard settings. Then I measured the prints.

The plate came off the printer at 79.3mm by 49.8mm by 4.9mm. Some of that shrinkage is the printer, not the model. But the hole that was already misplaced in the CAD file was now further off, because printer inaccuracy stacks on top of model inaccuracy. The total error on that hole position was about 1.1mm from where I originally asked for it. On a prototype to test fit, still usable. On a functional part, I'd need to drill the holes out and accept the slop.

The L-bracket was similar. The already-mismatched leg lengths got slightly worse. The hole spacing, already off by 0.7mm in the model, was off by about 1mm in the print. Again, usable for checking the concept. Not usable for assembling with a mating part that has fixed hole positions.

The takeaway: text-to-CAD inaccuracy and manufacturing process inaccuracy compound. If the model is already off by half a millimeter and the printer adds another half millimeter of error in a random direction, you can easily end up with a part that's 1mm or more away from what you intended. That's fine for some applications and completely unacceptable for others.
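Two standard tolerance-stacking heuristics put numbers on this. Worst-case stacking assumes the errors align; root-sum-square (RSS) is the usual statistical estimate when they're independent. The half-millimeter figures echo the example in the paragraph above:

```python
import math

# Error stacking sketch: model error and process error compound.
model_err = 0.5    # mm already baked into the STEP file
process_err = 0.5  # mm the printer adds, in an unpredictable direction

worst_case = model_err + process_err      # errors aligned
rss = math.hypot(model_err, process_err)  # independent-error estimate

print(f"worst case: {worst_case:.2f} mm, RSS estimate: {rss:.2f} mm")
```

Either way you slice it, a part that started half a millimeter off in the model has little error budget left for the printer.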

What this means for different use cases

For prototyping and concept checking: text-to-CAD accuracy is usually good enough. You're testing form, fit, and general proportions, not hitting tolerances. If the bracket is roughly the right size and the holes are roughly in the right place, you can evaluate the concept. Roughly is the operative word, and for prototyping, roughly is often sufficient.

For 3D printing functional parts: it depends on how functional. A cable clip? Fine. A housing that doesn't need to interface with anything precise? Probably fine. A bracket that bolts to a specific component with specific hole spacing? Check the model first, and adjust as needed. The text-to-CAD for 3D printing workflow always includes a measurement step.

For CNC machining: no. Don't send text-to-CAD output to a machine shop without verifying every dimension. A machinist works to the model, and if the model is wrong, the part is wrong, and now you're paying for material, machine time, and a redo. Measure the STEP file. Fix what's off. Add tolerances. Then send it. The manufacturing reality of AI-generated parts is that they need significant human review before they're shop-ready.

For injection molding or sheet metal: not relevant, because text-to-CAD tools don't generate the process-specific features those methods require. Accuracy is a secondary issue when the fundamental limitations are about missing capabilities rather than dimensional errors.

How to work with the accuracy you get

My workflow, which I recommend to anyone using these tools, is simple and non-negotiable:

Generate the part with a specific, detailed prompt. Import the STEP file into your CAD tool. Measure every dimension you care about. Fix what's off. Add tolerances, constraints, and relationships the AI didn't include. Save the corrected version as your actual model. Treat the AI output as a starting sketch, not a finished part.
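The "measure every dimension, fix what's off" step amounts to a short triage table. A sketch of how I'd keep score; the feature names and per-dimension tolerances are illustrative, and the measured values come from the tests earlier in this post:

```python
# Triage for the "measure everything" step: what you asked for, what the
# imported STEP file measures, and how far off you can tolerate it being.
checks = [
    # (feature, nominal mm, measured mm, tolerance mm) -- tolerances assumed
    ("plate width",   80.0, 79.6, 0.2),
    ("hole spacing",  25.0, 24.3, 0.1),
    ("boss diameter", 30.0, 28.0, 0.1),
]

to_fix = [(f, n, m) for f, n, m, t in checks if abs(m - n) > t]
for feature, nominal, measured in to_fix:
    print(f"fix {feature}: model says {measured} mm, prompt said {nominal} mm")
```

Everything the script flags gets corrected in the CAD tool before the file goes anywhere else.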

This adds maybe five to ten minutes per part for simple geometry. On anything complex, you'll spend longer, but on anything complex you'll also be rebuilding most of the model anyway because of the other limitations beyond accuracy.

The people who get burned are the ones who skip the measurement step. They generate a part, export it, and send it downstream assuming the dimensions are what they asked for. Sometimes they are. Sometimes they're not. "Sometimes" is not an engineering specification.

The honest verdict

Text-to-CAD is not accurate enough for production manufacturing. It's close enough for prototyping simple parts. It's inconsistent enough that you should never trust it without verification. And the accuracy gap is narrowing with each generation of these tools, but it's not closed yet and probably won't be for a while.

I'll keep using it for first drafts and quick checks. I'll keep measuring every output before I do anything with it. And I'll keep telling people that the accuracy question isn't really about the numbers. It's about trust. Right now, I trust my Fusion 360 model because I built it. I check my text-to-CAD model because the AI built it. That difference tells you everything about where the technology stands.
