AI CAD for real work: manufacturing, accuracy, and limits
I took AI-generated CAD output and tried to actually make parts from it. CNC, 3D printing, injection molding. Here's what happened, what broke, and where the gap between demo and production still lives.
Quick answer
AI CAD tools can generate geometry that looks correct on screen, but most output still fails basic manufacturing checks: missing tolerances, non-manufacturable features, broken topology, and no DFM awareness. The technology is useful for early concepts and simple parts, not production-ready engineering.
AI-generated CAD output is not ready for production manufacturing in most cases, and anybody who tells you otherwise hasn't tried to actually make the parts. I spent a long week taking text-to-CAD output from three different tools, exporting STEP files, and sending them to a local machine shop and a 3D print service. The machinist called me after two hours. Not to ask a question. To tell me the geometry was, in his words, "theoretically a part." The wall on one side was 0.3 mm thick. A pocket had no tool access. Two holes were positioned where they'd intersect with a fillet that didn't exist in some views. The 3D printer fared slightly better, in the way that a C-minus is slightly better than failing.
That experience shaped everything in this post. I've spent over a decade in CAD, starting in AutoCAD, living in SolidWorks for years, now mostly working in Fusion 360, and I wanted to give text-to-CAD a fair shot in the context where it actually has to perform: making physical things. Not rendering them. Not spinning them in a browser. Making them, out of material, with tools that don't care about your demo.
The gap between demo geometry and real parts
Every text-to-CAD demo I've seen follows the same script. Someone types a prompt. A 3D model appears. The audience makes impressed noises. The demo ends before anyone asks whether the part can be machined, molded, printed, or even dimensioned.
That gap is where real engineering lives, and it's where AI CAD currently falls apart.
The geometry that comes out of tools like Zoo.dev, AdamCAD, or CADAgent looks like a part. It has faces, edges, volumes. It exports to STEP or STL. In a viewport, it passes the squint test. But passing the squint test is not engineering. A bracket that looks like a bracket but has no defined tolerances, no consideration for tool access, no draft angles, and wall thicknesses that change arbitrarily from face to face is not a bracket. It's a sculpture with mechanical ambitions.
I'm not saying this to be cruel. I'm saying it because the accuracy problem is the single biggest obstacle between AI CAD and real work, and most of the conversation around these tools pretends it doesn't exist.
Dimensional accuracy: what the numbers actually look like
When I talk about accuracy, I don't mean "does the shape look roughly right." I mean: if you ask for a 50 mm x 30 mm x 10 mm block with a 6 mm hole centered on the top face, do you get those dimensions?
The honest answer is: sometimes, sort of. The tools I tested got the gross dimensions within a few percent on simple prompts. A box is usually close to the right size. A cylinder is usually close to the right diameter. But "close" in manufacturing is not a compliment. A 6 mm hole that comes out as 5.7 mm doesn't fit the M6 bolt. A 50 mm dimension that's actually 49.2 mm means the part doesn't mate with the assembly. And these are the easy cases, single features on simple geometry.
Once you add complexity (holes on curved surfaces, pockets with fillets, features that reference other features), the dimensional drift gets worse. I measured one output where the prompted hole diameter was 8 mm and the generated geometry measured 7.4 mm on one axis and 7.8 mm on the other. Not a circle. An oval pretending to be a circle. The STL triangulation didn't help, but the underlying B-Rep was also off.
For reference, a typical CNC machining tolerance is plus or minus 0.1 mm for standard work, tighter for precision fits. The AI outputs I tested were off by 0.3 to 0.8 mm on individual features, and that's before you talk about GD&T, surface finish, or feature relationships. Nobody is holding position tolerance on a hole that the AI placed by vibes.
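The kind of check I ran is easy to express in code. This is a minimal sketch using the numbers from above; the feature names and the ±0.1 mm band are my own illustration of a standard machining tolerance, not anyone's spec.

```python
# Minimal sketch of the dimensional check described above.
# Feature names, nominals, and measured values are illustrative;
# the +/-0.1 mm band is a typical standard CNC tolerance, not a spec.

def check_feature(nominal_mm, measured_mm, tol_mm=0.1):
    """Return (passes, deviation) for one measured dimension."""
    deviation = measured_mm - nominal_mm
    return abs(deviation) <= tol_mm, deviation

# The 8 mm hole that measured 7.4 mm on one axis and 7.8 mm on the other:
measurements = [
    ("hole dia (x axis)", 8.0, 7.4),
    ("hole dia (y axis)", 8.0, 7.8),
]

for name, nominal, measured in measurements:
    ok, dev = check_feature(nominal, measured)
    status = "PASS" if ok else "FAIL"
    print(f"{name}: nominal {nominal} mm, measured {measured} mm, "
          f"deviation {dev:+.2f} mm -> {status}")

# Ovality: the spread between the two axis measurements on a
# nominally round hole.
print(f"ovality: {abs(7.8 - 7.4):.2f} mm")
```

Both axes fail a ±0.1 mm band by a wide margin, and the 0.4 mm ovality is its own rejection reason before you even talk about position.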
Tolerances: the thing AI doesn't know exists
Here is a fact that should make any mechanical engineer uncomfortable: current text-to-CAD tools do not generate tolerances. Not dimensional tolerances. Not geometric tolerances. Not surface finish callouts. Nothing.
The model arrives as nominal geometry. It has no concept of fit classes, no awareness that a bearing bore needs to be H7, no understanding that a mating surface might need to be flat within 0.05 mm. The AI generates shapes. It does not generate engineering intent.
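To make "fit classes" concrete: a fit is just two deviation bands and the clearance they produce, and none of that exists in nominal-only geometry. Below is a sketch of the arithmetic for a 10 mm H7/g6 sliding fit. The deviation values are the ISO 286 table entries for the 6 to 10 mm range as I recall them; verify against the standard's tables before relying on them.

```python
# Sketch: clearance range for a hole/shaft fit from its deviation
# limits. The numbers are ISO 286 values for a 10 mm H7/g6 fit as I
# recall them (hole 0 / +0.015 mm, shaft -0.014 / -0.005 mm) --
# check the published tables before using these for real work.

def fit_clearance(hole_lo, hole_hi, shaft_lo, shaft_hi):
    """Min and max clearance in mm (positive = clearance fit)."""
    return hole_lo - shaft_hi, hole_hi - shaft_lo

cmin, cmax = fit_clearance(0.000, 0.015, -0.014, -0.005)
print(f"10 H7/g6: clearance {cmin:.3f} to {cmax:.3f} mm")
```

A bare 10.000 mm bore in an AI-generated STEP file carries none of this: no deviation band, no fit intent, nothing a machinist or an inspector can act on.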
This matters more than most people outside manufacturing realize. A part without tolerances is a suggestion, not a specification. You can't quote it, inspect it, or hold a supplier accountable to it. Every shop I've worked with would look at a toleranceless model and either guess, call you, or add their own standard tolerances, which may or may not match what you needed.
I have fixed this kind of mess before, usually while reheating the same coffee for the third time. The fix is always manual. You take the AI output, import it into your real CAD tool, and add all the engineering data yourself. Which raises the question: if you're doing all the engineering work anyway, how much time did the AI actually save?
Can you CNC machine AI-generated parts?
Short answer: not directly.
Longer answer: the geometry might cut, but the part definition won't survive a real machining workflow without significant rework. Here's why.
CNC machining needs more than a 3D shape. It needs tool access. It needs radii that match available cutters. It needs walls thick enough to not chatter or deflect. It needs features positioned so a vise can hold the stock and a spindle can reach the cut. It needs draft considerations if the part goes into a fixture. It needs a drawing, or at least a model with PMI, that tells the machinist what matters.
AI-generated geometry ignores all of this. I sent a text-to-CAD bracket to my usual shop with a STEP file and nothing else. The response was educational. The internal pocket had sharp corners, which means no end mill can cut them without EDM or a secondary operation. One wall was 0.4 mm thick on a part that was supposed to be aluminum, which would flex like a beer can. Two holes were positioned so close to an edge that the material between them would likely crack during machining. And the overall shape, while parametric in the source tool, arrived as dumb geometry with no feature tree, no sketch references, and no way to adjust anything without essentially remodeling it.
A machinist who's been doing this for thirty years told me the geometry "looked like it was designed by someone who had seen a part but never held one." That stuck with me.
If you want to understand this problem better, I wrote about AI CAD for CNC machining in more detail. The short version: text-to-CAD can give you a starting shape for CNC work, but turning that shape into a machinable part is still a manual job, and it's not a small one.
3D printing: the most forgiving test, and it still struggles
3D printing is supposed to be the easy case. FDM, SLA, SLS, and similar processes are more tolerant of weird geometry than subtractive methods. No tool access issues. No cutter radius limits. Less concern about workholding. If AI CAD were going to succeed anywhere, 3D printing should be it.
And it does work, sometimes, for simple things. I got a few box-shaped enclosures and basic bracket-like parts to print successfully on an FDM printer. The dimensions were close enough for a prototype. The shapes were printable. If your bar for success is "plastic object that exists and roughly resembles what you asked for," text-to-CAD can clear it.
But "roughly resembles" is a low bar, and even 3D printing has rules. Overhangs need support or design consideration. Wall thickness needs to be consistent and above minimum for the process. Bridging distances matter. Hole orientations affect accuracy. Print direction affects strength. None of this information is encoded in AI-generated output.
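Some of those rules are mechanical enough to check automatically, which makes their absence from AI output more glaring. The overhang rule, for instance: a downward-facing facet steeper than some critical angle needs support. A sketch, assuming a +Z build direction and the common 45-degree FDM rule of thumb; real limits depend on process and material.

```python
import math

# Sketch of a single printability rule: flag downward-facing facets
# that overhang more than a critical angle from vertical. Assumes
# the build direction is +Z and a 45 deg limit (an FDM rule of
# thumb; real limits depend on process and material).

MAX_OVERHANG_DEG = 45.0

def needs_support(unit_normal):
    """True if a facet with this unit normal exceeds the overhang limit."""
    nz = unit_normal[2]
    # A wall tilted by angle a from vertical has nz = -sin(a) on its
    # outward face, so support is needed once nz drops below -sin(limit).
    return nz < -math.sin(math.radians(MAX_OVERHANG_DEG))

print(needs_support((0.0, 0.0, -1.0)))  # flat downward face -> True
print(needs_support((1.0, 0.0, 0.0)))   # vertical wall -> False
print(needs_support((0.0, 0.0, 1.0)))   # upward face -> False
```

Run that over every facet in an STL and you have a crude support-need map. None of the tools I tested do even this much before handing you the file.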
One part I tested had a floating internal ledge that would have required support material inside a closed cavity. The AI didn't model drain holes. It didn't consider print orientation. It just generated a shape that worked in the viewport and called it done. That's fine for a concept render. It's not fine for anyone trying to actually press "print" and get a usable result.
The gap is smaller here than with CNC, but it still exists. And for production 3D printing, where you need consistency, dimensional stability, and process-aware design, the gap is larger than the FDM prototype crowd might expect.
Injection molding: not even close
I almost feel bad including this section because it's so lopsided. Injection molding is one of the most constraint-heavy manufacturing processes in common use. Draft angles. Uniform wall thickness. Gate location. Parting lines. Undercuts. Ejection. Sink marks. Weld lines. Shrinkage compensation. Material flow analysis. Every one of these factors needs to be considered during part design, not after.
Text-to-CAD tools have zero awareness of any of this.
I took a text-to-CAD generated enclosure, the kind of thing you'd injection mold for a consumer product, and showed it to a tooling engineer. He didn't even open it in CAD. He looked at the render for about ten seconds and pointed out three problems: no draft on any vertical face, variable wall thickness that would cause differential cooling and warpage, and a snap-fit feature that was geometrically impossible to eject from a two-part mold without a side action.
To be fair, most junior engineers also don't know injection molding constraints when they start. The difference is that junior engineers learn. Current AI CAD tools don't have the training data, the feedback loop, or the physics awareness to learn DFM constraints in any meaningful way. The geometry comes out looking like a part that forgot it needed to be manufactured.
If you're doing injection molding, text-to-CAD is not your tool. Not yet. Possibly not for a long time.
Sheet metal: a mixed bag
Sheet metal CAD is a specialized domain. You need bend radii, K-factors, flat pattern calculations, relief cuts, hem considerations, and awareness of what a brake can actually do. The part in 3D needs to unfold into a flat pattern that can be laser-cut or punched, then bent into shape without tearing, buckling, or springing back into the wrong angle.
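Even the simplest piece of that math, the flat-pattern length across a single 90-degree bend, shows how much process knowledge is baked in. A sketch using the standard bend-allowance formula; the K-factor of 0.44 is a common estimate for air-bent mild steel, not a universal constant, and real shops derive it from bend tests.

```python
import math

# Sketch: flat-pattern length for an L-bracket with one 90 deg bend,
# using the standard bend allowance BA = angle * (R + K*T). Flange
# lengths are measured to the outside mold lines. K = 0.44 is a
# common estimate for air-bent mild steel; real K-factors come from
# your shop's bend tests, not a constant.

def flat_length_90(flange_a, flange_b, thickness, inside_radius, k=0.44):
    """Flat blank length (mm) for one 90 deg bend."""
    setback = inside_radius + thickness          # outside setback at 90 deg
    allowance = (math.pi / 2) * (inside_radius + k * thickness)
    return flange_a + flange_b - 2 * setback + allowance

# 40 x 25 mm L-bracket, 2 mm steel, 2 mm inside bend radius:
print(f"{flat_length_90(40.0, 25.0, 2.0, 2.0):.2f} mm")  # -> 61.52 mm
```

The blank is about 3.5 mm shorter than the naive 40 + 25 sum. Get the K-factor or radius wrong and every bent dimension on the part moves. A solid extrusion with no bend features can't even ask the question.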
I tested one text-to-CAD tool's attempt at a simple L-bracket in sheet metal. It generated a solid body that looked like a bent piece of metal, but it was actually a solid extrusion with no sheet metal definition. No bend features. No flat pattern. No awareness of material thickness as a driving parameter. It was a picture of a sheet metal part, not a sheet metal part.
This is a pattern I keep seeing. AI CAD tools generate geometry that resembles the manufacturing output without understanding the manufacturing process. The visual fidelity is decent. The engineering fidelity is missing.
Where text-to-CAD actually fits in real workflows
After all this complaining, let me be honest about where these tools do something useful. Because they do. It's just not the thing the demos promise.
Text-to-CAD is genuinely helpful for early-stage concept exploration. If you need to see a rough shape quickly, explore a few form options, or generate starting geometry that you'll rebuild properly in a real parametric tool, text-to-CAD can save time. Not manufacturing time. Design thinking time.
I've used it for:
- Generating starting geometry for simple brackets and mounting plates, then importing into Fusion 360 to add real dimensions, fillets, and hole patterns
- Exploring enclosure form factors quickly before committing to a parametric model
- Creating visual stand-ins for early assembly mockups where exact dimensions don't matter yet
- Generating geometry for non-critical fixtures and jigs that will be iterated anyway
In those cases, the workflow is: prompt the AI, get a rough shape, import it, throw away most of the geometry, and rebuild the part properly. The AI saves maybe 15 to 30 minutes of initial sketching and extrusion on simple parts. It does not save the engineering.
The problem with text-to-CAD limitations is not that the tools are useless. It's that they're being marketed as more capable than they are, and people who don't know manufacturing are believing the marketing. A concept model that looks like a machined part is not a machined part, in the same way that a photo of food is not dinner.
The DFM problem nobody is solving
Design for manufacturability is not a checklist you apply after the geometry exists. It's a way of thinking about geometry while you create it. You choose wall thicknesses because of the molding process. You position holes relative to edges because of the tooling constraints. You add draft because the part needs to come out of the mold. You avoid sharp internal corners because the cutter has a radius. You think about how the part will be fixtured, inspected, and assembled while you're still sketching.
AI CAD tools don't think this way because they don't have a manufacturing model. They have a geometry model trained on existing CAD datasets, and those datasets don't typically include the manufacturing context that drove the design decisions. The AI can learn that brackets tend to have holes in certain positions, but it can't learn why those holes are positioned the way they are in relation to a specific manufacturing process.
This is a fundamental gap, not a software version gap. Until AI CAD tools are trained on manufacturing process data alongside geometry data, or until they can run DFM checks against their own output, the parts they generate will look right and be wrong. Not always catastrophically wrong. Sometimes just expensively wrong.
A tooling engineer I've worked with for years put it simply: "The geometry is the easy part. The hard part is knowing what you can't see in the model." He was talking about constraints, tolerances, process limits, and material behavior. He was also, I suspect, talking about experience. The kind of knowledge you build by having parts come back wrong and learning why.
What this means for CAD careers
There's a question floating around engineering forums and LinkedIn posts that amounts to: will AI replace CAD designers? My answer is no, but it will change what the job looks like, and the reason goes directly back to manufacturing.
The parts of CAD work that AI can already do are the parts that require the least engineering judgment: generating a basic shape from a description, creating starting geometry, roughing out a concept. These tasks are real, but they're not where most of the value or difficulty in CAD work lives.
The hard part of being a CAD designer is everything else. Knowing that a 1 mm wall will warp in ABS. Knowing that a 90-degree internal corner will crack under cyclic loading. Knowing that the beautiful swept surface you just created will require a five-axis mill and triple the machining cost. Knowing that the assembly looks great in the exploded view but can't actually be assembled in that order because the fastener access is blocked.
That judgment is what makes a CAD professional worth paying, and it's exactly the knowledge AI CAD tools don't have. The people who should be worried are the ones whose entire job is tracing shapes from reference images or recreating simple geometry from sketches. That work is going to get automated. The people who understand manufacturing constraints, tolerancing, assembly design, and real-world trade-offs are going to be more valuable, not less, because someone needs to clean up after the AI.
Where this technology might go
I'm skeptical but not cynical. The current state of AI CAD for manufacturing is poor, but the rate of improvement in AI generally is fast enough that writing it off permanently would be foolish.
The most plausible near-term improvements I see:
DFM validation layers that check AI output against manufacturing rules before the user ever sees it. This doesn't require the AI to understand manufacturing. It just requires a rule engine bolted onto the output. Several CAD companies are already working on this, and it's probably the fastest path to making AI-generated geometry more useful.
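The rule-engine idea is not exotic. Here's a minimal sketch of what such a validation layer looks like, with rule-of-thumb thresholds echoing the failures earlier in this post (0.4 mm walls, sharp pocket corners, holes crowding an edge). The thresholds are illustrative only, and a real checker would extract these values from the B-Rep rather than take them as a dict.

```python
# Minimal sketch of a DFM rule layer run over extracted feature data.
# Thresholds are rule-of-thumb, illustrative only; a real checker
# would query the B-Rep, not a hand-filled dict.

RULES = [
    ("min wall thickness >= 1.0 mm",
     lambda p: p["min_wall_mm"] >= 1.0),
    ("internal corner radius >= smallest cutter (1.5 mm)",
     lambda p: p["min_internal_radius_mm"] >= 1.5),
    ("hole-to-edge distance >= hole diameter",
     lambda p: p["min_hole_edge_mm"] >= p["max_hole_dia_mm"]),
]

def run_dfm_checks(part):
    """Return the names of all failed rules for one part description."""
    return [name for name, check in RULES if not check(part)]

# The bracket from earlier: 0.4 mm wall, sharp pocket corners,
# 6 mm holes sitting 3 mm from the edge.
bracket = {
    "min_wall_mm": 0.4,
    "min_internal_radius_mm": 0.0,
    "max_hole_dia_mm": 6.0,
    "min_hole_edge_mm": 3.0,
}

for failure in run_dfm_checks(bracket):
    print("FAIL:", failure)
```

That bracket fails all three rules, which is exactly the phone call I got from the machinist, just two hours earlier in the process.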
Process-specific training data. If you train an AI on parts that were actually manufactured (with their manufacturing context, tolerances, and process parameters), the output should get more realistic over time. The bottleneck is data. Most manufacturing data is proprietary, messy, and locked inside company PLM systems.
Hybrid workflows where the AI handles initial geometry and a human engineer handles everything else. This is basically what I described above, and it works today if you set expectations correctly. The AI is a drafting assistant, not an engineer.
What I don't see happening soon is AI that can replace the full design-for-manufacturing loop. That requires understanding physics, process constraints, cost trade-offs, supplier capabilities, assembly sequences, and inspection methods. It requires the kind of knowledge that comes from having a machinist hand you a ruined part and explain, with visible disappointment, what went wrong.
The honest summary
AI CAD tools generate geometry. They do not generate engineered parts. The difference matters every time you try to make something physical from the output. For concept work, visualization, and early-stage exploration, these tools offer real time savings. For anything that will be machined, molded, printed at production quality, or assembled into a product that needs to work, the output requires significant manual rework by someone who understands manufacturing.
The gap between the demo and the shop floor is not a bug that the next software update will fix. It's a reflection of how much engineering knowledge goes into a real part beyond its shape. Shape is necessary but not sufficient. Until AI CAD tools understand that distinction, they'll keep producing parts that look right on screen and arrive wrong in the mail.
My advice: use these tools where they help, which is early and rough. Don't trust them where it matters, which is everywhere else. And if your machinist calls you two hours after receiving your AI-generated STEP file, answer the phone. You're going to learn something.