
Can AI actually design CAD models?

Sort of. AI can generate simple CAD geometry from text prompts, and the results are getting less terrible. But "design" is a strong word for what it's actually doing.

Quick answer

AI can generate basic CAD models from text prompts using tools like Zoo.dev and CADAgent, producing real B-Rep geometry (not just meshes). But it cannot yet handle complex assemblies, tight tolerances, or manufacturing constraints. It generates geometry, not engineering design.

Last Tuesday I showed a colleague an enclosure I'd generated with Zoo.dev's text-to-CAD tool. Decent wall thickness, snap-fit tabs, four corner standoffs for a PCB, a slot for a USB connector. The whole thing had taken about thirty seconds of typing and fifteen seconds of waiting. He rotated it on screen, nodded a few times, and then asked the question I'd been trying not to ask myself: "But who designed it?"

I didn't have a good answer. I typed a prompt. The AI produced a shape. The shape looked like something a person would design. But nobody had sat there thinking about draft angles, or checking if the snap fits would actually engage, or worrying about whether the USB slot was wide enough for the connector plus the cable strain relief. The AI skipped all of that because the AI doesn't know any of that exists. It just made geometry that looked plausible, and plausible is a dangerous neighbourhood to live in when you're trying to make physical objects.

That afternoon stuck with me, partly because the enclosure really did look good on screen, and partly because my coffee was already cold by the time I'd finished measuring all the ways it wouldn't work. The question "can AI design CAD models" has a complicated answer, and most of the internet is giving you the simple one.

What AI can actually generate

The honest answer is that AI can generate simple CAD geometry, and it's getting better at it faster than I expected. Tools like Zoo.dev, CADAgent, and a handful of others can take a text description and produce real B-Rep solid models. Not meshes. Not renders. Actual solids with faces, edges, and topology you can open in Fusion 360 or SolidWorks, select a face, add a fillet, export a STEP file. That part is real, and it matters.

The sweet spot is prismatic parts with clear descriptions. Brackets. Mounting plates. Simple enclosures. Standoffs. Cable clips. The kind of geometry that an experienced CAD user would knock out in fifteen minutes but that still takes fifteen minutes nobody wants to spend. If you type "L-bracket, 3mm aluminum, 50mm legs, two M5 clearance holes per leg on a 30mm spacing," the current tools will usually give you something close. Not perfect, but close enough to be a useful starting point.

I've been testing text-to-CAD tools for months now, and the pattern holds: the simpler and more specific the prompt, the better the result. A rectangular plate with a bolt pattern comes out fine. A cylindrical standoff with a counterbore works more often than not. A box enclosure with mounting bosses lands in the right neighbourhood. You learn to describe things the way a careful machinist would read a drawing, leaving nothing ambiguous, specifying every dimension that matters.
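One way to make that habit concrete is to build the prompt from explicit dimensions instead of typing it freehand. This is just an illustrative sketch (the helper and its parameters are my own, not any tool's API), but it shows the level of specificity that tends to work:

```python
def bracket_prompt(leg_mm: float, thickness_mm: float, hole_dia_mm: float,
                   hole_spacing_mm: float, material: str = "aluminum") -> str:
    """Compose an unambiguous text-to-CAD prompt for a simple L-bracket.

    Every dimension that matters is stated explicitly, the way a careful
    machinist would read a drawing. Vague prompts produce vague geometry.
    """
    return (
        f"L-bracket, {thickness_mm:g}mm {material}, "
        f"{leg_mm:g}mm legs at 90 degrees, "
        f"two {hole_dia_mm:g}mm clearance holes per leg "
        f"on a {hole_spacing_mm:g}mm spacing, centered on the leg width"
    )

print(bracket_prompt(50, 3, 5.5, 30))
```

The payoff is repeatability: when a dimension changes, you regenerate the prompt rather than rewording a sentence and hoping the model reads it the same way.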

Where things start to go sideways is the moment you need features that relate to each other in ways the AI can't infer from a sentence. A snap fit that needs to flex a specific amount. A wall that tapers for draft. A pocket that references the position of a mating part. The AI doesn't understand mechanical context. It understands shapes that tend to appear near other shapes, which is a very different thing.

The difference between generating geometry and designing a part

This is the part that matters, and the part that most AI hype conveniently skips over.

Generating geometry means producing a 3D solid that matches a description. Designing a part means understanding why the geometry should be a certain way, what forces act on it, how it will be manufactured, what tolerances it needs, how it mates with adjacent parts, what happens when the material shrinks or the temperature changes or the bolt gets overtorqued by an assembler having a bad Monday.

AI does the first thing. It does not do the second thing. Not even a little.

When I model a bracket in Fusion 360, I'm making dozens of small decisions that never appear in the final geometry. I'm choosing a wall thickness based on the material and the loads. I'm positioning holes relative to edges with enough meat to avoid cracking. I'm adding fillets not because they look nice but because stress concentrations kill parts. I'm thinking about whether a CNC mill can reach that pocket, or whether the bend radius works for the sheet metal brake in the shop down the road. Every feature has a reason that lives outside the model itself.

AI-generated geometry has none of that embedded knowledge. The bracket looks like a bracket. The holes are in plausible locations. The fillets exist. But the reasoning behind each feature is absent, and that reasoning is what separates a shape from an engineered part. A machinist I've worked with for years once described an AI-generated STEP file as "a part that had never met a tool." He wasn't wrong.

What AI cannot do yet

The list is long, and it maps pretty directly to the things that make CAD work actually hard.

Complex assemblies are out. The current tools work on single bodies. Ask for an assembly with mating constraints, fasteners, clearances, and an assembly sequence, and you'll get either an error or a single blob that vaguely suggests multiple parts fused together. Real assemblies are about relationships between parts, and relationships require the kind of engineering judgment AI doesn't have.

Tolerances don't exist in AI output. No dimensional tolerances, no GD&T, no fit classes, no surface finish callouts. The geometry arrives as nominal dimensions that are approximately correct if you're lucky. For prototyping, this is workable. For anything going to a supplier with a purchase order, you're adding all the engineering data yourself. I've covered the accuracy problem in detail elsewhere, but the short version is: don't send AI-generated dimensions to a machine shop without measuring everything yourself first.
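"Measure everything yourself" can be as simple as checking each printed-caliper reading against the nominal plus the tolerance band you intend to put on the drawing. A minimal sketch, with made-up dimensions and tolerance values for illustration:

```python
def within_tolerance(measured: float, nominal: float,
                     plus: float, minus: float) -> bool:
    """True if a measured dimension falls inside nominal +plus / -minus."""
    return (nominal - minus) <= measured <= (nominal + plus)

# AI output arrives as bare nominals; the tolerance band is engineering
# data you add yourself before anything goes to a supplier.
checks = {
    "hole_dia":   within_tolerance(5.52, 5.5, 0.10, 0.00),   # clearance hole
    "leg_length": within_tolerance(49.87, 50.0, 0.20, 0.20),
    "thickness":  within_tolerance(2.81, 3.0, 0.10, 0.10),   # undersize wall
}
print(checks)
```

Trivial as it is, a table like this is the difference between "the model looked right" and a dimensioned inspection record you can argue about with a supplier.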

Design for manufacturing is completely absent. Draft angles for injection molding. Bend allowances for sheet metal. Tool access for CNC pockets. Minimum wall thickness for the process. Gate locations. Ejection considerations. Weld lines. None of this is in the AI's vocabulary. The geometry might look manufacturable on screen, but the shop floor has a way of revealing what the viewport hid.
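Even a crude, rule-based check catches some of what the viewport hides. The sketch below is not a real DFM engine, and the minimum-wall values are illustrative placeholders rather than process gospel, but it shows the shape of the validation that AI output currently lacks:

```python
# Illustrative minimum wall thicknesses per process (placeholder values).
PROCESS_MIN_WALL_MM = {
    "injection_molding": 1.0,
    "cnc_milling": 0.8,
    "fdm_printing": 1.2,
}

def dfm_wall_check(walls_mm: list[float], process: str) -> list[int]:
    """Return indices of walls thinner than the process minimum.

    AI-generated geometry carries no manufacturing awareness, so even a
    one-rule check like this flags parts before they reach a shop floor.
    """
    limit = PROCESS_MIN_WALL_MM[process]
    return [i for i, w in enumerate(walls_mm) if w < limit]

print(dfm_wall_check([2.0, 0.6, 1.5, 0.9], "injection_molding"))  # → [1, 3]
```

A real DFM pass also needs draft, tool access, and gate analysis, which is exactly why nobody has bolted one onto text-to-CAD output yet.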

Organic and complex surfaces barely work. Swept profiles, lofted blends, variable-radius fillets, anything that requires smooth G2 continuity across a surface network. These are hard for experienced CAD users. For AI, they're basically impossible right now. If your part has freeform surfaces, you're modeling them yourself.

The theme across all of these is the same: AI can approximate shape, but it cannot approximate the thinking that went into the shape. And in engineering, the thinking is the design. The shape is just its shadow.

An honest look at where things stand

I want to be fair, because the technology is real and dismissing it entirely would be dishonest. I use text-to-CAD in my own workflow. Not for production parts, but for getting started. If I need a quick bracket concept to show a client, I'll generate one in thirty seconds rather than spending fifteen minutes in Fusion. If I want to explore five different enclosure proportions before committing, text-to-CAD lets me iterate at the speed of typing rather than the speed of sketching and extruding.

The tools I take most seriously are Zoo.dev, which outputs real B-Rep as STEP files through a well-documented API, and CADAgent, an open-source Fusion 360 add-in that generates models with actual feature history inside a real parametric environment. Both of these produce geometry you can genuinely work with, not just look at. The major CAD vendors are also adding AI features to their platforms, with Autodesk, Dassault, PTC, and Siemens all building various forms of AI-assisted modeling into their existing tools. Most of that is still early, but the direction is clear enough.

The honest assessment: for simple, well-described parts, text-to-CAD saves real time. For moderate complexity, it gives you a starting point that needs significant rework. For anything complex, it saves nothing because you end up rebuilding the model from scratch anyway. That's not failure. That's just early technology being early.

Where this is heading

The research is moving quickly. The Text2CAD paper that got a spotlight at NeurIPS 2024 showed that sequence-based generation of CAD operations from text is a viable approach, and the commercial tools are catching up. If you want to understand the technical architecture behind how text-to-CAD works, the key insight is that these systems predict CAD operations, not raw geometry, which is why the output is editable at all. Training datasets are growing. Integration with real CAD environments is getting tighter. Within a year or two, I expect simple-to-moderate parts to be fairly reliable straight from a prompt.
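To make the operations-not-geometry point concrete, here is a toy representation in the spirit of sequence-based generators like Text2CAD. The class names and parameters are my own invention, not any tool's schema, but the idea carries: because the model emits a sequence of parametric steps, you can patch one parameter and replay the history instead of editing frozen triangles.

```python
from dataclasses import dataclass

@dataclass
class CadOp:
    """One step in a CAD operation sequence (sketch, extrude, hole, fillet)."""
    op: str
    params: dict

# A sequence generator predicts steps like these rather than raw geometry.
bracket = [
    CadOp("sketch",  {"plane": "XY", "profile": "rect", "w": 50, "h": 50}),
    CadOp("extrude", {"distance": 3}),
    CadOp("hole",    {"dia": 5.5, "count": 2, "spacing": 30}),
    CadOp("fillet",  {"edges": "outer", "radius": 2}),
]

def edit(seq, op_name, **changes):
    """Replay-style edit: patch one operation's parameters, keep the rest."""
    return [CadOp(s.op, {**s.params, **changes}) if s.op == op_name else s
            for s in seq]

thicker = edit(bracket, "extrude", distance=5)
print(thicker[1].params["distance"])  # → 5
```

This is also why the output opens with feature history in a parametric environment: the history *is* the output.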

The harder problems, the ones that require manufacturing awareness, tolerance reasoning, and multi-part thinking, will take longer. Maybe much longer. There's talk of bolting DFM validation onto AI output, which would catch the worst errors before they reach a shop. There's work on training models with manufacturing context, not just geometry. Both of those would help. Neither of those is shipping today.

I think the most likely near-term future is a hybrid one. AI generates the starting geometry. A human engineer adds the intelligence: tolerances, manufacturing constraints, assembly relationships, the stuff that turns a shape into a product. The comparison between AI and human CAD design is less about competition and more about figuring out which parts of the work benefit from automation and which parts still need a brain that's been yelled at by a machinist.

So, can AI design CAD models?

It can generate them. It cannot design them. That distinction sounds pedantic until you try to manufacture the output, at which point it becomes the only thing that matters.

"Design" implies intent, constraint awareness, and engineering judgment. AI has none of those. It has pattern recognition trained on existing geometry, and it uses that to produce shapes that statistically resemble real parts. Sometimes the resemblance is close enough to be useful. Sometimes it's close enough to be dangerous, which is worse.

If you're exploring concepts, generating quick geometry for discussion, or producing simple parts for non-critical applications, AI text-to-CAD is a real tool that saves real time. If you're engineering a product that needs to work, fit, survive, and be manufactured reliably, AI gives you a starting sketch at best. The text-to-CAD limitations are not temporary inconveniences. They're reflections of how much implicit knowledge goes into every part a good engineer models.

I'll keep using these tools. I'll keep being surprised when they get something right and unsurprised when they miss. But I'm not calling what they do "design" until the output can survive a conversation with a machinist without anyone reaching for a chair. We're not there yet. We're getting closer. And my Fusion 360 shortcuts aren't going anywhere.
