Parametric AI design: generating editable feature trees

The real prize in AI CAD isn't geometry. It's generating a parametric model with a feature tree you can actually edit. Almost nobody can do this yet.

Quick answer

Parametric AI design means AI-generated CAD models that include editable feature trees (sketches, extrusions, fillets) rather than dead geometry. This is the hardest unsolved problem in text-to-CAD. Zoo.dev generates B-Rep output. Research like Neural CAD and DeepCAD targets parametric sequences. No tool produces fully editable parametric models from text in 2026.

Parametric AI design, the ability to generate a CAD model with an editable feature tree from a text prompt, is the hardest unsolved problem in text-to-CAD. No tool does it fully in 2026. What you get instead is geometry: solid, measurable, sometimes even good, but dead on arrival when it comes to editing. I've been staring at this gap for months now, importing STEP files from text-to-CAD tools into Fusion 360 and watching the feature manager display a single imported body with no history, no sketches, no parameters, no timeline. Just a lump of solid geometry sitting there like a paperweight. A very accurate paperweight, sometimes, but a paperweight nonetheless.

Last Tuesday I generated a bracket with Zoo.dev. Clean B-Rep output. Good dimensions. Exported it as STEP, opened it in Fusion 360. It showed up as an imported body. I needed to move one hole 3mm to the left. In a native Fusion model, that's a double-click on a sketch dimension, type the new number, press enter. In the imported body, that's: create a new sketch on the face, project the existing hole geometry, add a new hole in the right position, suppress or cut away the old hole, and hope the boolean operation doesn't produce weird internal faces. Five minutes of work for what should have been a three-second edit. This is the parametric gap, and it's the reason that AI-generated geometry, no matter how accurate, still can't replace a model built by a human who knows what they're doing.

What "parametric" actually means in CAD#

The word gets thrown around in marketing material for text-to-CAD tools, and most of the time it's used loosely enough to be misleading. So let me be specific about what a parametric model actually is, because the distinction is everything.

A parametric CAD model is not just geometry. It's geometry plus the recipe that created it. The recipe is the feature tree: an ordered sequence of operations (sketches, extrusions, cuts, fillets, patterns, mirrors, shells) where each operation has named parameters and can reference the results of previous operations. When you change a parameter, the whole tree re-evaluates, and the geometry updates accordingly.

This is why parametric models are useful. You can change a wall thickness and have every downstream feature update. You can modify a bolt pattern spacing and have the holes move. You can swap a material and have the mass properties recalculate. The model isn't static geometry. It's a program that produces geometry, and you can edit the program without starting over.
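The "program that produces geometry" idea can be sketched in a few lines. This is a toy illustration, not any real CAD kernel: `Model`, `add`, and `rebuild` are hypothetical names, and "geometry" here is reduced to scalar results so the re-evaluation behavior is visible.

```python
from dataclasses import dataclass, field

# Toy model of the parametric idea: features are operations over named
# parameters, each feature can read the results of earlier features,
# and changing one parameter re-evaluates everything downstream.

@dataclass
class Model:
    params: dict
    features: list = field(default_factory=list)  # ordered (name, fn) pairs

    def add(self, name, fn):
        self.features.append((name, fn))

    def rebuild(self):
        state = {}
        for name, fn in self.features:
            # each feature sees the params and all earlier results
            state[name] = fn(self.params, state)
        return state

m = Model(params={"width": 40, "height": 40, "thickness": 3})
m.add("base_area", lambda p, s: p["width"] * p["height"])
m.add("volume", lambda p, s: s["base_area"] * p["thickness"])

print(m.rebuild()["volume"])   # 4800.0-ish: 40 * 40 * 3
m.params["thickness"] = 5      # edit one parameter...
print(m.rebuild()["volume"])   # ...and the downstream feature updates: 8000
```

Dead geometry, by contrast, is the number `4800` with the `Model` object thrown away: the value is right, but there's nothing left to edit.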

Non-parametric geometry, which is what all current text-to-CAD tools produce, is the output without the program. You have the shape but not the instructions. If you want to change something, you're reverse-engineering the model's construction after the fact, which ranges from tedious (simple changes) to effectively impossible (complex changes where features depend on each other in ways you can't reconstruct from the final shape).

The B-Rep vs mesh distinction matters here too. B-Rep geometry (what Zoo.dev and STEP files provide) is at least topologically correct: you can select faces, measure edges, and add new features on top of the existing body. Mesh geometry (what most text-to-3D tools produce) is a pile of triangles that can't even support basic CAD operations. B-Rep without history is annoying. Mesh without history is unusable for engineering.

Why it matters for real work#

I keep a mental list of the reasons I actually open a CAD model after it's been created. Change a dimension because the mating part changed. Add a feature the client asked for after the first review. Scale a family of parts where the proportions stay the same but the sizes differ. Fix a manufacturing issue a machinist flagged. Adapt a design for a different material with different wall thickness requirements. Run a simulation and adjust the geometry based on results.

Every single one of those tasks requires an editable feature tree. Every single one is impossible or painful with imported dead geometry. This is why the text-to-CAD limitations aren't just about accuracy or feature complexity. The most fundamental limitation is that the output can't be edited the way engineers need to edit things.

Consider a product family. You design a bracket in three sizes: small, medium, large. In parametric CAD, you build one model with driving dimensions, and you create configurations or variants by changing those dimensions. The feature tree rebuilds cleanly for each size. With text-to-CAD, you generate three separate models from three separate prompts. Each one is an independent chunk of geometry with no relationship to the others. If you need to add a rib to all three, you add it three times. If you need to change the wall thickness across the family, you generate three new models and hope they're consistent. There's no parametric link between them, because there's no parametric anything.

This is why experienced CAD users are underwhelmed by text-to-CAD in ways that beginners aren't. Beginners see geometry appear from text and think it's magic. Experienced users see geometry without a feature tree and think it's a dead end. Both reactions are correct for their respective contexts.

The current state: what exists#

The text-to-CAD guide maps the full tool landscape, but here's the specific parametric situation for each major approach.

Zoo.dev generates B-Rep geometry. Real solid bodies with faces and edges. You can export STEP, import into CAD software, and work with the body. But there's no feature history. The output is a single solid body. This is the highest-quality non-parametric output currently available, and for many use cases (prototyping, 3D printing, quick concept checks), it's sufficient. For engineering workflows that require editing, it's the starting point of a rebuild.

CADAgent and the Fusion 360 MCP bridges generate geometry inside Fusion 360 by issuing API commands, which means the output does have native feature history. If CADAgent creates a sketch, extrudes it, then cuts a hole, those operations appear in Fusion's timeline. You can roll back, edit a sketch dimension, and see the model update. This is the closest thing to parametric AI output currently available.

The catch is that the feature trees produced by AI-driven Fusion 360 API calls are often brittle. The AI doesn't think about feature tree robustness the way an experienced modeler does. It might reference a specific face by index rather than by a stable reference. It might create dependencies that break if you change an upstream feature. It might use an overly complex construction strategy when a simpler one would be more stable. The feature tree exists, but it's the kind of feature tree a beginner produces: technically functional, but fragile under modification.
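The face-by-index failure mode is worth making concrete. This is a toy illustration, not the Fusion 360 API: real B-Rep face lists behave differently, but the renumbering hazard is the same.

```python
# Why index-based references are brittle: an upstream edit changes the
# face list, and "face #2" silently points at the wrong face.

faces_before = ["bottom", "top", "front", "back", "left", "right"]
fillet_target_by_index = 2           # AI recorded "face 2", meaning "front"

# An upstream edit (say, a new chamfer) inserts a face and renumbers:
faces_after = ["bottom", "chamfer", "top", "front", "back", "left", "right"]

print(faces_after[fillet_target_by_index])  # "top" — wrong face, no error raised

# A stable reference names the face instead of counting to it:
fillet_target_by_name = "front"
print(faces_after.index(fillet_target_by_name))  # still resolves to the intended face
```

Human modelers avoid this by referencing stable geometry (origin planes, named sketches, persistent IDs where the API offers them); current AI-generated API calls often don't.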

The Neural CAD research from Autodesk and the academic community (DeepCAD, Text2CAD) is explicitly targeting operation-sequence generation. The goal is a neural network that predicts not just geometry but the construction recipe. The Text2CAD paper demonstrated this for simple sketch-and-extrude sequences. The output is a sequence of CAD operations that can be executed by a kernel. In principle, this is parametric output. In practice, the operation vocabulary is limited (no fillets, no patterns, no sweeps in the training data), the dimensional accuracy is approximate, and the sequences sometimes produce invalid geometry.

The feature tree prediction problem#

Predicting a feature tree is harder than predicting geometry. This is worth repeating because it explains why parametric AI output is so far behind non-parametric output.

A given piece of geometry can be constructed by many different feature trees. An L-bracket can be built by extruding an L-shaped sketch, or by extruding two rectangles and joining them, or by extruding a larger rectangle and cutting a corner away, or by starting with a block and using shell and cut operations. All produce the same visual result. They produce radically different feature trees, with different editing properties.

The "right" feature tree depends on design intent, which is a concept that lives in the engineer's head, not in the geometry. Design intent is the plan for how the model should behave when things change. If the bracket needs to stay symmetric when the leg length changes, the feature tree should be built around a symmetry plane. If the wall thickness might change independently on each leg, the construction strategy should use separate operations for each wall. If the fillet radii are all supposed to match, they should reference a single variable, not be hard-coded separately.

An AI predicting a construction sequence has no access to design intent because the text prompt doesn't encode it. "L-bracket, 40mm legs, 3mm thick, two holes per leg" describes the final geometry. It says nothing about how the model should behave under modification. Should the holes move if the leg length changes? Should the thickness be linked across both legs? Should the fillet radius scale with the thickness? These decisions define the quality of a feature tree, and they require understanding the part's purpose, its manufacturing context, and the likely future changes. That's engineering knowledge, not geometry knowledge.

This is why generating a feature tree that a senior CAD engineer would accept as production-quality is a fundamentally different problem from generating geometry that looks correct. The geometry problem has a right answer you can check by measuring. The feature tree problem has many valid answers, and which one is "right" depends on context that the AI doesn't have.

The research approaches#

Two main research directions are attacking the parametric prediction problem.

Sequence-to-sequence models treat the construction history as a token sequence and predict it autoregressively, the same way a language model predicts the next word. DeepCAD established this approach. Text2CAD added text conditioning. The strengths: the output is directly executable by a CAD kernel, the operations are interpretable, and the approach scales with data. The weaknesses: the operation vocabulary is limited by the training data (mostly sketch-and-extrude), the sequences are brittle (one bad operation breaks everything downstream), and the combinatorial explosion of valid construction strategies for a given shape makes training difficult.

Constraint-based approaches try to generate the parametric relationships directly, specifying not just operations but the constraints between dimensions, the references between features, and the parametric links that make a model editable. This is closer to what a human modeler does, and it's significantly harder to train because the constraint structure is more complex than the operation sequence. Little published work exists here because the problem is genuinely difficult and the training data (models with explicit constraint annotations) barely exists.

The approach that's likely to work first in production isn't pure neural prediction. It's probably a hybrid: an LLM that understands CAD APIs (like the approach CADAgent uses) combined with specialized models that handle geometric reasoning. The LLM provides the construction strategy. The specialized models provide the geometric calculations. A CAD kernel provides the validation. The feature tree emerges from the interaction between these components, not from a single end-to-end model. It's messier than a clean research architecture, but production tools are almost always messier than research papers suggest.

What "editable" really means#

One more distinction that marketing tends to blur. There's a spectrum between "dead geometry" and "fully parametric model," and most of the intermediate points are useful even if they're not the final goal.

Dead geometry (imported STEP body, no history): you can add new features on top but can't modify existing ones. This is what Zoo.dev gives you. Useful for manufacturing directly, painful for iterating.

Recorded history (feature tree exists but isn't stable): you can see the operations that created the model, and some of them might be editable, but changing one thing might break three others. This is approximately what AI-driven Fusion 360 API approaches produce today. Useful for simple modifications, unreliable for anything complex.

Stable parametric model (feature tree with robust references and intentional constraints): you can change driving dimensions and the model updates predictably. This is what a skilled human produces in Fusion 360 or SolidWorks. This is the target. Nobody's AI produces this from text yet.

Configurable model (parametric model with defined variants, design tables, and derived configurations): the most advanced form of parametric modeling, where a single model represents an entire family of parts. This is what companies use for product lines. This is so far beyond current AI capabilities that I mention it only to calibrate how far there is to go.

The honest assessment#

The overview of how text-to-CAD works explains the current generation pipeline, and parametric output is conspicuously absent from what ships today. The tools that generate the best geometry (Zoo.dev) produce non-parametric B-Rep. The tools that produce feature trees (CADAgent, MCP bridges) are limited by the AI's ability to use a CAD API coherently, which works for simple parts and degrades rapidly for complex ones. The research that targets parametric sequence generation directly (Text2CAD, DeepCAD, Autodesk's Neural CAD) is real but pre-production.

This is the holy grail of text-to-CAD, and calling it a holy grail is accurate in both senses: it's the ultimate goal, and nobody's found it yet. The engineering value of AI-generated parametric models would be enormous. Imagine typing a description and getting a model you can edit as fluidly as one you built yourself. Imagine generating a family of parts by modifying parameters rather than writing new prompts. Imagine handing an AI-generated model to a colleague and having them modify it without rebuilding from scratch.

That future is plausible. The research trajectory points toward it. The data problem (we need vastly more training data with operation sequences and constraint annotations) is solvable in principle if the major CAD companies decide their users' modeling history is worth training on. The kernel integration problem (generated sequences need to execute in real CAD environments) is being solved by MCP bridges and API integrations.

But it isn't here yet, and pretending otherwise is the kind of optimism that leads to disappointment on a deadline. I use text-to-CAD regularly. I treat every output as a starting point. I rebuild the parts I need to edit. And I watch the Neural CAD research with genuine interest and zero expectation that it'll change my workflow this year. The gap between generating geometry and generating engineering intent is real, and closing it is the hardest problem in AI-assisted design. Whoever solves it first will have built something genuinely new. I just don't think the gap closes with one paper or one product release. It closes slowly, and the parts in my feature manager still need their constraints built by hand.
