Text-to-CAD workflows and tools
A practical look at text-to-CAD workflows from prompt to export, covering the tools that exist, the ones that work, and the uncomfortable amount of manual cleanup still involved.
Quick answer
Text-to-CAD workflows follow a loop of prompting, generating, reviewing, editing, and exporting. The tools that matter right now are Zoo.dev for B-Rep output, AdamCAD for quick parametric parts, CADAgent inside Fusion 360, and OpenSCAD paired with an LLM. None of them replace knowing CAD, but some of them save real time on the right kind of geometry.
Text-to-CAD workflows start with a prompt and end with an editable model you can actually manufacture from, but the middle is where things get honest. The tools exist. Some of them produce real B-Rep geometry. The workflow is prompt, generate, review, clean up, export, and probably prompt again because the first result had the wall thickness of a credit card.
I was sitting at my desk last Tuesday, second coffee going cold, trying to see if I could get a usable electronics enclosure out of Zoo.dev faster than I could model one from scratch in Fusion 360. The enclosure was not complicated. Rectangular box, lid with screw bosses, a cutout for a USB-C port, standoffs for a PCB. The kind of part I've modeled hundreds of times and could probably sketch in my sleep, which is exactly why it felt like a fair test. If text-to-CAD can't beat me on a part I find boring, it's not going to help on the parts I find interesting.
The answer, for what it's worth, was mixed. The AI got me about 70% of the way in maybe two minutes. Then I spent another twenty minutes fixing the things it got wrong. Whether that counts as "faster" depends on how you do the math and how much you enjoy arguing with yourself.
That experience, more than any feature list or product demo, is what text-to-CAD workflows actually feel like right now. And I think anyone considering these tools deserves to hear that before they reorganize their process around a technology that still needs a human holding its hand through most of the interesting decisions.
The workflow loop nobody shows in demos
Every text-to-CAD tool, regardless of what it's called or who sells it, follows the same basic loop. The marketing materials like to present this as a straight line: type a prompt, get a model, done. In practice it's a circle, and you will go around it more than once.
The loop looks like this: you write a prompt describing the part you want. The tool generates geometry. You look at the geometry and discover what the tool misunderstood. You refine the prompt or edit the model directly. You export. You check the export in your actual CAD environment. You find more problems. You fix those. Eventually you have something usable, or you give up and model it yourself.
I've gone through this loop dozens of times now with different tools, and the ratio of "generated geometry I kept" to "generated geometry I replaced" varies wildly depending on part complexity. Simple prismatic shapes with clear dimensions? The tools do fine. Anything with internal features, draft angles, thin walls near each other, or organic transitions? You're back to doing the work yourself, just with a worse starting point than a blank sketch.
The prompt stage deserves more attention than it usually gets. I've written about this in detail in my text-to-CAD prompt engineering guide, but the short version is: you need to think like you're writing a work order for someone who has never seen your part, has no spatial reasoning, and takes every word literally. Dimensions matter. Material context helps. Vague words like "small" or "sturdy" are useless. If you say "box with a hole," you'll get a box with a hole, and nothing else. No fillets, no wall thickness, no mounting features, no consideration of how the thing gets made. My guide on how to use text-to-CAD covers the practical side of this in more detail.
The tools that actually exist
The text-to-CAD space is young enough that the list of tools worth talking about fits on one hand. There are others, but I'm focusing on the ones I've actually used or that produce output worth discussing. If you want the broader picture, the text-to-CAD guide covers the full range.
Zoo.dev is the one most people encounter first. It runs on the KittyCAD geometric kernel, which is GPU-native and generates actual B-Rep (Boundary Representation) geometry, not mesh blobs. The output can be exported as STEP, glTF, OBJ, STL, and a few other formats. The STEP export is the one that matters if you're doing real work, because STEP is what your downstream CAD tool can actually digest as editable geometry.
I've had decent results with Zoo for mechanical parts: brackets, simple housings, mounting plates, that kind of thing. Where it struggles is anything requiring relationships between features. Tell it you want a mounting plate with four M4 clearance holes on a bolt pattern, and it might get the holes but space them wrong, or get the spacing right but forget the clearance. It's inconsistent in a way that means you always have to check. For a deeper walkthrough, the Zoo text-to-CAD tutorial covers the specifics.
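Checking that kind of output is faster with a few lines of code than with the measure tool. Here's a minimal sketch (stdlib Python, helper names are mine) that computes where holes on a bolt circle should land, so you can compare against what the tool actually generated:

```python
import math

def bolt_circle(n_holes, circle_dia, start_angle_deg=0.0):
    """Return (x, y) hole centers for n_holes equally spaced on a bolt circle."""
    r = circle_dia / 2.0
    pts = []
    for i in range(n_holes):
        a = math.radians(start_angle_deg) + 2 * math.pi * i / n_holes
        pts.append((round(r * math.cos(a), 4), round(r * math.sin(a), 4)))
    return pts

def max_deviation(expected, actual):
    """Worst-case center-to-center error between expected and generated holes."""
    return max(math.dist(e, a) for e, a in zip(expected, actual))

# Four holes on a 60mm bolt circle, as you'd spec for M4 clearance
expected = bolt_circle(4, 60.0)
```

Measure the generated hole centers, feed them in as `actual`, and compare `max_deviation` against whatever positional tolerance you actually need.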
Zoo also has an API, which is where things get more interesting if you're a developer. The Python SDK (kittycad) lets you script prompt-to-STEP pipelines, which opens up batch generation and integration into automated workflows. The text-to-CAD API post goes into this properly.
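To give a flavor of the shape such a pipeline takes, here's a sketch. The `generate_step` function below is a placeholder stand-in, not the kittycad SDK's real call; the API post covers the actual SDK. The structure around it, one named prompt in, one STEP file out, is the part that carries over:

```python
from pathlib import Path

def generate_step(prompt: str) -> bytes:
    """Placeholder for the real SDK call (hypothetical here). In practice
    this would submit the prompt to the text-to-CAD endpoint, poll until
    the job finishes, and return the STEP file contents."""
    return f"ISO-10303-21; /* placeholder for: {prompt} */".encode()

def batch_generate(prompts: dict, out_dir: str = "generated") -> list:
    """Run a set of named prompts and write one STEP file per part."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    written = []
    for name, prompt in prompts.items():
        path = out / f"{name}.step"
        path.write_bytes(generate_step(prompt))
        written.append(path)
    return written

parts = {
    "bracket_40mm": "L-bracket, 40mm x 40mm x 3mm legs, two 5mm holes per leg",
    "plate_m4": "mounting plate, 80x60x5mm, four M4 clearance holes on a 60x40 pattern",
}
files = batch_generate(parts)
```

The interesting part is less the loop than what it enables: generating a family of dimensional variants from templated prompts and reviewing them side by side.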
AdamCAD takes a different approach. It generates parametric geometry with adjustable sliders, which means you get a model and can then tweak dimensions without re-prompting. The output is mostly STL, which limits its usefulness for downstream parametric editing, but for quick prototyping and 3D printing workflows it's fast and surprisingly capable. At $5.99 a month for the basic tier, it's cheap enough to try without thinking about it.
CADAgent is the newest entry and the one I find most promising. It's an open-source Fusion 360 add-in (released March 2026, GitHub: er-fo/CADAgent) that generates models directly inside Fusion 360 using an Anthropic API key you provide. This matters because the output lives natively in your Fusion timeline. No import step, no format conversion, no prayer that the STEP file will behave. You describe the part, CADAgent writes the Fusion operations, and the result shows up in your feature tree like you modeled it yourself. It's early and limited, but the approach is right.
OpenSCAD paired with an LLM is the path that gets the least attention and might have the most long-term potential. OpenSCAD is already code-based, which means an LLM can generate its scripts directly. You describe a part in natural language, the LLM writes OpenSCAD code, you run it, you see the result, you iterate. With the OpenSCAD MCP server adding a visual feedback loop, the LLM can actually see what it generated and correct itself. I wrote a full piece on this: OpenSCAD + AI. If you're comfortable with code and want full control over the geometry, this is the workflow I'd recommend exploring first.
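To make that concrete, here's the sort of thing the loop produces. I've hand-written the generator in Python so the emitted OpenSCAD is known-good; the dimensions and module name are mine, and in the real workflow the LLM writes the script directly:

```python
def enclosure_scad(length=120, width=80, height=40, wall=2):
    """Emit an OpenSCAD script for a simple open-top shell enclosure:
    an outer box with an inner cavity subtracted out."""
    return f"""\
// Open-top enclosure: outer box minus inner cavity
module enclosure(l={length}, w={width}, h={height}, t={wall}) {{
    difference() {{
        cube([l, w, h]);                 // outer shell
        translate([t, t, t])
            cube([l - 2*t, w - 2*t, h]); // cavity, extended past the top
    }}
}}
enclosure();
"""

script = enclosure_scad()
```

Save the output as `enclosure.scad`, open it in OpenSCAD, and every dimension is a parameter you can read, diff, and version-control, which is the whole appeal of this path.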
FreeCAD with AI-assisted Python macros follows similar logic. FreeCAD's Python API is well-documented enough that current LLMs can generate usable macros for common operations. It's not a polished product. You're basically asking ChatGPT or Claude to write FreeCAD Python scripts, then running them and fixing what breaks. But for open-source users who already know FreeCAD, it works surprisingly well for repetitive geometry.
Prompt engineering is the skill nobody expected to need
I spent ten years learning how to communicate design intent through sketches, dimensions, and GD&T. Now I'm also learning how to communicate design intent through sentences typed into a text box, which is a sentence I would have found absurd in 2023.
The uncomfortable truth is that prompt quality is the single biggest factor in text-to-CAD output quality. Better prompts produce better parts. Vague prompts produce garbage. This is true across every tool I've tested.
A few things I've learned the hard way:
Start with overall dimensions. "A rectangular enclosure, 120mm long, 80mm wide, 40mm tall, with 2mm wall thickness" gives the tool something to work with. "A box for my Arduino" does not, because the tool has no idea which Arduino you mean, how much clearance you want, or whether "box" means open-top, lidded, snap-fit, or screw-closed.
Specify features in order of importance. The tool will try to include everything, but when features conflict with each other geometrically, the ones mentioned later tend to get mangled. If the mounting holes matter more than the cosmetic fillets, say the mounting holes first.
Use manufacturing language when you can. "3mm fillets on all external edges" is better than "rounded edges." "M4 counterbore holes, 8mm diameter, 4mm deep" is better than "screw holes." The tools have been trained on engineering data, and engineering vocabulary gets better results than conversational descriptions. My text-to-CAD prompt engineering guide has a lot more on this, including prompt templates that work.
Be explicit about what you don't want. If you want a solid body with no internal voids, say so. If you want a shell with uniform wall thickness, say that. Left to their own devices, these tools make assumptions, and the assumptions are not always the ones a person with manufacturing experience would make.
And accept that multi-step prompting usually works better than a single long prompt. Describe the base shape first. Get that right. Then add features. Get those right. Then refine. Trying to specify an entire complex part in one prompt is like trying to explain an assembly to someone using only one sentence. You can do it, but the result will be confusing for everyone involved.
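If you prompt the same families of parts repeatedly, it's worth templating that structure so the ordering rules happen by construction. A rough sketch (the helper and its format are mine, not any tool's API):

```python
def build_prompt(part, dims, features, exclusions=()):
    """Assemble a text-to-CAD prompt: overall dimensions first,
    features in priority order, explicit exclusions last."""
    lines = [f"{part}: " + ", ".join(f"{k} {v}" for k, v in dims.items()) + "."]
    for i, feat in enumerate(features, 1):  # numbered so priority is explicit
        lines.append(f"{i}. {feat}")
    for rule in exclusions:
        lines.append(f"Do not {rule}.")
    return "\n".join(lines)

prompt = build_prompt(
    "Rectangular enclosure",
    {"length": "120mm", "width": "80mm", "height": "40mm", "wall thickness": "2mm"},
    ["Four M4 counterbore holes, 8mm diameter, 4mm deep, one per corner",
     "USB-C cutout, 9.2mm x 3.4mm, centered on one short wall",
     "3mm fillets on all external edges"],
    exclusions=["add internal voids", "vary wall thickness"],
)
```

It's nothing clever, but it forces the dimensions-first, priority-ordered, exclusions-stated discipline every time, instead of only on the days you remember.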
File formats: where the workflow gets real
The moment you export from a text-to-CAD tool, you enter the territory I've been complaining about for years: file format interoperability. The text-to-CAD tutorial covers the step-by-step of this, but here's the short version of what to expect.
STEP is the format you want for downstream editing. It carries B-Rep geometry that most professional CAD tools can import as solids. When Zoo.dev exports STEP, the result usually imports cleanly into Fusion 360 or SolidWorks as a dumb solid, meaning you get the geometry but not a feature tree. You can still add features on top of it, take measurements, cut sections, and do the kind of work you'd do with any imported body. But you can't go back and change the AI's sketch dimensions, because there are no sketch dimensions. It's a solid lump that happens to be the right shape.
STL is what you get from most tools when STEP isn't available. It's a mesh. It's fine for 3D printing. It's mostly useless for parametric editing. If you import an STL into SolidWorks, you get a mesh body that you can look at, measure, and maybe convert to a solid if you enjoy frustration and have a high tolerance for approximation artifacts.
glTF and OBJ are for visualization. They're what you'd use if you want to drop the geometry into a web viewer, a render engine, or a game. They're not for manufacturing.
DXF matters if you're working in 2D or doing laser cutting and CNC routing. Some tools (Vondy, for example) output DXF directly, which is useful for flat parts but doesn't help with 3D.
The practical advice is simple: if the tool can export STEP, use STEP. If it can't, you're either 3D printing the STL directly or you're rebuilding the geometry in your CAD tool using the AI output as a visual reference. Both are valid workflows, just don't pretend the second one is "AI-generated CAD" when what you actually did was trace over a robot's homework.
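One small gotcha worth automating: STL comes in ASCII and binary flavors, and some importers care which one they're fed. The binary layout is fixed by the format itself (an 80-byte header, a uint32 triangle count, then 50 bytes per triangle), which allows a stdlib-only sniffer; the heuristic here is my own, since some binary files also happen to start with "solid":

```python
import struct

def sniff_stl(data: bytes) -> str:
    """Classify raw STL bytes as 'ascii', 'binary', or 'unknown'.
    The size check runs first because binary STLs sometimes begin
    with the word 'solid' in their 80-byte header."""
    if len(data) >= 84:
        n_tri = struct.unpack("<I", data[80:84])[0]
        if len(data) == 84 + 50 * n_tri:
            return "binary"
    # Too short or size mismatch: fall back to the ASCII keyword
    return "ascii" if data.lstrip().startswith(b"solid") else "unknown"
```

Run it on whatever a tool exports before you blame the importer.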
Editing AI output: the part they skip in the demo
This is where text-to-CAD gets real, and where the demos stop being helpful. Every demo I've seen shows the prompt going in and the shiny model coming out. Nobody shows the twenty minutes after, when you're trying to figure out why the boss features are 0.3mm off-center, or why the wall is 1.5mm thick on one side and 2.1mm on the other, or why there's a tiny internal face that shouldn't exist but is going to ruin your fillet operation.
Editing AI-generated geometry is different from editing your own work. When you model something yourself, you know the construction logic. You know which sketch drives which feature. You know that the shell came before the boss, and that changing the draft angle will ripple into the parting line. AI-generated models have no such logic. They're just geometry. The "feature tree" in CADAgent output is better than nothing, but it's still not your feature tree.
My usual approach with imported AI geometry is to treat it like I'd treat any dumb import from a client: useful as a reference, not trustworthy as a starting point for parametric design. I'll measure it, verify the critical dimensions, and then decide whether to modify the import directly (adding features, cutting material, repairing faces) or use it as an underlay while I rebuild the part properly.
For simple parts, modifying the import is fast and fine. For anything going to manufacturing with tolerances, I rebuild. Every time. Because I've learned the hard way that "close enough" geometry from any source, AI or otherwise, has a habit of being exactly wrong in the one spot that matters.
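The verification step itself is easy to script once you've pulled measurements off the import. A minimal sketch (helper and names are mine) that flags any dimension outside its tolerance:

```python
def check_dims(nominal, measured, default_tol=0.1):
    """Compare measured dimensions against nominal values.
    nominal maps name -> value, or name -> (value, tol) for a
    per-dimension tolerance; default_tol applies otherwise.
    Returns the names of out-of-tolerance dimensions."""
    failures = []
    for name, spec in nominal.items():
        value, tol = spec if isinstance(spec, tuple) else (spec, default_tol)
        if abs(measured[name] - value) > tol:
            failures.append(name)
    return failures

nominal = {"length": 120.0, "wall": (2.0, 0.05), "hole_spacing": 60.0}
measured = {"length": 119.96, "wall": 2.1, "hole_spacing": 60.3}
bad = check_dims(nominal, measured)
```

If `bad` comes back non-empty on a dimension that matters, that's your signal to rebuild rather than patch.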
Fitting text-to-CAD into an existing workflow
The question I keep getting from other CAD users is not "which tool is best" but "where does this fit." And the honest answer is: it fits in the early stages, and it fits for certain kinds of parts, and it does not fit everywhere.
Where text-to-CAD works well right now: quick concept models for review. Early-stage prototyping where you need a physical shape fast and don't care about feature trees. Generating starting geometry for simple parts that would be tedious to model from scratch but don't require precision. Creating reference models to sanity-check proportions before committing to a full parametric build. Batch-generating variations of a basic shape for comparison.
Where it does not work well: anything with tight tolerances. Assemblies. Parts with complex internal features. Anything that needs a clean feature tree for future revision. Geometry that must conform to specific manufacturing constraints like draft angles, parting lines, or minimum bend radii. In other words, most of what professional CAD users spend their time on.
The workflow I've settled into is this: I use text-to-CAD the way I used to use hand sketches on scrap paper. It's for getting the rough shape out of my head and into something I can look at, spin around, and evaluate before I commit to the real modeling work. It's a thinking tool, not a production tool. Not yet.
For OpenSCAD users, the workflow is tighter because the LLM output is code, not geometry. You can read the code, understand it, modify it, version-control it, and parametrize it properly. This is closer to a real production workflow, and it's why I think the OpenSCAD + AI path is underrated.
What the vendors are doing
The major CAD vendors are all adding AI features, but most of them are not doing text-to-geometry yet. Autodesk announced Neural CAD at AU 2025, which would generate editable 3D geometry from text prompts inside Fusion. It's in development. Dassault is shipping AI companions (AURA and LEO) in SolidWorks 2026, plus an assembly structure designer that takes text input. PTC has an AI Advisor in Onshape. Siemens has NX AI Chat.
These are mostly copilot-style features: AI that helps you use the existing tools faster, not AI that replaces the tools. The distinction matters. A copilot that suggests the right command when you type "extrude this face 10mm" is useful. It's not the same as generating an entire part from a description. The vendors know this, which is why most of them are taking the cautious path. The startups are the ones trying to skip ahead, and the results are predictably mixed.
I expect the vendor features to matter more in the long run, because they'll be integrated with the existing parametric environment. A Fusion 360 text-to-CAD feature that generates timeline operations, not just geometry, would be genuinely useful. CADAgent is a prototype of what that looks like, and even in its early state, the output is more editable than anything I've gotten from a standalone tool.
Where this is going
I'm not going to predict the future, because every time someone in CAD predicts the future they end up looking foolish within eighteen months. But I'll say what I see.
Text-to-CAD right now is where 3D printing was around 2012. The technology works in constrained cases. The output is rough. The tools are young. The hype is significantly ahead of the practical reality. But the direction is clear enough that ignoring it seems unwise.
The tools will get better at understanding engineering intent. The output quality will improve. The integration with existing CAD environments will get tighter. And at some point, probably sooner than I'm comfortable with, the prompt-to-part loop will be fast enough and accurate enough that it changes how people start new designs.
It won't replace knowing how to model. It won't replace understanding manufacturing constraints. It won't replace the judgment calls that make the difference between a part that works on screen and a part that works in the real world. But it will change the first ten minutes of a lot of design sessions, and for a field where the first ten minutes often involve staring at a blank sketch and wishing someone else would draw the boring bit, that's not nothing.
My advice, if you're a working CAD user curious about this: try Zoo.dev or CADAgent on a part you already know how to model. Don't start with your hardest project. Start with something boring. See how far the tool gets. See what it misses. Fix what it gets wrong. That twenty minutes will teach you more about text-to-CAD than any demo reel or product announcement, and you'll know exactly where it fits in your workflow, if it fits at all.