<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>TexoCAD Blog</title>
    <link>https://blog.texocad.ai</link>
    <description>Thoughts, articles, and updates on Text-to-CAD, AI CAD, CAD workflows, and product development from TexoCAD.</description>
    <language>en-us</language>
    <atom:link href="https://blog.texocad.ai/feed.xml" rel="self" type="application/rss+xml"/>
    <lastBuildDate>Thu, 16 Apr 2026 00:00:00 GMT</lastBuildDate>
    <item>
      <title>What TexoCAD means, and why we built it</title>
      <link>https://blog.texocad.ai/posts/what-is-texocad</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/what-is-texocad</guid>
      <pubDate>Thu, 16 Apr 2026 00:00:00 GMT</pubDate>
      <description>TexoCAD comes from the Latin texo, &quot;to weave&quot; or &quot;to compose,&quot; plus CAD. We built it after buying a 3D printer and realizing the hard part was not printing objects. It was turning ideas into geometry in the first place.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>texocad</category>
      <category>coding-agents</category>
<content:encoded><![CDATA[<p><strong>Quick answer:</strong> TexoCAD comes from the Latin <em>texo</em>, &quot;to weave&quot; or &quot;to compose,&quot; plus CAD. We built it after buying a 3D printer and realizing the hard part was not printing objects. It was turning ideas into geometry in the first place.</p>
<p>The 3D printer showed up in a big brown box and immediately made the room feel more ambitious than it had any right to. Fresh plastic smell, foam inserts all over the floor, that thin little scraper they always include like you're about to become a manufacturing powerhouse with one piece of stamped metal and a prayer. We set it up, leveled the bed, ran the usual first print, and spent the next hour watching a small piece of plastic appear out of nowhere. It felt a bit like cheating. In a good way.</p>
<p>Then we hit the obvious problem.</p>
<p>None of us could really design for it.</p>
<p>We could download parts. Everyone can download parts. There is an entire internet full of phone stands, cable clips, headphone hooks, oddly aggressive drawer organizers, and replacement knobs for appliances you do not even own. But the moment you want a part that fits your thing, your desk, your enclosure, your weird little prototype, the fun stops. You are no longer browsing. You are modeling. And modeling is where a lot of people discover that "I have a 3D printer" and "I can make objects" are not remotely the same sentence.</p>
<p>That was the moment TexoCAD started, even if the name came a little later.</p>
<h2>We bought a 3D printer before we knew how to design</h2>
<p>What bothered me was not that CAD was hard. Of course it was hard. Every serious tool worth using is hard in some specific, slightly insulting way. What bothered me was the shape of the bottleneck. We had a machine that could turn bits into plastic on a table in front of us, and yet the actual path from idea to object still ran through a long detour of sketches, commands, menus, constraints, broken feature trees, and the kind of beginner frustration that makes you stare at a toolbar like it personally betrayed you.</p>
<p>The printer was ready. The physical world was ready. We were the slow part.</p>
<p>That felt backwards.</p>
<p>The more I thought about it, the stranger it seemed. Software had already made ridiculous progress in every other expressive medium. If you want to write, you type. If you want to edit a photo, you have a hundred tools. If you want to compose music, there is software for that too. But if you want to describe a bracket, a housing, a mount, a jig, or some ugly little adapter that exists only because two standards refuse to get along, you still need to learn a full modeling environment before the machine will listen to you.</p>
<p>That gap is what we cared about. Not "AI" in the abstract. Not futuristic demo fluff. The simple fact that a person can know exactly what they need, be perfectly capable of describing it, and still be blocked because geometry has historically been trapped behind specialist tooling.</p>
<p>We were not world-class CAD experts at the time. What we did have was a background in software. We could code. We understood structured systems, abstractions, parameters, syntax, iteration, and the very familiar ritual of getting something 80 percent right, testing it, cursing softly, then fixing it. That mattered more than I realized at first.</p>
<p>Because once you look at CAD through a programmer's eyes, a lot of it stops looking mystical and starts looking like something else: a language problem.</p>
<p>And once it looked like a language problem, the name stopped feeling like branding and started feeling like a way to describe the whole idea.</p>
<h2>Why the name is TexoCAD</h2>
<p>The name came from two pieces, but also from that shift in perspective. It was my attempt to name the problem the way it actually felt.</p>
<p><code>CAD</code> is the easy half. Computer-aided design. Everybody in this space knows that one, and if they do not, they will about five minutes after their first bad export.</p>
<p><code>Texo</code> is the more interesting half. It comes from the Latin verb <em>texo</em>, usually glossed as "to weave," with extended senses around composing, constructing, and joining things together. I like that meaning because it gets closer to what design actually feels like when you are doing it honestly. You are not summoning finished objects from the void. You are weaving constraints together. You are composing form, function, dimensions, material limits, manufacturing reality, and whatever dumb requirement showed up in the last message from a supplier.</p>
<p>That is design. Not pure invention. Structured composition.</p>
<p>So TexoCAD, at least in my head, meant something like woven CAD, composed CAD, constructed CAD. CAD that starts from language and intent, not just clicks. CAD that treats geometry less like a priesthood and more like something you can write, inspect, revise, and reason about.</p>
<p>I also liked that <em>texo</em> sits close to "text" without pretending to literally be the same word. That mattered because the deeper idea here was always that text, code, and geometry are closer cousins than most CAD software has historically admitted.</p>
<p>If that sounds slightly philosophical, fair enough. A lot of naming is philosophy with a domain check.</p>
<p>But that question does have a practical answer. If geometry really is something you can compose, then what is the best digital form for carrying that composition? For me, the answer kept circling back to code.</p>
<h2>Code is the closest digital version of a physical thing</h2>
<p>That is the view that kept getting stronger for me as we worked on this.</p>
<p>Code is not just instructions for computers. In a lot of cases, it is the cleanest digital representation of intent we have. It says what something is, how it behaves, what can vary, what must stay fixed, and which constraints matter. Good code is not merely output. It is structure with reasons attached.</p>
<p>That is also what a well-built CAD model is supposed to be.</p>
<p>A physical object is made of atoms. A digital object is made of bits. That part is obvious. The more useful analogy is one layer up. Atoms combine into parts, surfaces, edges, tolerances, fit, motion. Bits combine into parameters, operations, relationships, and rules. In both cases, the thing only becomes useful when structure appears. A pile of atoms is not a hinge. A pile of bits is not a model.</p>
<p>The job is to arrange the building blocks so they hold together.</p>
<p>That is why code matters here so much. Code can express relationships directly. Width equals board width plus clearance. Hole spacing matches the mounting pattern. Wall thickness stays constant. Corner radius changes, and the rest of the model updates with it. If you have ever used OpenSCAD, this already feels obvious, which is part of why I wrote about <a href="/posts/openscad-ai">OpenSCAD + AI</a>. It has been quietly proving for years that geometry becomes much easier to reason about when the representation is textual and explicit.</p>
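<p>Written out, those relationships are just a handful of assignments. Here is a minimal sketch in Python rather than OpenSCAD, with hypothetical parameter names, to show the shape of the idea:</p>

```python
# Hypothetical parametric part: every dimension is derived from intent,
# so changing one input updates everything downstream.
board_width = 80.0    # mm, the thing the part must fit around
clearance = 0.4       # mm per side, printing tolerance
wall = 2.4            # mm, wall thickness stays constant
corner_radius = 3.0   # mm, free to change without breaking anything

# Width equals board width plus clearance.
inner_width = board_width + 2 * clearance
outer_width = inner_width + 2 * wall

# Hole spacing matches the mounting pattern instead of being typed twice.
mount_pitch = 60.0
hole_xs = [(outer_width - mount_pitch) / 2,
           (outer_width + mount_pitch) / 2]
```

<p>Change <code>board_width</code> and every derived dimension follows. The intent lives in the relationships, not in any single number, which is exactly what a feature tree tries to capture and a text representation makes inspectable.</p>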
<p>Traditional CAD systems can absolutely represent intent, sometimes beautifully. But they often hide that intent inside a feature tree, a proprietary format, or a click history that makes sense right up until it doesn't. You change one sketch dimension and the whole thing turns red like you have offended its ancestors. Every CAD user has been there. Usually with cold coffee nearby.</p>
<p>Code has its own failure modes, obviously. It can be ugly, brittle, overcomplicated, and written by someone who should have gone for a walk before opening the editor. But when code is the representation, you can inspect it. Diff it. Version it. Generate it. Refactor it. Ask an agent to explain it. Ask an agent to change one parameter without touching the rest. That gives you a very different kind of control from click-based geometry creation. It gives you room to work.</p>
<p>Once we started treating geometry as something closer to code, the path forward got much clearer. And once that clicked, coding agents stopped sounding like a side topic and started sounding central.</p>
<h2>Why coding agents change the whole equation</h2>
<p>That is where my opinion gets stronger.</p>
<p>I think coding agents matter enormously for CAD, not because they make engineers obsolete, and not because every prompt should become a printable part, but because they are unusually well matched to the structure of the problem. They work best when there is a formal language, a clear objective, feedback from execution, and a loop where each attempt can be inspected and improved. That description fits programming. It also fits a surprising amount of design work.</p>
<p>If you ask a coding agent to build a login form, it writes code, runs it, sees what broke, and tries again. If you ask a coding agent to build a bracket in a textual CAD system, or through a CAD API, the loop is not all that different. Define geometry. Run it. Render it. Measure it. Notice the hole is wrong. Change the parameter. Try again. The agent is not guessing in the dark. It is operating inside a structured environment with syntax, state, and feedback.</p>
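<p>That loop is worth seeing as plain control flow, because structurally it really is this simple. A deliberately toy sketch, with the generator and the measurement step left as stand-ins rather than wired to any real agent or CAD API:</p>

```python
def agent_loop(prompt, generate, build_and_measure, max_attempts=5):
    """Toy generate-check-revise loop: the structure, not a real agent.

    `generate` drafts CAD code from the prompt plus prior feedback;
    `build_and_measure` executes it and returns a list of problems.
    """
    feedback = []
    for attempt in range(max_attempts):
        code = generate(prompt, feedback)
        problems = build_and_measure(code)
        if not problems:
            # Valid geometry: return it along with how many tries it took.
            return code, attempt + 1
        # e.g. ["hole diameter is 4.8 mm, wanted 5.0"]
        feedback = problems
    raise RuntimeError("no valid geometry within attempt budget")
```

<p>Everything interesting happens inside the two callables, but the outer loop is why agents suit this domain: each pass produces an artifact that can be executed, measured, and criticized before the next pass.</p>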
<p>That matters because most of the pain in early-stage CAD is not deep geometric genius. It is translation. You know what you want. The software does not. So you spend time converting intent into operations. Sketch here. Constrain that. Offset this face. Add that fillet. Move the hole 3 mm because of course it intersects now. A coding agent is good at exactly this kind of translation work when the underlying system is exposed in a way it can manipulate.</p>
<p>This is also why I do not think the important question is "Will AI replace CAD designers?" That question gets asked because people enjoy drama and because LinkedIn would collapse without fake civilizational panic. The more useful question is: which parts of CAD are fundamentally authoring problems, and which parts are judgment problems?</p>
<p>Authoring is where agents help first. Generating first drafts. Converting a verbal description into geometry. Wiring up parameters. Producing several variations quickly. Cleaning up repetitive modeling chores. Working through the boring parts without getting bored, which is one of the computer's few truly admirable qualities.</p>
<p>Judgment is different. Deciding whether the part is manufacturable. Deciding whether a snap fit will actually survive use. Deciding whether the tolerance stack makes sense, whether the material choice is stupid, whether the mounting location guarantees a future service headache. That remains human territory for quite a while, and honestly I am fine with that. I have seen too many parts that were technically valid and practically idiotic.</p>
<p>But once you separate authoring from judgment, a lot of the noise falls away. Of course coding agents can help with CAD. CAD contains a huge amount of authoring work. Structured, repetitive, explicit, revisable authoring work. That is exactly the sort of terrain where software has always done its best work.</p>
<p>And that is the final step in the chain from the printer on the desk to the product itself. The printer exposed the bottleneck. The name gave the bottleneck a shape. Code suggested a better representation. Agents made that representation feel operational.</p>
<h2>Why we built TexoCAD</h2>
<p>TexoCAD came out of that entire chain of reasoning. We had a machine that could make physical things, and the main obstacle was still the interface between intention and geometry.</p>
<p>We did not want that interface to stay reserved for people who had already paid their dues to traditional CAD. Those tools matter. CAD expertise matters. Manufacturing knowledge matters even more. But there should also be a better path from "I know what I need" to "here is the model."</p>
<p>That is the bet.</p>
<p>Not that text replaces CAD.</p>
<p>Not that agents replace designers.</p>
<p>Not that physical reality becomes easy, because it does not. Physical reality is rude, dimensional, and unimpressed by your prompt.</p>
<p>The bet is that code, text, and agents can compress the distance between thought and object. And once that distance gets smaller, a lot more people can build useful things.</p>
<p>That is what the name means to me. Texo plus CAD. Weaving intent into geometry. Composing something digital that can survive contact with the physical world. Taking the machine on the desk seriously enough to admit that the real bottleneck was never the nozzle. It was the translation layer in front of it.</p>
<p>Now that we have coding agents, that layer looks a lot less permanent than it used to.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Vondy AI CAD Generator: quick look for beginners</title>
      <link>https://blog.texocad.ai/posts/vondy-ai-cad-generator</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/vondy-ai-cad-generator</guid>
      <pubDate>Mon, 13 Apr 2026 00:00:00 GMT</pubDate>
      <description>Vondy offers a browser-based AI CAD generator aimed at beginners. It&apos;s accessible, limited, and useful for exactly one thing: getting a non-CAD person to a simple DXF or STL fast.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>vondy</category>
      <category>beginners</category>
<content:encoded><![CDATA[<p><strong>Quick answer:</strong> Vondy&apos;s AI CAD generator is a browser-based tool that creates simple 2D profiles and 3D shapes from text prompts. Output is mostly flat DXF, with occasional basic mesh (STL/OBJ), never B-Rep. Best for beginners who need simple shapes quickly without installing CAD software. Not suitable for engineering work, manufacturing, or anything requiring dimensional accuracy or parametric editing.</p>
<p>Vondy's AI CAD generator produces simple geometry from text prompts in a browser, and the output is basic enough that calling it "CAD" is generous. It's a mesh generator with a text box. For absolute beginners who need a shape fast and don't care about editing it afterward, it does the job. For anyone who has opened a real CAD tool, it's going to feel like going from a kitchen knife to a butter knife. I tested it on a rainy afternoon after running the same prompts through Zoo.dev and AdamCAD, mostly to be thorough for the <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> comparison, and the results confirmed what I expected: accessible, limited, and fine for what it is.</p>
<p>My test bracket, the flanged rectangle with four M5 holes that I've used to evaluate every tool in this space, came back from Vondy as a recognizably bracket-shaped DXF file. Flat. Two-dimensional. No 3D. No extrusion. No holes with actual depth. Just a profile outline that you could, in theory, laser-cut. It was accurate enough in the outline, roughly the right proportions, but it was a flat drawing, not a solid model. For the other test prompts, Vondy produced similar 2D output. The enclosure prompt returned what looked like an unfolded box pattern, which would have been interesting if I'd asked for a sheet metal flat pattern, but I hadn't.</p>
<h2>What Vondy actually offers</h2>
<p>Vondy is a platform that hosts various AI-powered tools, and the CAD generator is one of many. It runs entirely in the browser. No software to install. No account required for basic use. You type a description, the AI generates output, and you download the result. The barrier to entry is about as low as it gets.</p>
<p>The output format is primarily DXF for 2D profiles. Some prompts produce basic 3D mesh output in STL or OBJ, but in my testing the 3D generation was inconsistent. Two out of five prompts produced 3D geometry. The other three produced flat profiles or failed to generate anything useful. The 3D output that did appear was low-polygon mesh, the kind of geometry that looks like it was generated by an AI that learned shapes from video game assets rather than engineering models.</p>
<p>There's no STEP export. No B-Rep output. No parametric editing. No feature tree. No dimension control beyond what you include in the prompt, and even then, the AI's interpretation of dimensions is approximate at best. My 80mm by 50mm plate came back at proportions that suggested 80mm by 50mm but could have been 85mm by 48mm for all the measurement precision it offered. With no STEP export, there's no solid model to pull into Fusion 360 and verify.</p>
<p>The interface is clean and simple, which is both its strength and its limitation. There are no settings to configure, no output quality sliders, no format options beyond what the tool decides to give you. For a beginner, this means less confusion. For anyone who wants control over the output, it means less control.</p>
<h2>Who this is actually for</h2>
<p>Vondy makes sense for one specific audience: people who have never used CAD software, don't want to learn CAD software, and need a simple shape for a non-critical purpose. A maker who wants a rough profile for a laser-cut bracket. A student who needs a basic 3D shape for a presentation. A hobbyist who wants to see what their idea might look like in three dimensions before deciding whether to learn Fusion 360.</p>
<p>For <a href="/posts/text-to-cad-for-beginners">text-to-CAD beginners</a>, Vondy is the lowest-friction entry point. Lower than Zoo.dev (which requires an account), lower than CADAgent (which requires Fusion 360 and an API key), lower than anything else in the space. The trade-off is that the output is proportionally less useful.</p>
<p>I showed Vondy to a friend who runs a small Etsy shop selling laser-cut decorations. She'd been paying someone on Fiverr to draw DXF profiles for her. Vondy generated profiles that were close enough to her usual designs that she started using it for initial concepts, then cleaning up the DXF files in Inkscape before sending them to her laser cutter. For her, Vondy saved maybe thirty minutes per design and a few dollars per Fiverr order. That's a real use case, even if it's a modest one.</p>
<h2>What it can't do</h2>
<p>The list of what Vondy can't do is longer than what it can, and if you're coming from any engineering or manufacturing background, these limitations are disqualifying.</p>
<p>No engineering-grade output. The geometry is approximate. Dimensions are suggestions, not specifications. You cannot use Vondy output for anything that requires dimensional accuracy, tolerance control, or mating with other parts.</p>
<p>No B-Rep geometry. The output is mesh (triangles) or 2D vectors (DXF). You cannot select a face and add a fillet. You cannot shell a body. You cannot do anything that requires a real solid model. The <a href="/posts/text-to-cad-vs-text-to-3d">text-to-CAD vs text-to-3D</a> distinction is relevant here: Vondy is firmly on the text-to-2D/3D side, not the text-to-CAD side, despite the name.</p>
<p>No parametric editing. What you get is what you get. If the dimensions are wrong, you regenerate and hope the next attempt is closer. There's no way to adjust a hole diameter or move a feature after generation. AdamCAD at least gives you parametric sliders. Zoo.dev gives you STEP files you can edit in real CAD. Vondy gives you a finished artifact with no edit path.</p>
<p>No manufacturing awareness. The output doesn't account for wall thickness, draft angles, overhang limits, toolpath constraints, or any other manufacturing reality. For 3D printing simple shapes, this might not matter. For anything beyond that, it matters a lot.</p>
<p>No complex geometry. Multi-body parts, assemblies, internal features, snap fits, threaded holes, gear teeth, splines. None of these are in scope. Vondy handles basic prismatic shapes and simple profiles. Ask for anything with geometric complexity and the output becomes unreliable or nonsensical.</p>
<h2>Comparison with serious tools</h2>
<p>The gap between Vondy and the dedicated text-to-CAD tools is significant, and it's worth spelling out because the category label can be misleading.</p>
<p>Zoo.dev generates real B-Rep STEP files with selectable faces, measurable edges, and geometry you can import into any professional CAD tool and edit. The output quality for simple parts is good enough that I've used Zoo-generated brackets in actual prototype assemblies. Zoo's free tier is accessible to beginners, and the output is genuinely useful for engineering work.</p>
<p>AdamCAD generates parametric STL with dimension sliders, letting you iterate on proportions after generation. The output isn't B-Rep, but the parametric controls give you something Vondy completely lacks: the ability to refine without regenerating.</p>
<p>CADAgent generates models inside Fusion 360 with full feature history. The output is indistinguishable from a hand-modeled part because it is a hand-modeled part, just modeled by an AI rather than a human. It requires Fusion 360 and an API key, so the barrier to entry is higher, but the output quality is in a different league.</p>
<p>Vondy generates flat DXF profiles and basic mesh shapes. The output is not editable, not dimensionally reliable, and not suitable for engineering. It's the Polaroid to Zoo.dev's DSLR. Both take pictures. One of them you might frame.</p>
<p>The <a href="/posts/text-to-cad-tools-comparison">text-to-CAD tools comparison</a> has a more detailed side-by-side if you want to see where each tool lands on specific test prompts.</p>
<h2>Honest recommendation</h2>
<p>Use Vondy if you've never touched CAD software and you want to see a shape from a text description in under a minute. Use it for quick concept visualization. Use it for rough 2D profiles you'll clean up in another tool. Use it if the alternative is not having any geometry at all and you don't want to spend time learning a real CAD tool first.</p>
<p>Don't use Vondy if you need dimensional accuracy. Don't use it if you need to edit the output. Don't use it if the geometry is going to be manufactured, mated with other parts, or evaluated by anyone who cares about tolerances. Don't use it if you have access to Zoo.dev's free tier, which produces better output with only slightly more effort.</p>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the full range of tools from beginner-friendly to engineering-grade. Vondy sits at the far beginner end of that spectrum. It does one thing, it does it simply, and it does it at a quality level that matches the zero-effort input. For some people, on some afternoons, that's enough. For the work I do, it's a curiosity I tested once and haven't opened since. The bracket it drew me wasn't bad for a flat outline. It just wasn't a bracket. It was a picture of the idea of a bracket, and in engineering, those are not the same thing.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Does text-to-CAD work offline?</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-offline</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-offline</guid>
      <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
      <description>Almost no text-to-CAD tool works offline. The models are too large, the inference too expensive, and the vendors need their API meters running. Here&apos;s the full picture.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>offline</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> No mainstream text-to-CAD tool works offline in 2026. Zoo.dev, AdamCAD, and CADScribe all require cloud API access. The AI models are too large for local inference on consumer hardware. The closest offline option is running OpenSCAD with a local LLM, which generates code rather than direct geometry. True offline text-to-CAD doesn&apos;t exist yet.</p>
<p>No mainstream text-to-CAD tool works offline in 2026. Not Zoo.dev, not AdamCAD, not CADScribe, not any of the browser-based generators. Every one of them sends your prompt to a cloud server and waits for geometry to come back. If your internet drops, your text-to-CAD workflow stops. I found this out the hard way during a site visit last fall, sitting in a manufacturing client's conference room with decent coffee and terrible Wi-Fi, trying to generate a quick bracket concept to show during the meeting. The prompt sat there spinning while everyone watched. Eventually I just sketched it on a notepad like it was 2005.</p>
<p>The question comes up a lot, especially from people working in environments where internet access is restricted, unreliable, or forbidden. Factory floors without public Wi-Fi. Classified facilities with air-gapped networks. Field offices in places where the nearest reliable connection is a forty-minute drive. Remote workshops where the satellite internet works between weather events. For all of these, the answer is the same: text-to-CAD in its current form is a cloud service, and cloud services need clouds.</p>
<h2>Why offline doesn't work yet</h2>
<p>The core problem is model size. The AI models that generate CAD geometry from text prompts are large neural networks. Zoo.dev's KittyCAD system runs on GPU clusters with significant compute resources. The models that power AdamCAD and CADScribe run on similar cloud infrastructure. These aren't lightweight algorithms you can run on a laptop CPU during a flight.</p>
<p>A typical large language model capable of generating competent code (which is essentially what text-to-CAD does under the hood, generating sequences of geometric operations from natural language) has tens of billions of parameters. Running a 70-billion-parameter model locally requires at minimum a workstation-class GPU with 40GB or more of VRAM, or multiple consumer GPUs. Running the smaller models that fit on a single consumer GPU (7B to 13B parameters) produces noticeably worse results, because smaller models are less capable at the complex reasoning needed to turn "flanged bracket with four M5 holes" into correct geometry.</p>
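<p>The arithmetic behind those VRAM numbers is worth making explicit. Weight memory is roughly parameter count times bits per parameter; real usage adds KV cache, activations, and runtime overhead on top of this floor:</p>

```python
def weight_memory_gb(params_billions, bits_per_param):
    """Rough VRAM needed just for model weights, in GiB.

    Real usage is higher: KV cache, activations, and runtime
    overhead typically add several GiB on top of this floor.
    """
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 2**30

# 70B at 4-bit quantization: ~32.6 GiB of weights alone, which is
# why 40GB-class cards are the practical floor once overhead lands.
print(round(weight_memory_gb(70, 4), 1))
# 13B at 4-bit: ~6.1 GiB, fits on an 8GB consumer GPU with room to spare.
print(round(weight_memory_gb(13, 4), 1))
```

<p>Run the numbers for a 70B model at full 16-bit precision and you get roughly 130 GiB, which is why that tier lives in server rooms rather than under desks.</p>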
<p>Then there's the geometry kernel. Zoo.dev's KittyCAD is a proprietary GPU-native kernel. It's not something you can download and run locally. The kernel itself is part of the cloud service. Even if you could run the AI model locally, you'd still need a geometry kernel to execute the operations and produce a valid solid, and the best one available is a cloud service.</p>
<h2>What each tool requires</h2>
<p>Zoo.dev requires an internet connection and an API key. All generation happens server-side. The web interface and the <a href="/posts/text-to-cad-api">Python SDK</a> both connect to Zoo's cloud infrastructure. No connection, no geometry. There's no offline mode, no local caching of the generation capability, and no announced plans for one.</p>
<p>AdamCAD runs in the browser. The generation happens on AdamCAD's servers. You need an active internet connection for every prompt. The parametric sliders work in the browser after generation, but the initial creation is fully cloud-dependent.</p>
<p>CADScribe is cloud-based. Same pattern: prompt goes up, geometry comes down, nothing happens without a connection.</p>
<p>Vondy, HP's text-to-3D tools, and every other browser-based generator follow the same architecture. The browser is just a thin client for a remote service.</p>
<p>CADAgent for Fusion 360 is an interesting partial exception. The add-in itself runs locally inside Fusion, and the geometry creation happens through Fusion's own local kernel. But the AI inference, the part where your text prompt gets turned into a sequence of Fusion operations, requires an Anthropic API call. So you still need internet for the AI reasoning step, even though the geometry creation step happens on your machine.</p>
<h2>The OpenSCAD workaround</h2>
<p>The closest thing to offline text-to-CAD that actually works today is running <a href="/posts/openscad-ai">OpenSCAD</a> with a local LLM. The setup: install a local language model using Ollama or llama.cpp, run it on your machine, feed it prompts, and have it generate OpenSCAD scripts. Then run those scripts through OpenSCAD, which is free and runs locally, to produce 3D geometry. Everything stays on your machine. No network call leaves your system.</p>
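<p>Wired together, the whole pipeline is a few subprocess calls. A sketch assuming <code>ollama</code> and <code>openscad</code> are on your PATH; the model tag, prompt handling, and file names are placeholders, not a packaged tool:</p>

```python
import re
import subprocess

def extract_scad(llm_output: str) -> str:
    """Pull OpenSCAD source out of an LLM reply, stripping an optional
    markdown fence; local models often wrap code in ``` fences."""
    m = re.search(r"```(?:openscad|scad)?\n(.*?)```", llm_output, re.DOTALL)
    return (m.group(1) if m else llm_output).strip()

def run_pipeline(prompt: str, scad_path="part.scad", stl_path="part.stl"):
    # 1. Ask the local model for OpenSCAD code (no network call involved).
    reply = subprocess.run(
        ["ollama", "run", "llama3:70b", prompt],
        capture_output=True, text=True, check=True,
    ).stdout
    # 2. Save the script, then 3. render it to STL with local OpenSCAD.
    with open(scad_path, "w") as f:
        f.write(extract_scad(reply))
    subprocess.run(["openscad", "-o", stl_path, scad_path], check=True)
```

<p>The fence-stripping step looks trivial but matters in practice: local models are inconsistent about output formatting, and feeding a markdown wrapper to OpenSCAD fails on line one.</p>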
<p>I've tested this on a workstation with an RTX 4090 running Llama 3 70B quantized. The experience is usable for simple parts. A rectangular plate with holes, a basic bracket, a cylindrical spacer. The model generates syntactically correct OpenSCAD about 70% of the time for simple geometry. For more complex prompts, the failure rate climbs quickly, and you end up debugging OpenSCAD code, which is its own kind of afternoon.</p>
<p>The quality gap between this local setup and what Zoo.dev produces is significant. The local model doesn't have the geometric reasoning depth of a purpose-built text-to-CAD system. It's generating code, not geometry, and the code generation quality of a local 70B model is noticeably below what you get from cloud models like Claude or GPT-4. The output is STL, not STEP, because that's what OpenSCAD produces. And the geometry is limited to what OpenSCAD's CSG (Constructive Solid Geometry) approach can express, which excludes freeform surfaces, fillets with variable radius, and other features you'd take for granted in Fusion 360.</p>
<p>It's a workaround, not a solution. But if you absolutely need text-to-geometry on an air-gapped machine, it's the most practical option that exists today. The <a href="/posts/text-to-cad-self-hosted">self-hosted text-to-CAD</a> post covers the setup and alternatives in more detail.</p>
<h2>Hardware for local inference</h2>
<p>If you're thinking about running AI models locally for offline text-to-CAD, here's what the hardware picture looks like.</p>
<p>For a small model (7B to 13B parameters, quantized to 4-bit): a consumer GPU with 8-16GB VRAM. An RTX 3060 or 4060 can handle this. Generation is fast, quality is poor. These models struggle with even moderately complex geometry descriptions.</p>
<p>For a medium model (30B to 70B parameters, quantized): you need 24-48GB of VRAM. An RTX 3090, 4090, or an A6000 workstation GPU. Generation is slower (maybe 20-60 seconds for a response), quality is acceptable for simple parts.</p>
<p>For a large model (70B+ at full precision): you're looking at multiple GPUs or a professional setup with A100/H100 cards. This is server-room hardware, not desktop hardware. The quality approaches what cloud models offer, but the cost and complexity approach "just buy a server" territory.</p>
<p>Apple Silicon Macs with large unified memory (M2 Ultra, M3 Max/Ultra with 96GB or more) can run quantized 70B models through llama.cpp, using system memory instead of VRAM. It's slower than dedicated GPU inference but it works, and for a single-user offline setup it's surprisingly practical. I've tested it on an M3 Max with 64GB and the results for simple OpenSCAD generation are tolerable, if you're patient with the generation speed.</p>
<p>The bottom line: you can run a local AI model for text-to-CAD on high-end consumer hardware, but the output quality scales directly with the model size, and the model size scales directly with the hardware cost. There's no free lunch here.</p>
<h2>When offline text-to-CAD might become practical</h2>
<p>Three things need to happen for real offline text-to-CAD to work.</p>
<p>First, local AI models need to get better at code generation in the 7B to 30B parameter range, so that a single consumer GPU can run a model capable of generating reliable CAD scripts. This is happening, slowly. Each generation of open-source models improves, and specialized fine-tuning for CAD code generation could accelerate it.</p>
<p>Second, an open-source geometry kernel needs to be integrated into the local pipeline so that the output is B-Rep STEP files, not just OpenSCAD STL. The pieces exist: OpenCascade is open source, build123d wraps it in Python, and projects like CAD Agent have demonstrated the architecture. What's missing is a packaged, reliable system that connects a local LLM to a B-Rep kernel with error correction and visual feedback, all running without a network connection.</p>
<p>Third, the hardware cost needs to come down. When a $1,500 workstation GPU can run a model that generates reliable geometry from text, offline text-to-CAD becomes practical for individual engineers and small firms. At current trajectories, that might be two to three years away, but hardware timelines are hard to predict.</p>
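<p>The second piece, a local model wired to a geometry kernel with error correction, is architecturally simple even if nobody has packaged it well. Here's a minimal sketch of the feedback loop, with the model and the renderer as stand-in callables, since no particular local stack is assumed:</p>

```python
def generate_with_repair(prompt, llm, render, max_attempts=3):
    """Feedback loop: ask the model for CAD code, try to render it, and
    feed any renderer error back into the next attempt. `llm` and `render`
    are hypothetical stand-ins for a local model and a geometry kernel
    (e.g. OpenSCAD or build123d), injected as plain callables."""
    feedback = ""
    for _ in range(max_attempts):
        code = llm(prompt + feedback)
        ok, result = render(code)  # (True, geometry) or (False, error text)
        if ok:
            return code, result
        feedback = f"\n\nPrevious attempt failed with: {result}\nFix the code."
    raise RuntimeError("no valid geometry after retries")
```

The visual-feedback variant adds a render-to-image step and asks the model to critique its own output, but the control flow is the same: generate, check, retry with the error in context.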
<p>Until all three of those pieces land, offline text-to-CAD remains a compromise: possible in a limited way, useful for simple parts, and significantly worse than what you get from cloud tools. If your work demands offline operation and cloud-quality results, you're stuck. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the full range of what's available today, and the <a href="/posts/text-to-cad-data-safety">data safety</a> post explains why some users need offline options in the first place.</p>
<p>My honest take: if you need offline CAD, use offline CAD. Fusion 360 with a downloaded cache, FreeCAD, SolidWorks on a laptop. Model the part yourself. It's not as fast as typing a prompt, but it works in a conference room with bad Wi-Fi, on a factory floor, or in a classified facility. Text-to-CAD is a cloud convenience, and treating it as anything else in 2026 means planning around a technology that isn't ready to meet you where you actually work.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Does text-to-CAD understand tolerances?</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-tolerances</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-tolerances</guid>
      <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
      <description>Tolerances require GD&amp;T knowledge, fit specifications, and manufacturing process awareness. Text-to-CAD tools produce nominal geometry and call it a day.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>tolerances</category>
      <category>gd-and-t</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> No. Text-to-CAD tools generate nominal geometry without tolerances, GD&amp;T callouts, or fit specifications. The generated models have no tolerance information, no datum references, and no understanding of manufacturing process capabilities. Tolerance specification remains entirely manual work that requires engineering knowledge AI tools don&apos;t have.</p>
<p>Text-to-CAD tools do not understand tolerances. They generate nominal geometry: a 10mm hole is exactly 10.000mm in the model, with no tolerance band, no fit class, no surface finish callout, and no datum reference. The model has dimensions. It has no engineering intent about precision. I learned to stop expecting this after the third time I opened a STEP file from Zoo.dev, measured a bore that was supposed to accept a bearing, and found it at the nominal diameter with no fit specification. A bearing bore without an H7 tolerance is just a round hole with ambitions.</p>
<p>This isn't a bug that will get patched. It's a fundamental gap between what text-to-CAD produces (shapes) and what manufacturing requires (specifications). The gap is worth understanding properly, because tolerances are where CAD models stop being geometry and start being engineering.</p>
<h2>What tolerances actually are</h2>
<p>For anyone outside manufacturing, tolerances sound like a technicality. They're not. They're the language that tells a machine shop how precisely to make each feature.</p>
<p>A 10mm hole means nothing to a machinist without a tolerance. Does it need to be 10.000mm +/- 0.1mm? That's a loose hole, probably a clearance hole for a bolt, drilled and done. Does it need to be 10.000mm +0.015/-0.000mm? That's an H7 tolerance, a bearing bore, which needs to be reamed or bored to a specific finish. The difference between those two holes is the difference between a 30-second drilling operation and a multi-step process with inspection.</p>
<p>Dimensional tolerances specify how far a dimension can vary from nominal. Geometric tolerances (GD&#x26;T) specify how much a feature can deviate in form, orientation, and position. A hole can be the right diameter but in the wrong position. It can be the right diameter and the right position but out of round. GD&#x26;T captures all of these. It's a formal language defined by ASME Y14.5 (in the US) or ISO GPS standards (internationally), and it's how engineers communicate manufacturing precision.</p>
<p>The tolerance system exists because no manufacturing process is perfect. A CNC mill can hold +/- 0.025mm on a good day with proper tooling. A desktop 3D printer might hold +/- 0.2mm on simple features. Die casting holds +/- 0.1mm to 0.5mm depending on the size and complexity. Every process has a capability range, and tolerances must sit within what the chosen process can achieve. Specifying a tolerance tighter than the process can hold means expensive operations, rejected parts, or both.</p>
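<p>The sanity check this implies fits in a few lines. The capability figures are the illustrative ones from above; real values depend on the machine, the tooling, and the feature size:</p>

```python
# Illustrative +/- capability bands (mm), taken from the figures above.
# Real values vary by machine, tooling, and feature size.
PROCESS_CAPABILITY_MM = {
    "cnc_milling": 0.025,
    "fdm_printing": 0.2,
    "die_casting": 0.1,   # best case; up to 0.5 on large, complex features
}

def tolerance_is_achievable(process: str, tolerance_mm: float) -> bool:
    """True if the requested +/- tolerance is no tighter than what the
    process can typically hold."""
    return tolerance_mm >= PROCESS_CAPABILITY_MM[process]

print(tolerance_is_achievable("fdm_printing", 0.1))  # -> False: pick another process
```

This is the judgment an engineer makes implicitly when assigning every callout, and it's exactly the check that nominal-only AI output never triggers.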
<h2>What text-to-CAD actually outputs</h2>
<p>Every text-to-CAD tool I've tested outputs geometry with nominal dimensions and nothing else. Here's what's missing.</p>
<p>No dimensional tolerances. A hole in the model is 10.000mm. Whether it's a clearance hole (needs to be 10.5mm for M10 clearance), a location fit (10.015 to 10.000mm, H7), or a press fit (9.985 to 9.972mm, P7) is nowhere in the data. The STEP file contains a cylindrical surface at exactly 10mm. That's it.</p>
<p>No geometric tolerances. There's no flatness callout on mating surfaces. No perpendicularity requirement on a bore relative to a face. No position tolerance on a bolt pattern. No concentricity between a bore and an outer diameter. None of the GD&#x26;T symbols that a manufacturing engineer reads to understand what matters on the part.</p>
<p>No datum references. GD&#x26;T requires datums: reference features that the part's tolerances are measured from. Datum A might be the mounting face. Datum B might be a locating bore. Datum C might be a slot. The datum scheme defines how the part is fixtured for inspection and how all other features are controlled relative to the functional references. Text-to-CAD models have no datum scheme because they have no GD&#x26;T.</p>
<p>No surface finish specifications. The model surfaces are mathematically perfect. In reality, every manufactured surface has roughness. Ra 0.8 for a bearing surface. Ra 3.2 for a general machined face. Ra 12.5 for a rough cut. These specifications affect function, cost, and manufacturing method. They're absent from AI-generated models.</p>
<p>No fit specifications. When a shaft goes into a hole, the fit class determines whether they slide freely (clearance fit), require gentle pressure (transition fit), or need to be pressed together (interference fit). Fit specifications depend on tolerance grades on both the shaft and the hole. Text-to-CAD generates a shaft and a hole at nominal. Whether they fit together and how is undefined.</p>
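<p>To make the fit question concrete, here's the arithmetic a fit specification encodes. The H7 hole band follows standard limits; the g6 shaft limits are my reading of the ISO 286 tables, so verify them against a fit chart before trusting them:</p>

```python
def fit_extremes(hole_min, hole_max, shaft_min, shaft_max):
    """Worst-case gaps for a hole/shaft pair, in the same units as the
    inputs. Positive = clearance, negative = interference."""
    return (hole_min - shaft_max,   # tightest combination
            hole_max - shaft_min)   # loosest combination

# 12 H7 hole (12.000-12.018) against a 12 g6 sliding shaft
# (11.983-11.994) -- the g6 limits are my reading of ISO 286, check them.
tight, loose = fit_extremes(12.000, 12.018, 11.983, 11.994)
print(round(tight * 1000), round(loose * 1000))  # -> 6 35 (microns of clearance)
```

Both extremes stay positive, so every in-tolerance pair slides. Swap in an interference shaft band and the tight end goes negative. Text-to-CAD generates both parts at exactly 12.000, which answers none of this.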
<h2>The gap between nominal and manufacturable</h2>
<p>The <a href="/posts/text-to-cad-dimensional-accuracy">text-to-CAD dimensional accuracy</a> question is about whether the AI hits the dimensions you asked for. The tolerance question is different: even if the nominal dimensions are perfect, the model still isn't manufacturing-ready because it carries no information about how precisely those dimensions need to be held.</p>
<p>Consider a simple bracket with two 6mm mounting holes and a 12mm bore for a pin. The STEP file from a text-to-CAD tool gives you three cylindrical features at their nominal diameters. For manufacturing, you need:</p>
<p>The mounting holes specified as 6.6mm clearance for M6 bolts, with a position tolerance of 0.25mm relative to the datum scheme, because the bolt pattern needs to align with the mating part.</p>
<p>The pin bore specified as 12mm H7 (12.000 to 12.018mm) with a surface finish of Ra 1.6 and a perpendicularity tolerance of 0.02mm relative to the mounting face, because the pin needs to fit properly and the mechanism needs to move smoothly.</p>
<p>The mounting face specified as a primary datum with a flatness tolerance of 0.05mm, because the bracket seats against a machined surface and needs full contact.</p>
<p>That's a simple bracket. Three features, maybe a dozen tolerance callouts. None of that information exists in the AI-generated model. All of it needs to be added by a human who understands the function of each feature, the manufacturing process, and the inspection method.</p>
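<p>For a sense of what that missing annotation layer would even look like as data, here's a hypothetical PMI-style structure for the bracket above. The schema is invented for illustration; no current tool emits anything like it:</p>

```python
from dataclasses import dataclass

@dataclass
class Callout:
    feature: str
    spec: str

# Hypothetical annotation layer for the bracket discussed above --
# exactly the data that is absent from an AI-generated STEP file.
BRACKET_CALLOUTS = [
    Callout("mounting holes", "2x dia 6.6 clearance for M6, position 0.25 to datum scheme"),
    Callout("pin bore", "dia 12 H7 (+0.018/0), Ra 1.6, perpendicularity 0.02 to A"),
    Callout("mounting face", "primary datum A, flatness 0.05"),
]
```

Three features, three structured specifications, each one derived from function rather than shape. That derivation step is the part no prompt contains.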
<h2>Why AI can't learn tolerances from training data</h2>
<p>This is the part where people say "but won't the AI learn this eventually?" Maybe. But the obstacles are structural, not just a matter of more training data.</p>
<p>Tolerances live in drawings, not in 3D models. The STEP files and native CAD files that text-to-CAD models train on contain geometry. Tolerances are typically specified on 2D engineering drawings, in separate drawing files that reference the 3D model but contain the annotation layer. The training data for geometry and the training data for tolerances are in different file formats, often in different systems, and rarely linked in a way that's useful for machine learning.</p>
<p>Tolerances depend on function, not geometry. Two identical-looking holes can have wildly different tolerances depending on whether one is a bearing bore and the other is a clearance hole. The tolerance isn't determined by the shape. It's determined by what the shape does in the assembly, which requires understanding the part's function in context. A text prompt that says "12mm hole" contains no functional intent. Is it a press fit for a dowel pin? A clearance hole for a bolt? A bearing bore? The tolerance depends on the answer, and the prompt rarely provides it.</p>
<p>Tolerance specification requires process knowledge. An engineer choosing tolerances considers the manufacturing process capability, the inspection method, the cost impact of tighter tolerances, and the functional requirements of the mating parts. A surface that seals against an O-ring needs different roughness than a surface that's hidden inside the assembly. This is judgment based on experience, and it's exactly the kind of knowledge that doesn't encode well in CAD geometry training data.</p>
<p>The <a href="/posts/text-to-cad-limitations">text-to-CAD limitations</a> cover this at a higher level. Tolerances are one specific instance of the broader pattern: text-to-CAD produces geometry without engineering metadata, and for manufacturing, the metadata is often more important than the shape.</p>
<h2>What this means for your workflow</h2>
<p>If you're using text-to-CAD to generate parts for <a href="/posts/text-to-cad-for-manufacturing">manufacturing</a>, here's the practical impact.</p>
<p>Every AI-generated model needs a tolerance review. Open the model in your CAD tool. Identify every feature that has a functional requirement (mating surfaces, bearing bores, bolt holes, seal grooves, alignment features). Add the appropriate tolerances based on the part's function and your manufacturing process. This is engineering work, and it takes longer than generating the geometry.</p>
<p>Simple parts with loose requirements are less affected. A bracket that mounts with clearance bolts and has no precision interfaces? The general tolerances on the drawing (something like ISO 2768-m) might be sufficient, and you can add those as a note without individual feature callouts. For parts where nothing needs to be tight, the tolerance gap is a minor inconvenience.</p>
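<p>General tolerances are simple enough to express as a lookup. The values below are my reading of the ISO 2768-1 medium-class table for linear dimensions; check the standard before putting them on a drawing:</p>

```python
# ISO 2768-m (medium class) linear-dimension tolerances, as I read the
# ISO 2768-1 table -- verify against the standard before relying on it.
# Entries: (range_min_mm, range_max_mm, plus_minus_mm)
ISO_2768_M = [
    (0.5, 3, 0.1), (3, 6, 0.1), (6, 30, 0.2), (30, 120, 0.3),
    (120, 400, 0.5), (400, 1000, 0.8), (1000, 2000, 1.2), (2000, 4000, 2.0),
]

def general_tolerance_mm(dim_mm: float) -> float:
    """Default +/- tolerance for an untoleranced dimension under 2768-m."""
    for lo, hi, tol in ISO_2768_M:
        if lo <= dim_mm <= hi:
            return tol
    raise ValueError("dimension outside the 2768-m table")

print(general_tolerance_mm(25))   # a 25mm dimension -> 0.2
```

One note on the drawing and the whole part is covered, which is why loose parts barely feel the tolerance gap.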
<p>Precision parts are where the gap becomes expensive. If the part has bearing bores, seal grooves, press fits, location pins, or any feature where the tolerance drives the function, the AI-generated geometry is only a starting shape. The tolerance specification, which determines whether the part actually works, is 100% human work.</p>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> recommends using AI generation for concept and early prototyping, then rebuilding in proper CAD for production. The tolerance issue is one of the biggest reasons for that recommendation. A concept prototype printed on an FDM printer doesn't need GD&#x26;T. A production part going to a machine shop does. The transition from AI-generated geometry to production-ready model is where the tolerance work happens, and it happens entirely by hand.</p>
<h2>The drawing connection</h2>
<p>Tolerances and engineering drawings are inseparable. The tolerance information lives on the drawing, applied to the 3D model through annotation planes, leader lines, feature control frames, and dimension callouts. Without a drawing, there's nowhere for the tolerances to live in a communicable format.</p>
<p>Text-to-CAD doesn't generate drawings either, and I cover that in <a href="/posts/text-to-cad-drawings">a separate post</a>. But the connection matters here: even if a future text-to-CAD tool could generate tolerances and embed them in the 3D model as PMI (Product Manufacturing Information), the manufacturing world still largely runs on 2D drawings. The tolerance data needs to end up on a drawing eventually, and that drawing is created by an engineer, not an AI.</p>
<h2>The honest assessment</h2>
<p>Tolerances are one of those topics where the gap between "a shape on screen" and "a specification for manufacturing" is starkest. Text-to-CAD gives you shapes. Manufacturing needs specifications. The distance between the two is measured in engineering hours, and no amount of prompt writing reduces it.</p>
<p>I don't expect this to change soon. Tolerances require functional understanding that current AI architectures don't have and that current training data doesn't encode. The most likely near-term improvement is AI tools that suggest tolerances based on detected feature types (it looks like a bearing bore, here's a typical tolerance for that), but even that requires the AI to correctly identify the feature's function, which circles back to the same problem.</p>
<p>For now, tolerances are your job. The AI gives you the shape. You give it the meaning. And if that sounds like most of the engineering work is still on you, you're right. A CAD model without tolerances is like a recipe without temperatures. It tells you what to make. It doesn't tell you how carefully to make it. That's the part where the coffee goes cold and the engineering actually happens, and it's the part the AI hasn't touched.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Can text-to-CAD generate drawings?</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-drawings</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-drawings</guid>
      <pubDate>Wed, 08 Apr 2026 00:00:00 GMT</pubDate>
      <description>Engineering drawings require views, dimensions, tolerances, notes, title blocks, and revision history. Text-to-CAD can&apos;t generate any of that. The 3D model is only half the deliverable.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>drawings</category>
      <category>documentation</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> No. Text-to-CAD tools generate 3D geometry only, not engineering drawings. 2D drawings require projection views, dimensioning schemes, GD&amp;T callouts, surface finish specifications, notes, title blocks, and revision tracking. All of that must be created manually in your CAD tool&apos;s drawing environment after importing the AI-generated model.</p>
<p>Text-to-CAD tools generate 3D geometry. They do not generate engineering drawings. No views, no dimensions, no tolerances, no title block, no revision history, no notes, no section views, no detail views, no BOM callouts. The 3D model is about half the deliverable in most mechanical engineering workflows, and the other half, the drawing package, is entirely on you. I realized this would be a permanent frustration the afternoon I generated a decent-looking bracket from a text prompt, exported the STEP, imported it into Fusion 360, opened the Drawing workspace, and stared at an empty sheet. The AI got me halfway across the bridge and then vanished.</p>
<p>For anyone who thinks a 3D model is the final deliverable: in some workflows, it is. Model-based definition (MBD) is real, and some companies have moved to 3D-annotated models as the master document. But most of the manufacturing world still runs on 2D drawings. Machine shops want drawings. Inspection departments want drawings. Purchasing agents want drawings with a title block they can stamp. And until that changes, a 3D model without a drawing is a shape without instructions.</p>
<h2>What an engineering drawing contains</h2>
<p>A proper engineering drawing is not a picture of the part. It's a communication document that tells the manufacturer everything they need to know to make and inspect the part correctly.</p>
<p>Projection views show the part from multiple angles: front, top, right, isometric. The view arrangement follows either first-angle projection (common in Europe and most of the world) or third-angle projection (common in North America). The choice matters. Getting it wrong confuses the shop.</p>
<p>Dimensions call out every feature size and position that the manufacturer needs. Not every dimension in the model, necessarily, but every dimension that matters for function and manufacturing. Dimensioning is a skill. You dimension from functional datums. You avoid redundant dimensions that over-constrain the part. You place dimensions where they're clearest, usually on the view where the feature appears in true shape.</p>
<p>Tolerances and GD&#x26;T callouts specify how precisely each feature must be made. I covered this in detail in the <a href="/posts/text-to-cad-tolerances">text-to-CAD tolerances post</a>, but the short version is: tolerances are the engineering layer that turns shapes into specifications. Without them, the machinist guesses.</p>
<p>Surface finish symbols tell the manufacturer what roughness is acceptable on each surface. A bearing bore needs Ra 0.8. A cosmetic face needs Ra 1.6. A hidden internal surface can be Ra 6.3. These affect machining time, tooling choice, and cost.</p>
<p>Notes capture everything the views and dimensions don't. Material specification. Heat treatment requirements. Coating or plating callouts. Thread class. General tolerances for untoleranced dimensions. Deburring requirements. Any process-specific instruction that the manufacturer needs.</p>
<p>The title block contains the part number, revision, material, drawn-by, checked-by, approved-by, scale, projection method, and company information. It's the document control layer. Without it, the drawing is an orphan that nobody can track through a revision cycle.</p>
<p>Section views and detail views show internal features and small areas at enlarged scale. A section through a housing shows wall thickness, internal ribs, and pocket depths that aren't visible from outside. A detail view blows up a small feature, like a seal groove or a snap fit, so the dimensions and tolerances are legible.</p>
<p>A revision history tracks what changed, when, and who approved the change. In production environments, revision control is mandatory. A part manufactured to revision A might not fit an assembly designed for revision C.</p>
<p>None of this exists in text-to-CAD output.</p>
<h2>The gap between 3D model and drawing package</h2>
<p>The work of creating a drawing from a 3D model is not trivial, even when the model is clean and parametric. You need to choose the right view arrangement, decide which features to dimension and from which references, apply the correct tolerances based on function and manufacturing process, add section views for internal features, annotate surface finishes, write manufacturing notes, and fill out the title block. A drawing for a moderately complex part takes thirty minutes to an hour for someone who knows what they're doing. For a complex part with many GD&#x26;T callouts, it can take several hours.</p>
<p>When the 3D model comes from a text-to-CAD tool, the drawing process is harder, not easier. The <a href="/posts/text-to-cad-limitations">text-to-CAD limitations</a> that affect the model also affect the drawing. The model has no datum scheme, so you need to establish one. The model has no engineering intent encoded in the feature tree, so you need to figure out what the functional features are and tolerance them appropriately. The model may have dimensional inconsistencies that need to be corrected before you dimension the drawing. You're doing engineering rework and drawing creation simultaneously.</p>
<p>I've timed this. For a simple bracket generated by Zoo.dev: AI generation took about 30 seconds, model cleanup in Fusion 360 took about 15 minutes (fixing corner radii, adjusting hole sizes to standard dimensions, adding fillets the AI missed), and creating the drawing took about 25 minutes. The total time from prompt to drawing-ready deliverable was about 40 minutes. Modeling the same bracket from scratch in Fusion would have taken about 20 minutes including the drawing, because the feature tree would have been clean from the start and the dimensions would have been correct from the beginning.</p>
<p>For simple parts, text-to-CAD saves time on the geometry and costs time on the drawing. The net saving depends on the part complexity. For anything that needs a serious drawing package, the geometry generation is the easy part and always was.</p>
<h2>The documentation workflow after AI generation</h2>
<p>If you're using text-to-CAD and you need drawings, here's the realistic workflow.</p>
<p>Generate the part. Export STEP. Import into your CAD tool. This is the fast part.</p>
<p>Clean up the model. Check dimensions against your specification. Fix anything the AI got wrong. Add features the AI missed (internal fillets, chamfers, counterbores). Rebuild features that need proper parametric relationships. This takes 10 to 30 minutes depending on complexity.</p>
<p>Establish your datum scheme. Decide which surfaces are your primary, secondary, and tertiary datums based on how the part functions and how it will be fixtured for inspection. The AI has no opinion on this because it doesn't know what the part does.</p>
<p>Create the drawing. Add views. Add dimensions from functional datums. Add tolerances. Add GD&#x26;T callouts. Add surface finish symbols. Add notes. Fill out the title block. This takes 20 minutes for a simple part to several hours for a complex one.</p>
<p>Review and check. Have someone review the drawing. Check for missing dimensions, incorrect tolerances, ambiguous callouts, and drafting standard compliance. This is standard engineering practice that applies regardless of how the 3D model was created.</p>
<p>The drawing creation step is identical whether the model came from AI or was modeled by hand. Text-to-CAD doesn't help with the drawing at all. It produces a 3D shape, and the drawing is a separate deliverable that requires separate work.</p>
<h2>What about auto-dimensioning?</h2>
<p>Some CAD tools have auto-dimensioning features that can automatically add dimensions to a drawing view. Fusion 360, SolidWorks, and others offer this. The results are... mixed.</p>
<p>Auto-dimensioning adds dimensions to every feature it finds. You get every edge length, every hole diameter, every radius, every angle. The problem is that a proper engineering drawing doesn't dimension everything. It dimensions what matters, from the right references, with the right tolerances. Auto-dimensioning gives you too many dimensions from arbitrary references with no tolerances. You spend more time deleting and rearranging dimensions than you save by not placing them manually.</p>
<p>For AI-generated models, auto-dimensioning is even worse. The model's feature structure is often disorganized, so the auto-dimensioner picks up construction geometry, internal edges, and features you'd prefer to ignore. The output is a cluttered mess that communicates nothing useful and takes longer to clean up than a blank sheet.</p>
<p>I've tried the "AI generates model, auto-dimensioner creates drawing" pipeline. The result looked like someone dumped a bucket of dimension lines on the part. My colleague walked past my screen, looked at it, and said "what happened." That was the last time I tried.</p>
<h2>Is AI drawing generation on any vendor's roadmap?</h2>
<p>The honest answer: sort of, but not in the way you might hope.</p>
<p>Several CAD companies are working on AI-assisted drawing creation. The idea is that the AI analyzes the 3D model, identifies functional features, suggests a datum scheme, and proposes a dimensioning strategy. The engineer reviews and approves rather than creating from scratch. Siemens has published research on this. Autodesk has mentioned it in forward-looking presentations. PTC has shown prototypes.</p>
<p>The difficulty is that good drawing practice requires understanding the part's function, the manufacturing process, and the inspection method. A hole's tolerance depends on whether it's a clearance hole or a bearing bore. A surface's finish callout depends on whether it seals against an O-ring or sits hidden inside an assembly. These are engineering decisions that require context the 3D model alone doesn't contain.</p>
<p>The most plausible near-term AI assistance for drawings is template-based: you define a drawing standard (projection method, tolerance scheme, title block format, note templates) and the AI applies it consistently. This would save time on the repetitive formatting aspects of drawing creation while leaving the engineering decisions to the human. It's not a full solution, but it would reduce the drawing time by maybe 30 to 40 percent for standard parts.</p>
<p>For <a href="/posts/text-to-cad-guide">text-to-CAD</a> specifically, drawing generation would need to solve two problems simultaneously: understanding the generated geometry well enough to dimension it intelligently, and understanding the part's function well enough to apply appropriate tolerances. Neither is solved currently, and solving both together is a research problem, not a feature update.</p>
<h2>Model-based definition as an alternative</h2>
<p>MBD (model-based definition) eliminates the 2D drawing by embedding all dimensional, tolerance, and annotation information directly in the 3D model as PMI (product manufacturing information). The 3D model becomes the authoritative document. No separate drawing needed.</p>
<p>In theory, MBD solves the drawing gap for text-to-CAD. If the AI could generate a 3D model with embedded PMI, you wouldn't need a drawing. In practice, MBD adoption is still limited to large companies with the infrastructure to support it (Boeing, major automotive OEMs, defense contractors). Most machine shops, especially smaller ones, still want a PDF drawing. They open it on a tablet next to the machine. They mark it up with a red pen during inspection. They file it in a folder when the job is done. MBD requires 3D PMI viewers, training, and process changes that most shops haven't made.</p>
<p>And even for companies using MBD, the PMI data still needs to be created by an engineer. Embedding tolerances and annotations in the 3D model is the same engineering work as putting them on a drawing, just in a different format. Text-to-CAD doesn't add PMI any more than it adds drawing dimensions.</p>
<h2>The practical takeaway</h2>
<p>If you need engineering drawings, and most manufacturing workflows still do, text-to-CAD handles the easy part (generating 3D geometry) and leaves the hard part entirely to you. The drawing is where the engineering knowledge lives: the datum scheme, the tolerance strategy, the dimensioning approach, the manufacturing notes, the revision control. None of that comes from a text prompt.</p>
<p>For <a href="/posts/ai-cad-for-real-work">AI CAD in real work</a>, the drawing requirement is a reality check. A 3D model that can't be drawn can't be manufactured, at least not with the confidence and traceability that production work requires. The AI gives you a shape. You give it a specification. The drawing is where the specification lives, and it's still entirely a human deliverable.</p>
<p>I keep a drawing template in Fusion 360 with my company title block, standard notes, and general tolerance callout pre-filled. It saves about five minutes per drawing. The AI generation saves about ten minutes per model. The remaining forty-five minutes of creating the actual drawing, with proper dimensions from proper datums with proper tolerances, is the same regardless of whether the model came from my keyboard or a prompt. That's the ratio. The geometry is the fast part. It always was.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Is your design data safe with text-to-CAD tools?</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-data-safety</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-data-safety</guid>
      <pubDate>Tue, 07 Apr 2026 00:00:00 GMT</pubDate>
      <description>You&apos;re sending geometry descriptions to cloud APIs. Your prompts describe proprietary parts. The output passes through someone else&apos;s servers. Here&apos;s what you should know about data safety.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>data-safety</category>
      <category>privacy</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD tools process prompts on cloud servers. Zoo.dev&apos;s API processes your text descriptions server-side. Most tools don&apos;t store generated geometry long-term, but your prompts describe proprietary designs. Check each vendor&apos;s data retention policy, NDA compliance, and whether prompts are used for model training. Self-hosted options are extremely limited.</p>
<p>Your text-to-CAD prompts describe proprietary geometry, and every one of them leaves your machine. That's the short version. The longer version involves reading privacy policies written by lawyers who get paid by the clause, and I've done that so you don't have to. I spent a Tuesday evening going through the data handling documentation for every text-to-CAD tool I could find, which is not how I'd normally choose to spend an evening, but a client had asked me point-blank whether their enclosure designs were safe with these tools, and I didn't have a good answer. Now I do. It's complicated.</p>
<p>The thing about text-to-CAD that makes data safety different from, say, using a cloud CAD tool like Onshape is the nature of the input. When you use Onshape, you're uploading geometry you've already created. When you use text-to-CAD, you're describing geometry that doesn't exist yet, in plain language that spells out dimensions, features, and sometimes the exact purpose of the part. A prompt like "cylindrical housing for a pressure sensor, 22mm OD, 1.5mm wall, with a sealed cable gland on one end" tells the server quite a lot about what you're building. If that's a proprietary product under development, your prompt is a design disclosure in a text box.</p>
<h2>What actually leaves your machine</h2>
<p>When you send a prompt to a <a href="/posts/text-to-cad-guide">text-to-CAD tool</a>, the following data typically goes to the vendor's servers:</p>
<p>Your text prompt, which contains the geometry description, dimensions, features, and any design intent you've written. Optional parameters like output format, units, and material hints. Your account credentials or API key for authentication. Metadata: timestamps, IP address, session information, the usual web request payload.</p>
<p>The server processes this, runs inference on a large AI model, generates geometry, and sends the result back. Depending on the tool, the generated geometry might also be stored temporarily or permanently on the server side.</p>
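<p>To make that concrete, here is a sketch of the kind of request involved. The field names and payload shape are hypothetical, not any specific vendor's API; the point is what travels in it.</p>

```python
import json

# Hypothetical text-to-CAD request payload -- field names are illustrative,
# not any vendor's actual schema.
payload = {
    "prompt": ("cylindrical housing for a pressure sensor, "
               "22mm OD, 1.5mm wall, sealed cable gland on one end"),
    "output_format": "step",  # optional parameters travel alongside the prompt
    "units": "mm",
}
headers = {
    "Authorization": "Bearer YOUR_API_KEY",  # ties the request to your account
    "Content-Type": "application/json",
}

# Everything in `payload`, plus your IP address and a timestamp, reaches the
# vendor's servers. The prompt alone is a fairly complete design disclosure.
wire_bytes = json.dumps(payload).encode("utf-8")
```

<p>Notice that the only genuinely sensitive field is the prompt itself; the rest is routine web-request metadata.</p>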
<p>What doesn't leave your machine is any existing CAD data on your local drive, your feature trees, your assembly files, or your manufacturing drawings. Text-to-CAD tools don't reach into your file system. The risk is specifically about what you type into the prompt and what the server does with it afterward.</p>
<h2>Vendor policies: what I found</h2>
<p>I went through the published data policies for the main text-to-CAD tools. Here's what they say, and what they don't say.</p>
<p>Zoo.dev processes prompts server-side through their KittyCAD geometry kernel. Their privacy policy states they collect usage data including prompts and generated outputs. The key question for enterprise users is whether prompts are used to train future models. As of early 2026, Zoo's terms allow them to use data to improve their services, which is standard language that could include model training. For organizations with strict IP policies, that ambiguity is a problem. Zoo does not currently offer a self-hosted deployment option, which I covered in the <a href="/posts/text-to-cad-self-hosted">text-to-CAD self-hosted</a> post.</p>
<p>AdamCAD runs generation on their cloud servers. Their terms of service describe standard data collection. The parametric model generation happens server-side, and the results are sent back to your browser. I couldn't find a published statement specifically addressing whether prompts are used for model training, which in practice means you should assume they might be.</p>
<p>CADScribe generates STEP and STL files server-side. Their privacy documentation is limited. For a tool that handles geometry descriptions, the lack of a detailed data retention policy is itself a data point, and not a reassuring one.</p>
<p>For tools that rely on third-party AI providers (like CADAgent, which uses the Anthropic API), your prompts also pass through the AI provider's infrastructure. Anthropic's commercial API terms state they don't train on API inputs by default, but that's Anthropic's policy, not the tool developer's. The prompt travels through two sets of servers and two sets of policies.</p>
<h2>The training data question</h2>
<p>This is the one that makes engineers uncomfortable. If a vendor uses your prompts to train their AI model, your design descriptions become part of the model's learned knowledge. Not in a way where someone can extract your exact prompt, but in a way where the patterns, dimensions, and design approaches you described influence future outputs.</p>
<p>Is that a real risk? For a single prompt describing a generic bracket, probably not. The model has seen thousands of brackets. Yours doesn't meaningfully change anything. For a prompt describing a proprietary mechanism with unusual geometry and specific dimensional relationships, the calculus is different. The more unique your design, the more identifiable the training signal.</p>
<p>Most major AI providers offer opt-out mechanisms for training data use on their commercial API tiers. OpenAI's API does not use inputs for training by default. Anthropic's API has the same policy. Google's enterprise Gemini offerings have similar commitments. But these policies apply to the AI provider, not necessarily to the text-to-CAD tool built on top of them. A tool developer could, in principle, log your prompts and use them for their own fine-tuning regardless of the underlying AI provider's policy.</p>
<p>The practical advice: read the tool's own data policy, not just the AI provider's. Ask explicitly whether prompts are used for model training. Get the answer in writing if your IP requires it.</p>
<h2>NDA and IP implications</h2>
<p>If you're working under an NDA, describing a client's product geometry in a text prompt sent to a third-party cloud service could be a breach. I'm not a lawyer, and this isn't legal advice, but I've been in enough contract reviews to know that "reasonable measures to protect confidential information" probably doesn't include typing that information into a cloud API with default privacy settings.</p>
<p>The industries where this matters most are exactly the ones you'd expect. Defense contractors operate under ITAR and CUI regulations that explicitly restrict where technical data can be processed and stored. Sending a prompt describing a defense component to a cloud API server, especially one that might process data outside the US, is a compliance problem, not a preference.</p>
<p>Medical device companies working under FDA 21 CFR Part 820 and ISO 13485 have design control requirements that include controlling access to design outputs. A text-to-CAD prompt that describes a device component's geometry is arguably a design input, and sending it to an unvalidated cloud service creates a documentation and compliance gap.</p>
<p>Aerospace suppliers operating under AS9100 face similar traceability and information security requirements. Automotive suppliers with TISAX certification have information security obligations that cover product design data.</p>
<p>For all of these industries, the question isn't whether text-to-CAD is useful (it might be), it's whether the data handling meets regulatory requirements. In most cases today, it doesn't, because the tools weren't designed with regulated industries in mind.</p>
<h2>What you can actually do about it</h2>
<p>The most secure approach is not using cloud text-to-CAD tools for proprietary designs. That's the honest answer, even if it's not the exciting one. For non-sensitive work, concept exploration, generic parts, educational use, or designs that aren't under IP protection, the data risk is minimal and the tools are useful.</p>
<p>For sensitive work, you have a few options. The <a href="/posts/text-to-cad-self-hosted">self-hosted</a> route using OpenSCAD with a local LLM keeps everything on your machine, but the output quality is significantly lower than cloud tools. Running a local LLM to generate FreeCAD or build123d scripts is another option, with similar quality trade-offs.</p>
<p>If you must use a cloud tool for sensitive work, take these steps: read the vendor's data retention policy and confirm in writing how long prompts and outputs are stored. Ask whether prompts are used for model training and whether you can opt out. Use the <a href="/posts/text-to-cad-api">API</a> rather than a web interface when possible, as API terms sometimes offer better data handling commitments than consumer-facing products. Avoid including project names, client names, or product identifiers in your prompts. Describe geometry abstractly when you can. Keep a log of what prompts you've sent and to which service, because if you ever need to demonstrate due diligence to a client or auditor, that record matters.</p>
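<p>That last step, keeping a record of what you sent and to whom, is trivial to automate. A minimal sketch, assuming a local JSONL audit file; the function and file names are mine, not part of any tool:</p>

```python
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "prompt_log.jsonl"  # hypothetical local audit log

def log_prompt(service: str, prompt: str, path: str = LOG_PATH) -> dict:
    """Append a record of what was sent and where, for due-diligence audits."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "service": service,
        # A hash lets you later demonstrate exactly what was sent, even if
        # the plain-text field is ever questioned.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "prompt": prompt,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_prompt("example-cloud-service",
                 "rectangular bracket, M4 clearance holes on a 40mm x 30mm pattern")
```

<p>A one-line-per-request log like this costs nothing to keep and is exactly the artifact an auditor or client will ask for.</p>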
<p>If your organization has a data classification system, text-to-CAD prompts for proprietary designs should be treated at whatever level your geometry data falls under. If your STEP files are "confidential," the prompts that describe those STEP files should be "confidential" too.</p>
<h2>The gap between convenient and safe</h2>
<p>The frustrating part of this whole topic is that the tools that produce the <a href="/posts/best-text-to-cad-tools">best results</a> are the ones that require the most trust. Zoo.dev generates the best B-Rep geometry, and it's cloud-only. The self-hosted options that protect your data completely produce noticeably worse output. There's no tool in 2026 that gives you cloud-quality text-to-CAD generation with on-premise data security.</p>
<p>That gap will close eventually. Local AI models are getting larger and more capable. The open-source geometry stack (the OpenCascade kernel, plus scripting APIs like build123d built on top of it) is mature enough to handle the generation side. The bottleneck is the language model quality for local inference, and that's improving faster than most other parts of this stack.</p>
<p>Until then, the data safety question for text-to-CAD comes down to a trade-off that every team has to make for themselves. The convenience of typing a prompt and getting usable geometry in fifteen seconds is real. The risk of sending proprietary design descriptions to a cloud service is also real. Pretending either side of that trade-off doesn't exist is how organizations end up surprised, either by slow workflows or by uncomfortable questions from a compliance auditor who found out where their design data went.</p>
<p>My habit is simple: I use cloud text-to-CAD tools freely for personal projects, concept work, and anything that isn't under an NDA. For client work with IP sensitivity, I model the part myself in Fusion 360 and keep my prompts to myself. It's less exciting than the demo reel, but my clients' geometry stays where it belongs, which is on my machine and nowhere else.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Can text-to-CAD handle assemblies?</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-assemblies</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-assemblies</guid>
      <pubDate>Mon, 06 Apr 2026 00:00:00 GMT</pubDate>
      <description>No. Not a single text-to-CAD tool generates actual assemblies with mates, constraints, and component relationships. You get individual parts. Assembly thinking is still your job.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>assemblies</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> No current text-to-CAD tool can generate assemblies. All tools (Zoo.dev, AdamCAD, CADScribe) output single parts only. Assembly design requires mate definitions, constraint relationships, interference checking, and component interaction that AI cannot produce from text prompts. You can generate individual components and assemble them manually.</p>
<p>No, text-to-CAD cannot handle assemblies. Not in 2026, not from any tool I've tested, not even close. Every current text-to-CAD tool generates single parts: one body, one STEP file, no mates, no constraints, no component relationships. I found this out the obvious way, by asking Zoo.dev to generate "a simple hinge assembly with a pin, two leaves, and a bushing." What I got back was a single solid body that vaguely resembled a hinge in the same way a snowman vaguely resembles a person. The leaves were fused together. There was no pin. The bushing was a decorative ring attached to the outside. It was a sculpture of the idea of a hinge, not a mechanism.</p>
<p>That was six months ago, and I've retried the same experiment a few times since. The results haven't changed. The tools generate shapes, not assemblies, and the difference is fundamental enough that a prompt rewrite won't fix it.</p>
<h2>What assembly design actually requires</h2>
<p>An assembly in CAD isn't just multiple parts sitting in the same file. It's a system of relationships.</p>
<p>Each part exists as a separate component with its own geometry, its own feature tree, its own material assignment. Parts connect to each other through mates or constraints: a bolt passes through a clearance hole and threads into a tapped hole, with a concentric mate aligning the bolt axis to the hole axis and a coincident mate seating the bolt head against the surface. A shaft fits into a bearing bore with a press fit, constrained concentrically and axially. A lid sits on an enclosure, aligned by a tongue-and-groove or a step joint, held by screws in a bolt pattern that matches between the two parts.</p>
<p>These relationships carry engineering meaning. A concentric mate means two features share an axis. A coincident mate means two faces touch. A tangent mate means a curved surface contacts a flat one. The mate types communicate how the assembly goes together, how it moves, and where the critical interfaces are.</p>
<p>Beyond mates, assemblies require interference checking (do any parts collide when assembled?), clearance checking (is there enough room for tools and hands during assembly?), motion simulation (does this linkage actually move the way it should?), and a bill of materials (what parts, how many, what material, from where?). In production work, you also need exploded views, assembly instructions, and a component numbering scheme that connects to your PLM system.</p>
<p>Text-to-CAD tools produce none of this. They generate one shape. A bracket. An enclosure. A standoff. The shape has no knowledge of what it connects to, what it fits inside, or what's supposed to move relative to what.</p>
<h2>Why this is harder than single-part generation</h2>
<p>The <a href="/posts/text-to-cad-limitations">text-to-CAD limitations</a> are well documented for single parts, but assemblies multiply every one of those problems.</p>
<p>For a single part, the AI needs to get the geometry roughly right. For an assembly, the AI needs to get the geometry right on every part, and it needs to get the relationships between parts right. A 0.5mm error on a bracket dimension might be tolerable. A 0.5mm error on a mating interface means the parts don't fit together. The tolerance for error shrinks dramatically when multiple components have to interact.</p>
<p>The training data problem is also worse for assemblies. Single-part CAD files are relatively common in training datasets. Assembly files are rarer, more complex, and contain relationship data (mates, constraints) that's structured differently in every CAD format. A STEP file of an assembly contains geometry but loses most of the constraint information. A native Fusion 360 or SolidWorks assembly file contains everything, but those formats are proprietary and less available for training.</p>
<p>Then there's the combinatorial problem. A single part has one geometry to get right. An assembly of five parts has five geometries plus every pairwise interaction between them. The number of things that can go wrong scales faster than the number of parts. Anyone who's tried to assemble parts from different vendors, or from the same vendor on different days, knows this feeling. Getting things to fit together is harder than getting things to exist individually.</p>
<h2>What current tools actually produce</h2>
<p>I tested four scenarios across Zoo.dev, AdamCAD, and CADScribe to see how they handle requests that imply assembly intent.</p>
<p>Scenario 1: "A hinge with two leaves and a pin." Zoo.dev generated a single body that looked hinge-like but was fused solid. AdamCAD generated OpenSCAD code for two separate rectangular blocks with cylinders, but they overlapped in space and had no hinge geometry that would allow rotation. CADScribe generated a single Fusion 360 body with a split feature that suggested two leaves, but the pin was missing and there was no clearance between the leaves.</p>
<p>Scenario 2: "An enclosure with a removable lid." Zoo.dev generated a box with a separate-looking lid, but the STEP file contained one body. There was no gap between the lid and the box walls. No alignment features. No fastener points. AdamCAD produced two separate bodies (box and lid) but with no tongue-and-groove, no step joint, and wall thicknesses that didn't match between the two. CADScribe generated an enclosure with a shell feature and a separate lid sketch, but the lid dimensions didn't match the enclosure opening.</p>
<p>Scenario 3: "A bracket with two M4 bolts." Every tool generated a bracket with holes. None generated bolts. The "assembly" was always interpreted as a single part with holes, not a multi-component system.</p>
<p>Scenario 4: "A shaft in a bearing block." Zoo.dev generated a block with a cylindrical hole. No shaft. AdamCAD generated a block and a cylinder, but the cylinder diameter matched the hole diameter exactly: zero clearance, a line-to-line fit that would work as neither a press fit nor a running fit. CADScribe generated a bearing block body only.</p>
<p>The pattern is clear: <a href="/posts/text-to-cad-for-mechanical-parts">text-to-CAD for mechanical parts</a> means single mechanical parts. Assembly prompts are either ignored, misinterpreted, or reduced to single-body approximations.</p>
<h2>The workaround workflow</h2>
<p>If you want to use text-to-CAD in a project that involves assemblies, the workflow is generate parts individually and assemble them yourself. It works, with caveats.</p>
<p>Step 1: Break your assembly into individual components. Think about each part independently. What are its dimensions? What features does it need? What are the interface dimensions that must match other parts?</p>
<p>Step 2: Generate each part with explicit dimensions for all mating interfaces. If the bracket has M4 clearance holes on a 40mm by 30mm pattern, specify that exactly. If the lid needs an 82mm by 62mm inner pocket to fit over an 80mm by 60mm enclosure with 1mm clearance per side, specify 82mm by 62mm, not "a lid that fits the enclosure."</p>
<p>Step 3: Export each part as STEP and import them into your CAD assembly environment. Fusion 360, SolidWorks, whatever you use.</p>
<p>Step 4: Add mates and constraints manually. Align the holes. Seat the surfaces. Check the fits. This is where the actual assembly engineering happens.</p>
<p>Step 5: Check interference. Run the interference detection tool. Fix the overlaps. Adjust the clearances. This step almost always reveals problems because the individual parts weren't designed with a shared reference frame.</p>
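<p>The interface arithmetic in step 2 is worth doing once, explicitly, and then reusing verbatim in every prompt. A minimal sketch of that arithmetic (the helper names are mine, not any tool's API), covering both a cap-style lid that fits over the enclosure walls and a lid that drops inside an opening:</p>

```python
def cap_lid_pocket(enclosure_w: float, enclosure_l: float,
                   clearance_per_side: float = 1.0) -> tuple[float, float]:
    """Inner pocket size for a lid that fits over an enclosure's outer walls."""
    return (enclosure_w + 2 * clearance_per_side,
            enclosure_l + 2 * clearance_per_side)

def drop_in_lid(opening_w: float, opening_l: float,
                clearance_per_side: float = 1.0) -> tuple[float, float]:
    """Outer size for a lid that drops inside an opening."""
    return (opening_w - 2 * clearance_per_side,
            opening_l - 2 * clearance_per_side)

# 80mm x 60mm enclosure, 1mm clearance per side:
pocket = cap_lid_pocket(80, 60)  # (82.0, 62.0) -- prompt this number
inner = drop_in_lid(80, 60)      # (78.0, 58.0)
```

<p>Either way, the prompt gets a hard number ("82mm by 62mm inner pocket") rather than "a lid that fits," which is exactly the kind of interface dimension the tools get wrong when left to infer it.</p>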
<p>I've done this workflow a few times. It's faster than modeling every part from scratch when the parts are simple. It's slower than modeling from scratch when the parts have complex mating interfaces, because the AI-generated dimensions on interface features are the least reliable, and those are exactly the dimensions that matter most in an assembly.</p>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> describes this as a "concept assembly" workflow, which is about right. You get a rough assembly that communicates the general arrangement. You don't get a production assembly that you can send to a contract manufacturer with a BOM and expect functional parts back.</p>
<h2>When might assembly generation arrive?</h2>
<p>This is speculation, but informed speculation based on where the research is heading.</p>
<p>The near-term likely improvement is multi-body generation: a tool that produces multiple separate bodies in one output, dimensionally coordinated but without formal assembly constraints. This would be the equivalent of generating parts that fit together by design, even if the CAD file doesn't contain mate definitions. It's a geometry problem rather than an assembly-data problem, and it's plausible within the next year or two.</p>
<p>True assembly generation, with mates, constraints, and component relationships, is a harder problem. The AI would need to understand not just what each part looks like, but how it connects to other parts, what degrees of freedom the connection allows, and how the constraints propagate through the assembly. This requires a fundamentally different training approach than current single-part generation, and the training data for assemblies is scarce compared to individual parts.</p>
<p>The most realistic path is probably CAD-native integration. A tool like CADScribe, which works inside Fusion 360's API, could theoretically generate components in the Fusion assembly environment and apply mates using Fusion's native constraint system. The AI would need to understand Fusion's assembly features, but the assembly data structure would be handled by the CAD platform rather than invented by the AI. This is harder than generating single bodies but possible within the existing CAD infrastructure.</p>
<h2>The practical reality</h2>
<p>If your work involves assemblies, text-to-CAD is useful for generating the easy parts quickly and useless for the hard parts: the interfaces, the fits, the constraints, the motion, the BOM. Assembly design is where engineering judgment lives, and it's the last place AI will replace human thinking.</p>
<p>I still use text-to-CAD for individual components when the geometry is simple and the <a href="/posts/ai-cad-for-real-work">AI CAD output is good enough for real work</a>. But the moment I need those components to fit together, I'm in Fusion 360 doing the assembly work the same way I've always done it: manually, carefully, with the mate dialog open and the measure tool within reach.</p>
<p>The hinge I tried to generate six months ago? I modeled it from scratch in Fusion in about twenty minutes. Two leaves, a pin, a bushing, proper clearances, and mates that actually let it rotate. Twenty minutes isn't long. But it was twenty minutes of actual engineering, and that's the part no text prompt replaces.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Shapr3D AI: where it stands in 2026</title>
      <link>https://blog.texocad.ai/posts/shapr3d-ai</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/shapr3d-ai</guid>
      <pubDate>Sun, 05 Apr 2026 00:00:00 GMT</pubDate>
      <description>Shapr3D built a genuinely good direct modeling tool for iPad and desktop. Their AI features are more cautious than most vendors, which might be the smartest thing they&apos;ve done.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>shapr3d</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Shapr3D&apos;s AI features in 2026 focus on AI-assisted modeling suggestions and geometry recognition rather than text-to-CAD generation. Shapr3D uses AI for feature recognition on imported geometry and smart selection. It does not offer text-to-model generation. The approach is more conservative than competitors but better integrated into actual modeling workflows.</p>
<p>Shapr3D's AI features focus on making direct modeling smarter, not on generating geometry from text prompts. If you're here because you searched "Shapr3D AI" hoping to find a text-to-CAD tool, the answer is: they haven't built one, and that restraint might be the most interesting thing about their AI strategy. Every other CAD vendor is tripping over themselves to announce text-to-geometry generation, chatbot assistants, and AI companions with names. Shapr3D shipped feature recognition and smart selection instead, and honestly, those features do more for my actual workflow than most of the chatbot demos I've sat through.</p>
<p>I've been using Shapr3D on and off for about two years, mostly on iPad with an Apple Pencil, occasionally on desktop when I need the bigger screen for assemblies. I started as a skeptic. A CAD tool that began on iPad sounded like a toy for architects who want to sketch on planes. It's not. The direct modeling kernel (based on Siemens Parasolid, which is the same kernel that powers Solid Edge and NX) is legitimate. The modeling is responsive, the STEP export is clean, and the Apple Pencil input is the most natural way to select edges and faces I've used in any CAD tool. I still do my serious parametric work in Fusion 360, but for quick concept modeling, import cleanup, and client presentations, Shapr3D has become the tool I reach for first.</p>
<h2>What Shapr3D's AI actually does</h2>
<p>Shapr3D's AI features fall into a category I'd call "model intelligence" rather than "model generation." The distinction matters. These features help you work with geometry that already exists, rather than creating geometry from scratch.</p>
<p>Feature recognition is the headline capability. When you import a STEP file from another CAD tool, Shapr3D's AI can analyze the geometry and identify standard features: holes, fillets, chamfers, pockets, bosses, ribs. In a traditional parametric tool, imported geometry arrives as a dumb solid. You can see the fillets, but the software doesn't know they're fillets. You can't suppress them or change their radius without manually recreating each one. Shapr3D's feature recognition turns those dumb surfaces back into identifiable features you can select and modify.</p>
<p>I tested this with a STEP file of a plastic housing I'd originally modeled in Fusion 360. The housing had about forty fillets, a dozen counterbored holes, a shell, and several ribs. Shapr3D identified roughly 80% of the features correctly on import. The fillets were recognized as fillets with their radii preserved. The holes were identified with correct diameters. Some of the more complex features, particularly the ribs that intersected with other geometry at odd angles, were missed or misidentified. But the 80% that worked saved me significant time compared to the traditional approach of manually selecting and modifying each feature on an imported body.</p>
<p>Smart selection uses AI to predict which geometry you're likely trying to select based on context. If you're adding a fillet and hover near a set of edges that form a logical group (all edges of a pocket, for example), the AI suggests selecting the entire group rather than making you click each edge individually. On a complex part with hundreds of edges, this kind of context-aware selection reduces the tedious click-count that makes direct modeling slower than it should be.</p>
<p>Body detection on import recognizes when a single imported solid should logically be treated as multiple bodies or components. Some STEP files arrive as a single merged solid when the original model had separate bodies. Shapr3D's AI can identify the logical boundaries and suggest splitting the import into separate bodies. This doesn't always work, especially on organic shapes where the boundaries aren't obvious, but for mechanical parts with clearly defined mating surfaces, it's useful.</p>
<h2>Why no text-to-CAD</h2>
<p>Shapr3D hasn't announced or shipped text-to-CAD generation, and based on their public statements, they seem to be deliberately avoiding the rush. There are a few possible reasons, and I think some of them are strategic rather than just cautious.</p>
<p>Shapr3D is a direct modeler, not a parametric/history-based tool. Text-to-CAD tools like Zoo.dev generate geometry as a finished solid. That fits naturally into a direct modeling workflow where you push, pull, and modify faces without worrying about a feature tree. But it also means the generated geometry would have no history, no constraints, and no design intent beyond "here's a shape." In a direct modeler, that's already how everything works, so the AI output wouldn't feel foreign. But it also wouldn't add the kind of parametric value that tools like CADAgent (which generates Fusion 360 feature trees) provide.</p>
<p>The market positioning is another factor. Shapr3D has positioned itself as the tool for designers who care about the modeling experience: speed, fluidity, the feel of working with geometry. Adding a text-to-CAD feature that produces mediocre geometry from mediocre prompts could undermine that positioning. Better to ship features that make the existing workflow better (smarter selection, smarter imports) than to add a text box that produces results you can't control.</p>
<p>The <a href="/posts/ai-in-cad-software">AI in CAD software</a> space is crowded with half-baked chatbot assistants and text-to-geometry demos that work great on stage and poorly at the desk. Shapr3D seems to have looked at that crowd and decided to sit it out until the technology is more mature, which is a bet that might look very smart or very slow depending on how fast text-to-CAD improves.</p>
<h2>How AI fits differently in direct modeling</h2>
<p>This is the part that doesn't get discussed enough. Direct modeling and parametric/history-based modeling have different relationships with AI, and features that make sense in one context don't necessarily translate to the other.</p>
<p>In a parametric tool like SolidWorks or Fusion 360, AI can generate a feature tree: a sequence of sketches, extrudes, fillets, and cuts that builds the model step by step. The output has design intent baked in. You can roll back the timeline, change a sketch dimension, and the rest of the model updates. That's powerful, and it's what tools like <a href="/posts/ai-cad-copilot">CADAgent</a> exploit.</p>
<p>In a direct modeler like Shapr3D, there is no feature tree. The model is just geometry. You modify it by pushing faces, adding fillets directly on edges, cutting with construction planes, and unioning bodies. The operations happen in sequence, but they don't form a dependency chain. Change a fillet radius on a direct model and nothing else in the model cares, because nothing downstream depends on it.</p>
<p>This means AI in a direct modeler needs to work differently. Feature recognition on import is a perfect example: it takes dumb geometry and gives it back some intelligence, so you can modify it without starting over. Smart selection is another: it reduces the interaction cost of the direct modeling workflow itself. These are AI features that make the existing tool better rather than replacing parts of the workflow with generation.</p>
<p>The vendors shipping <a href="/posts/ai-cad-copilot">AI CAD copilot</a> chatbots are mostly targeting parametric tools where the command vocabulary is large and the menu structure is deep. Shapr3D's interface is already simple enough that a chatbot would have less to do. You don't need an AI to find the fillet command when it's already one tap away.</p>
<h2>What Shapr3D does well without AI</h2>
<p>The AI features are useful, but the core product is what keeps me coming back. Import cleanup is where Shapr3D genuinely shines. I regularly receive STEP files from suppliers and clients that need modification before they're useful: removing fillets for FEA meshing, adding features for fixturing, splitting bodies for manufacturing analysis. In Fusion 360, modifying imported geometry is a pain that involves the mesh workspace, body replacement, or rebuilding features from scratch. In Shapr3D, you just push a face, add a cut, or remove a fillet directly on the imported body. The Parasolid kernel handles it cleanly, and the direct modeling approach means you're not fighting a feature tree that doesn't exist.</p>
<p>The Apple Pencil input on iPad is better than any mouse-based face selection I've used. Hovering a pencil tip over an edge to select it feels more natural than clicking, and the pressure sensitivity adds a layer of control that a mouse doesn't offer. For client presentations where you want to show a model and make live modifications, the iPad workflow is faster and more impressive than anything I can do on a desktop.</p>
<p>STEP and Parasolid export quality is excellent. What comes out of Shapr3D imports cleanly into SolidWorks, Fusion 360, and NX without the surface and edge artifacts I sometimes get from other tools.</p>
<h2>The conservative bet</h2>
<p>Shapr3D's AI strategy is a bet that the text-to-CAD and chatbot assistant hype will mature before it delivers enough value to justify the investment. They're betting that practical features (better imports, smarter selection, feature recognition) will matter more to working users than a text box that generates geometry of unpredictable quality.</p>
<p>It's a reasonable bet. The <a href="/posts/best-ai-cad-tools-2026">best AI CAD tools in 2026</a> are mostly shipping copilot assistants and automation features, not reliable text-to-geometry generation. The most useful AI features in practice tend to be the boring ones: automatic drawing creation in SolidWorks, smart assembly snapping in Solid Edge, feature recognition in Shapr3D. These don't make good keynote demos. They make good Wednesdays.</p>
<p>Will Shapr3D eventually add text-to-CAD generation? I'd guess so, once the technology produces results consistent enough to match their product quality standards. Until then, they're shipping AI features that solve real problems in real workflows, which is more than most vendors can honestly say about their chatbot panels. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the dedicated generation tools if that's what you're after. But if you're looking for a CAD tool that uses AI to make modeling better rather than to make a press release, Shapr3D is doing it more quietly and more effectively than the noise would suggest.</p>
]]></content:encoded>
    </item>
    <item>
      <title>HP AI text-to-3D: printing-focused generation</title>
      <link>https://blog.texocad.ai/posts/hp-text-to-3d</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/hp-text-to-3d</guid>
      <pubDate>Sat, 04 Apr 2026 00:00:00 GMT</pubDate>
      <description>HP has been making noise about AI-assisted 3D printing workflows. Some of it connects to text-to-3D generation. Most of it is about print optimization, not design.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>hp</category>
      <category>3d-printing</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> HP&apos;s AI efforts focus on 3D print optimization (orientation, support generation, lattice filling) rather than text-to-CAD geometry generation. HP&apos;s Multi Jet Fusion ecosystem uses AI for build preparation and quality prediction. For actual text-to-3D model generation, HP relies on partnerships and third-party tools rather than building their own generation engine.</p>
<p>HP's AI story is mostly about making 3D printing smarter, not about generating 3D models from text. That distinction matters, because if you've landed here searching for HP's text-to-3D capabilities expecting something like Zoo.dev but with HP branding, you're going to be disappointed. I was. I spent an afternoon last month going through HP's AI announcements expecting to find a text-to-geometry tool and instead found a collection of print optimization features wearing an AI label. Good features, some of them. But not what the search results implied.</p>
<p>I've been using HP printers since the LaserJet 4 days, and the company has always been better at the manufacturing side of output than the design side of input. Their 3D printing story follows the same pattern. HP makes excellent Multi Jet Fusion machines. The print quality is genuinely good. The AI features they've built are about making those machines print better, not about generating the geometry you print on them.</p>
<h2>What HP actually means by "AI"</h2>
<p>When HP talks about AI in their 3D printing ecosystem, they're referring to several things, and none of them are text-to-model generation in the way <a href="/posts/text-to-cad-guide">text-to-CAD tools</a> work.</p>
<p>Build optimization is the biggest piece. HP uses machine learning to optimize build preparation for Multi Jet Fusion printers. This includes automatic part orientation (deciding which way to position the part in the powder bed for best surface quality and dimensional accuracy), support generation (though MJF needs minimal supports compared to FDM), and nesting (packing multiple parts into a single build volume efficiently). These are legitimate AI applications that save time and improve print outcomes. A human operator making these decisions for a full build tray might spend an hour. HP's automation handles it in minutes.</p>
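HP's actual build-prep algorithms are proprietary, but the shape of the orientation problem is easy to sketch. The toy scorer below (hypothetical function names, all weights invented for illustration) trades off build height against tray footprint, which is a crude version of what an automatic orienter optimizes:

```python
def orientation_score(dims, up_axis, w_height=1.0, w_footprint=0.001):
    """Score one axis-aligned orientation of a part's bounding box.

    Lower is better: build height drives layer count (and so print
    time), with tray footprint as a tiebreaker for nesting. A toy
    heuristic only -- real MJF build prep also weighs thermal
    behavior, surface finish, and dimensional accuracy.
    """
    height = dims[up_axis]
    footprint = 1.0
    for axis, size in enumerate(dims):
        if axis != up_axis:
            footprint *= size
    return w_height * height + w_footprint * footprint

def best_orientation(dims):
    """Try each bounding-box axis as 'up' and keep the cheapest."""
    return min(range(3), key=lambda axis: orientation_score(dims, axis))

# A 100 x 40 x 20 mm part prints fastest lying flat (20 mm tall).
print(best_orientation((100.0, 40.0, 20.0)))  # -> 2
```

Real orienters evaluate arbitrary rotations against simulated thermal fields, not three axis swaps, but the structure — enumerate candidates, score, pick — is the same.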
<p>Quality prediction uses sensor data and historical print data to predict whether a build will succeed before you commit hours of machine time and kilograms of powder. HP's machines have thermal cameras monitoring the powder bed during printing, and the AI models use that data to flag potential defects in real time. For production environments running Multi Jet Fusion, this is genuinely valuable. A failed build on an MJF machine isn't a minor annoyance like a spaghetti print on a desktop FDM. It's hundreds of dollars in wasted powder and hours of machine time.</p>
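A toy version of that monitoring idea, reduced to one temperature reading per layer (the real models consume per-pixel thermal imagery; the helper name, window, and threshold here are all made up for illustration):

```python
from statistics import mean, stdev

def flag_thermal_anomalies(layer_temps, window=10, k=3.0):
    """Flag layers whose mean bed temperature deviates sharply from
    recent history. A toy stand-in for in-process monitoring; the
    window and threshold are arbitrary illustrative values.
    """
    flagged = []
    for i in range(window, len(layer_temps)):
        history = layer_temps[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(layer_temps[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# A stable bed around 180 C with one cold spot at layer 15.
temps = [180.0 + 0.1 * (i % 3) for i in range(30)]
temps[15] = 172.0
print(flag_thermal_anomalies(temps))  # -> [15]
```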
<p>Lattice and infill optimization is where HP's AI gets closest to affecting geometry. Their tools can generate optimized internal lattice structures for parts, reducing weight while maintaining structural performance. This is similar to what generative design tools in Fusion 360 and Creo do, but optimized specifically for the MJF process. The lattice structures HP generates are tuned for the layer thickness, material properties, and thermal characteristics of their specific printing process, which gives them an advantage over generic topology optimization.</p>
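The grading logic itself is simple to sketch, even though the process tuning is where HP's advantage lives. A toy mapping from local stress to strut diameter (all numbers hypothetical; `d_min` stands in for the smallest strut the printing process resolves cleanly):

```python
def strut_diameter(stress, s_max, d_min=0.8, d_max=3.0):
    """Map local stress to a lattice strut diameter (mm): thick struts
    where stress is high, thin ones where it is low. A toy version of
    field-driven lattice grading; all numbers are illustrative.
    """
    t = max(0.0, min(1.0, stress / s_max))   # normalize and clamp
    return d_min + t * (d_max - d_min)

# Grade a row of lattice cells under a rising stress field (MPa).
stresses = [5.0, 20.0, 40.0, 80.0]
print([round(strut_diameter(s, s_max=80.0), 2) for s in stresses])
# -> [0.94, 1.35, 1.9, 3.0]
```

The process-specific part is everything around this function: what `d_min` actually is for a given material and layer thickness, and how thermal shrink distorts thin struts. That's the data HP has and generic topology tools don't.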
<p>Material prediction uses AI to estimate mechanical properties of printed parts based on build parameters, orientation, and material batch. For production applications where you need to certify that a printed part meets strength requirements, having a prediction before you destructively test the actual part saves time and money.</p>
<h2>What HP doesn't do</h2>
<p>HP does not offer a text-to-3D geometry generation tool comparable to what Zoo.dev, AdamCAD, or even browser-based tools like Vondy provide. You cannot type "flanged bracket with four M5 holes" into an HP interface and get a 3D model back. That's not what their AI does.</p>
<p>HP's ecosystem assumes you already have geometry. You bring a CAD model (typically as an STL or 3MF file), and HP's tools help you print it better. The AI lives between your design and the printer, not between your idea and the design. That's a meaningful distinction that HP's marketing materials don't always make clear.</p>
<p>The confusion partly comes from HP's broader announcements about "AI-powered 3D workflows" and partnerships with companies that do offer generative capabilities. HP has partnerships with Autodesk, Siemens, and Materialise, and some of those partners are building AI geometry generation features into their platforms. But those features belong to the partners, not to HP. If you generate a model in Fusion 360 using Autodesk's AI and then print it on an HP MJF machine, both companies might call that an "AI-powered workflow," but the text-to-3D part is Autodesk's and the printing optimization is HP's.</p>
<h2>HP's text-to-3D tool</h2>
<p>That said, HP did release a browser-based AI 3D model generator that takes text prompts and produces printable geometry. I tested it for the <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> roundup. The output is manufacturing-focused STL, not editable B-Rep, and it's clearly designed to funnel users toward HP's printing ecosystem.</p>
<p>The tool generates simple geometry from text descriptions. My flanged bracket test came back as a printable STL that was technically correct but not editable. You couldn't select a face in Fusion 360 and modify it. You couldn't adjust hole diameters or add features. It was a one-shot mesh: usable for printing, useless for iteration.</p>
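The "one-shot mesh" limitation is a property of the format itself. An ASCII STL holds nothing but triangles — no features, no dimensions, no hole diameters — which is why there's no face to select and nothing to parametrically change. The sketch below (hypothetical `read_facets` helper) extracts the entirety of what such a file contains:

```python
# A complete (tiny) ASCII STL: one triangle. Note what's absent --
# no features, no parameters, no design intent. Just facets.
stl_text = """solid demo
facet normal 0 0 1
  outer loop
    vertex 0 0 0
    vertex 1 0 0
    vertex 0 1 0
  endloop
endfacet
endsolid demo
"""

def read_facets(text):
    """Pull the triangle vertices out of an ASCII STL string.
    That's all the information the format holds, which is why a
    generated STL can be printed but not meaningfully edited."""
    tris, current = [], []
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0] == "vertex":
            current.append(tuple(float(p) for p in parts[1:4]))
            if len(current) == 3:
                tris.append(tuple(current))
                current = []
    return tris

print(len(read_facets(stl_text)))  # -> 1
```

A STEP file, by contrast, carries analytic surfaces and topology that a CAD kernel can re-interpret as editable faces, which is the gap between HP's output and Zoo.dev's.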
<p>Compared to Zoo.dev, which produces real B-Rep STEP files you can edit in any CAD tool, HP's generator is limited. Compared to a <a href="/posts/text-to-cad-vs-text-to-3d">text-to-3D tool like Meshy</a>, it's more manufacturing-oriented and less focused on visual quality. It sits in an odd middle ground where the output is too simple for engineering work and too utilitarian for creative work.</p>
<p>The best use case I found was generating simple fixtures and test shapes specifically for MJF printing, where you don't need to edit the geometry afterward and you just need something printable fast. For that narrow purpose, it works. For anything else, you're better off with a tool that produces editable output.</p>
<h2>Where HP fits in the broader ecosystem</h2>
<p>The <a href="/posts/text-to-cad-for-3d-printing">text-to-CAD for 3D printing</a> space is mostly served by general-purpose text-to-CAD tools that happen to export STL. Zoo.dev, AdamCAD, and CADAgent all generate geometry that can be exported for printing. None of them are optimized for a specific printer or process, which means none of them know whether a 45-degree overhang is going to work on your specific machine with your specific material. That's the gap HP's optimization tools fill.</p>
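That process-knowledge gap is easy to make concrete. A minimal overhang check over mesh face normals (illustrative Python, not any vendor's actual check; the 45-degree default is the usual FDM rule of thumb, and MJF, printing into supporting powder, barely cares):

```python
import math

def overhang_faces(normals, max_angle_deg=45.0):
    """Flag unit face normals that overhang past the printable limit.

    Overhang angle is measured from vertical: a wall is 0 degrees, a
    flat downward-facing ceiling is 90. A face fails when its normal
    dips toward the build plate more steeply than the limit allows,
    i.e. when nz < -sin(max_angle).
    """
    nz_limit = -math.sin(math.radians(max_angle_deg))
    return [i for i, (nx, ny, nz) in enumerate(normals) if nz < nz_limit]

faces = [
    (0.0, 0.0, 1.0),       # top face: fine
    (1.0, 0.0, 0.0),       # vertical wall: fine
    (0.0, 0.707, -0.707),  # 45-degree underside: borderline, passes
    (0.0, 0.0, -1.0),      # flat ceiling: unsupported overhang
]
print(overhang_faces(faces))  # -> [3]
```

The point is that `max_angle_deg` depends on machine, material, and process — exactly the parameter a process-agnostic text-to-CAD exporter has no way to know.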
<p>HP's real strength is in the post-design, pre-print space. If you're running Multi Jet Fusion production, HP's AI tools for build optimization, quality prediction, and material characterization are genuinely useful and probably worth whatever HP charges for the software. They solve real problems that cost real money on production MJF machines.</p>
<p>For the <a href="/posts/ai-cad-for-real-work">AI CAD for real work</a> question, HP's answer is honest even if the marketing overstates it. They're not trying to replace CAD with AI. They're trying to make the printing side smarter once you already have a model. That's a more boring story than "type a sentence, get a part," but it's also a more honest one.</p>
<h2>The marketing vs. reality gap</h2>
<p>HP's press releases and event presentations use the word "AI" frequently enough that you might assume they're building a comprehensive text-to-3D design tool. They're not. The AI label gets applied to everything from the thermal camera analysis on the printer to the build nesting algorithm to the material property database. Some of these are genuinely AI in the meaningful sense (learned models making predictions from data). Some are optimization algorithms that existed before anyone called them AI but got rebranded because the conference organizers needed more AI content.</p>
<p>This isn't unique to HP. Every 3D printing company is doing the same thing. Stratasys, 3D Systems, EOS, all of them have "AI" features now, and most of those features are process optimization, not design generation. The 3D printing industry has always been better at making printers smarter than at making design accessible, and AI hasn't changed that pattern.</p>
<h2>Who should care about HP's AI</h2>
<p>If you're a Multi Jet Fusion user running production parts, HP's AI build optimization and quality prediction tools are worth evaluating. They solve a specific, expensive problem (failed builds, suboptimal nesting, inconsistent quality) with technology that's been tested on HP's own machines with HP's own materials.</p>
<p>If you're looking for a text-to-3D tool to generate models from text descriptions, HP's offering is limited. Use Zoo.dev if you want editable B-Rep output. Use HP's browser tool if you just need a quick printable STL and you're already in the MJF ecosystem.</p>
<p>If you're trying to understand the broad <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> landscape and wondering where HP fits, the answer is: at the printing end, not the design end. HP has bet on making the print smarter rather than making the model. Given that they make printers, not CAD software, that's probably the right bet. I'd rather they make my prints better than try to generate my geometry, because they know powder bed physics better than they know design intent, and I'd rather each company stay in the part of the workflow they actually understand.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Generative design software: the full 2026 roundup</title>
      <link>https://blog.texocad.ai/posts/generative-design-software</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/generative-design-software</guid>
      <pubDate>Fri, 03 Apr 2026 00:00:00 GMT</pubDate>
      <description>Generative design has been available for years now. Some tools matured. Some stayed in demo territory. Here&apos;s where every major option stands in 2026.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>generative-design</category>
      <category>roundup</category>
<content:encoded><![CDATA[<p><strong>Quick answer:</strong> Major generative design tools in 2026: Autodesk Fusion 360 (most accessible, cloud-based), nTopology (best for lattice/complex geometry), Altair Inspire (FEA-integrated), ANSYS Discovery (simulation-driven), SolidWorks (via the Simulation add-on), and Siemens NX. All require manufacturing constraints to produce usable output. Fusion 360 has the lowest barrier to entry.</p>
<p>Generative design software in 2026 includes Fusion 360, nTopology, Altair Inspire, ANSYS Discovery, SolidWorks, and Siemens NX, with Fusion 360 being the most accessible and nTopology best for complex lattice geometry. I spent a week last January trying to get a useful bracket out of every one of these tools for the same design problem: a motor mount for a test fixture, 6061 aluminum, three bolts on one side, two on the other, 200N static load. My coffee went cold twice during the Fusion cloud solve and three times waiting for ANSYS to finish meshing. By Friday I had six different shapes, five valid opinions about which one to machine, and a strong desire to just draw the thing myself.</p>
<p>But that's always been the tension with generative design. The idea is good. Give the computer the constraints, let it explore shapes no human would think of, pick the best one. In practice, the gap between "shapes no human would think of" and "shapes a machinist will actually make without calling you first" is where most of the frustration lives. Some of these tools have narrowed that gap significantly over the years. Others are still presenting conference slides.</p>
<p>Here's where each one actually stands, based on what I've used, what colleagues report, and what the tools produce when the demo is over and the real geometry starts.</p>
<h2>Fusion 360 Generative Design</h2>
<p>Fusion 360 is where most people encounter generative design for the first time, and that's not an accident. Autodesk has put more effort into making this accessible than any other vendor. The generative design workspace lives inside the same Fusion 360 you already use for modeling, so there's no separate application to learn, no file format juggling, no import-export cycle. You define your preserve and obstacle regions in the Fusion environment, set loads and constraints, pick materials and manufacturing methods, and send the study to the cloud.</p>
<p>The cloud part is both the feature and the limitation. Autodesk runs the optimization on their servers, which means you don't need a workstation-class machine to get results. It also means you're dependent on their servers, their queue times, and their pricing. A typical study with three manufacturing methods and two materials generates maybe a dozen candidate solutions. You compare them in a results gallery, pick one, and it drops into your Fusion timeline as editable geometry.</p>
<p>Where Fusion excels: the manufacturing constraint options are genuinely useful. You can specify 2.5-axis milling, 3-axis milling, 5-axis, die casting, or additive manufacturing, and the solver will respect those constraints in ways that produce geometry a shop can actually make. The milling-constrained results look like real machined parts, not coral reef sculptures. I've sent Fusion generative results to machine shops and gotten quotes without callbacks, which is more than I can say for some of the other tools.</p>
<p>Where it falls short: the cloud solve times are unpredictable. Sometimes a study comes back in an hour. Sometimes it's overnight. Complex problems with many load cases can take a full day. And the <a href="/posts/fusion-360-ai-features">Fusion 360 AI features</a> beyond generative design are still thin, so you're paying the generative design subscription premium for one capability.</p>
<p>Pricing is an add-on to the base Fusion 360 subscription. The exact cost has shifted over the years and depends on your plan tier. It's not cheap, but it's the lowest barrier to entry for generative design by a wide margin.</p>
<h2>nTopology</h2>
<p>nTopology (nTop) is a different animal entirely. It's not a traditional CAD tool with generative bolted on. It's a computational design platform built from the ground up for advanced geometry: lattice structures, topology-optimized solids, conformal channels, implicit geometry, field-driven design. If Fusion 360 is the approachable generative design tool, nTop is the specialized one.</p>
<p>I first used nTop on a lightweight satellite bracket project a few years ago, and it immediately did things that Fusion couldn't touch. Graded lattice structures that transition from solid at the mounting surfaces to sparse internal lattice where stiffness requirements are lower. Conformal cooling channels that follow the part contour rather than running in straight drill lines. Surface textures driven by stress fields. This is the tool you reach for when the geometry needs to be genuinely advanced, not just topology-optimized.</p>
<p>The learning curve is steep. nTop uses a node-based workflow rather than a traditional feature tree, which feels more like visual programming than CAD modeling. Every operation is a block in a graph, and you connect them to build your design logic. For engineers who think in workflows and parameters, this is powerful. For someone who just wants to run a quick optimization on a bracket, it's overkill.</p>
<p>nTop's sweet spot is additive manufacturing. The lattice structures, surface textures, and organic shapes it produces are designed for processes that can build any shape: SLS, MJF, DMLS. If you're machining your parts on a 3-axis mill, most of what nTop can do is irrelevant because you can't make it. But if you're printing in titanium for aerospace or polymer for medical devices, nTop is one of the best tools available.</p>
<p>Pricing is enterprise-tier. They don't publish list prices. If you have to ask, you probably need a purchasing department to handle the conversation.</p>
<h2>Altair Inspire</h2>
<p>Altair Inspire is the tool that most directly connects generative design to serious FEA heritage. Altair has been in the structural optimization business for decades. Their solver, OptiStruct, is one of the most validated optimization engines in the industry. Inspire is the interface they built to make that solver accessible to design engineers who aren't simulation specialists.</p>
<p>The workflow is straightforward. Import or create geometry, define your design space, apply loads and constraints, set the optimization objective (minimum weight, maximum stiffness, whatever you need), and run the study. Inspire handles the meshing, the solving, and the smoothing of the result into a usable solid body. The output is a B-rep solid you can export as STEP and refine in your CAD tool of choice.</p>
<p>What I like about Inspire: the results are structurally trustworthy. OptiStruct has been validated against physical test data for years, and the optimization algorithm is mature. When Inspire tells you a shape meets your structural requirements, you can believe it with more confidence than most other tools. The manufacturing constraint support is decent, with options for casting, milling, forging, and additive, though not as refined as Fusion's milling constraints.</p>
<p>What I don't like: the interface feels like it was designed by simulation engineers for simulation engineers. It's functional but not intuitive. The learning curve is moderate, somewhere between Fusion (easy) and nTop (hard). And the integration with mainstream CAD tools is a two-step process: optimize in Inspire, export, import into your CAD tool, clean up. There's no native parametric link back to your original model.</p>
<p>Pricing is commercial license territory. Altair offers various bundles, and the cost depends on which solvers and capabilities you need. Not cheap, but not enterprise-only either.</p>
<h2>ANSYS Discovery</h2>
<p>ANSYS Discovery is ANSYS's answer to the "simulation should be accessible to designers" argument that has been going on for about fifteen years. It combines real-time simulation (you drag a load arrow and the stress colors update immediately) with topology optimization in a single interface. The pitch is that designers can explore and optimize without switching to a separate FEA tool.</p>
<p>I've used Discovery for quick what-if studies where I need to see how a load path changes when I modify geometry. The real-time simulation is genuinely impressive for that use case. You move a feature, the stress plot updates in seconds, and you develop an intuition for where material is needed and where it isn't. Then you run a topology optimization on the same model and get a result that confirms (or contradicts) your intuition.</p>
<p>The topology optimization in Discovery is solid but not as flexible as Altair's. Manufacturing constraints exist but are more limited. The real-time simulation runs on GPU, so you need decent hardware. And this is ANSYS, which means the pricing conversation involves a sales team, an NDA on the quote, and a number that makes your manager blink.</p>
<p>For someone already in the ANSYS ecosystem for simulation, Discovery adds generative capabilities without leaving the platform. For someone looking for generative design as a standalone capability, there are easier and cheaper entry points.</p>
<h2>SolidWorks Topology Optimization</h2>
<p>SolidWorks has offered topology optimization through its Simulation add-on for a few years now, but it's still catching up to the dedicated tools. The Topology Study in SolidWorks Simulation lets you define a design space, set loads and constraints, specify manufacturing controls (minimum member size, demold direction, symmetry), and run the optimization. The result is a smoothed mesh that you can convert to a B-rep body using the Surface from Mesh tools.</p>
<p>The conversion step is where the pain lives. Going from the optimized mesh to a clean parametric solid that you can actually work with in SolidWorks requires manual effort. The Surface from Mesh tools have improved over the years, but the result is rarely as clean as what Fusion or Inspire produces. You end up with a lot of patching, stitching, and manual surface work to get geometry that's ready for detailing.</p>
<p>Where SolidWorks topology optimization makes sense: you're already a SolidWorks user with a Simulation license, you need basic structural optimization for relatively simple parts, and you don't want to leave the SolidWorks ecosystem. The integration with the rest of SolidWorks (drawings, assemblies, configurations) means the result, once cleaned up, fits naturally into your existing workflow.</p>
<p>Where it doesn't make sense: complex optimization problems with multiple load cases, advanced manufacturing constraints, or any scenario where you need the solver to be smarter than "remove material where stress is low." The <a href="/posts/solidworks-ai-features-2026">SolidWorks AI features in 2026</a> are expanding, but topology optimization specifically is still behind the dedicated tools.</p>
<p>Pricing is bundled with SolidWorks Simulation Professional or Premium. If you already have the license, the topology study is included. If you don't, the Simulation add-on is a significant upgrade cost.</p>
<h2>Siemens NX</h2>
<p>Siemens NX has topology optimization capabilities through its built-in structural simulation tools. NX is an enterprise platform, and the generative design features reflect that: powerful, deeply integrated, and correspondingly complex to learn. The optimization runs locally, which means no cloud dependency but also no cloud scaling. You need serious hardware for large problems.</p>
<p>NX's strength is in the integration with the rest of the Siemens PLM ecosystem. If your company runs Teamcenter for data management and NX for design, the topology-optimized geometry flows naturally through the same PLM pipeline as everything else. For large organizations with established Siemens infrastructure, this is a meaningful advantage.</p>
<p>For individual users or small teams, NX is hard to justify for generative design alone. The licensing cost is high, the learning curve is steep, and the topology optimization capability, while competent, is not significantly better than what's available in more accessible tools. You'd choose NX for generative design because you already use NX for everything else, not because it's the best standalone generative design tool.</p>
<h2>The manufacturing constraint gap</h2>
<p>Every generative design tool promises manufacturing constraints. Not all of them deliver equally. This matters because <a href="/posts/ai-topology-optimization">AI topology optimization</a> without manufacturing constraints produces beautiful geometry that exists only in the digital world. The solver doesn't care that your shape can't be machined. It cares about mass and stiffness. Constraints are what keep the output connected to physical reality.</p>
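To make that concrete with the most common constraint: minimum member size is typically enforced with a density filter, which blurs the design field at the scale of the smallest feature you can manufacture. A 1D toy sketch (illustrative only; real solvers filter in 2D/3D and usually filter sensitivities as well):

```python
def density_filter(rho, radius):
    """Blur a density field over the given cell radius -- the standard
    mechanism behind minimum-member-size control in topology
    optimization. Members thinner than the radius get smeared below
    the solid threshold and drop out of the design. (1D toy version.)
    """
    out = []
    for i in range(len(rho)):
        lo, hi = max(0, i - radius), min(len(rho), i + radius + 1)
        window = rho[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A one-cell-wide member survives the raw solver but not the filter:
# thresholded at 0.5 solid, it simply disappears.
raw = [0, 0, 0, 1, 0, 0, 0]
print([round(v, 2) for v in density_filter(raw, radius=1)])
# -> [0.0, 0.0, 0.33, 0.33, 0.33, 0.0, 0.0]
```

Widen the member to three cells and it survives the same filter — which is exactly how a solver "knows" your smallest end mill diameter without knowing anything about milling.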
<p>Fusion 360 has the best manufacturing constraint interface I've used. The milling constraints produce geometry that actually looks like a milled part: accessible tool paths, appropriate radii, no undercuts that would require five-axis when you specified three. Altair Inspire is close behind, with good casting and milling constraints. nTop handles additive beautifully but doesn't concern itself much with subtractive processes, which makes sense given its user base. ANSYS and SolidWorks have basic manufacturing controls but fewer options and less refinement.</p>
<p>If your parts are going to be machined, Fusion or Altair are the safer choices. If they're going to be printed, nTop or Fusion. If you're casting, Altair. The constraint quality varies enough to matter in the final output.</p>
<h2>Generative design versus text-to-CAD</h2>
<p>These are different tools for different problems, and I've covered the distinction in detail in the <a href="/posts/text-to-cad-vs-generative-design">text-to-CAD vs generative design</a> comparison. But the short version matters here: generative design optimizes geometry under engineering constraints. <a href="/posts/text-to-cad-guide">Text-to-CAD</a> creates geometry from descriptions. One produces structurally validated shapes. The other produces shapes that look right.</p>
<p>If you need a bracket and you know the loads, the material, and the manufacturing process, generative design gives you a shape that's provably good. If you need a bracket and you just want something that looks like a bracket, text-to-CAD gives you that in seconds.</p>
<p>The interesting development is that some <a href="/posts/ai-in-cad-software">AI in CAD software</a> is starting to combine both approaches: use AI to set up the optimization problem, then use traditional solvers to find the answer. We're not there yet in any shipping product, but the trajectory is clear.</p>
<h2>Which one is worth trying</h2>
<p>If you're a Fusion 360 user, try the generative design extension first. It's the easiest entry point and the manufacturing constraints are good enough for real work. The subscription cost is annoying but the capability is genuine.</p>
<p>If you're doing advanced additive manufacturing work with lattices, conformal features, or field-driven design, nTop is the tool that can actually do those things. Nothing else comes close for that specific category.</p>
<p>If you need trusted structural optimization results and you value solver maturity over interface polish, Altair Inspire is the quiet workhorse. OptiStruct has earned its reputation.</p>
<p>If you're already in SolidWorks or NX and just want to add basic topology optimization without leaving your ecosystem, the built-in tools will get you started. They won't wow you, but they'll keep you inside the workflow you already know.</p>
<p>My own setup: Fusion 360 for most generative design work, nTop when the geometry demands it. I've accepted that no single tool covers everything, which is annoying but also honest. The vendors who claim otherwise are usually the ones whose constraint library is the shallowest.</p>
<p>The technology is real. The output requires judgment. And the best generative design tool is still the one where you understand the manufacturing process well enough to set up the problem correctly. The solver is only as good as the constraints you give it, and constraints come from experience, not subscriptions.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AI CAD for architecture: different world, same promises</title>
      <link>https://blog.texocad.ai/posts/ai-cad-for-architecture</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-cad-for-architecture</guid>
      <pubDate>Thu, 02 Apr 2026 00:00:00 GMT</pubDate>
      <description>Architecture has its own AI tools, its own problems, and its own version of vendors promising things that don&apos;t work in production. The overlap with mechanical CAD AI is smaller than you&apos;d think.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>architecture</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI CAD for architecture uses different tools than mechanical CAD: Midjourney/DALL-E for concept visualization, Hypar and Testfit for building configurators, and Autodesk Forma for environmental analysis. Text-to-CAD as used in mechanical design (Zoo.dev, etc.) doesn&apos;t apply to architecture. BIM requirements make AI-generated geometry even less useful.</p>
<p>AI CAD for architecture uses a completely different set of tools than mechanical CAD AI, because buildings are not brackets. Architectural design requires BIM data, code compliance, environmental analysis, and coordination between dozens of disciplines, none of which text-to-CAD tools even attempt to address. I learned this the hard way last year when an architect friend asked me to look at "the AI tools everyone's talking about" and I spent an evening realizing that my entire frame of reference was wrong.</p>
<p>I'd been testing <a href="/posts/text-to-cad-guide">text-to-CAD tools</a> for months at that point. Zoo.dev, AdamCAD, the usual suspects. My friend watched me generate a bracket from a text prompt and said, "That's cute. Can it do a floor plan with fire egress compliance and structural column placement?" I stared at him. He stared at me. We both realized the AI-in-CAD conversation has been happening in two completely different rooms, and neither room knows what the other is talking about.</p>
<p>This post is for the people in my room, the mechanical CAD users, who keep seeing "AI in architecture" headlines and wondering if it's the same technology. It's not. Here's what's actually happening.</p>
<h2>Why architectural CAD is a different problem</h2>
<p>Mechanical CAD and architectural CAD share a surface similarity: both involve creating 3D geometry on a computer. That's roughly where the overlap ends.</p>
<p>In mechanical CAD, a bracket is a bracket. It has geometry, dimensions, material properties, and tolerances. The file contains the shape and maybe some metadata. The geometry is the deliverable.</p>
<p>In architectural CAD, a wall is not just geometry. It's a BIM object. It has a type (load-bearing, partition, curtain), a fire rating, an acoustic rating, a thermal resistance value, material layers (gypsum board, insulation, vapor barrier, structure, more gypsum board), connections to floors and ceilings, penetrations for mechanical systems, and relationships to every other element in the building model. The geometry is maybe 20% of what the wall object contains. The other 80% is data that makes the building compliant, constructable, and coordinated across disciplines.</p>
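As a rough sketch of that 20/80 split, here's what a wall object might carry as a data structure (field names are hypothetical, loosely inspired by IFC's wall entities; real BIM schemas are far larger):

```python
from dataclasses import dataclass, field

@dataclass
class WallLayer:
    material: str
    thickness_mm: float

@dataclass
class Wall:
    """Sketch of a BIM wall element. Note that the geometry -- the
    only part a text prompt could plausibly generate -- is one field
    among many; the rest is what makes the building compliant,
    constructable, and coordinated."""
    geometry: object            # the shape itself
    wall_type: str              # load-bearing / partition / curtain
    fire_rating_min: int        # required rating in minutes
    acoustic_rating_db: float
    thermal_resistance_r: float
    layers: list = field(default_factory=list)              # gypsum, insulation, ...
    hosted_openings: list = field(default_factory=list)     # doors, windows, duct penetrations
    connected_elements: list = field(default_factory=list)  # floors, ceilings, adjacent walls

w = Wall(geometry=None, wall_type="partition", fire_rating_min=60,
         acoustic_rating_db=45.0, thermal_resistance_r=2.5,
         layers=[WallLayer("gypsum board", 12.5),
                 WallLayer("mineral wool", 100.0),
                 WallLayer("gypsum board", 12.5)])
print(len(w.layers))  # -> 3
```

Every one of those non-geometry fields has to be correct for its location in the building, which is why generating the shape alone solves the easy fifth of the problem.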
<p>This is why <a href="/posts/what-is-text-to-cad">text-to-CAD</a> as it exists in mechanical design doesn't translate to architecture. Generating a wall shape from a text prompt is trivial. Generating a wall that carries its full BIM data, connects correctly to adjacent elements, meets fire code requirements for its location in the building, and coordinates with the HVAC ducts passing through it? That's a completely different problem. No text prompt captures that level of intent.</p>
<p>The tools that matter in architecture aren't trying to generate geometry from words. They're trying to solve the problems architects actually have, which are about configuration, compliance, analysis, and visualization.</p>
<h2>Concept visualization: where AI actually landed</h2>
<p>The most visible use of AI in architecture right now is concept visualization. Midjourney, DALL-E, Stable Diffusion, and similar image generators have been adopted by architecture firms for early-stage design exploration. An architect types a description of a building, a material palette, a mood, and gets a rendered image in seconds.</p>
<p>This is useful in the same way that a concept sketch is useful: it communicates an idea without committing to technical decisions. A partner at a mid-size firm I spoke with uses Midjourney to generate a dozen exterior concepts before a client meeting, pins the three that feel right, and uses them as conversation starters. The images aren't architecture. They're not BIM models. They're not even 3D models. They're pictures. But for aligning on aesthetic direction before any real design work starts, they save time that would otherwise be spent on hand sketches or SketchUp studies that the client might not even like.</p>
<p>The limit is obvious: these images contain no technical information. The AI doesn't know about structural grids, floor-to-floor heights, code-required setbacks, or the difference between a feasible facade system and a beautiful impossibility. I've seen AI-generated architectural images with cantilevers that would require structural engineering bordering on magic, window-to-wall ratios that would violate energy code in every climate zone, and building massing that ignores the lot boundary by a comfortable margin. They look wonderful. They communicate mood. They don't communicate architecture.</p>
<p>For mechanical CAD users wondering how this compares to <a href="/posts/text-to-cad-vs-text-to-3d">text-to-CAD vs text-to-3D</a>, it's the same distinction at a larger scale. Image generation creates pictures. Model generation creates geometry. Architecture mostly uses the picture side, because the model side requires too much embedded data to generate from prompts.</p>
<h2>Building configurators: Hypar and TestFit</h2>
<p>The AI tools that come closest to "generative design for architecture" aren't really AI in the mechanical-CAD sense. They're parametric configurators with optimization layers.</p>
<p>Hypar is a cloud-based platform where architects and developers define building parameters (site boundaries, floor counts, unit mix, parking requirements, structural grid) and the system generates building configurations that meet those parameters. It's not generating architecture from a text prompt. It's solving a constraint satisfaction problem: given this site, these zoning rules, and this program, what configurations work?</p>
<p>TestFit does something similar for multifamily and mixed-use developments. You define the site, the unit types, the parking requirements, and it generates feasible building layouts showing unit stacking, corridor configurations, and parking garage layouts. The output is diagrammatic rather than BIM-ready, but it answers the fundamental feasibility question: can this program fit on this site?</p>
<p>These tools are useful because the early architectural design problem is largely about configuration and fit. Can I put 200 apartments on this lot and still meet parking requirements and setback rules? That question used to take an architect a week of sketch studies. Hypar or TestFit answers it in minutes, with multiple options.</p>
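<p>The feasibility question these tools answer can be sketched as plain arithmetic. The rules and numbers below are invented for illustration (real zoning is far messier), but the shape of the computation is the same: buildable area against program plus parking.</p>

```python
def fits(frontage_m, depth_m, setback_m,
         units, unit_area_m2, floors, parking_ratio, stall_area_m2):
    """Rough massing feasibility check; every rule here is illustrative."""
    buildable = (frontage_m - 2 * setback_m) * (depth_m - 2 * setback_m)
    footprint_needed = units * unit_area_m2 / floors * 1.35  # +35% circulation
    parking_area = units * parking_ratio * stall_area_m2
    # Assume surface parking shares the buildable area with the footprint.
    return footprint_needed + parking_area <= buildable

# 200 units of 75 m2 over 6 floors, 1.2 stalls per unit at 30 m2 per stall
print(fits(frontage_m=100, depth_m=120, setback_m=6,
           units=200, unit_area_m2=75, floors=6,
           parking_ratio=1.2, stall_area_m2=30))  # False
```

<p>With these toy numbers the 200-unit program doesn't fit with surface parking, which is exactly the kind of early answer a configurator surfaces before anyone draws a plan.</p>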
<p>The connection to <a href="/posts/ai-in-cad-software">AI in CAD software</a> in the mechanical world is thin. These tools don't generate detailed geometry. They don't produce BIM models ready for construction documents. They produce feasibility studies and configuration options that architects then develop into real designs using Revit, ArchiCAD, or whatever their BIM platform is. The AI handles configuration. The architect handles architecture.</p>
<h2>Environmental analysis: Autodesk Forma</h2>
<p>Autodesk Forma (formerly Spacemaker) is probably the most interesting AI tool in architecture right now, because it addresses a problem that architects have always struggled with: understanding how a building design affects and is affected by its environment before committing to a geometry.</p>
<p>Forma analyzes wind patterns around proposed buildings, solar exposure on facades and surrounding streets, daylight availability inside units, noise propagation from nearby roads, and microclimate effects. You place building masses on a site and the analysis runs in real time, showing you which facades get adequate daylight, which outdoor spaces will be uncomfortably windy, and which units won't meet local daylight requirements.</p>
<p>This is genuine AI applied to a genuine architectural problem. The analysis uses machine learning models trained on CFD (computational fluid dynamics) and environmental simulation data to produce approximate results in seconds rather than the hours or days that full simulations would require. The trade-off is precision: Forma gives you directional answers (this facade will be windy, this courtyard will be shaded) rather than precise numbers (the wind speed at this point will be 4.7 m/s). For early design decisions, directional is enough.</p>
<p>I find Forma interesting because it's one of the few AI tools in any CAD domain that solves a problem humans genuinely can't solve intuitively. I can sketch a bracket from experience and get close to what the optimizer would produce. An architect cannot intuit the wind patterns around a 30-story building next to two existing towers and a river. The physics is too complex for human intuition, which makes it a good problem for AI.</p>
<h2>What mechanical text-to-CAD can't do here</h2>
<p>People occasionally ask whether tools like Zoo.dev or AdamCAD could be applied to architectural design. They can't, and the reasons go beyond technical capability.</p>
<p><a href="/posts/ai-cad-for-real-work">AI-generated CAD for real work</a> in mechanical design means producing geometry that can be manufactured. The equivalent in architecture would be producing a building model that can be permitted, bid, and constructed. The gap between those two is enormous.</p>
<p>An architectural model for construction documents contains thousands of objects, each with properties, relationships, and code compliance data. A door is not just a rectangle in a wall. It's an assembly with a fire rating, an accessibility clearance, hardware specifications, a frame type, and a location that satisfies egress path requirements. None of this exists in a geometry-only model.</p>
<p>Text-to-CAD's output, geometry without engineering data, is already a limitation in mechanical design. In architecture, where the data-to-geometry ratio is even higher, geometry alone is nearly useless. You can't permit a building from shapes. You can't coordinate MEP systems through shapes. You can't do a code review on shapes. The BIM requirement kills the text-to-geometry approach before it starts.</p>
<h2>Where AI helps architects today</h2>
<p>The honest list is short but real.</p>
<p>Concept visualization through image generators. Fast, cheap, good for client communication and early design exploration. Not architecture, but useful before architecture begins.</p>
<p>Site and building configuration through tools like Hypar and TestFit. Answers feasibility questions quickly. Saves weeks of sketch studies in early project phases. Doesn't produce BIM-ready output.</p>
<p>Environmental analysis through Forma and similar tools. Provides directional environmental feedback in real time during massing studies. Genuinely useful for decisions that humans can't make intuitively.</p>
<p>Code compliance checking through emerging tools that scan BIM models against building code requirements. This is early-stage but promising. The code compliance process is rule-based and data-heavy, which makes it a reasonable AI application.</p>
<p>Documentation assistance through AI that helps generate specifications, schedules, and code narratives from BIM data. This is more about language models than CAD models, but it addresses a real time sink in architectural practice.</p>
<h2>Where the promises outrun reality</h2>
<p>Vendors love to demo AI generating floor plans from text descriptions. "A 2-bedroom apartment with an open kitchen and a corner living room." The result is a floor plan that looks plausible until you measure the corridor width (too narrow for accessibility), check the structural column locations (not on a grid), notice the plumbing walls don't stack between floors, and realize the window placement violates the energy code.</p>
<p>The demo-to-production gap in architectural AI is, if anything, wider than in mechanical CAD. A mechanical bracket that's dimensionally close is still useful as a starting point. A floor plan that violates accessibility code is useful as a conversation piece and nothing else.</p>
<p>I've seen architects excited about AI-generated floor plans and architects dismissive of them, and the divide usually comes down to whether they've tried to take one past the concept phase. The concept is fine. The execution requires an architect to redo most of the work, which is the same conclusion I reach about <a href="/posts/ai-cad-for-real-work">text-to-CAD for real work</a> in the mechanical world, just with higher stakes and more regulations.</p>
<h2>The honest comparison</h2>
<p>Mechanical CAD AI and architectural CAD AI are solving different problems with different tools for different users. The marketing makes them sound like cousins. In practice, they barely share a vocabulary.</p>
<p>Mechanical text-to-CAD generates parts. Architectural AI generates images, configurations, and analyses. Neither industry has AI that produces production-ready output without significant human work.</p>
<p>The interesting parallel is the maturity curve. Both fields are at the "useful for early-stage exploration, not ready for production deliverables" phase. Both have vendors overpromising. Both have users who are excited and users who are skeptical, and both groups are right about different things.</p>
<p>If you work in mechanical CAD and someone asks you about AI in architecture, the honest answer is: different tools, same growing pains. And if an architect asks you whether text-to-CAD could help with their work, the honest answer is no, but not because the technology is bad. It's because the problem is different, the data requirements are different, and a building is not a bracket, no matter how much the LinkedIn posts want them to be the same story.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Will AI replace CAD designers? A CAD designer&apos;s honest answer.</title>
      <link>https://blog.texocad.ai/posts/will-ai-replace-cad-designers</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/will-ai-replace-cad-designers</guid>
      <pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate>
      <description>No. But it will change what the job looks like, which tasks feel tedious, and which skills keep you employed. Here&apos;s what actually matters.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>career</category>
      <category>opinion</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI will not replace CAD designers. It will automate some geometry creation tasks (simple parts, first drafts, repetitive features) but cannot replace design intent, manufacturing knowledge, assembly thinking, or client communication. CAD designers who learn to work with AI tools will be more productive. Those who ignore AI entirely may lose routine work.</p>
<p>No, AI will not replace CAD designers. Not this year, not in five years, and probably not in any timeframe that should affect your career decisions today. I'm saying this as someone who has been doing CAD work for over a decade, who actually uses <a href="/posts/text-to-cad-guide">text-to-CAD tools</a> regularly, and who spent last Thursday afternoon watching an AI-generated bracket fail every DFM check my machinist could think of while my lunch got cold on the desk behind me.</p>
<p>The anxiety is real, though. I see it on forums, in LinkedIn comments, in emails from students asking whether they should bother finishing their mechanical design program. Every time a demo shows a model appearing from a text prompt in fifteen seconds, someone in the comments writes "CAD designers are done." And every time, I think about the six hours I spent last week on a single assembly where nothing the AI generated would have survived the first design review.</p>
<p>Let me walk through what's actually happening, what AI can and can't do in CAD right now, and what it means for people who make a living pushing geometry around a screen.</p>
<h2>What AI can actually do in CAD today</h2>
<p>Let's start with an honest inventory. Current AI tools, the ones that exist and work, not the ones in research papers or keynote demos, can do a few things reasonably well.</p>
<p>They can generate simple prismatic geometry from text descriptions. Brackets, plates, basic enclosures, standoffs. The kind of parts you'd create with sketch-extrude-cut-fillet in Fusion 360 or SolidWorks. If you describe a rectangular plate with four holes, you'll get a rectangular plate with four holes. The dimensions might be off by a millimeter. The hole positions might drift. But the gross geometry arrives, and for concept work or a first draft, that's something.</p>
<p>They can act as copilots inside existing CAD tools. Autodesk's assistant, Siemens NX's AI chat, PTC's Creo assistant. These tools help with command discovery, suggest operations, and sometimes automate repetitive actions like patterning features or applying standard hole sizes. Think of them as a colleague who knows the menu structure better than you do.</p>
<p>They can search and retrieve. Finding similar parts in a PLM system, recommending existing designs before you model from scratch, surfacing relevant documentation. This is where AI is genuinely useful today, and nobody talks about it because it's not as exciting as watching geometry appear from thin air.</p>
<p>That's roughly the full list of what works in production right now. The <a href="/posts/ai-in-cad-software">AI in CAD software</a> landscape is expanding, but what ships and what gets demoed are very different things.</p>
<h2>What AI cannot do</h2>
<p>Here's where the list gets longer, and where the "CAD designers are toast" narrative falls apart.</p>
<p>AI cannot understand design intent. It generates shapes. It doesn't know why those shapes exist. It doesn't know that the boss on the side of the housing is there because a PCB needs to mount at that specific height, that the slot on the back panel has to clear a cable harness, or that the two holes on the left flange need to align with holes on a mating bracket that lives in a different subassembly. Design intent is the connective tissue of real CAD work, and AI doesn't have it.</p>
<p>AI cannot do assembly design. Real products are assemblies of parts that need to fit together, move together, be assembled in a specific order, and be disassembled for service. Text-to-CAD generates one part at a time with no concept of the parts around it. The bracket it generates has no relationship to the frame it mounts on, the cable it holds, or the fasteners that go through it. I tried this enough times to know the results are consistently disappointing. The <a href="/posts/text-to-cad-limitations">text-to-CAD limitations</a> are fundamental here, not just version-level gaps.</p>
<p>AI cannot specify tolerances, GD&#x26;T, or surface finish. These aren't decorations you add to a drawing. They define whether a part actually works. A bearing bore that's 0.02 mm too large is a failed part. A mating surface that isn't flat within spec causes leaks. A hole that doesn't have position tolerance can end up wherever the shop decides, and the shop will decide based on what's cheapest for them, not what works for your assembly. None of this exists in AI-generated output.</p>
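<p>Tolerance work is also where the arithmetic lives. Here's a minimal stack-up sketch with invented values, showing the two standard ways to accumulate an axial stack: worst case and root-sum-square.</p>

```python
import math

# Axial stack of four parts between a shaft shoulder and a retaining ring.
# Each entry: (nominal_mm, plus_minus_tolerance_mm). Values are illustrative.
stack = [(12.0, 0.05), (3.0, 0.02), (25.0, 0.10), (1.5, 0.02)]

nominal = sum(n for n, _ in stack)
worst_case = sum(t for _, t in stack)              # every part at its limit
rss = math.sqrt(sum(t * t for _, t in stack))      # statistical (RSS) stack

print(f"nominal {nominal:.1f} mm, worst case ±{worst_case:.2f}, RSS ±{rss:.3f}")
```

<p>The gap between ±0.19 worst case and roughly ±0.12 RSS is a judgment call about how much risk to accept, and no generated model makes that call for you.</p>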
<p>AI cannot evaluate manufacturability. It doesn't know that a sharp internal corner can't be CNC milled. It doesn't know that a thin wall will chatter on a lathe. It doesn't know that vertical faces on injection-molded parts need draft or the part won't eject from the mold. The DFM knowledge that experienced designers carry around in their heads doesn't exist in any text-to-CAD model I've tested.</p>
<p>AI cannot communicate with clients, machinists, or suppliers. It can't sit in a design review and explain why the geometry is shaped the way it is. It can't negotiate a tolerance stack-up with the quality team. It can't call the sheet metal shop and ask whether they can hold a 0.5 mm bend radius on 1.6 mm stainless. Half of a CAD designer's job is communication, and the software doesn't even attempt it.</p>
<h2>The tasks that are safe</h2>
<p>If your job is primarily any of the following, AI is not coming for it anytime soon.</p>
<p>Assembly design. Building products from multiple parts with defined relationships, clearances, and assembly sequences. This requires contextual thinking that AI has no architecture for.</p>
<p>DFM and process-aware design. Designing parts that account for machining access, mold flow, bend sequences, weld distortion, or thermal management. This knowledge lives in the intersection of geometry and physics, and AI tools don't have that intersection.</p>
<p>GD&#x26;T and tolerance specification. Defining what matters on a part, what needs to be precise and what can be loose. This is judgment work that depends on function, process, and cost, all at once.</p>
<p>Client and team communication. Explaining design decisions. Defending choices in review. Adapting geometry because the supplier can't hold the original spec. Managing revision cycles. None of this is automatable.</p>
<p>Complex surface design. Ergonomic forms, class-A surfaces, aerodynamic profiles, anything with curvature continuity requirements. AI can barely handle a lofted surface, let alone a consumer product exterior.</p>
<h2>The tasks at risk</h2>
<p>Being honest means admitting that some CAD tasks are vulnerable. The work most at risk is repetitive geometry creation for simple parts. If your job is primarily creating basic brackets, adapter plates, simple enclosures, and standoff drawings from rough specifications, AI tools are already faster than you at generating the first draft.</p>
<p>Routine drawing creation is another area. As AI gets better at reading 3D models and generating 2D drawings with standard views, section cuts, and dimension placement, the time spent on manual drawing setup will shrink. Not disappear, because someone still needs to add GD&#x26;T and check the output, but shrink.</p>
<p>Standard part modeling where the geometry follows predictable patterns. Families of similar parts. Configurations that differ only in a few dimensions. This kind of repetitive work is exactly what automation does well.</p>
<p>I don't say this to scare anyone. I say it because pretending the threat doesn't exist for any CAD task is as dishonest as claiming the whole profession is dying. <a href="/posts/ai-cad-for-real-work">AI CAD for real work</a> has narrow capabilities today, and those capabilities overlap with some real job functions.</p>
<h2>How other industries handled similar automation anxiety</h2>
<p>CAD designers aren't the first group to face this question. Accountants heard it when spreadsheet software arrived. Graphic designers heard it when desktop publishing appeared. Programmers hear it every six months.</p>
<p>In every case, the pattern has been the same: the tools automated the mechanical part of the work, the profession shifted toward higher-judgment tasks, and the people who learned to use the tools became more productive while the people who only did the mechanical part struggled.</p>
<p>Accountants who only did data entry lost to spreadsheets. Accountants who understood tax strategy, financial planning, and business context got more done with better tools. The profession didn't shrink. It changed shape.</p>
<p>I expect the same thing to happen with CAD design. The geometry-generation part of the job will get faster. The engineering-judgment part will become more valuable. The people who can do both, use AI for speed and apply engineering knowledge for quality, will be the ones who thrive.</p>
<h2>What skills to develop</h2>
<p>If you're a CAD designer thinking about the next five years, here's what I'd invest in.</p>
<p>Manufacturing knowledge. Understand how parts are actually made. CNC, injection molding, sheet metal, additive. The more you know about processes, the harder you are to replace, because AI doesn't know any of it. This is the single biggest differentiator.</p>
<p>Tolerance and GD&#x26;T fluency. Being able to specify what matters on a part, and why, is a skill that AI can't touch. Most CAD designers are weak here. Getting strong at it makes you more valuable immediately, AI or no AI.</p>
<p>Assembly and systems thinking. Understanding how parts work together, how tolerance stacks accumulate, how thermal expansion affects fits, how assembly sequences constrain geometry. This is where senior designers earn their salary.</p>
<p>AI tool fluency. Learn to use the tools. Not because they replace you, but because they can save you time on the boring parts. Generate a first draft with <a href="/posts/text-to-cad-guide">text-to-CAD</a>, then spend your time on the engineering work that matters. The people who refuse to touch AI tools will be slower than the people who use them as a starting point.</p>
<p>Communication skills. The ability to explain a design decision, run a review, negotiate with a supplier, or translate between engineering and business gets more valuable as the mechanical generation work gets cheaper.</p>
<h2>My honest personal assessment</h2>
<p>I've been doing CAD work since my early twenties. I've seen software come and go. I've watched features get announced at conferences that never shipped, and I've watched quiet updates that genuinely changed how I work. I've rebuilt models at 11 PM because a supplier changed their minimum bend radius. I've argued with feature trees that turned hostile after one sketch edit. I've sat through demos that made everything look effortless and then spent the next month discovering all the ways the real tool differed from the demo.</p>
<p>AI in CAD is real and it's going to keep getting better. But the gap between "generates geometry from text" and "replaces a CAD designer" is enormous. It's the same gap that separates "autocomplete writes code" from "replaces a software engineer." The mechanical act of creating lines and extrusions is a small part of what a CAD designer actually does. The thinking, the constraints, the judgment, the communication, the manufacturing awareness, that's the job. And it's not going anywhere.</p>
<p>The designers who will struggle are the ones who only push buttons. The ones who don't understand why the geometry is shaped the way it is, who can't evaluate whether a part can be manufactured, who can't adapt when constraints change. If your entire value is speed at geometry creation, AI is faster. If your value is engineering judgment applied through geometry, you're fine.</p>
<p>I'm not worried about my job. I'm mildly annoyed that I now have to evaluate AI-generated output in addition to everything else, and that some of that output arrives with the confidence of a finished part and the quality of a first sketch. But that's a workflow problem, not an existential one. The AI makes geometry. I make parts that work. Those are different things, and they're going to stay different for a long time.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Should you still learn CAD if AI can generate models?</title>
      <link>https://blog.texocad.ai/posts/should-i-learn-cad-if-ai</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/should-i-learn-cad-if-ai</guid>
      <pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate>
      <description>Yes. And here&apos;s why that answer won&apos;t change for a long time.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>career</category>
      <category>education</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Yes, absolutely learn CAD. AI generates geometry but doesn&apos;t understand design intent, manufacturing constraints, assembly relationships, or tolerance specification. Learning CAD teaches you engineering thinking that AI tools can&apos;t replace. AI makes CAD faster for experienced users. It doesn&apos;t make CAD knowledge unnecessary.</p>
<p>Yes, learn CAD. AI generates geometry but doesn't understand design intent, manufacturing constraints, assembly relationships, or tolerance specification. That answer won't change for a long time, and I'm confident enough to say it without hedging because I've spent the last year testing every text-to-CAD tool I can get my hands on. I've watched them generate brackets in seconds and fail at everything that makes a bracket actually work in a product. The tools are impressive and profoundly limited, and the limitations all live in the same place: the engineering knowledge that you'd learn by actually doing CAD work.</p>
<p>A student emailed me last month asking whether she should switch from mechanical engineering to "AI design." She'd seen a demo and figured traditional CAD skills were about to become obsolete. I wrote back a longer reply than she probably wanted, but I kept thinking about it afterward, sitting at my desk with a Fusion 360 assembly open that contained 47 parts, each one shaped by constraints the AI couldn't begin to understand. Here's the expanded version of what I told her.</p>
<h2>What CAD actually teaches you</h2>
<p>There's a misconception that learning CAD means learning to click buttons in SolidWorks or Fusion 360. That's like saying learning to write means learning to type. The software is the tool. What you're actually learning is how to think about physical objects in three dimensions, with constraints, and for a purpose.</p>
<p>When you sketch a rectangle in a CAD tool and add constraints, you're learning that geometry isn't just shape. It's relationships. The width is linked to a standard. The height is driven by the clearance above a PCB. The hole positions are symmetric because the bracket mounts in two orientations. None of this is in the shape itself. It's in the reasoning behind the shape, and that reasoning is what separates a model from a pile of surfaces.</p>
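<p>That idea, dimensions as relationships rather than numbers, can be sketched in a few lines. Everything here is illustrative: the names, the 2.5x edge-margin rule, the clearance allowance.</p>

```python
# A bracket where every dimension is derived from a few driving parameters,
# the way a constrained CAD sketch works. Names and rules are illustrative.
def bracket(pcb_width_mm, pcb_clearance_mm, screw_dia_mm):
    edge_margin = 2.5 * screw_dia_mm          # rule of thumb: 2.5x hole diameter
    width = pcb_width_mm + 2 * edge_margin    # driven by the PCB, not chosen
    height = pcb_clearance_mm + 3.0           # clearance above board + wall
    hole_spacing = width - 2 * edge_margin    # symmetric: mounts either way round
    return {"width": width, "height": height, "hole_spacing": hole_spacing}

a = bracket(pcb_width_mm=60, pcb_clearance_mm=12, screw_dia_mm=3)
b = bracket(pcb_width_mm=80, pcb_clearance_mm=12, screw_dia_mm=3)  # wider PCB
print(a["hole_spacing"], b["hole_spacing"])  # 60.0 80.0
```

<p>Change the driving parameter and the dependent dimensions follow. That propagation is what a constrained sketch gives you, and what a generated blob of geometry doesn't.</p>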
<p>When you extrude a boss and realize the draft angle needs to change because the mold can't release the part, you're learning DFM. When you try to assemble two parts and discover the bolt heads interfere with the cable harness, you're learning assembly thinking. When you add a tolerance callout and your machinist calls to negotiate, you're learning manufacturing communication.</p>
<p>These skills compound. After a year of CAD work, you start seeing parts differently. You look at an object and think about how it was made, how the mold split, where the gate was, why the wall is that thickness. After five years, you can look at a 3D model and spot problems before the simulation runs. After ten years, you can sketch a part on a napkin and your machinist knows exactly what you mean because you've internalized the constraints.</p>
<p>None of this is something an AI tool teaches you. It's what working in CAD teaches you.</p>
<h2>Why AI needs an informed operator</h2>
<p>Here's a scenario I've seen play out three times now. Someone with no CAD experience uses text-to-CAD to generate a part. The part looks great in the viewport. They export it, send it to a print service or a machine shop, and get back something that doesn't work. The dimensions are off. The features don't align with the mating part. The walls are too thin for the process. The internal corners can't be machined.</p>
<p>They don't know any of this is wrong because they don't know what right looks like. The AI gave them a shape that resembled their description, and they assumed resemblance was enough. It isn't. Resemblance is the starting point of engineering, not the conclusion.</p>
<p>An experienced CAD user looks at the same AI output and immediately spots problems. The wall thickness is wrong for injection molding. The hole pattern doesn't match the standard fastener spacing. The fillet radius is too small for the available cutter. They fix it in ten minutes because they know what they're looking at. The AI saved them some sketching time. Their knowledge saved them from a bad part.</p>
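<p>Many of those checks are rule-shaped, which is part of why an experienced eye runs them so fast. A toy version in Python, with thresholds invented for illustration rather than taken from any shop's actual specs:</p>

```python
# A toy DFM screen: the checks an experienced user runs by eye, written as
# rules. All thresholds here are illustrative, not real shop specifications.
def dfm_issues(part, process="cnc_milling"):
    issues = []
    if process == "cnc_milling":
        if part["min_internal_radius_mm"] < part["smallest_cutter_radius_mm"]:
            issues.append("internal corner tighter than available cutter")
        if part["min_wall_mm"] < 1.0:
            issues.append("wall likely too thin to machine without chatter")
    elif process == "injection_molding":
        if part["draft_deg"] < 0.5:
            issues.append("vertical faces need draft to eject from the mold")
    return issues

ai_bracket = {"min_internal_radius_mm": 0.0,   # AI output: sharp internal corner
              "smallest_cutter_radius_mm": 1.5,
              "min_wall_mm": 0.8}
print(dfm_issues(ai_bracket))
```

<p>Run it on a part with a sharp internal corner and a 0.8 mm wall and both problems get flagged. Knowing which rules to write, and when it's safe to break them, is the knowledge the operator brings.</p>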
<p>This is the same pattern we see with every productivity tool. Spell-checkers are useful. They're more useful to people who already know how to write. Autocomplete in code editors is helpful. It's more helpful to people who can read the suggestion and know whether it's correct. <a href="/posts/text-to-cad-for-beginners">Text-to-CAD for beginners</a> is a place to start, but starting there without also learning real CAD is like starting with spell-check without learning grammar.</p>
<h2>The calculator didn't kill math</h2>
<p>When calculators became cheap in the 1970s, people asked whether schools should still teach arithmetic. The answer was yes, obviously, because arithmetic is the foundation that lets you know whether the calculator's output makes sense. Nobody stopped teaching math. The curriculum shifted to spend less time on manual calculation and more time on problem-solving, but the underlying mathematical thinking didn't become optional.</p>
<p>The same logic applies here. AI CAD tools will handle more of the mechanical geometry generation over time. The curriculum will shift. Students will spend less time on basic extrusion exercises and more time on design intent, DFM, tolerancing, and assembly thinking. But the underlying knowledge, spatial reasoning, constraint thinking, manufacturing awareness, doesn't become optional because a tool can generate a bracket from a sentence.</p>
<p>If anything, the AI makes the knowledge more important. When geometry generation is fast and easy, the bottleneck shifts to evaluation. Can you tell whether the output is correct? Can you tell whether it's manufacturable? Can you tell whether it fits the assembly? Those questions require the same knowledge that traditional CAD education builds. The path to the question changed. The question didn't.</p>
<h2>What to learn first</h2>
<p>If you're starting from scratch, here's the order I'd recommend, based on what actually matters for the long term rather than what looks impressive fastest.</p>
<p>Start with a real parametric CAD tool. Fusion 360's personal license is free. SolidWorks has educational licenses. Pick one and commit for at least six months. Learn to sketch with constraints, not just draw lines. Learn to extrude, cut, fillet, and pattern. Learn how features relate to each other in the timeline or feature tree. This is the foundation everything else builds on.</p>
<p>Learn to think about manufacturing early. Before you've spent a year making models that only exist on screen, visit a machine shop. Watch a 3D printer. Look at how injection-molded parts are designed. Understanding that your model will become a physical object, and that the physical process constrains the geometry, is the single most valuable thing a CAD student can learn. I wish someone had told me this in my first year instead of my third.</p>
<p>Learn tolerancing and GD&#x26;T. This is the part most CAD education skips or defers, and it's the part that matters most once your models leave the screen. A model without tolerances is a suggestion. A model with tolerances is a specification. The difference matters every time someone tries to make your part.</p>
<p>Then learn AI tools. Once you have a foundation, <a href="/posts/text-to-cad-guide">text-to-CAD tools</a> become useful productivity aids instead of confidence traps. You'll be able to evaluate the output, fix the problems, and integrate AI-generated geometry into real workflows. The tools will make you faster because you already know what you're looking at.</p>
<h2>How to integrate AI into learning without skipping fundamentals</h2>
<p>I'm not suggesting you ignore AI tools while learning. That's unrealistic and unnecessary. But there's a difference between using AI as a learning aid and using it as a substitute for learning.</p>
<p>Good ways to use AI while learning CAD: generate a part with text-to-CAD, then open it in Fusion 360 and try to rebuild it manually. Compare your version with the AI version. Where did your dimensions match? Where did they differ? Is the AI version actually manufacturable? This turns AI output into a learning exercise rather than a crutch.</p>
<p>Another good approach: use AI to explore design options quickly, then pick the most promising one and model it properly from scratch. The AI helps with ideation. The manual modeling builds your skills. You get the best of both without short-circuiting the learning.</p>
<p>Bad ways to use AI while learning: generate every part with AI and never model anything yourself. Trust the AI output without measuring it. Skip learning parametric constraints because the AI doesn't use them. Skip learning DFM because the AI ignores it. These habits will make you fast at generating geometry and unable to evaluate whether that geometry is any good.</p>
<p>The distinction is the same as using Google Translate while learning a language. Reading the translation to check your work helps you learn. Reading only the translation and never writing your own sentences means you'll never actually learn the language. You'll just learn to paste.</p>
<h2>The skills that won't become obsolete</h2>
<p>Some CAD skills are more durable than others. Here's what I'd bet will still matter in ten years, regardless of how good AI gets.</p>
<p>Spatial reasoning and 3D thinking. Understanding how shapes relate in space, how cross-sections change along a path, how two parts fit together, how a flat pattern folds into a 3D shape. This is cognitive, not mechanical, and it's built through practice.</p>
<p>Design intent and constraint thinking. Knowing why a dimension has a specific value, how features relate to each other, and how the model should behave when requirements change. This is the soul of parametric CAD, and no AI tool generates it.</p>
<p>Manufacturing process knowledge. Knowing what a CNC machine can and can't do, what injection molding requires, how sheet metal bends, what welding distortion looks like. This knowledge comes from experience with physical processes, and it's what separates a model that looks like a part from a model that is a part.</p>
<p>Tolerance specification and fit engineering. Understanding H7/g6, knowing when to use position tolerance versus profile, knowing how tolerance stacks accumulate through an assembly. This is precision thinking that AI doesn't attempt.</p>
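<p>To make the precision thinking concrete, here's a minimal sketch of a fit calculation. The deviation values are the ISO 286 limits for a nominal 20 mm H7/g6 clearance fit; check them against the standard's tables before relying on them for real parts.</p>

```python
# Minimal clearance-fit calculation for a nominal 20 mm H7/g6 fit.
# Deviations are in millimetres; values are from the ISO 286 tables
# (verify against the standard before using for real parts).

def fit_limits(hole_lo, hole_hi, shaft_lo, shaft_hi):
    """Return (min_clearance, max_clearance) for a hole/shaft pair.

    Each argument is a deviation from the nominal size, so the
    actual hole diameter lies in [nominal + hole_lo, nominal + hole_hi],
    and likewise for the shaft.
    """
    min_clearance = hole_lo - shaft_hi  # tightest: smallest hole, biggest shaft
    max_clearance = hole_hi - shaft_lo  # loosest: biggest hole, smallest shaft
    return min_clearance, max_clearance

# 20 mm H7 hole: +0.000 / +0.021 mm; 20 mm g6 shaft: -0.020 / -0.007 mm
lo, hi = fit_limits(0.000, 0.021, -0.020, -0.007)
print(f"clearance: {lo * 1000:.0f} to {hi * 1000:.0f} microns")  # 7 to 41 microns
```

<p>The same subtraction logic, repeated feature by feature, is what a tolerance stack analysis does through an assembly: worst-case limits accumulate, and the designer's job is deciding where that accumulation is acceptable.</p>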
<p>Communication. Explaining a design to a machinist, negotiating a tolerance with a supplier, defending a geometry choice in a review, translating between engineering and business. <a href="/posts/will-ai-replace-cad-designers">Will AI replace CAD designers</a>? Not while half the job is talking to other humans.</p>
<h2>The honest assessment for students and career-changers</h2>
<p>If you're a student deciding whether to pursue CAD or mechanical design, do it. The tools will change. The underlying knowledge won't. The people who understand how physical objects work, how they're made, and how to specify them precisely will be in demand as long as physical objects exist. AI will make some parts of the job faster. It won't make the job unnecessary.</p>
<p>If you're mid-career and worried about AI replacing your role, invest in the skills AI can't do: DFM, tolerancing, assembly design, client communication. If your entire value is speed at sketching simple parts, yes, you're competing with software. If your value is engineering judgment expressed through geometry, you're fine. You're more than fine. You're becoming more valuable as AI makes the easy parts cheaper and the hard parts more visible.</p>
<p>If you're someone who has never used CAD and is thinking about learning, start now. The learning curve is real but not brutal. The free tools are genuinely capable. And the combination of CAD skills plus AI fluency will be more valuable than either alone. The worst version of the future is one where you can prompt AI but can't evaluate what it gives you. Don't be that person. Learn the craft. Then let the tools make you faster at it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Machine learning in CAD: beyond the hype</title>
      <link>https://blog.texocad.ai/posts/machine-learning-cad</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/machine-learning-cad</guid>
      <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
      <description>Machine learning has been in CAD for longer than the marketing suggests. Feature recognition, mesh cleanup, and constraint solving all used ML before &apos;AI&apos; became a line item in every vendor&apos;s pitch deck.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>machine-learning</category>
      <category>technical</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Machine learning has been used in CAD for years: feature recognition in CAM, mesh repair algorithms, and constraint solving optimization. Recent additions include generative design, text-to-CAD, AI assistants, and natural language commands. The most impactful ML applications in CAD are still the boring ones: classification, search, and defect detection, not geometry generation.</p>
<p>Machine learning has been embedded in CAD software for longer than the marketing departments want you to realize. The most impactful ML applications in CAD are still the boring ones: feature recognition in CAM, part classification for search, mesh repair, and defect detection. Not geometry generation. I know this because I watched my CAM software correctly identify a pocket feature in my model last week, route the toolpath without my input, and save me about ten minutes of manual programming. Nobody called that AI. Nobody put it in a press release. It just worked, the way ML in CAD has been quietly working for years before "AI" became a line item on every vendor's pitch deck.</p>
<p>The recent wave of AI announcements, text-to-CAD, copilots, natural language commands, gets all the attention. And some of it is genuinely new. But understanding the full history of machine learning in CAD gives you better context for evaluating what's hype, what's incremental, and what might actually matter. The answer, as usual, is less exciting and more useful than the keynote version.</p>
<h2>The quiet history: ML in CAD before anyone called it AI</h2>
<p>Feature recognition in CAM has used machine learning techniques for over a decade. When your CAM software looks at a 3D model and identifies holes, pockets, slots, and bosses without you explicitly labeling them, that's pattern recognition. Early implementations used rule-based systems, but the better ones moved to statistical learning approaches years ago. Mastercam, Fusion 360's manufacturing workspace, and several other CAM tools use trained classifiers to recognize machining features and suggest operations. This is ML. It's just not new enough to put on a slide.</p>
<p>Mesh cleanup and repair is another area where ML arrived quietly. When you import a mesh from a 3D scan or a mesh-based tool and the software automatically identifies and fixes gaps, overlapping triangles, and non-manifold edges, there's often a trained model underneath doing the classification. "Is this gap an error or intentional geometry?" is exactly the kind of ambiguous classification problem that ML handles well. Tools like Materialise Magics and Artec Studio have been using ML-assisted repair for years.</p>
<p>Constraint solving optimization, the math that figures out how to satisfy your sketch constraints in real time, has benefited from ML approaches too. When SolidWorks or Fusion 360 solves a fully constrained sketch instantly, part of the efficiency comes from heuristics that learned good solving strategies from millions of constraint patterns. This is the kind of thing nobody notices because it just makes the software responsive. The only time you notice a constraint solver is when it fails, which is a different kind of learning.</p>
<p>Parts classification and search in PLM systems started using ML-based similarity matching before anyone was talking about AI in CAD. Siemens' Geolus shape search, which can find parts similar to a given 3D shape in a database, uses geometric feature extraction and similarity learning. It's been available since the mid-2010s. When a company with a million parts in their PDM system needs to find an existing bracket that's close to what they need instead of designing a new one, that search saves real engineering hours. It's some of the most commercially valuable ML in CAD, and it's been around longer than most people realize.</p>
<h2>The recent wave: what's actually new</h2>
<p>Starting around 2021 with the <a href="/posts/deepcad-dataset">DeepCAD dataset</a> and accelerating through 2024-2026, a genuinely new set of ML applications arrived in CAD. These are the ones getting the conference talks and the funding rounds.</p>
<p>Generative design uses ML-assisted topology optimization to explore design spaces under constraints. You define loads, materials, manufacturing methods, and performance targets. The software generates shapes that satisfy those constraints, often producing organic-looking geometries that no human would have drawn. Autodesk has had this in Fusion 360 for years. PTC has it in Creo GTO. This is the most mature "new wave" ML feature in CAD, and it works genuinely well for its specific use case: structural optimization.</p>
<p>Text-to-CAD is the flashiest new application. You describe a part in natural language, and an ML model generates parametric CAD geometry. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the full landscape. The key ML innovation here came from treating CAD models as sequences of operations (sketch, extrude, fillet) and training transformer architectures to predict those sequences from text, similar to how language models predict the next word. The <a href="/posts/text2cad-paper">Text2CAD paper</a> from NeurIPS 2024 formalized this approach, and it's now the foundation for several commercial tools.</p>
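<p>A toy illustration of the "model as operation sequence" idea helps here. Real datasets like DeepCAD use much richer schemas; the field names, token format, and part below are invented purely to show the shape of the representation.</p>

```python
# Toy illustration of representing a CAD model as a sequence of
# operations, the form sequence models are trained on. The schema
# here is invented for illustration, not the actual DeepCAD format.

plate_with_hole = [
    {"op": "sketch", "plane": "XY",
     "curves": [("rect", 0, 0, 80, 40)]},           # 80 x 40 mm outline
    {"op": "extrude", "distance": 6},               # 6 mm thick plate
    {"op": "sketch", "plane": "top",
     "curves": [("circle", 40, 20, 5)]},            # centered 10 mm hole
    {"op": "extrude", "distance": -6, "bool": "cut"},
]

def to_tokens(seq):
    """Flatten an operation sequence into tokens, roughly the way a
    transformer-style model would consume or emit it."""
    tokens = []
    for step in seq:
        tokens.append(step["op"])
        for key, value in sorted(step.items()):
            if key != "op":
                tokens.append(f"{key}={value}")
    return tokens

print(to_tokens(plate_with_hole)[:4])
```

<p>Once a model is a token sequence like this, "generate a part from text" becomes next-token prediction conditioned on the prompt, which is exactly the framing that let language-model architectures transfer to CAD.</p>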
<p>AI assistants and copilots are the third new category. <a href="/posts/ai-in-cad-software">Onshape's AI Advisor</a>, SolidWorks' AURA and LEO, Creo AI Assistant, Solid Edge Design Copilot, and Autodesk Assistant all use large language models trained on CAD documentation and user interactions. They answer questions, suggest operations, diagnose errors, and in some cases execute commands from natural language input. The ML here is the language model itself. The CAD-specific part is the training data and the integration layer.</p>
<p>Natural language command execution, like Fusion 360's Text to Command concept, uses language models to translate spoken or typed descriptions into CAD operations. "Extrude this face by 15 mm" gets mapped to the specific API call in the CAD tool. This requires understanding both the user's intent and the software's command structure, which is a natural language understanding problem that LLMs handle reasonably well for well-defined operation sets.</p>
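<p>For a well-defined operation set, the mapping can be sketched very simply. In practice an LLM does the parsing and the CAD tool's real API receives the call; the regex, command name, and output structure below are all invented for illustration.</p>

```python
# Hypothetical sketch of mapping a natural language command onto a
# structured CAD API call. Real systems use an LLM rather than a
# regex; `extrude_selected_face` is an invented command name.
import re

def parse_command(text):
    """Map 'extrude this face by 15 mm' style input to a structured call."""
    m = re.match(r"extrude .*?by (\d+(?:\.\d+)?)\s*mm", text, re.IGNORECASE)
    if m:
        return {"api": "extrude_selected_face", "distance_mm": float(m.group(1))}
    raise ValueError(f"unrecognized command: {text!r}")

print(parse_command("Extrude this face by 15 mm"))
# {'api': 'extrude_selected_face', 'distance_mm': 15.0}
```

<p>The hard part isn't this translation step. It's resolving what "this face" refers to, which is where the current selection context has to come in.</p>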
<h2>What actually works well: the boring stuff</h2>
<p>If I ranked ML applications in CAD by actual impact on daily work, the order would look nothing like the ranking by conference attention.</p>
<p>At the top: feature recognition in CAM. Saves time on every manufactured part. Reliable. Mature. Boring.</p>
<p>Second: parts search and classification. Saves time every time someone needs to find an existing part instead of designing a new one. Most useful in large organizations with big part libraries. Invisible to anyone who doesn't manage a PLM system.</p>
<p>Third: defect detection in manufacturing. ML models trained on inspection data can identify surface defects, dimensional outliers, and process deviations faster and more consistently than manual inspection. This is more manufacturing than CAD, but it closes the loop: the model predicts what the part should look like, the inspection system checks what it actually looks like, and the ML classifier flags the gap. Companies doing high-volume production have been using this for several years.</p>
<p>Fourth: mesh repair and import cleanup. Saves time every time you import geometry from an external source, which for anyone doing cross-platform work is constantly. Not glamorous. Genuinely useful.</p>
<p>Fifth: generative design. Powerful for specific structural optimization problems. Not broadly applicable. Most engineers don't do topology optimization regularly. Those who do find it valuable.</p>
<p>Text-to-CAD and AI assistants rank lower on this list not because they're unimportant but because they're early. The impact today is small. The potential impact is large. But potential doesn't make parts.</p>
<h2>What doesn't work well yet</h2>
<p>Geometry generation from ML models is the most prominent weak spot. The <a href="/posts/how-text-to-cad-works">how text-to-CAD works</a> post covers the technical architecture. The short version: current models can generate simple parametric geometry from text descriptions, but the output is unreliable for anything beyond basic prismatic parts. The dimensions are approximate. The features are sometimes wrong. The manufacturing context is absent. The ML model learned what parts look like but not why they look that way.</p>
<p>Parametric prediction, where an ML model predicts not just a shape but a full parametric feature tree with proper constraints and design intent, is an active research area that hasn't produced reliable commercial results. The closest is Autodesk's Neural CAD concept, which aims to generate editable feature trees, but it's still in development. The fundamental problem is that design intent, the reason behind each feature and constraint, isn't encoded in most training data. The model sees the result but not the reasoning.</p>
<p>Assembly-level ML is almost nonexistent. Understanding how parts relate to each other, predicting interference, suggesting mating strategies, optimizing assembly sequences, these are all tasks that would benefit enormously from ML and that nobody has cracked at scale. Assembly data is scarce, complex, and deeply contextual. Two parts might be 10 mm apart because of a thermal expansion requirement, or because that's where the fastener access is, or because the designer forgot to update the constraint after the last revision. An ML model trained on assembly geometry alone can't distinguish between these reasons.</p>
<p>DFM-aware generation is the big missing piece. Training an ML model to generate geometry that can actually be manufactured requires training data that includes manufacturing context: process parameters, tooling constraints, material properties, tolerance requirements. That data is almost entirely proprietary. The <a href="/posts/cad-dataset-for-ai">CAD dataset</a> problem is the root cause of most of text-to-CAD's current limitations.</p>
<h2>The training data problem</h2>
<p>Every ML application in CAD is limited by its training data, and CAD training data is uniquely scarce.</p>
<p>Image AI was trained on billions of images scraped from the internet. Language AI was trained on trillions of tokens from public text. CAD AI is trained on datasets in the low hundreds of thousands, mostly simple geometry, mostly missing the metadata that would make the models useful for engineering.</p>
<p>The <a href="/posts/deepcad-dataset">DeepCAD dataset</a> has about 178,000 models. That's the primary training set for text-to-CAD research. 178,000 sounds like a lot until you compare it to the billions of data points in other AI domains. And those 178,000 models are simple: sketch-and-extrude operations producing basic prismatic parts. No sweeps, no lofts, no sheet metal, no assemblies.</p>
<p>The Fusion 360 Gallery has about 8,000 models with design history. ShapeNet has around 51,000 3D models, mostly meshes. The ABC dataset has over a million models but without text annotations or manufacturing metadata.</p>
<p>Meanwhile, the really useful CAD data, parts designed for real products with real tolerances and real manufacturing constraints, sits inside corporate PDM systems behind firewalls. Companies don't share this data because it's proprietary, because it contains trade secrets, and because nobody has built a standard way to anonymize and contribute CAD models at scale.</p>
<p>This data gap is the single biggest constraint on ML in CAD. The models can only learn what they're shown. If they're shown 178,000 simple extrusions, they learn to generate simple extrusions. The ceiling won't move until the data does.</p>
<h2>Where to watch for real progress</h2>
<p>Not all areas of ML in CAD are advancing at the same rate. Here's where I think the most meaningful progress is likely in the next two to three years.</p>
<p>DFM validation layers. Not AI that generates manufacturable geometry from scratch, but AI that checks existing geometry against manufacturing rules and flags problems. This is a classification problem, which ML handles well, and the training data (known manufacturing failures, common DFM violations) is more available than generative training data. Several startups and at least two major vendors are working on this.</p>
<p>Improved feature recognition and automatic machining strategy selection. CAM feature recognition has been good for simple features for years. Extending it to complex, multi-setup parts with compound geometry is harder, and ML models with more diverse training data will improve it. This is incremental progress on an existing strength, which is the kind of improvement that actually ships.</p>
<p>Better text-to-CAD for simple parts. The current tools will get more accurate, more dimensionally reliable, and better at handling a wider range of basic geometry. This won't happen through architecture breakthroughs alone. It'll happen through better and larger training datasets, which is a data engineering problem more than an ML research problem.</p>
<p>ML-assisted tolerance analysis. Given a model with nominal geometry, suggesting appropriate tolerances based on similar parts, common fit requirements, and manufacturing process capabilities. This would be enormously useful, and the training data exists inside companies' quality and inspection records, but nobody has assembled it at scale yet.</p>
<p>The pattern: the areas likely to see real progress are the ones where ML is extending existing capabilities (search, classification, recognition) or where the training data problem is solvable (DFM rules, tolerance standards). The areas that remain hard are the ones that require data nobody has or reasoning nobody has modeled.</p>
<h2>The honest assessment</h2>
<p>Machine learning has been making CAD software better for years, mostly in ways you never notice. The feature recognition that saves you time in CAM, the search that finds a similar part in your PLM system, the import repair that fixes a broken mesh from a 3D scan, these are all ML applications that work, that ship, and that nobody writes conference papers about anymore because they're just part of the software.</p>
<p>The new wave of ML in CAD, the text-to-CAD generators and the AI copilots, is more visible, more hyped, and less mature. It will get better. The research trajectory is clear. But the gap between "interesting research" and "reliable production tool" is the same gap it has always been in CAD: filled with manufacturing constraints, edge cases, and the accumulated judgment of people who have watched parts come back wrong.</p>
<p>If you want to benefit from ML in CAD today, use the boring stuff. Turn on feature recognition in your CAM workflow. Use your PLM system's search features. Let the import repair tools do their work. These are the ML applications that have earned trust through years of shipping, and they'll save you more time this week than any text-to-CAD demo.</p>
<p>For the new stuff, experiment. Try the <a href="/posts/text-to-cad-guide">text-to-CAD tools</a>. Test the vendor copilots. See what works for your specific tasks. But test the output. Measure the parts. Don't trust the preview. The marketing says "AI is transforming CAD." The reality is more like "ML is making some of the tedious parts slightly less tedious, and also there's a chatbot now." Less exciting. More honest. About what I'd expect from a technology that's been quietly useful for years and only recently learned to generate press releases.</p>
]]></content:encoded>
    </item>
    <item>
      <title>The future of CAD and AI: what I actually expect</title>
      <link>https://blog.texocad.ai/posts/future-of-cad-ai</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/future-of-cad-ai</guid>
      <pubDate>Sun, 29 Mar 2026 00:00:00 GMT</pubDate>
      <description>Vendors promise a lot. Research papers promise more. Here&apos;s what I think will actually ship, actually work, and actually matter in the next five years.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>future</category>
      <category>opinion</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> In the next 2-3 years: better AI assistants inside existing CAD tools, improved text-to-CAD accuracy for simple parts, and AI-powered search/recommendation in PLM systems. In 5 years: parametric AI generation for simple part families, AI-assisted DFM checking, and natural language CAD editing. Full autonomous design is 10+ years away, if ever.</p>
<p>In the next two to three years, expect better AI assistants inside existing CAD tools, improved text-to-CAD for simple parts, and AI-powered search in PLM systems. Full autonomous AI design is ten-plus years away, if it arrives at all. Everything in between is a gradient of vendor promises, research prototypes, and cautious optimism from people like me who have been burned by too many roadmap slides that never turned into shipping features.</p>
<p>I sat through four vendor presentations last quarter. Every single one had an AI slide. Every single one used the phrase "AI-powered" next to something that was either a search bar with a chatbot skin, a generative geometry demo running on a curated prompt, or a bullet point about a feature that doesn't exist yet. I took notes. I also took a photo of my coffee, which I'd been holding for forty minutes without drinking because the presentations kept promising things that made me want to ask uncomfortable questions.</p>
<p>Here's what I actually expect to happen, organized by how far out it is and how much confidence I have.</p>
<h2>The next one to two years: incremental and real</h2>
<p>The near-term future of AI in CAD isn't dramatic. It's useful. The changes that are already shipping or nearly shipping are modest, practical improvements that make existing workflows faster without fundamentally changing what a CAD designer does.</p>
<p>AI assistants in major CAD tools will get better at command discovery and operation suggestions. Autodesk, Siemens, PTC, and Dassault have all shipped some version of an <a href="/posts/ai-in-cad-software">AI assistant inside their CAD software</a>. Right now, these assistants are somewhere between a fancy search bar and a junior colleague who sometimes gives useful advice. They'll improve. They'll get better at understanding context, suggesting next operations based on what you've already done, and automating repetitive actions like applying standard features.</p>
<p>This isn't exciting. It's the AI equivalent of better autocomplete, applied to a CAD menu structure that has a thousand commands spread across fifty toolbars. Finding the right command faster is genuinely valuable, especially for occasional users who don't have the shortcut keys memorized. I use Fusion 360 daily and I still discover commands I forgot existed.</p>
<p>Text-to-CAD accuracy will improve for simple parts. The tools I test today, Zoo.dev, AdamCAD, others, are already better than they were a year ago at hitting prompted dimensions and generating cleaner topology. I expect that trend to continue. Simple brackets, plates, and enclosures will get more reliable. The dimensional accuracy gap will narrow from "usually close, sometimes wrong" to "almost always close, occasionally wrong." That's meaningful for concept work, even if it still isn't good enough for production.</p>
<p>AI-powered search and retrieval in PLM systems will become standard. Finding similar parts, suggesting reuse before modeling from scratch, identifying duplicate geometry across a company's part library. This is boring, high-value work that AI handles well, and it's starting to ship in enterprise tools. It won't make headlines, but it'll save real time for companies with large libraries.</p>
<h2>Three to five years: the interesting middle ground</h2>
<p>This is where my predictions get less certain and more interesting. The next few years are where the gap between what's possible in research and what's usable in production will either narrow or stay stubbornly wide.</p>
<p>Parametric AI generation for simple part families. Right now, text-to-CAD produces dumb solids with no feature tree. The research on generating parametric models, geometry with constraints, sketch relationships, and editable feature histories, is active and making progress. I expect that within three to five years, at least one tool will be able to generate a simple parametric bracket that you can edit by changing dimensions in a feature tree rather than re-prompting from scratch.</p>
<p>This matters more than it sounds. A dumb STEP file is a dead end. A parametric model is a starting point you can live with. The difference between those two things determines whether AI output integrates into a real <a href="/posts/text-to-cad-guide">text-to-CAD workflow</a> or stays a party trick.</p>
<p>AI-assisted DFM checking. Not DFM-aware generation, that's harder, but automated checking of geometry against manufacturing rules. "This wall is too thin for injection molding." "This internal corner needs a radius for CNC milling." "This overhang angle needs support for SLA printing." Rule-based DFM checking already exists in some tools. Adding AI to make it smarter, more contextual, and easier to use is a natural next step.</p>
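<p>The rule-based core of that kind of checker is straightforward, which is part of why I expect it to ship. Here's a minimal sketch; the threshold values are illustrative placeholders, not process standards, and real checkers would measure geometry from the model rather than take it as a dict.</p>

```python
# Sketch of the rule-based DFM checking described above. Threshold
# values are illustrative placeholders, not process standards.

RULES = {
    "injection_molding": {"min_wall_mm": 0.8},
    "cnc_milling":       {"min_internal_radius_mm": 1.0},
    "sla_printing":      {"max_unsupported_overhang_deg": 45.0},
}

def check_dfm(part, process):
    """Return human-readable DFM warnings for `part`, a dict of
    measured geometry properties, against one process's rules."""
    rules, warnings = RULES[process], []
    if "min_wall_mm" in rules and part["min_wall_mm"] < rules["min_wall_mm"]:
        warnings.append(f"Wall {part['min_wall_mm']} mm is too thin for {process}")
    if ("min_internal_radius_mm" in rules
            and part["min_internal_radius_mm"] < rules["min_internal_radius_mm"]):
        warnings.append("Internal corner needs a larger radius for milling")
    if ("max_unsupported_overhang_deg" in rules
            and part["overhang_deg"] > rules["max_unsupported_overhang_deg"]):
        warnings.append("Overhang angle needs support for SLA printing")
    return warnings

part = {"min_wall_mm": 0.5, "min_internal_radius_mm": 0.2, "overhang_deg": 60}
print(check_dfm(part, "injection_molding"))  # one thin-wall warning
```

<p>The AI's role in the near term is making the inputs smarter, measuring wall thickness and overhangs from arbitrary geometry and inferring which process applies, not replacing rules that are already well understood.</p>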
<p>I'll believe it's real when my machinist stops calling me about AI-generated parts with impossible features. That hasn't happened yet, but I can see the path.</p>
<p>Natural language CAD editing. Instead of finding the right command, selecting the right face, and entering the right parameters, you say "make this wall 2 mm thicker" or "add M4 holes on a 40 mm bolt circle on the top face" and the tool does it. This is an extension of the AI assistant concept, but applied to editing rather than just command discovery. Fusion 360's timeline-based architecture seems well-suited for this. SolidWorks is probably thinking about it too.</p>
<p>The tricky part is disambiguation. "Make this wall thicker" is simple when there's one wall. When there are forty walls and the AI needs to figure out which one you mean from context, it gets hard. But for specific, well-described edits, I think this will work within the next few years.</p>
<h2>Five to ten years: speculation territory</h2>
<p>Everything beyond five years is guessing, and I want to be honest about the difference between prediction and speculation. Here's what I'd bet on, with low confidence.</p>
<p>Multi-part AI generation. Generating simple assemblies where parts have defined relationships, clearances, and mating conditions. Not a full assembly of a hundred parts, but maybe a two- or three-part enclosure with a lid that actually fits, snap fits that actually snap, and internal mounting features that align with a PCB outline. This is hard because it requires the AI to understand spatial relationships between parts, not just geometry within a single part.</p>
<p>Simulation-informed generation. AI that generates geometry and then checks it against basic structural or thermal simulation, iterating until the design meets a performance target. This is generative design with an AI front end, and some version of it exists today in Fusion 360's generative design tools. Making it accessible through natural language and connecting it to AI-generated starting geometry is plausible in the five to ten year range.</p>
<p>Process-aware geometry. AI that knows the part will be injection-molded and generates draft angles, uniform walls, and gate-friendly geometry from the start. This requires training on manufacturing process data alongside geometric data, and the data pipeline is the bottleneck. Most manufacturing data is proprietary and poorly structured. Companies that solve the data problem will have a real advantage.</p>
<p>What I don't expect even at the ten-year horizon is fully autonomous design. The idea that you describe a product and an AI engineers the complete solution, with all tolerances, manufacturing considerations, assembly sequences, and cost trade-offs, is so far from current capabilities that I'd put it firmly in the "maybe someday" category. It's not just a scaling problem. It's a knowledge representation problem that the field hasn't solved.</p>
<h2>What needs to happen technically</h2>
<p>For any of these predictions to come true, a few technical problems need solutions. Parametric generation is the biggest. Current text-to-CAD models generate B-Rep output or fragile construction history. Producing clean parametric models with editable feature trees requires AI that understands CAD operations as meaningful sequences, not just paths to a final shape. The DeepCAD research line is promising but not production-ready.</p>
<p>DFM integration requires manufacturing process data that AI models are not currently trained on. That data exists inside companies but is rarely structured, annotated, or shareable. Assembly reasoning requires spatial and functional understanding that current models lack entirely. And simulation integration needs fast approximate solvers, because running full FEA on every generated iteration is too slow to be practical.</p>
<h2>Vendor roadmap versus reality</h2>
<p>When a vendor says "AI-powered" at a conference, divide the promised capability by four and add two years. That's roughly what will ship and when. I'm not being cynical. I'm pattern-matching from a decade of watching CAD vendor keynotes.</p>
<p>Autodesk is probably closest to useful AI integration, with <a href="/posts/fusion-360-ai-features">Fusion 360 AI features</a> that are incremental but real. Siemens has deep technology but ships user-facing features slowly. PTC and Dassault are moving, but enterprise CAD moves at enterprise speed, and their customers are conservative about new workflows. The most interesting work might come from startups that don't carry legacy code or legacy business models. <a href="/posts/text-to-cad-guide">Zoo.dev's approach</a> is different from what the big vendors are doing, and that diversity is healthy.</p>
<h2>My personal bets</h2>
<p>If I had to bet on what matters most in the next five years, I'd put my money on three things.</p>
<p>First, AI-powered search and reuse in enterprise PLM. This is unsexy and incredibly valuable. Companies waste enormous amounts of engineering time redesigning parts that already exist somewhere in their system. AI that surfaces existing designs before you start modeling will save more total hours than text-to-CAD geometry generation. It'll just never make a flashy demo.</p>
<p>Second, natural language editing inside existing CAD tools. Not generation from scratch, but modification of existing geometry through conversational commands. This is closer to how designers actually work. You don't start from nothing every day. You modify, adapt, and iterate. An AI that's good at helping with that process is more useful than one that generates a first draft you'll throw away.</p>
<p>Third, DFM validation on AI-generated output. A safety net that catches the worst manufacturing violations before the geometry leaves the design environment. This doesn't require the AI to understand manufacturing. It requires a checking layer that knows the rules. It's achievable, practical, and would immediately make every text-to-CAD tool more useful for <a href="/posts/ai-cad-for-real-work">real work</a>.</p>
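<p>A checking layer like this doesn't need machine learning at all. A minimal sketch in Python, with hypothetical limit values and a plain dict standing in for measured geometry:</p>

```python
# Hypothetical process limits; real shops tune these per material and machine.
MIN_WALL_MM = {"injection_molding": 0.8, "cnc_aluminum": 0.5}
MIN_DRAFT_DEG = 1.0  # illustrative minimum draft for molded parts

def check_dfm(part, process):
    """Flag manufacturing-rule violations in generated geometry.

    `part` is a dict of measured properties, not a real CAD API object.
    """
    issues = []
    if part["min_wall_mm"] < MIN_WALL_MM[process]:
        issues.append(
            f"wall {part['min_wall_mm']} mm below the {MIN_WALL_MM[process]} mm "
            f"minimum for {process}"
        )
    if process == "injection_molding" and part["min_draft_deg"] < MIN_DRAFT_DEG:
        issues.append(f"draft {part['min_draft_deg']} deg below {MIN_DRAFT_DEG} deg")
    return issues

# A 0.3 mm wall with no draft would be caught immediately:
issues = check_dfm({"min_wall_mm": 0.3, "min_draft_deg": 0.0}, "injection_molding")
```

<p>Rules like these won't catch everything, but they turn silent manufacturing failures into visible warnings, which is most of the value.</p>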
<h2>What I'm not betting on</h2>
<p>Full autonomous design. Prompt in, engineered product out. The complexity of real engineering is so far beyond current AI that I'd be surprised to see this in my career.</p>
<p>AI replacing CAD software. AI will live inside CAD tools, augment them, and change how people interact with them. But the fundamental need for a precision geometric modeling environment isn't going away.</p>
<p>The death of parametric modeling. Feature trees capture design intent. They make models adaptable. AI generation without parametric structure produces disposable geometry. Parametric modeling will coexist with AI generation, not lose to it.</p>
<h2>Where this leaves working designers</h2>
<p>If you're a CAD designer watching the AI space, my advice is simple: keep doing good work, learn the tools as they ship, and don't panic about a future that vendor slides promise but can't deliver yet.</p>
<p>The next five years will bring tools that make you faster at some parts of your job. They won't make you unnecessary. The parts AI can't do, design intent, manufacturing awareness, assembly thinking, tolerance specification, client communication, are what make you valuable. Those skills are worth developing more than learning to write better prompts.</p>
<p>The AI will keep getting better. What I don't expect is to walk into my office one morning and find that a language model has figured out how to design a multi-part injection-molded assembly with proper draft angles, tolerance stacks, and a tooling cost estimate. When that day comes, I'll be impressed, worried, and immediately suspicious of the tolerance callouts. Until then, the future of CAD and AI is incremental improvement. That's fine. It's how most useful technology actually progresses.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How AI is actually changing CAD (not how vendors say it is)</title>
      <link>https://blog.texocad.ai/posts/how-ai-is-changing-cad</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/how-ai-is-changing-cad</guid>
      <pubDate>Sun, 29 Mar 2026 00:00:00 GMT</pubDate>
      <description>The vendor version: AI is transforming design. The reality: AI is automating some annoying tasks, generating simple geometry, and making search slightly less terrible. That&apos;s still useful. It&apos;s just not a revolution yet.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>opinion</category>
      <category>trends</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI is changing CAD incrementally, not revolutionarily. The biggest real impacts in 2026: AI-powered search in PLM systems, natural language command input (Fusion 360 Text to Command), simple geometry generation via text-to-CAD, and AI copilots for documentation. Design thinking, assemblies, DFM, and complex modeling remain human tasks.</p>
<p>Last Tuesday I was sitting in a webinar where a vendor showed a slide titled "AI is Revolutionizing CAD." The slide had a gradient background and a stock image of a brain made of circuit boards. I took a sip of my coffee, which had gone cold during the previous twenty minutes of "transformation" talk, and waited for the demo. The demo was a chatbot that looked up help articles. That was the revolution. A search bar that understood sentences.</p>
<p>AI is changing CAD, but incrementally, not in the way the conference slides suggest. The real shifts in 2026 are smaller and more useful than the marketing implies: better search inside PLM systems, <a href="/posts/ai-in-cad-software">natural language command input</a> for common operations, simple geometry generation from text prompts, and copilots that help with documentation. Design thinking, complex assemblies, DFM, and tolerance specification are still entirely human work. The vendor narrative skips these inconvenient details because they don't fit on a slide with a gradient background.</p>
<p>I've been doing CAD work for over a decade. AutoCAD first, then SolidWorks for years, now mostly Fusion 360. I've seen enough hype cycles to know the pattern: vendor announces a feature, conference crowd applauds, feature arrives eighteen months later at half the capability, and working engineers figure out how to use the 30% that actually matters. AI in CAD is following this pattern almost exactly.</p>
<h2>The vendor narrative vs. what's on your screen</h2>
<p>Every major CAD vendor is running the same playbook right now. The press release says AI is transforming design. The keynote shows a flashy demo. The actual feature that ships is a chatbot trained on the help documentation, or an error diagnostic tool, or a search function that understands natural language slightly better than the previous search function.</p>
<p>I'm not saying these things are useless. Solid Edge's Design Copilot, Onshape's AI Advisor, Creo's AI Assistant, SolidWorks' AURA and LEO companions, these are all real products that real people can use right now. And for their specific tasks, like looking up how to fix a broken feature or finding the right command without digging through menus, they work fine. Some of them work well.</p>
<p>The problem is the framing. When a vendor says "AI-powered design," they might mean an actual geometry generator, or they might mean a chatbot that links to a tutorial. Both get the same headline. A working engineer hears "AI in CAD" and imagines describing a part in English and having it appear fully dimensioned and ready for the machine shop. What actually ships is closer to a slightly smarter version of Clippy, one that knows what a fillet is.</p>
<p>Autodesk's Neural CAD is the closest any major vendor has come to the "describe it, build it" vision, and it's still in development. The demos look impressive. I've written about <a href="/posts/fusion-360-ai-features">Fusion 360's AI features</a> in detail, and the short version is: the ambition is real, the shipping date is not. Until it's in my installed copy of Fusion and I can break it with a real project, it's a demo.</p>
<h2>What has actually changed in daily CAD work</h2>
<p>Strip away the marketing and look at what's different about using CAD software in 2026 compared to two years ago. There are four things I'd call genuine changes. Not transformations. Changes.</p>
<p>The first is search. Finding the right part, the right command, the right document inside a PLM system used to require either memorizing a filing system or getting lucky with keywords. AI-powered semantic search, the kind that understands "the bracket from the Johnson project that had the weird offset holes" instead of requiring exact part numbers, is genuinely useful. It's not exciting. Nobody demos it at keynotes because watching someone search for a file is boring. But for engineers who spend real time hunting through Windchill or Teamcenter, it matters.</p>
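<p>The mechanics behind that kind of search are worth a sketch. Semantic search compares embedding vectors rather than keywords; the three-number vectors below are hand-made stand-ins for what a trained embedding model would actually produce.</p>

```python
import math

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, near 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy "embeddings" of vault entries. A real PLM system would embed part
# names, descriptions, and metadata with a language model.
parts = {
    "BRK-1042 bracket, offset holes": [0.9, 0.1, 0.3],
    "PLT-0007 cover plate": [0.1, 0.8, 0.2],
}
query = [0.85, 0.15, 0.25]  # embedding of "the bracket with the weird offset holes"
best = max(parts, key=lambda name: cosine(parts[name], query))
```

<p>The payoff is that "weird offset holes" and "offset-hole bracket" land near each other in embedding space even when they share almost no exact keywords, which is precisely what exact-match search in Windchill or Teamcenter can't do.</p>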
<p>The second is natural language command input. Fusion 360's Text to Command concept, where you type "extrude this face by 12 mm" instead of clicking through menus, is a real productivity shift for people who know what they want to do but don't remember which submenu it's hiding in. SolidWorks 2026 is doing similar things with its AI companions. The time savings are small per operation, maybe a few seconds, but they accumulate across a day. I've spent enough of my career hunting for the chamfer tool in a reorganized menu to appreciate this one personally.</p>
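<p>Part of why text-to-command is shippable today is that the grammar is tiny. A toy parser, not Fusion 360's actual implementation, shows the shape of the problem:</p>

```python
import re

# One pattern per known operation; "extrude ... by <number> mm" is enough to
# illustrate. Real systems add synonyms, units, and references to selected geometry.
EXTRUDE = re.compile(r"extrude\b.*?\bby\s+(\d+(?:\.\d+)?)\s*mm", re.IGNORECASE)

def parse(text):
    m = EXTRUDE.search(text)
    if m:
        return {"op": "extrude", "distance_mm": float(m.group(1))}
    return None  # unrecognized commands fall back to the normal UI
```

<p>Commands are a small, constrained language, which is exactly why this works now while open-ended "design me a part" doesn't.</p>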
<p>The third is simple geometry generation. <a href="/posts/text-to-cad-guide">Text-to-CAD tools</a> can now generate basic mechanical parts from text descriptions. A bracket, a plate with mounting holes, a simple enclosure. The output needs editing, the dimensions need checking, the manufacturing constraints need adding, but the starting point saves time on simple parts. I've written a whole guide on this, because the reality is more nuanced than either the hype or the skepticism suggests.</p>
<p>The fourth is documentation assistance. AI that helps generate drawing notes, populates title blocks, suggests standard views, or automates the tedious parts of drawing creation. SolidWorks 2026 and Solid Edge 2026 are both shipping features here, and for anyone who has spent a Friday afternoon placing dimensions on thirty standard views of a bracket, this is the kind of automation that earns its keep. It's not glamorous. It is useful.</p>
<h2>What hasn't changed at all</h2>
<p>Here's the list that matters more than the previous one, because this is where most of the actual engineering happens.</p>
<p>Design thinking. Deciding what to build, why, and how it fits into a system. No AI tool in any CAD platform is doing this. The chatbots can answer "how do I create a loft in Fusion 360?" They cannot answer "should this part be a casting or a machined part given the production volume and the tolerance requirements?" That question requires understanding physics, cost, supplier capability, and project context. It requires judgment. It requires having been wrong before and remembering why.</p>
<p>Assembly design. Positioning parts relative to each other, defining mates and constraints, checking interference, designing for assembly sequence. SolidWorks 2026 has an assembly structure generator that takes text prompts, which is interesting, but assembly design is 90% relationship thinking and 10% clicking mates into place. The clicking is the easy part. No AI is handling the thinking part.</p>
<p>Design for manufacturability. Draft angles, wall thickness, tool access, bend radii, undercut avoidance, gate locations, parting lines. I've written about this at length in the <a href="/posts/ai-cad-for-real-work">AI CAD for real work</a> post. The short version: AI-generated geometry has zero awareness of manufacturing processes. A bracket that looks right on screen but has 0.3 mm walls and sharp internal corners is not a bracket. It's a suggestion that will cost you a phone call from your machinist.</p>
<p>Tolerancing and GD&#x26;T. No AI tool generates tolerances. None. The model arrives as nominal geometry with no concept of fit classes, feature control frames, or surface finish callouts. This is the difference between a shape and a specification, and it's entirely manual work.</p>
<p>Revision and collaboration. The messy human process of reviewing a design, negotiating changes, tracking versions, resolving conflicts between what engineering wants and what manufacturing can do. AI plays no role here in 2026, and I suspect it won't for a long time, because this work is fundamentally about communication between people with different priorities.</p>
<h2>The incremental pattern vs. the disruption fantasy</h2>
<p>There's a narrative floating around LinkedIn and conference stages that AI is about to disrupt CAD the way Uber disrupted taxis or streaming disrupted video stores. I've heard this comparison made without irony by people who have apparently never tried to CNC machine a part.</p>
<p>CAD is not a content delivery problem. It's an engineering tool. The output has to be physically correct, dimensionally accurate, and manufacturable. An AI that generates a wrong shape is not a "disruption" anyone wants. The tolerance for error in mechanical engineering is measured in hundredths of a millimeter, not in "close enough for the algorithm."</p>
<p>What's actually happening is incremental improvement, the same pattern CAD has followed for decades. Parametric modeling was incremental. Direct modeling was incremental. Cloud CAD was incremental. Each new capability expanded what was possible without replacing the fundamental workflow of sketch, constrain, extrude, fillet, check, revise, export. AI is following the same curve. It's adding capabilities at the edges of the workflow, not replacing the center.</p>
<p>This is fine. Incremental improvement is how useful tools get better. The problem is that "incremental improvement" doesn't raise venture capital or fill keynote seats. So the story gets inflated until a search improvement becomes a "revolution" and a chatbot becomes "the future of design."</p>
<h2>What a realistic timeline looks like</h2>
<p>Based on what I'm seeing in both the vendor ecosystem and the research community, here's my honest read on timing.</p>
<p>Already here: AI copilots and assistants for documentation, error diagnosis, and command execution. AI-powered search in PLM systems. Simple text-to-CAD geometry generation for basic mechanical parts. Automated drawing creation for standard views and dimensions.</p>
<p>Next 12 to 18 months: text-to-command features shipping in more tools, probably Fusion 360's version going live. Better text-to-CAD accuracy on simple parts. AI-assisted DFM checking (not generating DFM-aware geometry, but flagging problems in existing geometry). More integration between AI assistants and CAD operations, so the chatbot can do things, not just explain them.</p>
<p>Two to four years: AI that can generate moderately complex geometry with some manufacturing awareness, probably trained on process-specific datasets that don't exist yet. The <a href="/posts/cad-dataset-for-ai">CAD dataset problem</a> is real and it's the main bottleneck. Better parametric prediction, where the AI generates not just a shape but editable features with proper constraints. Deeper integration of AI into revision workflows, maybe flagging when a design change will break downstream parts.</p>
<p>Not on any timeline I'd bet on: AI replacing the detailed design phase. AI generating assembly relationships. AI handling tolerance analysis. AI understanding project context well enough to make design decisions. These require the kind of reasoning that current AI architectures don't handle well, and they require training data that doesn't exist in any public or, as far as I know, private dataset.</p>
<h2>The useful middle ground</h2>
<p>The honest position on AI in CAD is neither the vendor's "transformation" story nor the skeptic's "it's all hype" dismissal. It's somewhere in between, in a territory that's less satisfying to talk about but more useful to work from.</p>
<p>AI is making some annoying tasks faster. It's making some simple tasks automatic. It's making search better. It's giving text-based access to operations that previously required menu archaeology. These are real improvements that save real time. I use AI features in my daily work. Not for design. Not for engineering judgment. For the small stuff that was always tedious and is now slightly less tedious.</p>
<p>If you want to try text-to-CAD for generating starting geometry, the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> is where I'd start. If you want to understand <a href="/posts/ai-in-cad-software">what the major vendors are shipping</a> right now, that post tracks the reality. Both are more useful than the keynote version.</p>
<p>The vendors want you to believe AI is changing everything. The skeptics want you to believe it's changing nothing. The truth is smaller than both: AI is changing the boring parts, slowly, and the interesting parts are still yours. That's not a revolution. But some weeks, getting the boring parts done faster is enough.</p>
]]></content:encoded>
    </item>
    <item>
      <title>CAD datasets for AI training: what&apos;s available and what&apos;s locked up</title>
      <link>https://blog.texocad.ai/posts/cad-dataset-for-ai</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/cad-dataset-for-ai</guid>
      <pubDate>Sat, 28 Mar 2026 00:00:00 GMT</pubDate>
      <description>Training AI to generate CAD models requires CAD training data. Most of the good data is locked inside corporate vaults. What&apos;s publicly available is limited, biased, and often missing the metadata that matters most.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>datasets</category>
      <category>training-data</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Key public CAD datasets for AI: ABC Dataset (~1M models), DeepCAD (~180K parametric sequences), Fusion 360 Gallery (~8,000 models with design history), and ShapeNet (~51K 3D models). Most are biased toward simple mechanical parts. Corporate CAD libraries with real-world complexity and manufacturing metadata remain proprietary. The training data gap limits text-to-CAD quality.</p>
<p>The key public CAD datasets for training AI are the ABC Dataset (about one million models), DeepCAD (about 178,000 parametric sequences), Fusion 360 Gallery (about 8,000 models with design history), ShapeNet (about 51,000 3D models), and Thingi10K (about 10,000 printable models). Most are biased toward simple mechanical parts. The real-world CAD data, the complex assemblies with tolerances and manufacturing metadata, is locked inside corporate PDM vaults where no researcher can touch it. I know this because I've tried to find better training data for a side project, and every promising lead ended at a firewall, an NDA, or a polite email explaining that sharing CAD models was "not aligned with our IP strategy."</p>
<p>That was three months ago. I was sitting at my desk with a Fusion 360 Gallery model open on one screen and a DeepCAD sample on the other. Both were extruded rectangles with holes. One had a feature tree. The other was a command sequence. Neither looked anything like the parts I design for actual clients. And it hit me: the reason text-to-CAD tools struggle with real engineering geometry is that they've never seen real engineering geometry. They've seen the public dataset equivalent of a first-semester homework assignment.</p>
<p>This post is about what's actually available, what's missing, and why the gap between public CAD data and corporate CAD data is the most important bottleneck in the entire text-to-CAD field.</p>
<h2>ABC Dataset: the million-model foundation</h2>
<p>The ABC Dataset, published in 2019 by Koch et al., is the largest publicly available collection of CAD models. It contains approximately one million models sourced from Onshape's public projects. The name stands for "A Big CAD Model Dataset for Geometric Deep Learning."</p>
<p>The models are stored as B-Rep geometry, which means they have proper faces, edges, and topology. You get STEP files and derived meshes. The geometric quality is generally good because the models come from a real CAD platform with a real geometric kernel.</p>
<p>The problems: no text annotations. No manufacturing metadata. No design history. No parametric information beyond the raw geometry. You get shapes, not processes. You can train a model to recognize what parts look like but not how they were built, which is exactly the information text-to-CAD needs.</p>
<p>The distribution is also skewed. Onshape's public projects are dominated by hobbyists, students, and early-career users. The models tend to be simple: brackets, plates, basic housings, mechanical components that a single person would create as a public project. Complex assemblies, multi-body parts, and production-quality engineering models are rare because professionals don't usually share their work publicly. The dataset is large but shallow.</p>
<p>ABC is useful for training geometric understanding, shape classification, and surface analysis models. It's less useful for text-to-CAD specifically because there's no text to pair with the geometry.</p>
<h2>DeepCAD: the dataset that made text-to-CAD research possible</h2>
<p>I've written about the <a href="/posts/deepcad-dataset">DeepCAD dataset</a> in detail, but the summary matters here.</p>
<p>DeepCAD contains approximately 178,000 parametric CAD models represented as sequences of CAD commands: sketch a profile, extrude it, sketch another profile, cut-extrude through. Each model is a recipe, not just a shape. This command-sequence representation is what made it possible to train generative models that output CAD operations instead of raw geometry.</p>
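<p>It helps to see what "a recipe, not just a shape" means in practice. Here is a hypothetical DeepCAD-style record with illustrative field names, not the dataset's actual schema:</p>

```python
# A plate with one hole, stored as operations rather than final geometry:
model = {
    "sequence": [
        {"cmd": "sketch", "plane": "XY",
         "loops": [{"type": "rectangle", "w": 50.0, "h": 30.0}]},
        {"cmd": "extrude", "distance": 8.0, "op": "new_body"},
        {"cmd": "sketch", "plane": "XY",
         "loops": [{"type": "circle", "r": 4.0, "cx": 10.0, "cy": 10.0}]},
        {"cmd": "extrude", "distance": 8.0, "op": "cut"},
    ]
}

# Because the representation is a flat command sequence, a generative model
# can learn to emit it token by token, the way language models emit text.
ops = [step["cmd"] for step in model["sequence"]]
```

<p>That flatness is the whole trick: it turns shape generation into sequence generation, a problem ML already knows how to attack.</p>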
<p>The dataset was derived from ABC by filtering to models that could be represented as sketch-and-extrude sequences. That filtering is important. It means DeepCAD excludes sweeps, lofts, revolves, shell features, sheet metal operations, surfacing, and any other modeling approach that doesn't fit the sketch-extrude pattern. The result is geometrically simple: plates, blocks, cylinders, brackets, and basic prismatic shapes.</p>
<p>The <a href="/posts/text2cad-paper">Text2CAD paper</a> added approximately 660,000 text annotations to DeepCAD, using Mistral and LLaVA-NeXT to generate descriptions at beginner, intermediate, and expert levels. This annotation layer transformed DeepCAD from a geometry-only dataset into a text-geometry paired dataset, enabling the first end-to-end text-to-CAD models.</p>
<p>DeepCAD is the most-cited dataset in text-to-CAD research. It defined the field's technical approach. It also defined the field's limitations. When a text-to-CAD tool generates a beautiful bracket but can't handle a gear, a swept tube, or a thin-walled injection-molded housing, the training data is a big part of the reason.</p>
<h2>Fusion 360 Gallery: small but rich</h2>
<p>The Fusion 360 Gallery Dataset, published by Autodesk Research in 2021 (Willis et al.), is much smaller than ABC or DeepCAD, about 8,625 models, but much richer in information.</p>
<p>Each model includes the complete design history: the sequence of modeling operations, the sketch geometry, the constraints, the parameters, and the feature tree. This is the only major public dataset that preserves full parametric design history as a human engineer would experience it in a real CAD tool. You don't just see what the part looks like. You see how it was built, step by step, decision by decision.</p>
<p>The models also include B-Rep geometry, mesh representations, segmented surfaces, and metadata about the design operations used. It's the most complete representation of the CAD design process available in any public dataset.</p>
<p>The problems: size and source bias. 8,625 models is tiny by ML standards. And because the models come from Fusion 360 Gallery, they represent what people chose to share publicly, which skews toward demonstrations, tutorials, and personal projects rather than production engineering. You get interesting geometry but not necessarily representative geometry.</p>
<p>For researchers working on <a href="/posts/how-text-to-cad-works">how text-to-CAD works</a> at the architecture level, the Fusion 360 Gallery is invaluable because it provides ground truth for what a proper feature tree looks like. For training models that need to produce editable, parametric output, it's one of the few sources that shows what "right" looks like. It just doesn't show it at scale.</p>
<h2>ShapeNet: the one from the graphics world</h2>
<p>ShapeNet was published in 2015 by Chang et al. and contains approximately 51,300 3D models organized into 55 categories from WordNet taxonomy. It's been enormously influential in 3D deep learning research, and you'll see it cited in almost every paper about 3D generation.</p>
<p>The catch for CAD: ShapeNet models are meshes, not B-Rep solids. They were collected from online 3D model repositories, not CAD tools. The geometry represents visual appearance, not engineering definition. You can't extract feature trees, sketch constraints, or parametric dimensions from ShapeNet models because that information was never there.</p>
<p>ShapeNet is useful for training models that need to understand 3D shape in general: classification, retrieval, reconstruction. It's less useful for text-to-CAD specifically because the output format doesn't match what CAD engineers need. A mesh chair from ShapeNet and a parametric bracket from DeepCAD occupy different universes in terms of engineering utility.</p>
<p>Some research projects have used ShapeNet as supplementary data for shape understanding while using DeepCAD for the actual CAD generation task. That's a reasonable approach, but it doesn't solve the fundamental problem: the CAD-specific data remains scarce.</p>
<h2>Thingi10K: the 3D printing dataset</h2>
<p>Thingi10K, published in 2016 by Zhou and Jacobson, contains 10,000 3D models from Thingiverse, the largest repository of user-submitted 3D printing files. The models are stored as meshes (STL and OBJ) and span a wide range of categories: mechanical parts, art, household items, toys, tools, cosplay props.</p>
<p>The value of Thingi10K is its diversity and its connection to real fabrication. These are models people actually printed. They include mechanical parts alongside decorative objects, giving a broader view of what users create in 3D modeling tools.</p>
<p>The limitations for AI training: mesh-only format (no parametric data), no design history, no manufacturing metadata beyond "someone printed this." The geometric quality varies enormously because the models come from users of all skill levels. Some are well-designed mechanical parts. Some are decorative meshes that would crash a CAD kernel.</p>
<p>For text-to-CAD research specifically, Thingi10K is marginal. For broader research on 3D shape understanding and generation, it's a useful supplementary dataset.</p>
<h2>What each dataset includes and misses</h2>
<p>To make this concrete, here's what a real engineering part needs and what each dataset provides:</p>
<p>Geometry (the shape itself): All datasets provide this, though in different formats. ABC and DeepCAD provide B-Rep. ShapeNet and Thingi10K provide meshes. Fusion 360 Gallery provides both.</p>
<p>Design history (how it was built): Only Fusion 360 Gallery. DeepCAD has command sequences, which is a partial version. Everything else is geometry-only.</p>
<p>Text descriptions: Only DeepCAD, through the Text2CAD annotation layer. Everything else has no text pairing.</p>
<p>Dimensional accuracy (precise measurements): DeepCAD and Fusion 360 Gallery preserve exact dimensions. ABC preserves B-Rep dimensions. ShapeNet and Thingi10K are approximate at best.</p>
<p>Tolerances: None. No public dataset includes tolerance information.</p>
<p>Material specifications: None in any meaningful way.</p>
<p>Manufacturing process data: None. No dataset records whether a part was machined, molded, printed, or cast, or what process parameters were used.</p>
<p>Assembly context: None. All major datasets contain individual parts, not assemblies with mating relationships.</p>
<p>Design intent (why features exist): None, beyond what can be inferred from the feature sequence.</p>
<p>The pattern is clear: public datasets provide shape. Real engineering requires shape plus context. The context, tolerances, materials, manufacturing processes, assembly relationships, design intent, is exactly what's missing, and it's exactly what would make text-to-CAD output useful for actual engineering work.</p>
<h2>The proprietary data problem</h2>
<p>The most interesting CAD data in the world sits inside corporate PDM systems, and it's not coming out.</p>
<p>A mid-size manufacturing company might have 50,000 to 500,000 parts in their vault. A large automotive or aerospace company has millions. These parts are complex. They have real tolerances, real material specifications, real manufacturing data associated with them. Many have revision histories going back years or decades. Some are linked to inspection reports, manufacturing defect records, and cost data.</p>
<p>This data, if it could be assembled, cleaned, and annotated, would be transformative for ML in CAD. Instead of training on 178,000 simple extrusions, you could train on millions of production parts spanning every manufacturing process and material. The models would learn what real engineering looks like because they'd see real engineering.</p>
<p>But companies don't share this data, for legitimate reasons. Part designs are proprietary. They contain trade secrets. They reveal product roadmaps, manufacturing capabilities, and competitive information. Even anonymized, a collection of automotive bracket designs from a specific company might reveal something about their upcoming vehicle platform. IP protection is real, and no amount of academic enthusiasm is going to override it.</p>
<p>Some companies have internal ML initiatives using their own data. Siemens, Autodesk, PTC, and Dassault all have access to customer data through their cloud platforms, subject to their terms of service and privacy policies. Whether and how they use this data for training is an active area of legal and ethical discussion that I'm watching with professional interest and personal skepticism.</p>
<p>The practical result: public research advances on limited data. Corporate initiatives advance on proprietary data that never gets published. And the gap between what public text-to-CAD tools can generate and what production engineering requires remains wide.</p>
<h2>Dataset bias: the simple-parts problem</h2>
<p>Every public CAD dataset is biased toward simple mechanical parts. This isn't an accident. It's a consequence of how the data was collected.</p>
<p>Public repositories attract hobbyists, students, and demonstrators. The geometry they share tends to be simple, self-contained, and single-part. The complex, multi-feature, multi-body, assembly-integrated parts that make up real products don't get shared because they're proprietary, because they require context to understand, and because sharing a single part from a 200-part assembly without the rest of the assembly is like sharing one chapter from the middle of a novel.</p>
<p>This bias has direct consequences for text-to-CAD. Models trained on simple parts generate simple parts. When I ask a text-to-CAD tool for "a bracket," it produces something reasonable because brackets are well-represented in the training data. When I ask for "a four-cavity injection mold base with guided ejection," the output is useless because the model has never seen one.</p>
<p>The bias extends to modeling operations too. DeepCAD only contains sketch-and-extrude operations. Parts built with sweeps, lofts, revolves, patterns, or surfacing techniques are excluded. This means the AI literally cannot produce geometry that requires these operations. It's not a quality problem. It's a vocabulary problem. The training data taught the model to speak in extrusions. Asking it to loft is like asking someone who only knows English to write in Japanese.</p>
<h2>What metadata is missing and why it matters</h2>
<p>The missing metadata is as important as the missing geometry, maybe more so.</p>
<p>Tolerances define what "close enough" means for each feature. Without them, a generated part has no specification, just a shape. Every hole is exactly its nominal size, which is not how manufacturing works. A 6 mm hole might need to be 6.000 +0.018/-0.000 for a bearing press fit, or 6.2 ±0.1 for a clearance hole. The number 6 alone is meaningless without the tolerance, and no training dataset includes this information.</p>
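<p>The gap between a number and a specification is easy to show. A minimal sketch using the two holes from the paragraph above; the class is illustrative, not any CAD system's tolerance model:</p>

```python
from dataclasses import dataclass

@dataclass
class TolerancedDim:
    nominal: float
    upper: float   # upper deviation in mm
    lower: float   # lower deviation in mm (signed)

    def limits(self):
        return (self.nominal + self.lower, self.nominal + self.upper)

    def accepts(self, measured_mm):
        lo, hi = self.limits()
        return lo <= measured_mm <= hi

press_fit = TolerancedDim(6.0, 0.018, 0.0)   # 6.000 +0.018/-0.000
clearance = TolerancedDim(6.2, 0.1, -0.1)    # 6.2 +/-0.1
```

<p>Both holes are just "6 mm" to a text-to-CAD model, but a hole measured at 6.15 mm is fine as a clearance hole and scrap as a bearing seat. That difference is the specification, and no public training set teaches it.</p>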
<p>Material specifications determine what's physically possible. A 0.5 mm wall in polycarbonate is fine. A 0.5 mm wall in aluminum is a problem on a mill. A 0.5 mm wall in cast iron doesn't exist outside of research papers. The AI doesn't know the material, so it can't know the limits.</p>
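<p>A material-aware check is equally simple to express once the material is known. A sketch with made-up limit values loosely based on the examples above; real minimums depend on geometry, machine, alloy, and process details.</p>

```python
# Assumed minimum wall thickness guidance in mm, keyed by (material, process).
# Illustrative placeholder numbers, not datasheet or handbook values.
MIN_WALL_MM = {
    ("polycarbonate", "injection_molding"): 0.5,
    ("aluminum_6061", "cnc_milling"): 0.8,
    ("cast_iron", "casting"): 3.0,
}

def wall_ok(material: str, process: str, wall_mm: float) -> bool:
    """Return True if the wall meets the (assumed) minimum for this combo."""
    limit = MIN_WALL_MM.get((material, process))
    if limit is None:
        raise KeyError(f"no guidance for {material} / {process}")
    return wall_mm >= limit

# The same 0.5 mm wall from the text, three different answers:
print(wall_ok("polycarbonate", "injection_molding", 0.5))  # True
print(wall_ok("aluminum_6061", "cnc_milling", 0.5))        # False
```

<p>The check is trivial; what's missing from the training data is the material key that would let an AI run it.</p>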
<p>Manufacturing intent, the reason a part looks the way it does, is the deepest kind of missing metadata. A fillet exists because the machinist needs a tool radius there, or because the molder needs draft, or because the stress analyst said the sharp corner would crack. Three identical fillets, three different reasons. The training data records the fillet. It doesn't record the reason. And the reason is what determines whether the fillet should be 1 mm or 3 mm.</p>
<h2>How the data gap affects text-to-CAD quality</h2>
<p>Every limitation I've described in <a href="/posts/text-to-cad-guide">text-to-CAD output quality</a> traces back to the training data.</p>
<p>Dimensional inaccuracy: the models learned from annotations that say "about 40 mm" paired with geometry that's 41.3 mm. The model learned to approximate, not to be precise.</p>
<p>Limited geometry range: the models learned from simple extrusions and produce simple extrusions. Complex geometry is out of vocabulary.</p>
<p>No manufacturing awareness: the models never saw manufacturing context, so they can't produce manufacturing-aware output.</p>
<p>No tolerance generation: the models never saw tolerances, so they can't generate them.</p>
<p>No assembly understanding: the models never saw assemblies, so they can't reason about part relationships.</p>
<p>This is not a criticism of the researchers who built these datasets. Given the constraints, they've done remarkable work. DeepCAD enabled an entire field of research. The Fusion 360 Gallery is the gold standard for design history data. The ABC Dataset proved that large-scale CAD data collection was possible.</p>
<p>But the honest picture is this: the public data available for training CAD AI is like teaching someone woodworking using only photos of IKEA furniture. The shapes are there. The material is hinted at. The joints, the grain direction, the tooling marks, the assembly sequence, and the decades of craft knowledge that went into making the joints work, none of that is in the picture. And it shows in the output.</p>
<h2>Where the data might come from</h2>
<p>I see three plausible paths to better training data.</p>
<p>Synthetic data generation: using existing CAD tools to procedurally generate large numbers of parametric models with controlled properties. This is already happening in some research labs. The risk is that synthetic data produces synthetic-looking output, models that are geometrically valid but don't reflect how real engineers design.</p>
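<p>A toy version of that procedural idea, assuming nothing about any particular lab's pipeline: sample sketch-and-extrude parameters and pair each one with a templated caption. Function names and ranges are invented for illustration.</p>

```python
import random

def sample_sketch_extrude(rng: random.Random) -> dict:
    """Sample one synthetic sketch-and-extrude part: a rectangle or circle
    profile extruded to a random depth, with a paired text caption.
    Real pipelines would validate the result in a CAD kernel and cover
    far more operation types than this."""
    profile = rng.choice(["rectangle", "circle"])
    if profile == "rectangle":
        w, h = round(rng.uniform(10, 120), 1), round(rng.uniform(10, 120), 1)
        params = {"width_mm": w, "height_mm": h}
        caption = f"plate {w} by {h} mm"
    else:
        d = round(rng.uniform(5, 80), 1)
        params = {"diameter_mm": d}
        caption = f"disc {d} mm diameter"
    depth = round(rng.uniform(2, 40), 1)
    return {"profile": profile, "params": params,
            "extrude_depth_mm": depth,
            "caption": f"{caption}, extruded {depth} mm"}

rng = random.Random(42)
for record in (sample_sketch_extrude(rng) for _ in range(3)):
    print(record["caption"])
```

<p>Notice that the captions come out exactly as templated as the geometry. That's the synthetic-looking-output risk in miniature: the generator can only emit the distribution you wrote into it.</p>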
<p>Federated or anonymized corporate data: companies contributing anonymized geometry and metadata to shared datasets without revealing proprietary designs. This requires solving real technical and legal problems around anonymization, but the incentive exists: better AI tools benefit the companies whose data trains them. Industry consortia or standards bodies might eventually broker this.</p>
<p>Annotation of existing public data: taking the models that already exist in ABC, DeepCAD, and other datasets and adding the missing metadata through expert annotation or inference. This is labor-intensive but feasible for specific metadata types. Estimating likely manufacturing processes from geometry, inferring material from typical dimensions and features, adding tolerance standards based on common practice.</p>
<p>None of these paths is fast. All of them require significant investment. And none will produce data as rich as what already sits inside corporate servers.</p>
<h2>The honest picture</h2>
<p>The text-to-CAD field is limited by its training data more than by its models. The ML architectures are capable enough. The <a href="/posts/neural-cad">neural approaches to CAD generation</a> are improving. The bottleneck is that nobody has the data to train these models on what real engineering looks like.</p>
<p>Public datasets gave us the proof of concept. Simple brackets and extruded plates from AI prompts are now a reality. That's a real achievement built on DeepCAD, ABC, Fusion 360 Gallery, and the researchers who assembled them.</p>
<p>The next step, generating geometry that's dimensionally precise, manufacturing-aware, properly toleranced, and representative of the full range of engineering design, requires data that doesn't exist in public and may not exist in any single location. Building that data is the boring, expensive, unglamorous work that determines whether text-to-CAD stays a prototyping novelty or becomes a real engineering tool.</p>
<p>My bet is that it stays a novelty for longer than the vendors will admit and becomes useful faster than the skeptics expect. The timeline depends entirely on the data. And the data, right now, is a collection of extruded rectangles in a research dataset, looking nothing like the parts I design for a living.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AI vs human CAD design: what each is actually good at</title>
      <link>https://blog.texocad.ai/posts/ai-vs-human-cad-design</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-vs-human-cad-design</guid>
      <pubDate>Fri, 27 Mar 2026 00:00:00 GMT</pubDate>
      <description>AI is faster at generating simple geometry. Humans are better at everything else. The interesting part is where the boundary actually sits right now.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>career</category>
      <category>comparison</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI excels at generating simple prismatic geometry quickly (brackets, mounts, basic enclosures). Humans are superior at design intent, assembly integration, DFM, surface quality, tolerance specification, and adapting to constraints. The practical boundary: AI handles first drafts of simple parts; humans handle everything that requires judgment, context, or manufacturing knowledge.</p>
<p>AI is faster at generating simple geometry from a description. Humans are better at everything else in CAD design. That's the honest summary, and the interesting question is where exactly the line sits today, because most of the conversation about this topic either pretends AI is about to take over mechanical design or pretends it's useless. Neither is true. Last week I ran a head-to-head test on a real part to find out where the boundary actually falls, and the result was more nuanced than either camp wants to admit.</p>
<p>The part was a motor mounting bracket. Nothing exotic. L-shaped, two mounting holes on each flange, a slot for cable routing, fillets on the inner bend. The kind of thing I've designed a hundred times in Fusion 360, usually while half-listening to a conference call. I gave a text-to-CAD tool a detailed prompt. Then I modeled the same part manually, timing both approaches. What happened after that is the most honest comparison I can offer.</p>
<h2>Speed: where AI wins and where it doesn't</h2>
<p>The AI generated a motor bracket in about twenty seconds. I had a 3D model on screen, exportable as a STEP file, while my manual version was still a half-finished sketch. For raw geometry creation speed on a simple part, AI wins. It's not close.</p>
<p>But speed is more complicated than "time to first model." Here's how the full timeline played out.</p>
<p>AI route: 20 seconds to generate the part. Then 5 minutes to open the STEP in Fusion and measure everything. The slot was 1.2 mm narrower than I specified. One hole was 0.8 mm off-center. The fillet radius was close but not the value I asked for. Then 15 minutes to rebuild the features I couldn't trust, re-dimension the slot, move the hole, and add the mounting details that the AI skipped. Total usable time: about 20 minutes.</p>
<p>Manual route: about 12 minutes to sketch, extrude, cut the slot, pattern the holes, and add fillets. Everything dimensioned exactly. Constraints in place. Feature tree clean enough to modify later. Total usable time: 12 minutes.</p>
<p>For this part, manual was faster to a finished, accurate result. The AI was faster to a first shape. Those are different things, and which one matters depends entirely on what you're doing with the output.</p>
<p>If I'm exploring form options early in a design and I don't care about dimensional precision yet, AI's twenty-second turnaround is genuinely valuable. If I need a finished part with correct dimensions and a good feature tree, the AI's speed advantage evaporates during the cleanup.</p>
<h2>Quality: the gap that matters</h2>
<p>Let's compare what each approach actually produces, because this is where the conversation usually falls apart.</p>
<p>Surface quality and topology. My manual model had clean B-Rep geometry. Planar faces were truly planar. Cylindrical holes were true cylinders. Fillet surfaces were tangent-continuous. The AI-generated model was close but had minor surface deviations on what should have been flat faces, and the fillet geometry had slight irregularities visible when I checked the curvature analysis. For a 3D print prototype, nobody would notice. For a machined part or a mating surface, it matters.</p>
<p>Design intent. My manual model captured relationships. The holes were positioned parametrically relative to the flange edges. The slot width was driven by the cable diameter plus clearance. The overall dimensions were linked so I could scale the bracket by changing two values. The AI model captured none of this. Features existed at fixed coordinates with no encoded reason for their positions. Moving one hole meant manually moving it. Changing the cable size meant manually re-cutting the slot. The model was geometry without memory.</p>
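<p>The difference is easiest to see as a parameter table. In the manual model, driven dimensions are functions of a few inputs, so one change propagates; in the AI model, every value is a frozen coordinate. A sketch with illustrative numbers, not the actual bracket's:</p>

```python
# Hypothetical parameter scheme for the manual bracket: derived
# dimensions recompute when an input changes.
def bracket_dims(cable_dia_mm: float, flange_len_mm: float) -> dict:
    return {
        "slot_width_mm": cable_dia_mm + 1.0,   # cable plus clearance
        "hole_offset_mm": flange_len_mm / 4,   # holes track the flange edges
        "flange_len_mm": flange_len_mm,
    }

before = bracket_dims(cable_dia_mm=3.0, flange_len_mm=40.0)
after = bracket_dims(cable_dia_mm=4.5, flange_len_mm=40.0)  # thicker cable
print(before["slot_width_mm"], "->", after["slot_width_mm"])  # 4.0 -> 5.5

# The AI model's equivalent: fixed values with no derivation behind them.
# Nothing to recompute; every change is a manual re-cut.
ai_model = {"slot_width_mm": 4.0, "hole_offset_mm": 10.0}
```

<p>"Geometry without memory" is exactly the second dictionary: the numbers are there, the reasons are gone.</p>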
<p>Dimensional accuracy. I asked for specific dimensions. My manual model hit every one exactly, because I typed them in. The AI model was close on most, off on a few. The 6 mm holes were 5.7 mm. The 40 mm flange was 39.4 mm. For a concept, fine. For ordering fasteners or checking mate clearances, not fine.</p>
<p>Manufacturing readiness. My manual model had 0.5 mm internal fillet radii compatible with a standard end mill. Wall thicknesses checked out for 6061 aluminum. Hole positions left enough material to the edge. The AI model had one internal corner with zero radius, walls that varied slightly in thickness, and a hole position that left only 1.8 mm of material to the edge, which my machinist would flag immediately.</p>
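<p>The edge-distance problem reduces to a one-line rule, which makes its absence from AI output conspicuous. A sketch using a common rule of thumb, edge distance of at least 1.5 times hole diameter measured to the hole center; the 3 mm hole diameter is an assumption for illustration, and your shop's ratio may differ:</p>

```python
def check_edge_distance(hole_dia_mm: float, edge_dist_mm: float,
                        min_ratio: float = 1.5) -> tuple[bool, float]:
    """Flag holes placed too close to an edge.

    edge_dist_mm is hole center to edge. The rule used here
    (edge distance >= min_ratio * diameter) is a common starting
    point, not a universal standard."""
    required = min_ratio * hole_dia_mm
    return edge_dist_mm >= required, required

# The flagged hole from the text left 1.8 mm of material to the edge.
# For an assumed 3 mm hole, center-to-edge is 1.8 + 1.5 = 3.3 mm:
ok, required = check_edge_distance(hole_dia_mm=3.0, edge_dist_mm=3.3)
print(ok, required)  # False 4.5
```

<p>A rule this cheap to run is table stakes for any tool that claims manufacturing awareness, and no generation tool I've tested runs it.</p>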
<h2>The context advantage</h2>
<p>Here's something that doesn't show up in a feature comparison but dominates real work: context.</p>
<p>When I designed that bracket manually, I knew it was going to mount to a specific aluminum extrusion. I knew the motor shaft centerline needed to be at a certain height. I knew the cable running through the slot was a 14 AWG silicone wire with a specific bend radius. I knew the bracket would be machined from 6061-T6 and anodized. I knew it sat next to a heat sink that needed 3 mm clearance.</p>
<p>The AI knew none of this. It generated a bracket-shaped object that existed in isolation. No relationship to the motor, the frame, the cable, the adjacent components, or the manufacturing process. The bracket was technically a bracket, but it was a bracket without a purpose, just a shape that happened to look like one.</p>
<p>This is the fundamental asymmetry. <a href="/posts/ai-cad-for-real-work">AI-generated CAD for real work</a> operates without context. Every part is an island. Human designers work with context as their primary material. The shape is downstream of the constraints, and the constraints come from assembly, manufacturing, cost, and function, none of which the AI has access to.</p>
<h2>The head-to-head scoreboard</h2>
<p>After running this comparison across several parts (the bracket, a simple plate, a U-channel enclosure, and a PCB standoff), here's where things landed.</p>
<p>Speed to first visible geometry: AI wins on everything except extremely simple parts where a manual extrusion is almost as fast as writing the prompt.</p>
<p>Dimensional accuracy: Human wins every time. AI gets close. Close isn't good enough for manufacturing.</p>
<p>Design intent and parametric flexibility: Human wins completely. AI output has no usable feature tree, no constraints, and no capacity to adapt to changes.</p>
<p>Manufacturing readiness: Human wins completely. AI has no DFM awareness at all. I wrote about this at length in the <a href="/posts/text-to-cad-limitations">text-to-CAD limitations</a> post, and every test I run confirms it.</p>
<p>Surface quality and topology: Human wins, though the gap is smaller on simple geometry.</p>
<p>Assembly integration: Human wins completely. AI can't do this. At all.</p>
<p>Communication and documentation: Human wins completely. AI generates geometry. It doesn't generate drawings, annotations, material callouts, or anything a shop needs to make the part.</p>
<p>The score is roughly 6-1 in favor of human designers, with AI winning on speed to first geometry and losing everywhere else. That single win is worth something, which is why I still use text-to-CAD. But it's important to be clear about how narrow that win is.</p>
<h2>Where collaboration actually works</h2>
<p>The useful framing isn't AI versus human. It's AI then human.</p>
<p>The workflow I've settled into looks like this: I use text-to-CAD to generate starting geometry for simple parts when I want to react to a shape rather than imagine it. The AI gives me something to look at, rotate, and evaluate. Then I take that geometry into Fusion 360 and do the actual design work: rebuild with proper constraints, add the dimensions I need, account for manufacturing, integrate with the assembly.</p>
<p>It's like using a rough clay model before committing to the final sculpture. The clay doesn't need to be precise. It needs to help me think. The AI output helps me think faster on some parts. Not all. Not most. But some.</p>
<p>The other place collaboration works is search and retrieval. Using AI to find similar parts in an existing library, surface relevant designs before I model from scratch, or suggest standard components that fit my constraints. This isn't glamorous, but it saves real time in environments with large part libraries.</p>
<p>Where collaboration doesn't work is expecting the AI to handle any step that requires engineering judgment. The moment you need to specify a tolerance, evaluate a tool path, check a clearance, or decide between two manufacturing approaches, you're back to being a human designer with a keyboard and a cup of coffee that's gone cold again.</p>
<h2>What this means in practice</h2>
<p>If you're a CAD designer, AI is a tool that's good at one thing: fast rough geometry for simple parts. It's bad at everything else you do. The correct response is to learn to use it where it helps and not feel threatened by a capability that covers maybe ten percent of your actual job.</p>
<p>If you're a manager evaluating AI for your design team, understand that the demo isn't the workflow. Generating a bracket in twenty seconds is impressive. Turning that bracket into a production part takes the same engineer the same amount of time whether the starting geometry came from AI or from their own sketch. The savings are real but modest, and they're concentrated at the early concept stage.</p>
<p>If you're a student wondering whether to learn CAD or just learn to prompt AI, learn CAD. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> is useful, but it's useful in the way a power tool is useful: it makes an experienced person faster. It doesn't make an inexperienced person competent.</p>
<p>The honest current state: AI generates shapes. Humans generate parts. The difference between a shape and a part is engineering, and engineering is still a human job. I don't expect that to change quickly, and I don't lose sleep over it. I just wish the AI could also generate the missing tolerance callouts while it's at it. That would actually save me some time.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AI CAD workflow: where AI fits in a real design process</title>
      <link>https://blog.texocad.ai/posts/ai-cad-workflow</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-cad-workflow</guid>
      <pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate>
      <description>A real design process has concept, detail design, DFM review, documentation, and revision. AI fits into maybe two of those stages. Here&apos;s which ones.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>workflow</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI fits into CAD workflows at two main stages: early concept geometry generation (text-to-CAD for first drafts) and documentation assistance (AI search, automated drawing notes). AI does not fit into detailed design, DFM review, tolerance specification, or assembly integration. The most productive approach: use AI for rough geometry, then switch to traditional CAD for everything else.</p>
<p>AI fits into a real CAD design process at two stages: early concept geometry and late-stage documentation. That's it, as of April 2026. It does not fit into detailed design, DFM review, tolerance specification, or assembly work. I spent a month trying to wedge AI tools into every stage of a product design project, a small plastic enclosure for a client's sensor board, and the results were instructive. The AI helped me generate three rough enclosure concepts in about fifteen minutes. It also helped me auto-populate some drawing notes at the end. Everything in between, the six weeks of actual engineering, was me, Fusion 360, a machinist's feedback, and a lot of coffee.</p>
<p>That ratio tells you where we really are. Not where the vendor slides say we are.</p>
<h2>The stages of a real design process</h2>
<p>Before talking about where AI fits, it helps to name what "a design process" actually contains, because the conference version and the working version are different things.</p>
<p>The conference version: concept, design, manufacture. Three clean boxes. Maybe an arrow between them.</p>
<p>The working version, the one I've lived in for over a decade across AutoCAD, SolidWorks, and Fusion 360, looks more like this:</p>
<p>A client describes what they need, usually vaguely. You ask questions. You sketch something on paper or in a quick 3D concept. You build a rough model to test the basic geometry, clearances, and proportions. You throw that model away. You build a better one. You get into detail design: dimensioned features, proper constraints, fillets that serve a purpose, mounting features that reference real hardware. You do a DFM review, either yourself or with your shop, and discover that half your features can't be manufactured the way you drew them. You revise. You add tolerances. You create drawings. Someone asks for a change. You revise again. You export. You send files. The machinist calls. You revise one more time. The part gets made.</p>
<p>That's maybe seven or eight distinct stages, and they don't flow cleanly. They loop. They overlap. The DFM review sends you back to detail design. The revision request sends you back to DFM. A new requirement from the client sends you back to concept.</p>
<p>The question isn't whether AI can do CAD. It's which specific stages in this messy loop can AI contribute to without creating more work than it saves.</p>
<h2>Stage 1: Concept geometry, where AI actually helps</h2>
<p>This is AI's best moment in the whole process, and it's still limited.</p>
<p>When you're in the early concept phase, you need rough shapes fast. Not finished parts. Not dimensioned geometry. Just enough 3D form to look at proportions, check clearances against a board outline, see if the basic approach makes sense before you commit to a parametric model you'll spend days refining.</p>
<p><a href="/posts/text-to-cad-guide">Text-to-CAD tools</a> are genuinely useful here. I used Zoo.dev to generate three variants of a rectangular enclosure with different mounting tab positions. Each one took about 30 seconds. The dimensions were approximate, the fillets were wrong, the wall thickness was whatever the AI decided, but I could import the STEP files into Fusion 360 and immediately see which form factor worked best with the board layout. That saved me maybe 20 minutes of sketching and extruding three throwaway concepts by hand.</p>
<p>The key insight: this only works because concept geometry is disposable. You're going to throw it away and rebuild. The inaccuracies don't matter because you're not keeping the model. You're keeping the idea.</p>
<p>The <a href="/posts/text-to-cad-workflows-and-tools">text-to-CAD workflows and tools</a> post covers the specific tools and how to set up this kind of generation loop. The practical advice: describe the part with specific dimensions, even if they're approximate. "Rectangular enclosure 90 by 60 by 25 mm, wall thickness 2 mm, four corner mounting tabs with M3 holes" gets you something useful. "A box for a sensor" gets you something generic and useless.</p>
<h2>Stage 2: Detail design, where AI does not help</h2>
<p>This is where the real work happens and where AI has nothing useful to offer.</p>
<p>Detail design means: proper sketch constraints. Dimensions that reference real hardware datasheets. Features that relate to each other through parametric references. A wall that's 1.5 mm because the injection molder said anything thinner warps in ABS. A rib pattern that follows the load path. A snap-fit designed to deflect 0.8 mm without exceeding the material's yield stress. A boss that positions a threaded insert at a specific height relative to the PCB standoff on the opposite wall.</p>
<p>None of this can be prompted. I tried. I asked various AI tools to "add a snap-fit latch to the east wall of the enclosure, 12mm from the top edge, designed for ABS with 1.2mm deflection." What I got was a bump. A decorative protrusion that looked vaguely like a snap-fit in the same way a drawing of a door handle is vaguely like a door handle. No cantilever mechanics. No strain calculation. No consideration of the mating geometry on the lid.</p>
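<p>The strain calculation the output skipped is standard beam theory: for a straight cantilever of uniform thickness t and length L deflected by y at the tip, the peak bending strain is 3ty / (2L²). A sketch with assumed beam dimensions; only the 1.2 mm deflection comes from the prompt, and the length, thickness, and allowable strain are illustrative placeholders:</p>

```python
def cantilever_snapfit_strain(thickness_mm: float, deflection_mm: float,
                              length_mm: float) -> float:
    """Peak bending strain for a straight, uniform-thickness cantilever
    snap-fit deflected at the tip: eps = 3 * t * y / (2 * L**2).
    Classic design-guide formula; tapered beams behave differently."""
    return 3 * thickness_mm * deflection_mm / (2 * length_mm ** 2)

# Assumed beam: 18 mm long, 2 mm thick, deflected 1.2 mm per the prompt.
strain = cantilever_snapfit_strain(thickness_mm=2.0, deflection_mm=1.2,
                                   length_mm=18.0)

# Design guides commonly allow on the order of 2 percent strain for a
# one-time ABS snap-fit; treat this as a placeholder, not a datasheet value.
ALLOWABLE_ABS = 0.02
print(f"{strain:.4f}", strain <= ALLOWABLE_ABS)  # 0.0111 True
```

<p>Three lines of arithmetic, and it's the difference between a snap-fit and a decorative bump. The bump is what you get when the model has never seen the arithmetic.</p>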
<p>Detail design requires engineering judgment at every feature. The AI doesn't know your material. It doesn't know your mating parts. It doesn't know your production volume or your supplier's capabilities. It doesn't know that the hole on the left side needs to be 4.2 mm because it's an M4 clearance hole, not because 4.2 is a nice number. Every dimension in a detailed model has a reason, and the AI doesn't have access to any of those reasons.</p>
<p>This stage is 60 to 70 percent of the total design time on most projects I work on. AI contributes zero to it.</p>
<h2>Stage 3: DFM review, where AI is absent</h2>
<p>Design for manufacturability review is the stage where you check whether the part you designed can actually be made. With the tools. With the materials. With the tolerances. At the cost that makes the project viable.</p>
<p>I have never seen an AI tool that can do a DFM review. Not a real one. There are AI-powered DFM checkers emerging that flag obvious problems, walls too thin, draft angles too shallow, but a real DFM review is a conversation. It's your machinist saying "I can hold that tolerance on the bore but not on the outer diameter with the setup you're assuming." It's your molder saying "that rib will sink on the show surface and your customer will reject it." It's you redesigning a feature because the tooling cost for the ideal geometry is three times the budget.</p>
<p>The <a href="/posts/ai-cad-for-real-work">AI CAD for real work</a> post covers the manufacturing gap in detail. The summary: AI generates geometry without manufacturing context because it was trained on geometry without manufacturing context. That's not a bug that gets patched. It's a fundamental limitation of the training data.</p>
<h2>Stage 4: Tolerancing and specification, where AI doesn't exist</h2>
<p>After the geometry is finalized and DFM-reviewed, you add the engineering metadata that turns a shape into a specification. Dimensional tolerances. Geometric tolerances. Surface finish callouts. Material specifications. Notes about critical features.</p>
<p>No text-to-CAD tool, no AI copilot, no vendor assistant generates this data. Not in 2026. The model arrives as nominal geometry. A hole is 6 mm. It's not 6 mm H7. It's not 6 mm plus 0.012 minus zero. It's just 6 mm, which in manufacturing terms means "the shop will guess."</p>
<p>This stage is tedious and it requires precision. It's exactly the kind of thing you'd want AI to help with, but the training data to teach an AI about tolerance specification doesn't exist in any public dataset. GD&#x26;T is a specialist language that encodes decades of manufacturing knowledge into symbols, and nobody has trained a model on it at scale.</p>
<h2>Stage 5: Documentation, where AI helps again</h2>
<p>Drawing creation. Standard views. Dimension placement. Notes. Title block population. Bill of materials.</p>
<p>This is the second stage where AI earns its place. SolidWorks 2026 ships AI-powered drawing generation that can produce 70 to 80 percent of a standard drawing automatically. Solid Edge 2026 does something similar. These tools choose standard views, place dimensions, and generate the repetitive layout work that used to eat Friday afternoons.</p>
<p>I've been doing engineering drawings for long enough to know that this specific task, creating standard documentation from a finished 3D model, is one of the most automatable parts of CAD work. The rules are well-defined. The standards are known. The layout conventions are predictable. This is exactly the kind of structured, repetitive task that AI handles well.</p>
<p>It still needs review. You still check every dimension placement, every note, every view alignment. But going from a blank drawing to an 80 percent complete one in seconds instead of starting from scratch is a real time savings on every part, every project.</p>
<h2>A practical daily workflow example</h2>
<p>Here's what my actual AI-assisted CAD workflow looks like on a typical project in 2026. I'm being specific because the general descriptions are always more optimistic than the reality.</p>
<p>Morning: client sends a rough spec for a sensor enclosure. I spend 30 minutes reading the spec and the board datasheet. I use Zoo.dev to generate three concept enclosures with different proportions and mounting approaches. I import the STEP files into Fusion 360, drop in the board model, and check basic clearances. I pick the concept that works best. Total AI involvement: 15 minutes of generation time, maybe 10 minutes of prompt iteration.</p>
<p>The next three days: I rebuild the enclosure from scratch in Fusion 360 as a proper parametric model. I design the wall thickness based on the material and process. I add snap-fits, bosses, standoffs, and cable routing features. I reference the board datasheet for every mounting hole position. I run an interference check with the lid. I send screenshots to the client. They want changes. I revise. AI involvement during these three days: zero.</p>
<p>Day five: I do a DFM check with my molder. Two features need redesigning. I spend an afternoon revising. AI involvement: zero.</p>
<p>Day six: I create engineering drawings. I use SolidWorks' drawing automation to generate the initial layout, then spend an hour adjusting views, adding GD&#x26;T, and writing notes. AI involvement: maybe 30 minutes of initial automation.</p>
<p>Day seven: export, final review, send to the shop. AI involvement: zero.</p>
<p>Total project time: roughly 40 hours. Total time saved by AI: maybe one to two hours. That's useful. It's not transformative. And it's honest.</p>
<h2>The handoff from AI to manual CAD</h2>
<p>The transition point between AI-generated geometry and real engineering work is the most important moment in this workflow, and it's the one nobody talks about.</p>
<p>When you import a text-to-CAD generated STEP file into Fusion 360, you get a dumb solid. No feature tree. No sketch constraints. No parametric dimensions. It's a starting shape, nothing more. Every feature you need, every dimension you need to control, every relationship between features, you build from scratch. The AI output is a reference, not a foundation.</p>
<p>I've tried using AI-generated geometry as a starting body and adding features to it. It works for simple additions: cutting a hole into an imported solid, adding a pocket. It breaks down for anything that requires the existing geometry to be parametrically defined. You can't constrain a new hole to be centered on a face that has no sketch reference. You can't drive a wall thickness parametrically when the wall is just a dumb solid with no history.</p>
<p>The practical approach: use the AI geometry as a visual reference. Put it on a separate layer or body. Build your real model next to it, sketching proper constrained geometry that references the hardware, the mating parts, and the manufacturing process. Delete the AI geometry when you're done.</p>
<p>This isn't a workaround. It's the workflow. And it's worth understanding before you invest time trying to make AI output do something it can't do.</p>
<h2>Which integrations actually work</h2>
<p>Not all AI-CAD integrations are equally useful. Based on my experience over the past year, here's an honest ranking.</p>
<p>AI-powered search in PLM and file systems: genuinely useful. Saves real time every week. Works well enough that I don't think about it much, which is the best compliment a tool can get.</p>
<p>Automated drawing generation: useful for standard parts with standard documentation requirements. Saves 30 to 60 minutes per part on documentation. Needs review but produces a solid starting point.</p>
<p>Text-to-CAD for concept geometry: useful for the first 10 percent of a project. Saves 15 to 30 minutes on simple parts. Worthless for complex geometry. The <a href="/posts/how-ai-is-changing-cad">how AI is changing CAD</a> post puts this in the broader context of what's really shifting in the industry.</p>
<p>AI copilots for troubleshooting: occasionally useful. Good when the error is common and well-documented. Less useful when the problem is specific to your model's history. I still search forums for the weird stuff.</p>
<p>AI command input (natural language): mildly useful. Saves a few seconds per operation. Adds up over a week. Not yet reliable enough to replace knowing the keyboard shortcuts.</p>
<p>AI for design decisions, manufacturing review, or tolerance specification: does not exist in any usable form.</p>
<h2>The honest picture</h2>
<p>AI fits into a CAD workflow the way a good calculator fits into structural engineering. It speeds up specific, bounded tasks. It does not replace the thinking. It does not understand the context. It does not know why you're building what you're building.</p>
<p>The two stages where AI contributes, early concept and late documentation, are the bookends of the process. The middle, where all the real engineering happens, is still manual, judgment-driven, and stubbornly human. That middle is also where most of the project time goes and most of the value gets created.</p>
<p>If you're looking to add AI to your design workflow, start with the bookends. Generate concept geometry to explore ideas fast. Use documentation automation to stop wasting Friday afternoons on drawing layouts. And don't feel guilty about doing everything in between the old-fashioned way, with constraints, dimensions, manufacturing knowledge, and a hot coffee. The AI isn't ready for the middle yet. Based on what I see in the tools and the research, it won't be for a while.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AI CAD trends in 2026: what changed and what didn&apos;t</title>
      <link>https://blog.texocad.ai/posts/ai-cad-2026-trends</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-cad-2026-trends</guid>
      <pubDate>Wed, 25 Mar 2026 00:00:00 GMT</pubDate>
      <description>A year ago, everyone predicted AI would revolutionize CAD. Some predictions were right. Most were early. A few were just wrong. Here&apos;s the honest scorecard.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>trends</category>
      <category>2026</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Key 2026 AI CAD trends: text-to-CAD tools improved but remain limited to simple parts. Major vendors (Autodesk, Dassault, PTC, Siemens) all shipped AI assistants. B-Rep generation got better. Parametric AI generation is still research-stage. The biggest actual impact is AI-assisted search and documentation, not geometry generation.</p>
<p>The biggest AI CAD trends in 2026 so far: text-to-CAD improved but remains limited to simple parts, every major vendor shipped an AI assistant, B-Rep generation got noticeably better, and parametric AI generation is still stuck in research papers. The real impact this year came from AI-assisted search and documentation, not the flashy geometry generation that gets all the attention. A year ago, the predictions were louder and bolder. Here's how they held up.</p>
<p>I keep a text file on my desktop called "predictions.txt" where I track the things people confidently claimed about AI and CAD at the start of each year. It's become one of my favorite documents. Not because I enjoy being right, though that helps, but because the gap between prediction and reality tells you more about where the technology actually stands than any product announcement. I added a new column this spring, labeled "what happened," and filled it in over a long Saturday with a pot of coffee that got progressively more bitter as the day wore on. Fitting.</p>
<h2>The prediction scorecard</h2>
<p>Let me start with the claims that were floating around in early 2025 and the beginning of 2026, and how they've played out.</p>
<p>"Text-to-CAD will generate production-ready parts." Verdict: wrong. Text-to-CAD tools are better at generating simple geometry than they were a year ago. Zoo.dev's output is cleaner. The dimensional accuracy has improved. But production-ready? Not remotely. The <a href="/posts/text-to-cad-limitations">text-to-CAD limitations</a> I wrote about months ago are all still present: no tolerances, no DFM awareness, no assembly support, poor complex surfaces. The tools generate better starting geometry. They don't generate finished engineering.</p>
<p>"AI will replace junior CAD designers." Verdict: premature. Junior designers are still employed. The work they do has not been automated to any meaningful degree. Yes, some simple geometry tasks are faster with AI. But junior designers do a lot more than extrude rectangles. They learn DFM, they participate in design reviews, they chase drawing revisions, they argue with suppliers about bend radii. AI does none of that. The concern is real for the long term. The timeline was wrong.</p>
<p>"Parametric AI generation will ship in a commercial tool." Verdict: hasn't happened. This was the prediction I most wanted to be true, because parametric output would make AI-generated geometry actually useful in revision cycles. Research papers keep showing promising results. No commercial tool has shipped reliable parametric generation that produces clean, editable feature trees. We're still getting dumb solids and fragile construction history. The gap between the research demo and a shipping product turned out to be wider than the optimists assumed.</p>
<p>"Every major vendor will have AI features." Verdict: correct, but less than expected. Autodesk, Dassault, PTC, and Siemens all ship some form of <a href="/posts/ai-in-cad-software">AI in their CAD software</a>. Credit where it's due: they moved. But the features are mostly assistant-level tools, command discovery, natural language help, and in some cases geometry suggestions. They're useful. They're not transformative. The gap between the keynote demo and the daily workflow is exactly as large as a decade of watching CAD vendor demos taught me to expect.</p>
<p>"B-Rep generation will surpass mesh quality." Verdict: partially correct. B-Rep generation from AI models has genuinely improved in 2026. The topology is cleaner. Edge cases that used to produce degenerate geometry are handled better. Zoo.dev and a few other tools produce STEP files that import without errors more often than they did a year ago. But "surpass mesh" overstates it. B-Rep generation is better. It's still not as reliable as manually created geometry, and the dimensional accuracy issues persist. Progress is real. The destination hasn't been reached.</p>
<h2>What vendors actually shipped</h2>
<p>Let me be specific about what arrived, not what was announced.</p>
<p>Autodesk added conversational features to Fusion 360 that help with command discovery and basic operation guidance. You can ask "how do I create a circular pattern" and get a useful answer with steps. The <a href="/posts/fusion-360-ai-features">Fusion 360 AI features</a> are functional and save time for users who don't know the interface well. For experienced users, the value is lower. I tried it for two weeks and found myself reaching for keyboard shortcuts instead because muscle memory is faster than typing a question.</p>
<p>Dassault added AI-assisted search to 3DEXPERIENCE that finds similar parts in a company's database using geometric similarity, not just file names or metadata. This is genuinely useful for large organizations where duplicate parts are a chronic problem. A designer looking for a bracket can search by describing what they need, and the system returns similar existing designs. This saves more real engineering time than any geometry generator, and it's the kind of quiet improvement that deserves more attention.</p>
<p>Siemens NX's AI chat handles operation guidance and some basic geometry suggestions. PTC's Creo assistant is similar. Both are early, both are improving, and both are less capable than their respective marketing materials imply. Standard vendor behavior.</p>
<p>On the startup side, Zoo.dev continued to improve their text-to-CAD generation quality. CADAgent for Fusion 360 remains useful for generating geometry with native feature history, though the feature trees still require cleanup. Several new tools appeared, most generating mesh rather than B-Rep, which limits their usefulness for engineering work.</p>
<p>The <a href="/posts/best-ai-cad-tools-2026">best AI CAD tools in 2026</a> look better than 2025's options. But "better" is relative. We went from "barely usable" to "usable for simple cases with manual cleanup." That's progress. It's just not the revolution.</p>
<h2>Text-to-CAD progress and remaining gaps</h2>
<p>The most visible trend in AI CAD is text-to-CAD, so let me be specific about what improved and what didn't.</p>
<p>What improved: simple part generation is more reliable. I run a standard test prompt monthly (a rectangular plate with holes and fillets), and the success rate has gone from about 60% to about 80% over the past year. The dimensional accuracy on simple features has tightened. The surface topology is cleaner. Error rates on STEP export have dropped. If you need a quick bracket or mounting plate for concept work, <a href="/posts/text-to-cad-guide">text-to-CAD</a> is a better tool than it was twelve months ago.</p>
<p>What didn't improve much: complex geometry. Anything beyond prismatic shapes (gears, complex curves, swept features, shell operations) still produces unreliable results. Assembly generation still doesn't exist in any practical sense. Tolerance and GD&#x26;T output still doesn't exist at all. Sheet metal and injection molding awareness hasn't appeared. DFM checking on generated output is still absent from the generation tools themselves, though some third-party checkers can be applied after the fact.</p>
<p>The remaining gaps are not version-number gaps. They're architecture gaps. Current text-to-CAD models are trained on geometry datasets that don't include manufacturing context, tolerance specifications, or assembly relationships. Until the training data changes, the output limitations won't change in fundamental ways. Incremental accuracy improvements, yes. Missing capabilities appearing from nowhere, no.</p>
<h2>B-Rep versus mesh: the quiet progress</h2>
<p>One area where real technical progress happened in 2026 is B-Rep generation quality. B-Rep (Boundary Representation) is what professional CAD tools use: precise mathematical surfaces with exact edges and proper topology. Mesh is triangulated approximation, good enough for visualization and 3D printing, not good enough for engineering.</p>
<p>A year ago, most AI geometry tools produced mesh or produced B-Rep with frequent topology errors. Degenerate faces, gaps between surfaces, self-intersecting geometry. You'd import a STEP file and spend time healing it before you could use it. In 2026, the healing step is needed less often. The B-Rep quality from the better tools is genuinely improved, to the point where simple parts import cleanly and you can select faces, add features, and work with the geometry without fighting it.</p>
<p>This matters because it determines whether AI output can integrate into real CAD workflows or whether it stays a separate dead-end format. Better B-Rep means the AI-generated bracket can become the starting point for a real parametric model in Fusion 360, rather than a reference shape you stare at and then rebuild from scratch.</p>
<p>The progress is real, and I give credit to the teams working on it. It's the kind of thankless infrastructure improvement that makes everything else more useful.</p>
<h2>The real impact areas: not what you'd expect</h2>
<p>If you asked most people what the biggest AI impact on CAD in 2026 has been, they'd probably say text-to-CAD. They'd be wrong.</p>
<p>The biggest impact has been AI-assisted search and documentation. Finding parts in large libraries. Generating initial drawing views from 3D models. Auto-populating BOM data. Suggesting similar designs. Extracting manufacturing parameters from existing models. These are boring tasks that consume enormous amounts of time in enterprise environments, and AI is genuinely good at them.</p>
<p>The second biggest impact has been AI-powered code and script generation for CAD automation. Using AI to write Fusion 360 Python scripts, OpenSCAD programs, SolidWorks macros, and CNC post-processors. This isn't text-to-CAD in the way most people think of it, but it's arguably more useful because it produces parametric, repeatable output that integrates with existing workflows. I covered some of this in the <a href="/posts/ai-in-cad-software">AI in CAD software</a> overview, and it's where I see the most practical value per hour spent.</p>
<p>The third biggest impact is AI assistants that reduce the time spent learning and navigating complex CAD interfaces. CAD software has thousands of commands. An AI that helps you find the right one, explains a workflow, or suggests an approach is a genuine productivity tool, even if it never generates a single face of geometry.</p>
<p>Geometry generation gets the demos. Search, documentation, and navigation get the results. That mismatch between attention and value is the defining characteristic of AI CAD in 2026.</p>
<h2>What's still missing, and what to watch in 2027</h2>
<p>Parametric AI generation is the most-wanted feature and the furthest from shipping commercially. Until AI can produce models with proper feature trees and parametric relationships, the output is throwaway geometry that can't survive a revision cycle. Research is active. Products aren't ready.</p>
<p>DFM-aware generation, simulation-coupled generation, and multi-part assembly generation are all technically plausible and commercially absent. Each requires training data or reasoning capabilities that don't exist in shipping products. <a href="/posts/future-of-cad-ai">Future CAD AI</a> predictions often include all three. The path to solving them is clear. The timeline is not.</p>
<p>For 2027, I'll be watching whether any tool ships usable parametric generation, even for a narrow class of parts. That's the single biggest unlock. I'll be watching whether vendor AI assistants improve enough to change daily workflows for experienced users, not just novices learning the interface. And I'll be watching whether anyone builds a large-scale manufacturing-aware training dataset, because the geometry data exists at scale but the manufacturing context doesn't.</p>
<p>I'll update predictions.txt at the end of the year and do this again. My coffee will be cold. My expectations will be calibrated. And the gap between what was promised and what was delivered will tell the same story it always does: progress is real, but it's slower, messier, and less dramatic than the keynote slides suggest. That's fine. Useful technology doesn't need to be dramatic. It just needs to work.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AI CAD automation: scripts, macros, and LLMs</title>
      <link>https://blog.texocad.ai/posts/ai-cad-automation</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-cad-automation</guid>
      <pubDate>Wed, 25 Mar 2026 00:00:00 GMT</pubDate>
      <description>Before LLMs, CAD automation meant Python scripts, iLogic rules, and macros that broke every update. Now it means LLMs that write those scripts for you. Both approaches have the same failure mode: nobody tests the edge cases.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>automation</category>
      <category>scripting</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI CAD automation in 2026 takes three forms: traditional scripts/macros (Python, iLogic, VBA), LLM-generated scripts (ChatGPT writing Fusion 360 Python or OpenSCAD), and AI-native features (text-to-command, copilots). Traditional automation is most reliable. LLM-generated scripts save time but require validation. AI-native features are limited but improving.</p>
<p>Before LLMs entered the picture, CAD automation meant Python scripts, iLogic rules, VBA macros, and journal files that broke every time the vendor shipped an update. I had a SolidWorks macro that automated hole table placement on drawings. It worked perfectly for about fourteen months. Then the 2024 service pack changed something in the API, the macro threw an error on line 47, and I spent an evening debugging a tool that was supposed to save me time. My wife asked what I was doing. I said "arguing with a spreadsheet" because explaining VBA to a non-programmer is a form of cruelty.</p>
<p>AI CAD automation in 2026 adds a new layer to this old pattern. LLMs can write those scripts for you, often faster than you could write them yourself. But both the old scripts and the AI-generated ones share the same failure mode: nobody tests the edge cases until something breaks in production. The tools have changed. The problem hasn't.</p>
<p>There are now three tiers of CAD automation, and understanding when to use each one is the difference between a workflow that actually works and a workflow that works great in the demo and explodes on the third real part.</p>
<h2>Tier 1: Traditional scripts and macros</h2>
<p>This is the foundation, and it's still the most reliable.</p>
<p>Fusion 360 has a Python API that can create geometry, modify features, export files, and automate repetitive operations. SolidWorks has VBA macros and a .NET API. Inventor has iLogic, which is Visual Basic with built-in access to part parameters. Creo has Pro/TOOLKIT and J-Link. NX has NX Open with Python and .NET bindings. FreeCAD has a Python console that can do almost anything, if you can navigate the documentation.</p>
<p>I've written automation scripts in most of these environments. The pattern is always the same: you identify a repetitive task (placing holes on a grid, generating variants of a part family, exporting multiple formats, populating drawing views), you write a script that does it, you test it until it works on your specific models, and then you hope nothing changes in the next software update.</p>
<p>The advantages of traditional scripting: complete control, deterministic output, repeatable behavior. When a script runs the same way every time, you can trust the results. You can version-control it. You can hand it to a colleague and they get the same output you did.</p>
<p>The disadvantages: writing scripts takes time, CAD APIs are often poorly documented, and maintenance is a real cost. I have a folder of old SolidWorks macros that I'm afraid to open because half of them probably reference API calls that no longer exist. Every CAD vendor treats their API as a second-class citizen compared to the GUI, and it shows.</p>
<p>For anyone starting with <a href="/posts/freecad-ai-macro">FreeCAD automation</a>, the Python integration is genuinely good and the community has built libraries for common operations. It's the most approachable entry point if you're new to CAD scripting.</p>
<h2>Tier 2: LLM-generated scripts</h2>
<p>This is where things got interesting about two years ago, and where most of the actual time savings are happening for me in 2026.</p>
<p>The idea is simple: instead of writing a CAD script from scratch, you ask an LLM to write it. "Write a Fusion 360 Python script that creates a rectangular array of M4 clearance holes on the selected face, 5 columns, 3 rows, 15mm spacing." ChatGPT, Claude, and the other major models can produce working Fusion 360 API scripts for tasks like this. Not always on the first try. Not always without bugs. But close enough that fixing the output is faster than writing the script from scratch.</p>
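<p>For a prompt like the hole-grid one above, the part of the drafted script worth checking by hand is the coordinate math, which is plain Python before any Fusion 360 API call touches it. A minimal sketch of that math, with illustrative names that are not part of the Fusion 360 API:</p>

```python
# Sketch of the coordinate math an LLM-drafted Fusion 360 script would
# feed into the hole-placement calls. Names here are illustrative, not
# the Fusion 360 API; the API wrapper around this is the part the LLM
# actually drafts, and this is the part worth verifying by hand.

def grid_hole_centers(cols, rows, spacing_mm):
    """Return (x, y) centers for a hole grid, centered on the origin."""
    x0 = -(cols - 1) * spacing_mm / 2.0
    y0 = -(rows - 1) * spacing_mm / 2.0
    return [(x0 + c * spacing_mm, y0 + r * spacing_mm)
            for r in range(rows) for c in range(cols)]

centers = grid_hole_centers(cols=5, rows=3, spacing_mm=15.0)
print(len(centers))  # 15 holes for the 5 x 3 prompt
```

<p>Spot-measuring a few of these coordinates against your intent takes seconds, and it's a lot cheaper than discovering a mis-placed hole after the part is machined.</p>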
<p>I've been using this approach for about a year and a half. The practical results: LLM-generated scripts save me roughly 60 to 80 percent of the time I would have spent writing the script manually, for straightforward automation tasks. The remaining 20 to 40 percent is spent testing, debugging, and handling the edge cases the AI didn't anticipate.</p>
<p><a href="/posts/openscad-ai">OpenSCAD</a> is the best target for LLM-generated CAD scripts because the entire model is already code. The language is small, well-documented, and shows up extensively in training data. I've had ChatGPT produce working OpenSCAD scripts on the first attempt for simple parts more often than not. The workflow feels natural because there's no translation between the AI's output and the working file: the script is the model.</p>
<p>For Fusion 360 and SolidWorks, the success rate drops because the APIs are more complex and less well-represented in training data. The LLM will sometimes hallucinate API calls that don't exist, or use deprecated methods from an older version. You need to know enough about the API to catch these errors, which means LLM-generated scripts work best for people who could write the scripts themselves but want to do it faster.</p>
<p>This is the important nuance that the "AI writes code for you" narrative misses. The LLM is a drafting tool for scripts, not an autonomous programmer. If you don't understand CAD scripting well enough to evaluate the output, you'll run code you can't debug, and that's how you end up with a macro that silently places holes in the wrong positions on every tenth part.</p>
<h2>Tier 3: AI-native CAD features</h2>
<p>This is what the vendors are shipping, and it's the most limited tier in terms of what you can actually automate.</p>
<p>Fusion 360's Text to Command concept (still in development) lets you describe operations in natural language: "extrude this face by 10 mm," "chamfer all edges 0.5 mm." SolidWorks 2026's AURA and LEO companions can execute some operations from conversational input. The <a href="/posts/text-to-cad-workflows-and-tools">text-to-CAD workflows and tools</a> post covers the broader set of tools that generate geometry from text.</p>
<p>These are automation, in a sense. You describe an operation, the software executes it. But the scope is narrow. You can automate individual commands, not sequences. You can't say "create a parametric part family with five variants based on this configuration table." You can't say "run my standard export workflow: save as STEP, export drawings as PDF, update the PLM record." The AI-native features handle atomic operations, not workflows.</p>
<p>The <a href="/posts/text-to-cad-api">text-to-CAD API</a> tools like Zoo.dev's API are more useful for workflow automation because they expose geometry generation as an API call you can integrate into a script. Generate a bracket from a text prompt, receive a STEP file, import it into your CAD tool, add features. That's a workflow you can script. The text-to-command approach in Fusion 360 is more interactive, designed for a human at the keyboard, not a script running in the background.</p>
<h2>Reliability comparison</h2>
<p>I've been running all three tiers in parallel for the past year. Here's the honest reliability picture.</p>
<p>Traditional scripts: most reliable once debugged. Failure modes are predictable: API changes break the script, unusual geometry produces unexpected results, and edge cases (empty selections, zero-thickness bodies, failed features) need explicit handling. Once you handle these, the script works every time on every model that fits the expected pattern. Maintenance cost is real but manageable.</p>
<p>LLM-generated scripts: reliable for simple tasks, fragile for complex ones. The AI writes correct code for straightforward operations about 70 percent of the time. For complex multi-step scripts, that drops to maybe 40 percent working on the first attempt. The remaining debugging is usually small: wrong API call, missing error handling, incorrect loop logic. But "usually small" is doing a lot of work in that sentence. Occasionally the AI produces code that runs without errors and generates wrong geometry, which is the worst kind of bug because you don't know it's wrong until someone measures the part.</p>
<p>AI-native features: reliable for their narrow scope but limited in what they can do. Text to Command handles simple operations well. It doesn't handle complex, multi-step automation at all. It's a convenience feature, not an automation platform.</p>
<h2>When to use which approach</h2>
<p>The decision tree I follow:</p>
<p>If the task is a one-time operation (place these specific holes, export this specific model), I use AI-native features or just do it by hand. The setup cost of writing a script isn't justified for something I'll do once.</p>
<p>If the task is a repeatable workflow I'll run on many parts (export routine, drawing generation, parameter-driven variant creation), I write a traditional script. Sometimes I ask an LLM to draft it, then I debug and validate it myself. The script goes into version control and becomes part of my standard toolkit.</p>
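<p>The parameter-driven variant case usually reduces to one generator walking a configuration table. A hedged sketch of that shape (the table, field names, and <code>build_variant</code> stand-in are all illustrative; a real script would call your CAD tool's API in the build step):</p>

```python
# Sketch of the configuration-table pattern for a part family.
# The table and field names are illustrative; in a real script the
# build step would drive Fusion 360, FreeCAD, or SolidWorks through
# its API instead of returning a dict.

FAMILY = [
    # name,     length, width, hole_d
    ("BRKT-S",  40.0,   20.0,  4.5),
    ("BRKT-M",  60.0,   25.0,  5.5),
    ("BRKT-L",  80.0,   30.0,  6.5),
]

def build_variant(name, length, width, hole_d):
    """Stand-in for the CAD build/export step for one variant."""
    if hole_d >= width:
        raise ValueError(f"{name}: hole larger than the part width")
    return {"name": name, "length": length, "width": width, "hole_d": hole_d}

variants = [build_variant(*row) for row in FAMILY]
print([v["name"] for v in variants])  # ['BRKT-S', 'BRKT-M', 'BRKT-L']
```

<p>One table, one generator, one validation rule. Adding a variant is a one-line edit, and the sanity check runs on every row before any geometry gets built.</p>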
<p>If the task is a quick parametric model (a jig, a test fixture, a simple bracket), I use LLM-generated <a href="/posts/openscad-ai">OpenSCAD</a> code. The model is the script, so there's no API to worry about, no integration layer, no GUI automation. Just geometry as code.</p>
<p>If the task is exploring automation possibilities (what could I automate? what would the script look like?), I start by asking an LLM to draft a script. Even if the code doesn't run, it shows me the API calls I'd need, the approach I'd take, and whether the task is even scriptable in the first place. This is one of the most underrated uses of LLMs in CAD: not writing the final script, but sketching the approach.</p>
<h2>The testing gap</h2>
<p>Here's where all three tiers share a common weakness, and it's the thing that trips up most people.</p>
<p>CAD automation scripts need testing on real, varied geometry. Not one model. Many models. Different configurations. Different sizes. Edge cases. Unusual inputs. The script that works perfectly on your test bracket might fail on a bracket with a different number of holes, or with holes on a curved surface, or with a feature tree that has a suppressed operation the script doesn't expect.</p>
<p>LLM-generated scripts make this problem worse because they look correct. The code is well-structured, the variable names are sensible, and the comments explain what's happening. Everything about the output suggests it was written by someone who knew what they were doing. This creates a false sense of confidence. I've caught AI-generated Fusion 360 scripts that iterated over faces in the wrong order, producing correct results on simple geometry and silently wrong results on anything with more than four planar faces.</p>
<p>The fix is boring and old-fashioned: test on multiple real parts. Measure the output. Compare against expected values. Run the script on a part that's slightly different from what you had in mind. Run it on a part that's very different. If it fails, fix it. If it silently produces wrong results, fix the validation logic.</p>
<p>Nobody does this enough. Not with traditional scripts. Not with AI-generated scripts. Not with anything. The testing gap is the most consistent failure mode in CAD automation, across all three tiers, and AI hasn't fixed it. It might have made it worse, because the scripts are easier to generate and therefore easier to trust without verification.</p>
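<p>The boring fix can itself be scripted: sweep the automation's core function over varied inputs, including the degenerate ones, and assert on every case. A sketch, using a trivial stand-in for whatever your script actually computes:</p>

```python
# Sketch of an edge-case sweep for a CAD automation helper.
# hole_count is a stand-in for whatever your script computes; the point
# is that degenerate inputs (zero rows, negative sizes) get asserted on
# explicitly instead of discovered on the tenth production part.

def hole_count(cols, rows):
    if cols < 0 or rows < 0:
        raise ValueError("negative grid size")
    return cols * rows

CASES = [
    ((5, 3), 15),  # the happy path
    ((1, 1), 1),   # single hole
    ((0, 3), 0),   # empty selection: zero columns
    ((7, 0), 0),   # zero rows
]

for args, expected in CASES:
    assert hole_count(*args) == expected, (args, expected)

try:
    hole_count(-1, 2)  # must fail loudly, not place holes somewhere odd
except ValueError:
    pass
else:
    raise AssertionError("negative input was silently accepted")
print("all edge cases pass")
```

<p>Ten minutes writing a sweep like this is the cheapest insurance in CAD automation, whether the function under it was written by you or drafted by an LLM.</p>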
<h2>Building a sustainable automation strategy</h2>
<p>The word "strategy" makes this sound more corporate than it needs to be. What I really mean is: how do you build automation that keeps working?</p>
<p>Start with the repetitive tasks that cost you the most time. For most CAD users, that's export workflows, drawing generation, and parameter-driven part families. These are well-suited to traditional scripts and they pay back the development time quickly.</p>
<p>Use LLMs to draft scripts faster, but validate everything. The <a href="/posts/ai-cad-workflow">AI CAD workflow</a> I described in detail covers where AI fits in the broader design process. For automation specifically, the LLM is a first-draft tool. You are the reviewer, the debugger, and the person responsible when the script places a hole in the wrong location.</p>
<p>Keep scripts in version control. I use git for everything, including my OpenSCAD library and my Fusion 360 scripts. When something breaks after a software update, the diff shows you what changed and the history shows you what used to work.</p>
<p>Document the expected behavior. This is the part I used to skip and now don't. Write a one-paragraph description of what the script does, what inputs it expects, and what output it produces. When you come back to the script six months later, or when a colleague tries to use it, that paragraph saves more time than you'd think.</p>
<p>Test on varied geometry. I said this already. I'm saying it again because it's the single most important thing and the single most frequently skipped thing in CAD automation.</p>
<p>Don't automate everything. Some tasks are faster to do by hand. Some tasks happen rarely enough that the script development time never pays off. Some tasks are too complex to automate reliably. The best CAD automation strategy includes knowing when not to automate. That judgment comes from experience, and it's the one thing no AI can write a script for.</p>
<h2>Where this is heading</h2>
<p>Traditional scripts aren't going away. They're too reliable and too well-understood to replace. What's changing is how fast you can write them, thanks to LLMs, and how many people can write them, because the barrier to entry drops when you can describe what you want in English and get a working first draft.</p>
<p>AI-native features will expand. Text to Command will ship in more tools, handle more operations, and eventually support multi-step sequences. That's the natural trajectory based on what the vendors are showing.</p>
<p>The interesting convergence is scripts generated by AI, validated by humans, and executed inside AI-enabled CAD tools. That's a workflow where the LLM drafts the automation, the engineer reviews and tests it, and the CAD environment provides the AI infrastructure to run it. We're not there yet. We're close enough that I'm planning for it.</p>
<p>Until then, the state of the art in CAD automation is what it's been for twenty years: someone who understands both the CAD software and the scripting environment writes a tool that solves a specific problem. The LLM just made that person faster, not unnecessary.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD for hobby projects: fun, fast, and occasionally wrong</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-for-hobby-projects</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-for-hobby-projects</guid>
      <pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate>
      <description>If you want a custom bracket for your 3D printer, a replacement knob, or a case for your Raspberry Pi and you don&apos;t want to learn Fusion 360 first, text-to-CAD is genuinely useful. Just check the dimensions before you print.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>hobby</category>
      <category>makers</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD is well-suited for hobby projects where tolerances are forgiving, iteration is cheap, and the stakes are low. Custom brackets, enclosures, mounts, adapters, and replacement parts can be generated in seconds and 3D printed. Dimensions may need manual tweaking, but for hobbyists, this is the fastest path from idea to STL.</p>
<p>Text-to-CAD is genuinely useful for hobby projects where the tolerances are forgiving, a reprint costs a dollar in filament, and nobody is going to sue you if the bracket is 0.5mm too wide. I've been using it for exactly this kind of work for the last few months, and I'll say plainly: for custom mounts, enclosures, adapters, and all the little one-off parts that accumulate around a workshop, it's the fastest path from "I need a thing" to holding the thing.</p>
<p>Last week I needed a bracket to hold a cable chain to the side rail of my 3D printer. Not a standard part. Not something you'd find on Thingiverse because the rail profile is specific to my machine and the cable chain is a weird off-brand one I bought in a lot of ten because the price was right and I have poor impulse control. In Fusion 360, I'd spend fifteen minutes measuring, sketching, extruding, adding fillets, second-guessing the screw hole positions, and exporting. With <a href="https://zoo.dev">Zoo.dev</a>, I typed a prompt, got a STEP file, imported it into my slicer, tweaked one screw hole by half a millimeter, and printed it. Total time from idea to installed bracket: maybe twenty minutes, including print time. The bracket is still holding up. The cable chain is still ugly. Everything works.</p>
<p>That's the sweet spot. Low stakes, fast iteration, cheap material, geometry that doesn't need to be perfect.</p>
<h2>What hobby parts work well</h2>
<p>The <a href="/posts/text-to-cad-for-beginners">text-to-CAD for beginners</a> guide covers the general category of parts that these tools handle reliably, but for hobby work specifically, the list is generous because your success criteria are different. In industry, "close enough" gets you a phone call from an angry machinist. In a home workshop, "close enough" gets you a functional part and a small pile of failed prints you'll throw away without guilt.</p>
<p>Parts that work reliably:</p>
<p>Printer mods and upgrades. Spool holders, filament guides, tool holders that clip onto the frame, fan duct adapters, LCD bezels. These are all simple prismatic geometry with generous tolerances. If the spool holder is 1mm wider than you planned, the spool still sits on it. This is the ideal text-to-CAD territory.</p>
<p>Electronics enclosures. Raspberry Pi cases, Arduino housings, ESP32 boxes with ventilation holes and USB cutouts. I've generated a dozen of these. The critical dimension is the board mounting hole pattern, and if you specify those holes with explicit coordinates in the prompt, they usually come out close enough to work with M2.5 standoffs. The <a href="/posts/text-to-cad-for-3d-printing">text-to-CAD for 3D printing</a> guide goes into more detail on enclosure workflows.</p>
<p>Cable management. Clips, guides, channels, mounts. These parts are geometrically simple and functionally forgiving. A cable clip that's 0.5mm too tight still clips. One that's 1mm too loose gets reprinted with adjusted dimensions, and nobody lost sleep over it.</p>
<p>Replacement parts. Broken knobs, missing feet, cracked handles, stripped adjustment wheels. If you can measure the original or the mating feature, you can describe the replacement in a prompt. I replaced a broken adjustment knob on a cheap vise last month. The generated part was close enough on the first try that I just sanded the inside bore slightly for a tighter fit and glued it in place. It's held up fine. The vise itself will probably break somewhere else before that knob does.</p>
<p>Adapters and spacers. Metric-to-imperial adapters, tripod mount plates, speaker stand risers, shelf bracket extensions. Anything that bridges two known dimensions with a simple shape. This is geometry that practically writes its own prompt.</p>
<p>Organizers. Desk trays, tool holders, battery caddies, bit organizers. These are boxes with compartments. The dimensions are usually driven by the things they hold, and if a compartment is 2mm too wide, it still holds the screwdriver. You just get a little rattle.</p>
<h2>The workflow</h2>
<p>The hobby text-to-CAD workflow is simple enough that I've gotten friends with zero CAD experience to follow it successfully on their first try. Here's the real version, including the parts that trip people up.</p>
<p>Write a prompt with dimensions. This is where most failures originate. "Make me a phone stand" produces something vaguely stand-shaped with random proportions. "Phone stand, 80mm wide, 100mm tall, 60mm deep, 8mm thick walls, phone slot 12mm wide angled at 70 degrees from horizontal" produces something you can use. Include every dimension you care about. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> has good examples of prompts that work.</p>
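One way to keep all those dimensions straight across iterations is to generate the prompt from a parameter list instead of re-wording a sentence each time. This is a minimal pure-Python sketch; the parameter names and phrasing are illustrative, not any tool's required prompt format:

```python
# Build a text-to-CAD prompt from explicit parameters so every
# dimension you care about is stated, and iterating means changing
# one number instead of rewriting the sentence.
# The wording and parameter names here are illustrative only.

def phone_stand_prompt(width=80, height=100, depth=60,
                       wall=8, slot_width=12, slot_angle=70):
    return (
        f"Phone stand, {width}mm wide, {height}mm tall, {depth}mm deep, "
        f"{wall}mm thick walls, phone slot {slot_width}mm wide "
        f"angled at {slot_angle} degrees from horizontal"
    )

print(phone_stand_prompt())
# Second iteration after a test fit is a one-argument change:
print(phone_stand_prompt(slot_width=13))
```

The point isn't the helper itself; it's that every dimension you care about shows up in the prompt text, every time.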
<p>Generate and export. Use <a href="https://zoo.dev">Zoo.dev</a> or whichever tool you prefer. Export as STL if you're going straight to a printer. Export as STEP if you want to check dimensions or make modifications in a CAD tool first. For hobby work, STL direct to slicer is usually fine because you'll find out fast enough if something's wrong, and reprinting is cheap.</p>
<p>Check the critical dimensions. Not every dimension. Just the ones that matter for fit. Screw holes, mating surfaces, clearances for existing hardware. If you exported STEP and have Fusion 360 or FreeCAD, measure those features. If you exported STL and have a slicer with measurement tools, check them there. Cura and PrusaSlicer both let you measure distances in the viewport. It's not precision metrology, but it's enough to catch the obvious errors.</p>
<p>Slice and print. Standard slicer settings for PLA or PETG. Nothing special about text-to-CAD parts compared to any other STL. If the walls look thin in the slicer preview, they probably are. Text-to-CAD tools sometimes produce walls thinner than what you asked for, especially on enclosures where inner and outer dimensions both have to be correct and the AI splits the difference badly.</p>
<p>Test fit and iterate. This is where hobby work has an enormous advantage over professional work. If the part doesn't fit, you tweak the prompt or the model and print again. The total cost is fifteen cents of filament and thirty minutes of print time. In a professional setting, a failed part might mean a wasted machining setup, scrapped material, or a blown delivery date. In your workshop, it means you open the slicer again while the first print cools enough to throw in the recycle bin.</p>
<h2>Common gotchas</h2>
<p>Wall thickness is the most frequent problem. I've generated enclosures where I asked for 2mm walls and got somewhere between 1.5mm and 2.5mm around the perimeter. The AI gets the outer dimensions approximately right and the inner cavity approximately right, but those approximations don't always add up to consistent walls. For hobby prints, you can usually live with it. If you need consistent walls, specify both the outer dimensions and the wall thickness in the prompt, and check the model before printing.</p>
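Since outer dimensions, inner dimensions, and wall thickness are redundant (outer = inner + 2 × wall on each axis), it's worth sanity-checking your own numbers before they go in the prompt. A small sketch, assuming a simple rectangular enclosure footprint:

```python
# Sanity-check enclosure dimensions before putting them in a prompt:
# outer = inner + 2 * wall on each axis. If your own numbers don't
# add up, the AI has no chance of producing consistent walls.

def check_walls(outer, inner, wall, tol=0.01):
    """outer/inner are (x, y) footprints in mm; wall is nominal thickness."""
    problems = []
    for axis, o, i in zip("xy", outer, inner):
        actual = (o - i) / 2
        if abs(actual - wall) > tol:
            problems.append(f"{axis}: wall is {actual:.2f}mm, wanted {wall}mm")
    return problems

# 60 x 40 outer with 2mm walls -> inner must be 56 x 36
print(check_walls(outer=(60, 40), inner=(56, 36), wall=2.0))  # []
print(check_walls(outer=(60, 40), inner=(57, 36), wall=2.0))
```

If the list comes back non-empty, fix the prompt, not the model.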
<p>Screw holes are close but not exact. If you need M3 screws to pass through cleanly, ask for 3.4mm or 3.5mm holes to give yourself clearance. The AI might place the hole at 3.0mm or 3.2mm, and by the time you add printer tolerance, a 3.0mm hole in your model becomes 2.8mm on the print and your screw doesn't fit. Oversize the holes in the prompt. You can always ream them out, but it's easier to get it right the first time.</p>
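The arithmetic is simple enough to bake into a habit: the diameter you ask for is the thread diameter, plus the clearance you want on the printed part, plus what your printer eats. A back-of-envelope sketch; the 0.2mm shrink and 0.2mm clearance are the article's rough numbers, not calibrated data for your printer:

```python
# Choose the hole diameter to put in the prompt so the *printed*
# hole clears the screw. Rough numbers from the text: FDM holes come
# out about 0.2mm undersize, and you want about 0.2mm of room over
# the nominal thread diameter. Calibrate for your own printer.

FDM_HOLE_SHRINK = 0.2      # mm the printed hole loses vs the model
PRINTED_CLEARANCE = 0.2    # mm of room you want over the thread

def prompt_hole_diameter(screw_nominal,
                         shrink=FDM_HOLE_SHRINK,
                         clearance=PRINTED_CLEARANCE):
    return screw_nominal + clearance + shrink

print(f"M3 -> ask for {prompt_hole_diameter(3.0):.1f}mm holes")  # 3.4mm
print(f"M4 -> ask for {prompt_hole_diameter(4.0):.1f}mm holes")  # 4.4mm
```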
<p>Fillets and chamfers are hit or miss. Small fillets (0.5mm to 1mm) sometimes get ignored or applied inconsistently. Larger fillets (3mm and above) usually work. For FDM printing, you often want chamfers on bottom edges anyway for bed adhesion, and those are worth adding manually in the slicer or requesting explicitly in the prompt.</p>
<p>Overhangs and printability aren't the AI's concern. Text-to-CAD generates geometry. It doesn't think about print orientation, support material, or overhang angles. A beautifully generated enclosure might have features that require supports in every possible orientation. Think about printability before you generate, and include constraints in the prompt if needed. "Flat bottom, all features accessible from the top" goes a long way.</p>
<h2>Why tolerances matter less for hobby work</h2>
<p>In professional manufacturing, a 0.5mm dimensional error can mean the difference between a part that assembles and a part that doesn't. In hobby work, that same 0.5mm error is usually invisible. The reasons are structural.</p>
<p>Hobby parts typically interface with forgiving things. A bracket screwed to an aluminum extrusion has a slot, not a precision hole. A knob pressed onto a shaft can be sanded or shimmed. An enclosure sitting on a desk doesn't need to seal against water pressure.</p>
<p>FDM printing has its own tolerances that dwarf the AI's inaccuracy. Your printer is accurate to maybe 0.1-0.2mm on a good day, and dimensional variation between prints of the same file can be 0.3mm or more depending on temperature, humidity, and whether your bed is actually level or just level enough. By the time the part comes off the bed, the AI's 0.5mm placement error is lost in the noise of printer variation.</p>
<p>Iteration is effectively free. A <a href="/posts/best-text-to-cad-tools">text-to-CAD tool</a> generates a new model in seconds. A slicer prepares a new print in a minute. Filament costs pennies per part for anything under 50mm in size. The total cost of three failed iterations followed by a successful fourth is maybe a dollar and two hours, most of that being print time where you're doing something else anyway.</p>
<p>This is why text-to-CAD and hobby 3D printing are natural partners. The tool's weaknesses (dimensional approximation, limited precision, simplified geometry) are absorbed almost perfectly by the workflow's strengths (cheap iteration, forgiving tolerances, and a user who is both the designer and the customer).</p>
<h2>Real examples that worked</h2>
<p>I keep a folder of text-to-CAD hobby parts I've actually printed and used. Here are the ones that worked well enough that I didn't need more than one iteration:</p>
<p>A GoPro mount adapter for a headlamp strap. Simple geometry: a plate with the GoPro three-prong pattern on one side and a slot for the strap on the other. Printed in PETG. Survived a full season of evening trail runs. The slot was slightly too wide, so I wrapped one layer of electrical tape around the strap where it sits in the slot. High-precision engineering? No. Functional? Yes.</p>
<p>A battery holder for four 18650 cells, used in a DIY flashlight build. Essentially a box with four cylindrical pockets and spring contact notches at each end. The pocket diameters came out 0.3mm over what I asked for, which turned out to be perfect because the batteries needed room to slide in and out. Accidental success.</p>
<p>Desk cable pass-through grommets. Flat ring with a slot, sized to friction-fit into a 50mm hole. I printed six of these and installed them in my desk. Two of them were snug. Four were slightly loose. I added a wrap of tape to the four loose ones. The total investment in time and material was less than what a bag of rubber grommets would cost at the hardware store, and these match my desk color because I printed them in black PLA.</p>
<p>A replacement foot for a monitor stand. The original rubber foot disintegrated. I measured the recess with calipers, described it in a prompt, generated a cylinder with a shoulder, and printed it in TPU for some flex. Took two tries because the first one was 1mm too tall and the monitor sat crooked. Second print was fine. Cost of the fix: about ten cents and twenty minutes. Cost of a replacement stand: more than I wanted to spend on a monitor I got secondhand anyway.</p>
<h2>When to upgrade to real CAD</h2>
<p>Text-to-CAD for hobby work starts to struggle when:</p>
<p>The part has features that depend on each other. A gearbox housing where bearing bores need to be concentric and shaft distances need to match specific gear mesh requirements. The AI doesn't reason about inter-feature relationships. It places features approximately where they should go based on your description, and "approximately" isn't good enough when gears need to mesh.</p>
<p>You're designing for someone else. If your hobby project turns into a kit or a product, the bar goes up. Other people's printers have different tolerances. Other people's expectations are less forgiving. A bracket that works on your machine might not fit someone else's, especially if the dimensions were approximate to begin with.</p>
<p>You need to modify the design over time. Text-to-CAD gives you geometry, not a parametric model. If you want to change a dimension six months later, you're starting from a new prompt, not opening a file and editing a parameter. For a one-off part, that's fine. For a design you plan to evolve through multiple versions, you want a real feature tree. Learning Fusion 360's personal license or FreeCAD is worth the investment at that point.</p>
<p>The geometry isn't prismatic. Anything organic, sculpted, or involving complex curves is outside what text-to-CAD handles today. If you're designing a custom controller grip, an ergonomic handle, or a curved enclosure, you need a CAD tool that supports freeform surfaces.</p>
<p>For everything else, for the brackets and mounts and clips and adapters and holders that make up ninety percent of hobby 3D printing, text-to-CAD is fast, cheap, and good enough. That last phrase is the honest assessment. Not perfect. Not precise. Good enough. For hobby work, good enough is exactly the right standard.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD for education: teaching CAD with AI</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-for-education</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-for-education</guid>
      <pubDate>Mon, 23 Mar 2026 00:00:00 GMT</pubDate>
      <description>Text-to-CAD could be a great teaching tool. It could also produce a generation of engineers who can&apos;t sketch a rectangle without a prompt. The answer depends on how you use it.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>education</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD has legitimate educational uses: demonstrating geometry concepts, generating reference models for study, and lowering the barrier to 3D thinking. The risk is students skipping foundational skills. Best used as a supplement alongside traditional CAD instruction, not a replacement for learning constraints, sketching, and feature trees.</p>
<p>Text-to-CAD can teach geometry concepts, generate reference models, and lower the barrier to 3D thinking in a classroom. It can also let students skip every foundational skill that makes a CAD model actually useful. I've been watching this tension play out in conversations with two professors I know, one teaching sophomore-level mechanical design at a state university, the other running a continuing education program for machinists moving into design roles. They have opposite opinions about it, and I think they're both right.</p>
<p>The first one, let's call him Dave, demoed <a href="https://zoo.dev">Zoo.dev</a> to a lecture hall of sixty students last semester. He typed "L-bracket, 3mm thick, 40mm legs, two M4 holes per leg" and a model appeared in about fifteen seconds. The room went quiet in that specific way where you can tell half the students are thinking "why am I learning sketch constraints" and the other half are thinking "that's cool but I bet the holes are wrong." Dave told me afterward that the rest of the semester was a fight to get students to care about fully constrained sketches.</p>
<p>The second professor, who I'll call Maria, uses text-to-CAD as a warm-up exercise. Students generate a simple part from a prompt, then rebuild it manually in SolidWorks, then compare the two. Her argument is that the AI output gives students something concrete to analyze before they've learned enough to create it themselves. Like handing someone a finished piece of furniture before teaching them to use the tools. They can see the dovetails before they know how to cut one.</p>
<p>Both approaches use the same technology. The educational outcomes are completely different.</p>
<h2>Where it actually helps</h2>
<p>The biggest barrier in teaching CAD to beginners isn't the software. It's the leap from flat thinking to 3D thinking. Students come into their first CAD course having spent eighteen years in a world of screens, paper, and flat surfaces. Asking them to mentally rotate an object, identify which face to sketch on, and predict what an extrusion will look like requires a spatial reasoning skill that some students have naturally and others need to build through practice.</p>
<p>Text-to-CAD can shorten the early confusion. A student who types "box with a slot through the middle" and sees a 3D model appear has a starting point for understanding what a slot actually looks like in solid geometry. They can rotate it, section it, measure it. They haven't learned how to make it yet, but they've seen it, and seeing it is the first step toward building it.</p>
<p>This is where the <a href="/posts/text-to-cad-for-beginners">text-to-CAD for beginners</a> workflow maps neatly onto educational use. The same low-barrier entry point that works for hobbyists works for freshmen. Type a description, get a model, start asking questions about it.</p>
<p>Concept demonstrations are another genuine use. An instructor explaining fillets can generate ten cylinders with different fillet radii in a minute instead of modeling each one by hand. A lecture on draft angles can include real geometry that students interact with rather than static slides from a textbook published in 2019. The speed of generation makes it practical to show variation and comparison in ways that traditional CAD instruction simply can't match without burning an entire lab session on setup.</p>
<p>Reference models for study are useful too. Students learning to read engineering drawings can generate the 3D part from the description on the drawing, then compare it to the drawing to check their spatial understanding. It's a quiz tool. Generate, compare, iterate. Faster than an instructor modeling each part live, and it frees class time for the conversations that actually need a human in the room.</p>
<h2>Where it becomes a problem</h2>
<p>The risk isn't theoretical. I've already seen it.</p>
<p>A student in Dave's class turned in a project that was clearly text-to-CAD output imported into SolidWorks and submitted as their own modeling work. The giveaway wasn't the geometry, which was passable, but the feature tree. Or rather, the absence of one. The part had been imported as a dumb solid. No sketch history, no parametric relationships, no constraints. Just a block of geometry sitting in the feature manager like a rock someone dropped on a desk.</p>
<p>Dave caught it because he teaches SolidWorks and knows what a native feature tree looks like versus an import. Not every instructor would catch it, and the detection problem is only going to get harder. Tools like <a href="https://github.com/er-fo/CADAgent">CADAgent</a> generate parts with native Fusion 360 feature history, which means the output has a real timeline with sketches and extrusions. It still doesn't look quite like student work to an experienced eye, but the gap is closing.</p>
<p>The deeper educational problem isn't plagiarism. It's skill formation. CAD instruction isn't really about producing geometry. It's about learning to think in constraints, tolerances, manufacturing intent, design intent, and feature relationships. A student who can produce a bracket from a text prompt but can't explain why the sketch is fully constrained, why the holes are positioned relative to a datum, or why the fillet radius matters for moldability has missed the entire point of the course.</p>
<p>The <a href="/posts/how-text-to-cad-works">how text-to-CAD works</a> explanation makes this clear from the technical side: these tools predict geometry from patterns in training data, not from engineering reasoning. The AI doesn't know why a wall thickness is 2mm. It doesn't know that a draft angle exists because the part needs to come out of a mold. It doesn't understand that two holes need to be on the same bolt pattern because they mate with a standard component. All of that understanding is what CAD education is supposed to build, and text-to-CAD bypasses it entirely.</p>
<h2>The prompt-dependent thinking trap</h2>
<p>There's a subtler risk that I haven't seen discussed much. Students who learn with text-to-CAD develop prompt-dependent thinking. They start reasoning about geometry in terms of what they can describe in a sentence rather than what they can construct in a feature tree.</p>
<p>This sounds similar, but it's not. A sentence is a linear description. A feature tree is a dependency graph. "Box with a hole in the center" is a prompt. A feature tree for the same part is: sketch a rectangle, constrain it to origin, extrude, sketch a circle on the top face, constrain it concentric to the rectangle, cut-extrude through all. The prompt describes the result. The feature tree describes the process and the relationships.</p>
<p>Engineering thinking lives in the process and the relationships. When a dimension changes, which features update? When a face moves, what breaks? When a client calls at 4pm on Friday and says "make it 5mm wider," can you change one dimension and have the model update, or do you rebuild from scratch? That reasoning requires understanding feature trees, constraints, and parametric relationships. Text-to-CAD teaches none of it.</p>
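The dependency-graph idea can be shown in a few lines without any CAD software at all. This is a toy pure-Python sketch, not a CAD API: the driving parameters are plain fields, and the derived feature recomputes from them the way a constrained sketch would:

```python
# A toy illustration of the prompt-vs-feature-tree difference:
# every derived feature is a function of the driving parameters,
# so "make it 5mm wider" is a one-line change and everything
# downstream updates. Pure-Python sketch, not a CAD API.

from dataclasses import dataclass

@dataclass
class BoxWithHole:
    width: float = 40.0
    depth: float = 30.0
    height: float = 10.0
    hole_diameter: float = 6.0

    @property
    def hole_center(self):
        # Constraint: the hole stays concentric with the rectangle,
        # the way a concentric constraint keeps it in a real sketch.
        return (self.width / 2, self.depth / 2)

part = BoxWithHole()
print(part.hole_center)   # (20.0, 15.0)

part.width += 5           # "make it 5mm wider"
print(part.hole_center)   # (22.5, 15.0) -- the hole followed
```

A prompt-generated dumb solid has no equivalent: the hole's position is baked-in coordinates, and widening the box leaves the hole where it was.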
<h2>Where it fits in a curriculum</h2>
<p>The most sensible use I've seen is Maria's approach: use text-to-CAD as a foil, not a crutch.</p>
<p>Generate a part. Then rebuild it. Compare the two. Ask students to identify what's different, what's missing, and what the AI got wrong. This turns text-to-CAD into a teaching tool that actually reinforces fundamentals rather than replacing them. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the full range of tool capabilities and limitations, which is itself useful curriculum material.</p>
<p>Some specific ways it works in practice:</p>
<p>Geometry literacy exercises. Generate five variations of a bracket with different dimensions. Students measure each one, identify the differences, and sketch the geometry manually with proper constraints. The AI provides the visual reference. The student provides the engineering understanding.</p>
<p>Error analysis assignments. Give students a text-to-CAD output and ask them to find what's wrong with it. Dimensions that don't match the prompt. Features that aren't properly positioned. Geometry that can't be manufactured. This is a real skill, because evaluating AI output critically is something they'll need in industry, and it reinforces dimensional awareness and manufacturing knowledge at the same time.</p>
<p>Reverse engineering practice. Students receive a generated model and have to produce a fully dimensioned engineering drawing from it. The AI gives them the 3D reference. The student still needs to identify critical dimensions, choose datum schemes, and apply GD&T. The geometry comes for free. The engineering doesn't.</p>
<p>Design intent comparison. Generate the same part via text-to-CAD and have each student model it manually. Compare the feature trees. Discuss why the student's version is editable and the AI version isn't. This is a lesson about parametric modeling that sticks because the student can see the difference rather than just being told about it.</p>
<h2>What instructors should know</h2>
<p>If you teach CAD and you're wondering how to handle text-to-CAD in your courses, here's my honest take, informed by too many conversations about this over mediocre faculty lounge coffee.</p>
<p>You can't ban it. Students have access to <a href="https://zoo.dev">Zoo.dev</a> for free, they can run LLMs against OpenSCAD on their laptops, and the tools are only going to get more accessible. Banning text-to-CAD in a CAD course is like banning calculators in a math course. You can try, but the energy is better spent teaching students when to use the tool and when the tool is useless.</p>
<p>You should teach evaluation. The most important skill a student can develop regarding AI-generated CAD is the ability to look at the output and identify what's wrong, what's missing, and what can't be manufactured. That skill requires all the fundamentals you were going to teach anyway. Text-to-CAD doesn't eliminate the need for CAD knowledge. It changes where the knowledge gets applied: from creation to evaluation.</p>
<p>Assessment needs to change. If your exam asks students to produce a part and the AI can produce that part from a prompt, your exam is testing the wrong thing. Test feature tree construction. Test the ability to modify a model when requirements change. Test constraint reasoning. Test GD&T application. Test manufacturing awareness. These are the things text-to-CAD can't do, and they're the things that matter in engineering practice.</p>
<p>The <a href="/posts/should-i-learn-cad-if-ai">should you still learn CAD</a> question is going to come up in every class from now on. The answer is yes, emphatically, but the reasons need to be articulated in terms students find convincing. "Because I said so" works for about thirty seconds. "Because the AI doesn't know why this tolerance matters and you need to" works for a career.</p>
<h2>The real lesson</h2>
<p>I keep coming back to Maria's framing. She told me: "I don't care if students use text-to-CAD. I care if they understand what it gave them." That's the right standard. A student who can generate a bracket and then explain every dimension, constraint, and manufacturing consideration in that bracket understands CAD. A student who can generate a bracket and can't explain any of those things has a geometry printer, not an education.</p>
<p>Text-to-CAD in education is a mirror. It reflects whatever the instructor puts in front of it. Used carelessly, it lets students skip the hard parts. Used deliberately, it gives students a faster path to the hard parts, which is where the learning actually happens. The technology is neutral. The pedagogy isn't.</p>
<p>My bet is that in five years, every CAD course will include text-to-CAD the way every writing course now includes spell check. Not as a replacement for the skill. As a tool that makes the skill easier to practice, and easier to fake if nobody's paying attention. The instructors who figure out that distinction first are the ones whose students will actually know what they're doing when they graduate.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD for automotive design</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-for-automotive</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-for-automotive</guid>
      <pubDate>Sun, 22 Mar 2026 00:00:00 GMT</pubDate>
      <description>Automotive design involves surfacing, Class A continuity, packaging constraints, crash analysis, and a review process that makes aerospace look relaxed. Text-to-CAD fits into approximately none of that.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>automotive</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD has no meaningful role in automotive design today. Vehicle design requires Class A surfacing, package integration across subsystems, crash and NVH simulation-ready geometry, and OEM-specific data standards. AI-generated models lack surface continuity, design intent, and integration with existing platform architectures.</p>
<p>Text-to-CAD has no meaningful role in automotive design in 2026, and the reasons start with surface quality and end with a review process so layered it makes defense procurement look casual. I spent the better part of a week thinking about this after a conversation with a friend who does exterior surfacing at a Tier 1 supplier. I asked him what he thought about AI-generated car body panels. He laughed. Not the polite kind. The kind where coffee almost comes out your nose.</p>
<p>His point was simple: the reflection lines on a Class A surface tell you everything. If the reflection is wobbly, the surface is wrong, and the customer (the OEM) will reject the data before anyone even looks at the dimensions. Text-to-CAD tools don't know what a reflection line is. They generate surfaces. Whether those surfaces have the continuity and curvature quality that automotive exterior design demands is a question the AI isn't equipped to answer, or even understand.</p>
<h2>Class A surfacing: the requirement nobody outside automotive appreciates</h2>
<p>If you've never worked in automotive surfacing, Class A might sound like a vague quality label. It's not. Class A refers to surfaces visible to the customer on the finished vehicle, and they have specific mathematical requirements for continuity. Not just tangent continuity (G1), which means surfaces meet without a crease. Curvature continuity (G2), which means the curvature transitions smoothly across surface boundaries. And for premium OEMs, curvature-rate continuity (G3), which means even the rate of curvature change is smooth.</p>
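The G1/G2 distinction can be checked numerically. The sketch below classifies the joint between two cubic Bézier segments by comparing tangent directions and signed-curvature magnitudes at the shared point. It's a pedagogical toy, not surfacing math: real Class A evaluation works on NURBS patches across whole boundaries, not single 2D curve joints, and G3 (curvature-rate continuity) is omitted here:

```python
# Classify the continuity at the joint of two 2D cubic Bezier
# segments p and q (each a list of four (x, y) control points).
# G1 needs matching tangent direction; G2 additionally needs
# matching curvature. A sketch for intuition, not production code.

def d1_end(p):    # first derivative of a cubic Bezier at t=1
    return tuple(3 * (p[3][i] - p[2][i]) for i in range(2))

def d2_end(p):    # second derivative at t=1
    return tuple(6 * (p[1][i] - 2 * p[2][i] + p[3][i]) for i in range(2))

def d1_start(q):  # first derivative at t=0
    return tuple(3 * (q[1][i] - q[0][i]) for i in range(2))

def d2_start(q):  # second derivative at t=0
    return tuple(6 * (q[0][i] - 2 * q[1][i] + q[2][i]) for i in range(2))

def curvature(d1, d2):
    num = abs(d1[0] * d2[1] - d1[1] * d2[0])
    return num / (d1[0] ** 2 + d1[1] ** 2) ** 1.5

def continuity(p, q, tol=1e-9):
    if p[3] != q[0]:
        return "G0 fails (segments don't even meet)"
    t1, t2 = d1_end(p), d1_start(q)
    cross = t1[0] * t2[1] - t1[1] * t2[0]
    dot = t1[0] * t2[0] + t1[1] * t2[1]
    if abs(cross) > tol or dot <= 0:
        return "G0 only (position, no tangent continuity)"
    if abs(curvature(t1, d2_end(p)) - curvature(t2, d2_start(q))) > tol:
        return "G1 (tangent continuous, curvature jumps)"
    return "G2 (curvature continuous)"

a = [(0, 0), (1, 0), (2, 0), (3, 0)]          # straight segment
b_kinked  = [(3, 0), (4, 1), (5, 1), (6, 1)]  # tangent changes direction
b_g1_only = [(3, 0), (4, 0), (5, 1), (6, 1)]  # tangent ok, curvature jumps
b_g2      = [(3, 0), (4, 0), (5, 0), (6, 1)]  # curvature matches too

print(continuity(a, b_kinked))    # G0 only ...
print(continuity(a, b_g1_only))   # G1 ...
print(continuity(a, b_g2))        # G2 ...
```

The G1-only case is exactly the "wobbly reflection" failure: the surfaces meet without a crease, but the curvature jump puts a visible break in the highlight line.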
<p>The reason for this obsession isn't vanity. A car body panel is a large, gently curved surface lit by the sun and reflections from the surrounding environment. Any discontinuity in curvature shows up as a kink, crease, or ripple in the reflection. Customers don't know the math, but they see the imperfection. Their eye reads it as "cheap," and the OEM's design studio treats it as a defect.</p>
<p>Text-to-CAD tools generate surfaces that might be tangent-continuous on a good day. Curvature continuity requires control over the underlying surface parameterization, which means carefully constructing NURBS patches with specific control point distributions and boundary conditions. This is specialized work in dedicated tools (ICEM Surf, Alias, CATIA's Generative Shape Design) with decades of development behind them. The <a href="/posts/text-to-cad-limitations">limitations of text-to-CAD</a> in surface quality aren't a bug to be patched. They're a reflection of how far the technology is from the baseline requirements of automotive exterior design.</p>
<p>I measured the surface quality of a text-to-CAD generated "car hood" in Fusion 360 using curvature analysis. The curvature map looked like a weather radar during a thunderstorm. Patches of high curvature next to patches of low curvature, with transitions that would make a surfacing specialist reach for the Aleve. For a concept render viewed from two meters away, it might pass. For tooling data that will stamp sheet metal into a visible body panel, it's not even in the same conversation.</p>
<h2>Packaging and subsystem integration</h2>
<p>An automotive part doesn't exist in isolation. It exists in a package, which is the three-dimensional space allocation for every component in the vehicle. The package defines where the engine sits, where the wiring harness runs, where the HVAC ducting goes, where the suspension articulates, where the crash structure absorbs energy, and where the interior trim panels live. Every part is designed within its allocated envelope, and that envelope is determined by hundreds of other parts.</p>
<p>Text-to-CAD generates a part from a prompt. That prompt doesn't include the surrounding package. It doesn't know where the neighboring components are. It doesn't know the wiring harness routing. It doesn't know the suspension travel envelope. It doesn't know the crash load path that runs through the adjacent rail. The generated part has no spatial awareness of the vehicle it's supposed to live in.</p>
<p>This is not a fixable problem within the current text-to-CAD paradigm. Even if you described the packaging constraints in the prompt (and you'd need a novel-length prompt to capture a fraction of them), the AI doesn't reason about spatial interference, load path continuity, or assembly sequence constraints. <a href="/posts/ai-cad-for-real-work">AI CAD for real work</a> already struggles with simple assembly context. Automotive packaging is that problem multiplied by a factor of several thousand.</p>
<p>A body structure engineer once told me that every bracket in a vehicle touches five other engineers' work. The NVH team cares about the bracket's stiffness. The crash team cares about its deformation mode. The manufacturing team cares about the weld access. The assembly team cares about the fastener sequence. The packaging team cares about the clearance to the neighboring harness clip. A text-to-CAD bracket that satisfies one of those constraints by accident is not a useful bracket. It's a starting point that still needs five engineers to review it.</p>
<h2>Simulation-ready geometry: crash, NVH, durability</h2>
<p>Automotive design is simulation-driven to a degree that would surprise people outside the industry. A typical vehicle program runs thousands of crash simulations before a single physical prototype is built. NVH (noise, vibration, and harshness) analysis drives the stiffness requirements for every structural joint. Durability analysis predicts fatigue life based on road load data. Thermal analysis determines cooling system capacity. CFD determines aerodynamic performance and underhood airflow.</p>
<p>Every one of these analyses requires geometry that's clean, properly connected, and representative of the manufacturing intent. A crash simulation mesh needs mid-surface representations with correct thickness assignments. An NVH model needs accurate stiffness at every joint. A durability model needs correct material properties and manufacturing-process-induced residual stresses.</p>
<p>Text-to-CAD geometry is not simulation-ready. It's geometry-shaped. The wall thicknesses aren't consistent. The connections between features don't represent manufacturing joints (welds, bolts, adhesive bonds). The material isn't specified in a way that maps to a simulation material card. Getting AI-generated geometry into a state where a CAE engineer could mesh it and trust the results would take longer than modeling the part from scratch with simulation requirements in mind.</p>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> describes realistic workflows for these tools, and none of those workflows include "feed the output into LS-DYNA and trust the crash results."</p>
<h2>The OEM data ecosystem</h2>
<p>Here's something that people outside automotive don't always realize: the OEM dictates the CAD system, data format, naming conventions, and data management process. If you supply to a VW Group brand, you work in CATIA V5 or 3DEXPERIENCE. If you supply to Toyota, you probably work in NX. If you supply to a North American OEM, it depends on the program, but CATIA and NX dominate.</p>
<p>These aren't preferences. They're contractual requirements. The OEM's PLM (Product Lifecycle Management) system ingests native CAD data, associates it with part numbers and revision levels, links it to the bill of materials, and distributes it to manufacturing. The entire supply chain operates on this data pipeline. A STEP file from a text-to-CAD tool is not native CATIA or NX data. It imports as dumb geometry, without feature history, without the metadata structure the PLM expects, and without the naming conventions the OEM's data management system requires.</p>
<p>Even if the geometry were perfect (it's not), integrating it into an OEM's data ecosystem would require converting it to native format, rebuilding the feature tree for editability, adding all the PMI (Product Manufacturing Information) the downstream processes need, and linking it into the PLM with correct part numbering. That's not a workflow improvement. That's a format conversion project.</p>
<h2>Where AI actually helps in automotive (today)</h2>
<p>I'm going to be fair, because the picture isn't entirely bleak if you look at the right places.</p>
<p>Concept sketching and early styling. Some OEM design studios are experimenting with AI-generated images and 3D concepts for early-phase styling exploration. This isn't text-to-CAD in the engineering sense. It's closer to AI-assisted industrial design, generating visual concepts that a clay modeler or digital sculptor then interprets. The AI output never touches the engineering data chain. It influences the design direction the way a napkin sketch might.</p>
<p>Aftermarket parts and accessories. Aftermarket brackets, mounts, and adapters are designed to simpler standards than OEM components. They don't go through the OEM's crash and NVH validation. They don't need to integrate with the vehicle's PLM system. A text-to-CAD phone mount bracket or cup holder adapter is a reasonable use case, within the <a href="/posts/is-text-to-cad-accurate">usual accuracy limitations</a>.</p>
<p>Non-visible, non-structural brackets for prototyping. When a prototype build needs a temporary bracket to hold a sensor or route a harness, and the bracket will be redesigned properly before production, text-to-CAD can generate starting geometry that saves a few minutes of sketching. The bracket gets 3D printed, used for the prototype, and thrown away. The risk is low because the part never sees production.</p>
<p>Internal tooling and fixtures. Assembly fixtures, quality check gauges (rough ones, not GD&#x26;T inspection fixtures), and handling tools for prototype shops. These parts live inside the company, don't ship with the vehicle, and have relaxed requirements compared to product parts.</p>
<h2>Why the OEM pipeline has no slot for generated geometry</h2>
<p>The automotive product development process is a series of gates. Each gate requires specific deliverables at specific maturity levels. At early gates, you need package studies and space claims. At middle gates, you need design-validated geometry with simulation results. At late gates, you need manufacturing-validated geometry with tooling data.</p>
<p>Text-to-CAD output doesn't fit into any of these gates because it doesn't meet the maturity requirements of any stage. It's too crude for the middle and late gates (no simulation readiness, no manufacturing data). And at the early gates, the work is about space claims and system architecture, not generating individual part geometry. The early-phase question is "how much space does this subsystem need?" not "what does this bracket look like?" The bracket doesn't exist yet because the space it occupies hasn't been finalized.</p>
<p>This structural mismatch is why text-to-CAD tools feel alien to automotive engineers. The tools solve a problem (quick part generation) that doesn't map to how automotive design work is organized. The <a href="/posts/ai-in-cad-software">AI in CAD software</a> conversation is happening in every industry, but automotive's specific combination of surface quality requirements, packaging integration, simulation dependency, and OEM data standards makes it one of the hardest industries for text-to-CAD to penetrate.</p>
<h2>The honest assessment</h2>
<p>Text-to-CAD for automotive is a solution looking for a problem that the industry doesn't have, at least not in the form these tools currently provide. Automotive design is not bottlenecked by the speed of generating individual part geometry. It's bottlenecked by the complexity of integrating parts into a vehicle system, validating them through simulation, and managing the data through a multi-year development program with hundreds of suppliers.</p>
<p>If you work in automotive and someone suggests using text-to-CAD for production parts, the answer is the same laugh my surfacing friend gave me. The reflection lines won't lie, the crash model won't forgive, and the OEM's data management system won't accept geometry that arrived as a STEP file with no history, no metadata, and no certification trail.</p>
<p>For prototype fixtures, aftermarket accessories, and concept-phase napkin sketches that happen to be 3D? Sure. Keep your expectations calibrated, check the geometry, and don't confuse a shape with a design. Automotive engineering is what happens between the shape and the design, and that gap is where thousands of engineers spend their careers.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD for aerospace: why it&apos;s not ready yet</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-for-aerospace</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-for-aerospace</guid>
      <pubDate>Sat, 21 Mar 2026 00:00:00 GMT</pubDate>
      <description>Aerospace has certification requirements, material traceability, stress analysis dependencies, and tolerances measured in fractions of a thou. Text-to-CAD doesn&apos;t know any of that exists.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>aerospace</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD is not suitable for aerospace design work in 2026. Aerospace parts require AS9100 traceability, FEA-validated geometry, material-specific design rules, fatigue considerations, and documentation that current AI generation cannot provide. AI-generated geometry has no design intent, no load path awareness, and no certification trail.</p>
<p>Text-to-CAD is not ready for aerospace, and it won't be for a long time. I can say that with confidence because I spent a week trying to use AI-generated geometry in a stress analysis workflow, and the result was the kind of quiet failure where everything looks plausible until you open the FEA results and realize nothing about the part was designed with load paths in mind. The bracket looked like a bracket. The mounting features were in roughly the right places. But the geometry was just shapes arranged in space, with no awareness that this particular arrangement of shapes was supposed to keep something attached to an airframe at 30,000 feet.</p>
<p>I should admit upfront that I'm not an aerospace engineer. My background is product design. But I've done enough contract work adjacent to aerospace (fixtures, ground support equipment, test jigs) that I've sat through AS9100 audits, argued about documentation requirements, and watched the look on a quality engineer's face when someone suggests skipping the design history file. Aerospace is a different animal. The rules are not optional, the tolerances are not suggestions, and the documentation exists because people's lives depend on it.</p>
<h2>Why aerospace is fundamentally different</h2>
<p>In general mechanical design, if a bracket is 0.5mm thicker than it needs to be, nobody cares. You added material. The part is heavier and slightly more expensive. That's usually the end of it.</p>
<p>In aerospace, extra weight is a cost driver measured in fuel burn over the life of the aircraft. A bracket that's 0.5mm thicker than the stress analysis says it needs to be is not a safety margin; it's a weight penalty that someone has to justify, document, and get signed off. And a bracket that's 0.5mm thinner than it needs to be is a potential fatigue failure that could ground a fleet.</p>
<p>Every dimension on an aerospace part exists for a reason that's traceable to a requirement. The material choice is traceable to a specification. The heat treatment is traceable to a process document. The surface finish is traceable to a fatigue analysis. The inspection criteria are traceable to the design intent. This traceability chain is not a nice-to-have. It's the core of AS9100 and the regulatory frameworks (FAA, EASA) that govern flight hardware.</p>
<p>Text-to-CAD tools generate geometry from a text prompt. That geometry has no traceability to anything. No requirements document. No stress analysis. No material specification beyond what the prompt mentioned in passing. No design rationale. No revision history that shows why a fillet radius changed from 3mm to 5mm between rev B and rev C. The <a href="/posts/text-to-cad-limitations">limitations of text-to-CAD</a> that affect all industries become disqualifying in aerospace because the documentation requirements aren't bureaucratic overhead. They're the mechanism that keeps aircraft in the air.</p>
<h2>What happens when you run FEA on AI-generated geometry</h2>
<p>I tried this, because I was curious and slightly masochistic. I prompted Zoo.dev to generate a mounting bracket with specific dimensions and hole patterns, something that might serve as a structural element in a non-primary load path. Exported the STEP file. Imported it into Fusion 360. Set up a simple static stress analysis with fixed supports at the bolt holes and a load on the mounting face.</p>
<p>The analysis ran. Results came back. And here's where the education happened.</p>
<p>The stress concentrations were in places that made no engineering sense for the intended load case. The fillet radii were inconsistent: some transitions had generous radii, others had near-sharp corners, and there was no logic to which got which. The AI had distributed material based on what brackets in its training data looked like, not based on where the stress actually flows. One section had plenty of material where the stress was near zero, and another section was undersized right where the peak stress occurred.</p>
<p>This is not a cosmetic problem. In aerospace structural analysis, the geometry and the FEA are supposed to evolve together. You run the analysis, identify the stress concentrations, modify the geometry to reduce them, run again, iterate. The part shape is driven by the physics. Text-to-CAD reverses this: the shape is driven by training data, and the physics is whatever happens when you test the shape you got. That's not design. That's hope.</p>
<p>A stress engineer I've worked with on fixture designs put it plainly: "I can't sign off on geometry that doesn't have a rationale. If I can't explain why the wall is 3mm thick instead of 2.5mm, I can't certify the analysis." In aerospace, the ability to explain every design decision is not optional. <a href="/posts/ai-cad-for-real-work">AI CAD for real work</a> already struggles with manufacturing realities. Aerospace adds the requirement that every geometric choice be justified, documented, and traceable.</p>
<h2>The documentation gap</h2>
<p>Here's where the conversation gets uncomfortable for anyone excited about AI in aerospace design.</p>
<p>A typical aerospace part has a design history file that includes: the requirements it was designed to meet, the loads and environmental conditions it's designed for, the material specification and the reason for the material choice, the stress analysis showing the part meets the load requirements with appropriate factors of safety, the drawing with GD&#x26;T callouts traceable to the functional requirements, the inspection plan, and the test data if the part was qualified by test.</p>
<p>Text-to-CAD generates a 3D model with none of that. Not "some of it missing." None of it. The geometry arrives as a dumb solid body in a STEP file. No feature tree with design intent. No parameters linked to requirements. No inspection callouts. No surface finish specifications. No material specification beyond whatever the user typed in the prompt. No traceability to loads, no link to an analysis, no documentation of design decisions.</p>
<p>For a prototype jig that never sees flight loads, this is fine. For flight hardware, this is a non-starter. Not because the geometry is necessarily wrong (though it might be), but because there's no evidence that it's right. In aerospace, "it looks fine" is not an acceptable basis for certification. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> describes realistic workflows for these tools, and none of them include "put the output on an airplane."</p>
<h2>Material and fatigue: what the AI doesn't know</h2>
<p>Aerospace design is deeply material-specific. An aluminum 7075-T6 bracket is designed differently from a titanium Ti-6Al-4V bracket, not just because the allowable stresses are different, but because the fatigue behavior, corrosion susceptibility, machinability, and damage tolerance characteristics are different. The material drives the geometry in ways that text-to-CAD tools have no framework to understand.</p>
<p>Fatigue is the big one. Most aerospace structural failures are fatigue failures, not static overloads. A part that passes a static stress check can still fail after 10,000 flight cycles if the stress concentrations are in the wrong places, the surface finish is too rough, the grain direction of the material is wrong, or the residual stresses from manufacturing weren't accounted for.</p>
<p>Text-to-CAD generates geometry with no awareness that fatigue exists. The fillet radii it chooses are cosmetic, not stress-driven. The transitions between features are whatever the training data suggests, not what the S-N curve for the material requires. The surface finish implied by the smooth viewport rendering bears no relationship to the surface finish achievable by the manufacturing process or required by the fatigue life target.</p>
<p>I once watched a fatigue specialist spend forty-five minutes explaining to a junior engineer why a particular fillet radius had to be exactly 5mm and not 4mm. The explanation involved S-N data, stress concentration factors, a safety factor chain, and a service life calculation that traced back to the aircraft's inspection interval. That single radius captured more engineering knowledge than a text-to-CAD prompt could express in a page of text.</p>
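<p>For a feel of why that single millimeter mattered, here's the shape of the math. This is an illustrative sketch, not the specialist's actual analysis: the stress concentration factors and the Basquin exponent below are assumed round numbers, not data for any real material or geometry.</p>

```python
# Illustrative only: how a small change in stress concentration moves
# predicted fatigue life. The Kt values and the Basquin exponent b are
# assumed round numbers, not values from any real S-N dataset.

def life_ratio(kt_new: float, kt_old: float, b: float = -0.1) -> float:
    """Ratio of fatigue lives under Basquin's law, sigma_a = C * N**b.

    Local stress scales with Kt, so N_new / N_old = (Kt_new / Kt_old)**(1/b).
    """
    return (kt_new / kt_old) ** (1.0 / b)

# Suppose shrinking the fillet from 5mm to 4mm raises Kt from 1.5 to 1.65,
# a 10% increase in local stress (an assumed figure).
ratio = life_ratio(1.65, 1.50)
print(f"Predicted life falls to {ratio:.0%} of the original")
```

<p>The point isn't the specific numbers. It's that predicted life scales with local stress to roughly the minus-tenth power, which is why a specialist will argue for forty-five minutes over one millimeter of fillet.</p>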
<h2>Where AI might eventually help in aerospace</h2>
<p>I'm not saying AI has no future in aerospace. I'm saying its current form, generating geometry from text prompts, is the wrong tool for the job as aerospace currently defines the job. But there are spaces where AI-assisted CAD could be useful if the expectations are calibrated correctly.</p>
<p>Non-critical ground support equipment. Test fixtures, handling tools, protective covers, storage racks. These parts don't fly. They're often designed to general engineering standards rather than aerospace-specific requirements. The documentation burden is lighter. The tolerance requirements are looser. A text-to-CAD bracket for a ground handling cart is a different conversation from a text-to-CAD bracket for a wing rib.</p>
<p>Early concept geometry for trade studies. If you need to quickly evaluate three different bracket configurations for weight and envelope before committing to a detailed design, AI-generated starting geometry could save time in the concept phase. The key is that nobody should confuse concept geometry with design geometry. The concept gets you to a conversation. The design gets you to a part.</p>
<p>Tooling and jig design, where the tooling doesn't affect part quality and the failure mode is "we make a new jig," not "the aircraft has a problem." I've used <a href="/posts/text-to-cad-guide">text-to-CAD</a> output as starting geometry for simple fixtures, imported into Fusion 360, rebuilt with proper constraints, and used for non-flight purposes. It saved some sketching time. It didn't save any engineering time.</p>
<p>Automation of documentation templates and drawing formatting. This isn't geometry generation, but it's where AI might help aerospace workflows: automating the boring parts of the documentation process rather than trying to automate the engineering judgment that the documentation captures.</p>
<h2>The certification wall</h2>
<p>The fundamental problem with text-to-CAD in aerospace is the certification wall. Every piece of flight hardware needs to be shown to meet its requirements through a combination of analysis, test, or similarity to previously qualified hardware. All three paths require documentation that traces the part geometry back to the requirements.</p>
<p>AI-generated geometry doesn't trace back to anything. It traces back to a text prompt and a training dataset. Neither of those constitutes a design rationale in any regulatory framework I'm aware of. A Designated Engineering Representative (DER) can't sign off on a part whose design basis is "the AI thought it should look like this." That's not how certification works, and no amount of AI improvement changes the regulatory structure.</p>
<p>The aerospace industry moves slowly on purpose. The conservatism is annoying until you remember that it exists because the consequences of failure are catastrophic and irreversible. The industry adopted CATIA V5 over a decade after it was available. It's still transitioning from CATIA V5 to 3DEXPERIENCE, and that's software from the same vendor. The idea that aerospace will adopt AI-generated geometry for flight hardware in the near term requires ignoring everything about how the industry actually adopts technology.</p>
<h2>The training data problem</h2>
<p>There's another issue that rarely comes up in the text-to-CAD hype: the training data. Current text-to-CAD models are trained on publicly available CAD datasets, things like ABC, Fusion 360 Gallery, and similar repositories. These datasets are dominated by consumer products, simple mechanical parts, and educational examples.</p>
<p>Aerospace geometry is almost entirely proprietary. The bracket designs, structural configurations, material-specific feature choices, and analysis-driven geometry that characterize real aerospace hardware are locked inside OEM and supplier PLM systems. They're export-controlled under ITAR or EAR. They're not in any training dataset, and they can't be without serious legal and security implications.</p>
<p>So even if text-to-CAD tools could handle the documentation and certification challenges, they wouldn't know what aerospace parts look like. The training data doesn't contain the kinds of parts you'd actually need to generate. You'd get consumer-product brackets with aerospace aspirations, which is roughly what I got when I tested it.</p>
<h2>The honest assessment</h2>
<p>Text-to-CAD for aerospace is a mismatch between a tool designed for speed and an industry designed for certainty. Speed is nice. Certainty keeps aircraft in the sky. When those priorities conflict, certainty wins, and it should.</p>
<p>If you work in aerospace and someone asks you about using text-to-CAD for flight hardware, the answer is no. Not "not yet." No. The <a href="/posts/text-to-cad-limitations">limitations</a> aren't just about geometry quality or dimensional accuracy. They're about the absence of everything that surrounds the geometry in an aerospace design process: the analysis, the documentation, the traceability, the certification basis.</p>
<p>For ground support equipment, fixtures, and concept-phase geometry that will be completely redesigned before it matters? Sure, give it a try. Treat the output like a napkin sketch that happens to be 3D. Import it, rebuild it, add everything the AI left out, and do the engineering. The AI generated a shape. Your job is turning it into a part. In aerospace, the distance between those two things is measured in documentation pages, analysis reports, and sign-off authorities. That distance isn't getting shorter anytime soon.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD dimensional accuracy: I measured the output</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-dimensional-accuracy</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-dimensional-accuracy</guid>
      <pubDate>Fri, 20 Mar 2026 00:00:00 GMT</pubDate>
      <description>I prompted five parts with specific dimensions, exported the output, and measured everything in Fusion 360. The results were educational, in the way that watching someone parallel park into a fire hydrant is educational.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>accuracy</category>
      <category>benchmarks</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD dimensional accuracy varies by tool and geometry complexity. In testing, Zoo.dev hit specified dimensions within 5% for simple prismatic parts. Curved geometry and holes were less accurate (10-20% deviation). No tool consistently produced the exact dimensions requested. Complex parts with multiple interacting dimensions had the worst accuracy.</p>
<p>Text-to-CAD dimensional accuracy is inconsistent, tool-dependent, and worse on complex geometry than on simple prismatic shapes. I know this because I measured it. Five test parts, three tools, a STEP file importer, and the inspection tools in Fusion 360 that I normally use for checking supplier models. It took me most of a Saturday afternoon, which I could have spent doing something useful, but the numbers were worth it because nobody else seems to be publishing actual measurements.</p>
<p>The whole exercise started because I was tired of the vague claims. "Close enough for prototyping." "Usually within a few percent." "Good for concept work." These phrases float around text-to-CAD conversations without anyone attaching numbers. I wanted numbers. Specific, reproducible, measured-in-CAD numbers that I could point at when someone asks whether the output dimensions can be trusted. The answer, backed by data from my Saturday of clicking "Measure" repeatedly: sometimes yes, often no, and never with the confidence you'd need for anything beyond rough prototyping.</p>
<h2>The methodology</h2>
<p>I designed five test parts on paper with specific, unambiguous dimensions. No organic shapes. No vague descriptions. Every feature had an exact dimension specified in the prompt. The point was to test the AI's ability to produce what was asked for, not to test how creatively it interprets vague prompts.</p>
<p>The five test parts:</p>
<p>Part 1: A flat rectangular plate, 80mm by 50mm by 5mm, with four 4.2mm holes on a 60mm by 30mm bolt pattern, centered on the plate. Six critical dimensions: length, width, thickness, hole diameter, and the two bolt pattern spacings.</p>
<p>Part 2: A cylindrical standoff, 25mm outer diameter, 12mm inner bore, 30mm tall, with a 2mm chamfer on both ends of the outer edge. Three critical dimensions plus the chamfers.</p>
<p>Part 3: An L-bracket with 3mm wall thickness, 40mm by 40mm legs, with a 15mm radius fillet at the inside corner and two 5mm holes per leg on a 25mm spacing. Nine critical dimensions including the fillet.</p>
<p>Part 4: A U-channel, 60mm long, 30mm wide, 20mm tall, 2mm wall thickness, open top. Four critical dimensions, but the wall thickness uniformity is what I was really watching.</p>
<p>Part 5: A flanged cylinder. A 20mm diameter, 30mm tall cylinder sitting centered on a 50mm by 50mm by 4mm square flange, with a 10mm through-bore. Six critical dimensions and a concentricity relationship between the bore and the outer cylinder.</p>
<p>I ran each prompt through three tools: Zoo.dev, AdamCAD, and CADScribe. Exported the STEP files. Imported every one into Fusion 360. Measured every critical dimension using the Inspect tool. Wrote everything down in a spreadsheet that's still open on my desktop because I haven't had the heart to close it.</p>
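<p>The bookkeeping, for what it's worth, amounts to a few lines of Python. The helper below is my own sketch of the spreadsheet logic, not part of any tool's API: percent deviation of each measured dimension against its specified value, flagged against a tolerance.</p>

```python
# Sketch of the spreadsheet logic: compare dimensions measured with
# Fusion 360's Inspect tool against the values specified in the prompt.

def deviation_report(spec: dict[str, float],
                     measured: dict[str, float],
                     tol_pct: float = 0.5) -> list[str]:
    """One line per dimension: percent error plus a pass/fail verdict."""
    lines = []
    for name, nominal in spec.items():
        actual = measured[name]
        err_pct = abs(actual - nominal) / nominal * 100.0
        verdict = "ok" if err_pct <= tol_pct else "OFF"
        lines.append(f"{name}: spec {nominal}mm, got {actual}mm "
                     f"({err_pct:.1f}% {verdict})")
    return lines

# Part 1 as measured from one STEP export:
spec     = {"length": 80.0, "width": 50.0, "pattern_x": 60.0, "pattern_y": 30.0}
measured = {"length": 80.0, "width": 50.0, "pattern_x": 59.4, "pattern_y": 29.7}
print("\n".join(deviation_report(spec, measured)))
```

<p>Run against Part 1's numbers, the bolt pattern comes back flagged while the plate dimensions pass, which is the whole story of this test in four lines of output.</p>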
<h2>Zoo.dev results</h2>
<p>Zoo.dev performed best on the simple parts and worst on the complex ones, which was roughly what I expected given my <a href="/posts/is-text-to-cad-accurate">earlier accuracy testing</a>.</p>
<p>Part 1 (rectangular plate): Length 80.0mm, width 50.0mm, thickness 5.0mm. Good. Holes measured 4.2mm. Also good. Bolt pattern: 59.4mm by 29.7mm instead of 60mm by 30mm. Close, but a 0.6mm error on a bolt pattern means every hole is shifted off its true position. With generous clearance holes on the mating part you'd probably still get M4 bolts through, but it's not what I asked for. Overall: the gross dimensions are right; the feature positions have sub-millimeter drift.</p>
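<p>You can check the "probably" with a one-line worst-case calculation. The sketch below assumes the mating part has perfectly positioned 4.5mm clearance holes (a standard medium fit for M4) and that each hole in my pattern is off by half the measured pattern error per axis; both are my assumptions, not anything the tool reported.</p>

```python
import math

# Worst-case check: does an M4 bolt still pass through a shifted hole
# pattern? Assumes the mating part's holes are perfectly positioned and
# that each part contributes half its diametral clearance as radial slack.

def passes(bolt_d: float, hole_d: float, mate_hole_d: float,
           shift_x: float, shift_y: float) -> bool:
    radial_shift = math.hypot(shift_x, shift_y)
    slack = (hole_d - bolt_d) / 2 + (mate_hole_d - bolt_d) / 2
    return radial_shift <= slack

# Part 1: pattern came out 59.4 x 29.7 instead of 60 x 30, so each hole
# is off by 0.3mm in x and 0.15mm in y. Mating holes assumed 4.5mm.
print(passes(bolt_d=4.0, hole_d=4.2, mate_hole_d=4.5,
             shift_x=0.3, shift_y=0.15))
```

<p>With those assumptions it clears with about 0.015mm to spare, which is why "probably" is doing real work in that sentence. Give the mating part the same 4.2mm holes and the assembly jams.</p>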
<p>Part 2 (cylindrical standoff): OD 25.0mm, bore 12.0mm, height 30.0mm. The chamfers were 1.8mm instead of 2mm. Honestly, pretty good. Cylinders are Zoo.dev's comfort zone. Simple rotational geometry with clear dimensions.</p>
<p>Part 3 (L-bracket): This is where things got interesting. Wall thickness measured 3.0mm on one leg and 2.8mm on the other. Leg lengths were 40.0mm and 39.4mm. The fillet radius at the inside corner was 13mm instead of 15mm, a 13% error on a feature that would affect stress distribution if this were a structural part. Hole diameters were 4.9mm and 5.1mm instead of 5.0mm. Hole spacing was 24.2mm on one leg and 25.3mm on the other. The symmetry I described in the prompt didn't carry through to the output. Both legs should have been identical. They weren't.</p>
<p>Part 4 (U-channel): External dimensions were close: 60.1mm long, 30.0mm wide, 20.0mm tall. But the wall thickness varied from 1.7mm to 2.3mm around the channel. I specified 2mm uniform. The AI got the outer shell right and let the inner cavity drift. This is the same pattern I noticed in my <a href="/posts/is-text-to-cad-accurate">earlier enclosure testing</a>: external dimensions are more reliable than internal features.</p>
<p>Part 5 (flanged cylinder): Flange was 49.6mm by 49.7mm by 4.0mm. Cylinder OD 19.5mm, height 29mm, bore 10.0mm. The cylinder was 0.5mm too small in diameter and 1mm too short. The bore was centered on the cylinder, but the cylinder was not perfectly centered on the flange: offset by about 0.4mm from center. Concentricity, a geometric relationship, was approximate rather than exact.</p>
<h2>AdamCAD results</h2>
<p>AdamCAD generates OpenSCAD code from prompts, so the output is parametric and in theory should match the specified dimensions exactly, since the code explicitly sets dimension values.</p>
<p>Part 1 (rectangular plate): All box dimensions correct at 80, 50, and 5mm. Holes at 4.2mm. Bolt pattern spacing correct at 60mm by 30mm. AdamCAD's code-generation approach means the top-level dimensions are literally typed into the code. Where it gets less reliable is in features that require geometric calculation rather than direct dimension input.</p>
<p>Part 2 (cylindrical standoff): Dimensions correct. Chamfers were generated as 45-degree cuts at 2mm. Accurate. Simple OpenSCAD geometry.</p>
<p>Part 3 (L-bracket): This is where code generation got tricky. The wall thickness was 3mm as specified. Leg lengths were correct at 40mm. But the fillet was implemented as a cylinder subtracted from the corner rather than a proper fillet, and the radius measured 15mm as specified but the fillet didn't smoothly blend with the leg surfaces. The result was geometrically correct on paper but produced a visible seam in the STEP export. Holes were 5.0mm and spacing was 25.0mm. Dimensionally accurate but geometrically rough.</p>
<p>Part 4 (U-channel): All dimensions correct because it's a simple Boolean operation in OpenSCAD. Wall thickness uniform at 2mm. This is AdamCAD's strength: straightforward parametric geometry where every dimension is a variable in the code.</p>
<p>Part 5 (flanged cylinder): Dimensions correct. Concentricity exact because the code uses the same center coordinate for both features. AdamCAD's code-based approach eliminates the positional drift that Zoo.dev showed.</p>
<p>The trade-off: AdamCAD is dimensionally more accurate for parts that can be described with OpenSCAD primitives and Booleans, but the geometry quality (surface smoothness, fillet quality, edge treatment) is rougher. You get the right numbers in a less refined package.</p>
<h2>CADScribe results</h2>
<p>CADScribe generates Fusion 360 commands, so the output should have native feature history and good geometry quality.</p>
<p>Part 1 (rectangular plate): Length 80.0mm, width 50.0mm, thickness 5.0mm. Holes 4.2mm. Bolt pattern 60.0mm by 30.0mm. Fully accurate. The Fusion 360 sketch constraints held the pattern precisely.</p>
<p>Part 2 (cylindrical standoff): All dimensions correct. Chamfers at 2mm. Clean native Fusion geometry.</p>
<p>Part 3 (L-bracket): Wall thickness 3.0mm. Legs 40.0mm. Fillet radius 15.0mm, properly blended. Holes 5.0mm, spacing 25.0mm. The Fusion 360 native features handle this geometry cleanly. The sketch constraints and feature operations produce exact results.</p>
<p>Part 4 (U-channel): Correct dimensions. Uniform 2mm wall. The Shell feature in Fusion 360 produced clean, consistent walls.</p>
<p>Part 5 (flanged cylinder): All dimensions correct. Concentricity exact. The Fusion 360 construction geometry (center points, axes) ensures features align precisely.</p>
<p>CADScribe's results were the most accurate across all five parts. The catch: CADScribe's accuracy depends on the AI correctly translating the prompt into Fusion 360 operations. When it works, the dimensions are exact because Fusion 360's geometric kernel enforces them. When the translation fails (which happens with more complex prompts), you get an error rather than wrong geometry. It fails loudly rather than silently, which is actually preferable to silent dimensional drift.</p>
<h2>Where dimensions break down</h2>
<p>Across all three tools, I noticed consistent patterns about what kinds of features are most and least accurate.</p>
<p>Most accurate: overall bounding dimensions (length, width, height), hole diameters specified explicitly, features that map directly to a single CAD operation (a hole, an extrusion, a chamfer with a single dimension).</p>
<p>Least accurate: features that reference other features (bolt patterns, hole positions relative to edges), fillet radii (often approximate rather than exact), wall thicknesses on parts generated with Boolean operations rather than Shell features, and concentricity or symmetry relationships between features.</p>
<p>The pattern makes sense if you think about how these tools work. A dimension that maps to a single number in a CAD operation (extrude 5mm, hole diameter 4.2mm) tends to be accurate because the AI just needs to put the right number in the right field. A dimension that requires calculating a position relative to other features (hole center is 10mm from an edge that's at a certain position based on the overall part width) introduces compounding errors. Each reference in the chain can drift slightly, and the errors accumulate.</p>
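<p>A toy model makes the compounding visible. This is not how any of these tools actually computes positions; it's just the arithmetic of chained references.</p>

```python
import math

# Toy model of reference-chain drift: each feature positioned relative
# to another feature inherits that feature's error and adds its own.

def worst_case(per_step_mm: float, chain_length: int) -> float:
    """All drifts stack in the same direction."""
    return per_step_mm * chain_length

def expected_rms(per_step_mm: float, chain_length: int) -> float:
    """Independent random drifts grow with the square root of the chain."""
    return per_step_mm * math.sqrt(chain_length)

# A hole dimensioned straight from the origin carries one step of drift.
# A hole placed off a boss, off an edge, off the overall width carries three.
for n in (1, 2, 3):
    print(f"{n} reference(s): worst case {worst_case(0.2, n):.1f}mm, "
          f"typical {expected_rms(0.2, n):.2f}mm")
```

<p>At 0.2mm of drift per reference, a three-deep chain is already 0.6mm off in the worst case, which happens to be the size of the bolt pattern error I measured on Part 1.</p>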
<p>For a <a href="/posts/is-text-to-cad-accurate">deeper look at this accuracy problem</a>, my earlier testing on Zoo.dev covers the pattern in more detail. This round of testing confirmed the same trends across multiple tools.</p>
<h2>The "close enough" threshold</h2>
<p>Whether text-to-CAD accuracy is acceptable depends entirely on what you're doing with the output.</p>
<p>For concept visualization and design reviews: a part that's within 5% on all dimensions is fine. You're evaluating form and proportion, not building to spec. All three tools are acceptable for this use case.</p>
<p>For FDM prototyping of non-functional parts: within 1-2mm is usually workable. Zoo.dev and CADScribe are reliable here for simple parts. Complex parts need verification.</p>
<p>For FDM prototyping of functional parts (test fits, assembly checks): you need sub-millimeter accuracy on critical interfaces. Only CADScribe consistently delivered this, and only on prompts it successfully translated. Zoo.dev is hit-or-miss. AdamCAD is dimensionally precise but geometrically rough.</p>
<p>For CNC machining or any production process: no tool is consistently accurate enough. Always verify every dimension in your CAD tool before sending anything to manufacturing. The <a href="/posts/ai-cad-for-real-work">limitations of AI-generated geometry for real work</a> go beyond dimensional accuracy, but dimensional accuracy is the first thing a machinist will notice.</p>
<h2>What this means for prompt engineering</h2>
<p>The accuracy data suggests some practical prompt-writing strategies.</p>
<p>State every dimension explicitly. Don't say "small hole." Say "5mm diameter hole." The AI performs best when dimensions are numbers, not adjectives.</p>
<p>Keep feature count low. Each additional feature is another opportunity for positional drift. A plate with two holes is more accurately generated than a plate with eight holes in a complex pattern.</p>
<p>Specify relationships explicitly. Instead of "holes near the corners," say "holes centered 10mm from each edge." Instead of "a fillet at the corner," say "a 5mm radius fillet at the inside corner." The more specific the prompt, the less the AI has to guess, and guessing is where the errors come from.</p>
<p>Verify before using. I know this sounds obvious, but the number of people who generate a part and send it to a printer without measuring a single dimension is higher than it should be. Open the STEP file in your CAD tool. Use the measure tool. Check the critical dimensions. It takes two minutes and it's the difference between a usable prototype and a confusing waste of filament.</p>
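<p>Here's roughly what my verification pass looks like, reduced to a sketch. The dimension names, values, and tolerance are illustrative; the measured numbers come from your CAD tool's measure command:</p>

```python
# Minimal verification pass: compare measured dimensions against the
# nominal values from the prompt. Names and tolerance are illustrative.

def check_dimensions(nominal, measured, tol=0.1):
    """Return a list of (name, nominal, measured, PASS/FAIL) rows."""
    report = []
    for name, nom in nominal.items():
        meas = measured.get(name)
        ok = meas is not None and abs(meas - nom) <= tol
        report.append((name, nom, meas, "PASS" if ok else "FAIL"))
    return report

nominal = {"length": 80.0, "width": 50.0, "hole_dia": 5.0}
measured = {"length": 80.02, "width": 49.7, "hole_dia": 5.01}

for name, nom, meas, status in check_dimensions(nominal, measured):
    print(f"{name:10s} nominal {nom:6.2f}  measured {meas:6.2f}  {status}")
```

<p>Two minutes with the measure tool fills in the <code>measured</code> dict; the script just makes the pass/fail judgment explicit instead of eyeballed.</p>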
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> has more on prompt strategies. But no amount of prompt engineering eliminates the need to verify. The accuracy is good enough to be useful and inconsistent enough to require checking. That's where the technology is. My Saturday of measuring confirmed it.</p>
<h2>The honest verdict</h2>
<p>Text-to-CAD dimensional accuracy is better than I expected on simple parts and worse than the demos imply on anything with feature relationships. Zoo.dev gets you in the ballpark. AdamCAD gets you the exact numbers in rough geometry. CADScribe gets you the exact numbers in clean geometry, when it works. No tool is reliable enough to skip the verification step.</p>
<p>My spreadsheet has fifty-odd measurements in it now, and the story they tell is consistent: text-to-CAD is a first draft, not a specification. Treat the output accordingly. Measure what matters. Fix what's wrong. And keep the <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> in perspective: they're impressive for what they are, and insufficient for what a lot of people want them to be. My Saturday afternoon confirmed both halves of that sentence.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Neural CAD: what it means and who&apos;s building it</title>
      <link>https://blog.texocad.ai/posts/neural-cad</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/neural-cad</guid>
      <pubDate>Thu, 19 Mar 2026 00:00:00 GMT</pubDate>
      <description>Neural CAD is the idea that neural networks can learn to produce CAD operations, not just final geometry. Autodesk is the loudest about it. The research is real. The production readiness is not.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>neural-cad</category>
      <category>research</category>
<content:encoded><![CDATA[<p><strong>Quick answer:</strong> Neural CAD refers to neural network approaches that generate CAD modeling operations (sketches, extrusions, fillets) rather than raw geometry. Autodesk&apos;s Neural CAD research and academic work like DeepCAD train on parametric modeling sequences. The goal is AI that thinks in feature trees, not meshes. Still research-stage in 2026, not production-ready.</p>
<p>Neural CAD means neural networks that generate CAD modeling operations (sketches, extrusions, fillets: the actual construction sequence) rather than predicting final geometry. Autodesk coined the term for their own research effort, but the idea is broader than one company. The goal is AI that thinks in feature trees, not triangulated surfaces. It's the difference between an AI that draws a picture of a bracket and an AI that tells you how to build one step by step in Fusion 360. The research is genuine. I've read the papers. The production readiness, as of April 2026, is not there.</p>
<p>I first heard the phrase "Neural CAD" during Autodesk University 2025, watching the keynote stream on my laptop with a cup of coffee that went cold before the demo was half over. Mike Haley from Autodesk Research used the term while showing a prototype that generated editable geometry inside Fusion 360's canvas from a text prompt. The audience applauded. I wrote "when?" in my notes and underlined it twice. Six months later, the answer is still "not yet," but the underlying research has continued moving, and it's worth understanding what's actually happening beneath the marketing.</p>
<h2>What "neural CAD" means technically</h2>
<p>Traditional text-to-3D AI, the kind that powers tools like DreamFusion or Point-E, generates geometry as a final output. You give it text, it gives you a shape. The shape might be a point cloud, a mesh, a NeRF, or a voxel grid. What it isn't is a construction history. There's no sequence of operations you can replay, edit, or modify. The output is dead geometry. You can look at it, but you can't really work with it.</p>
<p>Neural CAD flips this. Instead of predicting what the final geometry looks like, the network predicts the sequence of CAD operations that produce the geometry. Sketch a rectangle on the XY plane. Constrain it to 80mm by 50mm. Extrude it 10mm in the positive Z direction. Sketch a circle on the top face. Constrain it to 5mm diameter. Cut-extrude through all. That kind of sequence.</p>
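<p>As a sketch of what such a sequence looks like as data (the format here is my own invention, not any tool's internal representation), with a naive evaluator attached to show that the sequence is executable rather than just descriptive:</p>

```python
import math

# Toy representation of the construction sequence described above:
# a plate with a through-hole. The op format is invented for illustration.
sequence = [
    {"op": "sketch_rect", "plane": "XY", "w": 80.0, "h": 50.0},
    {"op": "extrude", "depth": 10.0},
    {"op": "sketch_circle", "face": "top", "dia": 5.0},
    {"op": "cut_extrude", "extent": "through_all"},
]

def evaluate_volume(seq):
    """Naive evaluator: enough to show the sequence executes, not a real kernel."""
    volume = 0.0
    depth = 0.0
    pending = None
    for step in seq:
        if step["op"] == "sketch_rect":
            pending = step["w"] * step["h"]
        elif step["op"] == "extrude":
            depth = step["depth"]
            volume += pending * depth
        elif step["op"] == "sketch_circle":
            pending = math.pi * (step["dia"] / 2) ** 2
        elif step["op"] == "cut_extrude":
            volume -= pending * depth  # through-all: full plate depth
    return volume

print(f"{evaluate_volume(sequence):.1f} mm^3")  # 80*50*10 minus the hole cylinder
```

<p>The point of the exercise: the list is a recipe, not a shape. Replay it, edit a number, re-run it, and you get updated geometry. That's the property meshes lack.</p>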
<p>This is fundamentally harder. Predicting a surface is a regression problem: approximate the right coordinates. Predicting a construction sequence is a structured prediction problem: get the right operations, in the right order, with the right parameters, and have the whole thing compile into valid geometry when executed by a CAD kernel. It's like the difference between predicting what a finished house looks like versus predicting the construction plan that builds it. The plan is more useful, but it's also more constrained and more brittle when things go wrong.</p>
<p>The <a href="/posts/how-text-to-cad-works">how text-to-CAD works</a> post covers the broader technical landscape, but the neural CAD approach represents the most ambitious branch of that tree. It's aiming not just for geometry, but for engineering-grade output.</p>
<h2>The key research</h2>
<p>Three lines of research define the neural CAD space as of 2026. They're related but distinct.</p>
<p>DeepCAD, published in 2021, introduced the dataset and the autoencoder architecture that most subsequent work builds on. The <a href="/posts/deepcad-dataset">DeepCAD dataset</a> contains approximately 178,000 parametric CAD models from ABC (a large CAD dataset derived from Onshape public models), represented as sequences of sketch-and-extrude operations. The DeepCAD model learned to encode these sequences into a latent space and decode them back into valid operation sequences. It proved that neural networks could learn the language of CAD construction, not just the appearance of CAD geometry.</p>
<p>The dataset itself is a major contribution. Before DeepCAD, there was no large-scale collection of CAD models stored as operation sequences with enough variety to train a neural network. Image generation had ImageNet and LAION. Language models had the internet. CAD generation had almost nothing. DeepCAD created the foundation, limited as it is (the models are geometrically simple, mostly prismatic parts with basic features).</p>
<p>The <a href="/posts/text2cad-paper">Text2CAD paper</a>, published as a NeurIPS 2024 spotlight, built on DeepCAD by adding text-conditioning. The team annotated the DeepCAD dataset with approximately 660,000 text descriptions at multiple skill levels (beginner, intermediate, expert) and trained a transformer that takes a text prompt and generates a CAD operation sequence. This was the first end-to-end pipeline from natural language to parametric CAD. The model architecture uses a BERT encoder for text and an autoregressive decoder for CAD tokens, predicting each operation conditioned on the text encoding and the operations generated so far.</p>
<p>Autodesk's Neural CAD research, presented at AU 2025, takes a proprietary approach that builds on similar ideas but with Autodesk's internal data and engineering. The details are thinner because Autodesk hasn't published the architecture with the same openness as academic work. What they've shown publicly is a foundation model trained on CAD geometry that can generate native, editable B-Rep geometry inside Fusion 360. The demo at AU showed text-to-geometry generation producing objects with selectable faces and edges in the Fusion canvas. The <a href="/posts/fusion-360-neural-cad">Fusion 360 Neural CAD</a> post covers what Autodesk has said publicly about this effort.</p>
<h2>Why generating operations is harder than generating meshes</h2>
<p>The difficulty gap between generating a mesh and generating a CAD construction sequence is enormous, and understanding why explains a lot about the current state of neural CAD.</p>
<p>A mesh is forgiving. If a triangle is slightly wrong, the overall shape might still look fine. Meshes degrade gracefully. You can have a mesh that's ugly up close but perfectly usable at a reasonable zoom level. And there's no requirement for internal consistency beyond the faces connecting at edges. This is why text-to-3D tools that generate meshes have improved so rapidly. The problem is inherently tolerant of small errors.</p>
<p>A CAD operation sequence is not forgiving. If a sketch constraint is wrong, the sketch might be invalid. If an extrusion references a face that doesn't exist because a previous operation failed, the whole sequence breaks. If the parameters are slightly off, you don't get a slightly wrong part. You might get an error, or a completely different part, or a kernel crash. CAD sequences are like programs: they either compile or they don't, and a single wrong token can break everything.</p>
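<p>A toy illustration of that brittleness (the operation format is invented for this example): the same operations in the wrong order don't produce a slightly wrong part, they fail to build at all, because each operation has preconditions on what the earlier ones created:</p>

```python
# Why operation sequences are brittle: each op has preconditions, and
# one bad reference breaks the whole build. Op format is invented.

def execute(sequence):
    faces = set()
    for i, step in enumerate(sequence):
        if step["op"] == "extrude":
            faces.update({"top", "bottom", "side"})  # new body exposes faces
        elif step["op"] == "hole":
            # precondition: the referenced face must already exist
            if step["face"] not in faces:
                raise ValueError(f"step {i}: face '{step['face']}' does not exist")
    return "ok"

good = [{"op": "extrude"}, {"op": "hole", "face": "top"}]
bad = [{"op": "hole", "face": "top"}, {"op": "extrude"}]  # same tokens, wrong order

print(execute(good))
try:
    execute(bad)
except ValueError as e:
    print(f"build failed: {e}")  # fails loudly, like a compile error
```

<p>A network predicting such a sequence has to satisfy every precondition in order, which is exactly where the invalidity ratios reported in the research come from.</p>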
<p>This brittleness is why neural CAD models produce a meaningful percentage of invalid outputs. The Text2CAD paper reports invalidity ratios for different generation conditions, and while the valid outputs are impressive, the invalid ones highlight the core challenge. Generating a sequence that looks statistically correct is not the same as generating a sequence that executes correctly in a geometric kernel. Every operation has preconditions, and the network has to satisfy all of them in order.</p>
<p>There's also the combinatorial problem. The number of possible mesh surfaces for a given shape is large but continuous. You can interpolate between nearby meshes smoothly. The number of valid CAD operation sequences for a given shape is discrete and combinatorial. A bracket can be built a hundred different ways: sketch the L-profile and extrude, or sketch a rectangle and extrude twice, or use a shell operation on a box. Each construction yields the same visual result but a different feature tree. The network has to pick not just the right geometry but a valid construction strategy, and there are many valid strategies with no clear way to smoothly interpolate between them.</p>
<h2>The B-Rep advantage</h2>
<p>The reason neural CAD matters, the reason researchers are going through this pain instead of just generating meshes, is B-Rep.</p>
<p>B-Rep (Boundary Representation) is how professional CAD tools represent solid geometry internally. Faces, edges, vertices, and the topological relationships between them. A B-Rep model has real faces you can select, real edges you can fillet, real surfaces you can measure. It's the native language of manufacturing. STEP files are B-Rep. SolidWorks files are B-Rep. Every CNC machine and every mold shop works from B-Rep geometry.</p>
<p>Meshes are approximations. A mesh representation of a cylinder isn't a true cylinder. It's a collection of flat triangles arranged to look like a cylinder. You can't grab a face of a mesh cylinder and extrude it cleanly. You can't add a precise fillet to a mesh edge. You can't measure a mesh dimension and get an exact number. For engineering work, mesh output from an AI is fundamentally limited.</p>
<p>Text-to-CAD tools like <a href="https://zoo.dev">Zoo.dev</a> already generate B-Rep output, but they do it through their own geometric kernel (KittyCAD) rather than through neural operation-sequence prediction. The output is B-Rep, which is great for manufacturing, but it doesn't have an editable feature tree. You get a solid body. You can measure it and machine from it. You can't easily modify it by changing a dimension in a timeline.</p>
<p>Neural CAD is after the full prize: B-Rep geometry with an editable construction history. A model that comes out of a neural CAD system would be indistinguishable from one a human modeled in Fusion 360. You could roll back the timeline, change a sketch dimension, and watch the rest of the model update. That's the goal, and nobody has delivered it at production quality yet.</p>
<h2>Current state of production readiness</h2>
<p>The honest assessment of neural CAD in April 2026:</p>
<p>The research works. The <a href="/posts/text2cad-paper">Text2CAD paper</a> demonstrated that text-to-CAD-operations is possible. DeepCAD demonstrated that neural networks can learn to generate valid construction sequences. Autodesk has demonstrated internally that this can work inside a real CAD environment.</p>
<p>The output quality is not production-grade. The models in the DeepCAD training set are simple. Boxes, cylinders, plates, basic mechanical shapes. Neural CAD models generate geometry within that vocabulary. Ask for a complex injection-molded housing with snap-fits, ribs, and draft angles, and you're far outside the training distribution. The dimensional accuracy is approximate. The feature vocabulary is limited (sketch-and-extrude dominates; fillets, chamfers, patterns, and sweeps are mostly absent from the training data).</p>
<p>No commercial tool offers neural CAD output to end users. Autodesk has announced it. They've demoed it. They haven't shipped it. The <a href="/posts/fusion-360-ai-features">Fusion 360 AI features</a> inventory makes this clear: Neural CAD is on the roadmap, not in the product. The Text2CAD research code is available on GitHub but is non-commercial and research-grade.</p>
<p>The training data problem is unsolved. Image generation models train on billions of images scraped from the internet. The largest CAD operation sequence dataset (DeepCAD) has 178,000 models. That's a gap of roughly four orders of magnitude. Real engineering CAD data is locked inside companies, protected by IP concerns, and stored in proprietary formats that are hard to extract operation sequences from. Until this data gap narrows, neural CAD models will be limited to simple geometry.</p>
<h2>What it means for the future</h2>
<p>Neural CAD is the path to AI-generated models that engineers can actually edit. Not just view, not just manufacture from, but open in Fusion 360 or SolidWorks and modify as if they'd built it themselves. That's the <a href="/posts/parametric-ai-design">parametric AI design</a> goal that the whole field is chasing, and neural CAD is the most plausible technical approach to getting there.</p>
<p>The timeline is unclear. Autodesk has the research team, the data (Fusion 360 has millions of models created by users), and the motivation. But the gap between "works in a demo" and "ships in a product" has historically been measured in years for this kind of technology. Fusion 360's generative design took years to go from research to stable product. Neural CAD is a harder problem.</p>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers what's usable today, and none of the currently shipping tools use neural operation-sequence generation as their primary approach. Zoo.dev generates B-Rep through a different method. CADAgent uses LLMs to issue Fusion 360 API commands, which is closer to neural CAD in spirit but architecturally different. The pure neural-CAD approach, where a single network predicts a complete construction sequence from text, is still in the lab.</p>
<p>My expectation is that when neural CAD does ship commercially, it'll start narrow. Simple parts, limited feature vocabulary, modest dimensional accuracy. The way autocomplete started with word suggestions and gradually became paragraph-level generation. The first production neural CAD tool won't replace a senior CAD engineer. It'll replace the first five minutes of modeling a simple part, and it'll do it with a feature tree instead of a dead body. That alone would be worth the decade of research it took to get there.</p>
<p>For now, neural CAD is the most interesting thing happening in AI-generated geometry research, and the least useful thing for someone who needs a bracket by Thursday. Both of those statements are true, and there's no contradiction. The research is building the foundation. The production tools will come later. I'm watching it closely and modeling my brackets the old way in the meantime.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Parametric AI design: generating editable feature trees</title>
      <link>https://blog.texocad.ai/posts/parametric-ai-design</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/parametric-ai-design</guid>
      <pubDate>Thu, 19 Mar 2026 00:00:00 GMT</pubDate>
      <description>The real prize in AI CAD isn&apos;t geometry. It&apos;s generating a parametric model with a feature tree you can actually edit. Almost nobody can do this yet.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>parametric</category>
      <category>feature-trees</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Parametric AI design means AI-generated CAD models that include editable feature trees (sketches, extrusions, fillets) rather than dead geometry. This is the hardest unsolved problem in text-to-CAD. Zoo.dev generates B-Rep output. Research like Neural CAD and DeepCAD targets parametric sequences. No tool produces fully editable parametric models from text in 2026.</p>
<p>Parametric AI design, the ability to generate a CAD model with an editable feature tree from a text prompt, is the hardest unsolved problem in text-to-CAD. No tool does it fully in 2026. What you get instead is geometry: solid, measurable, sometimes even good, but dead on arrival when it comes to editing. I've been staring at this gap for months now, importing STEP files from text-to-CAD tools into Fusion 360 and watching the feature manager display a single imported body with no history, no sketches, no parameters, no timeline. Just a lump of solid geometry sitting there like a paperweight. A very accurate paperweight, sometimes, but a paperweight nonetheless.</p>
<p>Last Tuesday I generated a bracket with Zoo.dev. Clean B-Rep output. Good dimensions. Exported it as STEP, opened it in Fusion 360. It showed up as an imported body. I needed to move one hole 3mm to the left. In a native Fusion model, that's a double-click on a sketch dimension, type the new number, press enter. In the imported body, that's: create a new sketch on the face, project the existing hole geometry, add a new hole in the right position, suppress or cut away the old hole, and hope the boolean operation doesn't produce weird internal faces. Five minutes of work for what should have been a three-second edit. This is the parametric gap, and it's the reason that AI-generated geometry, no matter how accurate, still can't replace a model built by a human who knows what they're doing.</p>
<h2>What "parametric" actually means in CAD</h2>
<p>The word gets thrown around in marketing material for text-to-CAD tools, and most of the time it's used loosely enough to be misleading. So let me be specific about what a parametric model actually is, because the distinction is everything.</p>
<p>A parametric CAD model is not just geometry. It's geometry plus the recipe that created it. The recipe is the feature tree: an ordered sequence of operations (sketches, extrusions, cuts, fillets, patterns, mirrors, shells) where each operation has named parameters and can reference the results of previous operations. When you change a parameter, the whole tree re-evaluates, and the geometry updates accordingly.</p>
<p>This is why parametric models are useful. You can change a wall thickness and have every downstream feature update. You can modify a bolt pattern spacing and have the holes move. You can swap a material and have the mass properties recalculate. The model isn't static geometry. It's a program that produces geometry, and you can edit the program without starting over.</p>
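<p>A minimal sketch of the idea, with an invented structure: driving parameters feed derived features, and a rebuild re-evaluates everything downstream, the way a feature tree re-evaluates when you change a dimension:</p>

```python
# A parametric model is a program that produces geometry. Minimal sketch
# (structure invented for illustration): change a driving parameter and
# every dependent feature updates on rebuild.

def rebuild(params):
    """Re-evaluate the 'feature tree' from its driving parameters."""
    wall = params["wall_thickness"]
    leg = params["leg_length"]
    return {
        "outer_leg": leg,
        "inner_leg": leg - wall,      # downstream of wall thickness
        "fillet_radius": wall * 0.5,  # linked: fillet scales with wall
        "hole_offset": leg / 2,       # holes stay centered on the leg
    }

model = rebuild({"wall_thickness": 3.0, "leg_length": 40.0})
print(model)

# Change one driving dimension; dependent features update together.
model = rebuild({"wall_thickness": 4.0, "leg_length": 40.0})
print(model)
```

<p>Dead geometry is the dict without the function: you can read the numbers, but there's nothing to re-run when one of them needs to change.</p>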
<p>Non-parametric geometry, which is what all current text-to-CAD tools produce, is the output without the program. You have the shape but not the instructions. If you want to change something, you're reverse-engineering the model's construction after the fact, which ranges from tedious (simple changes) to effectively impossible (complex changes where features depend on each other in ways you can't reconstruct from the final shape).</p>
<p>The <a href="/posts/brep-vs-mesh-ai-generation">B-Rep vs mesh</a> distinction matters here too. B-Rep geometry (what Zoo.dev and STEP files provide) is at least topologically correct: you can select faces, measure edges, and add new features on top of the existing body. Mesh geometry (what most text-to-3D tools produce) is a pile of triangles that can't even support basic CAD operations. B-Rep without history is annoying. Mesh without history is unusable for engineering.</p>
<h2>Why it matters for real work</h2>
<p>I keep a mental list of the reasons I actually open a CAD model after it's been created. Change a dimension because the mating part changed. Add a feature the client asked for after the first review. Scale a family of parts where the proportions stay the same but the sizes differ. Fix a manufacturing issue a machinist flagged. Adapt a design for a different material with different wall thickness requirements. Run a simulation and adjust the geometry based on results.</p>
<p>Every single one of those tasks requires an editable feature tree. Every single one is impossible or painful with imported dead geometry. This is why the <a href="/posts/text-to-cad-limitations">text-to-CAD limitations</a> aren't just about accuracy or feature complexity. The most fundamental limitation is that the output can't be edited the way engineers need to edit things.</p>
<p>Consider a product family. You design a bracket in three sizes: small, medium, large. In parametric CAD, you build one model with driving dimensions, and you create configurations or variants by changing those dimensions. The feature tree rebuilds cleanly for each size. With text-to-CAD, you generate three separate models from three separate prompts. Each one is an independent chunk of geometry with no relationship to the others. If you need to add a rib to all three, you add it three times. If you need to change the wall thickness across the family, you generate three new models and hope they're consistent. There's no parametric link between them, because there's no parametric anything.</p>
<p>This is why experienced CAD users are underwhelmed by text-to-CAD in ways that beginners aren't. Beginners see geometry appear from text and think it's magic. Experienced users see geometry without a feature tree and think it's a dead end. Both reactions are correct for their respective contexts.</p>
<h2>The current state: what exists</h2>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> maps the full tool landscape, but here's the specific parametric situation for each major approach.</p>
<p>Zoo.dev generates B-Rep geometry. Real solid bodies with faces and edges. You can export STEP, import into CAD software, and work with the body. But there's no feature history. The output is a single solid body. This is the highest-quality non-parametric output currently available, and for many use cases (prototyping, 3D printing, quick concept checks), it's sufficient. For engineering workflows that require editing, it's the starting point of a rebuild.</p>
<p>CADAgent and the Fusion 360 MCP bridges generate geometry inside Fusion 360 by issuing API commands, which means the output does have native feature history. If CADAgent creates a sketch, extrudes it, then cuts a hole, those operations appear in Fusion's timeline. You can roll back, edit a sketch dimension, and see the model update. This is the closest thing to parametric AI output currently available.</p>
<p>The catch is that the feature trees produced by AI-driven Fusion 360 API calls are often brittle. The AI doesn't think about feature tree robustness the way an experienced modeler does. It might reference a specific face by index rather than by a stable reference. It might create dependencies that break if you change an upstream feature. It might use an overly complex construction strategy when a simpler one would be more stable. The feature tree exists, but it's the kind of feature tree a beginner produces: technically functional, but fragile under modification.</p>
<p>The <a href="/posts/neural-cad">Neural CAD</a> research from Autodesk and the academic community (DeepCAD, Text2CAD) is explicitly targeting operation-sequence generation. The goal is a neural network that predicts not just geometry but the construction recipe. The <a href="/posts/text2cad-paper">Text2CAD paper</a> demonstrated this for simple sketch-and-extrude sequences. The output is a sequence of CAD operations that can be executed by a kernel. In principle, this is parametric output. In practice, the operation vocabulary is limited (no fillets, no patterns, no sweeps in the training data), the dimensional accuracy is approximate, and the sequences sometimes produce invalid geometry.</p>
<h2>The feature tree prediction problem</h2>
<p>Predicting a feature tree is harder than predicting geometry. This is worth repeating because it explains why parametric AI output is so far behind non-parametric output.</p>
<p>A given piece of geometry can be constructed by many different feature trees. An L-bracket can be built by extruding an L-shaped sketch, or by extruding two rectangles and joining them, or by extruding a larger rectangle and cutting a corner away, or by starting with a block and using shell and cut operations. All produce the same visual result. They produce radically different feature trees, with different editing properties.</p>
<p>The "right" feature tree depends on design intent, which is a concept that lives in the engineer's head, not in the geometry. Design intent is the plan for how the model should behave when things change. If the bracket needs to stay symmetric when the leg length changes, the feature tree should be built around a symmetry plane. If the wall thickness might change independently on each leg, the construction strategy should use separate operations for each wall. If the fillet radii are all supposed to match, they should reference a single variable, not be hard-coded separately.</p>
<p>An AI predicting a construction sequence has no access to design intent because the text prompt doesn't encode it. "L-bracket, 40mm legs, 3mm thick, two holes per leg" describes the final geometry. It says nothing about how the model should behave under modification. Should the holes move if the leg length changes? Should the thickness be linked across both legs? Should the fillet radius scale with the thickness? These decisions define the quality of a feature tree, and they require understanding the part's purpose, its manufacturing context, and the likely future changes. That's engineering knowledge, not geometry knowledge.</p>
<p>This is why generating a feature tree that a senior CAD engineer would accept as production-quality is a fundamentally different problem from generating geometry that looks correct. The geometry problem has a right answer you can check by measuring. The feature tree problem has many valid answers, and which one is "right" depends on context that the AI doesn't have.</p>
<h2>The research approaches</h2>
<p>Two main research directions are attacking the parametric prediction problem.</p>
<p>Sequence-to-sequence models treat the construction history as a token sequence and predict it autoregressively, the same way a language model predicts the next word. <a href="/posts/deepcad-dataset">DeepCAD</a> established this approach. <a href="/posts/text2cad-paper">Text2CAD</a> added text conditioning. The strengths: the output is directly executable by a CAD kernel, the operations are interpretable, and the approach scales with data. The weaknesses: the operation vocabulary is limited by the training data (mostly sketch-and-extrude), the sequences are brittle (one bad operation breaks everything downstream), and the combinatorial explosion of valid construction strategies for a given shape makes training difficult.</p>
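<p>What the decoder actually emits is a flat token stream. Here's a toy serialization of my own invention, loosely in the spirit of DeepCAD-style sequence encodings, to show the shape of the prediction target:</p>

```python
# Toy serialization: flatten CAD operations into the token stream an
# autoregressive decoder would predict. Vocabulary is invented.

def serialize(ops):
    """Flatten an operation list into a decoder-style token sequence."""
    tokens = ["<SOS>"]
    for op in ops:
        tokens.append(op["type"])
        for key in sorted(k for k in op if k != "type"):
            tokens.append(f"{key}={op[key]}")
        tokens.append("<EOP>")  # end-of-operation marker
    tokens.append("<EOS>")
    return tokens

ops = [
    {"type": "SKETCH_RECT", "w": 80, "h": 50},
    {"type": "EXTRUDE", "depth": 10},
]
print(serialize(ops))
```

<p>The model predicts one token at a time, conditioned on the text encoding and everything emitted so far, and a single wrong token anywhere in the stream can invalidate the whole build.</p>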
<p>Constraint-based approaches try to generate the parametric relationships directly, specifying not just operations but the constraints between dimensions, the references between features, and the parametric links that make a model editable. This is closer to what a human modeler does, and it's significantly harder to train because the constraint structure is more complex than the operation sequence. Little published work exists here because the problem is genuinely difficult and the training data (models with explicit constraint annotations) barely exists.</p>
<p>The approach that's likely to work first in production isn't pure neural prediction. It's probably a hybrid: an LLM that understands CAD APIs (like the approach CADAgent uses) combined with specialized models that handle geometric reasoning. The LLM provides the construction strategy. The specialized models provide the geometric calculations. A CAD kernel provides the validation. The feature tree emerges from the interaction between these components, not from a single end-to-end model. It's messier than a clean research architecture, but production tools are almost always messier than research papers suggest.</p>
<h2>What "editable" really means</h2>
<p>One more distinction that marketing tends to blur. There's a spectrum between "dead geometry" and "fully parametric model," and most of the intermediate points are useful even if they're not the final goal.</p>
<p>Dead geometry (imported STEP body, no history): you can add new features on top but can't modify existing ones. This is what Zoo.dev gives you. Useful for manufacturing directly, painful for iterating.</p>
<p>Recorded history (feature tree exists but isn't stable): you can see the operations that created the model, and some of them might be editable, but changing one thing might break three others. This is approximately what AI-driven Fusion 360 API approaches produce today. Useful for simple modifications, unreliable for anything complex.</p>
<p>Stable parametric model (feature tree with robust references and intentional constraints): you can change driving dimensions and the model updates predictably. This is what a skilled human produces in Fusion 360 or SolidWorks. This is the target. Nobody's AI produces this from text yet.</p>
<p>Configurable model (parametric model with defined variants, design tables, and derived configurations): the most advanced form of parametric modeling, where a single model represents an entire family of parts. This is what companies use for product lines. This is so far beyond current AI capabilities that I mention it only to calibrate how far there is to go.</p>
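<p>To make "design table" concrete, here's a toy sketch in plain Python (not any CAD system's API): one base configuration, a table of per-variant overrides, and derived values that update automatically, the way a rebuilt feature tree would.</p>

```python
# Toy illustration of a design table: one parametric model, many variants.
# Part names, dimensions, and the derived formula are all invented.

base = {"length": 100.0, "width": 40.0, "hole_d": 6.0}

design_table = {
    "BRKT-S": {"length": 80.0},
    "BRKT-M": {},                          # pure base configuration
    "BRKT-L": {"length": 140.0, "hole_d": 8.0},
}

def configure(base, overrides):
    model = {**base, **overrides}
    # derived dimensions recompute per variant, like a parametric rebuild
    model["hole_spacing"] = model["length"] - 2 * model["hole_d"]
    return model

variants = {name: configure(base, row) for name, row in design_table.items()}
```

<p>The whole family lives in one definition; changing the base or the formula regenerates every variant. That's the behavior current AI output doesn't come close to producing.</p>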
<h2>The honest assessment</h2>
<p>The <a href="/posts/how-text-to-cad-works">how text-to-CAD works</a> overview explains the current generation pipeline, and parametric output is conspicuously absent from what ships today. The tools that generate the best geometry (Zoo.dev) produce non-parametric B-Rep. The tools that produce feature trees (CADAgent, MCP bridges) are limited by the AI's ability to use a CAD API coherently, which works for simple parts and degrades rapidly for complex ones. The research that targets parametric sequence generation directly (Text2CAD, DeepCAD, Autodesk's Neural CAD) is real but pre-production.</p>
<p>This is the holy grail of text-to-CAD, and calling it a holy grail is accurate in both senses: it's the ultimate goal, and nobody's found it yet. The engineering value of AI-generated parametric models would be enormous. Imagine typing a description and getting a model you can edit as fluidly as one you built yourself. Imagine generating a family of parts by modifying parameters rather than writing new prompts. Imagine handing an AI-generated model to a colleague and having them modify it without rebuilding from scratch.</p>
<p>That future is plausible. The research trajectory points toward it. The data problem (we need vastly more training data with operation sequences and constraint annotations) is solvable in principle if the major CAD companies decide their users' modeling history is worth training on. The kernel integration problem (generated sequences need to execute in real CAD environments) is being solved by MCP bridges and API integrations.</p>
<p>But it isn't here yet, and pretending otherwise is the kind of optimism that leads to disappointment on a deadline. I use text-to-CAD regularly. I treat every output as a starting point. I rebuild the parts I need to edit. And I watch the <a href="/posts/neural-cad">Neural CAD</a> research with genuine interest and zero expectation that it'll change my workflow this year. The gap between generating geometry and generating engineering intent is real, and closing it is the hardest problem in AI-assisted design. Whoever solves it first will have built something genuinely new. I just don't think the gap closes with one paper or one product release. It closes slowly, and the parts in my feature manager still need their constraints built by hand.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AI topology optimization: generative design&apos;s older cousin</title>
      <link>https://blog.texocad.ai/posts/ai-topology-optimization</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-topology-optimization</guid>
      <pubDate>Wed, 18 Mar 2026 00:00:00 GMT</pubDate>
      <description>Topology optimization was doing AI-adjacent geometry generation before text-to-CAD existed. It&apos;s still more useful for structural parts, and it&apos;s still a pain to manufacture.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>topology-optimization</category>
      <category>generative-design</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI topology optimization uses algorithms (not text prompts) to find optimal material distribution for given loads and constraints. Unlike text-to-CAD, it produces structurally validated geometry. Available in Fusion 360, nTopology, Altair Inspire, and ANSYS. Output is often organic and hard to manufacture without additive processes.</p>
<p>AI topology optimization uses algorithms to remove material where it isn't structurally needed, producing parts that are lighter and often stronger than anything a human would sketch freehand. I remember the first time I saw topology-optimized output, maybe six years ago, sitting next to a colleague who'd been running an overnight simulation on a suspension bracket. The result looked like a piece of bone. Not like a bracket. Not like anything you'd find in a McMaster-Carr catalog. He stared at it for a full ten seconds, said "well, that's ugly," and then we spent the next two hours figuring out how to actually make it. That tension between what the algorithm wants and what a machine shop can produce hasn't gone away. It's just gotten more sophisticated on both sides.</p>
<p>Topology optimization has been around longer than most people in the text-to-CAD world seem to realize. The mathematical foundations go back to the late 1980s. The practical CAD tools have existed for over a decade. It was doing AI-adjacent geometry generation before anyone was typing prompts into a text box and expecting brackets to appear. And for structural parts, where loads, stiffnesses, and weight targets actually matter, it's still more useful than anything text-to-CAD can produce. The output just happens to look like something that crawled out of a coral reef.</p>
<h2>How it actually works</h2>
<p>Topology optimization is not prompt-based. You don't describe a part in words. You describe a problem in engineering terms.</p>
<p>The setup looks like this: you define a design space, which is the maximum volume the part is allowed to occupy. You define keep-out zones, regions where geometry must exist (bolt holes, mounting surfaces) or must not exist (clearance for other components). You specify loads and boundary conditions: where forces act, where the part is fixed, and how much load it needs to carry. You set a material. And you set an objective, usually minimum weight for a given stiffness, or maximum stiffness for a given weight.</p>
<p>The solver then iterates. It starts with the full design space filled with material and progressively removes material from the regions that contribute least to structural performance. Each iteration runs a finite element analysis (FEA), computes how much each element contributes to the objective (its sensitivity), and pushes material away from the low-contribution elements, either deleting them outright or, in density-based methods like SIMP, smoothly driving their density toward zero. After hundreds or thousands of iterations, what's left is a geometry that carries the required loads with the minimum amount of material.</p>
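<p>The remove-what-isn't-carrying-load loop is easiest to see in a drastically simplified setting. Below is a toy version in Python: a load shared by parallel springs, where each iteration "analyzes" the structure and deletes the element contributing least stiffness. Real solvers work on continuum FEA meshes with far richer physics, but the skeleton of the iteration is the same.</p>

```python
import numpy as np

# Toy topology optimization: a load carried by 100 parallel springs.
# Each iteration "analyzes" what's left, then removes the element that
# contributes least to stiffness. Numbers and the keep fraction are invented.

rng = np.random.default_rng(42)
k = rng.uniform(1.0, 10.0, size=100)   # element stiffnesses, N/mm
alive = np.ones(k.size, dtype=bool)    # start with the full design space
F = 500.0                              # applied load, N
keep_fraction = 0.4                    # weight target: keep 40% of material

while alive.sum() > keep_fraction * k.size:
    K_total = k[alive].sum()               # "FEA": stiffness of the structure
    compliance = F**2 / (2.0 * K_total)    # strain energy; rises as we remove
    weakest = np.where(alive)[0][np.argmin(k[alive])]
    alive[weakest] = False                 # delete least-contributing element
```

<p>What survives is exactly the stiffest 40% of the elements: the minimum material that still meets the target, which is the entire point of the method.</p>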
<p>This is fundamentally different from <a href="/posts/how-text-to-cad-works">how text-to-CAD works</a>. Text-to-CAD predicts geometry from patterns in training data. Topology optimization computes geometry from physics. The output of a topology optimization run is structurally validated by construction. The output of a text-to-CAD run is structurally unknown unless you run FEA on it afterward. That distinction matters enormously for any part that has to carry a load.</p>
<h2>The generative design wrapper</h2>
<p>When people say "generative design" in 2026, they usually mean topology optimization wrapped in a friendlier interface with a few extra capabilities. Fusion 360's Generative Design Extension, for instance, lets you define the problem graphically in the Fusion environment, run the optimization in the cloud, and receive multiple candidate solutions that you can compare side by side. It adds manufacturing constraints (can this be milled? Can it be cast? Is it suitable for additive?) so the solver avoids producing geometry that's physically impossible to make with your chosen process.</p>
<p>The <a href="/posts/text-to-cad-vs-generative-design">text-to-CAD vs generative design</a> comparison lays out the conceptual differences in detail, but the quick version is: text-to-CAD says "build me what I described," generative design says "show me what the physics wants, given these rules." They're solving different problems. People conflate them because both involve computers producing geometry automatically, but the inputs, outputs, and validation are entirely different.</p>
<h2>The available tools</h2>
<p>This isn't a market with one option. Topology optimization and generative design tools have been shipping commercial products for years.</p>
<p>Fusion 360's Generative Design Extension is the one most Fusion users encounter first. It's cloud-based, which means the computation happens on Autodesk's servers and the results come back to your Fusion environment. The interface is approachable for someone who already knows Fusion. The cost is an add-on subscription, separate from the base Fusion license. For simple structural optimization problems with common manufacturing constraints, it works. I've used it for lightweighting mounting brackets and it produced results I was comfortable sending to a metal printer. The limitation is that it's tied to Fusion's ecosystem and the cloud computation model, which means you're dependent on Autodesk's servers and pricing for every run.</p>
<p>nTopology (nTop) takes a different approach. It's a standalone design platform built specifically for advanced geometry that traditional CAD tools can't handle well. Lattice structures, conformal cooling channels, topology-optimized shapes with smooth transitions. nTop is popular in aerospace and medical device design, where the geometry needs to be organic and the manufacturing method is almost always additive. It's powerful but specialized. If you're making brackets for sheet metal fabrication, nTop is overkill. If you're designing a titanium aerospace fitting for DMLS printing, it's one of the better tools available.</p>
<p>Altair Inspire (formerly solidThinking Inspire) is built for design engineers who want topology optimization without becoming FEA experts. You import or create geometry, define loads and constraints, and run the optimization. The output is a smoothed solid body that you can export and refine. Altair has decades of solver technology behind it (OptiStruct, the underlying solver, is one of the most validated structural optimization engines in the industry). The interface is cleaner than most pure FEA tools, and the workflow is designed to produce results that an engineer can actually use, not just publish.</p>
<p>ANSYS offers topology optimization through its structural simulation suite. If you're already in the ANSYS ecosystem for FEA, adding topology optimization is a natural extension. The solver is proven. The learning curve is steep if you're not already an ANSYS user. Pricing is ANSYS pricing, which is a polite way of saying "call for a quote and brace yourself."</p>
<p>Siemens NX and SolidWorks also have topology optimization capabilities built into their simulation add-ons. The functionality varies. SolidWorks Simulation has a basic topology study that works for simple problems. NX has more mature optimization tools. Neither is the primary selling point of those platforms, but if you're already paying for a SolidWorks or NX seat, the capability is there.</p>
<h2>The manufacturing problem</h2>
<p>Here's where topology optimization has always been honest in a way that marketing sometimes isn't: the output is hard to make.</p>
<p>A topology-optimized bracket looks like a bone structure because bones are nature's topology-optimized structures. Loads flow through curved, organic paths. Material exists only where stress demands it. The result is lightweight and stiff and completely unsuited to a three-axis CNC mill.</p>
<p>For subtractive manufacturing (milling, turning), topology-optimized geometry is often a non-starter. The shapes have undercuts, internal voids, thin curved walls, and freeform surfaces that require five-axis machining at minimum and often can't be machined at all. This is why generative design tools include manufacturing constraints: to prevent the solver from producing geometry that's beautiful on screen and impossible in a shop.</p>
<p>Even with manufacturing constraints, the results often need significant post-processing. You get a smoothed shape that technically respects the constraints, but it still needs draft angles refined, fillet radii checked, and mating surfaces flattened for assembly. A raw topology optimization result is a starting point, not a finished part.</p>
<p>For additive manufacturing (metal printing, SLS, MJF), topology optimization makes a lot more sense. Additive processes can produce the organic shapes the solver wants. Lattice structures, hollow sections, freeform curves, these are all things a metal printer handles without complaint. The marriage of topology optimization and additive manufacturing is where this technology actually delivers on its promise. An aerospace fitting that weighs 40% less than the machined version, with validated structural performance, printed in titanium. That's a real use case, not a slide deck.</p>
<p>For <a href="/posts/ai-in-cad-software">AI in CAD software</a> more broadly, topology optimization represents the mature end of the spectrum: proven solvers, validated results, established manufacturing workflows (at least for additive), and a clear understanding of where it works and where it doesn't.</p>
<h2>Compared to text-to-CAD</h2>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the current capabilities of prompt-based geometry generation, and it's useful to contrast those with what topology optimization offers.</p>
<p>Text-to-CAD produces geometry from a text description. The output has no structural validation. You don't know if the bracket will hold the load until you run FEA on it separately. The geometry is cosmetically correct (it looks like a bracket) but structurally unknown.</p>
<p>Topology optimization produces geometry from structural requirements. The output is structurally validated by construction. You know it carries the specified loads because the solver removed everything that doesn't. The geometry is structurally correct but cosmetically unusual.</p>
<p>Text-to-CAD is fast. Prompt, generate, export. Seconds. Topology optimization is slow. Setup, solve, iterate, smooth, export. Hours to days for complex problems.</p>
<p>Text-to-CAD handles a wide range of part types but with no engineering validation. Topology optimization handles a narrow range of problems (structural, thermal) but with rigorous validation.</p>
<p>The two approaches complement each other more than they compete. If you need a bracket and you don't care about weight or structural optimization, text-to-CAD gets you there faster. If you need a bracket that carries 500N with minimum weight and you can prove it to a certification body, topology optimization is the only option. Different tools for different problems.</p>
<h2>Where each makes sense</h2>
<p>Use topology optimization when:</p>
<p>The part has structural requirements. Defined loads, stiffness targets, weight limits. This is where the tool earns its keep. A suspension bracket, a drone arm, a satellite mounting structure, a medical implant. Any part where "light enough and strong enough" are both binding constraints.</p>
<p>You need to justify the design. Certification bodies, aerospace primes, and medical device regulators want to see that the geometry is structurally validated. Topology optimization produces that evidence as a byproduct of the design process. Text-to-CAD produces geometry with no structural history at all.</p>
<p>The manufacturing method is additive. If you're printing the part in metal or high-performance polymer, you can use the organic geometry that topology optimization naturally produces. The constraint that makes topo-opt hard for CNC disappears when the manufacturing process can build any shape.</p>
<p>Use text-to-CAD when:</p>
<p>The part is cosmetic or lightly loaded. Enclosures, covers, brackets that hold a cable, mounts that support their own weight plus a sensor. Parts where the structural requirements are "don't obviously break" and you can verify that with common sense and maybe a hand test.</p>
<p>Speed matters more than optimization. A <a href="/posts/ai-cad-for-real-work">prototype bracket for real work</a> that needs to exist by Thursday is better generated in thirty seconds and printed overnight than optimized for three days.</p>
<p>You don't know the loads. If you can't quantify the structural requirements, you can't run a meaningful topology optimization. Text-to-CAD at least gives you a part to test, measure, and iterate on.</p>
<h2>The honest assessment</h2>
<p>Topology optimization is the most mature form of "AI" in CAD, even though the people who do it usually don't call it AI. It's algorithmically generated geometry based on physics, validated by simulation, and it's been shipping in real products for years. The results are structurally sound and visually alien. The manufacturing challenge is real but solvable, especially with additive processes.</p>
<p>Text-to-CAD is newer, faster, more accessible, and produces geometry that looks normal but carries no structural guarantees. For the majority of parts that don't need structural optimization, that's fine. For the parts that do, topology optimization is still the answer, and it probably will be for a long time.</p>
<p>I use both, for different things, and I don't confuse them. One gives me shapes that the physics wants. The other gives me shapes that I described. Those are different kinds of useful, and knowing when to reach for which one is half the job.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AI CAD for medical devices: regulatory and design reality</title>
      <link>https://blog.texocad.ai/posts/ai-cad-for-medical-devices</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-cad-for-medical-devices</guid>
      <pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate>
      <description>Medical device design lives under FDA 21 CFR Part 820, ISO 13485, and a documentation burden that would make your feature tree weep. AI-generated geometry has no place in that workflow. Yet.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>medical-devices</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI CAD tools cannot be used for medical device design in regulated contexts. FDA 21 CFR Part 820 and ISO 13485 require full design history files, risk analysis traceability, and validated design processes. AI-generated geometry has no design rationale, no risk traceability, and no validation pathway under current regulatory frameworks.</p>
<p>AI CAD tools cannot be used for regulated medical device design, full stop. The reasons have less to do with geometry quality and more to do with a documentation and traceability framework so thorough it makes most engineers' eyes water. I learned this the slow way, helping a startup friend with some fixture designs for a Class II device assembly line. I thought I understood documentation. I did not understand documentation. Three weeks into the project, sitting at my desk at 11 PM surrounded by printed-out risk analysis spreadsheets and a cup of tea gone completely cold, I realized that medical device design isn't engineering with paperwork. It's paperwork with engineering attached.</p>
<p>That experience reshaped how I think about tools like <a href="/posts/text-to-cad-guide">text-to-CAD</a> in regulated industries. The geometry is the easy part. The hard part is proving that every geometric decision was made for a documented, traceable, risk-assessed reason. AI-generated parts can't prove that because they don't know why they look the way they do.</p>
<h2>The regulatory framework: FDA and ISO in plain terms</h2>
<p>Medical devices sold in the US are regulated under FDA 21 CFR Part 820 (Quality System Regulation). Devices sold in the EU and most other markets need to comply with ISO 13485 (Quality Management Systems for Medical Devices). Both frameworks require something called design controls: a structured process for designing a device, verifying it meets requirements, and validating that it's safe and effective.</p>
<p>The practical consequence is a design history file (DHF) for every device. The DHF captures: what the device is supposed to do (design inputs), how the design achieves those requirements (design outputs), how you verified the design meets the inputs (verification), how you validated the device works for its intended use (validation), and the risk analysis that traces every identified hazard to a design mitigation.</p>
<p>Every dimension, every material choice, every surface finish, every tolerance on a medical device part needs to trace back through this chain. If a wall thickness is 2mm, the DHF should show that 2mm was determined by a combination of structural analysis, biocompatibility requirements, sterilization compatibility, and manufacturing process capability. If you change it to 2.5mm, you document why, update the risk analysis, and re-verify.</p>
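<p>The traceability requirement is mechanical enough to sketch as a data-structure check. This is a toy illustration in Python, not any real QMS tool; the record IDs, fields, and requirements are all invented.</p>

```python
# Toy model of design-history-file traceability: every design output must
# reference at least one design input. Real QMS tools track far more
# (risk records, verification, validation, change history).

design_inputs = {
    "DI-01": "handle withstands 200 N without permanent deformation",
    "DI-02": "survives 1000 autoclave cycles at 134 C",
}

design_outputs = {
    "DO-01": {"desc": "wall thickness 2.0 mm", "traces_to": ["DI-01"]},
    "DO-02": {"desc": "PPSU housing material", "traces_to": ["DI-02"]},
    "DO-03": {"desc": "grip texture pattern",  "traces_to": []},
}

def untraceable(outputs, inputs):
    """Return the design outputs that trace back to no design input."""
    return [oid for oid, out in outputs.items()
            if not any(t in inputs for t in out["traces_to"])]

orphans = untraceable(design_outputs, design_inputs)
```

<p>An AI-generated model enters this structure as a pile of design outputs with empty traces_to lists, which is precisely what an auditor flags.</p>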
<p>Text-to-CAD generates geometry with no design inputs, no risk traceability, and no verification basis. The AI chose the wall thickness because similar parts in the training dataset had similar thicknesses. That's not a design rationale. That's pattern matching. And pattern matching is not a recognized design methodology in any regulatory framework I've encountered.</p>
<h2>The design history file problem</h2>
<p>Here's a concrete example of why AI-generated geometry is incompatible with medical device development.</p>
<p>Suppose you're designing a surgical instrument handle. The grip diameter is 12mm. In a proper design process, you'd document: the ergonomic requirement (derived from user research and human factors analysis), the sterilization compatibility (the material and geometry must survive repeated autoclave cycles at 134°C), the structural requirement (the handle must withstand X Newtons without permanent deformation), and the manufacturing process (injection molded from a specific medical-grade polymer with specific processing parameters).</p>
<p>The 12mm diameter traces to all of those inputs. If someone asks "why is the grip 12mm?" you can point to the DHF and show the chain of reasoning.</p>
<p>Now imagine you generated the handle with text-to-CAD. The grip is 12mm because the AI produced a model with a 12mm grip. Why 12mm? Because the training data had handles with similar dimensions. There's no ergonomic analysis. No sterilization consideration. No structural calculation. No material-specific design rule. The <a href="/posts/text-to-cad-limitations">limitations of AI-generated geometry</a> that are annoying in general mechanical design become regulatory violations in medical devices.</p>
<p>An FDA auditor looking at a design history file that says "geometry was generated by AI based on a text prompt" would have questions. A lot of questions. The kind that result in warning letters and remediation plans, not approval to sell the device.</p>
<h2>Biocompatibility and material constraints</h2>
<p>Medical devices that contact the patient (or the patient's bodily fluids) must be made from biocompatible materials. Biocompatibility isn't a single property. It's a matrix of tests (ISO 10993 series) that evaluates cytotoxicity, sensitization, irritation, systemic toxicity, genotoxicity, implantation effects, and more. The testing requirements depend on the device classification, the contact type (surface, external communicating, or implant), and the contact duration.</p>
<p>The geometry and the material are inseparable in medical device design. You don't design a shape and then pick a material. You design a shape that's possible in the specific material that's been qualified for the application. An implant geometry that works in PEEK might fail in titanium because the stress distribution is different. A fluid pathway geometry designed for silicone might not work in polycarbonate because the weld lines from injection molding compromise the chemical resistance.</p>
<p>Text-to-CAD tools don't select materials. They generate shapes. The prompt might mention "stainless steel" or "medical grade plastic," but the AI doesn't adjust the geometry based on the material's properties, processing constraints, or biocompatibility requirements. It generates the same shape regardless. The connection between material and geometry that's fundamental to medical device design simply doesn't exist in <a href="/posts/ai-cad-for-real-work">AI CAD workflows</a>.</p>
<h2>Sterilization design considerations AI ignores</h2>
<p>If you've never designed parts that need to be sterilized, you might think sterilization is something that happens after the design is done. It's not. Sterilization compatibility is a design input that influences geometry from the beginning.</p>
<p>Autoclave sterilization (steam at 134°C under pressure) means the material must survive repeated thermal cycling without warping, degrading, or losing dimensional stability. The geometry needs to allow steam penetration to all surfaces. Narrow lumens, dead-end cavities, and features that trap air prevent effective sterilization. The part can't have hidden surfaces where bioburden can accumulate.</p>
<p>EtO (ethylene oxide) sterilization requires that the gas can reach all surfaces and that the part can be adequately aerated afterward to remove residual EtO. Geometry affects gas penetration. Thick sections or sealed cavities complicate the process.</p>
<p>Gamma and e-beam irradiation affect material properties. Some polymers yellow, embrittle, or degrade with repeated irradiation. The geometry needs to account for the material property changes that occur over the device's reprocessing life.</p>
<p>None of this information is encoded in text-to-CAD output. The AI generates a shape. Whether that shape can be effectively sterilized, whether it has features that trap contaminants, whether the material (if one was even specified) will survive the sterilization process, all of that is left for the human designer to evaluate and fix. On a Class I device fixture, that evaluation is manageable. On a Class III implant, the gap between "AI-generated shape" and "sterilizable, biocompatible, validated medical device component" is roughly the Grand Canyon.</p>
<h2>Where AI might help in medical device development</h2>
<p>I've been painting a grim picture, and it's accurate for regulated device components. But not everything in a medical device company is a patient-contacting, Class III, FDA-regulated component. There are spaces where AI-generated geometry could save time without triggering regulatory nightmares.</p>
<p>Non-patient-contact jigs and fixtures. Assembly fixtures, test fixtures, and handling tools that are used in the manufacturing process but never touch the patient are subject to lighter requirements. A fixture that holds a device during adhesive curing doesn't need biocompatibility testing. It needs to be dimensionally accurate, hold the parts in the right position, and not contaminate the device. Text-to-CAD can generate starting geometry for fixtures that gets rebuilt properly in Fusion 360 or SolidWorks. I've done this. The output needs the <a href="/posts/is-text-to-cad-accurate">usual dimensional verification</a>, but the regulatory overhead is manageable.</p>
<p>Packaging components. Primary packaging (sterile barrier) has its own requirements, but secondary packaging (boxes, trays, inserts) for shipping and storage has more relaxed design constraints. A foam insert or a shipping tray is a reasonable target for AI-generated starting geometry.</p>
<p>Early-phase concept exploration. Before you're in design controls (which typically starts at the design input phase), there's often a fuzzy concept phase where the team is exploring form factors, user interaction concepts, and rough sizing. AI-generated concept geometry can inform those discussions without becoming part of the design history file. The key is that none of this concept geometry carries forward into the regulated design process without being completely redesigned.</p>
<p>Training and communication models. Visual models for surgeon training, patient communication, or sales demonstrations don't need to meet the same requirements as the actual device. An AI-generated model of a rough device shape used in a training presentation is not a medical device and isn't subject to design controls.</p>
<h2>The fundamental traceability gap</h2>
<p>The core issue with AI CAD in medical devices comes back to one word: traceability. In a regulated design process, every output traces to an input. Every risk has a mitigation. Every mitigation traces to a design feature. Every design feature traces to a verification activity. The DHF is a web of documented connections that lets anyone, an FDA auditor, a quality engineer, a future design team, understand why the device looks the way it does.</p>
<p>AI-generated geometry has no traceability. It has a prompt and an output. The path between them is an opaque neural network. Even if you could somehow extract the "reasoning" behind a generated feature, it would be "this feature appeared because similar features appeared in the training data," which is not a design rationale. It's statistics.</p>
<p>Some people have suggested that you could document the AI generation process itself: "geometry was generated by Zoo.dev using prompt X, verified against requirements Y and Z, and modified to meet specifications A, B, and C." This is theoretically possible, but it creates a strange hybrid where the design history starts with an unjustified geometry and then documents all the changes made to justify it. That's backwards from how design controls are supposed to work. You're supposed to derive the geometry from the requirements, not generate geometry and then check whether it happens to meet the requirements.</p>
<p>A quality manager I worked with during that fixture project put it bluntly: "If you can't tell me why a feature exists, it doesn't belong on a regulated device. I don't care how it was generated." That standard eliminates AI-generated geometry from <a href="/posts/text-to-cad-for-manufacturing">medical device design workflows</a> for any component where the geometry affects safety or performance.</p>
<h2>The honest assessment</h2>
<p>AI CAD tools have no place in regulated medical device design today, and the barrier isn't technical quality. It's regulatory structure. The FDA and ISO 13485 don't care whether your geometry is pretty, fast to generate, or dimensionally close enough. They care whether you can prove that every design decision was made for a documented reason, traced to a requirement, assessed for risk, and verified.</p>
<p>Text-to-CAD can't prove any of that. The technology generates shapes. Medical device design requires justified shapes. The difference is documentation, and documentation is not what these tools were built to produce.</p>
<p>For non-regulated tooling, fixtures, and concept-phase exploration? Use it. Check the output. Treat it like a starting sketch. But keep it away from your design history file, keep it away from patient-contacting components, and for the love of everything, don't let it anywhere near a submission to the FDA. The auditor will not be impressed by how fast you generated the geometry. They'll be very interested in why you can't explain it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AI CAD for consumer electronics enclosure design</title>
      <link>https://blog.texocad.ai/posts/ai-cad-for-consumer-electronics</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-cad-for-consumer-electronics</guid>
      <pubDate>Mon, 16 Mar 2026 00:00:00 GMT</pubDate>
      <description>Consumer electronics enclosures need snap fits, EMI shielding, thermal management, antenna keep-outs, and cosmetic surfaces. AI-generated enclosures understand none of these constraints.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>consumer-electronics</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI CAD tools can generate basic box-shaped enclosures but miss critical consumer electronics requirements: snap-fit geometry, boss placement for screws, EMI shielding features, antenna keep-out zones, thermal paths, cosmetic surface requirements, and IP ratings. Useful for early concept geometry only.</p>
<p>AI CAD tools can generate a box that looks like an electronics enclosure, but it won't be one you could ship. I found this out in the most mundane way possible: I prompted Zoo.dev to generate a "handheld electronics enclosure, 120mm by 70mm by 25mm, with a battery compartment and button cutouts," exported the STEP file, and opened it in Fusion 360. The result looked like an enclosure in the same way that a cardboard box looks like a suitcase. Correct general category. Missing everything that makes it functional. No snap fits. No screw bosses. No ribs. No features for keeping a PCB in position. Just a hollow box with some holes in it, sitting on my screen while my second monitor showed the fifteen-item checklist of things a real enclosure needs before it's ready for tooling.</p>
<p>I've designed maybe thirty enclosures over the years, mostly for small consumer and industrial products. Not Apple-level stuff, but real products that shipped, went through EMC testing, survived drop tests, and occasionally came back from the field with interesting failure modes. That experience has given me a very specific understanding of what an enclosure actually is, and it's not a box with walls.</p>
<h2>Snap fits and mechanical attachment: where the AI goes blank</h2>
<p>A consumer electronics enclosure typically has two halves (or more) that need to attach to each other. The most common method is snap fits: cantilever beams with hooks that deflect during assembly and lock into receiving features on the mating half. Getting snap fits right involves calculating beam length, deflection, material properties (different plastics have different allowable strains), and retention force. The geometry is fussy: the hook angle, the lead-in angle, the beam cross-section, and the clearance in the receiving slot all matter.</p>
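<p>To make that fussiness concrete, here's the first-pass arithmetic a snap fit requires, using the standard constant-cross-section cantilever formulas from the plastics design guides (permissible deflection y = 0.67·εL²/t, deflection force P = wt²Eε/6L). The dimensions and material values below are illustrative assumptions, not taken from any real part:</p>

```python
# First-pass cantilever snap-fit sizing (constant rectangular cross-section).
# Standard design-guide formulas; illustrative numbers only -- verify against
# your resin supplier's snap-fit guide before cutting tool steel.

def snap_fit_check(L, t, w, E, eps_allow, deflection_needed):
    """L, t, w in mm; E in MPa; eps_allow as a strain fraction (e.g. 0.02)."""
    # Permissible tip deflection before the beam exceeds allowable strain
    y_max = 0.67 * eps_allow * L**2 / t
    # Force required to deflect the beam to y_max
    P = (w * t**2 / 6.0) * (E * eps_allow / L)
    return y_max, P, deflection_needed <= y_max

# Example: 10 mm beam, 1.2 mm thick, 4 mm wide, ABS (E ~ 2300 MPa, ~2% allowable
# strain), with a hook that needs 1.0 mm of deflection to clear during assembly
y_max, force_N, ok = snap_fit_check(L=10.0, t=1.2, w=4.0,
                                    E=2300.0, eps_allow=0.02,
                                    deflection_needed=1.0)
print(f"max deflection {y_max:.2f} mm, deflection force {force_N:.1f} N, ok={ok}")
```

<p>Shorten that beam to 8mm and the same hook over-strains it: the check fails. That sensitivity is exactly why a "raised ridge that vaguely suggests a snap fit" is useless as generated geometry.</p>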
<p>Text-to-CAD tools don't generate snap fits. I've tried multiple prompts, multiple tools. The best I got was a raised ridge around the perimeter of the enclosure that vaguely suggested where a snap fit might go, but had none of the actual geometry. No cantilever beam. No hook. No deflection relief. No receiving feature on the mating part. Because text-to-CAD generates single parts with no assembly context, it can't reason about how two halves interact during assembly. The <a href="/posts/text-to-cad-limitations">text-to-CAD limitations</a> around assemblies are well-documented, and snap fits are a perfect example of a feature that only makes sense in an assembly context.</p>
<p>Screw bosses are slightly more present in AI output. I've seen generated enclosures with cylindrical protrusions in the corners that could charitably be called bosses. But they lacked the details that make bosses functional: the correct inner diameter for the intended screw type (a boss for an M3 self-tapping screw has different geometry than one for an M2.5 heat-set insert), gusset ribs to prevent the boss from shearing off, and a wall thickness that provides enough thread engagement without creating sink marks on the cosmetic surface.</p>
<h2>EMI shielding and antenna keep-outs</h2>
<p>Any product with a wireless radio (WiFi, Bluetooth, cellular, NFC) needs an antenna, and that antenna needs a keep-out zone: a region of the enclosure where conductive materials (metal parts, metallized plastic, conductive coatings) must be absent to avoid detuning the antenna. The keep-out zone is defined by the antenna designer based on the frequency, radiation pattern, and required efficiency.</p>
<p>Simultaneously, many products need EMI shielding to pass FCC/CE emissions testing. Shielding features include conductive gaskets, finger springs on mating surfaces, metallized coatings inside the enclosure, and shielding cans over noisy components. These features need to be incorporated into the enclosure geometry: gasket grooves on mating faces, lands for shielding cans, and openings designed to be below the wavelength cutoff for the relevant frequencies.</p>
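<p>The "below the wavelength cutoff" rule has a quick sanity-check form: a slot in a conductive shell behaves like a slot antenna, and the common first-order estimate of its shielding effectiveness is SE(dB) ≈ 20·log10(λ/2L) for slot length L below a half wavelength. This is a back-of-the-envelope screen, not a substitute for an EMC chamber:</p>

```python
# Rough slot-aperture check for EMI shielding. A slot approaching a half
# wavelength radiates like an antenna; SE ~ 20*log10(lambda / (2*L)) is the
# usual first-order estimate. Treat as a sanity check, not compliance data.
import math

C = 299_792_458.0  # speed of light, m/s

def slot_shielding_db(freq_hz, slot_len_m):
    lam = C / freq_hz
    if slot_len_m >= lam / 2:
        return 0.0  # slot at or beyond resonance: essentially no shielding
    return 20 * math.log10(lam / (2 * slot_len_m))

# A 30 mm vent slot against 2.4 GHz WiFi (lambda ~ 125 mm)
se = slot_shielding_db(2.4e9, 0.030)
print(f"{se:.1f} dB")  # roughly 6 dB -- far too leaky for most compliance margins
```

<p>A cosmetic vent slot that looks harmless on screen can single-handedly blow an emissions margin, which is why these openings get sized by the EMC strategy, not by what looks proportionate.</p>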
<p>Text-to-CAD knows none of this. The generated enclosure has no concept of electromagnetic compatibility. There are no gasket grooves. No shielding features. No antenna keep-out zones. No awareness that the metal screw boss you'd need for grounding is incompatible with the antenna zone two centimeters away. The AI generates a shell. Whether RF energy can enter or exit that shell in controlled ways is not part of the generation process. If you're designing <a href="/posts/text-to-cad-for-enclosures">enclosures for real products</a>, EMI and antenna considerations are non-negotiable, and they need to be designed in from the start.</p>
<h2>Thermal management</h2>
<p>Electronics generate heat. That heat needs to go somewhere. In a sealed enclosure, the thermal path is from the component (usually a processor, regulator, or power stage) through the PCB, through thermal interface material or an air gap, to the enclosure wall, and then from the enclosure wall to the ambient air.</p>
<p>Designing the thermal path involves: positioning heat-generating components near enclosure walls, designing flat contact pads on the inner wall surface for thermal interface material, adding ribs or fins on the exterior to increase surface area, incorporating vents (if the IP rating allows), and ensuring the PCB mounting features don't create thermal bottlenecks.</p>
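<p>The thermal path described above reduces, to first order, to a chain of series resistances: conduction R = t/(kA) through the gap pad and the wall, convection R = 1/(hA) from the wall to ambient. The sketch below uses made-up but plausible numbers (they are assumptions, not measurements) and ignores spreading effects:</p>

```python
# First-order series thermal path: component -> TIM -> enclosure wall -> ambient.
# Conduction R = t / (k * A); convection R = 1 / (h * A).
# All numbers below are illustrative assumptions, not measured values.

def conduction_R(thickness_m, k_w_mk, area_m2):
    return thickness_m / (k_w_mk * area_m2)

def convection_R(h_w_m2k, area_m2):
    return 1.0 / (h_w_m2k * area_m2)

A_pad  = 20e-3 * 20e-3   # 20x20 mm contact pad under the hot component
A_wall = 0.1 * 0.06      # outer wall area participating in convection

R_tim  = conduction_R(0.5e-3, 3.0, A_pad)   # 0.5 mm gap pad, k = 3 W/mK
R_wall = conduction_R(2.0e-3, 0.2, A_pad)   # 2 mm ABS wall, k = 0.2 W/mK
R_conv = convection_R(10.0, A_wall)         # still air, h ~ 10 W/m^2K

power_w = 2.0
delta_T = power_w * (R_tim + R_wall + R_conv)
print(f"component runs about {delta_T:.0f} C above ambient")
```

<p>Even at 2W, the plastic wall and natural convection dominate the chain, which is exactly why contact pads, external ribs, and component placement are designed features rather than afterthoughts.</p>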
<p>AI-generated enclosures have uniform wall thickness and no thermal features. No contact pads. No external ribs or fins. No consideration for where the hot components sit on the PCB. I prompted an enclosure for "a device with a processor that needs cooling" and got a box with vents. The vents were cosmetic rectangles in the side wall with no consideration for airflow direction, filter mounting, IP rating impact, or the actual location of the heat source inside the enclosure. A vent in the wrong place is worse than no vent, because it compromises the enclosure's environmental protection without providing meaningful cooling.</p>
<h2>Cosmetic surfaces and textures</h2>
<p>Consumer electronics enclosures are visible products. The exterior surface quality matters. Depending on the product positioning, you might need: high-gloss Class A surfaces (which require specific mold polishing and material flow considerations), matte textures (which require specific texture depths, draft angles increased beyond the standard to prevent drag marks during ejection, and sometimes specialized mold steel), soft-touch coatings (which require coating thickness allowance in the geometry and masking features to keep coating off mating surfaces), and multi-material construction (overmolding, which requires shut-off surfaces and separate mold inserts).</p>
<p>Text-to-CAD generates smooth, generic surfaces. The geometry has no texture specification, no increased draft angles for textured surfaces, no coating allowances, no consideration for gate vestige location (the visible mark where plastic enters the mold), and no parting line placement strategy to hide the parting line on a less visible surface. For a product where the enclosure is the brand experience, these omissions aren't minor. They're the difference between a product that looks intentional and one that looks like a first prototype.</p>
<h2>Tolerance stacking in multi-part assemblies</h2>
<p>A consumer electronics product is an assembly. The enclosure has to hold a PCB, a battery, a display, buttons, connectors, a speaker, maybe a camera module. Each of these components has its own dimensional tolerances. The enclosure dimensions have tolerances from the injection molding process. When you stack all these tolerances, you get a worst-case scenario where the PCB might not fit, the display might rattle, or the buttons might not actuate properly.</p>
<p>Tolerance analysis for enclosure design involves calculating the worst-case and statistical stack-ups for every critical assembly interface: display to enclosure gap, button cap to enclosure cutout clearance, PCB to mounting boss alignment, connector to enclosure cutout alignment, and battery to battery compartment clearance. Each interface has a nominal dimension and a tolerance range that accounts for component variation, mold variation, and assembly variation.</p>
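<p>For one interface, the two standard stack-up calculations look like this. The contributors and tolerance values are invented for illustration; a real analysis pulls them from component drawings and the molder's capability data:</p>

```python
# Worst-case vs statistical (RSS) stack-up for a single assembly interface:
# the gap between a button cap and its enclosure cutout. Each contributor is
# (name, nominal, +/- tolerance); the values are made up for illustration.
import math

contributors = [
    ("enclosure cutout width",  8.60, 0.15),
    ("button cap width",       -8.00, 0.10),   # negative: opposing direction
    ("assembly position shift", 0.00, 0.10),
]

nominal    = sum(n for _, n, _ in contributors)           # nominal clearance
worst_case = sum(t for _, _, t in contributors)           # all tolerances add
rss        = math.sqrt(sum(t**2 for _, _, t in contributors))

print(f"nominal gap {nominal:+.2f} mm")
print(f"worst case  {nominal:+.2f} +/- {worst_case:.2f} mm")
print(f"RSS         {nominal:+.2f} +/- {rss:.2f} mm")
```

<p>The worst-case clearance here bottoms out at 0.25mm, which is roughly where a button starts to drag. A nominal-only model, which is all text-to-CAD produces, can't tell you that.</p>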
<p>Text-to-CAD generates nominal geometry with no tolerance awareness. The button cutout is exactly the size specified in the prompt (if you're lucky), with no consideration for the clearance needed to accommodate button cap variation, enclosure shrinkage variation, and assembly position variation. The display opening is a rectangle, not a rectangle with a specific clearance and cosmetic gap specification. <a href="/posts/ai-cad-for-real-work">AI CAD for real work</a> already reveals that AI-generated geometry lacks manufacturing awareness. In multi-part consumer electronics, where six or eight components need to fit together in a space smaller than your palm, that lack of awareness becomes a multi-dimensional tolerance problem that the AI doesn't even know exists.</p>
<h2>The gap between "enclosure" and "enclosure that ships"</h2>
<p>I keep coming back to this distinction because it captures the fundamental problem with AI-generated enclosures. The AI can make a box. It can put holes in the box. It can round the corners and add a seam line that suggests where two halves might separate. On screen, it looks like a product enclosure.</p>
<p>But a product enclosure is a precision assembly interface, an EMC solution, a thermal management system, a cosmetic surface, a structural shell, and a manufacturing challenge all compressed into 2mm of plastic. Every wall thickness is a trade-off between stiffness, cosmetic quality, cycle time, and material cost. Every feature is positioned relative to an internal component that the AI doesn't know about. Every surface is specified for a texture, a coating, or a finish that affects the tooling, the molding, and the assembly.</p>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> describes the realistic scope of these tools, and enclosure design is a good test case for understanding the gap. The AI gives you maybe 5% of the design work: the overall shape and size. The remaining 95%, snap fits, bosses, ribs, EMI features, thermal management, cosmetic specifications, tolerance allocation, DFM for injection molding, is still entirely manual. And that 95% is where the actual enclosure design lives.</p>
<h2>The honest assessment</h2>
<p>AI CAD for consumer electronics enclosures is concept-phase useful and production-phase irrelevant. If you want to quickly visualize an enclosure shape for a pitch deck or an early design review, text-to-CAD can get you a 3D box faster than modeling one from scratch. If you want to design an enclosure that houses real electronics, passes EMC testing, survives drop testing, looks good in a customer's hand, and can be manufactured at scale by an injection molder in Shenzhen, you need a real CAD tool, a real DFM process, and probably a conversation with your mold maker that involves a lot of back-and-forth about gate locations and draft angles.</p>
<p>The technology might get there someday. The prompt would need to include a PCB layout, a component placement, a thermal budget, an EMC strategy, a cosmetic specification, and a manufacturing process definition. At that point, you're not typing a prompt. You're writing a product specification, which is what enclosure design actually requires. The shortcut the AI promises doesn't exist because the information it needs to do the job right is the same information you need to do the job right, and that information is the job.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD for rapid prototyping</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-for-prototyping</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-for-prototyping</guid>
      <pubDate>Mon, 16 Mar 2026 00:00:00 GMT</pubDate>
      <description>Rapid prototyping is where text-to-CAD makes the most sense right now. You need geometry fast, accuracy is forgiving, and the goal is learning, not production.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>prototyping</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD is most useful for rapid prototyping where speed matters more than precision. Generate a first-draft model from a text prompt, export STL, print on FDM, evaluate fit and form, then iterate. The 30-second generation time beats 30-minute manual modeling for disposable prototype geometry.</p>
<p>Friday afternoon, 4 PM, and a client sends over a revised board layout with mounting holes that moved 8mm from where they were last week. I had a prototype enclosure sitting on my desk that was now wrong in exactly the ways that matter: the standoffs didn't line up and the USB cutout was in the wrong spot. In the old days I would have opened Fusion 360, adjusted the sketch, rebuilt the feature tree (which would probably complain about at least one lost reference), re-exported, and sent the file to the printer. Forty-five minutes, minimum, mostly spent arguing with the timeline about why a moved hole affects a fillet three features later.</p>
<p>Instead I typed a new prompt into Zoo.dev with the updated dimensions, waited about thirty seconds, opened the STEP file, spot-checked the critical measurements, exported STL, and hit print. The part was on the build plate before 4:20. Was it perfect? No. The wall on one side was 0.3mm thinner than I'd have liked, and the corner radii weren't exactly what I'd have modeled by hand. But it was close enough to hold up to the board and check fit, which was the entire point. Prototyping is not about perfection. It's about learning fast enough to make the next version less wrong.</p>
<p>That's the argument for <a href="/posts/text-to-cad-guide">text-to-CAD</a> in prototyping, and it's the strongest argument the technology has right now. When the goal is speed, the tolerance is plus or minus "can I tell if this works," and the part is going in the trash after it teaches you something, text-to-CAD is genuinely useful.</p>
<h2>Why prototyping tolerates what manufacturing doesn't</h2>
<p>Every complaint about text-to-CAD output, the inconsistent wall thickness, the approximate dimensions, the missing tolerances, the absent DFM considerations, becomes less important when the part is disposable.</p>
<p>A prototype exists to answer a question. Does the board fit? Can the user reach the button? Is the enclosure too bulky? Does the cable route work? These questions need physical geometry, but they don't need precise geometry. If the enclosure is 1mm wider than intended, you can still tell whether it feels too big in your hand. If the mounting holes are 0.5mm off, you can still check if the board orientation makes sense. If the wall thickness varies a little, the structural concept is still visible.</p>
<p>Manufacturing demands precision because the parts go into products that go into customers' hands. Prototyping demands speed because the faster you learn, the fewer mistakes survive into production. Text-to-CAD is bad at precision and good at speed. The match with prototyping is obvious once you stop expecting it to be a manufacturing tool.</p>
<h2>The thirty-second first draft</h2>
<p>The time savings are real and measurable. I've tracked my workflow on about two dozen prototyping iterations over the past few months, comparing text-to-CAD generation to manual modeling in Fusion 360 for simple bracket-and-enclosure type parts.</p>
<p>Manual modeling for a simple bracket: 10-20 minutes. Create a sketch, dimension it, extrude, add holes, fillet edges, export.</p>
<p>Text-to-CAD for the same bracket: 30 seconds for generation, 2-3 minutes for import and spot-checking critical dimensions, maybe 5 minutes if I need to adjust something in Fusion before exporting. Call it 5-8 minutes total.</p>
<p>For a single part, that's not life-changing. But prototyping is iterative. You don't make one part. You make five versions of the same part as the design evolves, the board layout changes, the client moves a connector, or you realize the cable needs to route a different way. Five iterations at 15 minutes each is 75 minutes of modeling. Five iterations at 7 minutes each is 35 minutes. Over a project with a dozen prototype parts going through three or four rounds, the cumulative savings add up to hours. Not days, but hours. And in a tight prototyping timeline, hours matter.</p>
<h2>What to prototype with text-to-CAD</h2>
<p>Not everything in a prototype needs AI-generated geometry. Some parts are too complex, too critical, or too dependent on specific dimensions to trust to a text prompt. Here's what I've found works and what doesn't.</p>
<p>Works well: enclosure shells where you're checking fit and form. Mounting brackets for PCBs, sensors, and small motors. Cable routing guides. Battery holders. Display bezels (rough fit check only). Standoff and spacer geometry. Simple jigs for holding components during testing. Structural test pieces for evaluating basic load paths.</p>
<p>Works poorly: gear mechanisms. Flexible latches that need specific deflection behavior. Anything with mating surfaces that need to seal. Parts with threads (the AI generates decorative threads, not functional ones). Anything that needs to snap together with another AI-generated part, because the two parts won't agree on dimensions. Multi-component assemblies where fit between parts matters more than the individual shapes.</p>
<p>The pattern is straightforward: if the prototype question is "does this shape work in this space," text-to-CAD helps. If the prototype question is "do these two parts work together at this tolerance," text-to-CAD introduces more problems than it solves.</p>
<h2>The iterate-fast loop</h2>
<p>The best prototyping workflow I've found with text-to-CAD is a tight loop:</p>
<p>Prompt. Generate. Download STEP. Open in Fusion 360. Check three or four critical dimensions. Fix anything that's off by more than a millimeter on a feature that matters. Export STL. Slice. Print.</p>
<p>Evaluate the print. Hold it. Try to fit the components. Take notes on what's wrong. Write a new prompt that addresses the problems. Repeat.</p>
<p>The key insight is that you're not refining the same model through a feature tree. You're regenerating from scratch each time with an updated description. This sounds wasteful if you're used to parametric modeling, where you'd adjust one dimension and rebuild. But for prototyping, regeneration is actually fine because the part is simple, the generation is fast, and you're going to throw it away after the next revision anyway.</p>
<p>I've found myself writing prompts that get more specific with each iteration. First round: "rectangular enclosure 100x60x35mm with lid." Second round: "rectangular enclosure 100x60x35mm, 2mm walls, lid with 4 alignment pins, USB-C opening on the short side centered 12mm from the bottom." Third round: same but with "add ventilation slots on both long sides, 8 slots each, 2mm wide." Each prompt builds on what I learned from the previous print, and the regeneration takes seconds.</p>
<p>This workflow won't work for everyone. If you're a parametric-modeling purist who wants a single source of truth with full design intent captured in the feature tree, throwing away geometry and regenerating feels wrong. I get it. But prototyping is a different game. The feature tree doesn't matter when the part has a lifespan of one afternoon.</p>
<h2>The Fusion 360 checkpoint</h2>
<p>I never go straight from text-to-CAD to the printer without opening the STEP file in Fusion 360 first. This takes 2-3 minutes and has saved me from enough failed prints that it's non-negotiable.</p>
<p>What I check: overall dimensions against the prompt. Wall thickness on at least two faces (the AI sometimes gets these inconsistent). Hole diameters on critical mounting features. Whether the geometry is actually a solid (occasionally you get a model with internal faces that slice weirdly). Whether any feature is below the minimum printable size for my printer and material.</p>
<p>What I fix: holes that are too small (AI consistently generates holes 0.2-0.5mm undersized, and FDM shrinks them further). Walls that are below 1.2mm. Obvious errors like missing features or features in the wrong location.</p>
<p>What I ignore: non-critical dimensions being off by half a millimeter. Fillets that aren't exactly the radius I'd have chosen. Cosmetic details on a functional prototype. The fact that the feature tree is essentially one imported body with no history.</p>
<p>This checkpoint is the difference between text-to-CAD being a time-saver and text-to-CAD being a filament-waster. Five minutes of checking versus two hours of reprinting after a failure. The math works out every time.</p>
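<p>The hole fix in that checkpoint is mechanical enough to reduce to a rule. The compensation values below are the ones I've landed on for my printer and materials; treat them as assumptions to tune, not constants:</p>

```python
# Compensate AI-generated hole diameters before printing. In my experience
# text-to-CAD output runs 0.2-0.5 mm undersized, and FDM shrinks holes further.
# Both offsets are assumptions -- tune them per printer, material, and nozzle.

UNDERSIZE_ALLOWANCE = 0.3   # typical generation error observed, mm
FDM_SHRINK = 0.2            # typical FDM hole shrinkage, mm

def corrected_hole(target_diameter_mm):
    """Diameter to model so the as-printed hole lands near the target."""
    return target_diameter_mm + UNDERSIZE_ALLOWANCE + FDM_SHRINK

# M3 clearance hole: want 3.2 mm printed, so model roughly 3.7 mm
print(f"{corrected_hole(3.2):.1f} mm")
```

<p>The same pattern applies to any dimension with a known systematic error: correct it once in Fusion during the checkpoint rather than rediscovering it on the build plate.</p>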
<h2>Materials for prototyping AI-generated parts</h2>
<p>PLA. Almost always PLA for the first round.</p>
<p>PLA is forgiving, cheap, fast to print, dimensionally stable enough for fit checks, and it doesn't care about the minor geometry imperfections that text-to-CAD tools produce. A wall that varies between 1.5mm and 2mm still prints. A slightly faceted curve still looks like a curve. An oversized fillet still functions as a fillet. PLA absorbs the imprecision of AI-generated geometry better than any other common FDM material.</p>
<p>For later prototype rounds where I need to test mechanical properties, I switch to PETG or ABS. These materials are less forgiving of geometry quirks (ABS warps more, PETG strings more) but they're closer to production material behavior. By the time I'm printing in engineering materials, I've usually already corrected the critical geometry in Fusion, so the AI's original output has been refined.</p>
<p>I've also printed AI-generated parts in TPU for a flexible gasket prototype. This worked surprisingly well because the gasket was a simple ring shape, exactly the kind of geometry text-to-CAD handles without trouble.</p>
<h2>Where this sits compared to parametric prototyping</h2>
<p>I'm not going to pretend text-to-CAD is always faster than traditional modeling for prototyping. It depends on the part, the complexity, and how fast you are in your CAD tool of choice.</p>
<p>If you're an experienced Fusion 360 user and the part is a simple bracket you've modeled a hundred times before, you can probably sketch, extrude, and export in under ten minutes. Text-to-CAD saves you maybe five minutes. Not nothing, but not transformative.</p>
<p>If you're exploring a shape you haven't modeled before, or if you need multiple variations quickly, or if you're less experienced in CAD and every bracket takes thirty minutes, the time savings grow. A hardware engineer who's more comfortable writing English than creating sketch constraints can get a printable part from a text description in minutes instead of wrestling with a feature tree for an hour.</p>
<p>The biggest advantage isn't speed on any single part. It's the lower barrier to trying things. When generating a shape takes thirty seconds, you try more shapes. When you try more shapes, you learn faster. When you learn faster, the final product is better. The value isn't in the individual model. It's in the velocity of the iteration loop.</p>
<h2>The prototyping use case is real</h2>
<p>Prototyping is where I recommend people start with text-to-CAD, and it's where I think the <a href="/posts/best-text-to-cad-tools">current tools</a> deliver genuine value. The accuracy is good enough. The speed is clearly better. The failure cost is a few dollars of filament and an hour of print time, not a blown manufacturing run.</p>
<p>The parts are disposable by design. You print them to learn something, not to ship something. Every limitation of text-to-CAD output, the missing tolerances, the approximate dimensions, the absent DFM awareness, matters less when the part's purpose is education rather than production.</p>
<p>Start here if you're curious about text-to-CAD. Generate a bracket. Print it. Hold it against the thing it's supposed to fit. You'll immediately understand both what the technology can do and where it stops. That understanding is more useful than any demo, and it only costs you thirty seconds of prompting and a bit of plastic.</p>
<p>The parts that survive prototyping get modeled properly in <a href="/posts/ai-cad-for-real-work">real CAD</a> for production. The parts that don't survive go in the scrap bin, having done their job. Either way, the prototyping was faster than it would have been without the AI. That's a narrow win, but a real one, and in 2026 it's the clearest justification for text-to-CAD that I can honestly make.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD for product design: where it fits and where it doesn&apos;t</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-for-product-design</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-for-product-design</guid>
      <pubDate>Sun, 15 Mar 2026 00:00:00 GMT</pubDate>
      <description>Product design involves more than geometry. But the geometry part is where text-to-CAD can help, if you know which parts of the process it actually speeds up.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>product-design</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD fits product design at the early concept and prototyping stages: generating quick first-draft geometry for brackets, enclosures, and simple components. It doesn&apos;t replace detailed design work involving assemblies, tolerances, surface quality, DFM, or material selection. Best used as a starting-point generator.</p>
<p>I was in a design review last month with a client who makes consumer electronics. Small team, tight timeline, two weeks to get from napkin sketch to prototype. The industrial designer had some hand-drawn concepts of the enclosure. The mechanical engineer had a rough sense of the board layout and connector positions. And someone, I think it was the project manager, asked whether AI could "just generate the CAD" so they could skip ahead to printing.</p>
<p>I said maybe, for parts of it. Then I spent the next hour demonstrating exactly which parts, and it turned out to be a narrower slice than anyone hoped.</p>
<p>Product design is not geometry generation. That's the core misunderstanding that text-to-CAD marketing tends to encourage. The geometry is one layer, and not even the hardest layer, of a process that includes user requirements, ergonomics, material selection, assembly design, tolerancing, DFM, testing, and iteration. <a href="/posts/text-to-cad-guide">Text-to-CAD</a> can help with the geometry layer, specifically the early, rough, disposable version of it. Everything else is still a human job.</p>
<h2>Where product design actually spends its time</h2>
<p>Before talking about what AI can do, it helps to be honest about where time goes in a real product design project. I've been doing this kind of work in Fusion 360 for years, and before that in SolidWorks, and the breakdown is roughly the same regardless of the tool.</p>
<p>Maybe 15-20% of the time goes into initial geometry creation. Sketching profiles, extruding features, cutting pockets, adding fillets. This is the part that demos focus on because it's visual and satisfying. A shape appears. Progress is visible.</p>
<p>Another 20-30% goes into assembly work. Making parts fit together. Defining mating relationships. Checking interference. Managing fastener access. Making sure the lid actually closes on the box, the board actually fits in the housing, and the cable actually reaches the connector. This is where simple-looking products become complicated, because every part exists in relationship to other parts.</p>
<p>The remaining 40-50% (often more) goes into detailing, tolerancing, DFM, testing, and revision. Adjusting wall thickness for moldability. Adding draft for tooling. Specifying surface finish on cosmetic faces. Running tolerance stacks to verify that the assembled product works across the worst-case combination of parts. Revising after a prototype reveals problems. Revising again after the tooling engineer says the ribs are too thin. Revising a third time after the client changes the board layout.</p>
<p>Text-to-CAD touches the first 15-20%. It doesn't touch the rest. That's not a criticism of the technology. It's a description of what the technology is: a geometry generator. The question for product design is whether faster initial geometry actually matters when the downstream work dominates the timeline.</p>
<h2>Early concepts: the genuine sweet spot</h2>
<p>The one place text-to-CAD consistently saves me time in product design is at the very beginning of a project, when I need shapes to react to rather than imagine.</p>
<p>Before these tools existed, early concept exploration in CAD meant sketching several variations of a housing or bracket, each one taking 15-30 minutes in Fusion. Quick by detailed-design standards, but slow when you're trying to explore ten different form factors in a single afternoon.</p>
<p>With text-to-CAD, I can generate five or six variations of a basic enclosure in the time it takes my coffee to cool from painful to drinkable. "Rectangular enclosure 120x80x40mm with rounded corners." "Same but with a tapered front face." "Same but split horizontally with alignment pins." None of these models will be the final design. Most of them will be wrong in important ways. But they give me and the team something concrete to discuss, critique, and steer from.</p>
<p>I used this workflow on a recent project for a small sensor housing. Generated four enclosure shapes with Zoo.dev, pulled them into Fusion 360, dropped in the PCB model for a quick fit check, and had a design direction selected within an hour. The selected concept still needed complete rework: proper wall thickness, snap fits, cable routing, thermal venting, and about a dozen other details the AI didn't include. But the "what general shape are we going for" question was answered fast, and that let the real design work start sooner.</p>
<h2>The assembly gap</h2>
<p>Product design lives and dies on assemblies. A housing is not just a housing. It's a housing that contains a PCB, a battery, a display, three connectors, two switches, and a lens. Each of these components has specific dimensions, mounting requirements, and keep-out zones. The housing exists to hold them all in the right positions relative to each other and relative to the user's hand.</p>
<p>Text-to-CAD tools can't generate assemblies. They generate single parts. You can ask for "an enclosure for an Arduino Nano with a USB-C port opening on one side and two mounting screw holes on the bottom," and you might get something that looks right. But the USB-C opening won't be positioned to match the actual connector location on the Arduino board, because the AI doesn't have the Arduino board model. The screw holes won't match the board's mounting pattern unless you specified the exact coordinates in the prompt, and even then, the accuracy is approximate.</p>
<p>I tried this experiment systematically with three different boards. Generated enclosures for each using detailed prompts that included every relevant dimension. The USB port cutout was misaligned by 1-3mm on every attempt. The mounting holes were off by up to 2mm. Close enough to see the concept, nowhere near good enough to print and assemble.</p>
<p>In real product design, assemblies drive the geometry. The part shape comes from the components it needs to contain, the manufacturing process it needs to survive, and the user interactions it needs to support. Text-to-CAD generates shapes without any of that context. The shape is freestanding. The product is not.</p>
<h2>Material selection: invisible but essential</h2>
<p>When I design a consumer product enclosure, material choice is one of the first decisions. Is it injection-molded ABS? PC/ABS for impact resistance? Glass-filled nylon for stiffness? Silicone overmold for grip? Each material has different design rules. ABS needs different wall thickness than polycarbonate. Nylon has different shrink rates. Silicone has different Shore hardness options that affect the geometry of overmold features.</p>
<p>Text-to-CAD tools don't know what material the part will be made from. They generate geometry in a material vacuum. The walls are whatever thickness the training data averaged. The features are whatever the model learned was typical. There's no feedback loop between material properties and geometry.</p>
<p>This means the AI-generated starting point needs to be re-evaluated against the chosen material before any detail work happens. A 1.5mm wall might be fine for ABS but too thin for unfilled polypropylene. A snap fit designed at one thickness might need to be 20% thicker for a more brittle material. These adjustments aren't optional. They're the difference between a product that survives a drop test and one that doesn't.</p>
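<p>To make that concrete, here's a tiny sketch of the material-aware check the AI skips entirely. The minimum-wall values are illustrative rules of thumb I'm assuming for the example, not datasheet numbers; a real project would pull them from the resin supplier's design guide.</p>

```python
# Re-check an AI-generated wall thickness against material-specific
# minimums for injection molding. Values below are illustrative rules
# of thumb, not datasheet numbers -- confirm with your resin supplier.
MIN_WALL_MM = {
    "ABS": 1.2,
    "PC/ABS": 1.2,
    "PA-GF30": 1.8,  # glass-filled nylon needs more meat around features
}

def check_wall(material: str, thickness_mm: float) -> str:
    minimum = MIN_WALL_MM[material]
    if thickness_mm < minimum:
        return f"{material}: {thickness_mm}mm wall is below ~{minimum}mm minimum"
    return f"{material}: {thickness_mm}mm wall OK"

# The same 1.5mm wall passes for ABS but not for glass-filled nylon.
print(check_wall("ABS", 1.5))
print(check_wall("PA-GF30", 1.5))
```

<p>Trivial code, but that's the point: the geometry can't be evaluated without the material, and the AI never asks.</p>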
<h2>Surface quality and cosmetic intent</h2>
<p>Product design, especially for consumer products, cares about surfaces in a way that mechanical engineering often doesn't. A visible face needs to be smooth. A parting line needs to be positioned where the user won't see it. A textured surface needs specific draft to release from a textured mold. A painted surface needs different geometry than a color-matched surface.</p>
<p>Text-to-CAD geometry has no concept of cosmetic intent. Every surface is equal. The fillet that transitions between the front face and the side wall is the same quality as the fillet hidden inside a cable channel. There's no distinction between A-surfaces (visible to the user) and B-surfaces (functional but hidden). There's no consideration of how light will play across a curved surface, or where a customer's thumb will rest, or which surface the marketing team will photograph.</p>
<p>For products where appearance matters, which is most consumer products, the AI-generated geometry is a starting shape that needs its surfaces completely rethought. That's normal in product design; surface refinement is always a separate pass. But it means the AI is contributing to the structural concept, not the finished design. The contribution is real but limited.</p>
<h2>Ergonomics: the thing geometry can't capture alone</h2>
<p>A product that a human holds, touches, carries, or operates needs ergonomic consideration. Handle diameter. Grip contour. Button placement. Weight distribution. Viewing angles. These aren't add-ons. They're primary design drivers.</p>
<p>I asked a text-to-CAD tool to generate a handheld device enclosure. I got a rectangular box with rounded edges. It was technically holdable in the same way that a brick is technically holdable. The radii were arbitrary. The grip zones were flat. The weight distribution (if the internals were included) would have put the center of gravity in the wrong place. The button positions were decorative.</p>
<p>Ergonomic design requires understanding human hands, which come in different sizes. It requires testing with foam models, 3D-printed mockups, and user feedback. It requires the kind of judgment that comes from watching someone struggle with a prototype and knowing which surface to adjust. Text-to-CAD can generate the first foam-core-equivalent shape to hold in your hand and react to. It cannot design the final ergonomic form.</p>
<h2>DFM: the wall between concept and production</h2>
<p>Design for manufacturability is where product design gets expensive if you ignore it. Every manufacturing process has constraints that need to be reflected in the geometry from early in the design process, not bolted on at the end.</p>
<p>Injection molding needs draft angles, uniform wall thickness, gate locations, and rib-to-wall ratios. Sheet metal needs bend radii and relief cuts. Die casting needs different draft than injection molding and has minimum wall thickness requirements tied to flow length. Even 3D printing has DFM rules around support, orientation, and feature resolution.</p>
<p>Text-to-CAD tools have no DFM awareness. I've covered this in detail in the <a href="/posts/text-to-cad-for-manufacturing">manufacturing post</a>, but the product design angle is slightly different. In product design, DFM isn't just about making the geometry producible. It's about making trade-offs between appearance, function, and manufacturability throughout the design process.</p>
<p>A product designer might choose to add a visible parting line on a less prominent surface to avoid a side action in the mold. They might thicken a wall to improve flow, even though it adds weight. They might split a part into two pieces to make it moldable, changing the entire assembly strategy. These decisions require understanding the manufacturing process, the cost implications, and the product requirements simultaneously. The AI generates a shape. The product designer generates solutions.</p>
<h2>Where text-to-CAD fits in the product design timeline</h2>
<p>After working with these tools across several projects, here's my honest mapping of where they help and where they don't.</p>
<p>Weeks 1-2, concept exploration: genuinely useful. Generate multiple form factors quickly. Use them as conversation starters. Print rough shapes on FDM for early feel tests. This is <a href="/posts/text-to-cad-for-prototyping">prototyping territory</a>, and text-to-CAD is good at it.</p>
<p>Weeks 2-4, detailed design: not useful. This is where you're building proper parametric models with assembly relationships, material-aware features, and DFM considerations. The AI-generated concept might inform the starting dimensions, but the actual CAD work is ground-up in Fusion or SolidWorks.</p>
<p>Weeks 4-8, refinement and validation: not useful. Tolerance stacks, FEA, mold flow analysis, interference checks, and drawing creation are all manual engineering tasks that require proper parametric models with full feature history.</p>
<p>Weeks 8+, production release: not useful. ECOs, revision management, and supplier communication require fully defined engineering models. The AI hasn't touched the file since week one.</p>
<p>The useful window is real but narrow. Maybe 10-15% of the project timeline, and only for the simplest parts in the assembly. The PCB mounting bracket. The cable guide. The battery holder. Not the main housing. Not the user-facing surfaces. Not the mechanism.</p>
<h2>Simple components within a product</h2>
<p>There's one product-design use case where text-to-CAD reliably helps: generating internal structural components that don't interact with the user or the outside world.</p>
<p>A PCB standoff. A cable clip. A simple bracket that holds a speaker in place. An internal rib structure. These are the utility parts of a product, necessary but not interesting. They have simple geometry, forgiving tolerances, and no cosmetic requirements. They're exactly the kind of thing where spending fifteen minutes in Fusion feels tedious and a thirty-second AI generation feels like a win.</p>
<p>I've started using text-to-CAD for these components consistently, generating the first draft, importing into the assembly, checking fit, and adjusting. For a product with six to ten internal structural parts, this saves maybe an hour of total modeling time. Not transformative, but real. And it lets me spend that hour on the parts that actually need human attention: the outer surfaces, the mechanism, the ergonomics.</p>
<h2>The workflow I've settled on</h2>
<p>For product design projects, my current text-to-CAD workflow is:</p>
<p>Use it for concept-phase exploration. Generate form-factor options. Print and hold them. Make decisions about overall shape and proportion.</p>
<p>Use it for simple internal components. Standoffs, clips, brackets, mounts. Generate, import, adjust, move on.</p>
<p>Don't use it for the primary enclosure past the concept phase. The surface quality, assembly relationships, and DFM requirements need proper parametric modeling.</p>
<p>Don't use it for any part that needs tolerances tighter than plus or minus 1mm. Don't use it for parts with complex surface transitions. Don't use it for parts that need to match specific hardware components without measuring and correcting first.</p>
<p>This gives me the speed benefits where they matter and keeps me in <a href="/posts/ai-cad-for-real-work">real CAD</a> where accuracy matters. It's not the revolution the demos promise. It's a new tool in the box, right for some jobs and wrong for others. Product design has always been about knowing which tool to reach for. Text-to-CAD is one more option, as long as you understand its boundaries.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD for mechanical parts: brackets, mounts, and fixtures</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-for-mechanical-parts</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-for-mechanical-parts</guid>
      <pubDate>Sat, 14 Mar 2026 00:00:00 GMT</pubDate>
      <description>Brackets, mounts, and fixtures are the sweet spot for text-to-CAD. Simple geometry, clear dimensions, and forgiving tolerances. Here&apos;s what works.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>mechanical-parts</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD works best for simple mechanical parts: L-brackets, mounting plates, standoffs, cable clips, and basic fixtures. These parts have simple prismatic geometry that AI handles well. Expect to fix hole positions, fillet radii, and material thickness. Complex assemblies and tight tolerances still require manual modeling.</p>
<p>I keep a cardboard box under my desk full of 3D-printed brackets that didn't work. Some came from bad sketches. Some from wrong dimensions I typed while tired. And a growing number, over the last year or so, came from text-to-CAD tools. The brackets in that box all look roughly correct. They have flanges, holes, stiffening ribs, the usual. But "roughly correct" is a generous description when the bolt holes are 0.6mm off and the thing won't actually mount to the DIN rail it was supposed to fit.</p>
<p>That said, brackets, mounts, and fixtures are genuinely the sweet spot for text-to-CAD. Not the only use case that works, but the one that works most often with the least cleanup. And I've been doing this long enough to appreciate a tool that saves me even twenty minutes on a part I was going to iterate anyway. So here's what actually works, what breaks, and where the line is.</p>
<h2>Why simple mechanical parts are the right test</h2>
<p>The dirty secret of text-to-CAD is that the training data is mostly simple mechanical parts. Brackets. Plates. Standoffs. Flanges. The kind of geometry you'd find in a first-year engineering project or a McMaster-Carr catalog page. That's not an insult. It means the AI has seen thousands of these shapes and has a decent statistical model of what they look like.</p>
<p>A basic L-bracket is a sketch-extrude-cut operation. Two legs, some holes, maybe a fillet at the bend. There's nothing parametrically complex about it. The geometry is prismatic, the features are standard, and the dimensions are all related in ways that aren't hard to infer from a text description. Compare that to a turbine blade, a multi-body mold insert, or a swept lofted handle, and you can see why the AI does better here.</p>
<p>I tested Zoo.dev, AdamCAD, and a couple of prompt-based generators on a set of ten common mechanical parts: mounting plates, L-brackets, cable clips, standoffs, DIN rail adapters, sensor mounts, PCB standoffs, a motor mounting bracket, a U-channel, and a flat plate with a bolt pattern. Not glamorous. Just the kind of parts that show up three dozen times in any hardware project.</p>
<h2>What the AI got right</h2>
<p>The surprise, if you can call it that, is how much was close enough to use. On six of the ten parts, the AI produced geometry I could import into Fusion 360 and start modifying without wanting to delete everything and start from scratch. The overall dimensions were within a millimeter. The shapes were recognizable. The feature count was approximately correct.</p>
<p>The standoffs were perfect. A cylinder with a bore and a counterbore is about the simplest thing you can ask for, and the AI nailed it every time. Outer diameter, inner bore, height, counterbore depth. All within 0.2mm of the prompt. I could've sent those STEP files directly to a print job.</p>
<p>The L-brackets were close. Leg lengths were right. Thickness was right. The bend radius was usually present, which is more than I expected. The hole positions were the weak spot, usually within a millimeter of where I asked for them but not exactly on the mark. On a clearance hole bracket for M4 screws, a millimeter of drift is something you can live with. On anything tighter, you can't.</p>
<p>The mounting plates were solid. Flat geometry is easy for the AI. A rectangle with counterbored holes is basically the AI's comfort food. I got usable output on all three plate variants I tested.</p>
<h2>What the AI got wrong</h2>
<p>The cable clips were a mess. A cable clip is a simple part, but it has a snap-fit feature, an undercut, and geometry that depends on the cable diameter in a way that the AI couldn't infer from the prompt alone. I asked for a clip sized for a 10mm cable. The slot opening was 8mm. The retention lips didn't have enough return to actually hold anything. It looked like a clip in the viewport, but it was structurally a channel with aesthetic bumps.</p>
<p>The DIN rail adapter was the most interesting failure. The AI generated something clearly inspired by a DIN rail adapter, with the right general profile. But the retention clip was solid geometry, not a sprung feature. The rail slot width was off. And the mounting hole pattern was symmetrical when it should have been offset to account for the rail's asymmetric cross-section.</p>
<p>The motor mount bracket was also wrong in a specific, educational way. It had the right overall shape: a flat base with a raised portion and holes for motor bolts. But the bolt circle diameter was generic, not matched to any standard motor frame size. In real life, a NEMA 23 motor mount has a 47.14mm bolt circle. The AI gave me 45mm. Close enough to look right. Far enough to not work. I've had this exact argument with a 3D printer at 11 PM, trying to force M3 bolts through holes that were juuust slightly too close together.</p>
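<p>It's worth quantifying why "close" fails here. A quick back-of-envelope script (assuming, as most NEMA 23 drawings do, that 47.14mm is the spacing between adjacent hole centers; check your motor's datasheet) shows the error is several times larger than a clearance hole can absorb:</p>

```python
import math

# Per-hole position error when an AI generates a 45mm square mounting
# pattern where the motor expects 47.14mm hole spacing.
expected = 47.14   # mm between adjacent hole centers
generated = 45.0

# Each hole center shifts by half the spacing error in both X and Y.
per_axis = (expected - generated) / 2
radial_error = math.hypot(per_axis, per_axis)
print(f"each hole is off by {radial_error:.2f}mm")

# An M3 screw (3.0mm) in a standard 3.4mm clearance hole only has
# 0.2mm of slop per side, so ~1.5mm of error cannot be forced.
slop_per_side = (3.4 - 3.0) / 2
print("assembles" if radial_error <= slop_per_side else "does not assemble")
```

<p>About 1.5mm of radial error per hole against 0.2mm of available slop. No amount of 11 PM optimism closes that gap.</p>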
<h2>The fixup time is the real metric</h2>
<p>The question everyone asks is whether text-to-CAD saves time. On simple mechanical parts, the answer is yes, but less than you think, because the fixup time eats into the savings.</p>
<p>For the standoffs, fixup time was zero. Straight to print.</p>
<p>For the L-brackets and mounting plates, I spent five to fifteen minutes in Fusion 360 adjusting hole positions, tweaking a fillet radius, and verifying wall thickness. Modeling these from scratch would've taken maybe twenty to thirty minutes each. So the net savings were ten to fifteen minutes per part. Real, but not dramatic.</p>
<p>For the cable clip and the DIN rail adapter, I spent more time trying to fix the AI output than it would have taken to model the part from scratch. The cable clip needed a complete redesign of the retention feature. The DIN rail adapter needed the rail interface geometry rebuilt. At that point, the AI's contribution was a rough shape I could've sketched freehand in Fusion 360 in about ninety seconds.</p>
<p>The pattern is clear. If the part is prismatic, symmetric, and doesn't depend on interface dimensions with other components, text-to-CAD saves time. If the part has functional features that interact with specific mating geometry, the AI's output is a starting suggestion at best and a misleading distraction at worst.</p>
<h2>What makes a good prompt for mechanical parts</h2>
<p>After running through dozens of prompts, a few things consistently improve results. Specify material thickness explicitly: "3mm thick aluminum L-bracket" beats "L-bracket" every time. The AI treats thickness as a free parameter if you don't pin it, and it picks weird numbers like 4.7mm that don't match any standard stock.</p>
<p>Give bolt patterns in absolute terms. "Four M4 clearance holes, 4.5mm diameter, on a 40mm by 30mm rectangular pattern centered on the face" is better than "mounting holes for M4 bolts." Include standard interface dimensions if the part mounts to something specific: "31mm bolt circle, 22mm central bore" for a NEMA 17 plate. And keep it to one part. The moment you describe two parts that fit together, you're asking for an assembly, and <a href="/posts/text-to-cad-limitations">text-to-CAD can't do assemblies</a>.</p>
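<p>When a pattern gets tedious to spell out by hand, I generate the coordinates and paste them into the prompt. A minimal sketch (the helper name is mine, not any tool's API):</p>

```python
# Expand a bolt-pattern spec into the explicit hole coordinates that
# make a text-to-CAD prompt unambiguous.
def rect_pattern(width: float, height: float):
    """Hole centers for a rectangular pattern centered on the origin."""
    return [(sx * width / 2, sy * height / 2)
            for sx in (-1, 1) for sy in (-1, 1)]

# The 40mm x 30mm pattern from the prompt above:
for x, y in rect_pattern(40, 30):
    print(f"4.5mm diameter hole at X={x:+.1f}mm, Y={y:+.1f}mm")
```

<p>Four explicit coordinates beat "mounting holes in the corners" every time.</p>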
<h2>Where this fits in a real workflow</h2>
<p>My workflow for simple mechanical parts starts with a text-to-CAD prompt about half the time now. Not because the output is perfect, but because it's faster than setting up a new sketch, picking a plane, and drawing the first rectangle. Then I import into Fusion 360, measure everything, fix what's off, add proper constraints, and save it as a real parametric model. The AI's version stays in the import bodies folder as a reference. The actual model is mine.</p>
<p>For fixtures and jigs, the tolerance on "good enough" is wide. A 3D-printed fixture that'll be used for a few weeks and thrown away doesn't need to hit every dimension. Text-to-CAD is genuinely useful here.</p>
<p>For anything going to <a href="/posts/ai-cad-for-cnc-machining">CNC machining</a> or <a href="/posts/ai-cad-for-injection-molding">injection molding</a>, the AI output is a starting conversation, not a finished part. The AI doesn't know what a cutter is, doesn't know what a mold is, and doesn't care about your tolerances.</p>
<h2>The honest take</h2>
<p>Brackets, mounts, and fixtures are where text-to-CAD earns its keep, and even here, the earn is modest. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> I wrote earlier covers the general workflow, but for mechanical parts specifically, the real value is in skipping the first five minutes of modeling setup on parts you've built a hundred times before. It's not going to replace your engineering judgment. It's not going to give you <a href="/posts/text-to-cad-for-manufacturing">manufacturing-ready output</a>. And it's not going to know that your NEMA 23 bolt circle is 47.14mm, not 45mm.</p>
<p>But if you're honest about what it can do, treat the output as a draft, and keep your calipers within reach, it's a useful addition to the boring part of mechanical design. I just wish the boring parts were the only parts that mattered.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD for manufacturing: can AI output survive a machine shop?</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-for-manufacturing</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-for-manufacturing</guid>
      <pubDate>Fri, 13 Mar 2026 00:00:00 GMT</pubDate>
      <description>I showed text-to-CAD output to a machinist. The look on his face was educational. Here&apos;s what happens when AI geometry meets manufacturing reality.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>manufacturing</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD output is not manufacturing-ready in 2026. Common issues: missing fillets on internal edges, zero-radius corners, no draft angles, incorrect hole tolerances, and geometry that ignores tool access. AI-generated models require significant manual editing before CNC machining, injection molding, or sheet metal fabrication.</p>
<p>I sent a STEP file to my usual machine shop last month without mentioning it came from an AI. Just said I needed a quick quote on a one-off bracket. Aluminum, nothing exotic. The shop owner called me forty minutes later, which is fast even for him. "Jan, did an intern draw this?" he asked. I told him the truth: it was generated by a text-to-CAD tool. There was a pause. "That explains the internal corners," he said. "Tell the AI about end mills."</p>
<p>That call lasted about fifteen minutes and covered most of what I'm about to write here. The bracket looked fine on screen. It had mounting holes, a reasonable profile, fillets in the right general areas. It exported to STEP without errors. But when a person who has actually made thousands of parts from CAD files looked at it, the problems were immediate and numerous. Sharp internal corners that no cutter can reach. A pocket depth that exceeded the tool length-to-diameter ratio. Walls thin enough to chatter. No consideration of how the part would be held in a vise.</p>
<p>This is the story of <a href="/posts/text-to-cad-guide">text-to-CAD</a> and manufacturing in 2026. The geometry exists. The engineering doesn't.</p>
<h2>What a machine shop actually needs from a CAD file</h2>
<p>Before I get into what AI gets wrong, it's worth understanding what a manufacturing-ready model requires. Because the bar isn't just "correct shape." It's a much longer list than most non-manufacturing people realize.</p>
<p>A machineable part needs tool access to every feature. Every internal pocket needs corner radii that match or exceed the radius of the smallest end mill that can reach it. Every hole needs to be achievable with standard drill sizes or boring operations. Wall thickness needs to be sufficient for the material and the cutting forces. Features need to be positioned so the part can be fixtured, meaning clamped in a vise or bolted to a plate, without blocking access to the features being cut.</p>
<p>Beyond that, a manufacturing drawing or model needs tolerances. Not "approximately 10mm." Exactly 10mm +0/-0.05, or 10mm H7, or 10mm with a surface finish of Ra 1.6. These specifications tell the machinist what matters and what doesn't, where to spend time and where good enough is good enough.</p>
<p>Text-to-CAD tools produce none of this. They produce shapes. Shapes without tolerances, without DFM consideration, without any awareness that the geometry will need to interact with cutting tools, fixtures, and physics. The <a href="/posts/text-to-cad-limitations">limitations are well documented</a> at this point, but seeing them through the eyes of a machine shop makes them feel more urgent.</p>
<h2>The internal corner problem</h2>
<p>This is the one every machinist spots first. It's almost a litmus test for whether a model was created by someone who understands machining.</p>
<p>CNC end mills are round. They have a radius. When an end mill cuts a pocket, the corners of that pocket will have a radius equal to (at minimum) the radius of the cutter. A 6mm end mill leaves 3mm corner radii. A 3mm end mill leaves 1.5mm corner radii. You can go smaller, but smaller cutters are slower, more fragile, and more expensive to run.</p>
<p>Text-to-CAD tools routinely generate pockets and slots with zero-radius internal corners. Perfect 90-degree intersections of two walls. These look fine on screen and are physically impossible to machine without EDM (electrical discharge machining), which costs an order of magnitude more than milling.</p>
<p>Every single AI-generated bracket, enclosure, and housing I've sent to a machine shop has had this problem. Every one. The fix is simple: add internal fillets of at least 1.5mm (for a 3mm cutter) on all internal vertical edges. A human CAD user who has been yelled at by a machinist once does this automatically. The AI has never been yelled at by anyone.</p>
<p>I've tried prompting for it explicitly. "Add 2mm fillets on all internal corners" sometimes works, sometimes doesn't, and sometimes adds fillets on external edges where I didn't want them while missing the internal ones entirely. It's the kind of task that requires understanding why the fillet matters, not just where to put one.</p>
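<p>Until the tools learn, the check is easy to script yourself. A rough sketch of the rule (the 10% margin is my habit, not a standard):</p>

```python
# Flag internal corner radii that a given end mill can't produce.
# Rule of thumb: internal corner radius >= cutter radius, plus a
# little margin so the tool isn't buried in the corner.
def unmachinable_corners(radii_mm, cutter_diameter_mm, margin=1.1):
    required = (cutter_diameter_mm / 2) * margin
    return [r for r in radii_mm if r < required]

# Internal corner radii measured off a typical AI-generated bracket:
corners = [0.0, 0.0, 2.0, 3.0]
bad = unmachinable_corners(corners, cutter_diameter_mm=3.0)
print(f"{len(bad)} corner(s) need fillets before this part is quotable")
```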
<h2>Tool access and pocket depth</h2>
<p>Imagine you need to cut a deep pocket in a block of aluminum. The end mill is a long, thin cylinder spinning at thousands of RPM, plunging into metal. The deeper the pocket relative to the tool's diameter, the more the tool deflects, chatters, and potentially breaks. There's a practical limit, usually around 3 to 4 times the tool diameter for standard operations, beyond which you need special tooling, reduced feeds, or a different approach entirely.</p>
<p>Text-to-CAD tools don't model this constraint. I've gotten parts with 20mm deep pockets that are 5mm wide, requiring a tool aspect ratio that would make any machinist wince. The tool would need to be 4mm diameter or less to fit in the pocket, and at 20mm depth, it would be cutting at 5x its diameter. That's not impossible with modern tooling and careful feeds, but it's slow, risky, and expensive. A part designed by someone who knows machining would either widen the pocket, reduce the depth, or split the feature into multiple operations.</p>
<p>The AI doesn't think about any of this because it doesn't model the manufacturing process. It models geometry. The distance between those two things is where the machine shop quote doubles.</p>
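<p>The depth rule is simple enough to automate too. A sketch using the pocket above, and assuming about 1mm of width clearance so the cutter isn't slotting at full width:</p>

```python
# Sanity-check a pocket against the ~4x depth-to-diameter rule of thumb.
def pocket_check(width_mm, depth_mm, max_ratio=4.0):
    # Leave ~1mm so the largest cutter isn't slotting at full width.
    tool = width_mm - 1.0
    ratio = depth_mm / tool
    return tool, ratio, ratio <= max_ratio

# The 5mm-wide, 20mm-deep pocket described above:
tool, ratio, ok = pocket_check(width_mm=5.0, depth_mm=20.0)
print(f"{tool}mm tool at L/D = {ratio:.1f} -> "
      f"{'fine' if ok else 'flag for the machinist'}")
```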
<h2>Wall thickness and rigidity</h2>
<p>Thin walls vibrate during machining. This causes chatter marks on the surface, dimensional inaccuracy, and sometimes catastrophic failure where the wall simply breaks under cutting forces. The minimum practical wall thickness for aluminum milling is around 1mm for short features and more like 2-3mm for anything tall or unsupported.</p>
<p>I measured wall thickness on fifteen AI-generated parts intended for CNC machining. Six of them had at least one wall thinner than 1.5mm. Two had walls thinner than 1mm. One, a particularly ambitious enclosure, had a 0.6mm wall that would have vibrated like a tuning fork the moment a cutter touched it.</p>
<p>The AI generated these thin walls not because it was trying to be clever about weight reduction, but because it was distributing material based on training data patterns without any understanding of what happens when you try to cut thin features in metal. The result looks like a part on screen. It sounds like a dentist's drill in the shop.</p>
<h2>Hole specifications: close is not good enough</h2>
<p>Holes are the simplest feature in machining and one of the easiest places for AI-generated geometry to go wrong in ways that matter.</p>
<p>A 6mm hole in a CAD model is not useful manufacturing information by itself. Is it a clearance hole for an M6 bolt (needs to be 6.4mm or 6.6mm)? A close-fit hole (6.1mm)? A bearing bore (H7 tolerance, meaning 6.000 to 6.012mm)? A tapped hole (5mm pilot drill, then M6 tap)? Each of these is a different operation with different tooling, and the CAD model needs to communicate which one the designer intended.</p>
<p>Text-to-CAD tools generate holes at nominal dimensions with no tolerance or fit class specification. A "6mm mounting hole" arrives as exactly 6.000mm in the STEP file, which tells the machinist nothing about intent. Most shops will drill it at 6.0mm +/- 0.1mm because that's their standard, and hope that's what you wanted. If you needed a press-fit or a bearing bore, you're in trouble, and you won't know until assembly.</p>
<p>I've also noticed that AI-generated hole diameters don't always match standard drill sizes. Real engineers design with drill charts in mind. A 6.8mm hole is standard. A 6.73mm hole means someone is guessing. I've gotten fractional hole diameters from AI tools that would require custom boring operations when a standard drill would have been fine.</p>
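<p>For reference, here's roughly how hole intent maps to shop operations for M6 hardware, plus the standard-drill snap that AI diameters routinely miss. The sizes are common chart values I'm using for illustration; verify against your own drill chart.</p>

```python
# What a "6mm hole" might actually mean, as operations for M6 hardware.
# Common reference values -- verify against a drill chart.
M6_HOLE_OPTIONS = {
    "clearance (close fit)":  6.4,
    "clearance (normal fit)": 6.6,
    "tap pilot":              5.0,            # then M6 x 1.0 tap
    "bearing bore H7":        (6.000, 6.012), # bored or reamed, not drilled
}

def nearest_standard_drill(d_mm: float) -> float:
    """Metric drills commonly come in 0.1mm steps in this size range."""
    return round(d_mm * 10) / 10

# The AI's 6.73mm hole snaps to an off-the-shelf 6.7mm drill.
print(nearest_standard_drill(6.73))
```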
<h2>Draft angles: the mold doesn't care about your demo</h2>
<p>If a part is going to be injection molded, every face that runs parallel to the mold pull direction needs a draft angle, typically 1 to 3 degrees. Without draft, the part sticks in the mold. This is not optional. This is not a nice-to-have. This is physics.</p>
<p>Text-to-CAD tools generate parts with zero draft on every vertical face, every time. I have tested dozens of AI-generated enclosures and housings and have never once seen draft angles applied automatically. The AI doesn't know the part will be molded. It doesn't know what a mold is. It generates a box with perfectly vertical walls because that's what the training data looks like in the CAD file, even though the actual manufactured parts in that training data had draft angles applied.</p>
<p>A tooling engineer I showed some AI output to didn't even open the STEP files. He looked at the renders and said, "No draft anywhere. These would need complete rework before I could even start a mold design." He wasn't being difficult. He was being accurate.</p>
<p>This applies to cast parts too. Casting needs draft for the same reason. If your part will be manufactured by any process that involves removing it from a shaped cavity, the AI-generated geometry is missing a fundamental requirement.</p>
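<p>The geometric cost of draft is one line of trigonometry: the wall gains (or loses) its height times the tangent of the draft angle at the base. A quick sketch:</p>

```python
import math

# Material added (or removed) at the base of a drafted wall:
# offset = wall height x tan(draft angle).
def draft_offset_mm(height_mm: float, draft_deg: float) -> float:
    return height_mm * math.tan(math.radians(draft_deg))

# A 40mm tall enclosure wall at typical draft angles:
for angle in (1.0, 2.0, 3.0):
    print(f"{angle:.0f} deg draft -> {draft_offset_mm(40, angle):.2f}mm at the base")
```

<p>Roughly 0.7mm at 1 degree on a 40mm wall. Small enough that the AI's vertical walls look plausible on screen, large enough that the tooling engineer has to rebuild every face.</p>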
<h2>Surface finish: the thing that isn't in the model</h2>
<p>Surface finish is specified as Ra (roughness average) in micrometers or microinches. A freshly machined aluminum surface might be Ra 1.6. A ground surface might be Ra 0.4. A polished surface might be Ra 0.1. These specifications affect function (sealing surfaces need to be smooth), appearance (visible surfaces on a product), and cost (smoother is more expensive).</p>
<p>Text-to-CAD models have no surface finish information. None. The geometry is mathematically smooth in the way that all B-Rep surfaces are smooth, but there's no callout telling the manufacturer which surfaces matter and which don't. Without this information, the shop either applies their default finish everywhere (wasting time on surfaces that don't matter) or calls you to ask (wasting everyone's time).</p>
<p>This is less dramatic than the internal-corner problem but equally real in terms of cost and lead time. A proper manufacturing model communicates intent on every surface. An AI-generated model communicates shape and nothing else.</p>
<h2>Sheet metal: not even the right kind of model</h2>
<p>Sheet metal manufacturing has its own CAD methodology. A proper sheet metal part is modeled with bend features, K-factors, relief cuts, and a flat pattern that can be laser-cut from sheet stock and then formed on a press brake.</p>
<p>Every text-to-CAD tool I've tested generates sheet metal parts as solid extrusions. They look like bent sheet metal. They are not bent sheet metal. There's no flat pattern. No K-factor. No bend allowance. The model is a solid block shaped like a folded piece of metal, and converting it to actual sheet metal features in SolidWorks or Fusion 360 is often harder than modeling the part from scratch.</p>
<p>A sheet metal shop that received one of these files would have to reverse-engineer the designer's intent, create a proper flat pattern, and hope the bend radii work for the material and tooling they have. That's not a manufacturing-ready deliverable. That's a puzzle.</p>
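<p>For a sense of what the shop has to reconstruct, here's the core of a flat pattern: the bend allowance. The 0.44 K-factor is a common starting point for air-bent mild steel that I'm assuming for illustration; real shops tune it per material, tooling, and press.</p>

```python
import math

# Bend allowance: the arc length of material consumed in a bend.
# BA = bend angle (radians) x (inside radius + K x thickness),
# where the K-factor locates the neutral axis through the bend.
def bend_allowance_mm(angle_deg, inside_radius_mm, thickness_mm, k=0.44):
    return math.radians(angle_deg) * (inside_radius_mm + k * thickness_mm)

# One 90-degree bend in 1.5mm stock with a 1mm inside radius:
print(f"bend allowance: {bend_allowance_mm(90, 1.0, 1.5):.2f}mm")
```

<p>None of this exists in a solid extrusion shaped like folded metal, which is why the AI's "sheet metal" output is a rendering of a part, not a plan for one.</p>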
<h2>What the AI actually produces versus what manufacturing needs</h2>
<p>Here's a side-by-side that summarizes the gap, because after enough paragraphs of bad news, a clean comparison helps:</p>
<ul>
<li><strong>What manufacturing needs:</strong> tool-access-aware geometry with appropriate internal radii, toleranced dimensions, surface finish callouts, draft angles for molded parts, flat patterns for sheet metal, feature relationships that capture design intent, and fixturing consideration.</li>
<li><strong>What text-to-CAD produces:</strong> a nominally dimensioned 3D solid with no tolerances, no surface finish, no draft, no flat pattern, no feature relationships, sharp internal corners, and no awareness of how the part will be held or cut.</li>
</ul>
<p>The gap between these two lists is not a software version away from being closed. It represents fundamental manufacturing knowledge that current AI training data doesn't encode.</p>
<h2>The rework time equation</h2>
<p>People ask me whether text-to-CAD saves time for manufactured parts. The honest answer is: it depends on how you count.</p>
<p>If you're starting from zero and need a rough concept of a bracket to discuss with your team, yes, generating a shape in thirty seconds beats spending fifteen minutes in Fusion 360. You get something to react to quickly, and that has value in the early design phase.</p>
<p>If you need a part that will actually be machined, the math gets ugly. The AI saves you maybe ten to fifteen minutes of initial geometry creation. But then you spend thirty to sixty minutes fixing the model: adding proper fillets, correcting hole sizes to standard dimensions, adjusting wall thickness, adding tolerances, adding surface finish callouts, checking tool access, and rebuilding the feature tree so the part is parametrically editable. The <a href="/posts/ai-cad-for-real-work">rework for real manufacturing</a> often takes longer than the generation saved.</p>
<p>For a one-off prototype bracket, this might still be a net positive. For a production part that will go through design reviews, DFM checks, and tolerance stack analysis, the AI-generated starting point barely moves the needle. The engineering is the hard part, and the AI doesn't do any of it.</p>
<h2>What I actually use text-to-CAD for in manufacturing contexts</h2>
<p>Despite everything I've said, I do use text-to-CAD in my manufacturing workflow. Just not for what the marketing suggests.</p>
<p>I use it for generating rough concept geometry during early design discussions. When a client describes what they need and I want to show them a shape during the meeting instead of sending something the next day, a quick AI-generated model on screen gets the conversation moving. Everyone understands it's a placeholder. It's visual communication, not engineering output.</p>
<p>I use it for generating fixture and jig concepts. These are often simple geometry that will be 3D printed or quickly machined with loose tolerances. The AI gets me 80% of the way to a usable fixture in seconds, and the remaining 20% is adjusting a few dimensions in Fusion.</p>
<p>I use it for exploring form factors before committing to detailed design. If I'm not sure whether a housing should be 80mm wide or 100mm wide, generating both options quickly and looking at them in the context of the assembly is faster than modeling each one.</p>
<p>What I don't use it for: anything that goes to a machine shop without me reworking it first. Anything with tolerances that matter. Anything that will be injection molded, cast, or formed. Any part that interacts with other parts in an assembly where dimensional relationships are critical.</p>
<h2>Where this might improve</h2>
<p>The most likely near-term improvement is DFM validation layers bolted onto text-to-CAD output. Several CAD companies are working on automated DFM checks that flag problems like sharp internal corners, thin walls, and missing draft angles before the user sees the model. This doesn't make the AI smarter about manufacturing, but it catches the worst mistakes.</p>
<p>Training on manufacturing-contextualized data would help more. If the AI learned from parts that included their manufacturing process, material, and tolerance annotations alongside the geometry, the output might start reflecting real constraints. But that data is mostly locked inside company PLM systems and rarely includes the manufacturing context in a machine-readable way.</p>
<p>Process-specific generation modes could also help. Instead of "generate a bracket," imagine "generate a bracket for 3-axis CNC milling in 6061 aluminum." That prompt carries enough context for the AI to apply appropriate constraints, if it had the training data to understand them. We're not there yet.</p>
<h2>The honest verdict</h2>
<p>Text-to-CAD output does not survive a machine shop in 2026. Not without significant rework by someone who understands the manufacturing process. The geometry comes out looking like parts but behaving like sketches. It's missing the engineering layer that separates a shape from a specification.</p>
<p>For early concept work and quick visualization, text-to-CAD is a useful speed boost. For <a href="/posts/text-to-cad-for-3d-printing">prototyping on FDM printers</a>, it's workable. For anything going through a manufacturing process that involves tooling, fixturing, or tolerances measured in hundredths of a millimeter, the human engineer is still doing all the hard work, and the AI is providing a slightly head-started shape to work from.</p>
<p>My machinist's advice was simpler than anything I've written here. "If the AI can learn about internal corner radii," he said, "that alone would cut my callback rate in half." He's not wrong. And the fact that we're still talking about something that basic tells you exactly where this technology sits relative to <a href="/posts/text-to-cad-limitations">manufacturing reality</a>. It's early. It's useful at the margins. It's nowhere near the shop floor.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD for gears: don&apos;t hold your breath</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-for-gears</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-for-gears</guid>
      <pubDate>Thu, 12 Mar 2026 00:00:00 GMT</pubDate>
      <description>Gears require involute tooth profiles, precise module values, and geometry that follows standards. Text-to-CAD tools don&apos;t understand any of that. Here&apos;s what happens when you try.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>gears</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD cannot generate accurate gears. Gear geometry requires involute tooth profiles, precise module/pitch values, root fillets, and dimensional standards (AGMA, ISO) that current AI models don&apos;t understand. AI-generated &apos;gears&apos; are cosmetic approximations unsuitable for meshing or power transmission. Use dedicated gear calculators instead.</p>
<p>A colleague showed me an AI-generated spur gear last week, spinning it in the viewport like he'd accomplished something. It had teeth. It had a bore. It looked, from three feet away and with one eye closed, like a gear. I asked him what module it was. He didn't know. I measured the tooth profile in Fusion 360. It wasn't an involute curve. It wasn't even close to an involute curve. It was a series of arcs stitched together with the confidence of someone who's seen a picture of gears but never used one. The thing couldn't have meshed with another gear any more than a saw blade could mesh with a comb.</p>
<p>That's the state of text-to-CAD for gears. The AI can produce objects that look like gears. It cannot produce gears.</p>
<h2>Why gears are fundamentally different from brackets</h2>
<p>Most of the mechanical parts that text-to-CAD handles decently are what I'd call "shape-first" geometry. A bracket's function comes from its overall form: the leg length, thickness, hole placement. The exact contour of each surface is simple. Flat faces, cylindrical holes, maybe a fillet. The AI can approximate these shapes because there's a lot of room for the geometry to be slightly off and still work.</p>
<p>Gears aren't like that. A gear's function lives in the tooth profile, and that profile is defined by a mathematical curve called an involute. The involute shape is not optional. It's not aesthetic. It's the geometry that allows two gears to transmit motion at a constant velocity ratio with smooth rolling contact. If the tooth profile is wrong, the gears don't mesh properly. They bind, chatter, wear unevenly, or simply don't transfer torque.</p>
<p>The specific involute curve for a given gear depends on the module (or diametral pitch in imperial), the number of teeth, the pressure angle (usually 20 degrees), the addendum, the dedendum, the root fillet radius, and potentially a profile shift coefficient. These are all interrelated by formulas from gear standards like ISO 21771 or AGMA 2001. You can't just eyeball them.</p>
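<p>None of these relations are exotic. A minimal sketch of the standard unshifted dimensions, using the usual ISO conventions (addendum = m, dedendum = 1.25m):</p>

```python
import math

def spur_gear_dims(module, teeth, pressure_angle_deg=20.0):
    """Standard unshifted spur gear dimensions from module and tooth count."""
    d = module * teeth                                      # pitch diameter
    return {
        "pitch_d": d,
        "tip_d": d + 2 * module,                            # outer diameter
        "root_d": d - 2 * 1.25 * module,                    # root diameter
        "base_d": d * math.cos(math.radians(pressure_angle_deg)),  # base circle
        "tooth_thickness": math.pi * module / 2,            # at the pitch circle
    }

dims = spur_gear_dims(module=2, teeth=20)
print(dims["tip_d"])                      # 44.0
print(round(dims["tooth_thickness"], 2))  # 3.14
```

<p>Those two printed values are the 44mm outer diameter and 3.14mm pitch-circle tooth thickness that the generated gears in the next section get wrong.</p>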
<p>Text-to-CAD tools don't know any of these formulas. They've seen gear shapes in training data, so they can generate something gear-shaped. But gear-shaped and gear-functional are completely different things.</p>
<h2>What the AI actually generates</h2>
<p>I tested three prompts across two tools. Simple, specific requests.</p>
<p>First: "Spur gear, module 2, 20 teeth, 14mm bore, 20-degree pressure angle, 10mm face width." This is unambiguous by gear standards. It defines exactly one correct geometry.</p>
<p>Zoo.dev gave me something with 20 teeth. The outer diameter was close, about 43mm versus the correct 44mm. But the tooth profile was wrong. The flanks were straight lines connecting circular arc tips, like a sprocket for a chain, not a gear for meshing. The root form was a simple radius that didn't follow the standard trochoid. And the tooth thickness at the pitch circle, which I measured carefully, was 3.5mm instead of the correct 3.14mm. That's more than 10% off on a dimension that determines whether two gears will mesh without binding.</p>
<p>Second: "Helical gear, module 1.5, 30 teeth, 10mm bore, 20-degree pressure angle, 15-degree helix angle." This is harder, and I expected it to fail. It did. The output had 30 teeth, but they were straight, not helical. The AI ignored the helix angle entirely and gave me a spur gear with incorrect tooth proportions. The bore was 10mm, which was nice. Small victories.</p>
<p>Third: "Bevel gear, 15 teeth, module 3." This one came back as a cone with rectangular protrusions. I stared at it for a while. It looked more like a medieval torture device than a power transmission component. The AI clearly had no training data for bevel gears and improvised badly.</p>
<h2>The involute problem</h2>
<p>An involute tooth profile is a precise mathematical curve generated by unwinding a string from a base circle. Every point on the curve has a specific radius that determines the contact mechanics. In real gear design software, the tooth profile is computed from first principles. The software doesn't approximate. It calculates.</p>
<p>Text-to-CAD tools don't calculate. They predict. They generate something that looks statistically probable based on training data. For a bracket, that's fine. For a gear, "roughly" is failure. A tooth profile 0.5mm off at the pitch point produces a gear set that binds, wears prematurely, and generates noise under load. The AI doesn't know what a base circle is, doesn't know the dedendum is 1.25 times the module, and doesn't know the root fillet has to clear the tip of the mating gear.</p>
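<p>For the curious, the curve itself is two short equations. A sketch of sampling one involute flank (the base radius here is the module-2, 20-tooth gear's: 40mm pitch diameter times cos 20°, halved, about 18.794mm):</p>

```python
import math

def involute_points(base_radius, r_outer, n=50):
    """Sample the involute of a circle from the base circle out to r_outer.

    Parametric form: x = rb*(cos t + t*sin t), y = rb*(sin t - t*cos t).
    The radius at parameter t is rb*sqrt(1 + t*t), so solve for the t
    that reaches r_outer and sample evenly up to it.
    """
    t_max = math.sqrt((r_outer / base_radius) ** 2 - 1)
    pts = []
    for i in range(n + 1):
        t = t_max * i / n
        x = base_radius * (math.cos(t) + t * math.sin(t))
        y = base_radius * (math.sin(t) - t * math.cos(t))
        pts.append((x, y))
    return pts

flank = involute_points(base_radius=18.794, r_outer=22.0)
print(len(flank))  # 51 points, every one at an exact, calculated radius
```

<p>Gear software computes exactly this, plus the trochoidal root and any profile shift. A model that predicts the flank from training data instead of calculating it cannot land on this curve except by accident.</p>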
<h2>What you should use instead</h2>
<p>For parametric gears in Fusion 360, add-ins like GF Gear Generator produce accurate involute geometry from standard parameters in about thirty seconds. SolidWorks has a Toolbox with standard gear geometry. For serious gear work, dedicated tools like KISSsoft handle strength calculations, contact analysis, and profile modifications.</p>
<h2>The "but I just need a visual" argument</h2>
<p>Some people tell me they don't need an accurate gear. They just need something that looks like a gear for a rendering, a concept assembly, or a presentation. Fine. For that use case, the AI output is acceptable. It has teeth. It's round. It exists in 3D. Put it in a rendering with motion blur and nobody will notice the tooth profile is wrong.</p>
<p>But I'd still argue you're better off using a gear generator even for visuals, because it takes thirty seconds, the result is correct, and you won't have to explain to an engineer six months later why the gear model in the assembly file has the wrong pitch circle when someone tries to use it as a reference.</p>
<p>I've seen "placeholder" geometry survive in project files for years, slowly migrating from "concept only" to "reference geometry" to "I thought this was the real model." Bad gears in a file are like bad wiring in a wall. Eventually someone will assume it's correct and base a decision on it.</p>
<h2>Where AI and gears might eventually meet</h2>
<p>The gear problem is a specific case of a broader <a href="/posts/text-to-cad-limitations">text-to-CAD limitation</a>: the AI generates geometry by appearance, not by engineering rules. A path forward might involve coupling the AI with a parametric gear kernel, where the text prompt extracts parameters and feeds them to a proper gear calculator. That makes more sense than trying to train a neural network to rediscover the involute.</p>
<p>Until that exists, my advice is simple: don't use text-to-CAD for gears. Generate one anyway and the teeth won't mesh, the gears won't run, and the only thing you'll have produced is frustration. Use a proper gear tool instead. For <a href="/posts/text-to-cad-for-mechanical-parts">simple mechanical parts</a> like brackets and mounting plates, text-to-CAD is a reasonable starting point. For gears, springs, cams, or anything where the geometry is defined by math rather than by what it looks like, stick with dedicated tools.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD for enclosures: a practical test</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-for-enclosures</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-for-enclosures</guid>
      <pubDate>Wed, 11 Mar 2026 00:00:00 GMT</pubDate>
      <description>I asked three text-to-CAD tools to generate a simple electronics enclosure. One of them came close. The other two produced geometry that would trap heat and embarrass a snap fit.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>enclosures</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD can generate basic rectangular enclosures with lids but struggles with snap fits, standoffs, ventilation slots, cable routing, and proper wall thickness. Zoo.dev produces the best enclosure geometry. For real product enclosures, AI-generated output is a starting point requiring 30-60 minutes of manual refinement.</p>
<p>I have a Raspberry Pi 4 sitting on my desk in a 3D-printed case that I designed two years ago in Fusion 360. Four standoffs, a snap lid, ventilation slots, cutouts for HDMI, USB, Ethernet, GPIO, and the SD card. It took me about three hours to model, including the snap-fit tuning that required two test prints before the lid stopped either popping off when you looked at it wrong or requiring a flathead screwdriver to remove. That case is the benchmark I used when I decided to test whether text-to-CAD tools could generate electronics enclosures.</p>
<p>The results were educational. One tool produced something I could work with. Two others produced geometry that ranged from "almost a box" to "conceptually adjacent to an enclosure." None of them got close to what I'd call production-ready, but the gap between the best and worst was large enough to be interesting.</p>
<p>I tested Zoo.dev, CADScribe, and AdamCAD. Same prompt for each: "Electronics enclosure for a Raspberry Pi 4, rectangular, 95x65x35mm, 2mm wall thickness, snap-fit lid, four M2.5 standoffs matching the Pi's mounting holes, ventilation slots on both long sides, USB and Ethernet cutouts on one short side, HDMI cutout on the opposite short side, SD card slot on the same side as HDMI."</p>
<p>That's a detailed prompt. Probably more detailed than most users would write. I wanted to give each tool the best chance of producing something useful, and I wanted to compare them on the same specification.</p>
<h2>Zoo.dev: the closest to usable</h2>
<p>Zoo.dev generated a rectangular enclosure with approximately the right overall dimensions (it was 93x64x34mm, close but not exact). The walls were fairly consistent at around 1.8mm, which is slightly thinner than the 2mm I requested but within the range where FDM printing still works fine.</p>
<p>The standoffs existed. Four cylindrical bosses inside the enclosure, positioned in a pattern that was close to the Raspberry Pi's mounting hole layout but off by about 1.5mm on two of the positions. Close enough to see the concept, not close enough to actually mount a board without drilling out the holes and hoping.</p>
<p>The ventilation slots were there, which surprised me. Eight rectangular slots on each long side, evenly spaced. They were a bit narrower than I'd have designed by hand (about 1.2mm wide, where I'd normally do 2mm for better airflow), but they existed and were correctly oriented.</p>
<p>The lid was where things got complicated. Zoo.dev generated a flat lid that sat on top of the enclosure walls, but the "snap fit" was more of a friction fit using thin tabs on the lid edges. The tab geometry was too thin to actually flex and snap, and the corresponding features on the enclosure walls were shallow enough that the lid would slide off with a mild shake. This is a hard thing to get right, even for human designers, so I'm not surprised the AI struggled. But a snap fit that doesn't snap is just a fit, and not a very good one.</p>
<p>The port cutouts were approximately located but not accurately sized. The USB opening was a single rectangular cutout where the Pi has two stacked USB-A ports and two USB-C ports. The Ethernet cutout was close but about 1mm too narrow. The HDMI cutout was positioned too high by about 2mm.</p>
<p>Overall verdict: this was the best of the three. With 30-45 minutes of cleanup in Fusion 360 (correcting standoff positions, fixing port cutout dimensions, redesigning the snap fit, and adjusting wall thickness), I had a printable enclosure that actually held a Pi. The AI saved me maybe an hour of initial modeling time compared to starting from scratch. Net time savings: 15-30 minutes. Not zero, but not the revolution the marketing promises either.</p>
<h2>CADScribe: the shape without the features</h2>
<p>CADScribe generated a rectangular box. Just a box. With a lid that was a separate flat plate. No standoffs inside. No ventilation slots. No port cutouts. The dimensions were roughly right (96x66x36mm), and the wall thickness was consistent at about 2.1mm, which was actually closer to my spec than Zoo.dev managed.</p>
<p>But the feature list from my prompt was almost entirely ignored. I got the overall enclosure shape and walls. Nothing else. No mounting features, no openings, no snap geometry, no ventilation.</p>
<p>I tried a simplified prompt: "Simple box enclosure 95x65x35mm with four holes in the bottom for M2.5 screws and a removable lid." The box came back with the lid, but the four holes were positioned in a symmetric grid pattern that had nothing to do with the Raspberry Pi's mounting layout. At least the holes existed.</p>
<p>CADScribe seems to work best for the simplest enclosure geometry: a box with a lid. Anything beyond that requires manual modeling, which is the thing you were trying to avoid. As a starting point for "I need a box shape to work from," it functions. As an enclosure design tool, it doesn't.</p>
<h2>AdamCAD: better parametrics, limited features</h2>
<p>AdamCAD took a different approach. It generated a rectangular enclosure with adjustable dimension sliders, which is useful for dialing in the size. The overall dimensions were close (94x64x34mm), and I could adjust them with the parametric controls. Wall thickness was controllable too, which is a nice touch.</p>
<p>The standoffs were generated as simple cylinders, four of them, but they were positioned based on the AI's interpretation of "matching the Pi's mounting holes," which was off by 2-3mm on three of the four positions. The parametric controls didn't extend to standoff positions, so fixing them required exporting and editing in another tool.</p>
<p>No ventilation slots. The snap fit was a basic tongue-and-groove around the lid perimeter, which would actually work for a friction fit but wouldn't snap. Port cutouts were absent. The SD card slot was ignored entirely.</p>
<p>AdamCAD's strength is in the parametric adjustment after generation, but the feature set for enclosures is limited to the basic box plus simple internal features. For a plain enclosure where you plan to add all the specific features in your own CAD tool, it gives you a dimensionally adjustable starting shape. That's something, but it's not an enclosure design tool.</p>
<h2>What all three got wrong</h2>
<p>Some failures were consistent across all three tools, which suggests they're fundamental limitations of current text-to-CAD technology rather than bugs in specific implementations.</p>
<p>Snap fits. No tool generated a functional snap-fit mechanism. This is one of the most common features in plastic enclosures and one of the hardest to generate correctly because it requires understanding material deflection, interference fits, and the manufacturing process (snap fits need draft on both sides for injection molding). For 3D printing, the tolerances are different. The AI doesn't model any of this. It generates tab-like geometry that looks like a snap fit in a render but doesn't function as one.</p>
<p>Standoff positioning. All three tools positioned standoffs based on approximate symmetry rather than the actual Raspberry Pi mounting pattern (which is 58mm x 49mm, not centered in the board outline). This is a specific-knowledge problem. The AI doesn't have a database of PCB mounting patterns, so it places standoffs where they look reasonable rather than where they need to be.</p>
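<p>The pattern isn't hard to encode once you have the datasheet numbers. From the official Pi 4 mechanical drawing (a 58 x 49mm rectangle of M2.5 holes, inset 3.5mm from one corner of the 85 x 56mm board), the standoff centers are:</p>

```python
# Raspberry Pi 4 mounting pattern, per the official mechanical drawing.
BOARD_W, BOARD_H = 85.0, 56.0   # board outline, mm
OFFSET = 3.5                    # hole center inset from the board corner
PATTERN_W, PATTERN_H = 58.0, 49.0

holes = [
    (OFFSET, OFFSET),
    (OFFSET + PATTERN_W, OFFSET),
    (OFFSET, OFFSET + PATTERN_H),
    (OFFSET + PATTERN_W, OFFSET + PATTERN_H),
]
# Centered vertically (3.5mm top and bottom) but not horizontally:
print(BOARD_W - OFFSET - PATTERN_W)  # 23.5 -- the margin on the far side
```

<p>Any standoff layout that doesn't reproduce these four coordinates relative to the board means drilling out holes and hoping, as above.</p>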
<p>Port cutouts. Even with specific descriptions, the cutout geometry didn't match real connector dimensions. The tools seem to generate generic rectangular openings rather than sized cutouts for specific connectors. This makes sense given that the AI doesn't have access to connector specifications, but it means every cutout needs manual correction.</p>
<p>Wall thickness consistency. All three tools produced enclosures where the wall thickness varied by up to 0.5mm between faces. For FDM prototyping this is tolerable. For injection molding, where uniform wall thickness is critical to prevent warping and sink marks, it would be a problem. None of the tools seemed to enforce a minimum or target wall thickness consistently, even when specified in the prompt.</p>
<p>Thermal design. My prompt mentioned ventilation slots, but none of the tools considered thermal performance beyond the literal geometry I requested. An enclosure for a Raspberry Pi needs to dissipate heat. A human designer would think about airflow paths, the relationship between intake and exhaust positions, and whether the board's hot components are near the ventilation. The AI places slots where prompted and calls it done.</p>
<h2>What I'd actually use AI-generated enclosures for</h2>
<p>After this test, I have a clear picture of where text-to-CAD fits in enclosure design.</p>
<p>Concept-phase enclosures for showing a client or team the general shape and size of a product. "Here's roughly what the thing will look like." Print it, set it on the table next to the PCB, and have a conversation about proportions, orientation, and layout. For this, even the worst tool's output is faster than modeling from scratch, and accuracy doesn't matter because the enclosure is a communication prop, not a functional part.</p>
<p>Quick fit-check enclosures for <a href="/posts/text-to-cad-for-prototyping">prototyping</a>. Generate the outer shell, print it, drop the board inside, and see if everything fits spatially. You're checking overall volume and basic keep-out zones, not mounting-hole alignment or connector positions. The 1-2mm dimensional error is tolerable because you're asking "does this board fit in a box this size," not "do the screw holes line up."</p>
<p>Starting-point geometry for real enclosure design. Generate the box shape, import into Fusion 360, and use it as the starting body to add proper standoffs, snap fits, port cutouts, and ventilation. This saves the 10-15 minutes of creating the initial shell geometry manually. Whether that's worth the workflow disruption of generating externally and importing depends on how fast you are in your CAD tool. I'm fast in Fusion, so the savings are marginal. For someone less experienced, they might be more meaningful.</p>
<p>What I wouldn't use it for: any enclosure going to production. Any enclosure that needs to seal against dust or water. Any enclosure with complex internal features like cable channels, EMI shielding, or mechanical interlocks. Any enclosure that will be injection molded. These all require the kind of detailed design work that text-to-CAD doesn't touch.</p>
<h2>The enclosure-specific prompt that works best</h2>
<p>After testing various prompt styles, I've found that simple dimensional prompts work better than feature-rich prompts for enclosure generation. The AI handles "rectangular box 100x60x40mm, 2mm walls, removable lid" much better than "rectangular box with four standoffs at 58x49mm spacing, ventilation slots, snap-fit lid, USB-C cutout at X position..."</p>
<p>The more features you add to the prompt, the more likely the AI is to get some of them wrong or ignore others entirely. A simpler prompt gives you a more reliable starting shape that you can add features to in your own CAD tool. This is counterintuitive if you expect the AI to do more work for you, but it reflects the reality of where these tools are in 2026: they're good at boxes and bad at details.</p>
<p>My recommended approach for enclosure prototyping: prompt for the box and wall thickness only. Import STEP into Fusion 360. Add your own standoffs at the correct positions from the PCB drawing. Add your own port cutouts measured from the actual connectors. Add your own ventilation, snap fits, and mounting features. Let the AI give you the shell. Do the engineering yourself.</p>
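<p>The only math worth doing at the prompt stage is the shell itself. A quick sanity check, sketched with the dimensions from the test prompt in this post, that the box you're asking for actually clears the board:</p>

```python
def cavity(outer, wall):
    """Inner cavity of a simple open-top shell: outer dims minus walls and floor."""
    w, d, h = outer
    return (w - 2 * wall, d - 2 * wall, h - wall)

inner = cavity((95.0, 65.0, 35.0), wall=2.0)
print(inner)  # (91.0, 61.0, 33.0)

# Pi 4 board is 85 x 56 mm; per-side clearance inside this shell:
print((inner[0] - 85.0) / 2, (inner[1] - 56.0) / 2)  # 3.0 2.5
```

<p>Whether that clearance also covers connector overhang and cable bend radius is your call to make; the AI won't make it.</p>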
<h2>Compared to templates and libraries</h2>
<p>It's fair to ask whether text-to-CAD is even the right tool for enclosures when parametric enclosure libraries already exist. GrabCAD, Thingiverse, and various paid template libraries have adjustable enclosure designs that you can download and modify. Some CAD tools have built-in enclosure generators with parametric controls.</p>
<p>For standardized rectangular enclosures, those libraries are often faster and more reliable than text-to-CAD. You pick a template, enter your dimensions, adjust the features, and you're done. The standoffs are properly parameterized. The snap fits are proven. The wall thickness is consistent.</p>
<p>Where text-to-CAD has an edge is in non-standard shapes. If your enclosure has an unusual profile, a tapered side, an asymmetric layout, or a form factor that doesn't match any template in the library, generating from a text description gives you more flexibility. The output quality is lower than a well-designed template, but the shape freedom is higher. For <a href="/posts/text-to-cad-for-product-design">product design</a> work where the enclosure shape is part of the brand identity, that flexibility matters.</p>
<h2>The honest verdict on AI-generated enclosures</h2>
<p>Text-to-CAD can make a box. It can put a lid on the box. It can put holes in the box. It cannot make an enclosure in the engineering sense of the word, which includes mounting features at correct positions, properly dimensioned port cutouts, functional snap fits, thermal management, EMI compliance, and the hundred other details that separate a box from a product housing.</p>
<p>For concept visualization and early prototyping, the box is enough. For everything beyond that, you're opening Fusion 360 and doing the <a href="/posts/ai-cad-for-real-work">real work</a> yourself. The AI saved you a shell. You're building the enclosure.</p>
<p>Zoo.dev is the best of the tools I tested for this use case, which tracks with my experience on other <a href="/posts/text-to-cad-guide">text-to-CAD work</a>. It produced the most complete feature set and the cleanest geometry. But "best of three" and "good enough" are different standards, and for enclosure design, none of the tools meet the second one without manual rework. That gap will likely narrow over time as training data improves and tools add enclosure-specific logic. For now, keep your parametric skills current. The snap fit still needs a human who knows what 0.3mm of interference actually feels like in PLA.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD for 3D printing: what works and what breaks</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-for-3d-printing</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-for-3d-printing</guid>
      <pubDate>Tue, 10 Mar 2026 00:00:00 GMT</pubDate>
      <description>Text-to-CAD can generate models that print. Sometimes. The wall thickness is usually wrong, supports are your problem, and the tolerances are optimistic. But for quick prototypes, it&apos;s not bad.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>3d-printing</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD can generate 3D-printable geometry from text prompts, typically exported as STL. Simple parts (brackets, boxes, mounts) print well. Issues include incorrect wall thickness, missing fillets for printability, poor overhang awareness, and optimistic tolerances. Best used for quick FDM prototypes, not production prints.</p>
<p>Last Tuesday I printed a bracket that Zoo.dev generated from a one-line prompt. Peeled it off the build plate, held it up to the thing it was supposed to hold, and it fit. Not perfectly, there was about a millimeter of play on the mounting holes, but it fit. I stood there for a second feeling like the future had arrived. Then I printed the second part, an enclosure with a snap lid, and the overhang collapsed into spaghetti because the AI put a 70-degree unsupported ceiling in the middle of the box. Back to the present.</p>
<p>That's been my experience with <a href="/posts/text-to-cad-guide">text-to-CAD</a> for 3D printing in a nutshell. The simple stuff works surprisingly well. The slightly less simple stuff fails in ways that suggest the AI has never watched a print fail, which of course it hasn't. It's generating geometry from training data, not from bitter experience with a clogged nozzle at 2 AM.</p>
<p>I've been 3D printing parts for over a decade, starting with a janky RepRap I built from eBay parts and ending up with a Bambu Lab that makes me feel like I wasted years on calibration. And I've been testing text-to-CAD tools for months. This is what I know about the intersection: where the geometry prints cleanly, where it fails, and what you need to fix before you hit slice.</p>
<h2>The parts that actually print</h2>
<p>Simple prismatic geometry is the sweet spot. Boxes. Brackets. Plates with holes. Standoffs. If the part is basically a collection of extrusions and cuts with some fillets, text-to-CAD tools can generate something printable more often than not.</p>
<p>I keep a running list of test parts I've printed from AI-generated geometry. The success rate for simple parts, things like L-brackets with two mounting holes, or rectangular trays with a uniform wall, is around 70-80%. Meaning the STL comes off the tool, goes into PrusaSlicer or Bambu Studio, slices without warnings, and prints into an object that roughly matches the prompt. Not always dimensionally perfect, but physically real and vaguely functional.</p>
<p>This works because 3D printing, especially FDM, is forgiving. A wall that's 1.8mm instead of 2mm will still print. A hole that's 5.7mm instead of 6mm will still exist, even if your M6 bolt complains about it. The process doesn't care about sharp internal corners the way a CNC cutter does. It doesn't need draft angles. It doesn't need the geometry to be anything more than a watertight solid, and text-to-CAD tools are generally good at producing watertight output.</p>
<p>Zoo.dev in particular exports clean STL files that slice without repair in every slicer I've tested. That's not nothing. I've gotten STL files from human CAD users that needed mesh repair before printing, so the AI is at least clearing that bar.</p>
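<p>The watertight check a slicer runs is simple enough to sketch in a few lines of stdlib Python. This is a toy version: real tools parse binary STL into a triangle soup first, which I represent here as hand-built vertex tuples. The idea is edge pairing: in a closed mesh with consistent winding, every directed edge is matched by exactly one reversed copy on a neighboring triangle.</p>

```python
from collections import Counter

def is_watertight(triangles):
    """Edge-pairing check: a closed, consistently wound mesh has every
    directed edge matched by exactly one reversed copy on a neighbor."""
    edges = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            edges[e] += 1
    return all(
        edges[(u, v)] == 1 and edges[(v, u)] == 1
        for (u, v) in edges
    )

# A tetrahedron with outward-facing, consistent winding: watertight.
A, B, C, D = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)
tet = [(A, C, B), (A, B, D), (B, C, D), (A, D, C)]
print(is_watertight(tet))        # True
print(is_watertight(tet[:-1]))   # False: open boundary where a face is missing
```

<p>If you'd rather not roll your own, a library like trimesh exposes the same answer as a property on a loaded mesh.</p>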
<h2>Wall thickness: the first thing to check</h2>
<p>Every text-to-CAD tool I've tested gets wall thickness wrong at least some of the time. Not catastrophically wrong, usually. But wrong enough that you need to check before you print.</p>
<p>The problem shows up most often on enclosures and housing-type parts. I asked Zoo.dev for a rectangular electronics enclosure, 80 by 50 by 30mm, and got walls that varied between 1.2mm and 2.4mm depending on which face you measured. The prompt didn't specify wall thickness, so the AI guessed. Its guess was inconsistent and, on the thin side, below the minimum for reliable FDM printing with a 0.4mm nozzle.</p>
<p>This matters because thin walls cause problems. Below about 1.2mm on most FDM setups, you get underextrusion, gaps, and weak spots. Above 3mm, you start wasting material and print time. The sweet spot for FDM is usually 1.6 to 2.4mm for functional prints, and text-to-CAD tools don't seem to know that.</p>
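<p>One habit that helps when fixing AI walls in CAD: snap the thickness to a whole number of extrusion line widths, since partial perimeters are where underextrusion gaps come from. A minimal sketch, using the thresholds above; the 0.45mm line width and three-perimeter floor are my defaults for a 0.4mm nozzle, not universal values.</p>

```python
def fdm_wall(requested_mm, line_width=0.45, min_perimeters=3):
    """Snap a requested wall to a whole number of perimeter lines, with a
    floor that keeps it printable (assumes a 0.4 mm nozzle; both defaults
    are my habits, not standards)."""
    n = max(round(requested_mm / line_width), min_perimeters)
    return round(n * line_width, 2)

print(fdm_wall(2.0))   # 1.8  -> four clean perimeters
print(fdm_wall(0.8))   # 1.35 -> floored to three perimeters
```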
<p>The fix is easy in theory: specify wall thickness in your prompt. "Rectangular enclosure, 80x50x30mm, 2mm wall thickness" gives better results than "rectangular enclosure." But even with specific prompts, I've seen the AI produce walls that don't match the requested thickness. My rule: always measure the wall in the STEP file before exporting STL. It takes thirty seconds and saves a failed print.</p>
<h2>Overhangs and supports: the AI doesn't think about gravity</h2>
<p>This is the big one. Text-to-CAD tools generate geometry in a zero-gravity viewport where everything floats and nothing sags. They have no concept of build orientation, layer-by-layer deposition, or what happens when you try to print a 60-degree overhang without support.</p>
<p>I tested this with a simple request: a shelf bracket with a diagonal brace. The AI generated a perfectly reasonable-looking bracket with a clean 45-degree strut connecting the horizontal arm to the vertical plate. In principle, 45 degrees is right on the edge of what FDM can do without supports. In practice, the AI also added a small horizontal tab on the underside of the strut that turned a maybe-printable overhang into a definitely-needs-support overhang. The tab was about 5mm wide and entirely unsupported. Classic case of geometry that makes sense structurally but ignores the printing process entirely.</p>
<p>Most of the AI-generated parts I've printed needed support material removed. That's not unusual for FDM printing generally, but the issue is that text-to-CAD tools don't optimize for minimal support. A human designer who knows the part will be printed tends to round the underside of overhangs, add chamfers instead of flat shelves, orient features to be self-supporting. The AI does none of this because it doesn't know the part will be printed. It's generating shapes, not print-ready geometry.</p>
<p>For simple parts, this is manageable. You slice the model, add supports in the slicer, and deal with the cleanup. For complex parts with internal overhangs or enclosed cavities, it can make the part unprintable without redesign. I had one AI-generated part with an internal shelf that would have required support material inside a box with no way to remove it. A human would never design that for FDM. The AI did it without hesitation.</p>
<h2>Dimensional accuracy: close enough for prototyping</h2>
<p>I measured forty-something AI-generated parts after printing, comparing the printed dimensions to what the prompt requested. On simple features like overall length, width, and height, the AI-to-STL dimensional error averaged about 2-3%, and the print process added another 0.2-0.5mm of dimensional variation depending on the material and printer. So a 50mm dimension typically ended up somewhere between 48mm and 51mm as a printed part.</p>
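<p>Those two error sources stack. A quick worst-case calculation, using the numbers I measured above (the defaults take the pessimistic end of each range):</p>

```python
def printed_range(nominal_mm, ai_err=0.03, print_var=0.5):
    """Worst-case stack of the two error sources measured above: ~2-3%
    AI geometry error plus 0.2-0.5 mm of print-process variation."""
    slop = nominal_mm * ai_err + print_var
    return (round(nominal_mm - slop, 2), round(nominal_mm + slop, 2))

print(printed_range(50))   # (48.0, 52.0)
```

<p>That bracket is slightly wider than what I actually measured, which is what you'd expect from a worst-case stack: most parts land inside it.</p>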
<p>For prototyping, this is usually fine. You're checking form, fit, and basic function. You're not machining bearing bores. The cumulative error from AI geometry plus FDM printing tolerance is rarely more than a millimeter on features under 100mm, and that's within the range where you can evaluate a design concept and decide what to fix in the next iteration.</p>
<p>For production printing, where you need parts to mate with specific hardware, mount in specific locations, or clear specific keep-out zones, the dimensional drift matters. A 6mm hole that prints at 5.6mm doesn't fit the M6 bolt. A 20mm standoff that ends up at 19.4mm leaves a gap in the assembly. These aren't failures of the printing process. They're the compounding of AI dimensional approximation plus print shrinkage plus process tolerance, and the result is parts that need post-processing or reprinting.</p>
<p>My workflow for anything dimensionally critical: generate the shape with text-to-CAD, import STEP into Fusion 360, measure and correct the critical dimensions, add the tolerances I need, and then export STL from Fusion. The AI saves me the initial modeling time. The measurement-and-fix step is non-negotiable.</p>
<h2>STL export: mostly fine, occasionally cursed</h2>
<p>The good news is that text-to-CAD tools generally output clean STL files. Zoo.dev's STL exports have been consistently watertight in my testing. I run every file through the slicer's analysis tool, and the vast majority pass without mesh errors, non-manifold edges, or inverted normals.</p>
<p>The bad news is that resolution can be an issue. Some tools export STL with a triangle density that's either too low (visible faceting on curved surfaces) or too high (50MB files for a simple bracket). Zoo.dev lets you control the mesh density through the API, which helps. Other tools give you what they give you.</p>
<p>For FDM printing, this rarely matters. The layer height masks most faceting. For SLA printing, where surface quality is visible at the layer level, a low-resolution STL mesh can show up as visible flat spots on curved surfaces. I've had a couple of prints where the triangulation was coarse enough to see facets on what should have been a smooth fillet. The fix is to export at higher resolution from the source, or to re-export from a proper CAD tool after importing the STEP file.</p>
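<p>The faceting math is worth knowing when you re-export. The visible error of an n-segment polygon approximating a circle is the sagitta, R(1 - cos(π/n)), so you can work backward from a deviation budget to a segment count. A small sketch of that relationship (the 0.01mm budget is an arbitrary example, not a standard):</p>

```python
import math

def sagitta(radius, segments):
    """Chordal deviation of an n-segment polygon approximating a circle."""
    return radius * (1 - math.cos(math.pi / segments))

def segments_for(radius, max_dev):
    """Segments needed to keep faceting below max_dev on a curve of this radius."""
    return math.ceil(math.pi / math.acos(1 - max_dev / radius))

# A 10 mm fillet needs ~70 facets to stay under 0.01 mm of deviation.
n = segments_for(10, 0.01)
print(n, sagitta(10, n))
```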
<h2>SLA and SLS: less forgiving, less tested</h2>
<p>Most of my text-to-CAD printing tests have been FDM because that's what most people use and that's where the forgiveness is highest. But I've printed a few AI-generated parts on SLA and SLS, and the story is different.</p>
<p>SLA (resin) printing is more accurate than FDM but less tolerant of certain geometry problems. Thin sections that hold up on FDM can fail on SLA because the suction forces during peel are higher on large flat areas. Internal cavities need drain holes or you trap uncured resin. The AI doesn't add drain holes because it doesn't know the part is being resin-printed.</p>
<p>I printed a small AI-generated housing on my resin printer and it looked great until I realized the AI had created a nearly enclosed box with no drain path. I caught it before printing by checking the model in Chitubox, but a less careful user would have ended up with a part full of trapped liquid resin. That's the kind of process-specific knowledge that text-to-CAD tools completely lack.</p>
<p>SLS is even more specialized. The powder bed is more forgiving about overhangs, but wall thickness minimums are stricter for nylon, and the mechanical properties depend heavily on feature orientation relative to the build. None of this is encoded in AI-generated geometry.</p>
<h2>What to actually check before you print</h2>
<p>After months of testing, I've developed a checklist for AI-generated parts going to the printer:</p>
<ul>
<li>Open the STEP file in Fusion 360 or your preferred tool. Don't trust the STL preview alone.</li>
<li>Measure wall thickness on every face. Flag anything below 1.2mm for FDM or 0.6mm for SLA.</li>
<li>Look for unsupported overhangs above 45 degrees. Decide if you can add supports or if the geometry needs redesign.</li>
<li>Check for enclosed cavities. Add drain holes for SLA. Add support access for FDM if needed.</li>
<li>Measure critical dimensions against your prompt. Correct anything that's off.</li>
<li>Verify hole diameters. AI-generated holes are consistently undersized in my testing, by about 0.2 to 0.5mm.</li>
<li>Check the STL in your slicer for mesh errors. Run a repair if needed (though this is rarely necessary with Zoo.dev output).</li>
<li>Think about orientation. The AI doesn't know which way is up on the build plate. You might need to rotate the model for better printability.</li>
</ul>
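<p>The measurable parts of that checklist can be scripted once you've pulled dimensions out of the STEP file. A sketch of what I mean, with the thresholds from this post baked in; the function name and input shapes are illustrative, not from any real tool:</p>

```python
def preflight(walls_mm, overhangs_deg, holes_mm, process="FDM"):
    """Flag the checklist items a script can catch. Thresholds are the ones
    from the post: 1.2 mm walls for FDM, 0.6 mm for SLA, 45-degree overhangs,
    and the 0.2-0.5 mm hole undersize I keep measuring."""
    min_wall = 1.2 if process == "FDM" else 0.6
    warnings = []
    for w in walls_mm:
        if w < min_wall:
            warnings.append(f"wall {w} mm is below the {min_wall} mm minimum")
    for a in overhangs_deg:
        if a > 45:
            warnings.append(f"{a} deg overhang needs support or a redesign")
    for name, measured, nominal in holes_mm:
        if nominal - measured > 0.2:
            warnings.append(f"{name}: {measured} mm vs {nominal} mm nominal, drill or ream")
    return warnings

report = preflight(
    walls_mm=[2.0, 1.0],
    overhangs_deg=[30, 60],
    holes_mm=[("M6 clearance", 5.6, 6.0)],
)
print(len(report))  # 3 warnings
```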
<p>This list sounds like a lot of work, and it is compared to just hitting "slice and print." But it's less work than modeling from scratch, and it's a lot less work than fixing a failed print. The checking takes maybe five minutes. The printing takes hours. Spending those five minutes is the difference between text-to-CAD being useful and text-to-CAD being a waste of filament.</p>
<h2>FDM materials: the AI doesn't care, but you should</h2>
<p>Text-to-CAD tools generate geometry without material awareness. The output is the same whether you're printing in PLA, PETG, ABS, nylon, or TPU. This seems obvious, but it matters because material choice affects what geometry is printable.</p>
<p>ABS warps on large flat surfaces, so an AI-generated part with a big flat base might curl off the bed. TPU is flexible, so thin walls that hold up in PLA will flex and deform. Nylon absorbs moisture and has different bridging characteristics. PETG strings more, which means small details and holes might need different post-processing.</p>
<p>None of this is the AI's fault, exactly. A human modeling a part for 3D printing doesn't usually embed material properties in the geometry either. But a human who knows the part will be printed in ABS adds mouse ears to the corners or uses a brim. A human who knows it's TPU thickens the walls. The AI produces one shape for all materials, and the user has to adapt.</p>
<p>For PLA prototyping, which is where most text-to-CAD output ends up, this mostly doesn't matter. PLA is forgiving. It prints at low temperatures, doesn't warp much, bridges reasonably, and tolerates imperfect geometry with a shrug. That's why the "text-to-CAD for 3D printing" story is really a "text-to-CAD for PLA prototyping" story. The further you move from PLA on a desktop FDM printer, the more the AI's lack of process awareness becomes a problem.</p>
<h2>The comparison nobody wants to make</h2>
<p>Here's the thing. If you already know how to model in Fusion 360, SolidWorks, or even Onshape, text-to-CAD for 3D printing doesn't save you much time on individual parts. I can sketch, extrude, and fillet a simple bracket in Fusion faster than I can write a good prompt, wait for generation, download the STEP, import it, check the dimensions, fix the walls, and re-export.</p>
<p>Where it saves time is when you need lots of variations quickly. Five different bracket shapes. Three enclosure options. A stack of standoffs with different heights. Generating those with text prompts is faster than modeling each from scratch, even if each one needs a five-minute checkup in Fusion afterward.</p>
<p>It also saves time if you don't know CAD at all. And that, honestly, might be the bigger story. A hardware tinkerer who needs a mount for a Raspberry Pi and a sensor board can describe it in English, get an STL, and print it. The dimensions might be off by a millimeter. The walls might need a little thickening. But the part exists, and it didn't require learning sketch constraints or feature trees. For the maker community, that's a real shift.</p>
<h2>Where it actually fits</h2>
<p>Text-to-CAD for 3D printing works best when your expectations match the technology's actual capabilities. It's a first-draft generator for printable geometry. Not a print-optimization tool. Not a slicer replacement. Not a substitute for understanding your printer and material.</p>
<p>Use it for <a href="/posts/text-to-cad-for-prototyping">rapid prototyping</a> where speed matters more than precision. Use it for concept models you'll iterate on. Use it to get a shape on the build plate fast, evaluate it in your hand, and then decide whether to refine it in real CAD or prompt another version.</p>
<p>Don't use it for production prints where dimensional accuracy matters. Don't trust the wall thickness without checking. Don't assume it's thought about overhangs, because it hasn't. Don't print the first output blindly. Look at it in a slicer first.</p>
<p>The gap between "geometry that exists" and "geometry that prints well" is real, and in 2026, the AI is responsible for the first part and you're responsible for the second. That's not a damning verdict. It's an honest one. For quick FDM prototypes of simple parts, text-to-CAD is faster than starting from scratch and good enough to learn from. For anything beyond that, keep your <a href="/posts/ai-cad-for-real-work">real CAD tools</a> warm. The printer doesn't care who modeled the part. It only cares whether the geometry makes sense, and right now, that judgment is still yours.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AI CAD for sheet metal: flat patterns and bend allowances</title>
      <link>https://blog.texocad.ai/posts/ai-cad-for-sheet-metal</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-cad-for-sheet-metal</guid>
      <pubDate>Mon, 09 Mar 2026 00:00:00 GMT</pubDate>
      <description>Sheet metal design has specific rules about bend radii, K-factors, flat patterns, and relief cuts. AI-generated CAD knows none of this. Here&apos;s why that matters.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>sheet-metal</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI-generated CAD models cannot handle sheet metal design. Text-to-CAD tools don&apos;t understand bend radii, K-factors, bend allowances, minimum flange lengths, relief cuts, or flat pattern unfolding. AI might generate something that looks like sheet metal in 3D but can&apos;t be fabricated. Use your CAD tool&apos;s sheet metal environment instead.</p>
<p>I spent a Friday afternoon last month trying to unfold an AI-generated bracket in Fusion 360's sheet metal environment. The bracket looked correct in 3D. Two flanges, a base, a couple of holes. It even had what appeared to be bend lines at the junctions. I right-clicked, tried to convert it to a sheet metal body, and Fusion gave me the digital equivalent of a confused stare. The solid body wasn't sheet metal. It was a solid extrusion shaped like a piece of bent metal, which is a very different thing. No bend features. No defined sheet thickness as a driving parameter. No K-factor. No bend relief. Just a solid lump that happened to look like something you could bend on a brake, except you couldn't, because it didn't know it was supposed to be bent.</p>
<p>That experience captures the fundamental problem with AI-generated CAD for sheet metal work. The AI can generate shapes that look like sheet metal parts. It cannot generate sheet metal parts. The distinction matters because sheet metal design is not about the 3D shape. It's about the relationship between the 3D folded part and the 2D flat pattern, and everything that connects them: bend radii, bend allowances, K-factors, relief cuts, and the physical behavior of real material being forced around a punch nose.</p>
<h2>What makes sheet metal design different</h2>
<p>In most CAD workflows, you create a 3D shape and figure out how to manufacture it. Sheet metal reverses this. You start with a flat sheet. You cut it to a specific profile. Then you bend it. The 3D shape is the result of the bending process, not the starting point.</p>
<p>The flat pattern drives the design. Every bend consumes material. The outer surface stretches, the inner compresses, and somewhere between them is a neutral axis. The K-factor locates that axis relative to the material thickness: around 0.33 to 0.44 for mild steel, different for aluminum and stainless. Get it wrong and your flat pattern is the wrong size. Flanges end up too long or too short, and holes don't line up.</p>
<p>Real sheet metal CAD tools handle all of this automatically. You specify thickness, bend radius, and K-factor. The software calculates bend allowance, generates the flat pattern, and keeps everything synchronized. Text-to-CAD doesn't do any of this.</p>
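<p>The math the sheet metal environment does for you is not mysterious, which makes its absence in AI output more glaring. Bend allowance is the arc length of the neutral axis through the bend, and the flat blank is the flange lengths (measured to the bend tangent lines) plus one allowance per bend. A minimal sketch, with the K-factor range from above:</p>

```python
import math

def bend_allowance(angle_deg, inner_radius, thickness, k=0.44):
    """Neutral-axis arc length through a bend: BA = angle_rad * (R + K*T).
    K locates the neutral axis, ~0.33-0.44 for mild steel."""
    return math.radians(angle_deg) * (inner_radius + k * thickness)

def flat_length(flange_lengths, bends):
    """Flat blank length: flanges measured to the bend tangent lines,
    plus one bend allowance per (angle, radius, thickness) bend."""
    return sum(flange_lengths) + sum(bend_allowance(*b) for b in bends)

# 90-degree bend in 2 mm mild steel with a 3 mm inner radius:
ba = bend_allowance(90, 3, 2)
print(round(ba, 2))  # ~6.09 mm of material consumed by the bend
```

<p>Get K wrong by a few hundredths and every bend is off by a fraction of a millimeter, which compounds across a multi-bend part. That's the error the AI can't even make, because it never computes the allowance at all.</p>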
<h2>What the AI actually generates</h2>
<p>I tested five sheet metal prompts across two tools. Specific, clear descriptions of parts that any sheet metal designer would recognize.</p>
<p>"L-bracket, 2mm mild steel, 50mm base, 40mm flange, bend radius 3mm, two 5mm holes in each leg." The AI gave me a solid body. Not a sheet metal body. The bend area had a 3mm external radius, but internally it was a sharp corner with no proper bend geometry. The material thickness measured 2mm on the flat sections but the bend zone was thicker, about 2.4mm, because the AI blended the inner and outer surfaces without understanding that sheet metal has constant thickness through the bend. When I tried to create a flat pattern, Fusion 360 couldn't unfold it. The geometry wasn't defined as a bend.</p>
<p>"U-channel, 1.5mm aluminum, 80mm wide, 30mm flanges, 2mm bend radius." Same problem. Solid body. The flanges were 30mm as requested, but from the outside edge to the center of the bend, not from the bend tangent line to the end of the flange. That's a measurement difference that matters when your flat blank gets laser-cut to the millimeter.</p>
<p>"Mounting plate with two 90-degree bent tabs, 3mm steel, 100mm by 60mm base, tabs 20mm tall on the short edges." The AI generated a base plate with two vertical walls. No bend features. No bend relief at the junction between the tabs and the base. The corners where the tabs meet the base were sharp inside and radiused outside, with no consistency in the radius. A press brake operator would look at this and not know where to begin, because the part doesn't define any bending information.</p>
<h2>The flat pattern problem</h2>
<p>The flat pattern is the whole point of sheet metal CAD. You need it to order material, to program the laser or turret punch, and to verify that the part will fold correctly. Without a flat pattern, you don't have a manufacturable design.</p>
<p>AI-generated shapes cannot be unfolded because they were never folded. There's no bend line, no defined bend angle, no bend radius as a property. In Fusion 360, I can sometimes reconstruct the sheet metal definition by manually identifying faces and defining them as flanges. On simple parts with one or two bends, this works. On anything more complex, it takes longer than modeling from scratch in the sheet metal environment.</p>
<h2>Bend relief and corner conditions</h2>
<p>When two bends meet at a corner, the material at the intersection needs somewhere to go. Without a relief cut, the metal tears during bending. The relief can be a rectangular notch, a round hole, or a specific tear-drop shape, depending on the corner condition and the required strength.</p>
<p>AI-generated sheet metal shapes have no relief cuts. The bends just meet at a corner as if the material will politely rearrange itself. On a real press brake, that corner would either tear, bulge, or both. The result is a part that doesn't match the 3D model, with distorted corners and potential cracks that compromise structural integrity.</p>
<p>Real sheet metal environments add relief cuts automatically based on the corner type and material properties. The software knows that two perpendicular bends sharing a corner need material removed at the intersection. The AI doesn't know this because it doesn't model bending as a physical process. It models the result of bending as a shape, without any of the manufacturing intelligence that makes the shape producible.</p>
<h2>Minimum flange length and bend feasibility</h2>
<p>A press brake has physical limitations. The minimum flange length depends on the die opening, which depends on material thickness and bend radius. For 2mm mild steel with a 3mm bend radius, the minimum flange is roughly 10 to 12mm. AI-generated parts sometimes include flanges that are too short to bend. A 5mm flange on a 3mm sheet is physically impossible on most brakes.</p>
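<p>The rule of thumb behind that number can be sketched in two lines. Both factors here are my shop-floor habits for air bending, not a standard: die opening around eight times thickness, and the shortest flange that stays on the die around 0.7 times the die opening.</p>

```python
def min_flange(thickness, die_factor=8.0, flange_factor=0.7):
    """Rule-of-thumb air-bending limit (my numbers, not a standard):
    die opening ~8x thickness; minimum flange ~0.7x the die opening."""
    return flange_factor * die_factor * thickness

print(min_flange(2.0))        # 11.2 -> matches the ~10-12 mm figure for 2 mm steel
print(min_flange(3.0) > 5.0)  # True: the 5 mm flange on 3 mm sheet can't bend
```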
<p>The AI also doesn't check for bend interference. Two flanges that bend toward each other can collide if they're too close or too tall. The bending sequence matters, and it's a planning problem that sheet metal designers think about during modeling. The AI doesn't think about it at all.</p>
<h2>What you should do instead</h2>
<p>If you need sheet metal parts, use your CAD tool's sheet metal environment. In Fusion 360, switch to the Sheet Metal tab, specify your material thickness and default bend radius, and start designing. Every flange gets a proper bend with a proper radius. The flat pattern updates automatically. Relief cuts appear at corners.</p>
<p>SolidWorks, Solid Edge, and even FreeCAD all have capable sheet metal environments. The time you'd spend generating a shape with text-to-CAD, discovering it's not sheet metal, trying to convert it, failing, and then modeling from scratch is always more than just starting in the sheet metal environment. Always.</p>
<h2>The one place AI might help</h2>
<p>There's an argument for using text-to-CAD to explore the general form of a sheet metal part before modeling it properly. But you'd immediately discard the AI geometry and remodel from scratch in the sheet metal environment, so the "savings" amount to maybe five minutes of not having to imagine a shape in your head. For comparison, sketching three options on a napkin takes ninety seconds.</p>
<h2>The bottom line</h2>
<p>Sheet metal is a process-driven discipline. The flat pattern, the bending sequence, the material behavior, the tooling constraints: these aren't afterthoughts bolted onto the design. They're the design. Every dimension in a sheet metal part traces back to a flat blank that gets cut and bent in a specific order with specific tools.</p>
<p>AI-generated CAD can't participate in that process because it doesn't model bending. It doesn't know what a K-factor is. It doesn't compute bend allowances. It doesn't generate flat patterns. It doesn't add relief cuts. It produces solid bodies that look like bent metal and contain none of the manufacturing intelligence that makes sheet metal parts producible.</p>
<p>If someone asks me whether AI CAD works for sheet metal, my answer is the same every time: use the sheet metal environment in your actual CAD tool. It was purpose-built for this exact problem, it handles all the manufacturing math automatically, and it will save you from the experience of staring at an AI-generated bracket on a Friday afternoon, wondering why it won't unfold, while your coffee goes cold and the press brake operator texts you asking where the flat pattern is.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AI CAD for CNC machining: output quality and DFM</title>
      <link>https://blog.texocad.ai/posts/ai-cad-for-cnc-machining</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-cad-for-cnc-machining</guid>
      <pubDate>Sun, 08 Mar 2026 00:00:00 GMT</pubDate>
      <description>CNC machining demands tool access, reasonable radii, proper tolerances, and geometry that doesn&apos;t make a programmer swear. AI CAD output gets about half of that right.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>cnc</category>
      <category>machining</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI-generated CAD models are not CNC-ready without manual editing. Common issues: internal corners with zero radius (impossible for end mills), no consideration for tool access, missing tolerances, incorrect hole depths, and geometry that ignores fixture requirements. Budget 30-60 minutes of DFM cleanup per AI-generated part.</p>
<p>The last time I sent an AI-generated STEP file to my machinist, he called me within the hour. Not to ask about material or finish. To ask if I was feeling okay. The part had four internal pockets with perfectly sharp corners, a wall section thinner than the end mill that would need to cut next to it, and two holes that dead-ended into a feature from the other side with about 0.1mm of material between them. "I could make this," he said, "if I also had a laser, an EDM machine, and no self-respect." He was being generous.</p>
<p>That was six months ago, and I've since run a lot more AI-generated geometry through the same mental filter a CNC programmer uses when they open a new file. The results are consistent in their inconsistency. Some features are fine. Some are impossible. And the AI never tells you which is which, because it doesn't know. It generated a shape. Whether that shape can survive contact with a rotating cutter is someone else's problem.</p>
<h2>What CNC machining actually needs from a model</h2>
<p>Before getting into what the AI gets wrong, it helps to spell out what a CNC-ready model actually requires. Not every engineer thinks about this, especially if they've mostly done 3D printing, where the geometry rules are more forgiving.</p>
<p>A CNC-machinable part needs tool access to every feature. If a cutter can't physically reach a surface, that surface doesn't get cut. This means considering the cutter's diameter, its length, the holder clearance, and the fixture. Internal corners need a radius at least as large as the cutter that will machine them, usually a bit larger to avoid full-width engagement that causes chatter. Walls need to be thick enough to resist the cutting forces without deflecting. Holes need depths that standard drills can reach. Features need to be positioned so the part can be fixtured in a vise or on a table without the cutter colliding with the clamps.</p>
<p>Then there's the engineering data. Tolerances on critical dimensions. Surface finish callouts where they matter. Thread specifications. Datum references. GD&#x26;T on features that control fit and function. A model without this information is a shape, not a specification. A machinist can cut a shape, but they can't guarantee it'll work in your assembly unless you tell them what matters and how much variation is acceptable.</p>
<p>AI-generated CAD provides the shape. It provides none of the engineering data. And the shape itself often violates basic machining constraints.</p>
<h2>The sharp corner problem</h2>
<p>This is the single most common DFM violation in AI-generated geometry, and it's so consistent that I've started thinking of it as the AI's signature move. Every internal corner comes out with zero radius. Every pocket, every slot, every L-shaped cutout. Perfectly sharp, perfectly impossible.</p>
<p>An end mill is round. The smallest radius it can leave in a corner equals its own radius. A 6mm end mill leaves a 3mm corner radius. A 3mm end mill leaves a 1.5mm radius. You can go smaller, but smaller cutters are slower, more fragile, and more expensive to run. A zero-radius internal corner requires EDM or some other non-traditional process, which means a different machine, a different shop, and a different price.</p>
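<p>When I add those fillets myself, I size them slightly larger than the tool radius so the cutter never sits at full-width engagement in the corner. A two-line sketch of that habit; the 10% margin is my preference, not a standard:</p>

```python
def pocket_corner_radius(tool_diameter, margin=1.1):
    """Smallest sensible internal corner radius for a given end mill:
    the tool radius plus ~10% (the margin is my habit, not a standard)."""
    return round(tool_diameter / 2 * margin, 2)

for d in (6, 3):
    print(d, "mm end mill ->", pocket_corner_radius(d), "mm corner radius")
```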
<p>I measured internal corners on fifteen AI-generated parts from three different tools. Every single one had zero-radius internal corners. Not one tool added corner radii to pockets or slots. Not one tool produced geometry that acknowledged the existence of rotating cutters. This is the kind of thing that a first-year manufacturing student learns in week two, and the AI has no concept of it.</p>
<p>The fix is easy in Fusion 360. Select the edges, add a fillet, pick a radius that matches your expected tooling. Five minutes per part, maybe less. But the fact that you have to do it every time, on every AI-generated part, tells you something about the gap between generating geometry and generating machinable geometry.</p>
<h2>Wall thickness and cutter deflection</h2>
<p>A thin wall next to a deep pocket is a classic machining headache. The cutter pushes against the wall during the cut, and if the wall is too thin relative to its height, it deflects. The result is a wall that tapers where it should be straight, thickest near the flexible free edge that springs away from the cutter, with a surface finish that looks like it was machined during an earthquake.</p>
<p>The general rule of thumb is wall thickness should be at least one-tenth of the wall height for aluminum, more for softer materials or taller walls. A 20mm tall wall should be at least 2mm thick, and even that will show some deflection with aggressive cutting parameters.</p>
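<p>That rule is trivial to encode, which is part of what makes its absence from AI output so frustrating. A sketch of the check, using the one-tenth factor from above for aluminum:</p>

```python
def min_machined_wall(height_mm, ratio=10):
    """One-tenth rule: wall thickness >= height / 10 for aluminum.
    Use a smaller ratio (thicker wall) for softer materials."""
    return height_mm / ratio

print(min_machined_wall(20))  # 2.0 mm, and expect some deflection even then
```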
<p>AI-generated geometry doesn't follow this rule because it doesn't know this rule exists. I've seen AI output with 0.5mm walls adjacent to 15mm-deep pockets. The AI made the outer shape match the prompt and let the pocket eat into whatever material was left. In the viewport, it looks fine. On a CNC machine, that wall is vibrating like a tuning fork halfway through the first roughing pass.</p>
<p>I once had to explain this to someone who was excited about their AI-generated part. They'd asked for a thin-walled enclosure and the AI delivered exactly what they asked for, walls so thin you could practically read through them. "But I asked for 0.8mm walls and it gave me 0.8mm walls," they said. Yes. And your machinist will ask for 2mm walls and an explanation for why the original designer hates machinists.</p>
<h2>Hole geometry issues</h2>
<p>AI-generated holes have a collection of problems that stack up. The AI often generates blind holes with flat bottoms, even though a standard drill point is 118 or 135 degrees and leaves a cone at the bottom of the hole. A flat-bottomed blind hole requires a second operation with an end mill. The AI doesn't know this.</p>
<p>Position is the second issue. I covered dimensional accuracy in the <a href="/posts/is-text-to-cad-accurate">text-to-CAD accuracy post</a>, but for CNC work, hole position tolerance matters most. If two holes are supposed to be 50mm apart for a bolt pattern and the AI places them 49.3mm apart, the bolts don't fit. A CNC-machined hole in aluminum is where it is. You can't stretch it.</p>
<p>Thread callouts are completely absent. If a hole needs to be tapped M6x1.0, the AI generates a smooth bore with no thread specification, no counterbore for a cap screw head, no countersink for a flat head screw.</p>
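<p>The bore the AI should have left is easy to compute: for metric coarse threads at roughly 75% engagement, the tap drill is the major diameter minus the pitch, rounded to the nearest stock drill. A quick sketch:</p>

```python
def tap_drill(major_dia, pitch):
    """Metric coarse tap drill for ~75% thread engagement:
    major diameter minus pitch. Round to the nearest stock drill."""
    return major_dia - pitch

print(tap_drill(6, 1.0))    # 5.0 -> the bore an M6x1.0 tapped hole needs
print(tap_drill(8, 1.25))   # 6.75 -> use a 6.8 mm stock drill
```

<p>But the formula only helps if the model tells you the hole is meant to be tapped, and AI-generated geometry never does.</p>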
<h2>The fixture problem nobody mentions</h2>
<p>Fixturing is how you hold the part while it's being machined. The method constrains which faces the cutter can reach and in what order. Good part design considers this from the start. You leave clamping surfaces. You design the part so it can be machined in a reasonable number of setups, ideally two or three.</p>
<p>AI-generated parts have no concept of fixturing. I've seen parts where the only flat surface is the one being machined, leaving nowhere for a vise to grip. I've seen features on all six faces that would require six setups, which is absurd for a bracket that should be done in two. A machinist will figure it out, but every workaround costs time, and that time shows up on your invoice.</p>
<h2>What AI-generated geometry looks like to a CAM programmer</h2>
<p>I asked a CAM programmer I know to process three AI-generated parts: just generating toolpaths, not actually cutting anything. His notes were instructive.</p>
<p>Part one, a bracket: "Pockets need corner radii. I'd add 3mm fillets to all internal corners. Two of the holes are too close to the edge, I'd flag these to the designer. Otherwise straightforward, two setups." He estimated 15 minutes to fix the model and program it.</p>
<p>Part two, a housing: "Walls too thin on the north side, 0.6mm. I can't machine this without deflection. The pocket depth is 18mm with 0.6mm walls. I'd either thicken the walls or use a rest-machining strategy with a tiny cutter, which triples the cycle time." He estimated 30 minutes of model rework before he could even start programming.</p>
<p>Part three, a motor mount: "The bolt pattern is off. I overlaid the NEMA 23 spec and the holes are 1.5mm from where they should be. Also, the counterbore depths are inconsistent, two are 4mm and two are 4.5mm, which I'm guessing is a generation artifact, not intent." He fixed the holes to spec and made the counterbores consistent. Another 20 minutes.</p>
<p>Total DFM cleanup across three parts: about an hour. None of the parts were complex. All of them needed human intervention before a single chip could fly.</p>
<h2>The tolerance gap</h2>
<p>CNC machining is a tolerance-driven process. Without tolerances, machinists apply their shop default, usually plus or minus 0.127mm (0.005 inches). That might be fine for your part. It might not. AI-generated models carry no tolerance information: no dimensional tolerances, no GD&#x26;T, no surface finish specs. For production machining, that's a problem requiring a human to solve before the file goes to the shop.</p>
<h2>A realistic workflow for CNC parts</h2>
<p>Generate the rough shape. Import the STEP. Measure every dimension that matters. Fix what's wrong. Add internal corner radii to every pocket and slot. Check wall thicknesses. Verify hole positions against your mating part. Add tolerances, surface finish callouts, thread specs. Consider fixturing.</p>
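<p>The corner-radius step is the one most people skip. If your cleanup happens in code rather than a GUI, one way to round a pocket's internal corners is to shrink and re-grow the 2D profile before extruding the cut. A hand-written sketch with illustrative dimensions:</p>

```openscad
// Sketch: a rectangular pocket whose internal corners carry a 3mm
// radius, so a 6mm end mill can reach them. Shrinking the profile
// with offset() and growing it back rounds the corners.
pocket_w = 40; pocket_l = 25; pocket_depth = 8;
corner_r = 3;   // should be >= the radius of the cutter you expect

difference() {
    cube([60, 45, 15]);
    translate([10, 10, 15 - pocket_depth])
        linear_extrude(height = pocket_depth + 0.1)  // +0.1 for a clean boolean
            offset(r = corner_r) offset(r = -corner_r)
                square([pocket_w, pocket_l]);
}
```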
<p>That's 30 to 60 minutes of work on a simple part. Compare that to 45 to 90 minutes modeling from scratch, and the time savings are real but modest. The people who get into trouble skip the DFM review and send AI output straight to the shop. The machinists will either reject the file, make assumptions you didn't intend, or cut exactly what you sent and let you discover the problems in aluminum.</p>
<h2>The honest assessment</h2>
<p>AI-generated CAD is not CNC-ready. It's not close to CNC-ready. It's a rough shape that needs a human with manufacturing knowledge to turn it into a machinable part. For simple brackets and plates, the cleanup is minor and the time savings are real. For anything with pockets, thin walls, critical hole patterns, or interface dimensions, budget at least half an hour of DFM work per part.</p>
<p>The tools will improve. I expect someone will eventually bolt a DFM rule engine onto the generation pipeline, catching the worst violations before the user ever sees them. But that doesn't exist today. Today, the AI generates geometry like someone who's studied pictures of machined parts but never heard the sound of chatter, never smelled coolant, and never had a machinist call them within an hour to ask if they were feeling okay.</p>
<p>My machinist is still taking my calls, which I appreciate. I just make sure to check the corner radii before I send anything now. He deserves at least that much respect.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AI CAD for injection molding: draft angles, wall thickness, and reality</title>
      <link>https://blog.texocad.ai/posts/ai-cad-for-injection-molding</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-cad-for-injection-molding</guid>
      <pubDate>Sun, 08 Mar 2026 00:00:00 GMT</pubDate>
      <description>Injection molding has rules. Uniform wall thickness, draft angles, gate location, and parting lines. AI-generated CAD models ignore all of them.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>injection-molding</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI-generated CAD models are unsuitable for injection molding without extensive rework. Current text-to-CAD tools don&apos;t apply draft angles, don&apos;t maintain uniform wall thickness, ignore gate and parting line considerations, and don&apos;t account for shrinkage. Molding requires DFM expertise that AI doesn&apos;t yet have.</p>
<p>I showed a tooling engineer three AI-generated enclosure designs on a Tuesday afternoon. He was eating a sandwich. He put the sandwich down after the first model, which is how I knew it was bad. By the third model he'd stopped scrolling and just pointed at the screen. "This wall is 1.2mm here and 3.8mm here. You know what happens when you mold that?" I knew. Differential cooling. Warpage. Sink marks on the thick sections. The part comes out of the mold looking like it spent a week in a hot car. He picked his sandwich back up and said something about how the geometry looked like it was designed by someone who'd never waited for a mold to be cut. He wasn't wrong.</p>
<p>Injection molding is one of the most constraint-heavy manufacturing processes you'll encounter in product design. Every surface, every wall, every feature is shaped by the physics of molten plastic flowing into a cavity and then cooling into a solid part while still trapped in steel. If you don't design for those physics, you get parts that warp, crack, stick in the mold, show cosmetic defects, or simply can't be ejected. AI-generated CAD ignores every single one of these constraints, and the result is geometry that looks like an injection-molded part but can't actually be injection molded.</p>
<h2>Draft angles: the most basic requirement</h2>
<p>Draft is a slight taper applied to vertical faces so the part can release from the mold. When plastic cools, it shrinks onto the core. Without draft, the part grips the steel and either sticks in the mold or gets damaged during ejection. The typical minimum draft is 1 degree per side, with 2 to 3 degrees being more comfortable for textured surfaces.</p>
<p>I have never seen an AI-generated model with draft angles. Not once. Not from Zoo.dev, not from AdamCAD, not from any of the prompt-based tools I've tested. Every vertical face comes out at exactly 90 degrees to the parting plane, which is the one angle that guarantees ejection problems.</p>
<p>Adding draft after the fact is possible but tedious. In Fusion 360 or SolidWorks, you select faces, pick a pull direction, and specify the angle. On a simple box, that takes two minutes. On an enclosure with ribs, bosses, snap fits, and internal features, it takes much longer because every face needs to draft in the correct direction relative to the mold open direction, and some features need split draft where the taper reverses at the parting line.</p>
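<p>For code-first modelling, draft is cheaper to build in than to bolt on. A minimal OpenSCAD sketch of a block whose side walls taper at the draft angle, using an extrusion scale factor instead of vertical faces:</p>

```openscad
// Sketch: 2 degrees of draft per side, applied at modelling time by
// extruding a 2D profile with a scale factor. Dimensions illustrative.
w = 40; l = 30; h = 20;
draft = 2;  // degrees per side

scale_x = (w - 2 * h * tan(draft)) / w;  // top face shrinks by the taper
scale_y = (l - 2 * h * tan(draft)) / l;

linear_extrude(height = h, scale = [scale_x, scale_y])
    square([w, l], center = true);
```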
<p>The AI doesn't generate draft because it doesn't model the mold. It generates a free-standing 3D shape. The concept of a two-part tool that opens in a specific direction, with surfaces that need to release cleanly, is completely absent from the generation process. This isn't a minor oversight. It's a fundamental gap between creating geometry and designing a moldable part.</p>
<h2>Wall thickness: uniform or disaster</h2>
<p>Uniform wall thickness is the single most important rule in injection mold design. When molten plastic fills a cavity, thin sections cool faster than thick sections. Non-uniform cooling causes internal stresses, which cause warpage. Thick sections also develop sink marks on the opposite surface as the material shrinks during cooling, leaving visible depressions that ruin cosmetic surfaces.</p>
<p>The target wall thickness depends on material and part size, but for most thermoplastics, 1.5mm to 3mm is the working range. The important thing is consistency. If your nominal wall is 2mm, every wall should be 2mm. Transitions between different thicknesses should be gradual, typically ramping over a distance of at least three times the thickness change.</p>
<p>AI-generated enclosures routinely violate this rule. I measured wall thickness on five AI-generated box-type enclosures, and every one had variation of at least 30% between the thinnest and thickest wall sections. One had a 1.1mm wall adjacent to a 4.2mm boss, with no transition geometry. That boss would show a visible sink mark on the opposite surface, guaranteed. The thick section would also cool slower than the surrounding walls, creating a localized stress concentration that could lead to cracking in service.</p>
<p>The AI produces wall thickness by subtracting an inner cavity from an outer shell, and it doesn't constrain the inner shape to maintain a uniform distance from the outer shape. The result is walls that wander in thickness depending on how the inner and outer surfaces were independently generated. It's a geometry accident, not an engineering decision.</p>
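<p>The fix, in any parametric tool, is to derive the cavity from the shell and one wall variable so the wall can't wander. A minimal OpenSCAD sketch of an open-top box done that way:</p>

```openscad
// Sketch: uniform walls by construction. One variable controls every
// wall; the cavity is derived from the outer shell, not drawn separately.
outer_w = 80; outer_l = 50; outer_h = 30;
wall = 2;  // the one number that sets every wall thickness

difference() {
    cube([outer_w, outer_l, outer_h]);
    translate([wall, wall, wall])  // cavity pokes through the top: open box
        cube([outer_w - 2*wall, outer_l - 2*wall, outer_h]);
}
```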
<h2>Gate location and flow</h2>
<p>The gate is where molten plastic enters the mold cavity. Its location determines fill patterns, weld lines, air traps, and surface quality. Text-to-CAD tools don't model gates because they don't model the molding process.</p>
<p>This matters because the designer needs to consider gate location during part design. A common mistake is designing a part with a thin section between the gate and a thick section. The plastic flows through the thin area first, freezes before the thick section has packed out, and you get voids or excessive shrinkage. Good mold design positions the gate at the thickest section and lets material flow from thick to thin. The AI can't make these decisions because it doesn't think about flow.</p>
<h2>Parting lines and undercuts</h2>
<p>The parting line is where the two halves of the mold meet. An experienced designer places it strategically: at the widest cross-section, along an edge where flash is least visible. The part geometry is designed with the parting line in mind.</p>
<p>AI-generated parts have no concept of parting lines. Snap-fit hooks point in the wrong direction. Internal features require side actions or lifters. Screw bosses stick out at angles incompatible with a simple two-plate mold. Every undercut that can't be avoided requires an additional mold mechanism, and each mechanism adds thousands of dollars to the tool cost. A part with three or four unnecessary undercuts, typical of AI-generated enclosure geometry, could add $10,000 to $20,000 to the mold. That's real money spent because the AI doesn't understand how molds open.</p>
<h2>Ribs, bosses, and sink marks</h2>
<p>Ribs add stiffness to thin walls without increasing nominal wall thickness. But a rib that's too thick relative to the wall causes a sink mark on the opposite surface. The standard guideline: rib thickness should be 50 to 70 percent of the adjoining wall thickness, with draft on the rib sides and a radius at the base.</p>
<p>AI-generated parts sometimes include ribs, which is nice. But the ribs are usually the same thickness as the wall or thicker, defeating the purpose. Screw bosses are similar: the AI generates cylinders protruding from a wall with no particular dimensional relationship to anything. They'd work in 3D printing. In injection molding, they'd create sink marks and assembly problems.</p>
<h2>Shrinkage</h2>
<p>When plastic cools, it shrinks. ABS shrinks about 0.5 to 0.7%, polypropylene 1.5 to 2%. The mold cavity is cut larger to compensate. AI-generated models are nominal geometry with no consideration for shrinkage. The AI doesn't know what material you're molding, doesn't know that shrinkage is anisotropic in filled materials, and doesn't know that non-uniform wall thickness causes non-uniform shrinkage, which causes warpage on top of the warpage from differential cooling. <a href="/posts/text-to-cad-limitations">AI-generated models carry no tolerance information</a> of any kind.</p>
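<p>The arithmetic of the compensation is simple even if the physics isn't. A minimal sketch of uniform linear shrinkage compensation in OpenSCAD (real tooling uses per-axis factors for filled, anisotropic materials; the 0.6% here is just a mid-range ABS value):</p>

```openscad
// Sketch: scale the nominal part up so the cavity it defines yields
// the right size after cooling. Uniform scaling only; filled materials
// need different factors per axis.
shrinkage = 0.006;            // 0.6% linear shrinkage (mid-range for ABS)
comp = 1 / (1 - shrinkage);   // cavity scale factor, ~1.006

scale([comp, comp, comp])
    cube([80, 50, 30]);       // stand-in for the nominal part model
```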
<h2>What a realistic path looks like</h2>
<p>If you're designing for injection molding, text-to-CAD is not your tool. Maybe, and I'm being generous, for a very early concept shape that you'll completely redesign. The workflow for injection-molded parts has always involved specialized knowledge. You design the part with the mold in mind. You simulate the fill. You iterate with the toolmaker. A $30,000 mold that produces warped parts is an expensive lesson in physics.</p>
<p>AI-generated geometry enters this workflow at the earliest possible stage, if at all. The moment the project gets serious about manufacturing, the AI output gets replaced by a proper parametric model designed by someone who knows the rules.</p>
<h2>The bottom line</h2>
<p>AI CAD tools cannot design injection-molded parts. They can generate shapes that vaguely resemble injection-molded parts, but the gap between resemblance and manufacturability is filled with draft angles, wall thickness rules, gate analysis, parting line strategy, shrinkage compensation, and years of accumulated DFM knowledge that no current AI system possesses.</p>
<p>I'm not saying this to be discouraging about AI CAD in general. For <a href="/posts/text-to-cad-for-mechanical-parts">simple mechanical parts</a>, brackets, and mounting plates, these tools offer genuine time savings. But injection molding is a domain where the manufacturing process dictates the geometry to a degree that text-to-CAD simply can't handle. The tooling engineer who put down his sandwich to critique those models was right. You can't design for a process you don't understand, and the AI doesn't understand injection molding. It just draws shapes that look plastic.</p>
]]></content:encoded>
    </item>
    <item>
      <title>OpenSCAD MCP server: AI with visual feedback</title>
      <link>https://blog.texocad.ai/posts/openscad-mcp-server</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/openscad-mcp-server</guid>
      <pubDate>Sat, 07 Mar 2026 00:00:00 GMT</pubDate>
      <description>The OpenSCAD MCP server lets AI tools see what they&apos;re generating in real time. It closes the feedback loop that makes text-to-CAD actually iterative instead of blind.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>openscad</category>
      <category>mcp</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> The OpenSCAD MCP (Model Context Protocol) server connects AI assistants to OpenSCAD, allowing them to generate code, render previews, and iterate based on visual feedback. This creates a closed-loop text-to-CAD workflow where the AI can see and correct its output, significantly improving results over blind code generation.</p>
<p>The first time I used <a href="/posts/openscad-chatgpt">ChatGPT to generate OpenSCAD code</a>, the workflow was: describe a part, copy the script, paste it into OpenSCAD, hit render, look at the result, go back to ChatGPT, describe what was wrong, get a new script, copy, paste, render, repeat. It worked. It also felt like giving driving directions to someone over the phone while they wore a blindfold. The AI was generating geometry it couldn't see. Every correction required me to be the AI's eyes, translating visual problems back into text: "the hole is on the wrong face," "the pocket is too shallow," "the mounting tabs are inside the enclosure instead of outside."</p>
<p>The OpenSCAD MCP server fixes this problem. It gives the AI eyes.</p>
<h2>What MCP actually is</h2>
<p>MCP stands for Model Context Protocol. It's a standard, originally developed by Anthropic, that lets AI assistants connect to external tools. Instead of the AI being limited to generating text and hoping you do something with it, MCP lets the AI call functions, read files, execute code, and receive results. Think of it as a way for the AI to use software the same way you do, by issuing commands and seeing what happens.</p>
<p>An OpenSCAD MCP server is a bridge between an AI assistant (Claude, ChatGPT via a compatible client, or any MCP-aware agent) and a local OpenSCAD installation. The server exposes OpenSCAD's capabilities as tools the AI can call: create a new script, modify code, render a preview, export to STL, analyze the geometry. The AI writes OpenSCAD code, tells the server to render it, receives back an image of the result, and decides what to change next. The whole loop happens without you copying and pasting anything.</p>
<h2>The projects that exist</h2>
<p>Several OpenSCAD MCP servers have appeared in the last year, each with a slightly different focus.</p>
<p><a href="https://github.com/quellant/openscad-mcp">quellant/openscad-mcp</a> is the most actively maintained as of early 2026, with about 63 GitHub stars and a v0.2.0 release from February 2026. Built with Python and FastMCP, it supports rendering from multiple perspectives, export to STL, 3MF, AMF, OFF, DXF, and SVG, model management, and geometry analysis. It works with Claude Desktop, Cursor, Windsurf, and VS Code. This is the one I've spent the most time with.</p>
<p><a href="https://github.com/fboldo/openscad-mcp-server">fboldo/openscad-mcp-server</a> is a TypeScript implementation, also from early 2026, available on npm. It focuses on PNG preview rendering and STL export, with a design geared toward iterative agent-driven workflows. Lighter-weight than quellant's version, and the npm packaging makes setup straightforward if you're already in a Node.js environment.</p>
<p><a href="https://github.com/petrijr/openscad-mcp">petrijr/openscad-mcp</a> is another Python-based server, released in January 2026, with validation, rendering, batch rendering, templates, and module support. It emphasizes local-first operation using stdio transport, meaning everything runs on your machine with no network calls.</p>
<p><a href="https://github.com/jhacksman/OpenSCAD-MCP-Server">jhacksman/OpenSCAD-MCP-Server</a> is older, from early 2025, and takes a more ambitious approach: AI image generation, multi-view reconstruction, CUDA integration, and remote processing. It has around 139 stars and represents a different philosophy, using the MCP connection as part of a larger pipeline that goes beyond simple code generation and rendering.</p>
<p>All of these require a local OpenSCAD installation. They're bridges, not replacements. OpenSCAD does the actual rendering and geometry computation. The MCP server just translates between the AI's requests and OpenSCAD's command-line interface.</p>
<h2>Why visual feedback changes everything</h2>
<p>Here's the thing about AI generating CAD: the geometry is spatial. A text description of what's wrong with a 3D model is inherently lossy. When I tell ChatGPT "the hole is in the wrong place," the AI has to guess what I mean by "wrong place." Is it on the wrong face? At the wrong coordinates? The right coordinates but measured from the wrong datum? Rotated incorrectly? All of these produce different fixes, and without seeing the geometry, the AI is essentially guessing which correction to apply.</p>
<p>With an MCP server, the AI renders the model and receives an image. Modern LLMs with vision capabilities can look at that image and understand the geometry. They can see that a hole is on the top face when it should be on the side face. They can see that a pocket is off-center. They can see that two features overlap when they shouldn't. The correction is based on visual evidence, not a text translation of a visual problem.</p>
<p>In practice, this roughly doubles the success rate on first-iteration corrections. When I was doing the copy-paste workflow, I'd estimate about half my corrections produced the intended fix. The other half produced a different error, because the AI misinterpreted my text description of the problem. With the MCP workflow, the AI's corrections hit the target more consistently because it can see what it's fixing.</p>
<h2>What the workflow feels like</h2>
<p>I use quellant/openscad-mcp with Claude in Cursor. The setup took about fifteen minutes: install the Python package, configure the MCP connection in Cursor's settings, point it at my OpenSCAD installation. After that, it just works.</p>
<p>I describe a part in natural language. Claude generates an OpenSCAD script, saves it through the MCP server, and renders a preview. The preview image appears in the conversation. I can see the geometry. Claude can see the geometry. If something is wrong, I say "the mounting tabs should be on the outside of the box, not the inside" and Claude modifies the script, re-renders, and shows me the updated result. The iteration loop is fast, usually under ten seconds per cycle.</p>
<p>The multi-perspective rendering is useful for catching problems that a single-angle preview hides. A part that looks correct from the front might have a feature missing on the back. Rendering from three or four angles gives both me and the AI a complete picture without having to rotate the model manually.</p>
<p>The export step is also handled through MCP. When the geometry looks right, I tell Claude to export STL and the file appears in my project directory. No menu navigation, no dialog boxes, no forgetting to set the right export resolution. The AI handles the export parameters because it knows what the model contains and can choose appropriate settings.</p>
<h2>What it doesn't solve</h2>
<p>The MCP server doesn't make OpenSCAD better at things OpenSCAD is bad at. The <a href="/posts/openscad-ai">limitations of OpenSCAD + AI</a> remain: no STEP export, no organic surfaces, no assemblies, limited threading. The AI can see the geometry now, but it still can't generate a freeform surface in a language that doesn't support freeform surfaces.</p>
<p>The visual feedback also has limits. The AI sees a rendered image, not the actual geometry data. It can't measure distances in the rendering. It can't detect that a wall is 1.9mm thick when it should be 2mm by looking at the preview. Dimensional accuracy still requires you to check the code or export and measure in a slicer. The visual feedback catches structural and positional errors. It doesn't catch dimensional errors below visual threshold.</p>
<p>Complex geometry still confuses the AI, with or without visual feedback. If the model has many overlapping boolean operations, the rendered result might look wrong in ways the AI can't diagnose from the image alone. "Something looks weird about the bottom-left corner" is about the level of precision you get from visual analysis, and that's often not enough to identify a buried boolean error five levels deep in the script.</p>
<p>There's also a practical constraint: the render cycle adds time. Each iteration requires OpenSCAD to render the model and the server to capture the image. For simple parts this takes a second or two. For complex parts with many boolean operations or high <code>$fn</code> values, it can take ten to thirty seconds. That's still faster than the copy-paste workflow, but complex models make the loop feel sluggish.</p>
<h2>Setting it up</h2>
<p>The quickest path is quellant/openscad-mcp with Claude Desktop or Cursor:</p>
<ol>
<li>Install OpenSCAD if you don't have it.</li>
<li>Install the MCP server: <code>pip install openscad-mcp</code> or clone the repo.</li>
<li>Add the MCP server configuration to your AI tool's settings. For Cursor, this goes in the MCP configuration file. For Claude Desktop, it goes in the Claude settings JSON.</li>
<li>Verify the connection by asking the AI to generate and render a simple cube.</li>
</ol>
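<p>For Claude Desktop, the configuration is a small JSON fragment under the <code>mcpServers</code> key. The exact command and arguments depend on how the server is packaged, so treat the command name below as a placeholder and check the project's README:</p>

```json
{
  "mcpServers": {
    "openscad": {
      "command": "openscad-mcp",
      "args": []
    }
  }
}
```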
<p>The README for each project has specific setup instructions. The configuration differs slightly between AI clients, but the concept is the same: point the client at the MCP server, point the MCP server at OpenSCAD, and the pipeline connects.</p>
<p>If you prefer a TypeScript setup or are already using npm-based tools, fboldo's server installs with <code>npm install openscad-mcp-server</code> and has a similarly straightforward configuration.</p>
<h2>Where this fits in the bigger picture</h2>
<p>The MCP approach isn't unique to OpenSCAD. There are MCP servers for <a href="/posts/freecad-ai-plugin">FreeCAD</a>, Fusion 360, and other CAD tools. The <a href="/posts/text-to-cad-open-source">text-to-CAD open source</a> ecosystem is full of these bridges. What makes the OpenSCAD version particularly effective is that OpenSCAD's interface is already text-based. The MCP server doesn't need to simulate mouse clicks or navigate GUI menus. It writes a text file and calls a command-line renderer. The impedance mismatch between what the AI does naturally (generate text) and what the tool needs (receive text) is essentially zero.</p>
<p>For FreeCAD and Fusion 360 MCP servers, the AI generates Python scripts that manipulate a GUI application through an API. That's a more complex translation, with more things that can go wrong. The OpenSCAD MCP server is simple in architecture because OpenSCAD is simple in interface. That simplicity is, in a roundabout way, OpenSCAD's greatest strength for AI integration.</p>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the full range of tools and approaches. If you're already using OpenSCAD and already using an AI assistant, an MCP server is the obvious next step. It turns a workable-but-clunky copy-paste workflow into something that feels like pair programming with someone who can actually see the screen. The AI is still not a CAD expert. But at least it's no longer a CAD expert working blindfolded.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using ChatGPT to write OpenSCAD code</title>
      <link>https://blog.texocad.ai/posts/openscad-chatgpt</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/openscad-chatgpt</guid>
      <pubDate>Fri, 06 Mar 2026 00:00:00 GMT</pubDate>
      <description>ChatGPT can write OpenSCAD code that actually compiles most of the time. Here&apos;s how to use it, what to watch out for, and where it gets weirdly creative with geometry.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>openscad</category>
      <category>chatgpt</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> ChatGPT can generate valid OpenSCAD code from natural language prompts, producing parametric 3D models. Best practices: describe geometry with dimensions, ask for modules and parameters, verify the code compiles in OpenSCAD, and iterate on errors. Works well for simple parts; struggles with complex boolean operations and threading.</p>
<p>A few months ago I needed a simple sensor bracket for a project, the kind of thing with two mounting tabs, a pocket for the sensor body, and a slot for the cable. I could have modeled it in Fusion 360 in ten minutes. Instead, because I was already in a ChatGPT window answering a client email, I typed: "Write an OpenSCAD script for a sensor bracket, 40mm wide, 25mm tall, 3mm thick, with a 15mm x 10mm rectangular pocket centered on the face, two 3.5mm mounting holes 5mm from each end, and a 4mm wide slot from the pocket to the bottom edge for a cable."</p>
<p>ChatGPT gave me a 30-line script. I pasted it into OpenSCAD, hit F5, and the preview showed something recognizably bracket-shaped. The mounting holes were in the right place. The pocket was centered. The cable slot connected to the pocket and ran to the bottom edge. The wall around the pocket was a little thin, and the slot was 3mm wide instead of 4mm, but I changed two variables and had a printable part.</p>
<p>That was easier than it should have been. Here's what I've learned since about making it work consistently.</p>
<h2>Why ChatGPT and OpenSCAD are a good match</h2>
<p>The pairing works because OpenSCAD's language is small, well-documented, and heavily represented in ChatGPT's training data. OpenSCAD scripts show up in hundreds of blog posts, forum threads, Thingiverse descriptions, and tutorials dating back over a decade. ChatGPT has seen a lot of <code>.scad</code> files. It knows the syntax, the standard primitives, the boolean operations, and most of the common patterns.</p>
<p>Compare this to asking ChatGPT to write FreeCAD Python macros. FreeCAD's scripting API is large, inconsistent across workbenches, and documented unevenly. ChatGPT generates FreeCAD code that looks plausible but fails on execution because it invents method names, uses deprecated API patterns, or forgets a <code>recompute()</code> call. OpenSCAD's language is constrained enough that ChatGPT stays within the valid syntax almost every time.</p>
<p>The other advantage is that OpenSCAD scripts are self-contained. No imports, no dependencies, no environment setup. You paste the code, it either renders or it doesn't. There's no debugging a missing library path or a version mismatch. The feedback loop is immediate: code goes in, geometry comes out, you see the result in under a second for simple parts.</p>
<h2>How to prompt for good results</h2>
<p>The single most important thing is dimensions. Every number you leave out is a number ChatGPT invents, and its sense of proportion is unreliable. I've asked for "a small box" and gotten back a 200mm cube. I've asked for "a bracket" and received something the size of a dinner plate. Always include overall dimensions, wall thickness, hole diameters, feature positions, and spacing.</p>
<p>Use millimeters. ChatGPT handles metric better than imperial for OpenSCAD, probably because most OpenSCAD examples online use millimeters. If you're working in inches, convert before prompting. "25.4mm" will produce more consistent results than "1 inch."</p>
<p>Ask for parametric code explicitly. Say "use variables for all dimensions" or "put parameters at the top of the script." ChatGPT will often generate parametric code anyway, but asking for it ensures the output has named variables you can adjust instead of magic numbers scattered through the geometry.</p>
<p>Request modules when the part has repeated features. "Create a module for the mounting tab and use it twice" produces cleaner code than letting ChatGPT repeat the geometry inline. Modules also make it easier to modify the design later, because changing the module definition updates every instance.</p>
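<p>The kind of code that request produces looks like this. A hand-written sketch (not actual ChatGPT output) of a plate with one mounting-tab module defined once and placed twice:</p>

```openscad
// Sketch: a mounting-tab module reused at both ends of a plate.
plate_w = 60; plate_l = 20; plate_t = 3;
hole_d = 3.5;
$fn = 50;

module mounting_tab() {
    difference() {
        cylinder(h = plate_t, d = plate_l);
        translate([0, 0, -0.1])
            cylinder(h = plate_t + 0.2, d = hole_d);  // overshoot for a clean boolean
    }
}

union() {
    translate([-plate_w/2, -plate_l/2, 0])
        cube([plate_w, plate_l, plate_t]);
    translate([-plate_w/2, 0, 0]) mounting_tab();
    translate([ plate_w/2, 0, 0]) mounting_tab();
}
```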
<p>Name features using CAD vocabulary. "Counterbore" produces better geometry than "a hole with a wider hole on top." "Fillet" works better than "rounded edge." "Chamfer," "pocket," "boss," "slot," "keyway" all seem to trigger more accurate code generation. The <a href="/posts/text-to-cad-prompt-engineering">prompt engineering guide</a> goes deeper on this, but the principle is the same: precise vocabulary produces precise geometry.</p>
<h2>A prompt that works</h2>
<p>Here's a prompt I use regularly for simple enclosures:</p>
<p>"Write an OpenSCAD script for a rectangular electronics enclosure. Outer dimensions 80mm x 50mm x 30mm. Wall thickness 2mm. Open top. Four 4.2mm mounting holes in the corners of the open face, 5mm from each outer edge. Two M3 standoffs inside the box, 8mm tall, 6mm outer diameter, 3mm inner diameter, positioned 20mm from each short wall, centered on the long axis. Use variables for all key dimensions."</p>
<p>ChatGPT consistently generates a working script for this. The enclosure is a <code>difference()</code> of two cubes. The holes are <code>cylinder()</code> calls subtracted from the walls. The standoffs are <code>cylinder()</code> calls added inside the box. The variables are declared at the top. It compiles, it renders, the proportions are correct.</p>
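<p>For reference, here's a hand-written sketch of the structure that prompt describes, not actual ChatGPT output (the corner mounting holes are omitted for brevity): a shelled box plus two standoffs, with the variables up top.</p>

```openscad
// Sketch of the enclosure from the prompt above: open-top shell plus
// two M3 standoffs. Mounting holes omitted to keep the example short.
outer_w = 80; outer_l = 50; outer_h = 30;
wall = 2;
standoff_h = 8; standoff_od = 6; standoff_id = 3;
standoff_inset = 20;  // distance from each short wall
$fn = 50;

module standoff() {
    difference() {
        cylinder(h = standoff_h, d = standoff_od);
        translate([0, 0, -0.1])
            cylinder(h = standoff_h + 0.2, d = standoff_id);
    }
}

difference() {                        // open-top shell
    cube([outer_w, outer_l, outer_h]);
    translate([wall, wall, wall])
        cube([outer_w - 2*wall, outer_l - 2*wall, outer_h]);
}
translate([standoff_inset, outer_l/2, wall]) standoff();
translate([outer_w - standoff_inset, outer_l/2, wall]) standoff();
```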
<p>The details sometimes need adjustment. ChatGPT occasionally positions features from the wrong reference point (from the center when I meant from an edge, or vice versa). It sometimes forgets <code>$fn</code> on cylinders, giving you octagonal holes instead of round ones. It might put the standoffs on the wrong axis if the prompt is ambiguous about which wall is "short" and which is "long." These are all quick fixes, a matter of changing a number or adding a <code>$fn=50</code> parameter.</p>
<h2>Where ChatGPT gets creative in bad ways</h2>
<p>Boolean operations are the most common failure. ChatGPT will generate a <code>difference()</code> where two faces are coplanar, a situation that makes OpenSCAD's renderer produce warnings or broken geometry. The classic case: subtracting a cube from a larger cube where the subtracted cube's face sits exactly on the larger cube's face. OpenSCAD handles this ambiguously. The fix is to extend the subtracted shape slightly past the surface, and ChatGPT doesn't always remember to do this. I've started adding "extend all cuts 0.1mm past the surface for clean booleans" to my prompts, which helps.</p>
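<p>The trap and the fix, side by side (dimensions are mine):</p>
<pre><code>plate = [30, 20, 5];

// Ambiguous: the cut's top and bottom faces sit exactly on the
// plate's faces, so the renderer has to guess.
// difference() { cube(plate); translate([10, 5, 0]) cube([10, 10, 5]); }

// Clean: start the cut 0.1mm below the plate and end 0.1mm above it.
difference() {
    cube(plate);
    translate([10, 5, -0.1]) cube([10, 10, plate[2] + 0.2]);
}
</code></pre>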
<p>Nesting is another issue. ChatGPT sometimes generates deeply nested boolean operations that are hard to read and occasionally produce unexpected results. A <code>difference()</code> inside a <code>union()</code> inside another <code>difference()</code> can behave in ways that aren't intuitive, and if the AI gets the nesting order wrong, features appear or disappear in confusing ways. For complex parts, I ask ChatGPT to comment each section and use named modules for logical groupings. This produces longer code but fewer geometry surprises.</p>
<p>Circular geometry can go wrong when ChatGPT forgets the <code>$fn</code> parameter. OpenSCAD defaults to a low polygon count for circles and cylinders, so a "round hole" might render as a hexagonal hole. I include "use $fn=50 for all circular features" in every prompt now. It's a small thing but it saves a debugging step every time.</p>
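<p>The difference is one parameter:</p>
<pre><code>cylinder(h=10, d=4);                               // default facets: effectively a hexagonal hole
translate([10, 0, 0]) cylinder(h=10, d=4, $fn=50); // actually round
// Or set it once for the whole script: $fn = 50;
// (Note: a top-level $fn assignment applies file-wide, wherever it appears.)
</code></pre>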
<p>ChatGPT also has a tendency to generate geometry that's structurally valid but not printable. Thin walls that would collapse during printing, overhangs without support surfaces, bridges that are too long. The AI doesn't think about manufacturing. It thinks about geometry. If printability matters, you need to specify minimum wall thickness, maximum overhang angles, and bridge lengths in the prompt, or just check the result yourself with a slicer.</p>
<h2>The iteration loop</h2>
<p>The first script is rarely the final script. My typical workflow:</p>
<ol>
<li>Write a detailed prompt with all dimensions and features.</li>
<li>Paste the script into OpenSCAD, render it.</li>
<li>Identify what's wrong: wrong position, wrong size, missing feature, broken boolean.</li>
<li>If it's a quick fix, edit the script directly.</li>
<li>If the structure is wrong, go back to ChatGPT with a correction: "The cable slot should run along the Y axis, not the X axis" or "The mounting holes should be on the vertical face, not the horizontal face."</li>
</ol>
<p>Two or three iterations usually get me to a usable part for simple geometry. If the part is complex enough to need more than three rounds, I'm better off modeling it from scratch. The time investment tips over somewhere around the fourth revision, especially if the structural layout keeps being wrong.</p>
<p>ChatGPT maintains context within a conversation, so corrections build on previous output. "Make the walls thicker" works after the initial generation. "Add a snap-fit lip around the top edge" works if the enclosure is already generated. This conversational refinement is the real strength of the workflow: you're iterating on a design through natural language, with the AI maintaining the full script context.</p>
<h2>ChatGPT vs Claude vs local models</h2>
<p>I've tested this workflow with ChatGPT (GPT-4 and later), Claude, and a few local models running through Ollama.</p>
<p>ChatGPT produces the most consistently valid OpenSCAD across the widest range of complexity. It gets the syntax right almost every time, handles parametric variables well, and generates readable code. GPT-4 is better than GPT-3.5 for anything beyond simple primitives.</p>
<p>Claude generates clean, well-commented code and sometimes writes more elegant solutions than ChatGPT, using <code>hull()</code> operations and mathematical positioning that are genuinely clever. Claude also tends to generate code with better structure, separating concerns into modules more naturally. It occasionally produces scripts that are more complex than necessary, but the code quality is generally high.</p>
<p>Local models (I've tested with Llama 3 and DeepSeek Coder) work for simple parts but struggle with complex boolean operations and positioning. They're fine for generating a parametric box or a simple bracket. They're unreliable for anything with more than three or four features interacting. If you're running a local model, stick to simple geometry and be prepared to fix more errors.</p>
<p>For the <a href="/posts/openscad-ai">OpenSCAD + AI workflow</a> in general, any of these models work. The choice depends on whether you care about privacy (local models), cost (local models again), code elegance (Claude), or widest compatibility (ChatGPT).</p>
<h2>What not to attempt</h2>
<p>Don't ask ChatGPT to generate gears with correct involute tooth profiles. It'll produce something that looks like a gear from across the room but won't mesh with anything. Use the BOSL2 library's gear modules for that, or generate the gear profile in a dedicated tool and import the DXF.</p>
<p>Don't ask for thread geometry. ChatGPT will generate a cylinder and call it threaded, or produce a helical sweep that's cosmetically thread-shaped but dimensionally meaningless. For 3D printing, use BOSL2's threading modules. For manufacturing, threads belong in your machining setup, not your model.</p>
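<p>If you do need printable threads, the BOSL2 route looks roughly like this. Treat it as a hedged sketch: it assumes BOSL2 is installed in your OpenSCAD library path, and the parameter names should be checked against the current BOSL2 documentation.</p>
<pre><code>include &lt;BOSL2/std.scad&gt;
include &lt;BOSL2/threading.scad&gt;

// Roughly M8x1.25. Fine for 3D printing, meaningless for machining.
threaded_rod(d=8, l=20, pitch=1.25, $fn=60);
</code></pre>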
<p>Don't ask for multi-part assemblies in one prompt. Generate each part separately. ChatGPT loses track of which geometry belongs to which body when you describe multiple interacting parts, and the resulting script is usually a tangled mess of boolean operations that produces a single fused solid instead of separate parts.</p>
<p>Don't trust the output without measuring it. Paste the script into OpenSCAD, render it, use the measurement tools or export and check in a slicer. ChatGPT gets dimensions wrong often enough that blind printing is a bad idea. I've had holes come out 0.5mm too small, walls too thin by a full millimeter, and features positioned from the wrong datum. Always verify.</p>
<h2>The honest take</h2>
<p><a href="/posts/openscad-ai">ChatGPT writing OpenSCAD code</a> is the most practical text-to-CAD workflow I use regularly. Not the most impressive. Not the most powerful. The most practical. Because the input is text, the output is text, the edit cycle is fast, and I don't need to install anything beyond OpenSCAD, which I already have.</p>
<p>It works best for parts I could model myself in ten to fifteen minutes. For those parts, ChatGPT gets me to 80% in one minute and I spend five minutes fixing the rest. The time savings are real but modest. The bigger value is creative: I can iterate on design ideas faster by describing variations than by modeling each one. "Make it 5mm taller. Add a third mounting hole. Widen the pocket by 2mm." Each variation takes seconds to describe and the AI produces updated code instantly.</p>
<p>If you're already comfortable with OpenSCAD, adding ChatGPT to the workflow is trivial and immediately useful. If you've never used OpenSCAD, the combination is a genuinely good way to learn the language, because you can read what the AI generates and understand how OpenSCAD primitives compose into real parts. Either way, it's worth an afternoon of experimentation. The <a href="/posts/openscad-mcp-server">MCP server approach</a> takes this further by closing the visual feedback loop, but even the basic copy-paste workflow produces results I'd actually use.</p>
]]></content:encoded>
    </item>
    <item>
      <title>OpenSCAD + AI: the text-to-CAD workflow nobody talks about</title>
      <link>https://blog.texocad.ai/posts/openscad-ai</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/openscad-ai</guid>
      <pubDate>Thu, 05 Mar 2026 00:00:00 GMT</pubDate>
      <description>OpenSCAD is already a text-based CAD tool. Feed it to an LLM and you&apos;ve got a text-to-CAD workflow that actually produces parametric, editable code. It&apos;s the quietest success in this whole space.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>openscad</category>
      <category>open-source</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> OpenSCAD combined with LLMs (ChatGPT, Claude, local models) is a practical text-to-CAD workflow that produces parametric, editable .scad code. Because OpenSCAD models are code, AI can generate and iterate on them naturally. Tools like the OpenSCAD MCP Server add visual feedback to the loop.</p>
<p>I've been using OpenSCAD on and off for about eight years, mostly for parametric enclosures, jigs, and the kind of small utility parts that don't justify opening Fusion 360. The workflow has always been the same: open a text editor, write geometry in code, hit F5 to preview, swear at a boolean union, fix it, export STL, print. It's not glamorous. The preview window looks like it was designed in 2004, because it was. The language has no classes, no proper error messages, and a rendering engine that punishes you for ambitious geometry by simply refusing to finish.</p>
<p>But here's the thing nobody in the text-to-CAD conversation seems to have noticed: OpenSCAD has been a text-to-CAD tool this whole time. The input is text. The output is geometry. The entire model is a script. There's no feature tree, no click-based modeling, no hidden state. Everything is in the code, and code is exactly what language models are good at writing.</p>
<p>When I first asked ChatGPT to write me an OpenSCAD script for a cable clip, sometime in early 2024, I expected garbage. What I got was a working script with parametric variables for cable diameter, wall thickness, and mounting hole size. It compiled on the first try. The proportions were off, the snap arm was too thin, and the base needed a wider foot, but the structure was correct and every dimension was a variable I could edit. I tweaked three numbers and had a printable part in under ten minutes. That was the moment I started paying attention.</p>
<h2>Why OpenSCAD is the natural fit</h2>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD space</a> has plenty of tools generating geometry from text prompts. Zoo.dev generates B-Rep solids from descriptions. Research models like Text2CAD generate parametric operation sequences. Various MCP servers connect language models to Fusion 360 and FreeCAD. These all work, to varying degrees. But they all face the same fundamental problem: the AI generates geometry it can't see, can't debug, and can't reason about structurally.</p>
<p>OpenSCAD sidesteps this entirely because there's no gap between the representation and the generation. The model is code. The AI generates code. The output is directly editable in the same format it was created in. There's no translation layer, no compiled binary representation, no opaque feature tree. If the AI generates a cylinder with <code>cylinder(h=20, d=10);</code> and the height is wrong, you change the 20 to 25. If the AI puts a hole in the wrong place, you adjust the <code>translate()</code> values. The debugging workflow is the same whether a human or an AI wrote the script.</p>
<p>This matters more than it sounds like it should. With other text-to-CAD tools, when the output is wrong, your options are: re-prompt and hope for better, or import the geometry into a CAD tool and fix it there. With OpenSCAD, you just edit the script. The AI's output is your working file. There's no import step, no format conversion, no loss of parametric intent.</p>
<p>OpenSCAD's language is also surprisingly LLM-friendly. The syntax is small and well-defined. The primitives are simple: <code>cube</code>, <code>cylinder</code>, <code>sphere</code>. The operations are straightforward: <code>union</code>, <code>difference</code>, <code>intersection</code>, <code>translate</code>, <code>rotate</code>, <code>scale</code>. The language has been documented for years across tutorials, forums, and the official manual. It shows up extensively in LLM training data. ChatGPT, Claude, and even smaller local models generate syntactically valid OpenSCAD more consistently than they generate Python scripts for FreeCAD or macro code for SolidWorks.</p>
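<p>That whole vocabulary fits in a few lines. A plate with a hole exercises most of it:</p>
<pre><code>// primitive, operation, primitive: the core of the language.
difference() {
    cube([30, 20, 5]);
    translate([15, 10, -0.1])
        cylinder(h=5.2, d=6, $fn=50);
}
</code></pre>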
<h2>What the workflow actually looks like</h2>
<p>The simplest version: you describe a part to an LLM, get back an OpenSCAD script, paste it into OpenSCAD, hit F5, and look at the result. If it's wrong, you either edit the script directly or tell the AI what to fix. This is the <a href="/posts/openscad-chatgpt">ChatGPT + OpenSCAD</a> workflow, and for simple parts it works surprisingly well.</p>
<p>The more advanced version uses tools that close the feedback loop. <a href="https://promptscad.com">PromptSCAD</a> runs the full cycle in a browser: you describe a part, DeepSeek v3 generates the script, and OpenSCAD compiled to WASM renders the result in real time. Still pre-alpha, but functional. <a href="https://github.com/zacharyfmarion/openscad-studio">OpenSCAD Studio</a> provides a desktop and web editor with Claude and GPT integration, live preview, and syntax highlighting.</p>
<p>The most interesting development is the <a href="/posts/openscad-mcp-server">OpenSCAD MCP server</a> ecosystem. Multiple projects now connect OpenSCAD to AI agents via the Model Context Protocol, giving the LLM the ability to generate code, render previews, and see the result. The AI writes a script, OpenSCAD renders it, the AI looks at the rendering and decides what to change. This is the workflow that makes text-to-CAD actually iterative instead of a guess-and-check loop.</p>
<p><a href="https://github.com/Adam0Brien/nl-cad">NL-CAD</a> takes a multi-mode approach, supporting the BOSL2 library for mechanical parts, voxel objects, and conversational refinement through CLI, web, and API interfaces. BOSL2 is worth mentioning because it extends OpenSCAD's vocabulary significantly, adding proper screws, threads, gears, and snap connectors that the base language can't handle well.</p>
<h2>Where the geometry holds up</h2>
<p>OpenSCAD with an LLM works well for a specific category of parts: parametric, prismatic, mechanical components built from boolean operations on simple solids. The kind of thing you'd 3D print, laser cut, or CNC from stock.</p>
<p>Enclosures, brackets, mounts, standoffs, clips, spacers, simple housings, cable management parts, test jigs, alignment fixtures, PCB mounting plates, sensor brackets, fan adapters, battery holders. I've generated usable versions of all of these. Not perfect versions. Usable starting points where every dimension is a variable I can adjust.</p>
<p>The sweet spot is parts with clear geometric logic. An enclosure is a box minus a slightly smaller box. A bracket is two intersecting plates with holes. A standoff is a cylinder with a smaller cylinder subtracted from the center. These map cleanly onto OpenSCAD's boolean operations, and LLMs generate them reliably because the geometric intent is close to the code structure.</p>
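<p>"Two intersecting plates with holes" is nearly as short in code as in English (dimensions are illustrative):</p>
<pre><code>$fn = 50;
difference() {
    union() {
        cube([40, 30, 4]);   // base plate
        cube([4, 30, 40]);   // vertical plate
    }
    for (y = [8, 22]) {
        translate([30, y, -0.1]) cylinder(h=4.2, d=4.5);  // base holes
        translate([-0.1, y, 30]) rotate([0, 90, 0])
            cylinder(h=4.2, d=4.5);                       // wall holes
    }
}
</code></pre>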
<p>Parts that need exact dimensions relative to existing hardware (PCB mounting holes at specific positions, bolt patterns matching a standard) work well as long as you specify the dimensions in the prompt. The AI doesn't know your PCB layout, but if you tell it "four M3 mounting holes on a 58mm by 48mm rectangular pattern, centered on a 70mm by 60mm plate," you'll get exactly that.</p>
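<p>That prompt translates almost word for word into code, which is why it works:</p>
<pre><code>// Four M3 holes on a 58 x 48 pattern, centered on a 70 x 60 plate.
plate   = [70, 60, 3];
pattern = [58, 48];
hole_d  = 3.4;   // M3 clearance
$fn = 50;

difference() {
    cube(plate, center=true);
    for (x = [-1, 1], y = [-1, 1])
        translate([x * pattern[0]/2, y * pattern[1]/2, 0])
            cylinder(h=plate[2] + 0.2, d=hole_d, center=true);
}
</code></pre>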
<h2>Where it falls apart</h2>
<p>OpenSCAD has real limitations, and AI doesn't fix them.</p>
<p>Organic shapes. Anything with complex curves, sculpted surfaces, or freeform geometry is painful in OpenSCAD regardless of whether a human or an AI writes the code. The language doesn't have splines. Smooth curves require hulling many small primitives, which is slow to render and ugly to read. If your part has an ergonomic grip or an aerodynamic profile, OpenSCAD is the wrong tool and adding AI doesn't change that.</p>
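<p>The standard workaround is <code>hull()</code> over many small primitives. At the simple end, a rounded-corner plate:</p>
<pre><code>// Rounded corners via hull(). Genuinely freeform surfaces need far
// more of these, and render times grow accordingly.
$fn = 50;
r = 4;
hull()
    for (x = [r, 50 - r], y = [r, 30 - r])
        translate([x, y, 0]) cylinder(h=5, r=r);
</code></pre>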
<p>Export format. OpenSCAD exports STL, 3MF, AMF, OFF, DXF, and SVG. It does not export STEP. For 3D printing this doesn't matter. For manufacturing workflows that require STEP, it's a hard stop. No amount of AI magic changes the fact that OpenSCAD produces tessellated geometry, not B-Rep solids. If you need a STEP file, you need a different tool.</p>
<p>Complex assemblies. OpenSCAD has no assembly concept. You can model multiple parts in one script using modules, but there are no constraints, no mates, no interference checks. The AI can generate a box and a lid in the same script, but they're just geometry floating in space. If you need assembly features, you're working around the language's limitations, and the AI inherits those limitations.</p>
<p>Threading and fine mechanical features. OpenSCAD can generate thread-like geometry using libraries like BOSL2, but the results are approximations useful for 3D printing, not for precision machining. The AI will write BOSL2 thread calls if you ask, but the output is decorative thread geometry, not real thread form.</p>
<p>The AI also introduces its own failure modes. Boolean operations are the most common problem. The AI will occasionally generate geometry where two bodies share a face exactly, causing OpenSCAD to produce warnings or garbage output. It'll place features slightly off-axis, creating tiny geometric slivers. It'll nest <code>difference()</code> operations in ways that produce unexpected results because the order of subtraction matters and the AI doesn't always get the intent right. You learn to recognize these patterns quickly, but they require reading the generated code, not just looking at the preview.</p>
<h2>The hidden advantage: version control</h2>
<p>One thing I didn't expect to care about but now consider a genuine advantage: OpenSCAD scripts work perfectly with git. The model is a text file. You can diff changes, review modifications, track history, branch experiments, and merge updates the same way you manage code. This doesn't matter for a one-off bracket. It matters a lot if you're maintaining a library of parametric parts, collaborating with someone, or want to track why a dimension changed.</p>
<p>AI-generated OpenSCAD scripts fit neatly into this workflow. I keep a repository of utility parts, each as a <code>.scad</code> file with a corresponding prompt file documenting what I asked the AI to generate. When I need to modify a part, I can either edit the script directly, re-prompt the AI with adjustments, or both. The history stays clean and readable because the diff shows exactly what changed and why.</p>
<p>Try doing that with a Fusion 360 <code>.f3d</code> file. The version history is inside Fusion's cloud, tied to their account system, and opaque to every tool outside their ecosystem. OpenSCAD's version control story is accidental, but it's better than what any commercial CAD tool offers for AI-generated content.</p>
<h2>The ecosystem is growing fast</h2>
<p>A year ago, using OpenSCAD with an LLM meant copy-pasting text between a chat window and a text editor. Now there's a browser-based tool, multiple MCP servers, a desktop editor with AI integration, and libraries like BOSL2 that extend what the language can describe. The community around this workflow is small but active, and the pace of development is accelerating.</p>
<p>The <a href="/posts/text-to-cad-open-source">text-to-CAD open source</a> landscape has many moving pieces. Research models, FreeCAD scripts, Fusion 360 MCP bridges. But the OpenSCAD path is the only one where the entire pipeline is open, the output is editable code, and the workflow doesn't depend on a proprietary CAD engine or a commercial API. That combination matters for hobbyists, for educators, and for anyone who wants to understand and control what the AI is actually producing.</p>
<h2>The honest assessment</h2>
<p>OpenSCAD + AI is not the future of CAD. It's not going to replace SolidWorks, or Fusion 360, or even FreeCAD. The language is too limited, the geometry too constrained, the export formats too restricted for professional mechanical engineering work. If you need STEP files, assemblies, simulations, or organic surfaces, look elsewhere.</p>
<p>But for what it does, it works better than anything else in the <a href="/posts/text-to-cad-workflows-and-tools">text-to-CAD space</a> right now. Parametric code you can read, edit, version-control, and regenerate. A workflow where AI assistance feels natural because the medium is already text. An ecosystem that's fully open source, runs locally, and doesn't require a subscription or an API key if you're using a local LLM.</p>
<p>I still model most of my professional work in Fusion 360. But my jig library, my cable management parts, my one-off brackets and test fixtures? Those are all OpenSCAD scripts now, half of them started by an LLM, all of them edited by hand. It's the quietest success story in text-to-CAD, and I suspect it'll stay quiet, because it doesn't have a marketing department. Just a text editor and a preview window that looks like it hasn't been updated since the first Obama administration.</p>
]]></content:encoded>
    </item>
    <item>
      <title>LLM CAD generation: how large language models create geometry</title>
      <link>https://blog.texocad.ai/posts/llm-cad-generation</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/llm-cad-generation</guid>
      <pubDate>Wed, 04 Mar 2026 00:00:00 GMT</pubDate>
      <description>Large language models can write CAD code, generate CAD operation sequences, and sometimes produce actual usable geometry. Here&apos;s how they do it and where they fall apart.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>llm</category>
      <category>ai-generation</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> LLMs generate CAD geometry through three approaches: writing CAD scripting code (OpenSCAD, FreeCAD Python), generating CAD operation sequences (sketch→extrude→fillet), or driving CAD APIs through function calling. Code generation works best because LLMs understand programming syntax. Direct geometry generation requires specialized fine-tuning like the Text2CAD model.</p>
<p>I was sitting in front of Claude at about eleven at night, trying to get it to generate a FreeCAD Python script for a simple motor mounting plate. Four holes in a rectangular pattern, a center bore, some countersinks. The kind of part I could model in Fusion 360 in six minutes without thinking about it. Claude's first script used a FreeCAD API method that doesn't exist. The second script used the right methods but put the holes on the wrong face. The third script worked but forgot the countersinks. The fourth script added the countersinks in the wrong coordinate system. The fifth script was perfect. It took about forty minutes, two cups of tea, and enough frustrated backspacing to wear out a key, but the geometry rendered correctly and I could export a STEP file.</p>
<p>That experience is a miniature version of the entire LLM CAD generation story: language models can produce geometry. The path from prompt to usable output is just a lot messier than the demos suggest. Understanding how LLMs actually create CAD geometry, the specific mechanisms, the failure modes, the architectural choices, helps explain both why the technology works at all and why it breaks in the ways it does.</p>
<h2>Three approaches to the same problem</h2>
<p>LLMs don't understand geometry the way a CAD kernel does. They don't have a spatial model. They don't reason about topology, or B-Rep faces, or surface normals. What they do have is an extremely good understanding of sequences, patterns, and code syntax. Every approach to LLM CAD generation exploits that strength, and the differences between approaches come down to what kind of sequence the LLM generates.</p>
<p>The first approach is code generation. The LLM writes a script in a CAD scripting language, OpenSCAD, CadQuery Python, FreeCAD Python, or Fusion 360's API, and a separate program executes the script to produce geometry. The LLM never touches the geometry directly. It writes instructions. A geometric kernel follows them.</p>
<p>The second approach is operation sequence generation. The LLM generates a structured sequence of CAD operations: create sketch on XY plane, draw rectangle with dimensions, extrude by 20mm, create sketch on top face, draw circle at center, cut-extrude through all. This sequence gets parsed and executed by a CAD engine or a custom interpreter. The <a href="/posts/text2cad-paper">Text2CAD model</a> works this way, generating sketch-and-extrude sequences from a fine-tuned transformer.</p>
<p>The third approach is API driving through function calling or tool use. The LLM connects to a running CAD application via an API bridge (typically MCP, the Model Context Protocol) and issues commands one at a time, receiving feedback between each step. <a href="/posts/cadagent-fusion-360">CADAgent</a> and the Fusion 360 MCP bridges work this way. The LLM isn't generating the full sequence in advance. It's interacting with the CAD tool in real time, seeing results, and adjusting.</p>
<p>Each approach has different strengths, different failure modes, and different implications for the quality of what comes out the other end. The <a href="/posts/how-text-to-cad-works">how text-to-CAD works</a> post covers the conceptual pipeline. This post is about the mechanics, the places where the mechanism matters for the output.</p>
<h2>Code generation: the one that works best</h2>
<p>The most reliable LLM CAD generation in 2026 is code generation. This is partly because LLMs have been trained on enormous amounts of programming data, and partly because CAD scripting languages are constrained enough that the probability of generating valid syntax is high.</p>
<p>OpenSCAD is the sweet spot. Its scripting language is small, well-documented, and deterministic. A <code>cube([30, 20, 10])</code> always produces the same box. The language has clear error messages. The rendering is fast.</p>
<p>Several projects have formalized pipelines around this. PromptSCAD uses DeepSeek to generate OpenSCAD code and renders in-browser. The OpenSCAD MCP Server gives the LLM visual feedback. For simple to moderate parts, these pipelines produce usable geometry more reliably than any other LLM-based approach.</p>
<p>CadQuery Python is the next step up. CadQuery wraps OpenCascade and produces real B-Rep geometry, STEP-exportable. The API is larger and less forgiving, so scripts fail more often. But when they work, the output is manufacturing-grade. Recent research projects like FutureCAD and CADSmith use CadQuery as their target language, combining code generation with validation loops where one agent generates code, another checks dimensional accuracy, and a vision model evaluates the result visually.</p>
<p>The pattern across all of these: the LLM generates text (code), a separate system interprets the text (a compiler or runtime), and a proper geometric kernel produces the geometry. The LLM never reasons about geometry directly. It generates instructions in a language it was trained on, and the geometry is a downstream consequence.</p>
<p>Why this works: LLMs are good at code. They've been trained on millions of code examples. OpenSCAD and Python are in the training data. The mapping from natural language description to code is the kind of task transformers handle well, translating from one structured sequence to another.</p>
<p>Why it breaks: the LLM doesn't have spatial reasoning. It can write <code>translate([10, 0, 0])</code> without understanding that 10mm to the right means the feature will overlap with an existing wall. It can generate a boolean subtraction that produces invalid geometry without knowing the result is non-manifold. Every failure where the code is syntactically correct but geometrically wrong traces back to this: the model understands the language, not the space. And in CAD, the space is what matters.</p>
<h2>Operation sequence generation: the research approach</h2>
<p>Instead of writing code in an existing language, some systems train a model to generate CAD operations directly. The model outputs a structured sequence, something like: create_sketch(plane=XY), draw_line(0,0,50,0), draw_line(50,0,50,30), close_sketch(), extrude(distance=10). A custom interpreter parses this sequence and builds the geometry.</p>
<p>The Text2CAD model from NeurIPS 2024 is the most prominent example: a transformer fine-tuned on the DeepCAD dataset of roughly 178,000 parametric CAD models. Given a text prompt, it generates a sequence of sketch-and-extrude operations that an interpreter converts to geometry. NURBGen takes a different approach, generating NURBS surface parameters as structured output, directly convertible to B-Rep.</p>
<p>The advantage: the model learns domain-specific patterns. Text2CAD has learned that brackets tend to have certain proportions, that holes appear in regular patterns. The disadvantage: the training data bottleneck. DeepCAD has 178,000 models. Image generation models train on billions. The gap shows in the output: only simple prismatic shapes, nothing complex. Most real-world CAD data is proprietary, and that data bottleneck is the single biggest obstacle to better LLM CAD generation.</p>
<h2>API driving: the real-time approach</h2>
<p>The third approach doesn't generate a complete sequence up front. Instead, the LLM connects to a running CAD application and issues commands one at a time, receiving feedback after each step. This is how CADAgent works with Fusion 360, and how the various MCP bridge projects connect language models to CAD tools.</p>
<p>The workflow looks like this: the LLM says "create a sketch on the XY plane," the CAD tool does it and reports success. The LLM says "draw a rectangle, 50mm by 30mm, centered at the origin," the CAD tool confirms. Each step includes feedback, often a screenshot or model state, so the LLM can adjust. If a fillet fails, the AI can try a different radius. If a sketch lands on the wrong plane, the AI can delete it and start over.</p>
<p>This iterative process is closer to how a human uses a CAD tool, and it produces more reliable results for complex models than generating the entire sequence blind.</p>
<p>The disadvantages: it's slow (each operation requires an API round trip) and expensive (each round trip costs tokens). It's also dependent on the quality of the CAD tool's API. FreeCAD's API, for example, is extensive but inconsistent. A wrong parameter type fails silently. The feedback loop helps, but it doesn't solve the gap between understanding syntax and understanding geometry.</p>
<h2>Where this all breaks down</h2>
<p>Across all three approaches, the failure modes cluster around the same issues.</p>
<p>Spatial reasoning. LLMs don't have it. They generate coordinates and transforms from learned patterns, but they don't understand that two features will interfere, that a wall is too thin to machine, or that a chamfer will remove material needed for a mating surface. Every approach compensates differently: vision models, screenshots, spatial training data. The compensation works for simple parts and breaks down as complexity increases.</p>
<p>Manufacturing awareness. No LLM CAD generation system understands manufacturing constraints. The AI generates geometry in a mathematical vacuum. It doesn't know about draft angles, tool access, or minimum wall thickness. A human designer carries these constraints in their head. An LLM doesn't know they exist unless you put them in the prompt, and even then it applies them inconsistently.</p>
<p>Dimensional precision. LLMs produce the most likely next token, not the geometrically correct next dimension. Ask for a hole at 25.4mm from the edge and you might get 25mm or 26mm. For concept models, this doesn't matter. For production parts, it's the difference between a hole that aligns and one that doesn't.</p>
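<p>The upside is that this particular failure is cheap to catch when you know the intended dimensions. A minimal check, with names of my own invention rather than any tool's API:</p>

```python
def check_dims(spec, measured, tol=0.05):
    """Compare measured dimensions (mm) against the spec within a tolerance.
    Returns the names of dimensions that drifted."""
    return [name for name, want in spec.items()
            if abs(measured.get(name, float("inf")) - want) > tol]

spec     = {"hole_offset": 25.4, "hole_dia": 5.0}
measured = {"hole_offset": 25.0, "hole_dia": 5.0}   # the LLM rounded 25.4 to 25
print(check_dims(spec, measured))  # ['hole_offset']
```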
<h2>Where this is going</h2>
<p>The most promising direction isn't any single approach. It's the combination.</p>
<p>CADSmith and FutureCAD point toward the likely architecture: an LLM generates CadQuery code, a geometric kernel executes and measures it, a validation agent checks against dimensional requirements, and the system iterates until the geometry passes. Code generation provides kernel reliability. Validation loops compensate for the LLM's lack of spatial reasoning.</p>
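<p>A toy version of that loop, with stubs standing in for the LLM and the kernel — this is the control flow, not anyone's actual implementation:</p>

```python
SPEC = {"hole_offset": 25.4, "pad_height": 5.0}

def llm_generate(feedback=None):
    """Stub LLM: first pass rounds 25.4 down to 25; with feedback it corrects."""
    dims = {"hole_offset": 25.0, "pad_height": 5.0}
    if feedback:
        dims.update({k: SPEC[k] for k in feedback})
    return dims

def measure(dims):
    """Stub kernel: the real loop executes CadQuery code and measures the solid."""
    return dims

def iterate(max_rounds=3, tol=0.05):
    """Generate, measure, compare against SPEC, feed failures back, repeat."""
    feedback = None
    for rounds in range(1, max_rounds + 1):
        measured = measure(llm_generate(feedback))
        feedback = [k for k, v in SPEC.items() if abs(measured[k] - v) > tol]
        if not feedback:
            return rounds, measured
    raise RuntimeError("did not converge")

rounds, final = iterate()
# converges on round 2, once the validator flags hole_offset
```

<p>The LLM never learns geometry here. The loop just refuses to accept output that fails measurement, which is the entire bet behind this architecture.</p>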
<p>The practical implication: LLM-generated geometry will get more reliable, not because the LLMs understand space, but because the validation systems improve. The LLM remains a text machine generating text instructions. The geometric validity comes from the kernels and feedback loops wrapped around it.</p>
<p>For now, if you want to use LLM CAD generation in your work, code generation via OpenSCAD or CadQuery is the most reliable path. The <a href="/posts/text-to-cad-open-source">text-to-CAD open source</a> post covers the tools. If you want the convenience of a polished interface, <a href="/posts/zoo-text-to-cad-review">Zoo.dev</a> wraps the whole pipeline into a single prompt box. If you want parametric output inside Fusion 360, <a href="/posts/cadagent-fusion-360">CADAgent</a> uses the API-driving approach with real-time feedback.</p>
<p>And if you want to understand the research foundations, the <a href="/posts/text2cad-paper">Text2CAD paper</a> and the <a href="/posts/how-text-to-cad-works">how text-to-CAD works</a> post lay it out. The technology is real. The geometry it produces is getting better. The gap between "generated" and "production-ready" is still wide, and closing it is going to take better validation systems more than better language models. The LLMs already know how to write the code. They just don't know yet whether the code they wrote makes something you can actually build.</p>
]]></content:encoded>
    </item>
    <item>
      <title>FreeCAD AI plugins: what exists in 2026</title>
      <link>https://blog.texocad.ai/posts/freecad-ai-plugin</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/freecad-ai-plugin</guid>
      <pubDate>Tue, 03 Mar 2026 00:00:00 GMT</pubDate>
      <description>FreeCAD&apos;s Python scripting makes it a natural target for AI integration. A few plugins exist. Most are experimental. Here&apos;s the honest status.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>freecad</category>
      <category>open-source</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> FreeCAD AI plugins in 2026 are limited to experimental projects: AI-assisted Python macro generation using LLMs, and community plugins for natural language to FreeCAD operations. No mature, production-ready AI plugin exists for FreeCAD. The Python scripting API makes it technically feasible but the ecosystem is underdeveloped.</p>
<p>I installed three different FreeCAD AI plugins last weekend. One crashed FreeCAD on launch. One worked but generated a cube when I asked for a bracket. The third, the FreeCAD AI Workbench, actually generated a recognizable bracket with mounting holes and the correct overall dimensions, then asked me if I wanted to refine it. I sat there for a moment, genuinely surprised, because the last time I'd tried AI-anything in FreeCAD, the experience was closer to reading tea leaves than engineering.</p>
<p>The status of FreeCAD AI plugins in 2026 is a lot like FreeCAD itself: technically capable, inconsistently polished, improving faster than you'd expect, and still not something I'd hand to a colleague without a disclaimer. Here's what exists, what works, and what's still held together with optimism and duct tape.</p>
<h2>The FreeCAD AI Workbench</h2>
<p>The most developed project is <a href="https://github.com/ghbalf/freecad-ai">ghbalf/freecad-ai</a>, an add-on workbench that puts a chat interface directly inside FreeCAD. Currently at v0.7.0-alpha (April 2026), it connects FreeCAD to LLMs and lets you describe parts in natural language.</p>
<p>The feature list is ambitious for an alpha project:</p>
<ul>
<li>a chat panel with streaming responses</li>
<li>42 structured FreeCAD operations exposed as callable tools</li>
<li>Plan and Act modes (Plan shows you the code before executing, Act runs it directly)</li>
<li>support for 20+ LLM providers, including Anthropic, OpenAI, Ollama, Gemini, and DeepSeek</li>
<li>image support so the AI can see your viewport</li>
<li>error self-correction with up to three retry attempts</li>
<li>reusable instruction sets the developer calls "Skills" for common parts like enclosures, gears, and fastener patterns</li>
</ul>
<p>I tested it with Claude and GPT-4. The results were uneven but occasionally impressive. Simple prismatic parts (a plate with holes, a U-bracket, a rectangular enclosure) generated correctly more often than not. The AI created sketches on the right planes, applied Pad operations with correct dimensions, and placed features in approximately the right positions. The error self-correction is genuinely useful: when the first attempt failed (a wrong method call, a missing recompute), the plugin caught the error, sent it back to the LLM, and the second attempt usually worked.</p>
<p>Complex geometry was a different story. A request for a part with intersecting fillets produced an error cascade that the self-correction couldn't recover from. An enclosure with internal ribs generated the ribs as separate bodies instead of features on the main body. A request for a pattern of holes created the first hole correctly and then placed the remaining five in a line instead of a circular pattern. These aren't surprising failures. FreeCAD's Python API has patterns that even experienced scripters find unintuitive, and an LLM working through 42 tool calls to build complex geometry is going to trip over the same inconsistencies.</p>
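<p>The circular-pattern failure stings because the math involved is three lines of trigonometry. Here's what the hole centers should have been, for a hypothetical six-hole pattern:</p>

```python
import math

def bolt_circle(n, radius, start_deg=0.0):
    """Centers for n holes evenly spaced on a circle of the given radius."""
    return [(round(radius * math.cos(math.radians(start_deg + i * 360 / n)), 3),
             round(radius * math.sin(math.radians(start_deg + i * 360 / n)), 3))
            for i in range(n)]

print(bolt_circle(6, 20))
# six points, 60 degrees apart -- the plugin placed five of them on a line instead
```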
<p>The Plan mode is the safer option and the one I'd recommend. It shows you the generated code before executing it, which means you can catch obvious problems before they corrupt your model. In Act mode, the AI just runs code directly, and when it goes wrong, you're left cleaning up a partially-built feature tree that might be easier to delete and restart than to fix.</p>
<h2>FreeCAD MCP servers</h2>
<p>The MCP approach, connecting FreeCAD to AI assistants via the Model Context Protocol, has produced several projects.</p>
<p><a href="https://github.com/proximile/FreeCAD-MCP">proximile/FreeCAD-MCP</a> offers 57 CAD operation tools, Docker-containerized headless FreeCAD execution, and vision AI analysis. The containerized approach is interesting because FreeCAD's headless mode has historically been fragile, and running it in Docker isolates the environment from your system. The 57-tool set covers sketching, Part workbench operations, measurements, and export.</p>
<p><a href="https://github.com/Coben-3d/freecad-mcp">Coben-3d/freecad-mcp</a> is another MCP implementation, and several others have appeared on GitHub with varying levels of maturity. The MCP ecosystem for FreeCAD is still in the "many experiments, no standard" phase. None of these have the stability I'd want for regular use, but they demonstrate the approach.</p>
<p>The MCP path has one key advantage over the workbench plugin: the AI runs outside FreeCAD. If the generated code crashes, it crashes a subprocess, not your active FreeCAD session with unsaved work. The workbench plugin runs inside your live FreeCAD instance, which means a bad script can corrupt your model or crash the application. I've experienced both.</p>
<h2>CADialogue</h2>
<p><a href="https://github.com/Hiram31/CADialogue">CADialogue</a> takes a different approach entirely. It's a multimodal system that accepts natural language, speech, and images as input for generating FreeCAD Python macros. The speech input means you can literally talk to your CAD tool, though in practice I find typing more precise and less likely to confuse "fillet" with "fill it."</p>
<p>The standout feature is macro caching: once a script has been generated for a particular operation, the system caches it and reuses it for similar requests. The developers report an 85x speedup on repeated tasks, which makes sense because LLM inference is slow and cached lookups are fast. For workflows where you're generating many variations of similar parts, this could be useful. For one-off parts, the cache doesn't help.</p>
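<p>The trick itself is easy to sketch. A toy cache keyed on a normalized request string — CADialogue's actual matching is more sophisticated than this, but the shape is the same:</p>

```python
_cache = {}

def get_macro(request, generate):
    """Return a cached macro for an equivalent request, else generate and store.
    Normalization here is crude: lowercase, collapse whitespace."""
    key = " ".join(request.lower().split())
    if key not in _cache:
        _cache[key] = generate(request)   # the slow LLM call
    return _cache[key]

calls = []
def fake_llm(req):
    calls.append(req)
    return f"# macro for: {req}"

get_macro("Make a 20mm cube", fake_llm)
get_macro("make a  20mm   cube", fake_llm)  # hits the cache, no LLM call
print(len(calls))  # 1
```

<p>A dictionary lookup versus seconds of inference is where a large speedup on repeated tasks comes from; the hard part is deciding when two requests are "the same," which crude normalization doesn't solve.</p>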
<p>CADialogue also includes human-in-the-loop refinement, meaning you can approve, reject, or modify the AI's output before it executes. This is the pattern I prefer. AI generating CAD code that runs immediately without human review is a workflow I don't trust enough to use on anything I care about.</p>
<h2>GPT4FreeCAD and older projects</h2>
<p><a href="https://github.com/revhappy/GPT4FreeCAD">GPT4FreeCAD</a> is an earlier project from 2023 that connects GPT-4 to FreeCAD for generating Python scripts and sketches. It's less actively maintained than the newer projects, and the FreeCAD API has evolved since it was written, but it proved the concept worked. Several of the newer projects cite it as inspiration.</p>
<p><a href="https://github.com/alekssadowski95/FreeCAD-AI-Toolbar">FreeCAD-AI-Toolbar</a> is another early effort, providing a toolbar interface for AI interaction within FreeCAD. The ambition was right; the execution was constrained by the LLM capabilities available at the time.</p>
<p>These older projects are worth mentioning because they show that the community has been trying to make this work for years. The recent improvement isn't because someone had a new idea. It's because the LLMs got better at generating correct Python, and the tool integration patterns (MCP, structured tool calling) matured enough to make the connection reliable.</p>
<h2>Why FreeCAD is harder than OpenSCAD for AI</h2>
<p>The <a href="/posts/openscad-ai">OpenSCAD + AI</a> workflow works well partly because OpenSCAD's language is tiny and self-contained. The AI generates a text file, OpenSCAD renders it, done. There's no state management, no active session, no feature tree to corrupt.</p>
<p>FreeCAD is fundamentally different. It's a stateful application with a complex object model. Creating a part involves:</p>
<ol>
<li>starting a new document,</li>
<li>creating a body,</li>
<li>creating a sketch on a specific plane,</li>
<li>adding geometric constraints to the sketch,</li>
<li>closing the sketch,</li>
<li>applying a feature (Pad, Pocket, Revolve), and</li>
<li>calling <code>recompute()</code> at the right moments.</li>
</ol>
<p>Each step depends on the previous steps having completed correctly. If the AI gets step three wrong, steps four through seven produce garbage or fail.</p>
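<p>Written out as macro text, the dependency chain is visible. I'm keeping it in a string so it can be shown outside FreeCAD; the API names are the ones recent FreeCAD releases use, but as noted below they shift between versions, so treat this as a sketch of the sequence rather than guaranteed-current code:</p>

```python
# The macro an LLM has to produce, held as a string for illustration.
# Paste the body into FreeCAD's Python console to actually run it.
MACRO = """\
import FreeCAD

doc = FreeCAD.newDocument("Plate")
body = doc.addObject("PartDesign::Body", "Body")
sketch = body.newObject("Sketcher::SketchObject", "Sketch")
sketch.AttachmentSupport = (doc.XY_Plane, [""])   # older releases: sketch.Support
sketch.MapMode = "FlatFace"
# geometry and constraints go here -- the usual failure point
doc.recompute()
pad = body.newObject("PartDesign::Pad", "Pad")
pad.Profile = sketch
pad.Length = 5.0
doc.recompute()
"""

# Every later line references an object a previous line created; reorder any
# two steps and the script fails. That is the statefulness described above.
```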
<p>The API surface is also much larger. FreeCAD has multiple workbenches (Part, Part Design, Sketcher, Draft, Assembly, etc.), each with its own scripting conventions. A script that works in the Part workbench might use different patterns than the same operation in Part Design. The documentation is extensive but uneven, and LLMs occasionally mix up the different workbench APIs, generating code that uses Part module functions when the context requires Part Design, or vice versa.</p>
<p>Constraint handling is the most consistent failure point. FreeCAD's Sketcher requires geometric constraints (coincident, tangent, perpendicular, fixed, dimensional) to fully constrain a sketch before it can be used as a feature profile. LLMs routinely generate under-constrained sketches, producing geometry that's valid but not deterministic. The sketch looks right, but it can shift unpredictably when the model recomputes. Getting an LLM to generate fully constrained sketches consistently is a problem none of these plugins has solved yet.</p>
<h2>The honest assessment</h2>
<p>No mature, production-ready FreeCAD AI plugin exists in 2026. The FreeCAD AI Workbench is the closest, and it's in alpha. The MCP servers work but are unstable. The older projects are proof-of-concept quality.</p>
<p>That said, the trajectory is clearly upward. The FreeCAD AI Workbench went from initial release to v0.7.0 in a few months, adding structured tool calling, vision support, error correction, and multi-provider support at a pace that suggests active, sustained development. The MCP ecosystem is growing. The LLMs themselves keep getting better at Python, which directly improves FreeCAD script quality.</p>
<p>If you want to try AI-assisted FreeCAD today, start with the FreeCAD AI Workbench in Plan mode. Use a capable model (Claude or GPT-4). Keep your requests simple. Save your work before letting the AI execute anything. And expect to read and fix the generated code, because the plugin is a starting point, not a finished product.</p>
<p>For simpler parts that don't need STEP export or assembly features, the <a href="/posts/openscad-ai">OpenSCAD path</a> is more reliable today. For parts that need FreeCAD's full capabilities, <a href="/posts/freecad-ai-macro">AI-generated macros</a> with manual review and editing remain the most practical approach. The plugins are getting there. They aren't there yet. If FreeCAD's history is any guide, they'll arrive eventually, slightly later than hoped, and with at least one dependency that requires building from source.</p>
]]></content:encoded>
    </item>
    <item>
      <title>FreeCAD + AI: automating CAD with Python and LLMs</title>
      <link>https://blog.texocad.ai/posts/freecad-ai-macro</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/freecad-ai-macro</guid>
      <pubDate>Mon, 02 Mar 2026 00:00:00 GMT</pubDate>
      <description>FreeCAD runs on Python. LLMs write Python. The combination works better than you&apos;d expect for simple parametric parts, and worse than you&apos;d hope for anything complex.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>freecad</category>
      <category>python</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> FreeCAD&apos;s Python API can be combined with LLMs (ChatGPT, Claude) to generate parametric CAD models through AI-written macros. The workflow: describe a part to the LLM, get FreeCAD Python code, run it in FreeCAD&apos;s console. Works for simple prismatic parts; fails on complex geometry, constraints, and assemblies.</p>
<p>I needed a mounting plate for a small stepper motor, the kind with four M3 holes on a 31mm square pattern and a central bore for the shaft. Simple part. Ten minutes in FreeCAD, five if the sketcher was feeling cooperative. Instead I opened Claude, typed out the dimensions, and asked for a FreeCAD Python macro. Forty seconds later I had a script. I pasted it into FreeCAD's Python console, hit Enter, and watched it build the part step by step: new document, new body, sketch on XY plane, rectangle, holes, pad, central bore. The dimensions were correct. The hole pattern was correct. The central bore was the right diameter. It even added fillets on the outer corners, which I hadn't asked for but didn't mind.</p>
<p>Then I asked for a version with countersunk holes instead of through-holes, and the script crashed on the second sketch because Claude generated a <code>Part.makeCone</code> call with arguments in the wrong order. Welcome to AI-assisted FreeCAD.</p>
<h2>The basic workflow</h2>
<p>The concept is straightforward. FreeCAD exposes nearly everything through a Python API. You can create documents, build sketches, apply features, set constraints, modify parameters, export files, and run simulations, all from Python. LLMs are good at writing Python. The connection writes itself.</p>
<p>The workflow:</p>
<ol>
<li>Describe the part to an LLM with specific dimensions, features, and positions.</li>
<li>Ask for a FreeCAD Python macro.</li>
<li>Copy the script.</li>
<li>Open FreeCAD, go to View > Panels > Python Console.</li>
<li>Paste and run.</li>
<li>Inspect the result. Fix what's wrong, either in the script or in the GUI.</li>
</ol>
<p>For simple parts, this works more often than it fails. A rectangular plate with holes. An L-bracket with mounting features. A cylindrical standoff. A simple enclosure. Parts built from extrusions, cuts, and basic features on flat sketches generate reliably because the API calls are straightforward and well-documented enough that LLMs have seen plenty of examples.</p>
<h2>What to tell the LLM</h2>
<p>FreeCAD's Python API has multiple ways to do the same thing, and LLMs don't always pick the right one. Being specific about which approach you want saves debugging time.</p>
<p>Tell the LLM to use the Part Design workbench, not the Part workbench, for parametric modeling. Part Design creates features with history (Pad, Pocket, Fillet, Chamfer) that you can edit later. The Part workbench creates shapes directly, which is simpler to script but produces non-parametric geometry. If you want to modify the model after generation, you need Part Design.</p>
<p>Ask for fully constrained sketches. This is the most important instruction and the one LLMs most often ignore. A FreeCAD sketch needs geometric constraints (coincident points, horizontal/vertical lines, fixed positions) and dimensional constraints (lengths, angles, radii) to be fully determined. Under-constrained sketches produce geometry that looks right but shifts unpredictably when you modify anything. The LLM needs to add constraints, not just draw geometry.</p>
<p>A prompt that works for me:</p>
<p>"Write a FreeCAD Python macro using Part Design workbench. Create a new document and body. Sketch on the XY plane. Draw a 60mm x 40mm rectangle centered on the origin. Fully constrain the sketch with dimensional and positional constraints. Pad the sketch 5mm. Add four 3.5mm through-holes at positions (20, 12), (-20, 12), (20, -12), (-20, -12) relative to center, each as a separate sketch on the top face with a Pocket through all. Add 2mm fillets on all outer vertical edges. Call recompute after each operation."</p>
<p>That last part matters. FreeCAD needs explicit <code>recompute()</code> calls to update the model after scripted operations. LLMs forget this about a third of the time, and the result is a script that runs without errors but produces an empty or partially-built model because the features never actually computed.</p>
<h2>The API problems LLMs stumble on</h2>
<p>FreeCAD's scripting API is powerful but inconsistent, and the inconsistencies are exactly where LLMs fail.</p>
<p>Sketch constraint syntax trips up every model I've tested. FreeCAD uses index-based references for constraint targets: <code>Sketch.addConstraint(Sketcher.Constraint('Coincident', 0, 2, 1, 1))</code> means "make the endpoint of the first line coincident with the start point of the second line" (geometry index first, then point position: 1 for start, 2 for end). Those numeric indices depend on the order geometry was added to the sketch. LLMs generate constraints with wrong indices constantly, because the correct indices depend on the runtime state of the sketch, which the LLM can't see.</p>
<p>The coordinate system differences between workbenches cause silent failures. Part Design sketches use local coordinates on the sketch plane. The Part module uses global coordinates. <code>FreeCAD.Vector(10, 0, 0)</code> means different things in different contexts, and LLMs mix them up.</p>
<p>Method names change between FreeCAD versions. A script generated for FreeCAD 0.21 might use deprecated methods that fail in FreeCAD 1.0. LLMs train on data from multiple versions and don't always generate code for the version you're running. If a script fails on a method call, check whether the method name is current. <code>Part.show()</code> vs <code>FreeCADGui.ActiveDocument.ActiveView.fitAll()</code> is the kind of thing that changes between versions and breaks silently.</p>
<p>Topological naming is the deep problem. FreeCAD has a long-standing issue where feature references (faces, edges, vertices) can change identity when earlier features are modified. This affects scripted operations that reference specific faces: "add a sketch on Face6 of the Pad" works until you modify the Pad, at which point Face6 might be a different face. LLMs generate face references based on expected topology, and those references are fragile. FreeCAD 1.0 has made significant progress on this issue with the toponaming fix, but the problem isn't fully solved and AI-generated scripts still hit it.</p>
<h2>Claude vs ChatGPT for FreeCAD macros</h2>
<p>I've generated hundreds of FreeCAD macros with both, and the differences are noticeable.</p>
<p>Claude produces cleaner code structure. It breaks operations into functions, names variables descriptively, and adds comments that are actually helpful. It handles the Part Design workflow more consistently, creating proper Body > Sketch > Feature sequences. It's also better at generating constrained sketches, though still not reliable enough to skip checking.</p>
<p>ChatGPT (GPT-4 and later) generates code that works on the first run more often for simple parts. It seems to have more FreeCAD-specific training data, or at least more recent examples. For basic Pad/Pocket operations, ChatGPT's success rate is slightly higher. For more complex operations involving multiple sketches, datum planes, or patterns, Claude handles the complexity better.</p>
<p>Both fail on the same categories of problems: sketch constraints, topological references, and complex boolean operations. The failure modes differ (Claude tends to over-specify constraints in ways that conflict, ChatGPT tends to under-specify them), but the end result is the same: you're debugging a Python script against FreeCAD's API documentation.</p>
<p>Local models through Ollama are usable for trivial geometry and unreliable for anything else. I've had success with DeepSeek Coder for basic shapes, but the moment the script needs to reference specific faces or add non-trivial constraints, local models produce code that doesn't survive contact with FreeCAD's runtime.</p>
<h2>Making the output actually useful</h2>
<p>The scripts LLMs generate are starting points. Here's how I turn them into useful parts.</p>
<p>Run in Plan mode if using the <a href="/posts/freecad-ai-plugin">FreeCAD AI Workbench</a>. If pasting manually, read the script before running it. Look for obvious problems: missing <code>recompute()</code> calls, hard-coded face references, unconstrained sketches, operations on the wrong plane.</p>
<p>After running, check the model tree. Every feature should show a green checkmark. Yellow warnings mean something computed but not as expected. Red means failure. If a feature is red, check the Python report view at the bottom for the error message.</p>
<p>Measure critical dimensions. Use FreeCAD's measurement tool to verify that holes are the right diameter, walls are the right thickness, and features are positioned correctly. LLMs get dimensions wrong often enough that this step isn't optional.</p>
<p>If the script needs modification, edit it and re-run on a new document rather than trying to modify the generated model in the GUI. The parametric relationships in a scripted model are often fragile, and manually editing a feature can break downstream references in ways that are harder to fix than just adjusting the script and regenerating.</p>
<p>Save working scripts as macros. FreeCAD stores macros in its macros directory, and you can re-run them anytime. I keep a small library of AI-generated macros for common parts (standoffs, mounting plates, simple enclosures) that I've debugged and parameterized. Changing a dimension at the top of the script and re-running is faster than regenerating from the LLM, and the output is predictable because the script is known to work.</p>
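<p>The pattern that makes these macros reusable is pushing every dimension into arguments and regenerating the script text. A hypothetical standoff macro reduced to that shape (names and dimensions are mine):</p>

```python
def standoff_macro(height=10.0, outer_dia=6.0, hole_dia=3.2):
    """Render a FreeCAD macro for a plain cylindrical standoff as text.
    Change the arguments, re-run, paste the result into the console."""
    return f"""\
import FreeCAD, Part
doc = FreeCAD.newDocument("Standoff")
outer = Part.makeCylinder({outer_dia / 2}, {height})
hole  = Part.makeCylinder({hole_dia / 2}, {height})
Part.show(outer.cut(hole))
doc.recompute()
"""

print(standoff_macro(height=12.0))
```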
<h2>When to use this instead of just modeling</h2>
<p>The honest calculation: if you can model the part in FreeCAD in under fifteen minutes, just model it. The time you spend writing a prompt, waiting for the LLM, debugging the script, and verifying the output usually exceeds fifteen minutes for a first-time generation. The value comes from reuse and iteration.</p>
<p>If you need ten variations of a mounting plate with different hole patterns, generating a parametric script once and modifying the variables for each variation is faster than modeling each plate individually. If you're exploring design options and want to see a bracket at 40mm, 50mm, and 60mm heights quickly, a parametric script with a single variable change is faster than three manual models.</p>
<p>The other use case is learning. If you're new to FreeCAD's Python API, asking an LLM to generate a script for a part you understand, then reading the script to see how the API calls work, is a surprisingly effective tutorial. I learned more about FreeCAD's Sketcher constraint API from reading AI-generated scripts than from the official documentation, partly because the scripts show complete working examples while the docs show isolated function signatures.</p>
<h2>The practical state of things</h2>
<p>FreeCAD + LLMs for macro generation works. Not seamlessly, not reliably for complex parts, not without debugging. But for simple parametric geometry, the workflow produces usable results faster than I expected when I first tried it.</p>
<p>The <a href="/posts/freecad-ai-plugin">FreeCAD AI plugins</a> are trying to make this smoother by handling the integration inside FreeCAD itself. They're getting better. For now, the manual workflow of prompting an LLM, pasting into the console, and fixing the errors is the most reliable approach because you stay in control of every step.</p>
<p>If you're coming from <a href="/posts/openscad-ai">OpenSCAD + AI</a>, expect a rougher experience. OpenSCAD's small language makes AI generation more predictable. FreeCAD's large API makes it more capable but more fragile. The <a href="/posts/text-to-cad-open-source">text-to-CAD open source</a> landscape has room for both approaches: OpenSCAD for quick parametric parts you'll 3D print, FreeCAD for parts that need STEP export, proper B-Rep geometry, or features that OpenSCAD can't express.</p>
<p>The combination of FreeCAD and LLMs isn't magic. It's a moderately capable junior colleague who knows the API vocabulary, works fast, and needs supervision. I've worked with actual junior colleagues like that. You give them clear instructions, check their work, fix their mistakes, and appreciate that they saved you some time even if the result needed polishing. Same thing here, except this colleague doesn't drink the last of the coffee.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Best AI CAD tools in 2026: honest picks</title>
      <link>https://blog.texocad.ai/posts/best-ai-cad-tools-2026</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/best-ai-cad-tools-2026</guid>
      <pubDate>Sun, 01 Mar 2026 00:00:00 GMT</pubDate>
      <description>There are a lot of AI CAD tools now. Most of them are not very good. Here are the ones that are actually worth your time in 2026.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>tools</category>
      <category>2026</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Best AI CAD tools in 2026: Zoo.dev (best text-to-CAD for STEP output), CADAgent (best open-source Fusion 360 integration), OpenSCAD+ChatGPT (best code-based workflow), SolidWorks AURA (best vendor copilot), Onshape AI Advisor (best browser-based assistant). None replace manual CAD skills, but several save real time on specific tasks.</p>
<p>I spent the first three months of 2026 testing every AI CAD tool I could find. My downloads folder looks like a crime scene of STEP files, STL exports, broken Python scripts, and screenshots of geometry that should never have existed. Some of these tools genuinely saved me time. Some of them wasted more time than they saved. One of them generated something that Fusion 360 refused to import, which I didn't even know was possible with a STEP file. The charitable interpretation is that the field is maturing. The honest interpretation is that the field is growing faster than the quality is improving, and finding the tools worth using requires wading through a lot of tools that aren't.</p>
<p>Here are my honest picks for 2026. Not the longest list. Not the most diplomatic. Just the tools that have actually earned a place in my workflow or come close enough that I check on them regularly.</p>
<h2>Best overall text-to-CAD: Zoo.dev</h2>
<p>Zoo is the tool I keep coming back to, which, from me, is a real compliment, because I don't enjoy depending on SaaS products for things I can do with my own hands.</p>
<p>Zoo runs on a GPU-native geometric kernel called KittyCAD. The output is real B-Rep geometry. You type a prompt, you get a solid model with proper faces and edges, and you can import it into Fusion 360, SolidWorks, or any other tool that reads STEP files. The geometry behaves like geometry. You can select faces, add features, measure things. It's not perfect, but it's real in a way that most competitors aren't.</p>
<p>For simple to moderate parts (a flanged bracket, a shaft collar, a basic enclosure), the results are genuinely useful starting points. I've used Zoo-generated geometry as the foundation for prototype parts that I then refined manually. That saves me the boring first ten minutes of sketching obvious profiles, which on a day with six parts to model adds up to an hour I can spend on the hard stuff.</p>
<p>The weaknesses are real. Complex prompts produce unreliable geometry. Internal faces appear where they shouldn't. The AI doesn't understand manufacturing constraints. It will generate wall thicknesses that no injection mold could fill and call it done. And there's no parametric history; the output is a dead solid. But for first-draft generation with real format support and a working free tier, nothing else comes close.</p>
<p>The <a href="/posts/zoo-text-to-cad-review">Zoo text-to-CAD review</a> and the <a href="/posts/zoo-text-to-cad-tutorial">Zoo tutorial</a> have the full breakdown. Start there if you've never tried text-to-CAD.</p>
<h2>Best parametric output: CADAgent</h2>
<p>If Zoo is the best at generating STEP files, <a href="/posts/cadagent-fusion-360">CADAgent</a> is the best at generating models you can actually work with afterward.</p>
<p>CADAgent is an open-source Fusion 360 add-in that builds models directly inside Fusion's timeline. It doesn't import geometry. It creates sketches, applies dimensions, extrudes features, adds fillets. You watch the model construct itself, feature by feature, and the result is a fully parametric model with editable history. Change a sketch dimension and the rest of the model updates. That's not something any other text-to-CAD tool does.</p>
<p>The catches are significant. It requires Fusion 360, an Anthropic API key (so there's a per-use cost), and patience for the inevitable moments when a complex prompt fails mid-build and leaves you with a half-finished model and a red timeline. For simple parts, it's magical. For complex parts, it's a coin flip. But when it works, the output quality is in a different league from tools that generate orphaned solids.</p>
<p>I recommend CADAgent specifically for Fusion 360 users who want AI assistance without leaving their environment. If you use SolidWorks or NX, this isn't for you. If you use Fusion and you've ever wished someone else would do the first rough sketch of a bracket while you handle the interesting design work, try it.</p>
<h2>Best code-based workflow: OpenSCAD with an LLM</h2>
<p>This isn't a single product. It's a workflow: describe a part in English, have ChatGPT, Claude, or another LLM write an OpenSCAD script, render the result in OpenSCAD. Simple, unglamorous, and surprisingly effective.</p>
<p>OpenSCAD is a code-based CAD tool that's been around for years. You write a script using primitives, booleans, and transformations, and OpenSCAD renders the geometry. LLMs are good at writing code, and OpenSCAD's scripting language is well-documented and constrained enough that the AI generates decent scripts for simple to moderate parts.</p>
<p>The advantages: the code is inspectable and editable. The parametric relationships are explicit in the script. You can version-control the model in git. You don't need a proprietary license. Projects like PromptSCAD and the OpenSCAD MCP Server have formalized the workflow further, adding visual feedback loops and web interfaces.</p>
<p>The disadvantages: OpenSCAD's language is awkward for organic shapes. Export is STL, not STEP, which limits manufacturing workflows. The LLM still makes mistakes that require reading and fixing the code. And the geometry is CSG-based, which is conceptually different from the sketch-and-extrude modeling most CAD users are accustomed to.</p>
<p>I use this workflow for quick programmatic parts, things with regular patterns, parametric families, or geometry that's easier to describe in code than to draw. A grid of mounting holes. A custom standoff with a configurable thread boss. A test fixture with parametric dimensions. For those parts, typing a description and getting a working script is faster than opening Fusion.</p>
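<p>To make the "parametric families" point concrete, here is a minimal sketch of the kind of script this workflow produces: a plain Python function that emits OpenSCAD source for a plate with a grid of mounting holes. Every dimension and name below is illustrative, not taken from any particular tool; in practice the LLM writes something like this for you and your job is to read and fix it.</p>

```python
# Sketch of a parametric family: a Python function that emits an
# OpenSCAD script for a plate with a cols x rows grid of holes.
# All dimensions here are illustrative defaults, not from any tool.

def mounting_plate_scad(cols=4, rows=2, pitch=20.0,
                        hole_d=5.0, margin=10.0, thickness=3.0):
    """Return OpenSCAD source for a plate with a cols x rows hole grid."""
    width = (cols - 1) * pitch + 2 * margin
    depth = (rows - 1) * pitch + 2 * margin
    holes = []
    for i in range(cols):
        for j in range(rows):
            x = margin + i * pitch
            y = margin + j * pitch
            # Overshoot the cut slightly so the boolean difference
            # is unambiguous at the top and bottom faces.
            holes.append(
                f"    translate([{x}, {y}, -1]) "
                f"cylinder(h={thickness + 2}, d={hole_d}, $fn=32);"
            )
    return (
        "difference() {\n"
        f"    cube([{width}, {depth}, {thickness}]);\n"
        + "\n".join(holes) + "\n"
        "}\n"
    )

print(mounting_plate_scad())
```

<p>Save the output as <code>plate.scad</code> and render it headlessly with <code>openscad -o plate.stl plate.scad</code>. The point of the exercise: changing <code>cols</code> or <code>pitch</code> regenerates the whole family, which is exactly the kind of edit that's tedious to redo in a sketch-based tool.</p>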
<p>The <a href="/posts/text-to-cad-open-source">text-to-CAD open source</a> post covers the OpenSCAD ecosystem in detail.</p>
<h2>Best vendor copilot: SolidWorks AURA/LEO</h2>
<p>If you're already paying for SolidWorks 2026, the AURA and LEO companions are the most feature-complete vendor AI offering available today.</p>
<p>LEO handles practical design assistance: predictive command access, error diagnosis, assembly structure suggestions. AURA handles broader guidance, connecting you with resources and helping with brainstorming. They're available through SolidWorks Labs in the 2026 release.</p>
<p>What makes SolidWorks stand out isn't any single AI feature. It's the breadth. AI-powered drawing generation from text prompts. What's Wrong Analysis for debugging failed features. Assembly Structure Generator. Material Manager. Dassault shipped more AI features in the first quarter of 2026 than most vendors have announced in total. Whether all of them work brilliantly is a separate question, and the answer is "not yet, but several of them work well enough to be useful."</p>
<p>The drawing generation is the feature I'd highlight specifically. If you produce standard engineering drawings regularly, having AI generate 70-80% of a drawing automatically is a real time savings. Not glamorous. Not a keynote moment. But the kind of thing that makes your Friday afternoon less painful.</p>
<p>For the full feature breakdown: <a href="/posts/solidworks-ai-features-2026">SolidWorks AI features 2026</a> and the <a href="/posts/solidworks-aura-ai">SolidWorks AURA review</a>.</p>
<h2>Best browser-based assistant: Onshape AI Advisor</h2>
<p>Onshape AI Advisor does exactly one thing: it helps you use Onshape better. It answers questions, suggests techniques, walks you through troubleshooting, and pulls from verified documentation. It doesn't generate geometry. It doesn't execute commands. It teaches.</p>
<p>That sounds unimpressive next to tools that generate 3D models from text. But Onshape AI Advisor is the most reliable AI feature in any CAD tool I've tested. It works consistently. The answers are accurate. The suggestions are useful. And because Onshape is cloud-native, the AI features update continuously without waiting for an annual release cycle.</p>
<p>If you're learning Onshape, migrating from another CAD tool, or just tired of searching through help docs that feel like they were organized by committee, AI Advisor is genuinely worth your time. PTC has bigger AI plans for Onshape, including agent workflows and FeatureScript generation, but the current shipping product is a documentation assistant done right.</p>
<p>The <a href="/posts/onshape-ai-advisor">Onshape AI Advisor</a> post has more detail.</p>
<h2>Honorable mentions</h2>
<p>Autodesk Assistant in Fusion 360 can execute modeling commands from natural language and is the closest any major vendor has come to text-to-command geometry creation. It's in Tech Preview and works for simple operations. If Autodesk finishes shipping what they've demoed, this belongs higher on the list. For now, it's a promising work in progress. The <a href="/posts/fusion-360-ai-features">Fusion 360 AI features</a> post tracks its progress.</p>
<p>Solid Edge 2026's automation features, specifically Magnetic Snap Assembly and Automatic Drawing Creation, are practical time-savers that fly under the radar. They're not conversational AI, but they're AI-powered and they work.</p>
<p>AdamCAD is fast and cheap for simple parametric parts with dimensional sliders. Good for 3D printing workflows and quick iterations. Not a tool for production engineering.</p>
<h2>Tools I tested and wouldn't recommend</h2>
<p>I'm not going to name every tool that disappointed me, but the patterns are worth calling out.</p>
<p>Several tools generate mesh geometry and call it "CAD." It's not. Mesh is mesh. If the output can't be imported into Fusion or SolidWorks as a solid body with real faces, it's a 3D model, not a CAD model. The distinction matters for anyone who actually manufactures parts.</p>
<p>Several tools are thin wrappers around GPT-4 with a CAD-themed prompt template. You can do the same thing yourself by pasting a prompt into ChatGPT and generating OpenSCAD code. The wrapper doesn't add enough value to justify a subscription.</p>
<p>Several tools promise "AI-native CAD" but are really web-based modeling tools with an AI chat panel bolted on. The <a href="/posts/ai-native-cad">AI-native CAD</a> post explores what that term should mean versus what it usually means.</p>
<h2>The honest verdict</h2>
<p>No AI CAD tool in 2026 is ready to replace manual modeling skills. Every output I've tested needs some level of cleanup, from minor tweaks to complete rebuilds. The tools that are useful are useful in specific, bounded situations: generating first drafts, automating boring documentation, finding commands faster, debugging errors. That's real value. It's just not the revolution the marketing suggests.</p>
<p>My working setup in April 2026: Fusion 360 for actual modeling. Autodesk Assistant for command shortcuts when I forget where a feature lives. Zoo.dev for generating quick starting geometry when I have a simple part and don't feel like sketching from zero. CADAgent for parts I want with full parametric history. OpenSCAD with Claude for programmatic geometry. SolidWorks with LEO on the rare occasions I open it. That combination saves me maybe two to three hours a week, which over a year is significant, and which I wouldn't have believed twelve months ago.</p>
<p>The tools are getting better. The hype is getting louder. The gap between them is the thing to watch. For the full landscape, the <a href="/posts/ai-cad-software-2026">AI CAD software 2026</a> post has every tool and feature I've tracked, and the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> puts it all in context.</p>
]]></content:encoded>
    </item>
    <item>
      <title>CADAgent for Fusion 360: open-source text-to-CAD inside Fusion</title>
      <link>https://blog.texocad.ai/posts/cadagent-fusion-360</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/cadagent-fusion-360</guid>
      <pubDate>Sun, 01 Mar 2026 00:00:00 GMT</pubDate>
      <description>CADAgent is an open-source Fusion 360 add-in that generates parametric models from text prompts using your own Anthropic API key. It actually builds features in the timeline.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>cadagent</category>
      <category>fusion-360</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> CADAgent (GitHub: er-fo/CADAgent, released March 2026) is a free, open-source Fusion 360 add-in that generates parametric CAD models from text prompts. It uses Anthropic&apos;s Claude API to create real Fusion 360 features with timeline history, not imported geometry. Requires your own API key.</p>
<p>I installed CADAgent on a Thursday evening after seeing someone mention it in a Fusion 360 forum thread. The installation took about fifteen minutes, which included setting up an Anthropic API key, cloning the GitHub repo, and fumbling with Fusion's add-in loader because I always forget where Scripts and Add-Ins lives in the menu. By the time I typed my first prompt, the coffee I'd made specifically for the occasion was already lukewarm.</p>
<p>I typed: "Create a flanged mounting bracket, 80mm wide, 40mm tall, 3mm thick, with four M5 through holes on the flange." Then I watched. Fusion 360 started drawing a sketch. Lines appeared. Dimensions locked in. The sketch extruded. A second sketch appeared on the flange face. Circles for the holes. A cut-extrude punched them through. Fillets appeared on the corners. The timeline at the bottom of the screen filled up with features, real features, the kind you can click on and roll back and edit. The whole process took maybe forty seconds.</p>
<p>I sat there looking at a fully parametric bracket that I hadn't drawn. I clicked on the first sketch in the timeline, changed the width from 80mm to 100mm, and the entire model updated cleanly. That moment, right there, is why CADAgent is different from every other text-to-CAD tool I've tested.</p>
<h2>What CADAgent actually is</h2>
<p>CADAgent is an open-source Fusion 360 add-in that connects to Anthropic's Claude API and translates text prompts into actual Fusion 360 modeling operations. It doesn't generate a STEP file and import it. It doesn't produce a mesh blob. It builds the model inside Fusion 360 the same way you would: sketch, constrain, extrude, cut, fillet, chamfer, pattern. Every operation lands in the timeline with full parametric history.</p>
<p>The project appeared on GitHub in March 2026. It's not backed by a company or a research lab. It's a community project that uses the Model Context Protocol (MCP) pattern to bridge Claude's language understanding with Fusion 360's API. You need a Fusion 360 license (or the free personal-use version) and your own Anthropic API key. The add-in sends your prompt to Claude, which generates a sequence of Fusion 360 API calls, and Fusion executes them in real time. You watch the model build itself.</p>
<p>If you've read the <a href="/posts/text-to-cad-open-source">text-to-CAD open source</a> overview, you know the open-source landscape is fragmented. OpenSCAD workflows, FreeCAD Python scripting, research prototypes. CADAgent is the first open-source project I've used that produces genuinely native, timeline-based output inside a professional CAD tool. That matters more than it sounds, because the parametric history is the whole point.</p>
<h2>What works</h2>
<p>Simple to moderate mechanical parts. Brackets, plates, enclosures, shaft collars, standoffs. Anything you could describe in a sentence or two and build from basic sketch-and-extrude operations, CADAgent handles with surprising competence.</p>
<p>I ran it through a few dozen prompts over a weekend. A cable clamp with a screw slot came out clean. A motor mounting plate with a center bore and bolt pattern generated correctly on the first try. A simple L-bracket with stiffening ribs built perfectly. In each case, the timeline was readable. I could go back into any sketch, change a dimension, and the downstream features updated. That's not just a party trick. It means the output is actually usable in a production workflow, the same way a model you built yourself is usable.</p>
<p>The quality of the generated models also surprised me in small ways. The AI tends to use construction geometry properly. It creates offset planes when needed. It applies fillets in a sensible order, usually last, which is exactly what you'd want to avoid the kind of fillet-before-cut disasters that break feature trees. Whether that's Claude's understanding of CAD best practices or the add-in's prompting, I don't know. But the result is models that behave well when you modify them.</p>
<p>For the basics, CADAgent feels like having a junior colleague who knows Fusion 360 well enough to build your first draft while you go refill your coffee. You'll still want to check the work. But the starting point is solid, and it lives natively in the tool you're already using.</p>
<h2>Where it falls apart</h2>
<p>Complex geometry. The moment your prompt requires multiple interacting features, compound curves, or geometry that depends on earlier geometry in non-obvious ways, CADAgent struggles. I asked it to create an electronics enclosure with a lid, snap-fit clips, and ventilation slots. It got about 70% of the way through before a shell operation failed, turning the timeline red. The model up to that point was usable, but the remaining features were dead. I ended up deleting the failed operations and finishing the enclosure manually, which took longer than if I'd just modeled it from scratch.</p>
<p>The AI sometimes makes bizarre construction choices. I asked for a stepped shaft and watched it extrude a full cylinder, then create a sketch on the end face, then cut away material to create the step, rather than just sketching the stepped profile and revolving it. The result was geometrically correct but the feature tree was ugly, and editing it later required understanding why the AI built it that way, which was harder than understanding my own modeling decisions.</p>
<p>Error recovery is basically nonexistent. When a Fusion 360 operation fails mid-sequence, the add-in doesn't back up and try a different approach. It just stops. You're left with a partial model and a red flag in the timeline. For an experienced Fusion user, this is manageable. You can fix the broken feature or delete it and finish manually. For someone who was hoping to avoid learning Fusion 360's interface, a half-built model with a cryptic error is not a great experience.</p>
<p>API costs add up faster than you might expect. Each prompt sends a substantial amount of context to Claude, including Fusion 360 API documentation and the current model state. A single part generation might cost $0.10 to $0.50 in API calls depending on complexity and how many retries the system needs. That's cheap compared to a CAD seat, but it's not free, and if you're iterating on a design through repeated prompts, the cost accumulates.</p>
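<p>A back-of-envelope model shows why retries dominate the bill. The token counts and per-million-token prices below are assumptions for illustration only, not Anthropic's actual pricing or CADAgent's actual context size; the shape of the arithmetic is the point.</p>

```python
# Back-of-envelope cost model for prompt-driven generation. The token
# counts and per-token prices are ASSUMPTIONS for illustration, not
# Anthropic's actual pricing or CADAgent's actual context size.

def generation_cost(context_tokens, output_tokens, retries,
                    in_price_per_mtok=3.00, out_price_per_mtok=15.00):
    """Estimated dollars for one part, counting failed attempts too."""
    attempts = 1 + retries
    cost_in = attempts * context_tokens * in_price_per_mtok / 1_000_000
    cost_out = attempts * output_tokens * out_price_per_mtok / 1_000_000
    return cost_in + cost_out

# A simple bracket: documentation-heavy context, modest output, no retries.
simple = generation_cost(context_tokens=30_000, output_tokens=2_000, retries=0)
# A complex part that needs two retries before something usable appears.
harder = generation_cost(context_tokens=30_000, output_tokens=4_000, retries=2)
print(f"simple ~ ${simple:.2f}, complex ~ ${harder:.2f}")
```

<p>Because the full context is resent on every attempt, each failed build costs roughly as much as a successful one. Iterating on a design through five or six prompts multiplies accordingly.</p>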
<h2>How it compares</h2>
<p>The comparison that matters most is against <a href="/posts/zoo-text-to-cad-review">Zoo.dev</a>. Zoo generates STEP files with real B-Rep geometry, but the output arrives as an orphaned solid. No feature tree, no sketches to edit. If you want to change a dimension, you're resketching or using direct editing.</p>
<p>CADAgent's output, when it works, is a fully parametric model. You can change a sketch dimension and watch the rest update. You can suppress features, reorder them, add new ones in between. The model behaves like a model you built yourself. That's a category difference.</p>
<p><a href="/posts/adamcad-review">AdamCAD</a> offers parametric sliders, but the control is surface-level. CADAgent gives you the actual feature tree.</p>
<p>The tradeoff is reliability. Zoo produces output for almost any prompt. CADAgent produces better output for simple prompts and nothing usable for complex ones. If you need a quick STEP file, Zoo wins. If you need a model you'll work with inside Fusion, CADAgent wins, assuming the complexity stays within its limits. The full <a href="/posts/best-text-to-cad-tools">tools comparison</a> has the side-by-side breakdown.</p>
<h2>The bigger picture</h2>
<p>CADAgent isn't the only project connecting Fusion 360 to language models. There's a growing cluster of MCP-based Fusion 360 bridges on GitHub, including projects from faust-machines (80+ tools), ClaudeFusion360MCP (focused on teaching Claude about Fusion's coordinate system), and several others. The pattern is the same: use the Model Context Protocol to let an AI agent issue commands to Fusion 360's Python API. CADAgent is the one I've found most usable for actual part generation, but the ecosystem is moving fast.</p>
<p>What these projects share is the right insight: generating geometry inside the CAD environment, with access to the constraint solver and the parametric engine, produces fundamentally better output than generating geometry externally and importing it. A STEP file is a snapshot. A parametric model is a living document. The difference matters every time you need to make a change, which in real design work is approximately always.</p>
<p>The gap between CADAgent today and a truly reliable AI-assisted CAD workflow is still significant. Complex parts break. Error handling is crude. The AI's modeling strategy is sometimes baffling. But the foundation is right: open-source code, a professional CAD engine, parametric output, and an API architecture that will improve as language models improve. The first time you watch a model build itself in your timeline and then successfully edit it afterward, you'll understand why this approach matters more than prettier STEP files.</p>
<p>I still model most of my parts by hand. I've been doing it for over a decade and I'm faster than CADAgent for anything non-trivial. But for quick first drafts of simple parts, for roughing out a concept before committing to a design, for generating a starting point that I can refine instead of building from a blank sketch, CADAgent has earned a spot in my Fusion 360 toolbar. That's not a ringing endorsement. It's an honest one, which is worth more.</p>
<p>For related reading: <a href="/posts/fusion-360-ai-features">Fusion 360 AI features</a> covers what Autodesk is shipping officially, and the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> puts CADAgent in context with every other tool in this space.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AI CAD software in 2026: the full picture</title>
      <link>https://blog.texocad.ai/posts/ai-cad-software-2026</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-cad-software-2026</guid>
      <pubDate>Fri, 27 Feb 2026 00:00:00 GMT</pubDate>
      <description>A complete rundown of every AI-powered CAD tool and feature available in 2026. The field is bigger than the marketing noise suggests, and also smaller than the hype implies.</description>
      <dc:creator>TexoCAD</dc:creator>

      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI CAD software in 2026 spans three categories: dedicated text-to-CAD tools (Zoo.dev, AdamCAD, CADAgent), vendor-integrated AI assistants (SolidWorks AURA, Onshape AI Advisor, Autodesk Assistant, Siemens NX AI Chat), and open-source AI workflows (OpenSCAD+LLM, FreeCAD+Python). Most are early-stage with limited geometry generation capabilities.</p>
<p>I keep a spreadsheet. It started as a text file with three entries back in late 2024, just a note to myself about which tools were claiming to do text-to-CAD. By January 2026 it had twelve rows. By April it had over thirty, and I had to add columns for things like "actually generates geometry" and "requires a proprietary license" and "has the person who built it ever opened a CAD program." The spreadsheet is getting unwieldy. The field is getting unwieldy. Every week there's a new GitHub repo, a new SaaS launch page, a new vendor press release about how AI is transforming design. Sorting through it all has become a part-time job I didn't apply for.</p>
<p>Here's the full picture as of April 2026, organized by what actually matters: what the tools do, how mature they are, and whether they're worth your time.</p>
<h2>The three categories</h2>
<p>All <a href="/posts/ai-in-cad-software">AI in CAD software</a> falls into three buckets, and the confusion in the market comes from people mixing them up.</p>
<p>The first is dedicated text-to-CAD tools. These are standalone products or APIs where you type a text prompt and get 3D geometry back. Zoo.dev, AdamCAD, CADScribe, CADAgent. Their entire reason for existing is converting natural language into CAD models. Some produce real B-Rep geometry. Some produce mesh. Some produce scripts. The quality varies enormously.</p>
<p>The second is vendor-integrated AI assistants. SolidWorks AURA and LEO, Autodesk Assistant, Onshape AI Advisor, Siemens Design Copilot, Creo AI Assistant. These live inside existing CAD platforms and help you use the software. Most of them are chatbots trained on documentation. A few are starting to execute commands from natural language. None of them generate complex geometry from scratch, though Autodesk is working toward it.</p>
<p>The third is open-source AI workflows. OpenSCAD with an LLM. FreeCAD with AI-assisted Python scripting. MCP bridges connecting language models to CAD APIs. These are cobbled-together pipelines that work surprisingly well for simple parts and fall apart predictably for complex ones.</p>
<p>If you're evaluating "AI CAD software," the first thing you need to decide is which category you care about. A person searching for AI that generates parts and a person searching for AI that helps them use SolidWorks are looking for completely different things. The marketing doesn't help you make that distinction. I will.</p>
<h2>Dedicated text-to-CAD tools</h2>
<p>These are the tools that do what most people imagine when they hear "AI CAD." You type, you get a model. The <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> post has my detailed rankings, but here's the 2026 summary.</p>
<p>Zoo.dev remains the most capable dedicated text-to-CAD tool. It runs on a GPU-native geometric kernel (KittyCAD), generates real B-Rep geometry, and outputs STEP, glTF, OBJ, and STL. The geometry is importable into any CAD tool and behaves like real solid modeling output. For simple to moderate parts, the results are genuinely useful as starting points. For complex parts, the results range from rough to wrong. Zoo has a free tier, a Python SDK, and a well-documented API. It's the tool I recommend trying first if you've never used text-to-CAD.</p>
<p>AdamCAD generates STL files with parametric sliders for post-generation dimensional adjustment. It's fast, cheap ($5.99/month entry point), and good for quick iterations on simple parts. The parametric controls are shallow compared to a real feature tree, but for 3D printing prototypes and early-stage exploration, the speed-to-output ratio is the best in the category.</p>
<p><a href="/posts/cadagent-fusion-360">CADAgent</a> is the open-source option that produces the best output, because it generates models directly inside Fusion 360 with full timeline history. The catch: you need Fusion 360, an Anthropic API key, and patience for when complex prompts fail mid-build. For simple parts, the parametric output is in a different league from everything else.</p>
<p>CADScribe generates STEP and STL files. The geometry is valid but rough. Edge quality is inconsistent, and complex parts suffer. It sits in the middle of the pack: better than the tools that only produce mesh, worse than Zoo for geometry quality.</p>
<p>CADGPT doesn't generate geometry. It generates automation scripts (AutoLISP, Python). Useful for AutoCAD power users. Misleading for everyone else. The name does a lot of heavy lifting that the product doesn't support.</p>
<p>Several newer entries have appeared in early 2026: BuildCAD AI, various MCP-based agents, and a handful of tools that wrap GPT-4 or Claude around OpenSCAD or CadQuery code generation. Most of these are too early to evaluate seriously, but the trend is clear. The barrier to building a text-to-CAD tool has dropped, which means we'll see more of them, and most of them won't be very good.</p>
<h2>Vendor-integrated AI assistants</h2>
<p>Every major CAD vendor shipped some form of AI assistant between late 2025 and early 2026. The <a href="/posts/ai-cad-copilot">AI CAD copilot</a> post covers the pattern in detail. Here's where each vendor stands.</p>
<p>SolidWorks 2026 shipped the most AI features of any major release. AURA and LEO are virtual companions: AURA for Q&#x26;A and guidance, LEO for design-specific tasks like error diagnosis and assembly structure suggestions. Add AI-powered drawing generation, What's Wrong Analysis, and the Assembly Structure Generator, and Dassault is moving faster than any other vendor on shipping actual features.</p>
<p>Autodesk Assistant can execute modeling commands from natural language: extrude, fillet, chamfer, hole, shell, revolve. It's in Tech Preview, which means it works for simple operations and gets confused by anything ambiguous. Neural CAD, the text-to-geometry feature demoed at AU 2025, remains unavailable. The <a href="/posts/fusion-360-ai-features">Fusion 360 AI features</a> post tracks the shipping status.</p>
<p>Onshape AI Advisor has been live since October 2025. It's a documentation assistant, not a geometry generator, and it's the most honest AI feature in CAD right now. PTC has roadmap items for FeatureScript generation and agent workflows, but the shipping product is a guidance tool, and a good one.</p>
<p>Creo AI Assistant (beta since Creo 13) focuses on error troubleshooting. Narrow but useful. Solid Edge 2026 shipped Design Copilot, Magnetic Snap Assembly, and Automatic Drawing Creation. The automation features save more time than the chat interface: automatic drawing creation handles 70-80% of a standard drawing, and snap assembly reportedly speeds up constraint work by up to nine times.</p>
<p>Siemens NX has AI Chat under development, with limited public details.</p>
<h2>Open-source AI workflows</h2>
<p>The <a href="/posts/text-to-cad-open-source">text-to-CAD open source</a> post covers this in depth. The short version: the pieces exist but the products don't.</p>
<p>OpenSCAD plus an LLM is the most practical open-source workflow. LLMs write OpenSCAD code well because the language is small and constrained. Projects like PromptSCAD and the OpenSCAD MCP Server formalize the pipeline. The limitation: STL-only export and awkward organic shapes.</p>
<p>FreeCAD with AI-assisted Python gives you STEP export and real B-Rep geometry, but LLM scripts fail more often because the API is larger and less forgiving. Fusion 360 MCP bridges (CADAgent, ClaudeFusion360MCP, faust-machines) produce the best output: native Fusion geometry with parametric history. The catch is you need a Fusion license.</p>
<p>The Text2CAD research code from NeurIPS 2024 is on GitHub. It generates sketch-and-extrude sequences from text. The output is geometrically simple. It's research, not a tool.</p>
<h2>What's real vs. what's marketing</h2>
<p>Here's the honest sorting, April 2026.</p>
<p>Things that work today and save real time: Solid Edge's automatic drawing creation. SolidWorks' AI error diagnosis. Onshape AI Advisor for learning the software. Zoo.dev for generating simple STEP geometry. OpenSCAD plus an LLM for programmatic parts. CADAgent for simple Fusion 360 parts.</p>
<p>Things that are shipping but still immature: Autodesk Assistant's text-to-command. SolidWorks AURA and LEO. Creo AI Assistant. Most vendor chatbots. AdamCAD for quick iterations.</p>
<p>Things that are announced but not available: Autodesk Neural CAD. Most "coming soon" features from every vendor's roadmap.</p>
<p>Things that are mostly marketing: any claim that AI can replace a CAD engineer in 2026. Any tool that calls itself "AI-native" without being fundamentally different from a chatbot wrapper. Any demo that shows a complex model being generated from a single prompt without showing the ten failed attempts before it.</p>
<h2>The patterns that matter</h2>
<p>Three trends are worth watching more than any individual tool.</p>
<p>Code generation is winning. The most reliable text-to-CAD results come from LLMs generating code (OpenSCAD scripts, CadQuery Python, Fusion API calls) rather than generating geometry representations directly. This makes sense. LLMs understand programming syntax. They don't inherently understand B-Rep topology. Having the LLM write code and letting a proper geometric kernel execute it produces better results than trying to make the LLM reason about geometric primitives.</p>
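<p>The division of labor described above can be sketched as a small harness: the language model's job ends at producing a script, and a real kernel does the geometry. Here that kernel is OpenSCAD's command-line renderer, whose <code>-o</code> flag selects the output file; the file names and the hard-coded "LLM output" are placeholders, and the command only actually executes if an <code>openscad</code> binary is on your PATH.</p>

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def render_with_openscad(scad_source: str, stl_path: str):
    """Hand LLM-generated OpenSCAD source to a real geometric kernel.

    Builds the standard headless invocation (`openscad -o out.stl in.scad`)
    and runs it only when the openscad binary is actually installed.
    Returns the command list either way.
    """
    scad_file = Path(tempfile.gettempdir()) / "llm_part.scad"
    scad_file.write_text(scad_source)
    cmd = ["openscad", "-o", stl_path, str(scad_file)]
    if shutil.which("openscad"):
        subprocess.run(cmd, check=True)
    return cmd

# The "LLM output" here is hard-coded for illustration.
cmd = render_with_openscad("cube([10, 10, 3]);", "plate.stl")
print(" ".join(cmd))
```

<p>The design choice worth noticing: the LLM never touches B-Rep or mesh data directly. It emits text in a language it handles well, and correctness of the geometry is delegated to software that was built for it.</p>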
<p>Automation features matter more than generation features. Automatic drawing creation, smart assembly constraints, AI error diagnosis. These aren't flashy. They don't make good keynote demos. But they save real time on tasks that working engineers actually do, and they work reliably enough to trust. The person who doesn't have to manually dimension a standard three-view drawing is saving more time than the person who generates a rough bracket from a prompt and then spends twenty minutes fixing it.</p>
<p>The integration gap is the real bottleneck. The best text-to-CAD output in the world is less useful if it arrives as an orphaned STEP file with no feature history. CADAgent's approach, generating geometry inside the CAD environment with parametric history, points toward the right future. But it only works inside Fusion 360, it's unreliable for complex parts, and it requires API costs on top of a CAD license. The gap between generating any geometry and generating useful geometry inside a real workflow is where most tools stall.</p>
<h2>What to actually do</h2>
<p>If you use CAD for work and want to try AI tools in 2026, here's my honest recommendation.</p>
<p>Start with your vendor's AI features. If you're on SolidWorks 2026, try the drawing generation and error analysis. If you're on Fusion 360, use the Autodesk Assistant for command execution. If you're on Onshape, the AI Advisor is genuinely helpful for learning. These are free with your existing license and low risk.</p>
<p>Try Zoo.dev for text-to-CAD. It's the most reliable standalone tool, the free tier is generous enough to evaluate properly, and the STEP output imports cleanly into any CAD program. Use it for simple parts and first drafts, not for production geometry.</p>
<p>If you live in Fusion 360 and want to experiment, install <a href="/posts/cadagent-fusion-360">CADAgent</a>. The parametric output is worth the setup time. Start with simple parts. Don't try to generate an entire assembly from a single prompt.</p>
<p>Ignore anything that only exists as a demo, a roadmap item, or a press release. The gap between what gets announced and what gets shipped in CAD AI is the widest it's ever been, and betting your workflow on a feature that hasn't arrived yet is a reliable way to waste a quarter.</p>
<p>The AI CAD field in 2026 is real, growing, and useful in specific situations. It is not a revolution yet. It's a collection of tools at varying stages of maturity, some of which save you real time and most of which save you less time than they cost you in learning, debugging, and cleanup. The honest path forward is to use what works, test what's new, and keep your manual skills sharp. The AI isn't replacing you this year. It might hand you a decent first draft, though, and some days that's enough.</p>
<p>The full <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> puts all of these tools in a structured framework; start there if this is your first time looking at the space. For curated picks, the <a href="/posts/best-ai-cad-tools-2026">best AI CAD tools 2026</a> post cuts through the list and names the ones worth your time.</p>
]]></content:encoded>
    </item>
    <item>
      <title>What AI-native CAD actually means (and what it doesn&apos;t)</title>
      <link>https://blog.texocad.ai/posts/ai-native-cad</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-native-cad</guid>
      <pubDate>Fri, 27 Feb 2026 00:00:00 GMT</pubDate>
      <description>Every CAD vendor claims to be AI-native now. Most of them are bolting a chatbot onto a twenty-year-old codebase. Here&apos;s what AI-native should mean and what it actually means.</description>
      <dc:creator>TexoCAD</dc:creator>

      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI-native CAD means software designed from the ground up with AI as a core component, not bolted on afterward. True AI-native CAD would have AI integrated into geometry creation, constraint solving, and design optimization. In 2026, no major CAD tool is truly AI-native. Zoo.dev comes closest among startups.</p>
<p>I was on a call with a vendor rep last month, the kind of call where someone with a marketing title tries to explain their product's AI features while I try to figure out what the product actually does. About ten minutes in, she described their CAD tool as "AI-native." I asked what that meant. There was a pause. Then she explained that they had recently added a chatbot to the interface that could answer questions about the software. That was the native part. The AI was a chat panel on the right side of the screen, drawing from documentation and help articles. The CAD engine underneath was the same one they'd been shipping for fifteen years.</p>
<p>That conversation captures the state of "AI-native CAD" in 2026. The term has become a flag that companies plant on their marketing pages, and the definition stretches to accommodate whatever they've managed to ship. A chatbot is AI-native. A documentation search with better ranking is AI-native. A slightly improved autocomplete is AI-native. The word has been drained of meaning so thoroughly that hearing it now tells you almost nothing about the product. It tells you a lot about the marketing budget.</p>
<p>But the concept behind the term, software designed from the ground up with AI as a core architectural component rather than an afterthought, is genuinely interesting. Worth defining properly. Worth evaluating honestly. Worth separating from the noise.</p>
<h2>What AI-native should mean</h2>
<p>If I had to define AI-native CAD in a way that would actually distinguish one product from another, it would be this: a CAD tool where the AI doesn't just assist the user, it participates in the geometry creation process at a fundamental level. The AI isn't sitting in a side panel answering questions about the software. It's inside the geometry engine, influencing how shapes get created, how constraints get resolved, and how design decisions get evaluated.</p>
<p>In a truly AI-native CAD system, you wouldn't need to choose between manual modeling and AI generation. The two would be interleaved. You might sketch a rough profile and the AI would infer the constraints. You might describe a feature in words and the system would add it to the existing geometry with full parametric relationships. You might modify one part of a design and the AI would flag the downstream consequences, not just the geometric failures, but the manufacturing implications, the cost changes, the assembly interference.</p>
<p>The AI wouldn't be a separate mode you switch into. It would be part of how the tool thinks. The way spell-check is native to a word processor, not an add-on you install.</p>
<p>That version of AI-native CAD doesn't exist in 2026. Not from any major vendor. Not from any startup. Some tools are closer than others, but nobody has shipped the full vision. What exists is a spectrum that runs from "chatbot bolted onto old software" to "AI integrated into specific workflows" to "AI as a core architectural element in early stages."</p>
<h2>What it actually means today</h2>
<p>In practice, companies using "AI-native" in 2026 fall into three groups.</p>
<p>The first is legacy vendors who added AI features to existing products. SolidWorks, Fusion 360, Onshape, Creo, NX, Solid Edge. All of them have AI assistants or copilots. None of them are AI-native in any architectural sense. The geometric kernel, the constraint solver, the parametric engine, all of this was designed before anyone was talking about language models. The AI is a passenger, not a driver.</p>
<p>I'm not criticizing these features. The <a href="/posts/ai-in-cad-software">AI in CAD software</a> post covers them fairly. But calling any of them AI-native is like calling a car with a GPS "satellite-native." The GPS is useful. The car was designed without it.</p>
<p>The second group is startups that built their product with AI from the beginning but still rely on traditional geometry engines. Zoo.dev fits here. Their KittyCAD kernel is new, and AI generation is central to their product, but the kernel works on B-Rep principles that predate language models by decades. Zoo is closer to AI-native than any major vendor, because the entire product is oriented around AI-generated geometry. But the AI is still writing recipes for a traditional kitchen.</p>
<p>The third group is research projects where the AI generates geometry representations directly. The <a href="/posts/text2cad-paper">Text2CAD model</a> generates sketch-and-extrude sequences from a trained transformer. NURBGen generates NURBS surface parameters from text. These are closer to "AI-native geometry creation" because the neural network produces the geometric data itself. But they're research prototypes. The output quality is nowhere near production-grade.</p>
<h2>Why the distinction matters</h2>
<p>You might wonder why I care about this terminology when there are parts to model and deadlines to meet. Two reasons.</p>
<p>First, it affects purchasing decisions. If a vendor tells you their product is AI-native and you're expecting AI that fundamentally changes how geometry gets created, you'll be disappointed when you discover it's a chat panel that links to help articles. Marketing language shapes expectations, and mismatched expectations waste time and money. I have watched enough engineers get excited about AI features, reorganize their evaluation timelines, sit through demos, and then discover the feature either doesn't exist yet or does something much smaller than advertised. Accurate terminology prevents that cycle.</p>
<p>Second, it affects where the technology goes. If we let "AI-native" mean "has a chatbot," we've defined the term so loosely that there's no incentive to build the real thing. The distinction between bolting AI onto an existing CAD tool and building a CAD tool where AI is architecturally central isn't academic. It's the difference between incremental improvement and a different kind of software. Both are fine. Both have value. But they're not the same thing, and calling them the same name helps nobody except the marketing team.</p>
<h2>What genuine AI-native CAD might look like</h2>
<p>If someone built a truly AI-native CAD tool, what would be different? I've been thinking about this more than is probably healthy, sitting at my desk on a Saturday with a half-finished bracket on one monitor and too many arXiv tabs on the other.</p>
<p>Geometry creation would be conversational and continuous. You'd describe what you want in stages, and the system would build incrementally, maintaining constraints as it goes. You'd say "make the flange wider" and the AI would know which dimension to change without you pointing at a sketch.</p>
<p>Constraint solving would be AI-assisted. Current parametric CAD requires you to manually define every relationship: coincident, tangent, concentric, equal. A truly AI-native system would infer constraints from context. It would understand that two holes should remain aligned because they're mounting holes, not just because they share a vertical constraint.</p>
<p>Manufacturing awareness would be built in. Not just "is this geometry valid?" but "can you actually mill this pocket?" Current AI tools generate geometry in a vacuum. A native system would generate geometry with process knowledge embedded.</p>
<p>None of this exists. The closest thing is LLMs driving CAD APIs through tools like <a href="/posts/cadagent-fusion-360">CADAgent</a>. But that's still an AI operating a traditional tool, not an AI that's part of the tool.</p>
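<p>To make that distinction concrete, here's a minimal, entirely hypothetical sketch of the pattern: the AI layer proposes a structured operation, and a thin dispatcher hands it to a conventional kernel API. None of these names come from CADAgent or any real CAD library; they're invented to show where the boundary sits.</p>
<pre><code>
```python
# Hypothetical sketch of the "LLM driving a CAD API" pattern. All names
# (propose_operation, the op vocabulary, FakeKernel) are invented for
# illustration; no real CAD or LLM API is used.

def propose_operation(prompt: str) -> dict:
    """Stand-in for an LLM call: map a text request to a structured op.

    A real system would send `prompt` to a language model and parse its
    structured reply; here one mapping is hard-coded to keep it runnable.
    """
    if "hole" in prompt:
        return {"op": "cut_cylinder", "diameter_mm": 6.0, "depth_mm": 10.0}
    return {"op": "noop"}

class FakeKernel:
    """Stand-in for a traditional B-Rep kernel. It only records calls,
    which is the point: the AI never touches the geometry math itself."""
    def __init__(self):
        self.history = []

    def cut_cylinder(self, diameter_mm: float, depth_mm: float):
        self.history.append(("cut_cylinder", diameter_mm, depth_mm))

def dispatch(kernel: FakeKernel, op: dict):
    # The dispatcher is the entire "integration": translate the LLM's
    # structured suggestion into one conventional API call.
    if op["op"] == "cut_cylinder":
        kernel.cut_cylinder(op["diameter_mm"], op["depth_mm"])

kernel = FakeKernel()
dispatch(kernel, propose_operation("add a 6 mm hole, 10 mm deep"))
print(kernel.history)  # the kernel did the work; the AI only chose the call
```
</code></pre>
<p>Everything interesting still happens inside the kernel. That's why "an AI operating a traditional tool" is the honest description: swap out the language model and the geometry engine neither knows nor cares.</p>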
<h2>Where we actually are</h2>
<p>Honest scorecard, April 2026.</p>
<p>No major CAD vendor is AI-native. SolidWorks has the most features shipping. Autodesk has the most ambitious roadmap. Onshape has the cleanest AI assistance. None of them have AI that participates in geometry creation at an architectural level.</p>
<p>Zoo.dev is the closest among commercial tools. The product is built around AI geometry generation. But the kernel is traditional B-Rep, and the AI is a generation layer on top, not integrated into the kernel itself.</p>
<p>Research projects (Text2CAD, NURBGen, FutureCAD, CADSmith) are exploring genuine AI-native geometry generation. These are early, limited, and not usable for real work. But they're the only things pointed at the full vision.</p>
<p>The timeline for a production-grade AI-native CAD tool is unclear. The research and compute exist. The data is the bottleneck. Most real CAD data is locked inside companies who have no reason to share it.</p>
<h2>What to do with this information</h2>
<p>If you're evaluating CAD tools: ignore the term "AI-native" in marketing materials. Look at specific features, try them with your actual parts, and judge the output. The <a href="/posts/best-ai-cad-tools-2026">best AI CAD tools 2026</a> post has my tested recommendations. The <a href="/posts/ai-cad-software-2026">AI CAD software 2026</a> post has the full landscape.</p>
<p>If you're interested in where AI CAD is actually going: follow the research, not the marketing. The papers coming out of Autodesk Research, the Text2CAD group, and projects like CADSmith and FutureCAD are the leading edge. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> links to all of it.</p>
<p>If you're a vendor reading this and you've called your product AI-native: show me the architecture diagram. Show me where the AI touches the geometry engine. If the answer is "it sends queries to a documentation database," we need to have a different conversation about what words mean.</p>
<p>AI-native CAD is a real idea with a real future. The problem isn't the concept. The problem is that the term has been adopted by everyone before anyone has built it. When someone does build it, I suspect we'll know, because the product won't need the label. The same way nobody calls a smartphone "internet-native." It just is. The label becomes unnecessary when the thing is real. We're not there yet. We're at the stage where the label is doing all the work.</p>
]]></content:encoded>
    </item>
    <item>
      <title>SolidWorks AURA AI: a first look</title>
      <link>https://blog.texocad.ai/posts/solidworks-aura-ai</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/solidworks-aura-ai</guid>
      <pubDate>Thu, 26 Feb 2026 00:00:00 GMT</pubDate>
      <description>AURA is Dassault&apos;s AI companion for SolidWorks 2026. You can talk to it, type to it, and sometimes it does what you meant.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>solidworks</category>
      <category>aura</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AURA is an AI companion in SolidWorks 2026 that accepts voice and text input for design tasks. It can look up commands, explain features, suggest next steps, and assist with design workflows. It shipped with SolidWorks 2026 FD01 (February 2026). Usefulness varies by task complexity.</p>
<p>I was halfway through a tolerance review on a forty-part assembly when a coworker leaned over and said, "Have you tried asking AURA?" He said it with the same confidence people use when suggesting you restart your computer. Like the answer was obvious, and the only reason you hadn't fixed the problem was a failure of imagination. I typed a question into the AURA panel, waited, and got a response that was technically correct, contextually useless, and formatted like a customer support ticket from 2014. Welcome to <a href="/posts/ai-in-cad-software">AI in CAD software</a>, Dassault edition.</p>
<p>AURA is the AI assistant that shipped with SolidWorks 2026 FD01 earlier this year, and it's available through the SOLIDWORKS Labs beta tab. Dassault calls it a "virtual companion." I call it a search bar with feelings. But there's more to it than that, and some of what it does is actually worth talking about.</p>
<h2>What AURA is, technically</h2>
<p>AURA is built on Mistral AI's foundational model, hosted on Dassault's own Outscale cloud infrastructure rather than on OpenAI or Google's servers. That's a meaningful choice. It means your design data, your questions, and your context stay inside Dassault's security perimeter. For companies that get nervous about cloud AI tools reading their proprietary geometry, this matters. Whether the tradeoff in model quality compared to GPT-4 or Claude is worth it depends on what you're asking.</p>
<p>You access AURA through the 3DSwym app in SolidWorks, or through the MySession task pane. It lives in a side panel and accepts text and voice input. You type or speak a question, it gives you an answer. The interaction feels like a chat window bolted onto a CAD tool, because that's exactly what it is.</p>
<p>AURA can search SolidWorks documentation, community forums, and your team's 3DSwym content. It can answer questions about your current assembly. It can summarize posts, translate content, and look things up in the knowledge base. It's essentially a context-aware search engine that can speak in sentences instead of returning ten blue links.</p>
<h2>What it actually does well</h2>
<p>I'll give AURA this: for documentation lookups, it's faster than using the help system. I asked it how to save a file to a previous SolidWorks version and got a clear, correct answer with links to the relevant documentation. I asked it about mate references in assemblies and got a reasonable explanation with context pulled from community posts. For the kind of questions you'd normally type into Google and then spend three minutes filtering through SEO garbage and outdated forum posts, AURA is genuinely faster.</p>
<p>It also handles assembly queries in a way that's mildly impressive the first time you see it. You can ask things like "Which parts in this assembly are aluminum?" or "What's the total mass?" and it can pull that information from the current model. Not revolutionary, because mass properties and material assignments have been queryable in SolidWorks since forever, but having a natural language interface to it is more comfortable than clicking through property managers. The first time you type a question and get a real answer about your actual model, there's a small moment of "okay, that's nice."</p>
<p>The summarization features work too. If you're the kind of team that uses 3DSwym for project documentation (and I know some teams do, bless their patience), AURA can summarize long threads, pull out key decisions, and condense rambling wiki pages into something you can actually scan. That's not a CAD feature, but it's useful if your workflow includes digging through collaborative content to find out why someone changed a dimension three weeks ago.</p>
<h2>Where it falls apart</h2>
<p>The problem is when you try to use AURA for anything that requires judgment, nuance, or genuine design intelligence. And that's where the marketing language starts to diverge from the experience.</p>
<p>I asked AURA to suggest a fillet radius for an internal pocket in a machined housing. What I got was a generic recommendation to consult the material datasheet and consider tool radius constraints. Correct, in the way that telling someone to "consult a professional" is correct. Not helpful, in the way that you wanted an actual answer based on the geometry you're staring at. AURA doesn't reason about your model the way an experienced colleague would. It reads properties and searches documentation. Those are different things, and the gap between them is where the frustration lives.</p>
<p>I also tried asking it to identify potential interference between two subassemblies. It pointed me to the Interference Detection tool. Again, technically correct. I know the Interference Detection tool exists. I've been using SolidWorks for more than a decade. What I wanted was for the AI to run the check and tell me the result, or at least save me the six clicks to get there. Instead, it told me how to do the thing I already know how to do. That's the experience in a nutshell: AURA is good at telling you about SolidWorks, and much less good at doing things in SolidWorks.</p>
<p>The responses also have a particular quality to them that I can only describe as cautious. Every answer feels like it went through three layers of review. Nothing is direct. Nothing is opinionated. Nothing is wrong, either, which is the tradeoff Dassault clearly made. A safer model that never gives you bad advice also never gives you bold advice. If you've ever asked a corporate chatbot a real question, you know the texture. AURA has that texture.</p>
<h2>The 3DSwym dependency</h2>
<p>Here's a thing that will annoy some people: AURA is deeply tied to the 3DExperience platform and 3DSwym. If your company runs SolidWorks Desktop without the cloud platform, or if you're on an older licensing model, AURA might not be available to you, or might not have access to the team-level knowledge features that make it most useful.</p>
<p>This is consistent with Dassault's long-term strategy of pushing everyone toward 3DExperience, which is consistent with my long-term strategy of sighing about it. The tool is most useful when it has a rich pool of team content to search. On a standalone installation with no 3DSwym data, it's basically a fancier help search. Still useful, but less useful.</p>
<h2>How it compares to what's happening elsewhere</h2>
<p>PTC shipped the <a href="/posts/onshape-ai-advisor">Onshape AI Advisor</a> last year, which does similar documentation-and-guidance work inside a browser-based CAD tool. Siemens added Design Copilot to NX, which also answers natural language queries against documentation. The pattern across the industry is the same: every major CAD vendor now has a chat panel that can answer questions about how to use the software.</p>
<p>What none of them are doing yet is what the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers: generating geometry from a prompt. AURA doesn't create parts from text descriptions. It doesn't sketch for you. It doesn't build features. Dassault has another companion called LEO that's closer to that territory, handling things like automated drawing creation and design-tree diagnostics, but LEO is a separate project with its own timeline. AURA is the knowledge assistant, not the geometry assistant.</p>
<p>Compared to what standalone AI CAD tools like Zoo.dev are doing with actual geometry generation, AURA is solving a fundamentally different problem. It's making SolidWorks easier to use. It's not making design faster by automating the design itself. Both are valid goals. They just serve very different moments in the workflow, and if you showed up expecting AURA to be an <a href="/posts/ai-cad-copilot">AI CAD copilot</a> that builds parts alongside you, you'll be disappointed.</p>
<h2>The security angle</h2>
<p>One thing Dassault got right, and I'll credit them for this without reservation, is the data isolation. Running on their own Outscale cloud with Mistral's model means your design queries aren't passing through third-party AI infrastructure. For aerospace, defense, and medical device companies that have strict data handling requirements, this is a real differentiator. You can ask AURA about your assembly's material properties without worrying that the query ended up in someone else's training dataset.</p>
<p>Whether this actually matters for most SolidWorks users is debatable. If you're designing consumer electronics housings, the security risk of a cloud AI seeing your bracket design is approximately zero. But for regulated industries, the ability to say "our AI queries never leave our cloud infrastructure" has procurement value. Dassault knows its customer base.</p>
<h2>The verdict</h2>
<p>AURA is a documentation assistant with a chat interface and some assembly awareness. It's useful for the same things a good help system would be useful for, except you can type questions in natural language instead of guessing at search keywords. For new SolidWorks users, it's a decent onboarding companion. For experienced users, it's occasionally handy and frequently underwhelming.</p>
<p>It doesn't design anything. It doesn't build geometry. It doesn't replace knowing how to use SolidWorks. What it does is reduce the friction of looking things up, and for some users, on some days, that's enough. For the rest of us, it's one more panel to ignore while we manually do the thing we already knew how to do.</p>
<p>I'll keep it open in the side panel. I'll ask it the occasional question when I forget where Dassault hid a setting in the 2026 UI refresh. And I'll keep waiting for the version of <a href="/posts/solidworks-ai-features-2026">AI in CAD</a> that actually feels like a collaborator rather than a reference librarian who happens to know my file is open.</p>
]]></content:encoded>
    </item>
    <item>
      <title>SolidWorks 2026 AI features: AURA, LEO, and whether any of it works</title>
      <link>https://blog.texocad.ai/posts/solidworks-ai-features-2026</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/solidworks-ai-features-2026</guid>
      <pubDate>Wed, 25 Feb 2026 00:00:00 GMT</pubDate>
      <description>SolidWorks 2026 ships with AI companions called AURA and LEO plus a handful of other AI features. Some of them are useful. Some of them are branding exercises.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>solidworks</category>
      <category>dassault</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> SolidWorks 2026 includes AURA (voice/text AI companion), LEO (assistant for assemblies and design), Assembly Structure Designer (text-to-assembly), Design Inspection (natural language queries), and automated drawing features. Most shipped with FD01 in February 2026. Quality varies from genuinely useful to gimmick.</p>
<p>I spent about five years in SolidWorks before moving to Fusion 360, and in that time I developed a deep fondness for the software and a deeper fondness for complaining about it. So when Dassault announced that SolidWorks 2026 would ship with AI companions named AURA and LEO, my first reaction was that the software I once used to fight with manually was now going to generate new ways to disappoint me automatically. My second reaction was that I should probably install it and find out.</p>
<p>SolidWorks 2026 FD01 dropped in February 2026. I've been using it, specifically the AI features, for about six weeks. Long enough to form opinions. Short enough that I'm sure some of those opinions will age poorly. Here's where things stand.</p>
<h2>AURA and LEO: the branding</h2>
<p>Before we get to what the AI actually does, we need to talk about the names, because Dassault has made a whole thing of it.</p>
<p>AURA and LEO are not separate products. They're not separate AI models. They're personas layered on top of the same underlying intelligence system, differentiated by personality and approach. AURA is the creative one, focused on "what if" questions. LEO is the practical one, focused on "how to" execution. In theory, the system automatically switches between personas based on what you're asking. In practice, I mostly noticed that the chat responses sometimes had slightly different tones and I couldn't always tell which persona was talking.</p>
<p>Dassault describes AURA as "highly agreeable" and focused on exploring possibilities. LEO is "assertive" and focused on manufacturability and feasibility. Both are available in SolidWorks 2026 FD01 through the SolidWorks Labs (Beta) tab.</p>
<p>I understand why they did this. Giving the AI a name and a personality makes it feel like a colleague rather than a chatbot. The engineering press ate it up. I'm less charmed. An AI that gives you bad geometry with an assertive personality is not better than an AI that gives you bad geometry with no personality. The character sheet doesn't change the topology.</p>
<p>That said, the features themselves are real, and some of them are worth your time. Let me go through them.</p>
<h2>Assembly Structure Designer</h2>
<p>This is the feature that got the most attention at the SolidWorks announcement, and it's the one that comes closest to <a href="/posts/text-to-cad-guide">text-to-CAD</a> territory, at least in concept.</p>
<p>Assembly Structure Designer lets you describe an assembly in natural language and have LEO generate the full structure: top-level assembly, sub-assemblies, individual parts, all organized in a hierarchy. You type something like "battle bot with a spinning drum, a chassis, two drive motors, and a wedge scoop" and it creates the folder structure, names the files, and sets up the assembly tree.</p>
<p>It does not generate the geometry. I need to repeat that because the name is misleading. Assembly Structure Designer creates empty parts and assemblies organized in a hierarchy. It's project scaffolding, not design. You still have to model every single part yourself. What it saves you is the initial file setup: creating the assembly file, creating empty part files, naming things, organizing the structure.</p>
<p>For small projects, this saves maybe ten minutes. For a large assembly with dozens of sub-assemblies and hundreds of parts, the time savings could be meaningful if the AI gets the structure right. In my testing, the generated structures were reasonable for simple products and became progressively less useful as complexity increased. It organized a camera tripod structure sensibly. It made odd choices about sub-assembly grouping for a more complex mechanical assembly, putting things together that I would have kept separate.</p>
<p>It shipped in beta with FD01. It works. It's useful for kickstarting project setup. It's not what people imagine when they hear "AI-generated assemblies."</p>
<h2>AI-powered drawing creation</h2>
<p>This one surprised me by being more useful than expected. SolidWorks 2026 can generate drawings from your 3D models using AI-assisted automation. You set your drawing template, your standards (ASME, ISO, etc.), and your primary views, then the AI generates a drawing with appropriate views, dimensions, and annotations.</p>
<p>I tested it on a moderately complex part, a housing with pockets, holes, and threaded features. The generated drawing was about 70% of the way to what I'd produce manually. The views were reasonable. The dimension placement was mostly logical, though it had the familiar problem of putting dimensions where they technically work but not where I'd want them for readability.</p>
<p>The preview-and-refine workflow is nice. You get a preview, adjust the layout, and then generate the final drawing. It's faster than starting from scratch, especially for simple parts where the standard views tell most of the story. For complex drawings with GD&#x26;T and custom annotations, you'll spend time cleaning up. But "starting from 70% instead of zero" is a genuine time savings on routine parts.</p>
<p>This shipped in beta. I've used it on real parts. It works. It's not magic, but it's useful.</p>
<h2>What's Wrong analysis</h2>
<p>"What's Wrong" is an AI feature that analyzes a failed feature tree and tries to tell you why it failed and how to fix it. If you've spent any time in SolidWorks, you've stared at a red feature with no clear error message and wondered what you did to offend the software. What's Wrong is Dassault's attempt at making that diagnosis faster.</p>
<p>In practice, the analysis is hit-or-miss. For obvious failures, like a sketch that lost its reference or a fillet that can't be applied to an edge that no longer exists, What's Wrong correctly identifies the root cause and suggests a fix. The suggestion is usually accurate enough to be helpful, especially for less experienced users who might not immediately know why a feature turned red.</p>
<p>For more complex failures, the kind where a rebuild error cascades through eight features and the actual problem is three operations back in the tree, What's Wrong sometimes identifies the right root cause and sometimes points you at a symptom rather than the disease. I had one case where a patterned feature failed because of a sketch constraint issue six features earlier, and What's Wrong told me the pattern was the problem. Technically true. Not the fix I needed.</p>
<p>Still, having any diagnostic tool is better than having none, and for the common failure modes that frustrate new users, it works well enough. Shipped in beta.</p>
<h2>Design Inspection</h2>
<p>Design Inspection lets you query your model using natural language. "What's the total mass?" "How many parts in this assembly?" "What material is this body?" "What's the volume of this part?" Instead of navigating to the mass properties dialog or clicking through the material editor, you just ask.</p>
<p>It's not exciting. It's just convenient. I used it mostly during design reviews when someone asked a quick question and I didn't want to click through three dialogs to answer it. "What's the wall thickness here?" is faster to type than to measure, assuming the AI interprets "here" correctly, which it does about 80% of the time.</p>
<h2>Material Manager</h2>
<p>The AI-enhanced Material Manager lets you assign and manage materials conversationally. "Change this part to 316L stainless steel." "Assign aluminum 7075-T6 to all parts in this sub-assembly."</p>
<p>It works the way you'd expect. Useful when you're doing material studies and want to swap materials across multiple parts quickly. Not transformative.</p>
<h2>What's still coming</h2>
<p>Dassault has announced additional AI features for later in 2026:</p>
<p>Project Planner: AI-enhanced project planning with design task breakdowns. Status: announced, not shipped.</p>
<p>Additional LEO competencies in sheet metal, simulation, and surfacing. Status: announced as future updates.</p>
<p>Most features target general availability by mid-2026. SolidWorks delivers five major releases per year, so the feature set will evolve.</p>
<h2>How it compares to Fusion 360's AI</h2>
<p>The natural comparison is with <a href="/posts/fusion-360-ai-features">Fusion 360's AI features</a>, since Autodesk and Dassault are making similar bets on AI in different CAD platforms.</p>
<p>Fusion 360's Autodesk Assistant handles similar tasks: natural language command execution, design queries, material assignment. Fusion also has the announced-but-not-shipping <a href="/posts/fusion-360-neural-cad">Neural CAD</a> for text-to-geometry generation, which SolidWorks doesn't have an equivalent for. On the other hand, SolidWorks' automated drawing generation is more developed than anything Fusion currently offers for drawing automation.</p>
<p>Both are in beta/preview. Both are useful for simple tasks and unreliable for complex ones. The main difference is the personality theater: Dassault gave their AI names and backstories, Autodesk called theirs "Assistant." I prefer the honesty of calling it what it is.</p>
<h2>The gap between naming and shipping</h2>
<p>My biggest criticism of SolidWorks 2026's AI features isn't the technology. It's the packaging. Giving the AI personas names and personality descriptions creates an expectation that the system is more capable than it is. When Dassault says LEO "prioritizes feasibility and manufacturability," that implies the AI understands manufacturing in a deep way. It doesn't. It generates assembly structures and answers queries about mass properties. Useful capabilities. Not "manufacturing intelligence."</p>
<h2>The honest scorecard</h2>
<p>Assembly Structure Designer: useful for project setup, doesn't generate geometry. Beta, shipping.</p>
<p>AI-powered drawing creation: the most practically useful AI feature. Saves real time on routine drawings. Beta, shipping.</p>
<p>What's Wrong analysis: helpful for simple failures, inconsistent for complex ones. Beta, shipping.</p>
<p>Design Inspection: convenient for quick queries. Works. Beta, shipping.</p>
<p>Material Manager: minor convenience. Works. Beta, shipping.</p>
<p>AURA and LEO personas: marketing. The underlying AI doesn't change based on which name it's using.</p>
<p>SolidWorks 2026 has more AI than SolidWorks 2025, and some of it is genuinely useful. The drawing automation alone justifies updating if you produce a lot of routine drawings. The Assembly Structure Designer is a nice time saver for project kickoffs. The rest is convenience features that slightly reduce friction without changing the fundamental workflow.</p>
<p>If you're hoping that AURA and LEO will make SolidWorks feel like having a senior engineer looking over your shoulder, lower those expectations. They'll make it feel like having a command-line assistant who's read the help files and can sometimes save you a click. That's less exciting than the marketing video, but it's real, and real is what counts when you're trying to get a drawing out the door on a Friday afternoon.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Onshape AI Advisor: what PTC shipped and what it missed</title>
      <link>https://blog.texocad.ai/posts/onshape-ai-advisor</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/onshape-ai-advisor</guid>
      <pubDate>Tue, 24 Feb 2026 00:00:00 GMT</pubDate>
      <description>Onshape AI Advisor has been live since October 2025. It gives real-time guidance while you model. Some of it is helpful. Some of it is like a backseat driver who read the manual once.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>onshape</category>
      <category>ptc</category>
<content:encoded><![CDATA[<p><strong>Quick answer:</strong> Onshape AI Advisor (launched October 2025) provides real-time modeling guidance, feature suggestions, and error prevention inside Onshape&apos;s browser-based CAD. AI-enhanced search ships today; LLM-powered FeatureScript autocomplete is on the roadmap. It&apos;s most useful for newer users and less useful for experienced modelers.</p>
<p>I was building a sheet metal bracket in Onshape last Tuesday, the kind of part that takes fifteen minutes if you know what you're doing and forty-five minutes if you're fighting the interface. Halfway through a flange bend, a small panel in the corner of my screen offered a suggestion: "Consider using a Flange feature instead of Extrude for sheet metal parts." I stared at it for a second. I was already using the Flange feature. The AI Advisor was telling me to do the thing I was actively doing, with the confidence of someone who walked into a room mid-conversation and decided to summarize what they heard.</p>
<p>That moment captures Onshape AI Advisor pretty well. It's trying to help. It sometimes does. And it has no idea how much you already know.</p>
<h2>What PTC actually shipped</h2>
<p>Onshape AI Advisor launched in October 2025 as part of release 1.205. It's built on Amazon Bedrock, which means it's running on AWS infrastructure using a mix of foundation models that PTC selected and tuned against Onshape's own documentation. The advisor sits inside the Onshape interface, available as a chat panel where you can type questions and get responses.</p>
<p>The core pitch: real-time guidance while you model. You're building a part, you hit a wall, you type a question, the advisor gives you an answer sourced from Onshape's official docs, tutorials, and training materials. It doesn't hallucinate from the open internet. It doesn't make up commands that don't exist. It sticks to what's actually in the Onshape knowledge base, which is a design choice that limits creativity but improves reliability.</p>
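<p>If you want a feel for what "sticks to the knowledge base" means in practice, here's a deliberately tiny Python sketch of docs-grounded answering. The snippets and the keyword scoring are invented for illustration; the real advisor does retrieval over Onshape's documentation on Amazon Bedrock, not a word count. The point is the last branch: when nothing matches, refuse rather than invent.</p>

```python
# Illustrative sketch: answer only from a fixed knowledge base, the way a
# docs-grounded assistant avoids open-internet hallucination.
# The snippets and scoring below are invented for illustration.

DOC_SNIPPETS = {
    "mates": "Onshape mates combine several degrees of freedom into a single mate relationship.",
    "branching": "Branches let you explore design alternatives and merge changes back into the main workspace.",
    "part studios": "A Part Studio holds multiple parts that are designed together and share sketches.",
}

def answer(question: str) -> str:
    """Return the best-matching snippet, or admit there is no answer."""
    words = set(question.lower().split())
    best_key, best_score = None, 0
    for key, text in DOC_SNIPPETS.items():
        score = len(words & set(text.lower().split()))
        if score > best_score:
            best_key, best_score = key, score
    if best_key is None:
        return "No matching documentation found."  # refuse rather than invent
    return DOC_SNIPPETS[best_key]
```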
<p>PTC is careful to note that the AI Advisor does not access your design data. It can't see your model. It can't read your feature tree. It can't inspect your sketches. It only knows what you tell it in the chat window, plus whatever context it can infer from the conversation. That's an important distinction that changes what the tool can and can't do.</p>
<h2>Where it's genuinely useful</h2>
<p>For newer Onshape users, especially people migrating from SolidWorks or Fusion 360, the AI Advisor is a decent safety net. Onshape does things differently enough that experienced CAD users still trip over its conventions. The way mates work, the branching and merging model, the Part Studio vs Assembly distinction, the way configurations are set up. These are all things that make sense once you understand them and are bewildering until you do.</p>
<p>I asked the advisor how Onshape's branching model compares to file versioning in traditional CAD. The answer was clear, accurate, and included a link to the relevant documentation. It explained the concept in terms that would make sense to someone coming from a PDM background. For that use case, finding the right documentation page fast, the advisor earns its place.</p>
<p>The FeatureScript guidance is another bright spot. FeatureScript is Onshape's programming language for custom features, and it has a learning curve that makes most people give up before they get anywhere interesting. The advisor can explain FeatureScript concepts, break down example code, and help you understand what a particular function does. It won't write production FeatureScript for you, but it'll get you over the hump of "what does this syntax even mean?" faster than reading the documentation raw. PTC has hinted at FeatureScript autocomplete as a future feature, and honestly, that would be more useful than most of what the advisor currently does.</p>
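<p>The core mechanic of that hinted-at autocomplete is mundane: prefix matching over a known vocabulary, ideally ranked by context. A hedged Python sketch of the un-ranked version, with candidate names loosely modeled on FeatureScript's op*/sk* naming convention (treat the list as illustrative, not an API reference):</p>

```python
import bisect

# Candidate names loosely modeled on FeatureScript's op*/sk* naming
# convention; the list is illustrative, not an API reference.
CANDIDATES = sorted([
    "opExtrude", "opFillet", "opBoolean", "opChamfer",
    "skRectangle", "skCircle", "skLineSegment",
])

def complete(prefix: str, limit: int = 5) -> list[str]:
    """Return up to `limit` candidates sharing the given prefix."""
    lo = bisect.bisect_left(CANDIDATES, prefix)  # first name >= prefix
    out = []
    for name in CANDIDATES[lo:]:
        if not name.startswith(prefix):
            break  # sorted list: once a name misses, the rest do too
        out.append(name)
        if len(out) == limit:
            break
    return out
```

<p>The LLM-powered version would rank these by what your feature is doing, which is where the real value would be.</p>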
<h2>Where it falls short</h2>
<p>The advisor can't see your model. It has no context about what you're building, what features you've applied, what errors you're hitting, or what the geometry looks like. Every conversation starts from zero. You have to describe your problem in text, which means you're spending time translating a visual, spatial problem into words so that a language model can translate it back into instructions.</p>
<p>I had a sketch that wouldn't fully constrain. In a normal CAD workflow, an experienced colleague would glance at the screen, point at the under-constrained point, and tell me what dimension I'm missing. With the AI Advisor, I had to describe my sketch geometry in words, list the constraints I'd already applied, and ask what might be missing. The advisor gave me a generic checklist of common under-constraint causes. Technically helpful. Practically slower than just staring at the sketch for another thirty seconds.</p>
<p>This is the fundamental limitation of an <a href="/posts/ai-cad-copilot">AI CAD copilot</a> that can't see the CAD. It's like calling tech support and having to describe your screen over the phone. The information barrier means every interaction carries overhead, and that overhead adds up until it's faster to just figure things out yourself.</p>
<p>The advice quality also has a ceiling. For basic workflow questions, the advisor is fine. For nuanced modeling decisions, like whether to use a loft or a sweep for a particular transition, or how to structure a multi-body Part Studio for downstream assembly work, the answers get vague. The advisor defaults to restating what the documentation says, which is accurate but doesn't include the judgment calls that make one approach better than another for your specific situation. It's the difference between "a sweep creates geometry by moving a profile along a path" and "use a sweep here because the cross-section changes and a loft would create a tangency break at the transition." The advisor gives you the first kind of answer.</p>
<p>For Enterprise customers, the AI Advisor is disabled by default and has to be turned on by an admin. If your company has a cautious IT department, which describes most companies large enough to have Enterprise Onshape licenses, you might not have access even though the feature technically exists.</p>
<h2>The roadmap promises</h2>
<p>PTC has been public about what's coming: agent workflows that can interact with model metadata, AI-assisted rendering, and eventually automated geometry creation. The geometry creation part is the one that would actually change the game, and it's also the one furthest out on the timeline. I've seen enough roadmap slides turn into five-year plans to stay skeptical until I can click on something.</p>
<p>The FeatureScript autocomplete, if it ships, would be the single most useful AI feature Onshape could add. FeatureScript has enough users who know enough to be dangerous but not enough to be productive, and code completion with context awareness would lower the barrier meaningfully. That's the kind of AI assistance that saves actual time on actual work, rather than saving ten seconds on a documentation lookup.</p>
<h2>How it compares</h2>
<p>Against SolidWorks AURA, the Onshape AI Advisor has a similar scope: documentation lookups, workflow guidance, natural language search. AURA has the advantage of assembly awareness, meaning it can answer questions about your current model's properties, which the Onshape advisor can't. The Onshape advisor has the advantage of running in a browser-based tool that's already cloud-native, so the AI integration feels less bolted-on.</p>
<p>Against what Siemens is doing with Design Copilot in NX, same pattern: chat panel, documentation search, command suggestions. The enterprise CAD market has collectively decided that the first AI feature to ship is "search, but in a chat window." Nobody's model is going to break because the chatbot gave a bad search result.</p>
<p>None of these tools do what the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> describes: generating actual geometry from a text prompt. They're all workflow assistants, not design generators. The gap between "AI that helps you use CAD" and "AI that does CAD" remains wide, and the vendor assistants are firmly on the first side of that gap. For a broader view of where these sit in the <a href="/posts/ai-in-cad-software">current field</a>, the assistants are catching up on usability while the generators are still working on accuracy.</p>
<h2>The verdict</h2>
<p>Onshape AI Advisor is a solid documentation assistant that lives inside a good CAD tool. For new users, migration users, and FeatureScript learners, it saves real time. For experienced Onshape modelers, it's mildly useful for the occasional question and mostly ignorable for daily work. The inability to see your model is the fundamental constraint, and until that changes, the advisor will always feel like it's helping from outside the room.</p>
<p>PTC is making the right bets on the roadmap. Context-aware assistance, FeatureScript tooling, and eventually geometry agents are the features that would make the advisor worth talking about in a year. What shipped so far is a foundation. A decent one, competently built, appropriately cautious. Just don't expect it to know what you're modeling, or why your sketch is broken, or which approach will actually survive the manufacturing review. That part is still on you.</p>
<p>I keep the advisor panel open. I ask it a question maybe once or twice a day. And most days, I close the answer, nod, and go back to doing it the way I was going to do it anyway. Which, honestly, might be the most accurate review I can give: it's there, it works, and I forget about it the moment I'm actually thinking about geometry.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Siemens NX AI Chat: enterprise AI meets CAD</title>
      <link>https://blog.texocad.ai/posts/siemens-nx-ai-chat</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/siemens-nx-ai-chat</guid>
      <pubDate>Tue, 24 Feb 2026 00:00:00 GMT</pubDate>
      <description>Siemens added AI chat to NX. It works the way enterprise software usually works: carefully, slowly, and with a lot of infrastructure underneath.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>siemens</category>
      <category>nx</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Siemens NX AI Chat is a natural language interface for NX that lets users query design data, ask about features, and get modeling assistance through text commands. It&apos;s part of Siemens&apos; broader Xcelerator AI strategy. Currently in active development with limited public availability.</p>
<p>There's a particular kind of feature announcement that only Siemens can pull off. Fifteen slides of strategic vision, three mentions of "digital thread," a reference to the Xcelerator portfolio, and then somewhere on slide twelve, a screenshot of a chat panel in NX that can answer questions about documentation. I watched one of these presentations last summer while eating a sandwich at my desk, and by the time the demo started, I'd finished the sandwich and most of my patience. But the demo itself was interesting, and the thing it showed was Design Copilot NX, which is Siemens' entry into the "AI chat in your CAD tool" race that every major vendor is now running.</p>
<p>The copilot shipped with the NX X Essentials update in June 2025. I've been using NX on and off for a project that requires it (client toolchain, not my choice), so I've had a few months with the chat feature open in the side panel. Here's what it actually does, what it doesn't, and why it feels exactly like what you'd expect from Siemens: careful, capable, and wrapped in more enterprise infrastructure than strictly necessary.</p>
<h2>What Design Copilot NX is</h2>
<p>It's a natural language chat interface embedded in NX that answers questions by searching through official Siemens NX documentation. You type a question, the copilot searches the knowledge base, and it returns an answer with links to relevant docs and, in some cases, direct links to launch the command you're asking about.</p>
<p>That last part is the one feature that separates it from just having a browser tab open to the NX help site. If you ask "How do I create a swept feature?" the copilot doesn't just explain it, it gives you a link that launches the Swept command directly. That's a small thing, but in NX, where the menu structure has approximately seven thousand entries organized in a way that makes sense to Siemens and nobody else, having a natural language shortcut to the right command is genuinely useful. I've used NX for years and I still can't find half the commands on the first try.</p>
<p>The copilot also provides related query suggestions based on your conversation. Ask about swept features and it'll suggest follow-up questions about guide curves, section orientation, and alignment methods. Occasionally it surfaces things I didn't know I needed to look up.</p>
<p>Siemens also shipped a version for NX Manufacturing, where the copilot helps NC programmers find toolpath strategies, machining parameters, and setup documentation. Manufacturing users in NX often have even more trouble finding the right settings than design users, because the manufacturing module is enormous and specialized.</p>
<h2>Where it's useful</h2>
<p>NX has one of the steepest learning curves in the CAD industry. It's not that the software is bad. It's that the software can do everything, and finding the specific thing you need is like searching for a particular book in a library where the shelving system was designed by committee in the 1990s and expanded every year since without being reorganized. The copilot helps with that specific problem.</p>
<p>I asked it how to set up a variable fillet that transitions between two radii along an edge. In Fusion 360, I know exactly where that option lives. In NX, I knew the capability existed but couldn't remember which flavor of Edge Blend dialog contained it. The copilot gave me the answer in about five seconds, with a link to launch the right command. Without it, I'd have spent a minute or two clicking through menus, or opened the help docs in a browser and searched there. Small savings, but they compound across a session.</p>
<p>For new NX users, the copilot is more valuable. NX's terminology doesn't always match other CAD tools. What SolidWorks calls a "Lofted Boss" and Fusion 360 calls a "Loft," NX calls "Through Curves." If you're coming from another tool and you describe what you want in terms from your previous software, the copilot is reasonably good at translating. I tested this by asking SolidWorks-flavored questions: "How do I create a boss extrude?" gets you to the Extrude command with the correct NX terminology and context. That translation layer is more useful than it sounds.</p>
<p>The manufacturing copilot deserves separate mention. A colleague who programs five-axis parts said the copilot saved him real time when setting up a new operation type. He asked about tilt-angle strategies for barrel cutters and got documentation links plus parameter explanations that would have taken ten minutes to find through the help system. For someone programming parts daily in NX Manufacturing, that's a meaningful improvement.</p>
<h2>Where it doesn't help</h2>
<p>Like every <a href="/posts/ai-cad-copilot">AI CAD copilot</a> shipping right now, Design Copilot NX is a documentation search tool with a conversational interface. It doesn't see your model. It doesn't analyze your feature tree. It doesn't know what you're building, how the geometry behaves, or where the errors are. You tell it what you're trying to do, and it tells you how the documentation says to do it.</p>
<p>I had a boolean operation that was failing on a thin-wall geometry. The error message was cryptic, which is NX's specialty. I asked the copilot about the error. The response was a general explanation of boolean failure causes: check for zero-thickness results, ensure bodies overlap, verify body types. All correct. All things I already knew. What I needed was for the AI to look at the two bodies and tell me which face was causing the problem. That's not what any vendor's AI assistant does right now, but it's what would actually save time.</p>
<p>The copilot stays strictly within NX's official documentation. That's a feature for reliability, but it means you can't ask about interoperability issues or real-world workarounds. "How do I fix a STEP import that came in with split faces?" is the kind of question experienced NX users deal with regularly, and the answer usually involves tribal knowledge and specific sequences of healing commands that aren't well-documented. The copilot can point you to Heal Geometry, but it can't walk you through the sequence that actually works on a messy import from Creo.</p>
<p>If you're using NX standalone or in a mixed-vendor environment, some of the copilot's contextual suggestions are less relevant. It will cheerfully suggest Teamcenter workflows that don't apply to your setup.</p>
<h2>The enterprise question</h2>
<p>Siemens' approach to AI reflects Siemens' approach to everything: deliberate, infrastructure-heavy, and enterprise-first. The copilot is available through value-based licensing, Siemens' subscription model that bundles features based on usage tiers. The pricing is not transparent. It's Siemens. You talk to a sales rep.</p>
<p>The users who would benefit most from an AI assistant are newer users and smaller shops, and they're the least likely to be on the license tier that includes it. The experienced NX programmers at large aerospace companies who already know where every command lives are the ones who get the copilot by default. The people who need the tool most are the last ones to get it.</p>
<h2>How it compares to the field</h2>
<p>The pattern is now clear across the industry. Dassault shipped <a href="/posts/solidworks-aura-ai">AURA for SolidWorks</a>, PTC shipped the AI Advisor for Onshape, Siemens shipped Design Copilot for NX. All three do roughly the same thing: natural language search over documentation, command suggestions, and workflow guidance. All three can't see your model. All three are useful for beginners and forgettable for experts.</p>
<p>The differences are mostly about ecosystem. AURA runs on Mistral via Dassault's own cloud and is tied to 3DSwym. Onshape's advisor runs on Amazon Bedrock and is tied to Onshape's learning materials. Siemens' copilot is part of the Xcelerator portfolio. Each one is optimized for its own vendor's documentation and workflows, which means none of them are useful outside their own tool.</p>
<p>None of them generate geometry. None of them build features. None of them do what the tools in the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> do: turn a text description into an actual solid body. The assistants help you use CAD. The generators try to replace part of the CAD process itself. Across <a href="/posts/ai-in-cad-software">AI in CAD software</a>, the assistant story is converging while the generation story is still wide open.</p>
<h2>The verdict</h2>
<p>Design Copilot NX is a competent documentation assistant that makes NX slightly easier to navigate. For new users, it saves time finding commands and understanding terminology. For experienced users, it saves occasional frustration when NX hides a setting somewhere unexpected. The manufacturing variant might be the better story, since CAM programming in NX is complex enough that having a fast lookup tool has measurable value.</p>
<p>It's not a copilot in the way that word gets used in other industries. It doesn't work alongside you. It answers questions about the manual. That's useful, it's just not what the word "copilot" implies.</p>
<p>I'll keep the panel open when I'm working in NX. I'll use it the same way I use the copilots in <a href="/posts/solidworks-ai-features-2026">SolidWorks</a> and Onshape: as a faster way to search documentation when I can't remember where Siemens put the button. And I'll keep waiting for the version that can actually look at my model and tell me something I don't already know. That version isn't here yet, from any vendor. But the infrastructure is being laid, one chat panel at a time.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fusion 360 Text to Command: natural language meets feature trees</title>
      <link>https://blog.texocad.ai/posts/fusion-360-text-to-command</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/fusion-360-text-to-command</guid>
      <pubDate>Mon, 23 Feb 2026 00:00:00 GMT</pubDate>
      <description>Text to Command lets you tell Fusion 360 what to do in plain English instead of clicking through menus. It&apos;s a different idea from text-to-CAD, and it solves a different problem.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>fusion-360</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Fusion 360 Text to Command is an Autodesk feature that translates natural language instructions into CAD operations (e.g., &apos;extrude this face by 10mm&apos;). Unlike text-to-CAD, it doesn&apos;t generate geometry from scratch. It operates on existing models and works as a natural language interface to Fusion 360&apos;s command system.</p>
<p>I spend an embarrassing amount of time in Fusion 360 looking for commands I've used maybe three times in my life. The revolve tool is under Create, which makes sense, but the split body tool is under Modify, which also makes sense but not until you think about it. The circular pattern is somewhere in there, and I know I'll find it faster if I stop thinking and just let my eyes scan. The search bar helps, but typing "split" into the search bar and then clicking the result and then clicking the plane and then clicking the body isn't exactly the frictionless workflow the marketing team imagines.</p>

<p>Text to Command is Autodesk's answer to this particular brand of menu fatigue. Instead of navigating the ribbon or searching for a command name, you type what you want to do in plain English. "Split this body with my construction plane." "Extrude this face by 10mm." "Add a 0.5mm chamfer to all edges." The Autodesk Assistant interprets the instruction and executes the corresponding Fusion command.</p>
<p>It's not text-to-CAD. It doesn't generate geometry from a blank canvas. It operates on what's already there. That distinction matters, and it's one that people keep mixing up.</p>
<h2>What it actually does</h2>
<p>Text to Command is part of the Autodesk Assistant, which sits in a docked panel on the right side of the Fusion window. You open it, type an instruction, and the Assistant figures out which Fusion command you're asking for, sets the parameters, and executes it.</p>
<p>As of the March 2026 update, the supported operations in the design workspace include:</p>
<p>Geometry creation and modeling features: extrude, fillet, chamfer, hole, shell, split, revolve. The basics.</p>
<p>Sketch creation with dimensions. You can say "create a rectangle, 40mm by 20mm" and it produces a sketch with the right dimensions applied.</p>
<p>Patterns: circular and rectangular.</p>
<p>Primitives: spheres, toruses, coils.</p>
<p>Material and appearance assignment. "Assign stainless steel to this body" works as expected.</p>
<p>Design queries: ask about volume, surface area, identify geometry types, count features. Useful when you want a quick answer without opening the measure tool.</p>
<p>Since March 2026, it also handles manufacturing workspace tasks: creating manufacturing setups, generating toolpaths, selecting tools, batch renaming operations. The CAM side is newer and less polished than the design side, but it's there.</p>
<p>The execution model is straightforward. You type a command in natural language. The Assistant interprets it. It either executes immediately or, if you follow Autodesk's recommended workflow, proposes the steps and waits for your confirmation before executing. The Ask, Confirm, Execute pattern. I prefer the confirmation step because watching an AI extrude in the wrong direction without asking is the kind of surprise I've had enough of in my life.</p>
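<p>The pattern itself is simple enough to sketch in a few lines of Python. Everything here, including the hypothetical <code>parse_instruction</code> stand-in and its plan format, is invented for illustration; the real Assistant's interpreter is obviously not a keyword check:</p>

```python
# Minimal sketch of an Ask -> Confirm -> Execute loop, assuming a
# hypothetical interpreter that maps natural language to a command plan.

def parse_instruction(text: str) -> list[str]:
    """Stand-in for the NL interpreter: return proposed command steps."""
    if "extrude" in text.lower():
        return ["Select face", "Extrude 10 mm", "Operation: join"]
    return []

def ask_confirm_execute(text: str, confirm) -> str:
    plan = parse_instruction(text)           # Ask: interpret the instruction
    if not plan:
        return "no-op"                       # nothing recognized, nothing run
    if not confirm(plan):                    # Confirm: show the user the plan
        return "cancelled"                   # user rejected the proposed steps
    # Execute: in the real tool, this is where CAD commands would run.
    return "executed: " + "; ".join(plan)
```

<p>The confirm callback is the whole point: nothing touches the model until a human has seen the plan.</p>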
<h2>Where it works well</h2>
<p>Simple, well-specified operations on clearly identifiable geometry. That's the sweet spot.</p>
<p>"Extrude this face by 15mm." Works. No ambiguity about which face, no ambiguity about the operation.</p>
<p>"Fillet all edges of this body, 2mm radius." Works. Identical to what you'd get clicking through the dialog manually.</p>
<p>"Assign aluminum 6061 to this body." Works. Faster than navigating the material library.</p>
<p>For commands I use rarely, Text to Command is genuinely faster than hunting through menus. I use the loft tool maybe once a month. I can never remember whether it's under Create or under Surface or under some contextual menu that only appears when you've already made specific selections. Typing "loft between these two profiles" is easier than the menu search, and the Assistant handles it.</p>
<p>The reusable prompts feature is a nice touch. If you have a multi-step sequence you repeat, you can save it as a prompt and replay it later. This is basically a macro system with natural language as the input format. Less flexible than Fusion's API scripting, but lower barrier to entry. A designer who doesn't write Python can still automate a three-step sequence by saving a prompt.</p>
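<p>Conceptually, saved prompts are a named macro store: a name maps to a list of steps, and replaying feeds each step back to the Assistant in order. A minimal Python sketch of that idea (the store and function names are mine, not Autodesk's):</p>

```python
# Sketch of reusable prompts as a named macro store. The prompt text is
# what you'd type into the Assistant; the store itself is invented here.

saved_prompts: dict[str, list[str]] = {}

def save_prompt(name: str, steps: list[str]) -> None:
    saved_prompts[name] = list(steps)  # copy so later edits don't leak in

def replay(name: str, run_step) -> int:
    """Feed each saved step to `run_step` (the Assistant); return step count."""
    steps = saved_prompts.get(name, [])
    for step in steps:
        run_step(step)
    return len(steps)
```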
<h2>Where it falls apart</h2>
<p>Ambiguity is the killer.</p>
<p>"Add a pocket to the top of this part." Which face is "the top"? How big? How deep? Square or rectangular or circular? The Assistant has to guess, and its guesses are wrong often enough that you learn to be very specific or to just use the command palette instead.</p>
<p>"Make this thinner." Thinner how? Shell it? Scale it? Modify a specific dimension? The Assistant will pick one interpretation, and it might not be yours. I told it to "make the walls thinner" on a box and it shelled the body, which was correct for what I wanted but was a coin flip between that and offsetting individual faces.</p>
<p>Multi-step operations with dependencies are unreliable. "Extrude this face by 10mm, then add a 5mm hole centered on the new face" is two operations with a dependency: the second one needs the result of the first one. Sometimes the Assistant handles this. Sometimes it gets confused about which face is the "new face" and drills the hole somewhere unexpected. The recommended approach is to do one step at a time, confirm each one, and build up the sequence manually. Which works, but it's not much faster than just clicking the commands.</p>
<p>Context awareness has limits. The Assistant doesn't always understand spatial relationships the way you'd describe them to a colleague. "Put a hole near the left edge" requires the AI to know what "left" means in your current view orientation, how "near" translates to a dimension, and which edge you mean. A human colleague would ask clarifying questions. The Assistant sometimes asks, sometimes guesses, and sometimes produces something that makes you wonder if you're speaking the same language.</p>
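<p>You can get surprisingly far flagging these under-specified instructions with crude heuristics. A Python sketch, with rules invented for illustration rather than taken from how the Assistant actually decides when to ask a clarifying question:</p>

```python
import re

# Illustrative check for under-specified instructions, of the kind that
# trips up Text to Command. The heuristics are invented, not Autodesk's.

def missing_details(instruction: str) -> list[str]:
    gaps = []
    # A dimension with units, e.g. "10mm" or "0.5 in"
    if not re.search(r"\d+(\.\d+)?\s*(mm|cm|in)\b", instruction):
        gaps.append("no dimension with units")
    # An explicit geometric target, e.g. "this face" or "the edge"
    if not re.search(r"\b(this|the)\s+(face|edge|body|plane|sketch)\b",
                     instruction.lower()):
        gaps.append("no explicit target geometry")
    return gaps
```

<p>"Add a pocket to the top of this part" fails both checks, which is exactly why the Assistant has to guess.</p>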
<h2>How it compares to text-to-CAD</h2>
<p>This is the comparison that confuses people, and it's worth being very clear about it.</p>
<p>Text-to-CAD means generating geometry from nothing. You describe a part, the AI creates it. A blank canvas goes to a finished (or at least started) model. That's what tools like Zoo.dev do today, and what Autodesk's <a href="/posts/fusion-360-neural-cad">Neural CAD</a> aims to do inside Fusion eventually. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers this in detail.</p>
<p>Text to Command means operating on existing geometry using natural language instead of menus. You already have a model. You want to modify it. Instead of finding the chamfer tool in the ribbon, you type "chamfer these edges." The AI translates your words into Fusion commands.</p>
<p>One is a generative tool. The other is an interface tool. Both use natural language. They solve fundamentally different problems.</p>
<h2>The prompting tax</h2>
<p>There's an overhead to Text to Command that's easy to miss. To get reliable results, you need to be specific. Autodesk's recommended formula is: "I want to [GOAL] on [TARGET]. Constraints: [UNITS], [DON'T CHANGE X]."</p>
<p>That's a lot of typing for an extrude. If you already know the command, it's faster to just use it. The time savings come when you don't know the command, when the command is buried in a menu, or when you want to chain operations and save the sequence for later. For power users who know Fusion cold, Text to Command is a curiosity. For occasional users who switch between CAD platforms and can never remember where anything is, it's more useful.</p>
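<p>If you use the formula often, it's effectively a template. A trivial Python helper that fills it in (the formula wording is Autodesk's recommendation; the helper and its parameter names are mine):</p>

```python
# Fills Autodesk's recommended prompt formula:
# "I want to [GOAL] on [TARGET]. Constraints: [UNITS], [DON'T CHANGE X]."
# The helper is an illustration, not part of any Autodesk API.

def build_prompt(goal: str, target: str, units: str, keep_fixed: str) -> str:
    return (f"I want to {goal} on {target}. "
            f"Constraints: {units}, don't change {keep_fixed}.")
```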
<h2>The verdict</h2>
<p>Text to Command is a genuinely useful feature that works well within a narrow band of complexity. For simple, clearly specified operations on existing geometry, it's faster and more pleasant than menu navigation. For anything ambiguous, multi-step, or context-dependent, it's unreliable enough that you'll want to keep your mouse-and-menu skills sharp.</p>
<p>It's shipping now as part of the Autodesk Assistant Tech Preview. It's free with your Fusion subscription. There's no reason not to try it. There's also no reason to restructure your workflow around it until the reliability improves.</p>
<p>The broader picture is that Text to Command, <a href="/posts/fusion-360-neural-cad">Neural CAD</a>, and the rest of the <a href="/posts/fusion-360-ai-features">Fusion 360 AI features</a> are all pieces of a larger bet Autodesk is making on natural language as an interface for design software. Some of those pieces work today. Some are still being assembled. Text to Command is the piece that works, within its limits, and I use it most days. It hasn't changed how I design. It's changed how I find the Split Body tool, which on some mornings is enough.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fusion 360 Neural CAD: what Autodesk is actually building</title>
      <link>https://blog.texocad.ai/posts/fusion-360-neural-cad</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/fusion-360-neural-cad</guid>
      <pubDate>Sun, 22 Feb 2026 00:00:00 GMT</pubDate>
      <description>Neural CAD is Autodesk&apos;s attempt at text-to-geometry inside Fusion 360. It was announced at AU 2025. As of early 2026, you still can&apos;t use it.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>fusion-360</category>
      <category>neural-cad</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Neural CAD is an Autodesk research project for generating editable 3D geometry from text prompts inside Fusion 360. Announced at Autodesk University 2025, it is not yet publicly available. It aims to produce parametric, editable output rather than mesh, but no shipping date has been confirmed.</p>
<p>I've been checking the Fusion 360 updates page roughly once a week since November, the way you check a tracking number for a package that still says "label created." Neural CAD was announced at Autodesk University 2025 in Nashville with the kind of energy that makes you think shipping is imminent. It's now April 2026, and the feature is still somewhere between "active research" and "coming to a product near you." The update page keeps showing me improvements to the sketch environment and new thread profiles. Useful stuff. Not what I'm refreshing the page for.</p>
<p>Neural CAD is Autodesk's name for a new type of AI model trained to generate real CAD geometry from text prompts. If it ships the way they've described it, it would be the first text-to-CAD capability built directly into a major professional CAD platform. That's a big deal. The problem is the "if."</p>
<h2>What Autodesk said at AU 2025</h2>
<p>At Autodesk University 2025, Mike Haley from Autodesk Research described Neural CAD as "completely reimagining the traditional software engines that create CAD geometry." The claim is that these are new AI foundation models trained specifically to reason about CAD objects, not general-purpose language models bolted onto existing geometry kernels.</p>
<p>The key technical claim: Neural CAD generates BREP (Boundary Representation) geometry from text prompts. That means real solid geometry with mathematical surfaces, edges, and vertices. Not mesh. Not triangulated approximations. The same kind of representation that Fusion's parametric engine works with natively. The demo showed someone typing something like "create a contemporary air fryer" and getting back an editable 3D model in the Fusion canvas.</p>
<p>Autodesk positioned this as fundamentally different from what existing <a href="/posts/text-to-cad-guide">text-to-CAD tools</a> do. Most of those tools, Zoo.dev included, generate geometry externally and hand you a STEP file to import. Neural CAD would generate geometry inside Fusion itself, integrated with the timeline, the parametric history, and the rest of the design environment. That integration is what makes the promise compelling. It's also what makes it hard to ship.</p>
<h2>Why this is technically hard</h2>
<p>Generating editable BREP geometry from text is a different and harder problem than generating mesh from text. The <a href="/posts/text-to-cad-vs-text-to-3d">text-to-CAD vs text-to-3D</a> distinction matters here.</p>
<p>A mesh model is a bag of triangles. You can approximate any shape with enough triangles, and the AI doesn't need to understand much about engineering to produce one. The output looks like a thing. It's not a useful engineering artifact, but it looks right in a viewport.</p>
<p>BREP geometry requires the AI to produce mathematically precise surfaces that meet at exact edges, form valid solid bodies, and behave correctly when you try to add features to them. A BREP model isn't just a shape. It's a data structure that encodes topology: which faces are adjacent, which edges bound which faces, which surfaces are planar versus cylindrical versus freeform. If any of those relationships are wrong, the model breaks the moment you try to fillet an edge or shell the body.</p>
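<p>To make the distinction concrete, here is a deliberately simplified Python sketch of the two representations. This is illustrative only; a real geometry kernel's data model is far richer:</p>

```python
# Simplified sketch (not a real kernel's data model) contrasting
# the two representations.

# A mesh is just a bag of triangles: vertex coordinates, no topology.
mesh = [
    ((0, 0, 0), (1, 0, 0), (0, 1, 0)),  # one triangle
    ((1, 0, 0), (1, 1, 0), (0, 1, 0)),  # another; the shared edge is implicit
]

# A BREP additionally encodes topology: which faces are bounded by
# which edges, and what mathematical surface each face lies on.
brep = {
    "faces": {
        "f1": {"surface": "plane", "edges": ["e1", "e2", "e3", "e4"]},
        "f2": {"surface": "cylinder", "edges": ["e4", "e5", "e6"]},
    },
    # Each edge knows the faces it bounds; a fillet or shell operation
    # walks this adjacency and breaks if any relationship is wrong.
    "edges": {"e4": {"bounded_by": ["f1", "f2"]}},
}

# A fillet on e4 can ask: which faces meet here, and on what surfaces?
adjacent = [brep["faces"][f]["surface"] for f in brep["edges"]["e4"]["bounded_by"]]
```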
<p>The parametric piece is even harder. If Autodesk wants Neural CAD output to participate in Fusion's timeline, the generated geometry needs to be expressed as a sequence of modeling operations that can be rolled back, edited, and replayed. That's not just generating a shape. It's generating a construction history that produces the shape. The difference is like the difference between giving someone a finished cake and giving them a recipe that produces the cake. One is dramatically more complex than the other.</p>
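<p>A toy sketch of the recipe idea, with made-up operation records, shows why a timeline is more than a shape: edit one parameter and replay, and the whole model updates. An imported STEP body has no history to edit:</p>

```python
import math

# Toy sketch of a parametric timeline: the model is not a stored shape
# but a list of operations that can be edited and replayed.
def replay(history):
    """Rebuild the model state by applying each recorded operation."""
    state = {"volume": 0.0}
    for op in history:
        if op["type"] == "box":
            state["volume"] = op["x"] * op["y"] * op["z"]
        elif op["type"] == "hole":
            state["volume"] -= math.pi * (op["d"] / 2) ** 2 * op["depth"]
    return state

history = [
    {"type": "box", "x": 80, "y": 50, "z": 5},
    {"type": "hole", "d": 4.2, "depth": 5},
]
v1 = replay(history)["volume"]

# "Rolling back the timeline" means editing an earlier operation and
# replaying everything after it.
history[0]["x"] = 100
v2 = replay(history)["volume"]
```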
<p>Whether Autodesk has solved this or plans to ship something more limited than the AU demo suggested, I don't know. The research blog posts are encouraging. The absence of a shipping date is less encouraging.</p>
<h2>What exists today vs what was announced</h2>
<p>As of April 2026, here's the reality:</p>
<p>The Autodesk Assistant is live in Fusion 360 as a Tech Preview. It can execute existing commands via natural language (that's the <a href="/posts/fusion-360-text-to-command">Text to Command</a> capability). It can create basic geometry, apply modeling features, and answer questions about your model. This is real, shipping, and usable today, with the usual caveats about Tech Preview reliability.</p>
<p>Neural CAD for geometry, the text-to-BREP generation, is not available. It's not in Tech Preview. There's no beta access that I'm aware of. The <a href="https://www.autodesk.com/products/fusion-360/blog/fusion-roadmap-2026/">Fusion Roadmap 2026</a> references "neural CAD experiences that turn natural language prompts into editable design geometry" as something the team is working toward. The language is aspirational, not committal.</p>
<p>This distinction matters because people conflate the two. The Autodesk Assistant doing command execution is impressive but incremental. It's a smarter command line. Neural CAD generating novel geometry from prompts is a different category of capability, and it's the one that hasn't arrived yet.</p>
<h2>How it compares to what's already available</h2>
<p>If you want text-to-CAD geometry right now, tools like Zoo.dev already generate BREP solids from text prompts and output STEP files you can import into Fusion 360 or any other CAD software. They work today. The geometry is imperfect and needs cleanup, as I've written about in the <a href="/posts/is-text-to-cad-accurate">accuracy post</a>, but the basic capability exists.</p>
<p>The difference Neural CAD promises is integration. An external tool generates a STEP file that arrives in Fusion as a dumb imported body with no feature history. Neural CAD, as described, would generate geometry that's native to Fusion's modeling environment. You could roll back the timeline, edit a sketch dimension, add features, and the generated geometry would participate in the parametric workflow like any other operation.</p>
<p>That's a real advantage if it works. The gap between "import a STEP file and work with it" and "have native parametric geometry generated inside your design environment" is significant. It's the difference between getting a block of rough-cut material and getting a partially finished part on your machine with the fixtures already set up.</p>
<p>But promises about integration don't count until they ship. Zoo.dev, for all its limitations, works today. Neural CAD doesn't. That's the current state, and it's the only state that matters for anyone trying to get work done this week.</p>
<h2>The research angle</h2>
<p>Autodesk Research has published enough about their AI work to suggest this isn't vaporware. They have teams working on geometry understanding, CAD-specific foundation models, and the intersection of machine learning with parametric modeling. The research is real. The engineering challenge of turning research into a product feature that works reliably for millions of users is where things get slow.</p>
<p>I've seen enough AI demos that looked great in a controlled setting and fell apart the moment real users with real models started poking at them. A demo that generates a clean air fryer from a carefully crafted prompt is one thing. A production feature that handles "flange bracket, 3mm thick, four M4 holes, 60mm bolt pattern, with a stiffening rib down the middle, and make it look like the one from the Johnson project but smaller" is another thing entirely. The second prompt is closer to how engineers actually talk, and it's the kind of prompt that makes AI systems produce interesting garbage.</p>
<h2>What I'm watching for</h2>
<p>When Autodesk does ship something under the Neural CAD name, here's what I'll be testing:</p>
<p>Can it produce geometry that survives a fillet? Not just looks good in the viewport, but actually has valid topology that Fusion's modeling tools can work with.</p>
<p>Can it hit prompted dimensions? If I say 50mm, I want 50mm, not 48.7mm.</p>
<p>Does the output have a real feature timeline? Can I roll back and edit the AI-generated operations, or is it an imported body with a single "Neural CAD" node?</p>
<p>Does it handle engineering language? Not marketing prompts like "create a contemporary air fryer" but engineering prompts like "rectangular plate, 80x50x5mm, four 4.2mm through holes on a 60x30mm bolt pattern."</p>
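<p>The dimensional part of that checklist is easy to automate. A hypothetical acceptance check, with a tolerance I picked arbitrarily, might look like:</p>

```python
# Hypothetical acceptance check for generated geometry: compare measured
# dimensions against what the prompt asked for, within a tolerance.
# The 0.1mm default is arbitrary, not a standard.
def dims_match(prompted_mm, measured_mm, tol_mm=0.1):
    """True if every measured dimension is within tol of the prompt."""
    return all(abs(p - m) <= tol_mm for p, m in zip(prompted_mm, measured_mm))

# "If I say 50mm, I want 50mm, not 48.7mm."
assert dims_match([50.0], [50.02])
assert not dims_match([50.0], [48.7])
```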
<h2>The honest take</h2>
<p>Neural CAD is the most interesting thing Autodesk has announced in years. The idea of generating native, editable, parametric BREP geometry from text prompts inside a professional CAD environment is genuinely compelling. If Autodesk pulls it off, it changes <a href="/posts/what-is-text-to-cad">how text-to-CAD fits into real workflows</a> in a way that external tools can't match.</p>
<p>But it's not here. The feature page doesn't have a download button. The roadmap doesn't have a date. The AU demo was six months ago, and the follow-up has been silence punctuated by improvements to unrelated parts of Fusion. That's normal for complex software development, and it's also normal for features that got announced before they were ready.</p>
<p>I'll keep refreshing the update page. I'll keep my expectations calibrated to what I can actually open and use. And I'll keep using external text-to-CAD tools for the work I need done today, because waiting for the perfect integrated solution is a luxury that shipping deadlines don't allow. When Neural CAD arrives, I'll test it the same way I test everything: with a real part and no mercy. Until then, it's a promising research project with excellent marketing attached.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Creo AI Assistant: PTC&apos;s quiet bet</title>
      <link>https://blog.texocad.ai/posts/creo-ai-assistant</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/creo-ai-assistant</guid>
      <pubDate>Sat, 21 Feb 2026 00:00:00 GMT</pubDate>
      <description>PTC added an AI assistant to Creo that helps with error troubleshooting and design guidance. It doesn&apos;t make headlines, but it solves a specific problem.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>creo</category>
      <category>ptc</category>
<content:encoded><![CDATA[<p><strong>Quick answer:</strong> Creo AI Assistant (beta in Creo+ 13.0, September 2025) provides error troubleshooting, feature suggestions, and design guidance within PTC Creo. It also integrates with Creo&apos;s existing generative design tools (GTO/GDX). PTC&apos;s AI strategy is more conservative than Autodesk or Dassault, focusing on incremental productivity rather than text-to-geometry.</p>
<p>I hit a regeneration failure in Creo last fall that produced an error message I could have framed and hung on the wall. Something about a "failed intersection of datum plane and geometry reference" with a code that looked like it had been generated by a license plate factory. I did what everyone does: selected the text, pasted it into a search engine, clicked through three PTC support articles that described vaguely related problems in Creo versions I'd never used, and eventually fixed it by deleting the feature and rebuilding it from scratch. Forty minutes. For a model that was supposed to be parametric.</p>
<p>Two months later, PTC shipped the Creo AI Assistant beta in Creo+ 13.0. I hit a similar error, and this time the Assistant popped up with a panel explaining what that specific error code meant, linking to the relevant PTC support article, and suggesting a troubleshooting sequence. The suggestion didn't fix my problem directly, but it pointed me in the right direction about ten minutes faster than the search-engine route. Which is not a revolution. But for anyone who's spent time in PTC's support portal, ten minutes of not doing that is a gift.</p>
<h2>What it is</h2>
<p>Creo AI Assistant is PTC's AI chat feature, launched as a beta in Creo+ 13.0 in September 2025. It lives in a dockable panel inside the Creo interface. The current implementation is focused almost entirely on error troubleshooting: when Creo throws an error, the AI Assistant can provide context, explanations, and links to relevant PTC support articles directly inside the application.</p>
<p>The beta shipped in the cloud-native Creo+ first. On-premises Creo 13 is expected to get it around May 2026. If you're running anything older than Creo 13, you're not getting the AI Assistant at all, which is a conversation your IT department probably doesn't want to have.</p>
<p>The scope is narrow on purpose. PTC didn't try to ship a general-purpose chatbot that answers design philosophy questions or brainstorms product concepts. They built a tool that helps you understand Creo's error messages, which, if you've used Creo for any length of time, is not a trivial contribution.</p>
<h2>The error troubleshooting thing</h2>
<p>This is the core feature and it's worth understanding how it works in practice, because the implementation is both more limited and more useful than you'd expect.</p>
<p>When you encounter a supported error in Creo, the AI Assistant button becomes available. You click it, and a panel opens with information pulled from PTC's support knowledge base. The AI interprets the error, provides context about what caused it, and offers troubleshooting steps or links to relevant articles. It's not reading your model geometry. It's not analyzing your feature tree to figure out the root cause. It's matching the error code against a knowledge base and presenting the results in a more accessible format than the support portal.</p>
<p>The limitation is that it only works for a subset of error messages. If Creo throws an error the AI doesn't recognize, the Assistant button doesn't appear. PTC is expanding coverage, but right now the set of supported errors is incomplete. I hit the AI button maybe one time in three when something goes wrong. The other two times, I'm back to the search engine.</p>
<p>When it does work, the quality is usually good. PTC has a deep support knowledge base built over decades of enterprise customers filing tickets about exactly the kind of problems you're encountering. The AI's job is to surface the right article at the right time, and it does that well. It's less "AI generating novel insight" and more "AI being a good librarian," which honestly might be the most useful thing an AI assistant can do inside a CAD tool right now.</p>
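<p>The librarian pattern is simple enough to sketch. This toy lookup, with invented error codes and a placeholder URL, mirrors the behavior described above: known codes get help, unknown codes get nothing:</p>

```python
# Toy version of the "good librarian" pattern: match an error code
# against a knowledge base and surface the right article.
# The codes, summaries, and URL below are made up for illustration.
KNOWLEDGE_BASE = {
    "REGEN-1042": {
        "summary": "Datum plane no longer intersects the referenced geometry.",
        "article": "https://support.example.com/articles/regen-1042",
    },
}

def lookup(error_code):
    """Return help for a known code, or None (no Assistant button)."""
    return KNOWLEDGE_BASE.get(error_code)

help_entry = lookup("REGEN-1042")  # recognized: the panel appears
unknown = lookup("REGEN-9999")     # unrecognized: back to the search engine
```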
<h2>What PTC is planning</h2>
<p>The beta is deliberately modest, but PTC's roadmap for Creo 2026 describes broader ambitions. Planned features include natural language commands, where you describe a design operation in plain English and Creo executes it. Predictive design recommendations, where the AI analyzes your design patterns and suggests next steps based on best practices. Intelligent feature recognition that automatically categorizes design elements. A full "Design Intelligence Assistant" that understands engineering intent.</p>
<p>I've heard versions of this roadmap from every major CAD vendor. Autodesk is already shipping some of it as a Tech Preview. Dassault is packaging it into LEO and AURA. Siemens has copilots in both NX and Solid Edge. The plans are ambitious across the board. The shipping software is less so.</p>
<p>PTC's difference is that they're not rushing. The beta is labeled as a beta. The scope is limited. Whether that's strategic patience or a late start depends on your generosity.</p>
<h2>Where it fits in PTC's AI story</h2>
<p>PTC already has generative design tools, Creo GTO and GDX, that use simulation-driven optimization to produce lightweight, organic structures based on loads and constraints. Those have been in Creo for a few years and they're useful for specific applications, particularly in aerospace and automotive where weight reduction justifies the workflow complexity.</p>
<p>The AI Assistant doesn't connect to those tools in any direct way. It's a separate feature addressing a separate problem. PTC's broader pitch is that AI will eventually tie everything together into a coherent experience. The reality, today, is an error lookup tool and some topology optimization features that don't talk to each other.</p>
<h2>The PTC customer angle</h2>
<p>This matters because PTC's customer base is different from Autodesk's or Onshape's. Creo is used heavily in aerospace, defense, automotive, medical devices, and heavy industrial equipment. These are regulated industries where design processes are controlled, changes require documentation, and "let the AI try something" is not a phrase anyone uses in a design review.</p>
<p>For these customers, an AI that helps you understand and fix errors faster is more valuable than an AI that brainstorms product concepts. The PTC engineer fighting a regeneration failure in a part with 400 features and a change order attached doesn't want creativity from the AI. They want to know which feature broke the chain and why. The current Creo AI Assistant doesn't do that yet, but the error troubleshooting direction is the right one for this audience.</p>
<p>PTC's conservative approach also reflects an uncomfortable truth about AI in CAD: enterprise customers are cautious about tools that modify geometry or make design suggestions without human oversight. The liability implications of an AI suggesting a wall thickness or a fillet radius in a part that ends up in a jet engine are non-trivial. PTC seems to understand this in a way that not every vendor does.</p>
<h2>How it compares</h2>
<p>Against <a href="/posts/autodesk-assistant-ai">Autodesk Assistant</a>, Creo AI Assistant is more focused and less capable. Autodesk is shipping text-to-command features and natural language modeling. PTC is shipping error lookup. But Autodesk's features are Tech Preview quality, while PTC's narrow feature works reliably within its scope. Reliable and narrow versus ambitious and early.</p>
<p>Against <a href="/posts/onshape-ai-advisor">Onshape AI Advisor</a>, which PTC also owns, the approach is surprisingly different. Onshape's AI Advisor helps users learn the tool. Creo's helps when something breaks. Different products, different audiences, different AI strategies from the same parent company. Onshape attracts teams migrating from SolidWorks. Creo attracts enterprise engineers who've been using the software for twenty years and need help when something breaks in a model older than some of their colleagues.</p>
<p>For a broader view of how all these AI assistants compare, the <a href="/posts/ai-cad-copilot">copilot overview</a> covers the field, the <a href="/posts/ai-in-cad-software">AI in CAD software</a> post maps the full picture, and the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> explains how geometry generation differs from the assistant pattern.</p>
<h2>The verdict</h2>
<p>Creo AI Assistant is the least flashy AI feature in CAD right now, and it might be the most honest. It does one thing: help you understand error messages faster. It does that one thing reasonably well, when the error is in its supported set. It doesn't pretend to be a design partner, a brainstorming companion, or a geometry engine. It's a support-article finder that saves you from opening a browser and wading through PTC's portal, which is a genuine quality-of-life improvement for anyone who's been on the receiving end of a Creo error dialog.</p>
<p>If you're a Creo user, the AI Assistant is worth enabling in the beta. It won't change your workflow. It will occasionally save you a trip to the support portal. And for a beta with a narrow scope, that's a reasonable start. PTC is betting that slow and reliable beats fast and flashy. They might be right. It wouldn't be the first time the boring approach aged better than the exciting one.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fusion 360 AI features: what&apos;s shipping and what&apos;s vapor</title>
      <link>https://blog.texocad.ai/posts/fusion-360-ai-features</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/fusion-360-ai-features</guid>
      <pubDate>Sat, 21 Feb 2026 00:00:00 GMT</pubDate>
      <description>Autodesk has announced a lot of AI features for Fusion 360. Some of them are real. Some of them are slide deck material. Here&apos;s the honest status.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>fusion-360</category>
      <category>autodesk</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Fusion 360 AI features in 2026 include the Autodesk Assistant (shipping), generative design (shipping), and announced-but-not-yet-available features like Neural CAD (text-to-geometry) and Text to Command (natural language operations). Most AI features are still in development or limited preview.</p>
<p>I watched Autodesk's AU 2025 keynote from my home office while eating leftover pizza, which felt appropriate because the presentation was about half substance and half reheated promises. Neural CAD. Text to Command. An AI assistant that would become your "thought partner." The audience applauded. I wrote down the feature names and put question marks next to most of them. Six months later, sitting in front of the actual software, I can report that some of those question marks have turned into checkmarks and some have turned into longer question marks.</p>
<p>Fusion 360 in 2026 has more AI features than it did a year ago. That's undeniable. Whether the features matter to your actual work depends entirely on which ones you use and how high your expectations are set. Here's the honest inventory.</p>
<h2>Autodesk Assistant: the one that actually shipped</h2>
<p>The Autodesk Assistant is the most visible AI addition to Fusion 360, and it's the one I've spent the most time with. It lives in a panel on the right side of the screen, accessible from a button in the upper-right corner. You type something in natural language, and it tries to do something useful.</p>
<p>As of March 2026, it's in Tech Preview. That label matters. It means the feature is live, you can use it, but Autodesk is telling you up front that it will occasionally behave like an intern who's enthusiastic but still learning the org chart.</p>
<p>What the Assistant actually does right now: it can create basic geometry. Extrude, fillet, chamfer, hole, shell, split. It can create sketches with dimensions. It can apply materials and appearances. It can generate circular and rectangular patterns. It can answer questions about your model, things like volume, surface area, and geometry identification. Since the March 2026 update, it also handles some CAM tasks: creating manufacturing setups, renaming operations, selecting tools. That's real functionality.</p>
<p>What it feels like in practice: faster than clicking through menus for simple stuff, and genuinely confusing for anything that requires context. I told it to "extrude this face by 10mm" and it worked perfectly. I told it to "add a pocket to the top of the bracket, 20mm square, centered, 3mm deep" and it picked the wrong face, extruded in the wrong direction, and produced something that looked like a Cubist interpretation of my request. That cycle of prompt, fail, rephrase, succeed is the actual workflow right now.</p>
<p>Autodesk recommends a three-part prompt formula: state your goal, identify the target, and list constraints. That level of specificity helps, and it tells you something about the current state of AI in CAD: the tool is most useful when you already know exactly what you want and you're just looking for a faster way to ask for it.</p>
<p>For a deeper look at the command-execution side specifically, the <a href="/posts/fusion-360-text-to-command">Text to Command</a> post covers that in more detail.</p>
<h2>Neural CAD: announced, demoed, not available</h2>
<p>This is the one that got the biggest applause at AU 2025. Neural CAD is Autodesk's term for a new generative AI foundation model trained specifically to reason about CAD geometry. The idea is that you type a description, something like "create a contemporary air fryer," and the AI generates native, editable BREP geometry directly inside Fusion's canvas. Not mesh. Not a screenshot. Real solid geometry with faces and edges you can select and modify.</p>
<p>The demo at AU looked impressive, the way all demos look impressive when curated by people whose job is to make demos look impressive. Mike Haley from Autodesk Research described it as "completely reimagining the traditional software engines that create CAD geometry."</p>
<p>As of April 2026, Neural CAD is not publicly available in Fusion 360. You can't use it. There's no button for it. The roadmap says "neural CAD experiences" are coming, with language about turning "natural language prompts into editable design geometry," but no shipping date has been confirmed. The gap between the announcement and the reality is currently about six months wide and showing no signs of closing quickly.</p>
<p>I have a separate post on <a href="/posts/fusion-360-neural-cad">what Neural CAD is and what it means</a> if you want the full breakdown. The short version: the technology is genuinely interesting, the ambition is real, and the shipping status is "soon" in the same way that "soon" has meant "eventually" in software for the last forty years.</p>
<h2>Generative design: shipping, but a different thing</h2>
<p>Generative design is the AI feature Fusion 360 has had for a while, and it's the one most people confuse with text-to-CAD even though they're solving completely different problems.</p>
<p>Generative design takes a set of constraints, loads, materials, keep-out zones, manufacturing methods, and generates organic-looking shapes that satisfy all of them. The output is typically something that looks like a bone or a reef structure, optimized for weight and stiffness but not for looking like a normal bracket. It's topology optimization dressed up in a more approachable interface.</p>
<p>It works. I've used it for lightweighting parts where the geometry doesn't need to be conventional and the manufacturing method can handle organic shapes. The results are genuinely useful when the constraints are well-defined.</p>
<p>It's available as the Generative Design Extension, an add-on you pay for on top of your Fusion subscription. The pricing has changed enough times that I'll just say "check the current Autodesk page" rather than write a number that'll be wrong by next Tuesday.</p>
<p>The reason I mention it here is that people searching for "Fusion 360 AI features" often have generative design in mind, and it's the one AI feature in Fusion that has genuine production history behind it. Companies have shipped parts designed with it. It's real in a way that the newer AI features aren't yet.</p>
<p>That said, generative design is not text-to-CAD. You're not typing a description and getting a bracket back. You're defining an engineering problem with specific inputs and getting a shape that solves it. The <a href="/posts/text-to-cad-vs-generative-design">text-to-CAD vs generative design</a> comparison explains the difference in more detail, but the quickest way I can put it: text-to-CAD is "build me what I described," generative design is "show me what the physics wants."</p>
<h2>Text to Command: the useful middle ground</h2>
<p>Text to Command is the feature that gets the least attention but might end up being the most practical. Instead of generating geometry from scratch, it lets you operate on existing geometry using natural language. "Extrude this face by 1 inch." "Add a 0.5mm chamfer to all edges." "Split this body with my construction plane."</p>
<p>It's essentially a natural language interface layered on top of Fusion's existing command system. You describe what you want to do, and the AI translates that into the appropriate Fusion command and executes it. It's part of the Autodesk Assistant, which means it's available in Tech Preview right now.</p>
<p>I've been using it for the kind of operations where I know what I want but can't remember which menu it's buried in. Fusion has a lot of commands. Nobody remembers where all of them live. Typing "revolve this sketch around this axis" is faster than hunting through the Create menu when you use revolve twice a year.</p>
<p>The limitations are the same as the Assistant overall: it works well for simple, well-specified operations and struggles with ambiguity or multi-step workflows. You can save multi-step sequences as reusable prompts, which is a nice touch, but the execution reliability drops as the complexity goes up.</p>
<p>I wrote a full assessment of <a href="/posts/fusion-360-text-to-command">Text to Command</a> as its own post because it deserves its own evaluation separate from the bigger AI hype.</p>
<h2>What's on the roadmap but not shipping</h2>
<p>The <a href="https://www.autodesk.com/products/fusion-360/blog/fusion-roadmap-2026/">Fusion Roadmap 2026</a> mentions several AI-related items:</p>
<p>Neural CAD experiences for text-to-geometry. Status: coming, no date.</p>
<p>Expanded Assistant capabilities across more workspaces. Status: partially shipping, more coming.</p>
<p>AI-powered renderings via Microsoft Azure OpenAI. Status: announced, not widely available.</p>
<p>AutoConstrain for drawings. Status: announced, unclear timeline.</p>
<h2>How this compares to the competition</h2>
<p>The broader <a href="/posts/ai-in-cad-software">AI in CAD software</a> landscape is moving quickly, and Fusion 360 is in a peculiar position. It has more AI features than most of its competitors, but fewer fully-shipped ones than the marketing suggests.</p>
<p>SolidWorks 2026 shipped AI companions (AURA and LEO) in beta with its FD01 release in February 2026. Those are at a similar maturity level to the Autodesk Assistant.</p>
<p>Zoo.dev and other dedicated <a href="/posts/text-to-cad-guide">text-to-CAD tools</a> already let you generate BREP geometry from text prompts today. They're specialized tools, not integrated into a full CAD platform, but they're shipping something that Fusion's Neural CAD has only demoed.</p>
<p>The advantage Fusion has is integration. If Neural CAD ships and works well inside the Fusion environment, with access to the timeline and the parametric engine, that's a fundamentally different proposition from importing STEP files. The question is when, and how much of the demo translates to reality.</p>
<h2>The honest scorecard</h2>
<p>Here's where each Fusion 360 AI feature stands as of April 2026:</p>
<p>Autodesk Assistant (natural language interface): shipping in Tech Preview. Works for simple operations. Useful but inconsistent.</p>
<p>Text to Command (operate on existing geometry via text): shipping as part of the Assistant. The most practically useful AI feature in Fusion right now.</p>
<p>Generative design (topology optimization): shipping as a paid extension. Proven, mature, and genuinely useful for the right problems.</p>
<p>Neural CAD (text-to-geometry generation): announced, demoed, not available. The most exciting promise and the biggest gap between announcement and shipping.</p>
<p>AI-powered rendering: announced, limited availability. Nice for presentations, not relevant to engineering work.</p>
<p>AutoConstrain for drawings: announced, unclear timeline.</p>
<p>Autodesk is doing real AI work. The research behind Neural CAD is genuinely interesting, and the Autodesk Assistant is a real product you can use today. But there's a gap between what Autodesk talks about and what Autodesk ships, and if you're making decisions based on the keynote rather than the current feature set, you'll be disappointed.</p>
<p>I keep Fusion 360 as my primary CAD tool. I use the Assistant when it saves me a menu hunt. I don't plan my workflows around AI features that haven't shipped yet. When Neural CAD becomes something I can open and use on a Monday morning, I'll test it with a real part, a set of calipers, and low expectations. Until then, it's a slide deck.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Autodesk Assistant AI: what it can and can&apos;t do</title>
      <link>https://blog.texocad.ai/posts/autodesk-assistant-ai</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/autodesk-assistant-ai</guid>
      <pubDate>Fri, 20 Feb 2026 00:00:00 GMT</pubDate>
      <description>Autodesk Assistant is the AI chat feature built into Fusion 360 and other Autodesk products. It can answer questions and find commands. It cannot model for you.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>autodesk</category>
      <category>assistant</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Autodesk Assistant is an AI-powered chat interface available in Fusion 360, AutoCAD, and other Autodesk products. It can answer how-to questions, locate commands, explain features, and suggest workflows. It cannot generate geometry, edit models, or perform CAD operations directly.</p>
<p>I was trying to remember where Autodesk hid the "Combine" command in the March 2026 update. It used to be under Modify. Then it moved, or maybe I moved, or maybe I was confusing it with something in the manufacturing workspace. I'd already clicked through three menus and opened a help article that described the Fusion 360 interface from what looked like two releases ago. So I opened Autodesk Assistant, typed "where is Combine," and got a direct answer in about two seconds. Panel on the right, no tab switching, no new browser window. I thought: this is the thing. This is what this tool is good at.</p>
<p>Then, feeling optimistic, I asked it to combine two bodies in my model. It explained how combining works. It did not combine the two bodies. That's Autodesk Assistant in one interaction. A genuinely good search tool that sometimes makes you forget it's not an actual coworker.</p>
<h2>What it is</h2>
<p>Autodesk Assistant is the AI-powered chat feature embedded across Autodesk products. In Fusion 360, you access it from a button in the upper-right corner. A docked panel opens on the right side, text input at the bottom, conversation above. It looks clean and stays out of the way. The ViewCube, timeline, and toolbar remain untouched, which sounds like a low bar but some vendors manage to fail it.</p>
<p>The Assistant is available in Fusion 360 as of the March 2026 product update, with some features labeled as Tech Preview. It also appears in AutoCAD and other Autodesk products, though the Fusion implementation is the most developed. The underlying system is context-aware: it adapts depending on whether you're on the Home tab managing projects or in the Design workspace building a part.</p>
<p>It's free to use if you have an Autodesk subscription, which means it's really just part of the subscription you're already paying for. Whether that counts as "free" is between you and your invoice.</p>
<h2>What it actually does well</h2>
<p>Finding commands. This is the killer feature and I don't say that lightly. Fusion 360 has a deep menu structure that changes with each workspace, and the command search bar (the "S" shortcut) only works if you know what the command is called. Autodesk Assistant lets you describe what you want to do in plain language and it tells you where to find it, sometimes with enough context that you learn something new about the workflow. I asked "how do I mirror a body across a plane" and got a clear, step-by-step answer with the correct menu location. I've used the software for years and I still find this useful for features I touch once every few months.</p>
<p>Answering how-to questions. "How do I set up a static stress simulation?" "What's the difference between New Body and Join in an extrude?" "How do I export a 2D drawing as a PDF?" The quality of the answers is consistently decent. Not great, not wrong, but decent. The AI pulls from Autodesk's documentation and training materials, and because Autodesk has an enormous library of help content and tutorials, the source material is usually solid. It beats Googling because it doesn't return six YouTube thumbnails, a forum thread from 2019, and a result from a competitor's documentation.</p>
<p>Project management through chat. This one surprised me. You can ask Autodesk Assistant to create projects, organize folders, invite team members, and manage permissions. It confirms changes before applying them. For teams that do a lot of project setup and file management, talking to the AI is genuinely faster than clicking through Fusion's data management panel, which has always felt like it belongs in a different application from the one you're trying to use.</p>
<h2>The text-to-command experiment</h2>
<p>Here's where things get interesting and uncertain in equal measure. As of the March 2026 Tech Preview, Autodesk Assistant can execute <a href="/posts/fusion-360-ai-features">Fusion 360 AI features</a> directly from natural language. You can type "extrude this profile 10mm" and it will attempt to run the extrude operation. Supported commands include Extrude, Fillet, Chamfer, Hole, Shell, Split, Revolve, and circular and rectangular patterns.</p>
<p>I tried it on a few simple operations. "Add a 3mm fillet to the selected edges" worked. The fillet appeared in the timeline, parametric history intact. "Extrude the top face 5mm" worked after I selected the face first. "Create a rectangular pattern of this hole, 4 instances, 20mm spacing" took two attempts because my first prompt was ambiguous about which direction.</p>
<p>The operations preserve parametric history, which is the important part. You're not getting dumped geometry. You're getting timeline features you can edit, suppress, roll back, and modify the same way you would if you'd clicked through the menus. That's a meaningful distinction from <a href="/posts/text-to-cad-guide">text-to-CAD tools</a> that generate standalone STEP files.</p>
<p>The catches: it only works for the supported commands. Anything outside that list and the Assistant falls back to explaining how to do it manually. The selection context matters, and sometimes the AI misinterprets which face or edge you mean. And it's a Tech Preview, which means Autodesk is signaling that reliability isn't guaranteed. I wouldn't build a workflow around it for production work, but for learning the tool or prototyping quickly, it shaves real seconds off repetitive operations.</p>
<p>Autodesk has also announced something they're calling "Neural CAD," where the Assistant can generate native, editable 3D geometry from text prompts, things like "create a contemporary air fryer." This is closer to actual text-to-CAD functionality baked into a traditional tool. I haven't tested it extensively enough to judge the output quality, but the ambition is clear: Autodesk wants the Assistant to move beyond answering questions toward actually doing design work.</p>
<h2>What it can't do</h2>
<p>It can't diagnose problems with your specific model. Ask "why is this fillet failing?" and you'll get a generic explanation of fillet failure modes. You won't get "face 47 has a tangent discontinuity at the G1 junction near the imported edge." The Assistant doesn't analyze your geometry. It talks about geometry in general.</p>
<p>It can't make design decisions. It won't tell you if your wall is too thin for injection molding, if your draft angle is insufficient, or if the part is going to warp on the printer. Those are judgment calls that require understanding the manufacturing process, and the Assistant doesn't have that kind of reasoning.</p>
<p>It can't maintain deep context. Within a single conversation it does a reasonable job of remembering what you've discussed, but it doesn't accumulate knowledge about your project over time. It doesn't know that you've been working on this housing for three weeks, that the client changed the requirements twice, or that the mounting boss was supposed to align with a mating part in a different file. Every conversation starts approximately fresh.</p>
<p>It can't replace knowing the software. This sounds obvious, but it matters. The Assistant is good at helping you find features and explaining workflows. It's not good at compensating for a lack of understanding. If you don't know what a loft is, the Assistant can explain it. But if you don't understand why your loft is producing a self-intersecting surface, the Assistant will give you a textbook answer that's about as useful as reading the help page you already tried.</p>
<h2>How it compares to the others</h2>
<p>Against <a href="/posts/ai-cad-copilot">SolidWorks AURA and LEO</a>, Autodesk Assistant is simpler in scope but further along in direct model interaction. SolidWorks LEO can detect broken references and suggest fixes, which is a form of model awareness Autodesk hasn't shipped yet. But Autodesk's text-to-command execution is ahead of anything SolidWorks is offering in production.</p>
<p>Against Onshape AI Advisor, the comparison is cleaner. Both are good documentation assistants. Onshape's is more focused and stays in its lane. Autodesk's is more ambitious, especially with the Neural CAD and text-to-command features, but ambition and reliability aren't the same thing.</p>
<p>Against <a href="/posts/creo-ai-assistant">Creo AI Assistant</a>, the scope is wildly different. PTC's offering is currently limited to error troubleshooting from the support knowledge base. Autodesk is trying to turn the Assistant into a design collaborator. Whether Autodesk's broader approach delivers more practical value than PTC's narrow focus depends entirely on how well the execution holds up.</p>
<p>For a broader view of how all these <a href="/posts/ai-in-cad-software">AI in CAD software</a> assistants compare, the <a href="/posts/ai-cad-copilot">copilot overview</a> covers the full field.</p>
<h2>The honest take</h2>
<p>Autodesk Assistant is the best documentation search tool Fusion 360 has ever had. For finding commands, understanding features, and navigating Autodesk's enormous but inconsistently organized help library, it saves real time. I use it weekly, which is more than I can say for most new features Autodesk ships.</p>
<p>The text-to-command features are promising but early. They work for simple, well-defined operations. They break when the situation gets ambiguous. They preserve parametric history, which is the right architectural choice. And they're clearly the foundation for something bigger, whether that's Neural CAD or a more model-aware assistant in future releases.</p>
<p>But right now, today, Autodesk Assistant is a better way to search the help docs and a sometimes-useful way to avoid clicking through menus. That's it. It's a convenience feature wearing the costume of a revolution. It solves real problems, they're just smaller problems than the demo reel implies. I like it. I use it. I don't confuse what it is with what Autodesk wants it to become.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AI CAD copilots: the assist pattern every vendor is chasing</title>
      <link>https://blog.texocad.ai/posts/ai-cad-copilot</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-cad-copilot</guid>
      <pubDate>Thu, 19 Feb 2026 00:00:00 GMT</pubDate>
      <description>Every major CAD vendor now has an AI copilot, assistant, or companion. The pattern is the same everywhere: an AI that watches you work and tries to help. The results vary wildly.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>copilot</category>
      <category>assistant</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI CAD copilots are vendor-integrated assistants that provide real-time guidance, feature suggestions, and natural language interaction within CAD software. Current examples include SolidWorks AURA/LEO, Onshape AI Advisor, Autodesk Assistant, Siemens NX AI Chat, and Solid Edge Design Copilot. They assist workflows but don&apos;t generate geometry from scratch.</p>
<p>Last Tuesday I was halfway through a surface patch in Fusion 360, the kind of repair job that happens when you import a STEP file from a supplier and discover the geometry has a gap the width of a human hair that the analysis tool insists is a canyon. The model was fine visually. The mesh preview was fine. But the moment I tried to shell it, the whole thing turned red and Fusion offered me an error message so vague it could have been a fortune cookie. So I opened Autodesk Assistant, typed "why is shell failing on this body," and waited.</p>
<p>The answer I got was a paragraph about shell operations in general. What they do, how they work, the kinds of geometry problems that cause them to fail. Perfectly accurate. Completely useless for my specific problem. I already knew what shell does. I needed the AI to look at my model, identify the bad face, and tell me where the issue was. It couldn't do that. None of them can, not yet. And that gap between what these copilots promise in the keynote and what they deliver at the desk is the story of AI CAD copilots in 2026.</p>
<h2>The pattern everyone copied</h2>
<p>Every major CAD vendor has now shipped some version of the same idea: an AI assistant that lives inside the software, answers questions in natural language, and tries to help you work faster. The implementations vary, but the pattern is identical. A chat panel. A text input box. A system that pulls from documentation, training materials, and sometimes your model context. You ask a question, the AI answers, and occasionally it can trigger a command.</p>
<p>SolidWorks has <a href="/posts/solidworks-aura-ai">AURA and LEO</a>, introduced in the 2026 release as "virtual companions." Autodesk has <a href="/posts/autodesk-assistant-ai">Autodesk Assistant</a>, available across <a href="/posts/fusion-360-ai-features">Fusion 360</a> and other products. Onshape has <a href="/posts/onshape-ai-advisor">AI Advisor</a>. Siemens has Design Copilot in both NX and Solid Edge. PTC has the <a href="/posts/creo-ai-assistant">Creo AI Assistant</a>, currently in beta. Even the second-tier and startup CAD tools are bolting on chat panels. The AI copilot has become table stakes, the feature no vendor wants to ship without, regardless of whether they've figured out what it's supposed to do.</p>
<p>The naming is all over the place, which tells you something. Assistant, Advisor, Companion, Copilot. Every vendor picked a word that implies the AI is helping, not doing. That's an accurate description of the current state, even if the marketing occasionally forgets to mention it.</p>
<h2>What they can actually do</h2>
<p>Strip away the keynote demos and the launch blog posts, and the current generation of <a href="/posts/ai-in-cad-software">AI in CAD software</a> copilots does roughly three things.</p>
<p>First, they answer how-to questions. "How do I create a loft between these two profiles?" "What's the difference between a shell and a thicken?" "How do I set up a static stress simulation?" The AI pulls from the vendor's documentation and training materials, packages the answer into a conversational format, and gives you links to the relevant help article. This is basically a better search engine for the help docs. And honestly, for a tool like SolidWorks or NX where the help documentation is vast and inconsistently organized, having an AI that can actually find the right article on the first try is genuinely useful. I've spent more time than I'll admit clicking through help menus that feel like they were organized by someone who lost interest halfway through.</p>
<p>Second, they can locate and sometimes launch commands. Autodesk Assistant can now execute Fusion commands if you describe what you want: "extrude this profile 10mm" or "add a 2mm fillet to the selected edges." SolidWorks LEO offers predictive command access based on what you're currently doing. Solid Edge's Design Copilot answers questions and provides guidance in natural language. This is the feature that feels closest to an actual productivity gain, because finding commands in a menu-heavy CAD tool is a real time sink, especially if you use the software intermittently and can't remember where Siemens hid the draft angle option this release.</p>
<p>Third, they provide design guidance. This is the vaguest category and the one vendors talk about most. AURA in SolidWorks is positioned as a "brainstorming partner" that connects you with enterprise knowledge and web resources. Onshape AI Advisor suggests best practices based on your conversation context. Creo AI Assistant helps with error troubleshooting by pulling from PTC's support knowledge base. The quality ranges from surprisingly helpful to aggressively generic, depending on how specific your question is and how well the vendor's knowledge base covers your problem.</p>
<h2>What they can't do</h2>
<p>Here's the list that matters.</p>
<p>None of them can look at your model and understand what's wrong with it geometrically. They can tell you what kinds of problems cause a fillet to fail. They can't tell you which edge in your specific model is causing the failure and why.</p>
<p>None of them generate real geometry from scratch in the way <a href="/posts/text-to-cad-guide">text-to-CAD tools</a> do. Autodesk has started moving in this direction with what they call "Neural CAD" in Assistant, where you can prompt for geometry creation, and it's available as a tech preview in Fusion as of March 2026. But the others are firmly in the "help you use the software" category rather than the "design things for you" category.</p>
<p>None of them reason about manufacturing constraints. You can ask how to set up a CAM toolpath, and the AI might walk you through the menus. You can't say "will this part be expensive to machine?" and get a useful answer based on your actual geometry.</p>
<p>None of them maintain real context across a session in a meaningful way. The context window is shallow and model-aware only in the most superficial sense. The AI knows you're in a Part environment or an Assembly environment. It doesn't know you've been fighting with the same tangent edge for twenty minutes.</p>
<p>And none of them reliably save time on the tasks where you actually need help. The easy questions, the ones the copilot answers well, are the ones you could have Googled in thirty seconds. The hard questions are precisely the ones the AI fumbles. That's the fundamental problem with the current copilot pattern: it's most capable when you need it least.</p>
<h2>The vendor strategies diverge</h2>
<p>Despite the surface similarity, the vendors are making different bets.</p>
<p>Dassault Systèmes is going big with SolidWorks. AURA and LEO split the concept into two personas: AURA for knowledge exploration and creative guidance, LEO for practical design assistance like fixing broken references, automating assembly structures, and setting up simulations. LEO can detect problems and suggest fixes, which is closer to model-aware assistance than what most competitors offer. Both are available in the SolidWorks Labs beta as of the 2026 release.</p>
<p>Autodesk is pushing <a href="/posts/autodesk-assistant-ai">Autodesk Assistant</a> the furthest toward actual geometry interaction. The text-to-command modeling feature, where you describe an operation in natural language and Assistant executes it in Fusion while preserving parametric history, is the most ambitious thing any vendor is shipping right now. It supports extrudes, fillets, chamfers, holes, shells, patterns, and revolves. It's also a tech preview, which in Autodesk language means "we think this works but please don't rely on it for Thursday's deadline."</p>
<p>PTC is taking the conservative route with <a href="/posts/creo-ai-assistant">Creo AI Assistant</a>. The current beta focuses on error troubleshooting, pulling from PTC's support knowledge base to help you understand what went wrong. That's a narrower scope than what Dassault or Autodesk are attempting, but PTC's customer base skews toward large enterprises with regulated design processes, and those customers are more interested in reliability than flash. PTC's broader AI ambitions include natural language commands and predictive design recommendations in Creo 2026, but the current shipping product is modest.</p>
<p>Siemens splits its copilot across NX and Solid Edge. Design Copilot NX launched in mid-2025 with natural language query support and best-practice guidance. Solid Edge 2026 added its own Design Copilot built on RAG technology. Siemens also shipped AI features that aren't conversational: Magnetic Snap Assembly in Solid Edge uses AI to detect and apply mates automatically, and Select Similar Faces in NX uses AI to identify matching geometry across complex parts. These non-chatbot AI features are, in my experience, more immediately useful than the chat interfaces.</p>
<p><a href="/posts/onshape-ai-advisor">Onshape AI Advisor</a> is the cleanest implementation from a usability standpoint. It's embedded in the workspace, pulls from Onshape's documentation and tutorial library, and focuses on helping users learn the tool. For people migrating from SolidWorks to Onshape, which is a common enough path these days, the AI Advisor is a genuinely useful guide. It doesn't try to be more than a smart documentation assistant, and because of that, it rarely disappoints.</p>
<h2>The real competition isn't each other</h2>
<p>The thing about AI CAD copilots is that they're all competing with the same baseline: searching the help documentation yourself. And the help documentation, for most CAD tools, is actually pretty good. It's just badly organized and hard to search. A copilot that makes the existing docs more accessible is nice. It's not a revolution. It's a better index.</p>
<p>The tools that will actually change how people work are the ones that interact with the model, not just the menus. Autodesk is closest to this with the text-to-command features in Assistant. SolidWorks LEO's problem detection is another step in this direction. But until a copilot can look at my specific geometry, understand the design intent, and suggest a fix that accounts for the manufacturing process I'm targeting, the gap between the demo and the desk will remain.</p>
<p>For now, I use these copilots the way I use a pocket dictionary. Helpful when I've forgotten a word. Not helpful when I'm trying to write a novel. The real work still happens in the sketch, the feature tree, and the tolerance stack. The AI watches from the side panel, ready to explain things I already mostly know, while the problems I actually need solved sit there in red, waiting for a human.</p>
<p>Every vendor has an AI copilot now. Very few of them have figured out what it's for.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Zoo text-to-CAD API tutorial: from curl to production</title>
      <link>https://blog.texocad.ai/posts/zoo-text-to-cad-api-tutorial</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/zoo-text-to-cad-api-tutorial</guid>
      <pubDate>Wed, 18 Feb 2026 00:00:00 GMT</pubDate>
      <description>A practical tutorial for the Zoo.dev text-to-CAD API, starting with a curl command and ending with a Python script that generates STEP files in a loop.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>zoo</category>
      <category>api</category>
      <category>tutorial</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> The Zoo.dev API accepts POST requests with text prompts and returns CAD geometry. Start with curl for testing, then use the kittycad Python SDK for automation. The API supports STEP, glTF, OBJ, and STL output. Authentication requires an API token from zoo.dev.</p>
<p>I wanted to generate a STEP file from a text prompt without opening a browser. Just a terminal, a curl command, and a file on disk at the end. The kind of thing that sounds simple and is simple, once you've figured out the three or four things the documentation assumes you already know. This tutorial is the version I wish I'd had when I started: beginning with a single curl command that proves the API works, and ending with a Python script that generates STEP files from a list of part descriptions in a loop.</p>
<p>Everything here uses the <a href="https://zoo.dev">Zoo.dev</a> API, which is currently the only production text-to-CAD service that returns real B-Rep geometry. If you want background on the API landscape and how Zoo fits into it, the <a href="/posts/text-to-cad-api">text-to-CAD API</a> overview covers that. This post is pure hands-on.</p>
<h2>Get an API token</h2>
<p>Before anything else, you need a Zoo account and an API token.</p>
<ol>
<li>Create an account at <a href="https://zoo.dev">zoo.dev</a> if you don't have one.</li>
<li>Go to <a href="https://zoo.dev/account?tab=api_tokens">account settings</a> and generate an API token.</li>
<li>Save it somewhere safe. You'll use it for every request.</li>
</ol>
<p>Set it as an environment variable so the examples below work as written:</p>
<pre><code class="language-bash">export ZOO_API_TOKEN=your-token-here
</code></pre>
<p>Zoo gives you $10 of free API usage per month, roughly enough for 15 to 50 generations depending on complexity. Failed calls aren't charged.</p>
<h2>Step 1: prove it works with curl</h2>
<p>The fastest way to test the API is a single curl command. This submits a text prompt and gets back a job object with an ID:</p>
<pre><code class="language-bash">curl -s -X POST \
  "https://api.zoo.dev/ai/text-to-cad/step" \
  -H "Authorization: Bearer $ZOO_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "rectangular plate, 80mm by 50mm by 5mm, four 4.2mm holes on a 60mm by 30mm bolt pattern centered on the plate"}' \
  | python3 -m json.tool
</code></pre>
<p>You'll get back a JSON response with an <code>id</code> field (a UUID), a <code>status</code> (probably <code>queued</code>), and your prompt echoed back. The <code>id</code> is what you need for the next step.</p>
<p>The generation happens asynchronously. This first request just starts the job. You don't get the geometry back immediately, which confused me the first time I tried it because I was expecting a file download and got a JSON status blob instead.</p>
<h2>Step 2: poll for completion</h2>
<p>Copy the <code>id</code> from the response and check the status:</p>
<pre><code class="language-bash">curl -s \
  "https://api.zoo.dev/user/text-to-cad/YOUR-UUID-HERE" \
  -H "Authorization: Bearer $ZOO_API_TOKEN" \
  | python3 -m json.tool | grep status
</code></pre>
<p>Keep running this until the status changes from <code>queued</code> or <code>in_progress</code> to <code>completed</code> (or <code>failed</code>). Typical wait is 15 to 90 seconds depending on what you asked for. A simple plate like the one above usually comes back in under 30 seconds.</p>
<p>Once the status is <code>completed</code>, the <code>outputs</code> field in the response contains your generated files. The keys are filenames (like <code>output.step</code>, <code>output.gltf</code>) and the values are base64-encoded file content.</p>
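<p>Re-running the status check by hand gets old fast. Here's a minimal polling helper I use; the <code>fetch_status</code> callable and the <code>poll_until_done</code> name are mine, not part of any SDK. You supply a function that performs the GET above (via curl, <code>requests</code>, or the SDK) and returns the status string:</p>
<pre><code class="language-python">import time

def poll_until_done(fetch_status, interval=5.0, timeout=300.0):
    """Call fetch_status() every `interval` seconds until it returns
    'completed' or 'failed'; raise TimeoutError after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status()
        if status in ("completed", "failed"):
            return status
        if time.monotonic() + interval &gt; deadline:
            raise TimeoutError(f"job still '{status}' after {timeout}s")
        time.sleep(interval)
</code></pre>
<p>Wrap the Step 2 request in <code>fetch_status</code> and this replaces the manual loop. The timeout matters: on a bad day a job can sit in <code>queued</code> longer than you'd like, and you want your script to give up rather than hang.</p>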
<h2>Step 3: save the STEP file</h2>
<p>You can extract and decode the STEP file from the JSON response with a bit of shell piping:</p>
<pre><code class="language-bash">curl -s \
  "https://api.zoo.dev/user/text-to-cad/YOUR-UUID-HERE" \
  -H "Authorization: Bearer $ZOO_API_TOKEN" \
  | python3 -c "
import sys, json, base64
data = json.load(sys.stdin)
for name, content in data['outputs'].items():
    if name.endswith('.step'):
        with open('plate.step', 'w') as f:
            f.write(base64.b64decode(content).decode('utf-8'))
        print('Saved plate.step')
"
</code></pre>
<p>Open <code>plate.step</code> in Fusion 360, SolidWorks, or any STEP-compatible tool. You should see a rectangular plate with four holes. The geometry is real B-Rep: selectable faces, measurable edges, the kind of solid you can fillet and chamfer and argue with. Whether the dimensions are exactly what you asked for is a separate conversation (they'll be close but not perfect, see the <a href="/posts/is-text-to-cad-accurate">accuracy post</a>).</p>
<p>That's the entire API workflow in three curl commands. Submit, poll, save. Everything else is automation, error handling, and making the prompts better.</p>
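<p>One cheap sanity check worth running before you bother opening the file: every valid STEP (ISO 10303-21) file starts with the <code>ISO-10303-21;</code> marker, so a quick header test catches truncated downloads or an error message that got base64-decoded into a <code>.step</code> file. The helper name is mine; it's a sketch, not validation:</p>
<pre><code class="language-python">def looks_like_step(path):
    """True if the file begins with the STEP Part 21 header marker.
    Checks the download isn't truncated or error text; says nothing
    about whether the geometry inside is any good."""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        return f.read(32).lstrip().startswith("ISO-10303-21")
</code></pre>
<p>Call <code>looks_like_step("plate.step")</code> after saving. It won't tell you the holes are in the right place, but it will tell you that you saved a STEP file and not a JSON error blob.</p>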
<h2>Step 4: move to Python</h2>
<p>Curl is fine for testing. For anything repeatable, Python is where you want to be. Install the SDK:</p>
<pre><code class="language-bash">pip install kittycad
</code></pre>
<p>Here's a self-contained script that does everything the curl commands did, but in a form you can actually build on:</p>
<pre><code class="language-python">import time
import base64
from kittycad.client import ClientFromEnv
from kittycad.api.ml import create_text_to_cad, get_text_to_cad_part_for_user
from kittycad.models import TextToCadCreateBody, ApiCallStatus

client = ClientFromEnv()

prompt = (
    "rectangular plate, 80mm by 50mm by 5mm, "
    "four 4.2mm holes on a 60mm by 30mm bolt pattern "
    "centered on the plate"
)

print(f"Submitting: {prompt}")
result = create_text_to_cad.sync(
    client=client,
    output_format="step",
    body=TextToCadCreateBody(prompt=prompt),
)
print(f"Job ID: {result.id}")

while True:
    response = get_text_to_cad_part_for_user.sync(
        client=client,
        id=result.id,
    )
    print(f"  Status: {response.status}")
    if response.status in (ApiCallStatus.COMPLETED, ApiCallStatus.FAILED):
        break
    time.sleep(5)

if response.status == ApiCallStatus.COMPLETED:
    for name, content in response.outputs.items():
        if name.endswith(".step"):
            with open("plate.step", "w") as f:
                f.write(base64.b64decode(content).decode("utf-8"))
            print("Saved plate.step")
else:
    print(f"Failed: {response.error}")
</code></pre>
<p>Run it. Watch the status updates tick by. Get a STEP file. Open it. Measure the holes. Feel the mild satisfaction of having automated the first step of a CAD workflow from a terminal. Then feel the mild annoyance of discovering that one of the four holes is 0.6mm off center, because that's just how text-to-CAD works right now.</p>
<h2>Step 5: generate multiple parts</h2>
<p>This is where things get interesting and where the API starts earning its keep. Instead of generating one part at a time, let's read from a list and generate a batch:</p>
<pre><code class="language-python">import time
import base64
import os
from kittycad.client import ClientFromEnv
from kittycad.api.ml import create_text_to_cad, get_text_to_cad_part_for_user
from kittycad.models import TextToCadCreateBody, ApiCallStatus

client = ClientFromEnv()

parts = [
    {
        "name": "mounting_plate",
        "prompt": "rectangular plate, 80mm by 50mm by 5mm, four 4.2mm holes on a 60mm by 30mm bolt pattern centered on the plate",
    },
    {
        "name": "standoff",
        "prompt": "cylindrical standoff, 20mm outer diameter, 10mm inner bore, 15mm tall",
    },
    {
        "name": "l_bracket",
        "prompt": "L-bracket, 3mm thick, 40mm equal legs, two 5mm holes per leg spaced 25mm apart, 10mm from edges, 2mm fillet at bend",
    },
    {
        "name": "cable_clip",
        "prompt": "C-shaped cable clip for 8mm cable, 2mm wall thickness, 15mm wide, with a single M3 mounting hole on the flat base",
    },
]

os.makedirs("output", exist_ok=True)

def generate_and_save(part):
    print(f"\nGenerating {part['name']}...")
    try:
        result = create_text_to_cad.sync(
            client=client,
            output_format="step",
            body=TextToCadCreateBody(prompt=part["prompt"]),
        )
    except Exception as e:
        print(f"  Submit failed: {e}")
        return False

    for _ in range(60):
        response = get_text_to_cad_part_for_user.sync(
            client=client,
            id=result.id,
        )
        if response.status in (ApiCallStatus.COMPLETED, ApiCallStatus.FAILED):
            break
        time.sleep(5)

    if response.status == ApiCallStatus.COMPLETED:
        for name, content in response.outputs.items():
            if name.endswith(".step"):
                path = f"output/{part['name']}.step"
                with open(path, "w") as f:
                    f.write(base64.b64decode(content).decode("utf-8"))
                print(f"  Saved {path}")
                return True

    print(f"  Failed: {response.error}")
    return False

results = []
for part in parts:
    success = generate_and_save(part)
    results.append((part["name"], success))

print("\n--- Summary ---")
for name, success in results:
    status = "OK" if success else "FAILED"
    print(f"  {name}: {status}")
</code></pre>
<p>I ran a version of this with twelve parts. Nine succeeded on the first try. Two succeeded after I rewrote the prompts to be more specific (the originals were too vague and the API returned errors). One just wouldn't generate no matter how I phrased it, a spring clip with internal snap features that was probably too complex for the current model. That's roughly the success rate I've seen across a few hundred generations: around 85 percent on the first attempt, closer to 90 percent after prompt adjustments.</p>
<h2>Writing better prompts</h2>
<p>Prompt quality is the single biggest factor in whether you get usable geometry. This is the part that no amount of SDK knowledge can substitute for. You need to think like you're writing a work order for someone who knows CAD vocabulary but has no context about your project.</p>
<p>Prompts that work well:</p>
<ul>
<li>Include specific dimensions for every feature you care about</li>
<li>Name features using CAD terms: bore, boss, fillet, chamfer, counterbore, pocket, rib, standoff</li>
<li>Specify material thickness explicitly</li>
<li>Describe hole patterns with center-to-center spacing, not "evenly distributed" (the AI's idea of "even" may differ from yours)</li>
<li>Keep it to one part per prompt</li>
</ul>
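<p>One way to enforce that discipline is to stop typing dimensions into prompts by hand. A small helper like this (my own convention, not part of any SDK) assembles the prompt from explicit parameters, so a forgotten number becomes a missing argument instead of a silent guess:</p>
<pre><code class="language-python">def plate_prompt(length_mm, width_mm, thickness_mm,
                 hole_d_mm, pattern_x_mm, pattern_y_mm):
    # Every dimension the part needs appears explicitly in the prompt text.
    return (
        f"rectangular plate, {length_mm}mm by {width_mm}mm by {thickness_mm}mm, "
        f"four {hole_d_mm}mm holes on a {pattern_x_mm}mm by {pattern_y_mm}mm "
        "bolt pattern centered on the plate"
    )

print(plate_prompt(80, 50, 5, 4.2, 60, 30))
</code></pre>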
<p>Prompts that reliably cause problems:</p>
<ul>
<li>"A small bracket" (how small? what shape? for what?)</li>
<li>Descriptions that reference other parts ("a bracket that attaches to the sensor mount") without describing the geometry</li>
<li>Parts with more than about 10 features or multiple interacting feature sets</li>
<li>Anything involving springs, threads, or moving mechanisms</li>
<li>Organic shapes with complex curvature</li>
</ul>
<p>Here's a prompt I iterated on three times before the output was usable:</p>
<p>First try: "motor mount bracket." Generated something. Wrong in every dimension, because I gave it nothing to work with.</p>
<p>Second try: "L-bracket motor mount, NEMA 17 pattern." Better shape, but the bolt pattern was off and the overall size was too small.</p>
<p>Third try: "L-bracket, 3mm aluminum, 60mm by 40mm base, 60mm by 40mm vertical face, four M3 clearance holes on 31mm square NEMA 17 pattern centered on vertical face, two M4 clearance holes on base 50mm apart." Got a bracket I could actually use as starting geometry. Still needed to fix one hole position and add a fillet, but the shape was right.</p>
<p>The lesson: treat the prompt like you'd treat a dimensioned sketch. Every number you leave out is a number the AI guesses, and its guesses are trained on averages, not your specific assembly.</p>
<h2>Step 6: add retry logic</h2>
<p>Production scripts need to handle failures without stopping. Here's the pattern I've settled on:</p>
<pre><code class="language-python">def generate_with_retry(prompt, output_path, max_attempts=2):
    for attempt in range(max_attempts):
        try:
            result = create_text_to_cad.sync(
                client=client,
                output_format="step",
                body=TextToCadCreateBody(prompt=prompt),
            )
        except Exception as e:
            print(f"  Attempt {attempt + 1} request error: {e}")
            time.sleep(10)
            continue

        for _ in range(60):
            response = get_text_to_cad_part_for_user.sync(
                client=client,
                id=result.id,
            )
            if response.status in (
                ApiCallStatus.COMPLETED,
                ApiCallStatus.FAILED,
            ):
                break
            time.sleep(5)

        if response.status == ApiCallStatus.COMPLETED:
            for name, content in response.outputs.items():
                if name.endswith(".step"):
                    step_data = base64.b64decode(content).decode("utf-8")
                    with open(output_path, "w") as f:
                        f.write(step_data)
                    return True

        if attempt &#x3C; max_attempts - 1:
            print(f"  Attempt {attempt + 1} failed, retrying...")
            time.sleep(5)

    return False
</code></pre>
<p>Two attempts is usually enough. The generation model is non-deterministic, so the same prompt can succeed on a retry even if it failed the first time. I haven't found that more than two retries helps, though. If it fails twice, the prompt usually needs rewriting.</p>
<h2>Step 7: from script to something you'd actually maintain</h2>
<p>At this point you have all the pieces. The progression from here depends on what you're building. A few directions I've gone:</p>
<p>Reading part specs from a CSV or YAML file instead of hardcoding them. The <a href="/posts/text-to-cad-api-python">text-to-CAD API Python</a> post has a CSV example that plugs directly into this workflow.</p>
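<p>A minimal sketch of the CSV version, using an inline spec here so it runs standalone. The two-column <code>name,prompt</code> layout is my convention, not anything the API requires; the resulting list plugs straight into the batch loop above:</p>
<pre><code class="language-python">import csv
import io

# Inline stand-in for a parts.csv file on disk.
spec_csv = io.StringIO(
    "name,prompt\n"
    'mounting_plate,"rectangular plate, 80mm by 50mm by 5mm"\n'
    'standoff,"cylindrical standoff, 20mm OD, 10mm bore, 15mm tall"\n'
)

# csv.DictReader handles the quoted commas inside each prompt.
parts = [dict(row) for row in csv.DictReader(spec_csv)]

for part in parts:
    print(part["name"], "->", part["prompt"])
</code></pre>
<p>Swap the <code>StringIO</code> for <code>open("parts.csv")</code> and nothing else changes.</p>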
<p>Adding logging. I write a JSON log entry for each generation with the prompt, request ID, timestamp, success/failure status, and output path. Three weeks later, when I'm wondering why a particular STEP file looks wrong, the log tells me what prompt produced it.</p>
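<p>Here's the shape of that log entry as I write it; the field names are my own convention. Appending one JSON object per line (JSONL) keeps the file greppable and means a crash never corrupts earlier entries:</p>
<pre><code class="language-python">import json
import time

def log_generation(log_path, prompt, request_id, success, output_path):
    # One JSON object per line, appended after each generation.
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "prompt": prompt,
        "request_id": request_id,
        "success": success,
        "output_path": output_path,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
</code></pre>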
<p>Running as a cron job. My current setup checks a YAML file for new entries twice a day, generates any parts that don't already have STEP files in the output folder, and sends me a Slack message with the results. About 80 lines of Python total. It's the most useful automation I've built in the past six months that doesn't involve a database.</p>
<p>Validating the output. After saving the STEP file, I open it with a STEP parser (pythonOCC or cadquery can do this) and check that the solid is valid, has non-zero volume, and has a bounding box that roughly matches the expected dimensions. This catches the occasional garbage output before it pollutes the project folder.</p>
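<p>The bounding-box extraction comes from whichever parser you load the STEP with; the sanity check itself is plain Python. A sketch, assuming you already have the three extents in millimeters (the tolerance is deliberately loose, since the model's dimensions are approximate):</p>
<pre><code class="language-python">def dims_plausible(bbox_mm, expected_mm, rel_tol=0.1):
    # Sort both triples so the check is orientation-independent; generated
    # parts often come back rotated relative to what you expected.
    actual = sorted(bbox_mm)
    expected = sorted(expected_mm)
    return all(
        abs(a - e) &#x3C;= rel_tol * e
        for a, e in zip(actual, expected)
    )

# An 80 x 50 x 5 plate that came back slightly off, and rotated:
print(dims_plausible((5.1, 79.2, 50.4), (80, 50, 5)))  # True
print(dims_plausible((5.0, 40.0, 50.0), (80, 50, 5)))  # False
</code></pre>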
<h2>What this doesn't cover</h2>
<p>Assemblies. The API generates one part per request. If you need multiple parts that fit together, you're generating each one separately and assembling in your CAD tool.</p>
<p>Iterative refinement. The API doesn't have a "modify the last part I generated" endpoint. Each request starts fresh. Zoo's web UI supports conversational refinement, but through the API each generation is a one-shot request with no session state.</p>
<p>KCL output. Zoo is building a code-based CAD language called KCL, and the API has a <code>kcl</code> option that returns KCL code alongside the geometry. I've experimented with this but haven't found a production use for it yet. The code is interesting to read and theoretically editable, but the tooling around KCL is still early.</p>
<p>For more context on KCL, Zoo's broader toolset, and how the text-to-CAD model performs on various part types, see the <a href="/posts/zoo-text-to-cad-review">Zoo text-to-CAD review</a>.</p>
<h2>The honest verdict on this workflow</h2>
<p>I've been using this API-based workflow for about three months now, mostly for generating fixture brackets and mounting plates, the kind of parts that are boring to model but too specific to reuse from a library. The API saves me real time on those parts. Not because the output is perfect, it's not, and I still open every STEP file and check the dimensions. But starting from a generated solid instead of a blank sketch cuts the boring parts out of my CAD day, and the boring parts are what make you stop caring about quality.</p>
<p>The workflow from curl to production script took me an afternoon to build and has been stable since. The SDK is good enough that you don't fight it. The API is good enough that it succeeds most of the time. The output is good enough that it's useful as starting geometry.</p>
<p>"Good enough" is doing a lot of work in that paragraph, and I mean it precisely. This is a tool that gets you 70 to 80 percent of the way there on simple parts, and that last 20 to 30 percent is still your job. Whether that's a bargain or a trap depends on how many simple parts you generate and how high your tolerance is for checking the AI's work. For me, it's been a bargain. The <a href="/posts/kittycad-python-sdk">KittyCAD Python SDK</a> documentation has everything else you'd need beyond what I've covered here.</p>
]]></content:encoded>
    </item>
    <item>
      <title>The Text2CAD paper: what the NeurIPS research actually says</title>
      <link>https://blog.texocad.ai/posts/text2cad-paper</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text2cad-paper</guid>
      <pubDate>Tue, 17 Feb 2026 00:00:00 GMT</pubDate>
      <description>The NeurIPS 2024 Text2CAD paper introduced the first end-to-end framework for generating parametric CAD from natural language. Here&apos;s what it does, what it proved, and what it doesn&apos;t solve.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>research</category>
      <category>neurips</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> The Text2CAD paper (NeurIPS 2024 spotlight) presents a transformer-based framework that generates parametric CAD models from text using the DeepCAD dataset (~170K models, ~660K text annotations). It uses a BERT encoder and autoregressive CAD sequence decoder to produce sketch-and-extrude operations, not mesh geometry.</p>
<p>I've seen the Text2CAD paper cited by at least four different text-to-CAD vendors, always in the same way: vaguely, enthusiastically, and with the inconvenient parts left out. "Based on cutting-edge NeurIPS research" is a great thing to put on a landing page. It's less useful for understanding what the research actually showed, where it broke down, and what it means for the tools you might use on a Tuesday afternoon when a client needs a bracket by end of day.</p>
<p>So I read the paper. Then I read it again, because the first pass didn't stick and I was trying to understand the evaluation metrics while eating a sandwich at my desk, which turns out to be a bad combination. Here's what it actually says.</p>
<h2>What the paper is</h2>
<p>Text2CAD, published as a spotlight paper at NeurIPS 2024, is the first end-to-end framework for generating parametric CAD models from natural language text prompts. The authors are Mohammad Sadil Khan, Sankalp Sinha, Talha Uddin Sheikh, Didier Stricker, Sk Aziz Ali, and Muhammad Zeshan Afzal, primarily from DFKI and RPTU Kaiserslautern-Landau in Germany.</p>
<p>"First end-to-end" is an important qualifier. Earlier research had tackled pieces of this problem: generating CAD sequences from other representations, generating 3D geometry from text as mesh, annotating CAD models with descriptions. Text2CAD put the full pipeline together: text in, parametric CAD operations out.</p>
<p>The paper introduced two contributions that matter. First, a data annotation pipeline that generated multi-level text descriptions for the <a href="/posts/deepcad-dataset">DeepCAD dataset</a>. Second, the model architecture itself, which takes those text descriptions and generates valid sequences of sketch-and-extrude operations.</p>
<h2>The data pipeline</h2>
<p>The <a href="/posts/deepcad-dataset">DeepCAD dataset</a> contains about 178,000 parametric CAD models represented as sequences of CAD operations: sketch a profile, extrude it, sketch another profile on a different plane, extrude that. Each model is a recipe, not a mesh. That's what makes it useful for this kind of research. The models are simple, mostly prismatic mechanical parts, but they're stored as parametric operation sequences, the same kind of instructions a human would follow in a timeline-based CAD tool.</p>
<p>The Text2CAD team annotated this dataset with approximately 660,000 text descriptions using Mistral and LLaVA-NeXT (a vision-language model). They generated descriptions at multiple skill levels, from beginner-style ("make two cylinders, one inside the other") to expert-style ("sketch a concentric circular profile on the XY plane with outer diameter 24mm and inner diameter 16mm, extrude 10mm along the Z axis"). This range was deliberate. Real users don't all describe geometry the same way, and the model needed to handle everything from casual to precise.</p>
<p>That annotation pipeline is itself a contribution. Before Text2CAD, the DeepCAD models existed without text labels. You had geometry but no natural language descriptions to train a text-to-CAD model on. The team essentially created the labeled dataset that made the whole thing possible.</p>
<h2>The architecture</h2>
<p>The model has two main components. A text encoder based on BERT (with trainable adaptive layers) converts the input prompt into a dense numerical representation. An autoregressive transformer decoder takes that representation and generates CAD operations one token at a time.</p>
<p>Each CAD operation is tokenized: the operation type (sketch, extrude), the parameters (coordinates, dimensions, angles), and the ordering. The decoder predicts the next token in the sequence, conditioned on the text encoding and everything it's generated so far. If you've seen how language models generate text word by word, this is the same principle applied to CAD construction sequences.</p>
<p>The output isn't mesh. It's a sequence of sketch-and-extrude operations that can be executed by a CAD kernel to produce B-Rep geometry. That distinction matters enormously and is what separates this from text-to-3D research like DreamFusion or Point-E. The Text2CAD model doesn't predict what a surface looks like. It predicts <a href="/posts/how-text-to-cad-works">how to build a solid</a>, step by step, the way an engineer would.</p>
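<p>To make that concrete, here's a toy version (my simplification, not the paper's actual token vocabulary) of what a construction sequence looks like as data, for the concentric-cylinder example from earlier:</p>
<pre><code class="language-python"># The decoder emits a sequence like this one token at a time: operation
# types, then their parameters. No mesh data anywhere; a CAD kernel
# replays the recipe to produce the B-Rep solid.
washer = [
    {"op": "sketch", "plane": "XY", "curves": [
        {"type": "circle", "center": (0, 0), "radius": 12},
        {"type": "circle", "center": (0, 0), "radius": 8},
    ]},
    {"op": "extrude", "distance": 10, "direction": "Z"},
]

print([step["op"] for step in washer])
</code></pre>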
<h2>What the results showed</h2>
<p>The paper evaluates the model on several axes. Visual quality (does it look like the described part), parametric precision (are the individual operations correct), and geometric accuracy (does the final solid match the intent).</p>
<p>For parametric precision, they report F1 scores for different CAD elements: lines, arcs, circles, and extrusions. The model is reasonably good at getting the basic operations right, especially for the simpler descriptions. For geometric accuracy, they use Chamfer Distance (a standard metric for comparing 3D shapes) and invalidity ratios (what fraction of generated sequences produce broken geometry).</p>
<p>They also ran GPT-4V evaluations and human evaluations, which is an acknowledgment that metrics alone don't capture whether a generated part is actually useful.</p>
<p>The honest summary of the results: the model can generate recognizable mechanical parts from text descriptions, with valid topology and the correct general shape. It handles beginner-level prompts (simple descriptions) better than expert-level prompts (precise dimensional specifications). The dimensional accuracy is approximate, not precise. The range of geometry it can produce is limited to what exists in the training data, which is mostly simple prismatic parts.</p>
<h2>What it doesn't solve</h2>
<p>This is where the vendor citations conveniently trail off.</p>
<p>The model generates single parts only. No assemblies. No parts that reference other parts. No mating relationships or spatial context. You describe one object and get one object.</p>
<p>The dimensional accuracy is not sufficient for manufacturing without verification. The model generates approximate dimensions that are often close but not exact. If you ask for 80mm by 50mm, you might get 78.3mm by 51.1mm. That's impressive for a research prototype and useless for a machine shop.</p>
<p>The geometry vocabulary is limited to sketch-and-extrude operations. No fillets, no chamfers, no shell, no pattern, no sweep, no loft. The <a href="/posts/deepcad-dataset">DeepCAD dataset</a> stores models as sketch-and-extrude sequences, so that's what the model learned. If your part needs a fillet, the model doesn't have a token for that. This is a significant limitation because real parts have fillets, chamfers, draft angles, and other features that make them manufacturable.</p>
<p>The training data is small. 178,000 models sounds like a lot until you compare it to the billions of images that image generation models train on. The model has seen a narrow slice of the CAD universe: simple mechanical parts, mostly boxes and cylinders and plates. Ask for a gear, a cam, a sheet metal bracket, or an ergonomic handle, and you're outside the training distribution.</p>
<p>The code is available (<a href="https://github.com/sadilkhan/Text2CAD">GitHub</a>), but the license is CC BY-NC-SA 4.0: non-commercial use only. If you want to build a product on this, you need a different license arrangement or a different model.</p>
<h2>What it means for the tools you actually use</h2>
<p>Every commercial text-to-CAD tool operates on similar principles to what this paper describes: text encoding, sequence generation, kernel execution. Zoo.dev, AdamCAD, CADAgent, they all process text prompts and output CAD operations. The specific architectures differ. The training data differs. The kernels differ. But the fundamental pattern, language in, construction sequence out, is what Text2CAD formalized academically.</p>
<p>The paper is useful for calibrating expectations. When you see a text-to-CAD tool generate a clean bracket from a prompt, the research tells you roughly what's happening inside and why certain things work better than others. Simple prismatic parts match the training distribution. Complex geometry doesn't. Dimensional accuracy is approximate. Single-part generation is the current frontier. These aren't limitations specific to one tool. They're limitations of the approach, and the Text2CAD paper is honest about them in a way that marketing pages typically aren't.</p>
<h2>The contribution that matters most</h2>
<p>If I had to pick one thing the Text2CAD paper did that will have lasting impact, it's the annotated dataset. Before this work, the CAD research community had geometry without language labels. Text2CAD created the bridge between natural language and parametric CAD sequences at a scale that enables training. Every future text-to-CAD model, open or commercial, benefits from the existence of that annotated data or from the pipeline methodology used to create it.</p>
<p>The model itself will be surpassed. The architecture will be refined. But the problem of connecting human language to CAD operation sequences, and the dataset that first made that connection trainable, that's the foundation. The <a href="/posts/text-to-cad-open-source">open-source text-to-CAD</a> space is building on it.</p>
<p>The paper is worth reading if you use text-to-CAD tools and want to understand what you're actually interacting with. It's also worth reading if you're skeptical about these tools, because the limitations section is more honest than any product page I've seen. The research proves the concept works. It also proves the concept has boundaries, and those boundaries are exactly where the hard engineering begins.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Can you self-host text-to-CAD?</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-self-hosted</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-self-hosted</guid>
      <pubDate>Mon, 16 Feb 2026 00:00:00 GMT</pubDate>
      <description>If you want to run text-to-CAD on your own servers for IP protection or offline use, the options are thin. Here&apos;s what exists.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>self-hosted</category>
      <category>enterprise</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Self-hosting text-to-CAD is extremely limited in 2026. The Text2CAD research code can be run locally but is not production-grade. OpenSCAD with a local LLM is the most practical self-hosted option. Zoo.dev is cloud-only. No turnkey self-hosted text-to-CAD solution exists for enterprise deployment.</p>
<p>A manufacturing client asked me last year if they could run text-to-CAD inside their own network. No cloud. No sending geometry to someone else's servers. Their IP policy was clear: part designs do not leave the building. I said I'd look into it, spent the better part of a day investigating, and came back with the kind of answer nobody enjoys giving. "Technically, sort of, but you won't like any of the options."</p>
<p>That conversation happens more often than the text-to-CAD vendors probably want to admit. Defense contractors, automotive suppliers, medical device companies, anyone with real IP concerns or air-gapped environments wants to know the same thing: can I run this on my own hardware? The honest answer in 2026 is mostly no, with a few narrow exceptions that come with significant caveats.</p>
<h2>Why self-hosting matters</h2>
<p>The reason isn't paranoia. CAD models contain geometry that directly represents a company's products, tooling, and manufacturing methods. A STEP file for a custom injection mold insert is worth real money. A housing design for an unreleased product is a trade secret. An assembly layout for a defense subsystem is controlled information.</p>
<p>Most text-to-CAD tools are cloud services. You type a prompt, it goes to a server, the server generates geometry, and it sends it back. The prompt itself often contains dimensional specifications, feature descriptions, and design intent that reveal what you're working on. Even if the vendor promises not to store your data, the fact that your geometry passed through someone else's infrastructure is a non-starter for many organizations. The <a href="/posts/text-to-cad-data-safety">text-to-CAD data safety</a> post covers the data handling policies of major tools, but even the best policies don't satisfy an air-gap requirement.</p>
<p>There's also the offline use case. Factories, field locations, and classified facilities don't always have internet. A text-to-CAD tool that requires a cloud connection is useless in those environments.</p>
<h2>What exists for self-hosting</h2>
<p>The options, ranked roughly by practicality.</p>
<h3>OpenSCAD with a local LLM</h3>
<p>This is the most usable self-hosted text-to-CAD setup I've found. The idea: run a language model locally (Llama 3, Mistral, DeepSeek, or similar), have it generate OpenSCAD scripts from your text prompts, and render the geometry with OpenSCAD on the same machine. Everything stays local. No network call leaves your system.</p>
<p>The setup isn't turnkey. You need to install and configure a local LLM inference server (Ollama or llama.cpp are the common choices), set up OpenSCAD, and write or borrow the glue code that sends prompts to the model and feeds the output to OpenSCAD. The <a href="/posts/openscad-ai">OpenSCAD AI</a> post covers the technical details of this workflow.</p>
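<p>The glue itself is short. A sketch using only the standard library, assuming Ollama's default local endpoint and its <code>/api/generate</code> payload shape; verify both against your Ollama version before relying on them:</p>
<pre><code class="language-python">import json
import subprocess
import urllib.request

def prompt_to_scad(prompt, model="llama3:70b"):
    # POST to the local Ollama server; nothing leaves the machine.
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": model,
            "prompt": "Write OpenSCAD code only, no prose: " + prompt,
            "stream": False,
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def render_cmd(scad_path, stl_path):
    # OpenSCAD's CLI renders a script straight to STL, also fully local.
    return ["openscad", "-o", stl_path, scad_path]

# Usage (needs Ollama and OpenSCAD installed):
#   scad = prompt_to_scad("a 20mm cube with a 5mm hole through the center")
#   open("part.scad", "w").write(scad)
#   subprocess.run(render_cmd("part.scad", "part.stl"), check=True)
</code></pre>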
<p>Several open-source projects make this easier. <a href="https://github.com/Adam0Brien/nl-cad">NL-CAD</a> provides a CLI and web interface for natural-language-to-OpenSCAD generation and can be pointed at a local model endpoint. The <a href="https://github.com/jabberjabberjabber/openscad-mcp">OpenSCAD MCP Server</a> connects any MCP-compatible agent to OpenSCAD with visual feedback.</p>
<p>The quality depends heavily on the local model. Llama 3 70B generates decent OpenSCAD for simple parts. Smaller models struggle with correct boolean operations and parameter placement. The best results I've gotten locally are still noticeably worse than what you get from a cloud model like Claude or GPT-4, because the local models are smaller and less capable at code generation. But the output stays on your machine, which is the whole point.</p>
<p>The limitations are the same as any OpenSCAD workflow: STL output only (no STEP), limited to geometry that OpenSCAD can express (no freeform surfaces, no feature tree), and you need to be able to read and debug OpenSCAD code when the model gets something wrong.</p>
<h3>Text2CAD research code</h3>
<p>The <a href="/posts/text2cad-paper">Text2CAD model</a> can be run entirely locally. The code is on GitHub, the model checkpoint is on Hugging Face, and the inference pipeline is Python. Once you get it running (which takes some determination and a GPU), you have a fully self-hosted system that generates parametric CAD operation sequences from text.</p>
<p>The problem is that the output quality isn't production-grade. The model was trained on the <a href="/posts/deepcad-dataset">DeepCAD dataset</a>, which contains simple prismatic parts. The generated geometry is approximate in dimensions and limited in complexity. There's no user interface. The inference is batch-mode Python scripts.</p>
<p>You could absolutely run this inside a corporate network. Whether you'd want to depends on whether "technically generates CAD from text" meets your bar. For research, experimentation, or as a starting point for a custom pipeline, it has value. For generating parts that engineers would use, it's not there yet.</p>
<p>The license is CC BY-NC-SA 4.0: non-commercial only. If your self-hosting use case involves a business generating parts for products, you're already in a licensing gray area.</p>
<h3>FreeCAD with local AI</h3>
<p>FreeCAD runs locally and has a Python scripting API. If you pair it with a local LLM, you can generate FreeCAD Python macros from text prompts and execute them inside FreeCAD. The output is real B-Rep geometry with STEP export, which is more useful for engineering workflows than OpenSCAD's STL.</p>
<p>The difficulty is reliability. FreeCAD's Python API is extensive but has enough quirks that local LLMs (which are less capable than cloud models) produce scripts that fail frequently. Wrong method signatures, missing recompute calls, coordinate system confusion. I'd estimate that about half the scripts generated by a local 70B model for FreeCAD need manual fixes, compared to maybe a quarter when using Claude or GPT-4 through the cloud. That failure rate makes it hard to recommend for regular use.</p>
<p>A few community projects are building <a href="/posts/freecad-ai-plugin">FreeCAD AI plugins</a> and MCP servers that could eventually support local model backends. These are early, but the architecture is right: FreeCAD as the open kernel, a local model as the brain, and an MCP server connecting them with visual feedback.</p>
<h3>Build123d with a local agent</h3>
<p>Build123d is a Python CAD library built on OpenCascade, the same open-source kernel that FreeCAD uses. It produces B-Rep geometry and exports STEP. The <a href="https://github.com/clawd-maf/cad-agent">CAD Agent</a> project already wraps build123d in an MCP server that gives AI agents visual feedback of their work.</p>
<p>In principle, you could point CAD Agent at a local LLM instead of a cloud model. The rendering server and build123d run locally. The only external dependency is the language model, and that can be replaced with a local one. In practice, I haven't tested this specific configuration extensively, and the code generation quality with local models is likely the weak link, same as with FreeCAD scripting.</p>
<p>This is probably the most promising fully-open self-hosted pipeline: local LLM, build123d for geometry, OpenCascade as the kernel, STEP as the output, and visual feedback through the rendering server. It's not packaged as a product. You'd be assembling it yourself.</p>
<h2>What you can't self-host</h2>
<p>Zoo.dev is cloud-only. The KittyCAD geometry kernel that powers it is proprietary. There's no self-hosted version and no announced plans for one. This is the most capable text-to-CAD tool on the market, and it is not available for on-premise deployment.</p>
<p>AdamCAD is cloud-only. The generation happens on their servers.</p>
<p>Any tool that uses OpenAI, Anthropic, or other cloud LLM APIs requires network access to those providers, which means your prompts (and any design information they contain) leave your network.</p>
<h2>The gap</h2>
<p>The fundamental problem is that the best text-to-CAD results require large models running on significant compute infrastructure, and most organizations don't want to set up and maintain that. A 70B parameter model needs a serious GPU or a cluster of consumer GPUs. A smaller model generates worse CAD. There's no magic middle ground where you get cloud-quality generation from a model that runs quietly on a workstation.</p>
<p>The kernel side is actually in reasonable shape. OpenCascade exists. Build123d wraps it nicely. STEP export works. The geometry creation pipeline can be fully local and open source. The bottleneck is the language model: the quality of the code generation, the ability to recover from errors, and the capacity to handle complex prompts. That's where the gap between local and cloud models hits hardest.</p>
<h2>Where this leaves enterprise</h2>
<p>If you need text-to-CAD and your geometry cannot leave your network, you have two practical options today. OpenSCAD with a local LLM for simple parts with STL output. Or FreeCAD/build123d scripting with a local LLM for B-Rep geometry with STEP output, accepting that you'll debug a lot of broken scripts.</p>
<p>Neither option is a product. Both are workflows you assemble and maintain yourself. Neither produces results competitive with what Zoo.dev does in the cloud.</p>
<p>The <a href="/posts/text-to-cad-open-source">text-to-CAD open source</a> ecosystem is improving, and the pieces for a good self-hosted pipeline exist. But "the pieces exist" and "it's ready for enterprise deployment" are different statements, and I'd be lying if I said the second one was true. For now, the companies that need self-hosted text-to-CAD are stuck choosing between data protection and output quality. The tools haven't solved that tradeoff yet.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD file formats: what comes out and whether it&apos;s usable</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-file-formats</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-file-formats</guid>
      <pubDate>Sun, 15 Feb 2026 00:00:00 GMT</pubDate>
      <description>Text-to-CAD tools output STEP, STL, glTF, OBJ, and DXF. Only one of those formats matters for real engineering work, and it&apos;s the one most tools handle worst.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>file-formats</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> STEP (AP214/AP203) is the only text-to-CAD output format suitable for engineering work because it preserves B-Rep geometry with real edges and faces. STL and OBJ are mesh-only. glTF is for visualization. DXF is 2D only. Always export STEP if you need editable, machinable geometry.</p>
<p>I opened a STEP file from Zoo.dev in Fusion 360 last week and everything was where it should be. Selectable faces. Real edges. Fillets I could suppress or resize. Then I opened the STL version of the exact same part. Same geometry, supposedly. Fusion imported it as a mesh body, a lump of triangles I couldn't do anything useful with. I tried selecting a face to add a pocket and got 347 individual triangles highlighted like confetti. I closed the file, opened the STEP again, and got on with my life.</p>
<p>This happens constantly. People generate a part with a text-to-CAD tool, export whatever format the download button gives them, and discover ten minutes later that their geometry is either perfectly usable or essentially decorative. The format decides, and most people don't think about it until they're already staring at the problem.</p>
<p>Here's what actually comes out of text-to-CAD tools, what each format gives you, and which one you should be using. Spoiler: it's STEP. It's always STEP.</p>
<h2>STEP: the one that matters</h2>
<p>STEP stands for Standard for the Exchange of Product model data. The specific flavors you'll encounter are AP203 and AP214, which differ in what metadata they carry, but for geometry purposes they're interchangeable. The file extension is <code>.step</code> or <code>.stp</code>, and every serious CAD program on earth reads them.</p>
<p>A STEP file contains B-Rep geometry. Boundary Representation. That means the file describes your part using mathematically exact surfaces, curves, edges, and vertices. A cylinder in a STEP file is a real cylinder defined by a center axis and a radius, not an approximation made of flat panels. A fillet is an actual tangent-continuous surface, not a strip of triangles pretending to be smooth.</p>
<p>This matters because B-Rep geometry is what CAD software works with natively. When you open a STEP file in Fusion 360 or SolidWorks, you get a solid body with selectable faces and edges. You can add features to it. Cut a pocket into that face. Chamfer that edge. Measure the distance between those two holes with actual precision. Dimension it on a drawing. Hand it to a CNC programmer who can generate toolpaths from the real surfaces instead of approximating over mesh data.</p>
<p>For a deeper look at why this distinction exists and why it shapes everything downstream, the <a href="/posts/brep-vs-mesh-ai-generation">B-Rep vs mesh</a> post covers the geometry side in detail.</p>
<p>Zoo.dev outputs STEP by default, which is one of the reasons I keep coming back to it. The geometry comes from their KittyCAD kernel, which generates native B-Rep. The STEP file you get is not a conversion from some internal mesh. It's B-Rep all the way through, from generation to export. That's rare in the text-to-CAD space, and it's not a small thing.</p>
<p>The downsides of STEP are minor but real. The files are larger than mesh formats, usually by a factor of two to five for typical parts. They don't render in web browsers without a viewer library. They don't load in game engines or 3D printing slicers, though slicers can usually import them and tessellate internally. None of these are problems for engineering work. They're problems for demos and social media screenshots, which is why so many tools default to flashier formats.</p>
<h2>STL: the format everyone knows and nobody should trust</h2>
<p>STL is the most common 3D file format in the world. Every 3D printer speaks it. Every CAD tool exports it. Every text-to-CAD tool offers it. It's been around since the 1980s, which in software terms makes it roughly Jurassic. And for engineering purposes, it's a dead end.</p>
<p>An STL file describes a shape as a collection of triangles. That's all it contains. No curves. No surfaces. No edges. No face information. No units (seriously, STL files don't specify whether the numbers are millimeters or inches). No color, no material, no assembly structure, no metadata of any kind. Just triangles.</p>
<p>A 50mm cylinder in an STL file isn't a cylinder. It's a polygon that looks like a cylinder if you use enough triangles. Select a "face" and you get a triangle. Measure an "edge" and you get a polyline approximation. Try to fillet something and your CAD software will tell you, politely or otherwise, that it can't fillet a mesh edge.</p>
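<p>The faceting error is easy to quantify. A circle of radius r approximated by N flat segments deviates from the true curve by at most r(1 - cos(π/N)), the sagitta of each chord. A quick sketch, plain math with no CAD libraries involved:</p>
<pre><code>import math

def facet_deviation(radius_mm: float, segments: int) -> float:
    """Max gap (sagitta) between a circle of this radius and a
    regular polygon with `segments` flat sides, in mm."""
    return radius_mm * (1 - math.cos(math.pi / segments))

# A 50mm-diameter cylinder tessellated at a few common resolutions:
for n in (16, 64, 256):
    print(f"{n:>4} segments: {facet_deviation(25, n):.4f} mm off the true circle")
</code></pre>
<p>At 16 segments the surface is almost half a millimeter off; at 256 it's a couple of microns. The triangles can get arbitrarily close, but they never become a cylinder, which is the whole point.</p>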
<p>For 3D printing, STL works because the slicer only needs the outer shell to generate toolpaths. The triangulation artifacts are smaller than the printer's resolution, so the part comes out looking round even though the file says it's faceted. This is fine. Nobody cares about the mathematical purity of their prototype bracket's bore diameter at the file level when the printer adds its own inaccuracies anyway.</p>
<p>For everything else, STL is a trap. You can't edit it meaningfully. You can't add engineering features. You can't tolerance it. You can't generate a useful drawing from it. You can send it to a machine shop, but any machinist worth their hourly rate will ask you for a STEP file and silently judge you for sending the STL.</p>
<p>When a text-to-CAD tool only outputs STL, that tells you something about what's happening under the hood. It usually means the geometry was generated as a mesh internally, not as B-Rep, and the tool has no solid model to export. The STL isn't a simplified version of something better. It's all there is. The <a href="/posts/text-to-cad-vs-text-to-3d">text-to-CAD vs text-to-3D</a> distinction matters here: if the output is mesh-only, it's closer to text-to-3D regardless of what the marketing says.</p>
<h2>OBJ: STL with slightly more ambition</h2>
<p>OBJ files are another mesh format, originally from Wavefront. They can carry vertex normals, texture coordinates, and material references, which makes them better than STL for rendering and visualization. In game development and visual effects, OBJ is a reasonable format. In engineering, it's another pile of triangles.</p>
<p>Some text-to-CAD tools offer OBJ alongside STL because it looks better in web-based 3D viewers. The normals help with smooth shading, so a cylinder actually looks smooth on screen instead of faceted. But the underlying geometry is still mesh. You still can't select a face. You still can't add features. You're still working with triangle soup.</p>
<p>I've seen people download OBJ files from text-to-CAD tools because the preview looked smoother than the STL preview, as if visual smoothness in a web viewer meant geometric quality in CAD. It doesn't. Smooth shading is a rendering trick. The mesh underneath is just as faceted as the STL version. Sometimes it's literally the same mesh with normals tacked on.</p>
<p>OBJ is fine if your next step is dropping the part into a rendering scene or a product visualization. It's not fine if your next step involves Fusion 360, a machine shop, or any tool that needs to know what a face is.</p>
<h2>glTF: the visualization format</h2>
<p>glTF, sometimes called the "JPEG of 3D," is a format designed for efficient transmission and rendering of 3D scenes. It supports meshes, materials, textures, animations, and scene hierarchies. It's the format of choice for web-based 3D viewers, AR applications, and anywhere you need to display 3D content fast without a big download.</p>
<p>Zoo.dev offers glTF export, which is useful if you want to embed a generated part in a web page or hand it to someone who needs a 3D preview without installing CAD software. The format is lightweight and renders quickly in any browser with WebGL support.</p>
<p>For engineering? No. Same problem as OBJ and STL. It's a mesh format. No B-Rep. No selectable faces. No features. It's for looking at things, not working with them. I keep glTF around for client presentations where someone needs to rotate a part in their browser and feel involved. For actual work, it stays in the export menu.</p>
<h2>DXF: the 2D one</h2>
<p>DXF is an Autodesk format that primarily handles 2D geometry. Lines, arcs, circles, polylines, text, dimensions. It's the lingua franca of laser cutting, CNC routing, and 2D drawing exchange. If you need a flat profile cut from sheet metal or a gasket outline sent to a waterjet shop, DXF is what you send.</p>
<p>Some text-to-CAD workflows touch DXF when you need to extract a cross-section or a flat pattern from a generated 3D part. It's not really a text-to-CAD output format in the same way STEP or STL are. It's more of a downstream artifact. You generate the 3D part, slice it or unfold it, and export the 2D profile as DXF.</p>
<p>DXF carries no 3D solid information. If someone hands you a DXF and says it's the output of a text-to-CAD tool, what they actually have is a 2D slice of a 3D part, or they're confused. Either way, ask for the STEP file.</p>
<h2>The format hierarchy in practice</h2>
<p>Here's how I think about it, and how I'd recommend anyone using text-to-CAD tools think about it:</p>
<p>STEP first. Always. If the tool offers STEP export, use it. This is your working file, the one you open in CAD software, verify, edit, and send to manufacturing. It's the only format that preserves the geometry in a form that engineers can actually use. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> recommends this too, because there's no good reason not to.</p>
<p>STL for 3D printing. After you've verified the STEP file, export or convert to STL for your slicer. You can do this conversion in any CAD tool, or use a service API like Zoo's file conversion endpoint. The STL is a derivative product, not your source of truth.</p>
<p>glTF or OBJ for visualization. If you need a lightweight 3D preview for a web page, presentation, or client review, these formats work. They're display copies. Treat them that way.</p>
<p>DXF for 2D profiles. When you need a flat pattern, a cross-section, or a cutting profile, extract it from the STEP file and export as DXF. Don't try to go from text-to-CAD directly to DXF for a 3D part. The math doesn't work that way.</p>
<h2>The format problem in the text-to-CAD market</h2>
<p>Here's what actually bothers me. Most text-to-CAD tools default to showing you a shiny 3D preview in a web viewer, which is a glTF or OBJ render. You see a smooth, attractive part. You feel good about the result. You click download and get an STL, because that's the format most closely related to what's on screen. The STEP option, if it exists, is buried in a dropdown or behind an extra click.</p>
<p>This is backwards. The STEP file is the valuable output. The mesh preview is a demo. But the UX is designed around the demo, because smooth rendered parts look better in screenshots than "your STEP file is ready for download" in plain text. Marketing wins. Engineering loses.</p>
<p>Zoo.dev is better about this than most. STEP is a first-class output. The UI doesn't try to hide it behind the mesh. But across the market, the norm is still mesh-first, STEP-maybe, which tells you something about who these tools are being built for and who they're not.</p>
<p>If a text-to-CAD tool doesn't offer STEP export, it's not a CAD tool. It's a shape generator. The <a href="/posts/text-to-cad-step-file">text-to-CAD step file</a> post goes into more detail on what to expect from STEP output specifically.</p>
<h2>The conversion trap</h2>
<p>One more thing, because I've watched people fall into this and it always ends the same way. You cannot convert a mesh into a STEP file and get real B-Rep geometry. There are mesh-to-B-Rep tools. They exist. They try to fit surfaces over triangle data and produce something that has faces and edges. The results range from "usable for simple shapes" to "a fever dream with bad topology."</p>
<p>If the text-to-CAD tool generated mesh internally and gives you a STEP file, check whether that STEP file contains real B-Rep or a mesh body wrapped in a STEP container. Some tools do this. The file extension says STEP, but the contents are still triangles wearing a trench coat. Open it in your CAD software and try to select a face. If you get hundreds of tiny planar faces instead of a single surface, you've been had.</p>
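<p>You can run a cruder version of that check without opening CAD software at all. STEP files (ISO 10303-21) are plain text, so counting surface entity types gives you a decent sniff test: a genuine B-Rep part with round features contains entities like <code>CYLINDRICAL_SURFACE</code> or <code>B_SPLINE_SURFACE_WITH_KNOTS</code>, while a mesh in a trench coat is almost nothing but <code>PLANE</code>. This is a heuristic of my own, not any standard, and the threshold numbers are arbitrary:</p>
<pre><code>import re

SURFACE_TYPES = ("PLANE", "CYLINDRICAL_SURFACE", "CONICAL_SURFACE",
                 "TOROIDAL_SURFACE", "SPHERICAL_SURFACE",
                 "B_SPLINE_SURFACE_WITH_KNOTS")

def surface_census(step_text: str) -> dict:
    """Count analytic surface entities in STEP text, e.g. "#12 = PLANE(...);"."""
    return {t: len(re.findall(rf"=\s*{t}\s*\(", step_text)) for t in SURFACE_TYPES}

def looks_tessellated(step_text: str) -> bool:
    """Many faces, nearly all planar: probably triangles wearing a trench coat."""
    counts = surface_census(step_text)
    total = sum(counts.values())
    return total > 100 and counts["PLANE"] / total > 0.95
</code></pre>
<p>It won't catch everything, and a part that genuinely is all flat faces will trip it, but it flags the classic failure mode in seconds.</p>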
<p>The <a href="/posts/brep-vs-mesh-ai-generation">B-Rep vs mesh</a> post explains this distinction in more technical depth. The short version: B-Rep generated as B-Rep is good. Mesh converted to B-Rep is usually bad. The file format doesn't lie, but it can be dressed up to mislead.</p>
<h2>Pick your format like you pick your fights</h2>
<p>Export STEP. Check the geometry. Use mesh derivatives for what mesh is good at, printing and previewing, and stop there. The file format is the first decision that determines whether your text-to-CAD output ends up in a real project or in a folder of curiosities you never open again. I've got both kinds of folders. The STEP folder gets opened. The STL-only folder collects dust. That tells you everything you need to know.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD open source: what exists and what&apos;s missing</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-open-source</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-open-source</guid>
      <pubDate>Sun, 15 Feb 2026 00:00:00 GMT</pubDate>
      <description>Open-source text-to-CAD is early. CADAgent works inside Fusion 360, OpenSCAD pairs naturally with LLMs, and FreeCAD has Python scripting. But there&apos;s no fully open alternative to Zoo.dev yet.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>open-source</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Open-source text-to-CAD options in 2026 include CADAgent (Fusion 360 add-in, GitHub: er-fo/CADAgent), OpenSCAD with LLM code generation, and FreeCAD with AI-assisted Python macros. There is no fully open-source equivalent to Zoo.dev&apos;s B-Rep generation. The NeurIPS Text2CAD research code is available but not production-ready.</p>
<p>I spent a weekend trying to get the <a href="https://github.com/sadilkhan/Text2CAD">Text2CAD research code</a> running on my workstation. Two hours setting up the conda environment. Another hour downloading the model checkpoint from Hugging Face. Twenty minutes discovering the training scripts expected Linux paths I didn't have. When I finally got inference working, it generated a vaguely rectangular solid that was supposed to be a mounting bracket. The geometry was valid, technically. It also looked like something a first-year student would submit five minutes before a deadline. I closed the terminal, opened Fusion 360, and modeled the bracket in eight minutes.</p>
<p>That experience captures where open-source text-to-CAD stands in 2026. The research exists. The code is public. The results aren't at the point where you'd use them for work. But the pieces are assembling in interesting ways, and if you care about running this on your own hardware, controlling your data, or just understanding what's happening under the hood, it helps to know what's actually out there.</p>
<h2>The research layer: Text2CAD</h2>
<p>The <a href="/posts/text2cad-paper">Text2CAD paper</a> from NeurIPS 2024 is the most significant academic work in this space. The code is open source on GitHub (SadilKhan/Text2CAD), licensed CC BY-NC-SA 4.0, which means non-commercial use only. The dataset, model checkpoint, and training pipeline are all available on Hugging Face.</p>
<p>What you get: a transformer-based model that takes a text prompt and generates a sequence of sketch-and-extrude operations, trained on the <a href="/posts/deepcad-dataset">DeepCAD dataset</a> of roughly 178,000 parametric CAD models with about 660,000 text annotations. The model produces valid parametric geometry from natural language. The output is editable CAD operation sequences, not mesh.</p>
<p>What you don't get: production-quality output. The models in the DeepCAD dataset are geometrically simple. Mostly basic prismatic shapes, boxes, cylinders, simple mechanical parts. The Text2CAD model generates geometry within that vocabulary, and it does so at a quality level that's impressive for research but inadequate for engineering. Dimensional accuracy is rough. Complex features aren't in the training distribution. There's no UI. The inference pipeline requires Python, conda, and a GPU. It behaves like a research prototype because it is one.</p>
<p>Still, the fact that this exists and is downloadable matters. Two years ago, no public code could generate parametric CAD from text at all. Now you can clone a repo and do it on your own machine. The gap between this and a useful tool is large, but it's a gap between something and nothing.</p>
<h2>OpenSCAD with LLMs: the practical option</h2>
<p>If I had to pick the most practical open-source text-to-CAD workflow right now, it's OpenSCAD plus a language model. Not because it's elegant, but because it actually works, within limits.</p>
<p><a href="https://openscad.org">OpenSCAD</a> is a code-based CAD tool. You write a script that describes geometry using primitives, boolean operations, and transformations. It's been around for years and it's fully open source. The output is parametric by nature: change a variable and the whole model updates. The rendering engine produces proper geometry, not mesh approximations (though you export to STL for manufacturing, which is a separate issue).</p>
<p>The connection to LLMs is natural. OpenSCAD scripts are code, and language models are good at generating code. You describe a part in English, the LLM writes an OpenSCAD script, and OpenSCAD renders the geometry. Several projects have formalized this workflow.</p>
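<p>The glue code for that loop is small. Here's a sketch of the deterministic half: a cheap validity check before you waste a render, and the render call itself via the <code>openscad</code> CLI. The LLM call is deliberately omitted, and <code>looks_like_scad</code> and the system prompt are my own illustrative choices, not part of any of these projects:</p>
<pre><code>import subprocess, tempfile

SYSTEM_PROMPT = ("You write OpenSCAD code. Respond with one complete .scad "
                 "script, no prose. Use millimeters. Put key dimensions in "
                 "named variables.")

def looks_like_scad(source: str) -> bool:
    """Cheap sanity check before invoking the renderer: balanced braces
    and at least one geometry-producing keyword."""
    keywords = ("cube", "cylinder", "sphere", "linear_extrude",
                "polygon", "difference", "union")
    return source.count("{") == source.count("}") and any(k in source for k in keywords)

def render_scad(source: str, out_stl: str) -> None:
    """Render a script to STL, assuming `openscad` is on PATH."""
    with tempfile.NamedTemporaryFile("w", suffix=".scad", delete=False) as f:
        f.write(source)
    subprocess.run(["openscad", "-o", out_stl, f.name], check=True)
</code></pre>
<p>Reading the generated script is still the important step. The check above catches malformed output; it can't catch a boolean operation that subtracts the wrong cylinder.</p>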
<p><a href="https://promptscad.com">PromptSCAD</a> is a web-based tool that uses DeepSeek v3 as its LLM backend to generate OpenSCAD code from text prompts. It renders the result in-browser using OpenSCAD compiled to WASM. Still pre-alpha, but functional. You type a description, get a script, see the geometry.</p>
<p>The <a href="https://github.com/jabberjabberjabber/openscad-mcp">OpenSCAD MCP Server</a> connects OpenSCAD to AI agents via the Model Context Protocol, giving the LLM live visual feedback of what it's generating. The agent can create models, view rendered previews, and iterate. It's clever engineering, and it solves a real problem: AI generating CAD blind is like a machinist working with their eyes closed.</p>
<p><a href="https://github.com/Adam0Brien/nl-cad">NL-CAD</a> takes a multi-mode approach, supporting mechanical parts via the BOSL2 library, voxel objects, and conversational refinement. It has CLI, web, and API interfaces.</p>
<p>The OpenSCAD approach has real advantages. The code is inspectable and editable. The parametric relationships are explicit in the script. You can version-control the model in git. And because OpenSCAD's language is well-documented and constrained, LLMs generate surprisingly decent scripts for simple to moderate parts.</p>
<p>The downsides are the usual <a href="/posts/openscad-ai">OpenSCAD limitations</a>. The scripting language is powerful for programmatic geometry but clunky for organic shapes. There's no feature tree in the traditional sense. The export is STL, not STEP, which limits manufacturing workflows. And the LLM still makes mistakes: bad boolean operations, misplaced features, scripts that render with warnings or errors. You need to read the generated code. If you can't read OpenSCAD, you can't debug the output.</p>
<h2>FreeCAD with AI-assisted Python</h2>
<p><a href="/posts/freecad-ai-plugin">FreeCAD</a> is the most capable fully open-source parametric CAD program. It supports B-Rep geometry, STEP export, assemblies, FEM, and a Python scripting API that can do almost anything the GUI can do. In theory, it's the perfect foundation for open-source text-to-CAD.</p>
<p>In practice, the AI integration is still early. FreeCAD's Python API is extensive but inconsistent. Different workbenches have different scripting patterns. The documentation has gaps. Language models can generate FreeCAD Python scripts, but the scripts fail more often than OpenSCAD scripts because the API surface is larger and less forgiving. A wrong method name, a parameter in the wrong coordinate system, a missing recompute call, and the script fails silently or produces garbage geometry.</p>
<p>That said, I've had success using Claude and GPT-4 to generate FreeCAD macros for simple parts. A plate with holes. A bracket with bends. An enclosure with standoffs. The key is giving the LLM very specific instructions about the FreeCAD API and being prepared to fix the script when it inevitably gets a detail wrong. It's not a polished workflow. It's more like having a junior colleague who knows the API vocabulary but hasn't internalized the idioms.</p>
<p>Several community projects are working on more structured FreeCAD AI integration, including MCP servers that give language models access to FreeCAD's Python API with visual feedback. These are early and not yet stable enough for regular use, but the direction is promising. FreeCAD's architecture supports this kind of integration better than most proprietary tools, because the scripting API is a first-class citizen, not a bolted-on afterthought.</p>
<h2>Fusion 360 MCP servers and CAD Agent</h2>
<p>A cluster of open-source projects have appeared connecting Fusion 360 to language models via the Model Context Protocol. These aren't text-to-CAD in the traditional sense. They're bridges that let an AI agent issue commands to Fusion 360's API.</p>
<p><a href="https://github.com/AuraFriday/Fusion-360-MCP-Server">Fusion-360-MCP-Server</a> has around 70 GitHub stars, provides Python execution with full Fusion API access, and is available on the Autodesk app store. <a href="https://github.com/faust-machines/fusion360-mcp-server">faust-machines' version</a> offers 80+ tools for sketching, extrusions, assemblies, and exports. <a href="https://github.com/rahayesj/ClaudeFusion360MCP">ClaudeFusion360MCP</a> focuses specifically on teaching Claude to create models through natural language.</p>
<p>The advantage of this approach is that the output is native Fusion 360 geometry with full feature history. The AI isn't generating an abstract sequence that gets translated later. It's operating inside the real CAD environment, with immediate feedback about what works and what doesn't. The disadvantage is that you need a Fusion 360 license, which makes "open source" a relative term. The bridge code is open. The CAD engine is not.</p>
<p>There's also <a href="https://github.com/clawd-maf/cad-agent">CAD Agent</a>, which takes a different approach: a self-contained rendering server using build123d and MCP that lets AI agents see their CAD work in real time. The geometry is built in Python with build123d (an open-source wrapper around the OpenCascade kernel), and the agent gets visual feedback through rendered PNG images. This is fully open source, no proprietary CAD license required. Of the projects I've tested, this one comes closest to a genuinely open pipeline: open kernel, open protocol, open code.</p>
<h2>What's missing</h2>
<p>The honest summary: there is no open-source tool that does what Zoo.dev does. Zoo generates B-Rep geometry from text using a purpose-built kernel (KittyCAD), with a polished UI, and outputs STEP files. Nothing in the open-source world offers that end-to-end experience.</p>
<p>The open-source world has pieces, not products. Text2CAD proves the research works but isn't production-grade. OpenSCAD plus an LLM is practical but limited by OpenSCAD's language and STL-only output. FreeCAD scripting works for simple cases but requires hand-holding. Fusion 360 MCP bridges produce great output but depend on proprietary software.</p>
<p>What's specifically missing:</p>
<p>A production-grade open-source B-Rep generation model trained on enough data to handle real engineering parts. The Text2CAD model is trained on 178,000 models. Image generation models train on billions of images. The data gap is enormous, and most real CAD data is locked inside companies that aren't sharing it.</p>
<p>A user interface that non-programmers can use. Every open-source option currently requires command-line interaction, Python scripting, or both. That's fine for developers. It's a wall for the mechanical engineer who just wants to type a prompt and get a STEP file.</p>
<p>STEP export from AI-generated geometry without going through a proprietary kernel. OpenCascade, the open-source B-Rep kernel, can do this. Build123d wraps it in Python. But nobody has yet built a clean pipeline from text prompt to STEP file using purely open-source components that produces results competitive with commercial tools.</p>
<h2>Where this is going</h2>
<p>The trajectory is clear even if the timeline isn't. The research is open. The kernels are open (OpenCascade has been around for decades). The LLMs are increasingly open. The MCP protocol creates a standard way to connect language models to CAD tools. The community is building bridges, servers, and wrappers at a pace that didn't exist a year ago.</p>
<p>My guess is that the practical open-source text-to-CAD workflow in the near term won't be a single monolithic tool. It'll be a pipeline: an LLM generating build123d or OpenSCAD code, a rendering server providing visual feedback, an open-source kernel producing B-Rep output, and some form of UI tying it together. The pieces exist. Someone needs to assemble them into something that doesn't require a PhD in patience to use.</p>
<p>For now, if you want open-source text-to-CAD, the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the full tool landscape including these options. OpenSCAD with an LLM is the most usable path today. FreeCAD scripting is the most capable path for anyone willing to debug Python. And the <a href="/posts/text2cad-paper">Text2CAD research code</a> is there for anyone who wants to understand the internals and has a weekend to spend fighting conda environments. I recommend all three, for different reasons, and none of them for production work. Not yet.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD API: what&apos;s available and how to use it</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-api</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-api</guid>
      <pubDate>Sat, 14 Feb 2026 00:00:00 GMT</pubDate>
      <description>If you want to generate CAD models programmatically from text prompts, there&apos;s basically one real API right now. Here&apos;s what it can do.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>api</category>
      <category>developer</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Zoo.dev offers the primary text-to-CAD API, accessible via REST endpoints and a Python SDK (kittycad). It accepts text prompts and returns B-Rep geometry as STEP, glTF, OBJ, or STL files. The API supports batch generation and integration into custom workflows. Free tier available.</p>
<p>I spent a Friday afternoon trying to build a small internal tool that generates bracket variations from a spreadsheet. The idea was simple: read a row, construct a prompt, hit an API, get a STEP file back. The kind of thing you'd wire up in an hour if the API existed and the documentation didn't lie. I had a Python script open, a list of twelve bracket specs in a CSV, and the quiet optimism of someone who hasn't yet tried to find a working text-to-CAD API.</p>
<p>Two hours later, I had one API that actually worked, three that turned out to be mesh generators wearing CAD clothing, and a growing appreciation for just how thin the field is. If you're a developer trying to generate real CAD geometry programmatically from text prompts, the options are fewer than you'd expect and more specific than you'd hope.</p>
<h2>The actual landscape</h2>
<p>As of early 2026, there is essentially one production-ready text-to-CAD API: <a href="https://zoo.dev">Zoo.dev</a>. That's it. There are other tools that generate 3D content from text, plenty of them, but they produce meshes. OBJ files, FBX, triangle soups that look like CAD parts in a viewport and fall apart the moment you try to select a face or measure an edge. If you need B-Rep geometry, the kind of solid you can open in Fusion 360 and actually edit, Zoo is where you end up.</p>
<p>I don't love having a single real option. But the reason there's only one production text-to-CAD API isn't gatekeeping. Generating B-Rep from text is genuinely hard, and most companies working on AI 3D are targeting gaming and entertainment, where meshes are the native format. Engineering CAD is the harder problem with the smaller market.</p>
<p>There are research projects and open-source experiments. The <a href="/posts/text2cad-paper">Text2CAD paper</a> has public code. You can wire up an LLM to generate OpenSCAD scripts. But none of these have a stable, documented API you'd integrate into a production pipeline. They're building blocks, not services.</p>
<h2>Zoo.dev API: what you're actually working with</h2>
<p>The Zoo API is a REST API. The core endpoint is straightforward:</p>
<pre><code>POST /ai/text-to-cad/{output_format}
</code></pre>
<p>You send a JSON body with a <code>prompt</code> field, specify your output format in the URL path, and get back a response with a job ID. The generation is asynchronous. You submit the request, get a UUID, then poll until the status flips from <code>in_progress</code> to <code>completed</code> (or <code>failed</code>, which happens more often than the marketing page implies). Once it's done, you fetch the output, which comes back as base64-encoded file data.</p>
<p>Supported output formats: STEP, glTF, GLB, OBJ, PLY, STL, and FBX. For engineering work, you want STEP. For visualization or quick previews, glTF or STL. The API returns both STEP and glTF by default, which is a sensible choice since you usually want one for CAD and one for web rendering.</p>
<p>Authentication is token-based. You generate an API token from your Zoo account, set it as the <code>ZOO_API_TOKEN</code> environment variable or pass it in the Authorization header, and you're in. No OAuth dance. Just a bearer token.</p>
<p>The status lifecycle goes: <code>queued</code> → <code>in_progress</code> → <code>completed</code> or <code>failed</code>. Typical generation takes 15 to 90 seconds. A simple bracket might come back in 20 seconds. Something with more features takes longer.</p>
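<p>Wired together with nothing but the standard library, the whole submit-poll-save loop is about thirty lines. Fair warning on accuracy: the submit path matches the endpoint above, but the polling path and the response field names (<code>id</code>, <code>status</code>, <code>outputs</code>) are from my memory of the API, so verify them against Zoo's API reference before building on this:</p>
<pre><code>import base64, json, os, time, urllib.request

API = "https://api.zoo.dev"
TOKEN = os.environ.get("ZOO_API_TOKEN", "")

def _call(method: str, path: str, body: dict = None) -> dict:
    req = urllib.request.Request(
        API + path,
        data=json.dumps(body).encode() if body else None,
        method=method,
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def generate(prompt: str, fmt: str = "step") -> dict:
    """Submit a prompt, then poll until the job leaves the queue."""
    job = _call("POST", f"/ai/text-to-cad/{fmt}", {"prompt": prompt})
    while job["status"] in ("queued", "in_progress"):
        time.sleep(5)
        job = _call("GET", f"/user/text-to-cad/{job['id']}")
    return job

def save_outputs(job: dict, directory: str = ".") -> list:
    """Completed jobs carry base64-encoded file data in `outputs`."""
    paths = []
    for name, b64 in job.get("outputs", {}).items():
        path = os.path.join(directory, os.path.basename(name))
        with open(path, "wb") as f:
            f.write(base64.b64decode(b64))
        paths.append(path)
    return paths
</code></pre>
<p>The official <code>kittycad</code> SDK does all of this for you, including the polling loop. This version exists so you can see there's no magic in it.</p>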
<h2>The SDK situation</h2>
<p>Zoo publishes official client libraries for Python, Go, Rust, and TypeScript. The Python SDK (<code>kittycad</code>) is the most mature and the one I've used the most. It wraps the REST endpoints in typed function calls and handles the polling loop for you.</p>
<p>For a hands-on walkthrough of the Python SDK, the <a href="/posts/text-to-cad-api-python">text-to-CAD API Python</a> post covers installation, authentication, and the generate-poll-save workflow.</p>
<p>The TypeScript and Go clients exist and work, but the Python one has the most examples. There's also a CLI (<code>zoo</code>) that wraps the API for command-line use. You can run <code>zoo ml text-to-cad</code> with a prompt and an output format and get a file back without writing any code.</p>
<h2>Pricing and limits</h2>
<p>Zoo gives you $10 of free API usage per month. That translates to roughly 20 minutes of compute time, which doesn't sound like much until you realize each generation takes 15 to 90 seconds. You can generate somewhere around 15 to 50 parts per month on the free tier, depending on complexity. For testing and light use, that's enough. For a batch pipeline that processes a hundred parts a week, you'll need to pay.</p>
<p>Paid usage is $0.0083 per second, which works out to $0.50 per minute. Failed calls aren't charged, which is fair and also necessary given that the model doesn't succeed every time. A Pro subscription ($25/month for individuals, last I checked) gives you unlimited text-to-CAD through the web UI and the API.</p>
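<p>Those figures are easy to sanity-check. Rough arithmetic, using the rates quoted above (verify against Zoo's current pricing page):</p>
<pre><code class="language-python"># Rates as quoted in this post; verify against current pricing.
RATE_PER_SECOND = 0.0083   # USD
FREE_TIER = 10.00          # USD of usage per month

per_minute = RATE_PER_SECOND * 60                # about $0.50
free_minutes = FREE_TIER / RATE_PER_SECOND / 60  # about 20 minutes
</code></pre>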
<p>No published rate limits that I've found, but I've hit implicit throttling when I was hammering the endpoint with a batch of 30 requests in rapid succession. Spacing requests a few seconds apart solved it. If you're building something that needs high throughput, you'd want to talk to Zoo about that.</p>
<h2>What the API can and can't do</h2>
<p>It can generate B-Rep geometry from natural language descriptions of mechanical parts. Brackets, plates, enclosures, housings, standoffs, flanges, simple gears, basic structural elements. The geometry comes out as real solids with selectable faces and edges that import cleanly into Fusion 360, SolidWorks, or any STEP-compatible tool.</p>
<p>It can't generate assemblies. One prompt, one part. If you need a mechanism with multiple interacting components, you're generating each one separately and assembling them yourself.</p>
<p>It can't do organic or freeform surfaces. Ask for something with complex curvature, a car body panel, a consumer electronics shell with flowing surfaces, and the output will either be a crude approximation or a failure.</p>
<p>It doesn't handle tolerances, GD&#x26;T, or material specifications. The output is nominal geometry with no manufacturing metadata. You get shapes, not engineering intent.</p>
<p>It doesn't guarantee dimensional accuracy. I've written about this in the <a href="/posts/is-text-to-cad-accurate">accuracy post</a>, but the short version is: dimensions are usually close but not exact. A 50mm feature might come out as 49.5mm or 50.3mm. For prototyping, fine. For production, you need to verify and correct every dimension. The API doesn't solve this; it's a property of the underlying model.</p>
<h2>Building a batch workflow</h2>
<p>The most useful thing I've done with the API is the bracket generator I mentioned at the start. Here's the shape of it: a CSV with columns for type, length, width, thickness, hole count, and hole diameter. A Python script reads each row, builds a prompt string, calls the API, polls for completion, and saves the resulting STEP file with a name derived from the row data. Forty lines of Python plus the <code>kittycad</code> SDK doing the heavy lifting.</p>
<p>The tricky parts are error handling and prompt construction. The API fails on maybe 10 to 15 percent of prompts in my experience, sometimes with a useful error message ("The prompt must clearly describe a CAD model"), sometimes with a generic failure. Your batch script needs to handle retries gracefully and log failures for manual review. I retry once with a slightly rephrased prompt. If it fails again, I log it and move on. Fighting with the model over one stubborn prompt isn't worth the compute time.</p>
<p>Prompt construction is where domain knowledge matters. "L-bracket with holes" gives you something. "L-bracket, 3mm aluminum, 40mm equal legs, two M4 clearance holes per leg on 25mm spacing, 10mm from edges, 2mm fillet at bend" gives you something much closer to usable. The API doesn't know what you need unless you tell it. Vague in, vague out. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> has more on prompt strategy.</p>
<p>For a complete working tutorial that starts with a raw curl command and builds up to a Python batch script, see the <a href="/posts/zoo-text-to-cad-api-tutorial">Zoo text-to-CAD API tutorial</a>.</p>
<h2>Alternatives and workarounds</h2>
<p>If Zoo's API doesn't fit your needs, the alternatives require more assembly.</p>
<p>You can set up a language model (GPT-4, Claude, local models) to generate OpenSCAD code from text prompts, then run OpenSCAD to render the geometry. This gives you a "text-to-CAD API" that you control, but the output is limited by OpenSCAD's geometry capabilities (CSG only, STL export rather than STEP).</p>
<p>You can do the same thing with FreeCAD's Python API, which gives you proper B-Rep with STEP support, but the failure rate is higher because FreeCAD's API is large and LLMs get the details wrong more often.</p>
<p>MCP servers that connect language models to CAD tools are another emerging option. The geometry quality is excellent when it works, but these are research-grade tools, not production APIs.</p>
<p>For more on the Python-specific options, see the <a href="/posts/text-to-cad-api-python">text-to-CAD API Python</a> walkthrough, which covers both the Zoo SDK and the DIY approaches.</p>
<h2>The developer experience, honestly</h2>
<p>The Zoo API works. The documentation is adequate. The SDK saves time. The pricing is reasonable. Generation quality varies but is mostly usable for simple to moderate parts. The async model is a minor annoyance that you wrap once and forget about.</p>
<p>What's missing is everything around the core API. There's no webhook support for completion notifications, so you're stuck polling. There's no batch endpoint, so generating 50 parts means 50 individual POST requests. There's no way to specify constraints or relationships between features in the prompt format, which means the API is always interpreting rather than following instructions. And there's no feedback mechanism more nuanced than thumbs-up/thumbs-down to help the model learn from your corrections.</p>
<p>I also wish there were more providers. Competition would improve everything: pricing, quality, feature sets, documentation. Having one vendor for the entire category makes me nervous in the same way that depending on a single supplier for a critical component makes me nervous.</p>
<p>For now, the Zoo API is the tool we have. It's real, it works for specific use cases, and the <a href="/posts/kittycad-python-sdk">KittyCAD Python SDK</a> makes integration straightforward. Whether it's the tool you need depends on what you're building and how much imperfection you can tolerate in the output. The API is an accelerator, not a replacement, and the developers who treat it that way seem to be the ones getting actual value from it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD API with Python: a developer walkthrough</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-api-python</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-api-python</guid>
      <pubDate>Thu, 12 Feb 2026 00:00:00 GMT</pubDate>
      <description>A practical walkthrough of calling text-to-CAD APIs from Python, including the Zoo/KittyCAD SDK, handling responses, and saving STEP files.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>python</category>
      <category>api</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> To use text-to-CAD from Python: install the kittycad package, authenticate with an API token, call the text-to-CAD endpoint with your prompt, poll for completion, and save the resulting STEP file. The SDK handles auth, polling, and file format conversion.</p>
<p>Last week I had a conversation with a coworker who's mostly a Python person, embedded systems and data pipelines, not much CAD. He'd seen me use Zoo's text-to-CAD through the browser and wanted to know if he could call it from a script. "I just want to POST a description and get a file back," he said, like it was the most reasonable thing in the world. And it is, in principle. In practice, there are about six decisions between <code>pip install</code> and a working STEP file on disk, and most of them aren't obvious from the documentation.</p>
<p>This is the walkthrough I wrote for him, cleaned up for people who are comfortable with Python but haven't necessarily spent time in CAD tooling. If you already know your way around the <a href="/posts/text-to-cad-api">text-to-CAD API</a>, this post is about the Python-specific details: the SDK, the code patterns, the places where things break, and the workarounds I've settled on.</p>
<h2>Setup</h2>
<p>Install the SDK:</p>
<pre><code class="language-bash">pip install kittycad
</code></pre>
<p>The package is called <code>kittycad</code> for historical reasons (Zoo.dev used to be KittyCAD, and the Python package name stuck). The current version at the time of writing is 1.3.5. It pulls in <code>httpx</code> and <code>pydantic</code> as dependencies, which is standard for a modern Python API client.</p>
<p>You need an API token from <a href="https://zoo.dev">zoo.dev</a>. Sign up, go to account settings, generate a token. Set it as an environment variable:</p>
<pre><code class="language-bash">export ZOO_API_TOKEN=your-token-here
</code></pre>
<p>The SDK reads this automatically when you initialize the client. You can also pass the token directly, but the environment variable approach is cleaner for scripts that might end up in version control. Nobody needs to see your token in a git diff at 2 AM.</p>
<h2>The basic workflow</h2>
<p>The pattern is: create client, submit prompt, poll for status, save the file. Here's the minimal version that actually works:</p>
<pre><code class="language-python">import time
import base64
from kittycad.client import ClientFromEnv
from kittycad.api.ml import create_text_to_cad, get_text_to_cad_part_for_user
from kittycad.models import TextToCadCreateBody, ApiCallStatus

client = ClientFromEnv()

result = create_text_to_cad.sync(
    client=client,
    output_format="step",
    body=TextToCadCreateBody(
        prompt="L-bracket, 3mm thick, 40mm legs, two 5mm holes per leg"
    ),
)

request_id = result.id

while True:
    response = get_text_to_cad_part_for_user.sync(
        client=client,
        id=request_id,
    )
    if response.status in (ApiCallStatus.COMPLETED, ApiCallStatus.FAILED):
        break
    time.sleep(5)

if response.status == ApiCallStatus.COMPLETED:
    for name, content in response.outputs.items():
        if name.endswith(".step"):
            with open("bracket.step", "w") as f:
                f.write(base64.b64decode(content).decode("utf-8"))
            print("Saved bracket.step")
else:
    print(f"Generation failed: {response.error}")
</code></pre>
<p>That's about 25 lines of actual code, and most of it is the polling loop. Let me walk through the parts that matter and the parts that trip people up.</p>
<h2>Client initialization</h2>
<pre><code class="language-python">from kittycad.client import ClientFromEnv

client = ClientFromEnv()
</code></pre>
<p><code>ClientFromEnv()</code> looks for <code>ZOO_API_TOKEN</code> in your environment. If it's not there, you get an error at request time, not at initialization, which is mildly annoying because you won't know your token is missing until you've already set up everything else. A quick sanity check after creating the client saves debugging time.</p>
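<p>That check can be as simple as validating the environment variable yourself before touching the SDK; <code>require_token</code> is my own helper name, not part of kittycad:</p>
<pre><code class="language-python">import os

def require_token(var="ZOO_API_TOKEN"):
    # Fail fast: ClientFromEnv() won't complain about a missing token
    # until the first request, which wastes a debugging cycle.
    token = os.environ.get(var)
    if not token:
        raise RuntimeError(f"{var} is not set; generate a token at zoo.dev")
    return token
</code></pre>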
<p>You can also create a client with an explicit token:</p>
<pre><code class="language-python">from kittycad.client import ClientFromToken

client = ClientFromToken(token="your-token-here")
</code></pre>
<p>I use <code>ClientFromEnv</code> for scripts and <code>ClientFromToken</code> for quick interactive testing in a notebook, where I'll paste in a token that I don't want hardcoded anywhere.</p>
<h2>Submitting the prompt</h2>
<pre><code class="language-python">result = create_text_to_cad.sync(
    client=client,
    output_format="step",
    body=TextToCadCreateBody(
        prompt="L-bracket, 3mm thick, 40mm legs, two 5mm holes per leg"
    ),
)
</code></pre>
<p>The <code>output_format</code> parameter goes in the URL path (<code>/ai/text-to-cad/step</code>). The prompt goes in the JSON body. The API returns STEP and glTF regardless of what you specify; the <code>output_format</code> parameter tells it which additional format to include if you want something else.</p>
<p>The <code>.sync()</code> method blocks until the HTTP request completes (not until the generation completes). You get back an object with an <code>id</code> (UUID), a <code>status</code> (usually <code>queued</code> at this point), and the prompt echoed back. The actual geometry generation happens server-side in the background.</p>
<p>There's also an <code>.asyncio()</code> variant if you're using <code>async</code>/<code>await</code>:</p>
<pre><code class="language-python">result = await create_text_to_cad.asyncio(
    client=client,
    output_format="step",
    body=TextToCadCreateBody(
        prompt="L-bracket, 3mm thick, 40mm legs, two 5mm holes per leg"
    ),
)
</code></pre>
<p>I've used the async version when generating multiple parts concurrently. It's the same API underneath, just wrapped in <code>httpx.AsyncClient</code> instead of the sync one.</p>
<h2>The polling loop</h2>
<p>This is the least elegant part of the whole workflow, and there's no getting around it. The API is asynchronous. You submit a request, get a job ID, and then keep asking "is it done yet?" until it is. The SDK doesn't have a built-in wait-for-completion helper, which seems like an oversight but probably reflects the fact that different use cases want different polling strategies.</p>
<pre><code class="language-python">while True:
    response = get_text_to_cad_part_for_user.sync(
        client=client,
        id=request_id,
    )
    if response.status in (ApiCallStatus.COMPLETED, ApiCallStatus.FAILED):
        break
    time.sleep(5)
</code></pre>
<p>Five seconds between polls is fine for most cases. Generation typically takes 15 to 90 seconds. If you're generating something simple, you'll poll three or four times. If it's complex, maybe fifteen times. I've experimented with adaptive polling (shorter intervals for the first 30 seconds, longer ones after that), but the improvement is marginal and the code is uglier.</p>
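<p>If you want to try the adaptive approach anyway, the whole idea fits in a small generator. This is a sketch of the shape, not battle-tested code:</p>
<pre><code class="language-python">def poll_intervals(fast=2, slow=8, fast_window=30):
    # Yield short sleeps early (simple parts usually finish fast),
    # then back off to longer sleeps for the slow tail.
    elapsed = 0
    while True:
        interval = slow if elapsed >= fast_window else fast
        yield interval
        elapsed += interval
</code></pre>
<p>Drop it into the loop as <code>for interval in poll_intervals():</code>, sleep for <code>interval</code> each pass, and break on a terminal status.</p>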
<p>The status values you'll see: <code>queued</code>, <code>uploaded</code>, <code>in_progress</code>, <code>completed</code>, <code>failed</code>. In practice, most requests go <code>queued</code> → <code>in_progress</code> → <code>completed</code> or <code>failed</code>. The <code>uploaded</code> state is transient and I've only seen it in logs, never caught it in a polling loop.</p>
<h2>Handling the output</h2>
<p>When the generation succeeds, <code>response.outputs</code> is a dictionary where keys are filenames (like <code>output.step</code>, <code>output.gltf</code>) and values are base64-encoded file content. This is the part where people get confused, because you need to decode the base64 before saving:</p>
<pre><code class="language-python">for name, content in response.outputs.items():
    if name.endswith(".step"):
        step_data = base64.b64decode(content).decode("utf-8")
        with open("my_part.step", "w") as f:
            f.write(step_data)
</code></pre>
<p>STEP files are plain text, so decoding to UTF-8 works. For binary formats like STL or GLB, you'd write bytes instead:</p>
<pre><code class="language-python">for name, content in response.outputs.items():
    if name.endswith(".stl"):
        stl_data = base64.b64decode(content)
        with open("my_part.stl", "wb") as f:
            f.write(stl_data)
</code></pre>
<p>The filenames in the output dictionary aren't always predictable. I've seen <code>output.step</code>, <code>output.gltf</code>, and variations. Matching on the file extension is safer than matching on the exact filename.</p>
<h2>Error handling that actually matters</h2>
<p>About 10 to 15 percent of my requests fail, based on a few hundred generations over the past few months. The failure modes break down like this:</p>
<p>The prompt is too vague. You get a 400 error or a <code>failed</code> status with a message like "The prompt must clearly describe a CAD model." This is the most common failure and the easiest to fix. Be more specific. Include dimensions. Describe the shape in terms of features (holes, fillets, extrusions) rather than functions ("something to hold a sensor").</p>
<p>The geometry is too complex. The model tries and fails. You get a <code>failed</code> status, sometimes with a useful error, sometimes with a generic failure message. Simplify the prompt or break the part into simpler components.</p>
<p>Transient failures. Server-side issues, timeouts, bad luck. These are rare but real. A single retry with a short delay usually works.</p>
<p>Here's a pattern I use for production scripts:</p>
<pre><code class="language-python">import time
import base64
from kittycad.client import ClientFromEnv
from kittycad.api.ml import create_text_to_cad, get_text_to_cad_part_for_user
from kittycad.models import TextToCadCreateBody, ApiCallStatus

client = ClientFromEnv()

def generate_step(prompt, output_path, max_retries=2):
    for attempt in range(max_retries):
        try:
            result = create_text_to_cad.sync(
                client=client,
                output_format="step",
                body=TextToCadCreateBody(prompt=prompt),
            )
        except Exception as e:
            print(f"  Request failed: {e}")
            if attempt &#x3C; max_retries - 1:
                time.sleep(10)
                continue
            return False

        for _ in range(60):
            response = get_text_to_cad_part_for_user.sync(
                client=client,
                id=result.id,
            )
            if response.status in (
                ApiCallStatus.COMPLETED,
                ApiCallStatus.FAILED,
            ):
                break
            time.sleep(5)

        if response.status == ApiCallStatus.COMPLETED:
            for name, content in response.outputs.items():
                if name.endswith(".step"):
                    step_data = base64.b64decode(content).decode("utf-8")
                    with open(output_path, "w") as f:
                        f.write(step_data)
                    return True

        print(f"  Attempt {attempt + 1} failed: {response.error}")
        time.sleep(5)

    return False
</code></pre>
<p>That's the function I call from batch scripts. It retries once, handles both HTTP errors and generation failures, and returns a boolean so the calling code knows whether to continue or log the failure. Nothing fancy, but it catches the cases that a naive single-attempt script misses.</p>
<h2>Batch generation</h2>
<p>Generating multiple parts from a list is the workflow that made me write this whole thing. Here's the shape of it:</p>
<pre><code class="language-python">import csv

parts = []
with open("parts.csv") as f:
    reader = csv.DictReader(f)
    for row in reader:
        parts.append(row)

for part in parts:
    prompt = (
        f"{part['type']}, {part['length']}mm by {part['width']}mm, "
        f"{part['thickness']}mm thick, {part['holes']} holes "
        f"of {part['hole_diameter']}mm diameter"
    )
    output_file = f"output/{part['name']}.step"
    print(f"Generating {part['name']}...")

    success = generate_step(prompt, output_file)
    if success:
        print(f"  Saved {output_file}")
    else:
        print(f"  FAILED: {part['name']}")
</code></pre>
<p>A few things I learned running this kind of batch:</p>
<p>Space your requests. Hitting the API 30 times in rapid succession gets you throttled. A natural delay from the polling loop usually provides enough spacing, but if you're using the async variant to fire requests concurrently, limit concurrency to maybe 3 to 5 at a time.</p>
<p>Log everything. Prompt, request ID, status, error message, filename. When request 23 out of 50 fails and you're trying to figure out why, you want the receipt.</p>
<p>Save the prompt alongside the STEP file. I write a small JSON sidecar for each generated file with the prompt, the request ID, and the timestamp. When I open a STEP file three weeks later and can't remember what I asked for, the sidecar tells me.</p>
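<p>The sidecar writer is only a few lines; the field names are my own convention, not any standard:</p>
<pre><code class="language-python">import json
import time
from pathlib import Path

def write_sidecar(step_path, prompt, request_id):
    # Record provenance next to each generated file: which prompt made it,
    # which API job, and when.
    meta = {
        "prompt": prompt,
        "request_id": request_id,
        "generated_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    sidecar = Path(step_path).with_suffix(".json")
    sidecar.write_text(json.dumps(meta, indent=2))
    return sidecar
</code></pre>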
<h2>The async approach for concurrent generation</h2>
<p>If you want to generate several parts at once instead of one at a time:</p>
<pre><code class="language-python">import asyncio
import base64

from kittycad.client import ClientFromEnv
from kittycad.api.ml import create_text_to_cad, get_text_to_cad_part_for_user
from kittycad.models import TextToCadCreateBody, ApiCallStatus

client = ClientFromEnv()

async def generate_async(prompt, output_path):
    result = await create_text_to_cad.asyncio(
        client=client,
        output_format="step",
        body=TextToCadCreateBody(prompt=prompt),
    )
    while True:
        response = await get_text_to_cad_part_for_user.asyncio(
            client=client,
            id=result.id,
        )
        if response.status in (ApiCallStatus.COMPLETED, ApiCallStatus.FAILED):
            break
        await asyncio.sleep(5)

    if response.status == ApiCallStatus.COMPLETED:
        for name, content in response.outputs.items():
            if name.endswith(".step"):
                with open(output_path, "w") as f:
                    f.write(base64.b64decode(content).decode("utf-8"))
                return True
    return False

async def main():
    prompts = [
        ("Flat plate 80x50x3mm with four M4 holes", "plate.step"),
        ("Cylindrical standoff OD 20mm ID 10mm height 15mm", "standoff.step"),
        ("U-bracket 50mm wide 30mm tall 3mm thick", "bracket.step"),
    ]

    semaphore = asyncio.Semaphore(3)

    async def limited(prompt, path):
        async with semaphore:
            return await generate_async(prompt, path)

    results = await asyncio.gather(
        *[limited(p, path) for p, path in prompts]
    )
    print(f"Generated {sum(results)} of {len(prompts)} parts")

asyncio.run(main())
</code></pre>
<p>The semaphore limits concurrency to three simultaneous requests. You could go higher, but I haven't tested what the API tolerates before it starts returning errors. Three works reliably and still gives you a meaningful speedup over sequential generation.</p>
<h2>DIY alternative: LLM + OpenSCAD</h2>
<p>If you don't want to depend on Zoo's API, you can build a text-to-CAD pipeline entirely in Python using a language model and OpenSCAD. The idea is: send your part description to GPT-4 or Claude, ask it to generate an OpenSCAD script, run <code>openscad</code> as a subprocess to render the geometry, and save the output.</p>
<pre><code class="language-python">import subprocess
import openai

def text_to_scad(prompt):
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Generate OpenSCAD code for the following part. Output only valid OpenSCAD code, no explanation."},
            {"role": "user", "content": prompt},
        ],
    )
    code = response.choices[0].message.content.strip()
    # Models often wrap output in markdown fences despite the instructions
    if code.startswith("```"):
        code = code.strip("`\n").removeprefix("openscad").lstrip()
    return code

def render_stl(scad_code, output_path):
    scad_file = output_path.replace(".stl", ".scad")
    with open(scad_file, "w") as f:
        f.write(scad_code)
    subprocess.run(
        ["openscad", "-o", output_path, scad_file],
        check=True,
        capture_output=True,
    )
</code></pre>
<p>This works for simple parts and gives you full control over the LLM, the prompt engineering, and the output pipeline. The downsides are real though: OpenSCAD outputs STL, not STEP. The geometry is CSG, not B-Rep. The LLM generates broken scripts more often than the Zoo API generates broken geometry. And you're paying for LLM API calls plus maintaining the pipeline yourself.</p>
<p>For serious work, I use Zoo's API. For experiments and one-off hacks, the LLM-to-OpenSCAD pipeline is fun and surprisingly capable within its limits. The <a href="/posts/text-to-cad-api">text-to-CAD API</a> overview covers how these approaches compare.</p>
<h2>What I actually use this for</h2>
<p>My current setup is a small Python script that reads part descriptions from a YAML file, generates STEP files via Zoo's API, and drops them in a folder that syncs to my Fusion 360 projects. It runs as a cron job twice a day, picking up any new entries I've added to the YAML file. The whole thing is about 80 lines of Python, and it saves me maybe 15 to 20 minutes per part of manual Fusion 360 modeling for the simple brackets and plates that make up most of my fixture work.</p>
<p>It's not magic. Every generated STEP file still gets opened, measured, and usually edited before I use it. But starting from a generated solid instead of a blank sketch is consistently faster for the kinds of parts where text-to-CAD does well. And the fact that I can define those parts in a text file and generate them programmatically means the whole workflow lives in version control, which makes the project manager in me unreasonably happy.</p>
<p>For the full Zoo-specific tutorial with curl examples and step-by-step progression from first request to production script, see the <a href="/posts/zoo-text-to-cad-api-tutorial">Zoo text-to-CAD API tutorial</a>. For more on the <a href="/posts/kittycad-python-sdk">KittyCAD Python SDK</a>, Zoo's Python documentation is decent and improving.</p>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the broader tool landscape if you're still deciding which approach fits your workflow.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DeepCAD dataset: the training data behind text-to-CAD</title>
      <link>https://blog.texocad.ai/posts/deepcad-dataset</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/deepcad-dataset</guid>
      <pubDate>Wed, 11 Feb 2026 00:00:00 GMT</pubDate>
      <description>Most text-to-CAD models learn from the DeepCAD dataset: about 178,000 parametric CAD models. That&apos;s not a lot. Here&apos;s why that matters.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>dataset</category>
      <category>research</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> The DeepCAD dataset contains approximately 178,000 parametric CAD models represented as sequences of sketch-and-extrude operations, with ~660,000 text annotations added by the Text2CAD project. It&apos;s the primary training dataset for text-to-CAD research, but its limited size and geometric simplicity constrain what current models can generate.</p>
<p>Somewhere in the basement of every text-to-CAD demo is a training dataset, and most of the time it's DeepCAD. I first ran into it while tracing back the claims in a vendor's whitepaper. They kept talking about "trained on hundreds of thousands of parametric models." The number sounded impressive. Then I downloaded the actual dataset, opened a few samples in a viewer, and spent ten minutes looking at what was essentially a collection of geometry that a first-semester CAD student could build during a lunch break. Cylinders, boxes, plates with holes, simple extrusions on simple sketches. Valid parametric models, technically. Also the kind of thing I'd model in two minutes on a slow day.</p>
<p>That's not a complaint about the researchers who built it. Given what was available, DeepCAD was a genuine achievement. But understanding what's in this dataset, and more importantly what isn't, tells you a lot about why text-to-CAD tools behave the way they do.</p>
<h2>What DeepCAD actually is</h2>
<p>DeepCAD was introduced in a 2021 ICCV paper by Rundi Wu, Chang Xiao, and Changxi Zheng from Columbia University. The full name of the paper is "DeepCAD: A Deep Generative Network for Computer-Aided Design Models." The dataset was a byproduct of building a generative model for CAD, and it ended up becoming the most widely used training set in the field.</p>
<p>The dataset contains approximately 178,000 parametric CAD models sourced from ABC, a large-scale collection of CAD models from Onshape's public repository. The original ABC dataset has over a million models, but DeepCAD filtered it down to models that could be represented as sequences of sketch-and-extrude operations. That filtering is important. It means DeepCAD only includes models that were built by sketching a 2D profile and extruding it, possibly multiple times, to create a 3D solid. No sweeps. No lofts. No revolves. No sheet metal. No surfacing.</p>
<p>Each model in the dataset is stored not as a mesh or a B-Rep solid, but as a sequence of CAD commands: create a sketch on a plane, draw line segments, arcs, and circles to define a profile, extrude the profile by some distance. This command-sequence representation is what makes the dataset useful for training AI models. The model doesn't learn what a part looks like. It learns <a href="/posts/how-text-to-cad-works">how to build a part</a>, step by step, the way a CAD timeline records it.</p>
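<p>To make the representation concrete, here's an illustrative command sequence for the kind of part DeepCAD contains. This is the shape of the idea, not DeepCAD's actual serialization format:</p>
<pre><code class="language-python"># Illustrative only: DeepCAD's real schema differs, but each model is,
# in spirit, an ordered command list like this rather than a mesh.
plate_with_hole = [
    {"cmd": "sketch", "plane": "XY"},
    {"cmd": "rectangle", "width": 40.0, "height": 25.0},
    {"cmd": "extrude", "distance": 15.0, "op": "new_body"},
    {"cmd": "sketch", "plane": "top_face"},
    {"cmd": "circle", "diameter": 6.0},
    {"cmd": "extrude", "distance": 15.0, "op": "cut"},
]
</code></pre>
<p>A generative model trained on sequences like this predicts the next command given everything so far, which is why its output replays cleanly as a CAD timeline.</p>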
<h2>Size matters, and 178,000 is small</h2>
<p>I keep hearing people describe 178,000 models as "large-scale." In the CAD research world, it is. In the broader AI world, it's tiny.</p>
<p>For reference: Stable Diffusion was trained on about 2 billion image-text pairs. GPT-3 was trained on hundreds of billions of tokens. Even in specialized domains, datasets tend to be in the millions. DeepCAD has 178,000 models, each represented as a sequence averaging maybe 60-80 CAD operation tokens. The total amount of training data, measured in the way AI researchers measure it, is minuscule.</p>
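<p>That gap is worth quantifying. A back-of-envelope comparison using the figures above (the GPT-3 number is an order-of-magnitude placeholder, not an exact count):</p>
<pre><code class="language-python"># Back-of-envelope scale comparison from the figures above.
models = 178_000
tokens_per_model = 70          # midpoint of the 60-80 estimate
deepcad_tokens = models * tokens_per_model   # about 12.5 million

gpt3_tokens = 300e9            # "hundreds of billions", order of magnitude
ratio = gpt3_tokens / deepcad_tokens         # roughly 24,000x
</code></pre>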
<p>This matters because the diversity of the dataset directly constrains what a trained model can produce. If the training data is 178,000 simple prismatic parts, the model will generate simple prismatic parts. It won't spontaneously learn to create a gear, a turbine blade, or a complex housing with snap-fit features, because it never saw one. The training set is the ceiling.</p>
<p>CAD data is scarce for a reason. Most real CAD models are proprietary. Companies don't publish their part files. The models that do end up in public repositories like Onshape or GrabCAD tend to be simpler than what lives on corporate servers. The really interesting geometry, the assemblies with hundreds of parts, the injection-molded housings with draft angles and rib patterns, the sheet metal enclosures that fold flat, none of that is in DeepCAD. It can't be, because nobody shared it.</p>
<h2>The Text2CAD annotation layer</h2>
<p>The original DeepCAD dataset had geometry but no text. The models came with their CAD command sequences but no natural language descriptions. You couldn't train a text-to-CAD model on it because there was nothing connecting words to shapes.</p>
<p>The <a href="/posts/text2cad-paper">Text2CAD paper</a> fixed this by annotating the dataset with approximately 660,000 text descriptions generated using Mistral and LLaVA-NeXT. Each model got multiple descriptions at different skill levels: beginner ("a box with a hole"), intermediate ("a rectangular block with a through-hole centered on the top face"), and expert ("sketch a 40mm by 25mm rectangle on the XY plane, extrude 15mm, then sketch a 6mm circle centered on the top face and cut-extrude through all").</p>
<p>The multi-level annotation was a smart decision. Real users describe parts at wildly different levels of specificity. A hobbyist says "a bracket." A mechanical engineer says "an L-bracket, 3mm 6061 aluminum, 40mm legs, two M4 clearance holes per leg on a 25mm pitch." The model needs to handle both, and the annotation pipeline gave it examples of each.</p>
<p>But the annotations are only as good as the models they describe. Annotating a simple cylinder with a beginner description and an expert description gives you two ways to say "cylinder." It doesn't give you a way to generate a cam, a spring clip, or a dovetail joint. The bottleneck isn't the text. It's the geometry.</p>
<h2>What the models look like</h2>
<p>I went through a random sample of about fifty DeepCAD models. Here's what I found.</p>
<p>Most are simple extrusions: a sketch profile extruded once or twice to create a 3D shape. A few are more complex, with multiple sketch planes and boolean operations (cutting one extrusion from another). The sketch profiles are made of lines, arcs, and circles. No splines. The geometry is clean but elementary.</p>
<p>Typical examples: a rectangular plate with four corner holes. A cylinder with a bore. A step block. A T-shaped bracket. An L-shaped bracket. A flanged plate. A plate with a centered rectangular pocket. These are the building blocks of mechanical design, and they're perfectly valid parts. They're also the kind of parts that take about three minutes to model by hand in any CAD tool.</p>
<p>What you won't find: assemblies, parts with complex internal geometry, freeform surfaces, thin-walled injection-molded parts, sheet metal with bend reliefs, gears, cams, threaded features, helical geometry, or anything that requires operations beyond sketch-and-extrude. The dataset defines the vocabulary, and the vocabulary is deliberately limited.</p>
<h2>Why this shapes every tool you use</h2>
<p>When a text-to-CAD tool handles your "rectangular bracket with mounting holes" prompt beautifully and then falls apart on "helical gear with 20-degree pressure angle," the DeepCAD dataset is a big part of the reason. The model learned from simple parts. It generates simple parts. The training data is the boundary.</p>
<p>Commercial tools like Zoo.dev likely train on additional proprietary data beyond DeepCAD, and they have their own geometric kernels that may handle more complex operations. But the foundational research, the architecture, the proof of concept, that all came from training on DeepCAD. The field's understanding of what works and what doesn't was shaped by this dataset's contents.</p>
<p>This also explains the dimensional accuracy problem. The DeepCAD models have specific dimensions, but the text annotations describe them approximately. When you train a model on "a box about 40mm long" paired with a box that's exactly 41.3mm, the model learns to approximate. It doesn't learn to be precise, because precision wasn't reliably encoded in the training signal.</p>
<h2>The <a href="/posts/cad-dataset-for-ai">CAD data</a> problem</h2>
<p>DeepCAD is the most-used dataset in text-to-CAD research because there isn't much else. Other public CAD datasets exist, ABC has over a million models, Fusion 360 Gallery has about 20,000, but none of them combine the command-sequence representation with the scale that researchers need. And none of them have text annotations at the scale Text2CAD provided.</p>
<p>Building a better dataset is the obvious next step and also the hardest one. You need parametric CAD models stored as editable command sequences (not just meshes or B-Rep solids), covering a wide range of real engineering geometry, with accurate text descriptions at multiple levels of detail. Getting that data means either generating it synthetically (which risks the model learning to generate synthetic-looking parts), convincing companies to share proprietary models (good luck), or building an annotation pipeline that works on more complex geometry.</p>
<p>Until that dataset exists, text-to-CAD models will keep bumping into the same ceiling. They'll get better at generating the kinds of parts DeepCAD contains. They won't suddenly learn to generate the kinds of parts it doesn't.</p>
<h2>The honest assessment</h2>
<p>DeepCAD did exactly what it needed to do: it proved that representing CAD models as learnable sequences was viable and gave the research community a common training set. The <a href="/posts/text2cad-paper">Text2CAD paper</a> added the language bridge. Together, they made text-to-CAD research possible.</p>
<p>But treating 178,000 simple models as sufficient for production text-to-CAD is like training a writing assistant on nothing but grocery lists and expecting it to draft contracts. The format is similar. The complexity is not. Every limitation I've hit with text-to-CAD tools, the narrow geometry range, the approximate dimensions, the inability to handle real engineering features, traces back, at least in part, to a training dataset that contains the CAD equivalent of "hello world" programs. The tools will get better when the data does. So far, the data hasn't.</p>
]]></content:encoded>
    </item>
    <item>
      <title>KittyCAD Python SDK: getting started</title>
      <link>https://blog.texocad.ai/posts/kittycad-python-sdk</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/kittycad-python-sdk</guid>
      <pubDate>Wed, 11 Feb 2026 00:00:00 GMT</pubDate>
      <description>The KittyCAD Python SDK is how you talk to Zoo.dev&apos;s text-to-CAD API from code. Here&apos;s how to set it up and what to watch out for.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>python</category>
      <category>sdk</category>
      <category>zoo</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> The KittyCAD Python SDK (pip install kittycad) provides typed Python bindings for Zoo.dev&apos;s CAD APIs including text-to-CAD generation, file conversion, and geometry operations. It handles authentication, async polling, and multi-format output (STEP, STL, glTF, OBJ).</p>
<p>I spent a Friday afternoon trying to generate a STEP file from a Python script instead of the Zoo.dev browser UI. Not because the browser is bad. Because I had fourteen bracket variants to generate, and clicking through the same text box fourteen times felt like punishment for a crime I hadn't committed. The KittyCAD Python SDK turned that into a for loop and a cup of coffee. Setting it up took longer than I expected, mostly because I missed one line in the docs and spent twenty minutes convinced the API was broken. It wasn't. I was.</p>
<p>This is the getting-started walkthrough I wished existed that Friday. How to install the SDK, authenticate, generate geometry, handle the async polling, and actually get a file on disk that you can open in Fusion 360 without cursing.</p>
<h2>What the SDK is</h2>
<p>The KittyCAD Python SDK is the official Python client for Zoo.dev's API. Zoo, for those arriving fresh, is the company behind the text-to-CAD service that generates B-Rep geometry from text prompts. The browser UI at <a href="https://zoo.dev">zoo.dev</a> is the friendly version. The API is the version for people who want to script things, automate batches, or integrate text-to-CAD into a larger pipeline.</p>
<p>The SDK wraps Zoo's REST API in typed Python classes. You get methods for text-to-CAD generation, file format conversion, geometry operations, and account management. It handles authentication headers, request formatting, response parsing, and the async polling loop that's necessary because generating CAD geometry takes a few seconds and the API doesn't block.</p>
<p>If you've used any well-structured Python API client, the shape is familiar. If you haven't, it's still not complicated. It's just a library that talks to a server and gives you files back.</p>
<h2>Installation</h2>
<pre><code class="language-bash">pip install kittycad
</code></pre>
<p>That's it. The package is on PyPI. It pulls in <code>httpx</code> for HTTP requests and <code>pydantic</code> for data validation. Python 3.8 or newer.</p>
<p>I'd recommend installing it in a virtual environment because I recommend installing everything in a virtual environment, but I'm not going to lecture you about dependency management. You've heard the speech.</p>
<pre><code class="language-bash">python -m venv venv
source venv/bin/activate
pip install kittycad
</code></pre>
<p>On Windows, replace <code>source venv/bin/activate</code> with <code>venv\Scripts\activate</code>. You know this. I'm writing it down because I once forgot it on a client's machine and pretended I was testing something.</p>
<h2>Authentication</h2>
<p>You need a Zoo API token. Get one from <a href="https://zoo.dev/account/api-tokens">zoo.dev/account/api-tokens</a>. The free tier gives you a limited number of API calls per month, which is enough for development and testing. Paid tiers give you more, obviously.</p>
<p>Set the token as an environment variable:</p>
<pre><code class="language-bash">export KITTYCAD_API_TOKEN="your-token-here"
</code></pre>
<p>The SDK picks this up automatically. You can also pass it explicitly when creating the client, but the environment variable approach keeps your token out of your script, which is where it should be. I've seen tokens committed to public repos more times than I'd like. Don't be that person.</p>
<pre><code class="language-python">from kittycad.client import ClientFromEnv

client = ClientFromEnv()
</code></pre>
<p>That's your authenticated client. If the environment variable isn't set, this throws an error that's clear enough to tell you what's wrong. If the token is invalid, you'll find out when you make your first API call, which is slightly less helpful but still obvious.</p>
<h2>Generating a part from text</h2>
<p>Here's the core workflow. You send a text prompt to the <a href="/posts/text-to-cad-api">text-to-CAD API</a> and get geometry back.</p>
<pre><code class="language-python">from kittycad.api.ml import create_text_to_cad, get_text_to_cad_model_for_user
from kittycad.models import (
    FileExportFormat,
    TextToCad,
    TextToCadCreateBody,
    ApiCallStatus,
)
from kittycad.client import ClientFromEnv
import time

client = ClientFromEnv()

result: TextToCad = create_text_to_cad.sync(
    client=client,
    output_format=FileExportFormat.STEP,
    body=TextToCadCreateBody(
        prompt="A rectangular mounting plate, 80mm by 50mm by 4mm, with four M4 clearance holes on a 60mm by 30mm bolt pattern centered on the plate",
    ),
)

# Poll the generation by its id. Re-calling create_text_to_cad here would
# queue a brand-new generation instead of checking on this one.
while result.status in [ApiCallStatus.QUEUED, ApiCallStatus.IN_PROGRESS]:
    time.sleep(2)
    result = get_text_to_cad_model_for_user.sync(client=client, id=result.id)

if result.status == ApiCallStatus.COMPLETED:
    for name, file_data in result.outputs.items():
        with open(name, "wb") as f:
            f.write(file_data.get_decoded())
        print(f"Saved: {name}")
else:
    print(f"Generation failed: {result.status}")
</code></pre>
<p>A few things to notice.</p>
<p>The <code>output_format</code> parameter sets what you get back. <code>FileExportFormat.STEP</code> is what you want for engineering work. Other options include <code>STL</code>, <code>OBJ</code>, and <code>GLTF</code>. I almost always use STEP. The others are for visualization or 3D printing, and if you need the differences explained, the <a href="/posts/text-to-cad-file-formats">text-to-CAD file formats</a> post covers that in detail.</p>
<p>The polling loop is necessary. Text-to-CAD generation isn't instant. The API accepts your request, queues it, processes it, and returns a result. That takes anywhere from a few seconds to maybe thirty seconds depending on complexity and server load. The SDK doesn't hide this from you, which I actually appreciate. Some SDKs wrap async operations in blocking calls that feel simple but make error handling a nightmare. Here you can see exactly what's happening and add your own timeout logic if you want.</p>
<p>The <code>result.outputs</code> dictionary contains the generated files. For STEP output, you'll typically get one file. The key is the filename, the value is the encoded file data. Call <code>.get_decoded()</code> to get the raw bytes, write them to disk, and you're done. That STEP file opens in Fusion 360, SolidWorks, FreeCAD, or any other CAD tool that reads STEP, which is all of them.</p>
<h2>Async version</h2>
<p>If you're writing async Python, which you probably are if you're building anything web-facing or dealing with multiple concurrent requests, there's an async variant:</p>
<pre><code class="language-python">import asyncio
from kittycad.api.ml import create_text_to_cad, get_text_to_cad_model_for_user
from kittycad.models import FileExportFormat, ApiCallStatus, TextToCadCreateBody
from kittycad.client import ClientFromEnv

async def generate_part(prompt: str, filename: str):
    client = ClientFromEnv()

    result = await create_text_to_cad.asyncio(
        client=client,
        output_format=FileExportFormat.STEP,
        body=TextToCadCreateBody(prompt=prompt),
    )

    # Poll by id, same as the sync version, but without blocking the event loop.
    while result.status in [ApiCallStatus.QUEUED, ApiCallStatus.IN_PROGRESS]:
        await asyncio.sleep(2)
        result = await get_text_to_cad_model_for_user.asyncio(
            client=client,
            id=result.id,
        )

    if result.status == ApiCallStatus.COMPLETED:
        for name, file_data in result.outputs.items():
            with open(filename, "wb") as f:
                f.write(file_data.get_decoded())
    else:
        print(f"Generation failed: {result.status}")

asyncio.run(generate_part(
    "L-bracket, 3mm thick, 40mm legs, two 5mm holes per leg",
    "bracket.step"
))
</code></pre>
<p>This is where the batch generation story gets good. You can fire off multiple <code>generate_part</code> calls concurrently with <code>asyncio.gather()</code> and let them all poll in parallel. My fourteen-bracket Friday became a lot more pleasant once I realized I could kick all fourteen off at once and go make lunch.</p>
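<p>In case it helps to see the fan-out spelled out, here's a minimal sketch. It assumes the <code>generate_part</code> coroutine from the example above is in scope; the prompt/filename pairs are hypothetical bracket variants, not anything from a real project.</p>
<pre><code class="language-python">import asyncio

# Hypothetical bracket variants; each pair feeds the generate_part
# coroutine defined in the previous example.
variants = [
    ("L-bracket, 3mm thick, 40mm legs, two 5mm holes per leg", "bracket_a.step"),
    ("L-bracket, 3mm thick, 50mm legs, two 5mm holes per leg", "bracket_b.step"),
    ("L-bracket, 4mm thick, 40mm legs, three 4mm holes per leg", "bracket_c.step"),
]

async def generate_all(parts):
    # Schedule every generation at once. Each coroutine polls independently,
    # so total wall time is roughly the slowest part, not the sum of all parts.
    await asyncio.gather(*(generate_part(prompt, name) for prompt, name in parts))
</code></pre>
<p>Kick the whole batch off with <code>asyncio.run(generate_all(variants))</code> and go make lunch.</p>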
<h2>File conversion</h2>
<p>The SDK also handles file format conversion, which is useful when you have a STEP file and need an STL for printing, or an OBJ for a rendering pipeline. This is separate from the text-to-CAD generation. You're converting an existing file, not generating new geometry.</p>
<pre><code class="language-python">from kittycad.api.file import create_file_conversion
from kittycad.models import FileExportFormat, FileImportFormat
from kittycad.client import ClientFromEnv

client = ClientFromEnv()

with open("bracket.step", "rb") as f:
    step_data = f.read()

result = create_file_conversion.sync(
    client=client,
    body=step_data,
    src_format=FileImportFormat.STEP,
    output_format=FileExportFormat.STL,
)
</code></pre>
<p>The conversion runs server-side on Zoo's geometry kernel, which means the output is generally cleaner than what you'd get from a random online converter. Tessellation quality for STL output is controlled by the kernel defaults. For most prototyping and printing, the defaults are fine. For high-resolution visualization, you might want to check the triangle count.</p>
<h2>What to watch out for</h2>
<p>A few things I learned the hard way so you don't have to.</p>
<p>The polling loop needs a timeout. The example above polls forever, which is fine for a script you're babysitting but irresponsible for production code. Add a counter or a wall-clock timeout. If the API hasn't returned a result in sixty seconds, something is wrong and you should bail out rather than spin indefinitely.</p>
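<p>Here's the shape of a deadline-guarded poll, written generically so the same helper works for generation and conversion calls. The <code>fetch</code> and <code>is_pending</code> names are mine, not the SDK's; in practice you'd pass a lambda that wraps the SDK's status re-query and a predicate that checks for the queued or in-progress states.</p>
<pre><code class="language-python">import time

def poll_with_timeout(fetch, is_pending, timeout_s=60.0, interval_s=2.0):
    """Poll until fetch() returns a finished result or the deadline passes.

    fetch: callable returning the latest result object.
    is_pending: callable deciding whether to keep waiting.
    """
    deadline = time.monotonic() + timeout_s
    result = fetch()
    while is_pending(result):
        if time.monotonic() >= deadline:
            # Bail out instead of spinning forever on a stuck job.
            raise TimeoutError(f"still pending after {timeout_s:.0f}s")
        time.sleep(interval_s)
        result = fetch()
    return result
</code></pre>
<p>Sixty seconds is a generous default for the generation times I've seen; tune it to your own batch sizes, and catch the <code>TimeoutError</code> somewhere that logs the job id so you can check on it later.</p>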
<p>Prompt quality matters enormously. The SDK doesn't interpret your prompt. It passes it to the same model that powers the Zoo.dev browser UI. Vague prompts produce vague geometry. Specific dimensions, feature counts, and spatial relationships give you better results. This is true of all <a href="/posts/text-to-cad-guide">text-to-CAD tools</a>, but it's easy to forget when you're writing prompts as string literals in a Python script instead of typing them into a chat interface.</p>
<p>Rate limits exist. The free tier has a monthly cap on API calls. If you're running batch generation in a loop during development, you can burn through your allocation fast. I did this on day two and had to wait until the next month. Check your account page for current limits.</p>
<p>Error responses are structured. When the API returns an error, the SDK gives you a typed error object with a message, not just an HTTP status code. Read the message. It's usually specific enough to tell you what went wrong. "Invalid API token" is clear. "Model generation failed" means the AI couldn't produce valid geometry for your prompt, which happens more often with complex or contradictory descriptions.</p>
<p>The generated STEP files need the same verification you'd apply to any text-to-CAD output. Dimensions drift. Holes end up slightly off. Features that should be symmetric aren't always symmetric. The SDK gives you the file. It doesn't guarantee the file is correct. Measure everything that matters before you send it anywhere. I've written about this in the <a href="/posts/is-text-to-cad-accurate">accuracy post</a>, and the advice hasn't changed.</p>
<h2>Where this fits</h2>
<p>The KittyCAD Python SDK is the scripting layer for Zoo.dev's text-to-CAD service. If you're generating one part occasionally, the browser UI is easier. If you're generating many parts, integrating text-to-CAD into a pipeline, or building something that needs programmatic access to CAD generation, the SDK is how you do it.</p>
<p>The <a href="/posts/text-to-cad-api-python">text-to-CAD API Python</a> post covers the broader API story. The <a href="/posts/zoo-text-to-cad-api-tutorial">Zoo text-to-CAD API tutorial</a> walks through a more complete project. The <a href="/posts/text-to-cad-api">text-to-CAD API</a> overview explains what's available beyond just generation.</p>
<p>For my own workflow, the SDK replaced about two hours of clicking per week with a script I run while doing other things. The bracket variants that started this whole adventure now generate overnight and wait in a folder for me to review in the morning. The coffee is better when you drink it instead of clicking through a browser for the fourteenth time.</p>
]]></content:encoded>
    </item>
    <item>
      <title>B-Rep vs mesh in AI generation: why it matters</title>
      <link>https://blog.texocad.ai/posts/brep-vs-mesh-ai-generation</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/brep-vs-mesh-ai-generation</guid>
      <pubDate>Tue, 10 Feb 2026 00:00:00 GMT</pubDate>
      <description>AI tools that output B-Rep geometry give you real CAD. Tools that output mesh give you a pile of triangles. The difference decides whether the output is useful or decorative.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>b-rep</category>
      <category>mesh</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> B-Rep (Boundary Representation) output from text-to-CAD tools contains mathematically exact surfaces, edges, and faces that can be filleted, chamfered, dimensioned, and manufactured. Mesh output (STL/OBJ) is a triangle approximation with no edge or face information, unsuitable for engineering edits or precision manufacturing.</p>
<p>A colleague sent me a text-to-3D model of a motor housing last year. Beautiful render. Smooth surfaces. Nice chamfers on the top edges. He was proud of it. I imported it into Fusion 360, tried to select the top face to add a mounting boss, and watched my cursor highlight one triangle out of about six thousand. There was no top face. There was no edge. There was no chamfer. There were six thousand triangles arranged to look like a motor housing from the outside, and on the inside there was nothing. No structure. No information. No engineering value whatsoever.</p>
<p>That's the difference between B-Rep and mesh. It's not a file format argument. It's not a niche technical distinction that only geometry nerds care about. It's the difference between getting real CAD output from an AI tool and getting a triangle costume that looks like CAD output in a screenshot. If you're evaluating text-to-CAD tools and you don't understand this difference, you'll waste time, money, and patience on output that can't be used for the thing you actually need it for.</p>
<h2>What B-Rep actually is</h2>
<p>B-Rep stands for Boundary Representation. It's the way professional CAD software describes solid objects. A B-Rep model defines a solid by describing its boundaries: the surfaces that enclose the volume, the edges where surfaces meet, and the vertices where edges converge.</p>
<p>The key word is "mathematically exact." A cylindrical hole in a B-Rep model is defined by an axis, a radius, and two bounding planes. The cylinder is exactly cylindrical. Not approximately cylindrical. Not close enough if you squint. The math says it's a cylinder, and any CAD software that reads the model knows it's a cylinder, can measure its radius to arbitrary precision, and can perform operations on it as a cylinder.</p>
<p>A fillet in B-Rep is a tangent-continuous surface that smoothly blends between two faces. The software knows where the fillet starts, where it ends, and exactly what the curvature is at every point. You can change the radius and the fillet recalculates. You can suppress it and the original sharp edge comes back. You can dimension it on a drawing and the dimension is exact, because the geometry is exact.</p>
<p>This is what Fusion 360, SolidWorks, CATIA, NX, Creo, and every other parametric CAD system uses internally. When you sketch a circle, extrude it, and cut a hole, you're building B-Rep geometry. The commands are just user-friendly ways to construct and modify the boundary surfaces, edges, and vertices that define the solid.</p>
<h2>What mesh actually is</h2>
<p>A mesh describes a shape using small flat polygons, almost always triangles. The surface of the object is approximated by gluing triangles together at their edges and vertices. More triangles mean a closer approximation to the true shape. Fewer triangles mean a blockier, more faceted result.</p>
<p>A cylindrical hole in a mesh isn't a cylinder. It's a polygon tube. The "circular" cross-section is actually an octagon, or a 16-gon, or a 64-gon, depending on how fine the mesh is. At high triangle counts, it looks circular on screen. But the data says it's a polygon, and any software that reads the mesh sees a polygon, not a circle.</p>
<p>There's no face information in a mesh. There are triangles. What looks like the "top face" of a box is actually a cluster of triangles that happen to be coplanar. A mesh viewer can shade them the same color to suggest they're a face, but the mesh format contains no concept of "face." It doesn't know which triangles belong together. It doesn't know that a group of triangles is supposed to be flat, or cylindrical, or tangent to the neighboring group.</p>
<p>STL, OBJ, and glTF are mesh formats. They store triangles. That's what they do, and they do it well enough for their intended purposes: 3D printing, game engines, visual effects, and web visualization. What they don't do, and can never do, is store the kind of geometric information that engineering requires.</p>
<h2>Why this matters for AI-generated geometry</h2>
<p>When an AI tool generates geometry, the representation it produces determines everything that happens next. This is not a downstream concern. It's the core architectural decision that separates tools that produce engineering-grade output from tools that produce screenshots.</p>
<p>A text-to-CAD tool that generates B-Rep internally, like Zoo.dev with its KittyCAD kernel, produces output that lives in the same geometric world as your Fusion 360 or SolidWorks models. The STEP file you download contains real surfaces, real edges, real faces. You can import it, select a face, extrude a boss off it, cut a slot through it, fillet the edges, and dimension the result. The AI's output becomes a starting point that you can develop into a finished part using normal CAD operations.</p>
<p>A text-to-3D tool that generates mesh, which includes most tools branded as "AI 3D" or "text-to-3D," produces output that lives in the rendering world. The OBJ or glTF file you download looks like geometry. On screen, it is indistinguishable from real CAD. You can rotate it, zoom in, admire the surface quality. But the moment you try to do anything engineering-related, the illusion breaks. There are no faces to select. No edges to fillet. No dimensions to measure with confidence. The output is a visual artifact, not an engineering object.</p>
<p>The <a href="/posts/text-to-cad-vs-text-to-3d">text-to-CAD vs text-to-3D</a> comparison covers the tool-level differences. Here I'm focused on the geometry itself, because the geometry is what you're left with after the demo is over and you need to make something real.</p>
<h2>The operations that expose the difference</h2>
<p>If you're not sure whether a particular AI tool's output is B-Rep or mesh, try these operations. They'll tell you in about thirty seconds.</p>
<p>Select a single planar face. In B-Rep, one click selects the entire face as a single entity. In mesh, one click selects one triangle, or the software groups nearby coplanar triangles and highlights a rough approximation of the face. If you see triangle edges inside what should be a flat surface, you're looking at mesh.</p>
<p>Measure a circular feature. In B-Rep, the measurement tool reports an exact diameter or radius, because the geometry is a true circle or cylinder. In mesh, the measurement tool either refuses to measure (because there's no circle, just a polygon), or gives you an approximate value derived from the triangle vertices. If the measurement of a "10mm hole" comes back as 9.87mm or varies depending on where you click, you're measuring mesh.</p>
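<p>The shortfall is simple trigonometry, and you can see it without opening any CAD tool. A quick sketch in plain Python for a 10mm hole approximated by a polygon whose vertices sit on the true circle (the usual way meshes discretize round features):</p>
<pre><code class="language-python">import math

def polygon_hole_widths(true_diameter_mm: float, n_sides: int):
    """A mesh approximates a circular hole with an n-sided polygon.

    Measuring vertex-to-vertex recovers the true diameter, but measuring
    flat-to-flat (between facet midpoints) comes up short by cos(pi/n).
    """
    r = true_diameter_mm / 2
    across_corners = 2 * r                           # vertices lie on the true circle
    across_flats = 2 * r * math.cos(math.pi / n_sides)
    return across_corners, across_flats

corners, flats = polygon_hole_widths(10.0, 16)
print(f"16-gon 'hole': {corners:.2f}mm across corners, {flats:.2f}mm across flats")
# → 16-gon 'hole': 10.00mm across corners, 9.81mm across flats
</code></pre>
<p>Refining the mesh shrinks the error but never removes it: at 64 sides the flat-to-flat measurement is still about 0.01mm short. Which value the measurement tool reports depends on which vertices happen to be under your cursor, and that's exactly the click-dependent drift described above.</p>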
<p>Try to add a fillet. In B-Rep, select an edge, specify a radius, and the fillet operation creates a new tangent-continuous surface. In mesh, the fillet tool either won't activate (because there are no edges in the geometric sense, only triangle boundaries), or it produces a horrifying result that looks like someone tried to smooth a brick with a belt sander.</p>
<p>Try to modify a feature. In B-Rep, you can grab a face and push-pull it to change a dimension. The surrounding geometry updates to maintain continuity. In mesh, moving vertices distorts the surrounding triangles. There's no parametric relationship between features, because there are no features. There are only triangles.</p>
<p>If the AI tool's output fails these tests, it doesn't matter how good the preview looked. You don't have CAD geometry. You have a mesh, and you'll need to rebuild the part from scratch if you want to do anything engineering-related with it.</p>
<h2>The conversion problem</h2>
<p>"But can't I just convert mesh to B-Rep?" People ask this a lot. The answer is: sort of, badly, and usually not in the way you'd hope.</p>
<p>Mesh-to-B-Rep conversion, sometimes called "reverse engineering" in CAD software, tries to fit mathematical surfaces over the mesh data. The software looks at a cluster of triangles that seems roughly cylindrical and attempts to fit a true cylinder to them. It does this for every recognizable surface region, then tries to trim and join the surfaces into a valid solid.</p>
<p>For simple shapes with high-quality mesh data, this can work. A box becomes a real box. A cylinder becomes a real cylinder. You get faces and edges that behave properly.</p>
<p>For anything beyond trivial geometry, it falls apart. Blended surfaces where fillets meet become ambiguous. Small features get lost or misinterpreted. The topology of the resulting B-Rep depends heavily on the conversion algorithm's guesses, and those guesses are often wrong in ways that make the output harder to edit than if you'd just rebuilt the part manually.</p>
<p>I've spent hours trying to convert mesh models to B-Rep in Fusion 360's mesh-to-solid tools. The resulting solids have too many faces, strange surface patches, and edge topology that makes a feature tree cry. For a quick visual check, maybe acceptable. For further engineering work, almost never worth it. The <a href="/posts/text-to-cad-file-formats">text-to-CAD file formats</a> post covers the specific format considerations, but the rule of thumb is this: B-Rep converted from mesh is a poor substitute for B-Rep generated as B-Rep.</p>
<h2>What the AI generation methods produce</h2>
<p>The different approaches to AI geometry generation produce different representations, and knowing which is which saves you from disappointment.</p>
<p>Neural B-Rep generation, which is what Zoo.dev does, produces B-Rep directly. The AI model is trained to output boundary representations, not meshes. The geometry goes through a B-Rep kernel (in Zoo's case, the KittyCAD geometry engine) that validates the topology and ensures the result is a valid solid. The output is native B-Rep, and the STEP file you get contains real surfaces and edges. This is the gold standard for text-to-CAD output, and it's still rare.</p>
<p>Code generation approaches, like tools that generate OpenSCAD scripts or FreeCAD Python macros, also produce B-Rep, but indirectly. The AI writes code, the code runs in a CAD kernel, and the kernel builds B-Rep geometry from the operations in the script. The output quality depends on the quality of the generated code and the capabilities of the kernel. OpenSCAD's kernel handles CSG well. FreeCAD uses OpenCascade, which handles more complex geometry but has a steeper API. Either way, the final output is B-Rep because the kernel produces B-Rep. The <a href="/posts/how-text-to-cad-works">how text-to-CAD works</a> post covers these approaches in more detail.</p>
<p>Mesh generation, which is what most "AI 3D" tools use, produces triangle meshes. This includes tools like Meshy, Tripo, and most of the flashy demos you see on social media. The AI models used here are typically derived from image generation architectures adapted for 3D, and they output point clouds or signed distance fields that get converted to mesh. There is no B-Rep anywhere in the pipeline. The mesh is the final product.</p>
<p>Some tools blur the line by generating mesh and then running a mesh-to-B-Rep conversion as a post-processing step. This produces output that technically contains B-Rep data, but with all the quality problems I described above. If the tool's marketing says "STEP export" but the underlying generation is mesh-based, approach the STEP output with suspicion. Open it. Try to select faces. Try to fillet an edge. Let the geometry speak for itself.</p>
<h2>The manufacturing gap</h2>
<p>This is where the B-Rep vs mesh distinction stops being abstract and starts costing money.</p>
<p>A CNC machine cuts geometry using toolpaths calculated from surface data. The CAM software needs to know exactly where each surface is, what its curvature is, and where it meets neighboring surfaces. B-Rep provides this information exactly. The toolpath calculation is precise, the cuts are clean, and the machined part matches the model within the machine's tolerance.</p>
<p>Generating toolpaths from mesh data is possible but degraded. The CAM software has to work with triangle approximations of the surfaces, which introduces errors that depend on the mesh resolution. At high enough resolution, the errors are small. But "high enough" for machining means much finer mesh than what most AI tools produce, and even then, the triangle facets can leave visible artifacts on machined surfaces, especially on curved features. Any machinist who has received a mesh file when they expected a STEP file has a story about this, and the story usually involves a sigh.</p>
<p>For 3D printing, mesh is fine because the layer thickness and nozzle width are larger than the mesh faceting. For sheet metal, you need real surfaces to define bend lines and flat patterns. For injection molding, the mold tool designer needs exact surface data for the cavity. For any manufacturing process more precise than FDM printing, B-Rep is what the process expects, and mesh is a workaround at best.</p>
<h2>The practical takeaway</h2>
<p>When evaluating a text-to-CAD tool, or any AI tool that claims to generate CAD geometry, the first question is: does it produce B-Rep or mesh?</p>
<p>If B-Rep: the output can participate in engineering workflows. You can edit it, feature it, dimension it, tolerance it, and manufacture from it. This is text-to-CAD in the meaningful sense.</p>
<p>If mesh: the output is for visualization, quick concept checking, and 3D printing. It cannot be meaningfully edited in CAD software. It cannot be reliably manufactured to tolerance. It is text-to-3D, regardless of what the branding says.</p>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> ranks tools partly on this basis, because no amount of prompt cleverness or UI polish changes what the geometry fundamentally is. A nice interface on top of mesh output is still mesh output. A rough API that produces B-Rep is still B-Rep.</p>
<p>I'd rather have ugly B-Rep than beautiful mesh. You can fix ugly geometry. You can't fix the wrong representation. That motor housing my colleague sent me? He ended up rebuilding it in Fusion 360 from scratch. The mesh version sits in a folder somewhere, looking gorgeous, doing nothing. That's the most expensive kind of useless.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD tutorial: from prompt to STEP file</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-tutorial</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-tutorial</guid>
      <pubDate>Mon, 09 Feb 2026 00:00:00 GMT</pubDate>
      <description>A step-by-step walkthrough of generating a real part with text-to-CAD, exporting it, and fixing what the AI got wrong in Fusion 360.</description>
      <dc:creator>TexoCAD</dc:creator>

      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> This tutorial walks through generating a mounting bracket using Zoo.dev&apos;s text-to-CAD, exporting as STEP, importing into Fusion 360, identifying dimensional errors, and fixing them manually. The full process takes about 15 minutes; the AI generation itself takes only seconds.</p>
<p>I needed a mounting bracket for a small stepper motor on a test rig I was putting together. Nothing production-critical. Aluminum, 3mm stock, a few mounting holes, a couple of slots for adjustment. The kind of part that takes me about fifteen minutes to model from scratch in Fusion 360 on a good day, and about twenty-five minutes on a day when the timeline decides to misbehave. I figured this was the perfect candidate for a text-to-CAD walkthrough: simple enough to succeed, complex enough to show where the AI stumbles.</p>
<p>I did the whole thing from prompt to finished STEP file, and it took about fifteen minutes total. The AI's part took about fifteen seconds. The rest was me doing mine. That ratio tells you something honest about where this technology is right now.</p>
<p>If you've read the <a href="/posts/how-to-use-text-to-cad">how to use text-to-CAD</a> post, you know the general workflow. This is the specific version, with an actual part, actual mistakes, and the actual fixes.</p>
<h2>The part</h2>
<p>Here's what I needed: a flat mounting bracket for a NEMA 17 stepper motor. The NEMA 17 has a standard bolt pattern of 31mm between hole centers, arranged in a square, with a central bore of 22mm for the motor boss. I wanted the bracket to be 60mm x 60mm, 3mm thick, with the four M3 mounting holes on the 31mm pattern centered on the plate, plus the 22mm central bore, plus two 5mm slots on opposite edges for mounting the bracket itself to the test rig with some lateral adjustment.</p>
<p>This is a part I've modeled dozens of times. The sketch takes about two minutes, the extrude is one click, and the holes are a rectangular pattern. If text-to-CAD can't handle this, it can't handle anything.</p>
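<p>For reference, the bolt pattern is pure arithmetic. A small helper (purely illustrative, not part of any CAD tool's API) that computes the four hole centers:</p>

```python
def square_bolt_pattern(pitch, cx=0.0, cy=0.0):
    """Hole centers (mm) for a 4-hole square bolt pattern with the
    given center-to-center pitch, centered on (cx, cy)."""
    h = pitch / 2.0
    return [(cx + sx * h, cy + sy * h) for sx in (-1, 1) for sy in (-1, 1)]

# NEMA 17: 31mm pitch, centered on the 60 x 60 plate (plate center at 0, 0)
holes = square_bolt_pattern(31.0)
print(holes)  # [(-15.5, -15.5), (-15.5, 15.5), (15.5, -15.5), (15.5, 15.5)]
```

<p>Having these numbers written down makes the later verification step faster: you know exactly what coordinates to expect when you measure.</p>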
<h2>Writing the prompt</h2>
<p>I opened Zoo.dev, signed in with the free tier, and sat there for about thirty seconds thinking about how to phrase this. That thirty seconds matters more than people realize. A vague prompt produces vague geometry. A specific prompt produces geometry you can actually use. I've written more about this in <a href="/posts/text-to-cad-prompt-engineering">text-to-CAD prompt engineering</a>, but here's the short version: every dimension you leave out is a dimension the AI guesses, and it guesses wrong more often than you'd like.</p>
<p>Here's the prompt I used:</p>
<p>"Flat rectangular plate, 60mm x 60mm x 3mm. Central through-hole, 22mm diameter, centered on plate. Four M3 through-holes (3.2mm diameter) arranged in a square pattern, 31mm center-to-center, centered on the plate. Two 5mm wide slots on opposite edges (left and right), each 15mm long, centered vertically on the edge, 3mm from the plate edge to the near side of the slot."</p>
<p>That's specific. I gave dimensions in millimeters for everything. I named the features by type (through-hole, slot). I described the pattern and the positioning. I specified the slot relationship to the edge. I left nothing for the AI to infer that I could specify directly.</p>
<h2>Generating the model</h2>
<p>I pasted the prompt and clicked generate. Zoo's server chewed on it for about twelve seconds, which I've learned is normal for a part of this complexity. The progress indicator does its thing and then a 3D preview appears in the browser.</p>
<p>The preview looked roughly right at first glance. A rectangular plate. A big hole in the middle. Smaller holes around it in what appeared to be a square pattern. I could see features on the edges that might be slots. Proportions seemed reasonable.</p>
<p>I downloaded the STEP file. This is always the moment of truth, because the browser preview and the actual geometry can tell different stories. A preview might look fine while hiding internal faces, bad topology, or dimensions that are close enough to fool your eyes but wrong enough to matter.</p>
<h2>Importing into Fusion 360</h2>
<p>I opened Fusion 360, went to File > Open, and picked the STEP file from my downloads folder (Insert > Insert Mesh is for STL and OBJ; STEP comes in through Open). Fusion thought about it for a couple of seconds and then the body appeared in the viewport. One solid body in the browser tree. No feature history, obviously, since this was imported geometry, not modeled natively. Just a dumb solid I could measure and edit.</p>
<p>First thing I did was rotate it slowly and look at every face. The plate was there. The central hole was there. Four smaller holes were there. The edge features were there. No obviously missing features, no mystery surfaces, no garbage geometry visible in wireframe mode.</p>
<p>Second thing: measure.</p>
<h2>What the AI got right</h2>
<p>The plate itself was very close. I measured it at 59.8mm x 60.1mm x 3.0mm. The thickness was dead on. The length and width were within a fraction of a millimeter, which for an imported starting point is fine. I could work with this.</p>
<p>The central bore measured 22.0mm, which was exactly what I asked for. I'll take that win.</p>
<p>The M3 mounting holes were 3.2mm diameter, which is correct for M3 clearance. They were arranged in a square pattern centered on the plate, which was correct.</p>
<h2>What the AI got wrong</h2>
<p>The mounting hole pattern spacing. I asked for 31mm center-to-center. What I measured was closer to 30mm. A millimeter off on a bolt pattern means the motor doesn't mount, and there's no "close enough" when you're trying to align a stepper motor to a lead screw. This is the kind of error that looks invisible on screen and becomes very visible on the bench.</p>
<p>The slots. This is where things got more creative. I asked for two 5mm wide slots on opposite edges, each 15mm long, centered vertically, 3mm from the edge. What I got was two slots that were approximately 5mm wide, approximately 14mm long, and positioned about 4mm from the edge instead of 3mm. The vertical centering was close but not exact. Every dimension was in the right ballpark and wrong in the details.</p>
<p>One slot also had a slightly different end radius than the other, which is the kind of inconsistency that happens when the AI generates each feature somewhat independently rather than applying a proper pattern or mirror operation. A human would sketch one slot and mirror it. The AI apparently generated each one as a separate thought.</p>
<h2>Fixing the model</h2>
<p>This is where the tutorial becomes a CAD tutorial, because fixing text-to-CAD output is just regular CAD work on an imported body.</p>
<p>Step one: fix the bolt pattern. I created a new sketch on the top face of the plate. Drew a construction point at the center (which I could snap to by selecting the circular edge of the central bore). From that center, I created a rectangular pattern of four points at 31mm spacing, centered. Then I used the Hole command to place 3.2mm through-holes at those four points. After confirming the new holes were correct, I went back and filled in the original incorrect holes. In Fusion, you can do this by sketching circles on the face at the old hole locations and using Extrude to add material. Or you can use Press/Pull on the cylindrical faces of the holes. Either way, about ninety seconds to fix.</p>
<p>Step two: fix the slots. I deleted both slots by sketching over them and extruding to fill, then created a new sketch on the top face. Drew one slot at the correct dimensions: 5mm wide, 15mm long, centered vertically on the left edge, with the near side 3mm from the edge. Extruded it as a cut through the plate. Then I mirrored that cut feature across the center plane to create the matching slot on the right edge. Both slots now identical, both correctly positioned. About two minutes of work.</p>
<p>Step three: verify everything. I measured all critical dimensions again. Plate 59.8 x 60.1 x 3.0, which I decided was close enough for a test rig bracket (on a production part, I'd fix those too). Central bore 22.0. Mounting holes 3.2mm on a 31.0mm pattern. Slots 5.0mm x 15.0mm, 3.0mm from edge, centered. All correct.</p>
<p>Step four: export. File > Export > STEP. Done.</p>
<h2>Time breakdown</h2>
<p>Here's how the fifteen minutes broke down:</p>
<p>Writing the prompt: 1 minute. Generating in Zoo: about 15 seconds. Downloading and importing: 30 seconds. Inspection and measurement: 2 minutes. Fixing the bolt pattern: 90 seconds. Fixing the slots: 2 minutes. Final verification: 1 minute. Export: 15 seconds. Random pausing to sip coffee and think about whether this was faster than modeling from scratch: about 7 minutes spread throughout.</p>
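<p>If you like checking arithmetic, the breakdown sums the way the narrative claims:</p>

```python
# Time breakdown from the tutorial, in seconds
steps_seconds = {
    "write prompt": 60,
    "generate in Zoo": 15,
    "download and import": 30,
    "inspect and measure": 120,
    "fix bolt pattern": 90,
    "fix slots": 120,
    "final verification": 60,
    "export": 15,
}
working = sum(steps_seconds.values())        # 510 seconds
coffee = 7 * 60                              # pauses spread throughout
print(working / 60)                          # 8.5 minutes of actual work
print((working + coffee) / 60)               # 15.5 minutes wall clock
```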
<p>If I stripped out the coffee pauses and the second-guessing, the actual working time was about eight minutes. Modeling this part from scratch in Fusion would take me about twelve to fifteen minutes, assuming no timeline drama. So the text-to-CAD approach saved me maybe five minutes on this specific part, with the caveat that those five minutes were distributed in a strange way: the AI did the bulk modeling fast, and then I did targeted fixes that required knowing what was wrong and how to fix it.</p>
<p>You still need to know how to use CAD software. The AI doesn't remove that requirement. It just shifts the work from "build from scratch" to "inspect and fix." Whether that's faster depends on the part, the quality of the AI output, and how fast you are at both approaches.</p>
<h2>What I'd do differently</h2>
<p>The prompt was decent but not perfect. If I were doing this again, I'd add a line specifying that the slot pattern should be symmetric about the center. The AI apparently didn't infer that, and the asymmetric slot positions cost me an extra minute of fixing.</p>
<p>I'd also explicitly state "all features through the full 3mm thickness" because on other parts I've seen the AI generate blind holes or partial cuts when I meant through-alls. Being redundantly specific costs nothing in the prompt and saves time in the cleanup.</p>
<p>For this specific part, I probably wouldn't bother with text-to-CAD next time. It's simple enough that modeling from scratch is almost as fast, and I'd have parametric history from the start instead of an imported body I can't roll back. Where text-to-CAD pays off is when I need to explore several variations quickly, or when the part is slightly more complex and the first-draft geometry saves more than a few minutes.</p>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> has a more honest assessment of where the technology helps and where it doesn't. For this tutorial, the takeaway is concrete: text-to-CAD gave me an 85% correct starting point in seconds, and I spent ten minutes getting it to 100%. That's the real ratio right now.</p>
<h2>Trying the same part in CADAgent</h2>
<p>After the Zoo attempt, I wanted to see how CADAgent handled the same prompt. I had the Fusion 360 add-in installed already and my Anthropic API key configured.</p>
<p>I pasted the same prompt into the CADAgent panel and hit generate. Instead of downloading a STEP file, I watched Fusion 360 build the part in real time. A sketch appeared on the XY plane. A rectangle got drawn. An extrude operation fired. Then came the circles for the holes, then the extrude cuts, then the slots.</p>
<p>The result was different from Zoo's in an important way: it had a timeline. I could click on any operation in the feature history and edit it. The sketch dimensions were there, editable. The extrude depths were parameters I could change.</p>
<p>The bolt pattern came out closer to correct than Zoo's, measuring about 30.8mm instead of the 31mm I asked for. Still off, but closer. The slots were more consistent because CADAgent had the good sense to create the second one as a mirror of the first, which is what a human modeler would do.</p>
<p>The downside was speed. CADAgent took about forty-five seconds to generate, compared to Zoo's twelve. And one of the extrude operations threw a warning that I had to dismiss before the generation continued. Minor, but it breaks the flow.</p>
<p>For a part like this, CADAgent's parametric output is more useful than Zoo's dumb solid. I could fix the bolt pattern by editing the sketch dimension directly instead of filling and re-drilling holes. That's a thirty-second fix instead of a ninety-second fix, which adds up over a day of iterating on designs.</p>
<h2>The honest takeaway</h2>
<p>This tutorial is about a specific part. The NEMA 17 bracket is simple, well-defined, and exactly the kind of geometry that text-to-CAD handles best. If your parts look like this, the tools will save you time. If your parts are more complex, if they involve sheet metal bends, organic surfaces, multi-body assemblies, or tight tolerances, your mileage will drop fast.</p>
<p>The process itself is straightforward. Write a specific prompt. Generate. Download. Import. Measure everything. Fix what's wrong. Export. The AI handles the first draft. You handle the quality. Right now, that division of labor saves me a few minutes per part on simple geometry and nothing on complex geometry. It's a tool, not a replacement for knowing what you're doing.</p>
<p>If you want to get better at the prompt-writing part, the <a href="/posts/text-to-cad-prompt-engineering">text-to-CAD prompt engineering</a> guide goes into more detail on what makes a good prompt. For understanding the <a href="/posts/text-to-cad-step-file">STEP file side</a> of things, including why STEP matters and how to handle import issues, that's covered separately.</p>
<p>I still model most parts from scratch. But for the bracket-and-plate class of geometry, a twelve-second first draft from Zoo beats staring at a blank sketch, even if I have to fix two holes and a slot afterward.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Zoo text-to-CAD tutorial: step by step</title>
      <link>https://blog.texocad.ai/posts/zoo-text-to-cad-tutorial</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/zoo-text-to-cad-tutorial</guid>
      <pubDate>Mon, 09 Feb 2026 00:00:00 GMT</pubDate>
      <description>A walkthrough of using Zoo.dev&apos;s text-to-CAD from account setup to STEP export. Including the parts where it doesn&apos;t do what you expect.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>zoo</category>
      <category>tutorial</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> To use Zoo.dev text-to-CAD: create a free account at zoo.dev, open the Design Studio, type a specific prompt with dimensions, generate the model, preview the 3D result, export as STEP, and import into your CAD tool for editing. The free tier includes basic generation.</p>
<p>I spent my first half hour with Zoo.dev typing prompts that sounded perfectly reasonable to me and getting back geometry that looked like it had been designed by someone who'd had the part described to them over a bad phone connection. The proportions were close. The intent was there. The details were wrong in small, expensive ways. A hole pattern shifted 5mm from where I asked for it. A wall thickness that quietly doubled itself. A fillet radius that the tool apparently decided was a suggestion rather than an instruction.</p>
<p>The second half hour went better, because I'd figured out what the tool actually listens to and what it ignores. This tutorial is the version I wish I'd had before that first session.</p>
<h2>Setting up an account</h2>
<p>Go to <a href="https://zoo.dev">zoo.dev</a> and create a free account. Email and password, or sign in with GitHub. The free tier gives you enough API calls per month to properly test things. You don't need a credit card to start.</p>
<p>Once you're logged in, you'll land on the Design Studio. It's sparse on purpose: a text box, a 3D viewport, and a few export buttons. There's no feature tree, no sketch panel, no timeline. If you're used to the density of Fusion 360 or SolidWorks, the emptiness might feel suspicious. Don't worry. The simplicity is the point. Zoo generates geometry from text. What you do with that geometry afterward happens in your real CAD tool.</p>
<p>If you plan to use the Python API later, you can generate an API key from your account settings. For this tutorial, we'll stick to the web interface.</p>
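<p>If the API route appeals to you later, a generation call is a single authenticated HTTP POST. The sketch below only assembles the request; the base URL, endpoint path, and payload shape are my assumptions about Zoo's API, so verify them against the current API reference before relying on this.</p>

```python
API_BASE = "https://api.zoo.dev"  # assumption: check Zoo's current API docs

def build_text_to_cad_request(prompt, api_token, output_format="step"):
    """Assemble (but don't send) a text-to-CAD generation request.
    Endpoint path and payload keys are assumptions, not confirmed API."""
    return {
        "url": f"{API_BASE}/ai/text-to-cad/{output_format}",
        "headers": {"Authorization": f"Bearer {api_token}"},
        "json": {"prompt": prompt},
    }

# Sending it would look something like:
#   import requests
#   req = build_text_to_cad_request("Flat plate, 60mm x 60mm x 3mm", token)
#   resp = requests.post(req["url"], headers=req["headers"], json=req["json"])
```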
<h2>Your first prompt</h2>
<p>Here's where most people trip up, myself included. The natural instinct is to type something conversational: "make me a bracket for mounting a small PCB." That will produce a bracket. It will also produce a bracket with dimensions the AI invented, hole sizes that may or may not correspond to any actual fastener, and proportions that came from whatever the model's internal average bracket looks like.</p>
<p>Instead, be specific. Treat the prompt like you're filling out a drawing title block, not describing the part to a friend over lunch.</p>
<p>Try this: "L-bracket, 3mm thick, 50mm tall leg, 40mm base leg, two M4 clearance holes on the base spaced 25mm apart and centered, one M4 clearance hole centered on the tall leg at 35mm height."</p>
<p>Type that into the Design Studio text box and click Generate. You'll wait somewhere between ten and thirty seconds. The 3D viewport will populate with a model.</p>
<p>Rotate it. Look at it from a few angles. Does it look like what you described? In my experience, a prompt like that one returns something recognizably correct about 80% of the time. The other 20%, you get a bracket where one of the holes decided to migrate, or the thickness came back at 4mm instead of 3mm, or the base leg is 45mm because the AI felt generous.</p>
<p>This is normal. Welcome to text-to-CAD.</p>
<h2>Inspecting the output</h2>
<p>The Design Studio viewport lets you rotate, zoom, and pan. It doesn't let you measure. That's the first limitation you'll feel. You can see the geometry, but you can't confirm the dimensions without exporting and opening the file in a real CAD tool.</p>
<p>Look for obvious problems first. Is the shape roughly right? Are the holes visible? Does the part have the right number of features? If the AI generated something wildly different from your prompt, don't bother exporting. Just rephrase and regenerate.</p>
<p>If the shape looks correct at a glance, move to the export step. The real inspection happens in Fusion 360 or SolidWorks, where you can actually select faces, measure distances, and check diameters.</p>
<h2>Exporting as STEP</h2>
<p>Click the export button and choose STEP. Zoo also offers glTF, OBJ, STL, and a few others, but STEP is the one you want for engineering work. STEP gives you real B-Rep geometry with selectable faces and measurable edges. STL gives you a triangle mesh that your CAD software will treat like a foreign object.</p>
<p>The STEP file downloads to your machine. It's typically small, a few hundred kilobytes for simple parts. If you've been dealing with mesh files from other AI tools, the file size alone tells you something: this is compact mathematical surface data, not a million triangles pretending to be a cylinder.</p>
<p>For a deeper look at why the file format matters so much, I covered that in <a href="/posts/text-to-cad-step-file">text-to-CAD to STEP file: getting usable output</a>.</p>
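<p>Because STEP is plain text, you can also sanity-check a download before opening CAD at all. Counting a few standard ISO 10303-21 entity records gives a rough read on whether real B-Rep surfaces are present (the entity names below are the standard ones; what counts as "healthy" numbers depends on the part):</p>

```python
import re

def step_entity_counts(path, entities=("ADVANCED_FACE", "PLANE", "CYLINDRICAL_SURFACE")):
    """Count selected ISO 10303-21 entity records in a STEP file.
    Zero faces is a red flag; a flat plate with holes should show
    both planes and cylindrical surfaces."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        text = f.read()
    return {e: len(re.findall(rf"=\s*{e}\s*\(", text)) for e in entities}
```

<p>It's crude, but it has caught an empty export for me before I wasted time importing it.</p>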
<h2>Importing into Fusion 360 (or SolidWorks)</h2>
<p>Open Fusion 360. File, Open, select the STEP file. It'll import as a solid body. You should see real faces in the browser panel, not a mesh import warning.</p>
<p>Now measure things. Select two parallel faces and check the thickness. Select a hole edge and check the diameter. Measure the distance between hole centers. Compare everything to what you asked for in the prompt.</p>
<p>On a good day, the dimensions will be within a fraction of a millimeter. On a bad day, you'll find the kind of discrepancies I mentioned earlier: 5% off on a critical dimension, a hole that shifted, a feature that's geometrically present but not where you specified it. I always measure. Always. Not because I don't trust the tool, but because I've been burned enough times by tools I did trust to know better.</p>
<p>If the geometry is close but not perfect, you can fix it right there in Fusion. Move a hole with a sketch edit. Adjust a dimension. Add a fillet the AI forgot. This is faster than re-prompting and hoping the next generation gets it right.</p>
<p>For a broader view of how text-to-CAD fits into an actual workflow, the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the whole picture.</p>
<h2>Writing better prompts</h2>
<p>After a few dozen generations, I've settled on some patterns that consistently produce better results.</p>
<p>Always include units. "50mm" is better than "50." Without units, the AI guesses, and its guesses aren't always in the unit system you're thinking in.</p>
<p>Specify thickness, height, and width explicitly. Don't assume the AI will infer "structural" thickness for a bracket or "reasonable" wall thickness for an enclosure. It won't. Or rather, it will, and its idea of reasonable will not match yours.</p>
<p>Name standard features. "M4 clearance hole" is better than "4mm hole" because the AI seems to understand that M4 clearance means 4.3mm or 4.5mm diameter, depending on which standard it's pulling from. "Four M3 mounting bosses in the corners" works better than "holes in the corners."</p>
<p>Describe positions relative to edges or other features. "Two holes 15mm from the left edge, spaced 30mm apart" gives the AI anchors. "Two holes on the left side" gives it creative freedom, which is not what you want from a dimensioned part.</p>
<p>Keep it to one part per prompt. Assemblies are beyond what Zoo handles. If you need two mating parts, generate them separately with compatible dimensions.</p>
<p>The <a href="/posts/text-to-cad-prompt-engineering">text-to-CAD prompt engineering</a> post goes much deeper into strategies that work and phrasings that don't.</p>
<h2>When it works and when it doesn't</h2>
<p>Zoo handles prismatic parts well. Anything you could describe as a combination of extrusions, cuts, holes, fillets, and chamfers on a rectangular or cylindrical base has a good shot. Brackets, plates, enclosures, standoffs, adapter plates, spacers. The geometry comes back as proper solids, and the STEP files open cleanly everywhere I've tested.</p>
<p>It does not handle complex curvature, organic shapes, lofted surfaces, or swept features. Don't ask it for an aerodynamic housing with compound curves. Don't ask it for a gear with involute tooth profiles. Don't ask it for a snap-fit enclosure with proper draft angles. I've tried all of these. The results range from "that's not quite right" to "that's not even the right category of wrong."</p>
<p>Sheet metal is another gap. Zoo doesn't know about bend allowances, K-factors, or flat patterns. It'll give you something that looks like folded metal but was never modeled with bending in mind. If your part needs to unfold, you're not starting here.</p>
<p>Multi-body assemblies don't exist in Zoo's generation. One prompt, one solid body. That's it.</p>
<p>For an honest assessment of where Zoo sits in the broader tool landscape, the <a href="/posts/zoo-text-to-cad-review">Zoo text-to-CAD review</a> covers capabilities, pricing, and limitations without the marketing gloss.</p>
<h2>A real example, start to finish</h2>
<p>I'll walk through an actual part I generated last week. I needed a sensor mounting plate for a test fixture: rectangular, 80mm by 50mm, 4mm thick, with four M3 clearance holes on a 70mm by 40mm bolt pattern, and a central 12mm through-hole for a cable grommet.</p>
<p>My prompt: "Rectangular plate, 80mm by 50mm, 4mm thick. Four M3 clearance holes on a 70mm by 40mm rectangular bolt pattern centered on the plate. One 12mm through-hole at the center of the plate."</p>
<p>Generation took about fifteen seconds. The result in the viewport looked right. Four small holes near the corners, one larger hole in the middle, flat rectangular shape.</p>
<p>I exported as STEP, opened in Fusion, and measured. The plate was 80.0mm by 50.0mm by 4.0mm. The bolt pattern was 70.0mm by 40.0mm. The M3 holes were 3.4mm diameter, which is correct for M3 clearance fit. The center hole was 12.0mm. Everything was where I asked for it.</p>
<p>This is Zoo at its best: a simple, well-described prismatic part with explicit dimensions, generating clean geometry in under twenty seconds. I added corner radii in Fusion because I didn't ask for them in the prompt, applied a chamfer to the top edges, and sent the STEP to a colleague. Total time from prompt to finished part: about four minutes. Modeling it from scratch in Fusion would have taken maybe eight. Not a massive savings, but real.</p>
<p>The trick is that not every prompt goes this smoothly. The part before this one, a U-shaped channel with unequal legs and a slot in one side, came back with the slot on the wrong leg and the overall width off by 6mm. I re-prompted twice before giving up and just modeling it in Fusion. Sometimes the tool saves time. Sometimes it costs time. Knowing which kind of part falls into which category is the skill you develop after a few weeks of using it.</p>
<h2>What to do next</h2>
<p>If you've gotten through your first generation and export, you have a feel for what Zoo does and doesn't do. The next steps that helped me the most were learning to write tighter prompts and understanding which kinds of geometry are worth generating versus modeling by hand.</p>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the full landscape of tools and approaches. If you want to go beyond the web interface, the <a href="/posts/zoo-text-to-cad-api-tutorial">Zoo text-to-CAD API tutorial</a> walks through scripting batch generations with the Python SDK, which is where Zoo starts to feel genuinely powerful.</p>
<p>Zoo is not a replacement for CAD skills. It's a shortcut for the boring parts, when the shortcut works. After a month of using it, I generate simple parts there and model anything with real complexity in Fusion. That split saves me maybe an hour a week, spread across a dozen small parts that I'd otherwise have to sketch from scratch. Not transformative. But an hour a week, every week, adds up to enough that I keep the browser tab open.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD tips I wish I knew earlier</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-tips</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-tips</guid>
      <pubDate>Sun, 08 Feb 2026 00:00:00 GMT</pubDate>
      <description>After months of using text-to-CAD tools, here are the things that would have saved me time if someone had told me upfront.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>tips</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Key text-to-CAD tips: always specify dimensions in mm, describe one part per prompt, name standard features explicitly, start simple and iterate, always export as STEP not STL, verify dimensions before trusting them, and budget time for manual cleanup in your real CAD tool.</p>
<p>I've been using text-to-CAD tools most days for the past several months. Zoo.dev, CADAgent, a few others. I've generated hundreds of parts, ranging from mounting brackets I actually used to organic shapes that came back looking like something a toddler might sculpt from modeling clay if the toddler had access to a GPU. Along the way I've wasted plenty of time on mistakes that, in hindsight, were completely avoidable. These are the things I wish someone had told me on day one.</p>
<h2>Always specify dimensions in millimeters</h2>
<p>This sounds obvious. It isn't, apparently, because I spent my first week typing things like "a bracket, 50 by 30, 3 thick" and getting parts back in what appeared to be an AI-generated unit system that corresponded to nothing on earth. Sometimes the numbers came back as millimeters. Sometimes they seemed to be in some scaled-up fantasy unit. Once I got a plate that was clearly interpreted as inches when I was thinking in metric, which meant my little sensor mount came back the size of a serving tray.</p>
<p>Now I write "50mm by 30mm, 3mm thick." Every dimension, every time, with the unit attached. The results got more consistent immediately. It's the kind of habit that feels redundant until you see what happens without it.</p>
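<p>The habit is cheap enough to automate. A toy pre-generate check (illustrative only, and the regex is deliberately crude) that flags bare numbers in a prompt before you hit generate:</p>

```python
import re

def bare_numbers(prompt):
    """Return numbers in the prompt that aren't followed by a unit.
    Toy check: treats mm/cm/m/in/deg as units; everything else
    trailing a number is assumed to be prose, not a unit."""
    flagged = []
    for m in re.finditer(r"\b(\d+(?:\.\d+)?)\s*([a-zA-Z]*)", prompt):
        if m.group(2).lower() not in ("mm", "cm", "m", "in", "deg"):
            flagged.append(m.group(1))
    return flagged

print(bare_numbers("a bracket, 50 by 30, 3 thick"))   # ['50', '30', '3']
print(bare_numbers("50mm by 30mm, 3mm thick"))        # []
```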
<h2>One part per prompt</h2>
<p>I tried, early on, to generate assemblies. "A box with a lid that fits on top, with four screw posts on the box and four clearance holes in the lid." The tool generated something that was technically box-shaped and lid-shaped, but the fit between them was fictional. The screw posts didn't align with the holes. The lid was 2mm too wide. The wall thicknesses differed between the two pieces.</p>
<p>Text-to-CAD tools generate one solid body per prompt. If you need two mating parts, generate them separately, each with explicit dimensions that account for clearance. A box that's 100mm by 60mm on the outside, with 2mm walls, has an opening of about 96mm by 56mm, so a lid that drops in needs to be just under that. The AI won't figure out that relationship for you. You need to do that math and put it in the prompt.</p>
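<p>Spelled out in code, with an illustrative 0.2mm-per-side clearance (pick your own number for your process):</p>

```python
def drop_in_lid_size(outer_w, outer_l, wall, clearance=0.2):
    """Outer size (mm) for a lid lip that drops into a box opening.
    Opening = outer - 2*wall; then subtract clearance on each side."""
    inner_w = outer_w - 2 * wall
    inner_l = outer_l - 2 * wall
    return (inner_w - 2 * clearance, inner_l - 2 * clearance)

print(drop_in_lid_size(100, 60, 2))   # (95.6, 55.6)
```

<p>Those are the numbers that belong in the second prompt, not a hope that the AI remembers the first one.</p>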
<p>This constraint is real and it won't change soon. Assemblies require understanding relationships between parts, and that's a fundamentally harder problem than generating a single shape from a description.</p>
<h2>Name standard features instead of describing geometry</h2>
<p>"M4 clearance hole" works better than "4.3mm hole." "M3 counterbore" works better than "hole with a wider flat-bottomed recess at the top." The AI has been trained on engineering terminology, and using standard feature names gives it more context about what you're actually asking for.</p>
<p>I've found this pattern holds for most mechanical features. "Chamfer" produces a chamfer. "Angled cut on the edge" sometimes produces a chamfer and sometimes produces something that looks like the AI had a stroke mid-extrusion. "Fillet, 2mm radius" works. "Round off the corner" is a coin flip.</p>
<p>Standard names for standard features. It's like speaking the same language as the tool, which is a low bar, but one that the prompts need to clear.</p>
<h2>Start simple, then add complexity</h2>
<p>My biggest time-waster was trying to describe the final part in one go. A full enclosure with bosses, ribs, vent slots, mounting tabs, and a cable routing channel. The result was always wrong in at least three ways, and figuring out which three took longer than generating five simple variants would have.</p>
<p>Now I start with the basic envelope: "Rectangular box, 100mm by 60mm by 40mm, 2mm wall thickness, open top." I export that, check it, and decide if the base geometry is worth building on. If it is, I add features in my CAD software. If I want the AI to handle a specific feature, I prompt for a simpler version of the part that includes just that feature.</p>
<p>This approach is less satisfying than typing one magnificent prompt and getting a perfect part back. It's also more productive by a wide margin.</p>
<h2>Always export STEP, not STL</h2>
<p>I made this mistake exactly once. I exported an STL from Zoo, opened it in Fusion 360, and spent twenty minutes trying to figure out why I couldn't select individual faces. Because it was a mesh. Because STL is a mesh format. Because I'd exported the wrong format and was now staring at a blob of triangles that Fusion politely informed me was an "imported mesh body" rather than a solid.</p>
<p>STEP gives you real B-Rep geometry with selectable faces, measurable edges, and the ability to add fillets, cuts, and holes. STL gives you a triangle bag that's good for 3D printing and nothing else. Export STEP for engineering work. Always. If you need STL for a printer, export it from your CAD tool after you've verified and fixed the STEP import.</p>
<p>The <a href="/posts/text-to-cad-step-file">text-to-CAD to STEP file</a> post covers the STEP workflow in more detail, including what to check when you open the file.</p>
<h2>Measure everything before trusting it</h2>
<p>This is the tip I repeat most often because it's the one that matters most.</p>
<p>The generated part looks right. The proportions seem correct. The holes are where they should be. And then you measure and discover that the bolt pattern is 2mm off, or the wall thickness is 3mm instead of the 2mm you asked for, or one of the four mounting holes is 0.5mm further from the edge than the other three.</p>
<p>I measure every critical dimension on every text-to-CAD import. It takes about two minutes and has caught errors that would have been embarrassing (prototyping) or expensive (anything headed to a machine shop). I use Fusion 360's Inspect tool. Select two faces, read the distance. Select a hole edge, read the diameter. Compare to the prompt.</p>
<p>If you take one habit away from this post, let it be this one. Measure before you trust.</p>
<h2>Learn which parts are worth generating</h2>
<p>After months of trial and error, I've developed an instinct for which parts to generate and which to model by hand. The dividing line is roughly this: if I can describe every feature with a dimension and a position, and the features don't depend on each other in complex ways, it's worth trying text-to-CAD. If the part has features that reference other features, complex curvature, or relationships that require manufacturing awareness, I model it from scratch.</p>
<p>Parts I generate: mounting plates, brackets, standoffs, spacers, adapter plates, simple enclosure shells, cable clips, sensor mounts. Parts I don't bother generating: gears, snap-fit enclosures, sheet metal parts, anything with lofted surfaces, anything that mates with another part in a tight tolerance assembly.</p>
<p>The time savings come from generating the boring parts fast, not from attempting the complex ones and spending twice as long fixing them.</p>
<h2>Position features with absolute references</h2>
<p>"Two holes near the top" gives the AI creative license to put them wherever it wants. "Two holes centered 10mm from the top edge, spaced 30mm apart, centered on the part width" gives it coordinates. The second prompt produces more accurate output almost every time.</p>
<p>I've noticed that the AI handles absolute positioning better than relative positioning. "15mm from the left edge" works better than "one-third of the way across." "Centered on the 80mm dimension" works better than "in the middle." The more you can anchor features to specific numbers, the less room the AI has to improvise, and improvisation is not what you want from a tool that's supposed to produce dimensioned geometry.</p>
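<p>One way to see why absolute references work is that they resolve to exactly one set of coordinates. Here's the arithmetic the "two holes centered 10mm from the top edge, spaced 30mm apart, centered on the part width" prompt implies, sketched for an assumed 80mm by 60mm part with the origin at its bottom-left corner (the helper is mine, not any tool's API):</p>

```python
def edge_referenced_pair(width, height, from_top, spacing):
    """Hole centers for two holes 'from_top' mm below the top edge,
    'spacing' mm apart, centered on the part width.
    Origin at the bottom-left corner; all dimensions in mm."""
    y = height - from_top
    x_mid = width / 2
    return [(x_mid - spacing / 2, y), (x_mid + spacing / 2, y)]

print(edge_referenced_pair(80, 60, from_top=10, spacing=30))
# -> [(25.0, 50), (55.0, 50)]
```

<p>"Two holes near the top" has no such resolution. Anything the AI picks is consistent with the prompt, which is exactly the problem.</p>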
<h2>Don't prompt for manufacturing details</h2>
<p>Draft angles, bend allowances, thread specifications, surface finishes, tolerance callouts. None of the current tools handle these. I wasted time early on trying to specify "1-degree draft on all vertical faces" and getting back geometry that had ignored the instruction entirely. Same with "M4x0.7 tapped hole." The tool doesn't model threads. It doesn't know what a K-factor is. It doesn't understand that a sheet metal part needs to unfold.</p>
<p>Save manufacturing details for your CAD software. Use text-to-CAD for the basic geometry, then add draft, threads, bend reliefs, and tolerances in Fusion 360 or SolidWorks where the tools actually exist for those operations.</p>
<h2>Re-prompt instead of salvaging bad geometry</h2>
<p>When the generated part is significantly wrong, the temptation is to fix it. Move this hole, adjust that dimension, patch the missing feature. Sometimes that works. Often, especially when multiple features are wrong, you end up spending more time editing imported geometry than it would take to re-prompt with better wording or just model the part from scratch.</p>
<p>My rule: if I need to fix three things, I re-prompt. If I need to fix four or more, I model it by hand. The breakeven point where fixing is slower than starting over arrives faster than you'd expect, because imported geometry doesn't have a parametric history. Every fix is a direct edit on a dumb solid, which means you can't roll back, you can't change your mind easily, and downstream edits get progressively messier.</p>
<h2>Keep a prompt library</h2>
<p>This one took me embarrassingly long to figure out. When a prompt produces a good result, save it. I keep a text file with prompts that worked, organized by part type. When I need a similar part, I start from a working prompt and modify the dimensions rather than writing from scratch.</p>
<p>"L-bracket, 3mm thick, 50mm tall leg, 40mm base leg, two M4 clearance holes on the base spaced 25mm apart and centered, one M4 clearance hole centered on the tall leg at 35mm height."</p>
<p>That prompt produced a good bracket. Next time I need a bracket, I change the dimensions and feature count. I don't try to rephrase it from memory, because the specific wording that works is not always the wording I'd naturally write. Prompt engineering is a skill, and a library of working prompts is the most practical form of that skill.</p>
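<p>My library is a plain text file, but the reuse pattern is really just templating: keep the wording that worked, swap the numbers. A sketch of the idea (the function and its parameters are my own convention, not part of any tool):</p>

```python
def l_bracket_prompt(thickness, tall_leg, base_leg,
                     base_hole_spacing, tall_hole_height,
                     hole="M4 clearance"):
    """Fill in a prompt that has worked before, rather than
    rephrasing it from memory. All dimensions in mm."""
    return (
        f"L-bracket, {thickness}mm thick, {tall_leg}mm tall leg, "
        f"{base_leg}mm base leg, two {hole} holes on the base spaced "
        f"{base_hole_spacing}mm apart and centered, one {hole} hole "
        f"centered on the tall leg at {tall_hole_height}mm height."
    )

print(l_bracket_prompt(3, 50, 40, 25, 35))
```

<p>The wording stays fixed because the wording is the part that was hard to find. Only the dimensions change between parts.</p>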
<p>The <a href="/posts/best-prompts-for-text-to-cad">best prompts for text-to-CAD</a> post collects more of these, and the <a href="/posts/text-to-cad-prompt-engineering">text-to-CAD prompt engineering</a> post explains the principles behind why certain phrasings work.</p>
<h2>The meta-tip</h2>
<p>The biggest lesson from months of text-to-CAD is that the tool rewards precision and punishes ambiguity. Every minute you spend making your prompt more specific saves you several minutes of fixing the output. Every dimension you leave out is a dimension the AI invents. Every feature you describe vaguely is a feature that comes back wrong in a way you'll need to fix by hand.</p>
<p>Text-to-CAD is a shortcut, not a replacement. It's a first-draft generator for simple geometry, and it's good at that job when you meet it halfway with clear, dimensioned, specific descriptions. The <a href="/posts/text-to-cad-for-beginners">text-to-CAD for beginners</a> guide is a good starting point if you're just getting into this, and the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the full picture.</p>
<p>I still model complex parts by hand. I still verify every dimension on generated parts. But for the twenty-odd simple brackets, plates, and mounts I generate each month, these tips have cut my cleanup time roughly in half. Which is not a miracle, but it's an hour of my life back each month that I used to spend arguing with holes that refused to be where I put them.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD to STEP file: getting usable output</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-step-file</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-step-file</guid>
      <pubDate>Sat, 07 Feb 2026 00:00:00 GMT</pubDate>
      <description>STEP is the format that matters for text-to-CAD output. Here&apos;s how to get it, what to check when you open it, and what usually goes wrong.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>step</category>
      <category>file-formats</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Zoo.dev and CADScribe can export text-to-CAD output as STEP files. Always verify STEP output by importing into SolidWorks or Fusion 360 and checking: dimensions match the prompt, faces are valid, no open surfaces exist, and the geometry is a proper solid. Expect to fix 2-3 issues per import.</p>
<p>I once exported a STEP file from a text-to-CAD tool, opened it in Fusion 360, and everything looked perfect. Clean faces. Correct proportions. The holes were even the right diameter. I felt the kind of satisfaction that comes from a tool doing exactly what it promised. Then I tried to shell the part and Fusion threw an error that basically said "this isn't a solid body." Turns out the geometry had an internal face that split what should have been one solid into two touching-but-not-joined pieces. It looked solid. It rendered solid. It was not, in any meaningful sense, solid. I spent fifteen minutes tracking down the phantom face and stitching the body back together, which is a sentence that only makes sense if you've been in CAD long enough to develop strong feelings about surface healing.</p>
<p>STEP is the file format that makes text-to-CAD useful for engineering. Without it, you're working with meshes, and meshes in a CAD context are like being handed a photograph when you asked for a blueprint. STEP gives you real geometry. But "real geometry" and "correct geometry" are different promises, and only one of them is guaranteed.</p>
<h2>Why STEP is the format that matters</h2>
<p>STEP (Standard for the Exchange of Product model data, ISO 10303) is the industry standard for exchanging B-Rep solid geometry between CAD tools. When a text-to-CAD tool exports STEP, it's exporting the mathematical definition of surfaces, edges, and their relationships. Not triangles. Not an approximation. The actual geometry.</p>
<p>This means a STEP file from Zoo.dev opens in Fusion 360, SolidWorks, Creo, NX, or FreeCAD as a proper solid body. You can select individual faces. You can measure true diameters, not approximate chord distances across a mesh polygon. You can apply fillets to real edges. You can cut features, add holes, and modify the geometry the same way you would if you'd modeled it yourself.</p>
<p>STL, by contrast, is a triangle mesh. It's fine for 3D printing slicers, which expect mesh input. It's useless for engineering edits. Try to add a chamfer to an STL edge in SolidWorks and you'll get an error, or worse, a "converted" body made of ten thousand tiny planar faces pretending to be a smooth surface. OBJ, FBX, glTF: these are all mesh or visualization formats. They have uses, but engineering isn't one of them.</p>
<p>If a text-to-CAD tool can't export STEP, I don't consider it a CAD tool. It's a shape generator with a marketing problem.</p>
<h2>Which tools actually output STEP</h2>
<p>Not all of them, which is a useful filter.</p>
<p><a href="https://zoo.dev">Zoo.dev</a> exports STEP and it's their primary engineering format. The STEP files are clean B-Rep and open correctly in every CAD tool I've tested. For a full review of the tool itself, see <a href="/posts/zoo-text-to-cad-review">Zoo text-to-CAD review</a>.</p>
<p>CADScribe exports STEP and STL. The STEP quality has been mixed in my testing. Simple parts export cleanly. More complex geometry occasionally produces STEP files that import with warnings or degenerate faces.</p>
<p>CADAgent generates geometry inside Fusion 360 directly, so there's no export step needed. The model is already in your CAD environment with native feature history. You can save it as STEP from Fusion if you need to share it.</p>
<p>AdamCAD primarily outputs STL with parametric sliders. STL-first tools are useful for 3D printing workflows but limiting for engineering edits.</p>
<p>The point is: before you invest time in any text-to-CAD tool, check whether it outputs STEP. If the answer is "STL only," adjust your expectations. You'll be able to print the result but not easily edit it in a professional CAD environment.</p>
<p>For a broader look at how file formats play into the text-to-CAD workflow, the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers this alongside tool comparisons and workflow strategies.</p>
<h2>What to check when you open a STEP file</h2>
<p>I've imported enough AI-generated STEP files to have a checklist that I run through every time. It takes about two minutes and has saved me from sending bad geometry to colleagues, machinists, and 3D printers more times than I'd like to admit.</p>
<p>First: is it actually a solid body? In Fusion 360, check the Bodies folder in the browser. It should say "Body" with a solid icon, not "Surface Body." In SolidWorks, the feature tree will show it as an imported solid or, if something is wrong, as a surface body. A surface body means the geometry has gaps, open edges, or other defects that prevent it from being a closed solid. You can sometimes fix this with surface healing tools, but it's better to know about it before you start adding features on top of broken geometry.</p>
<p>Second: check overall dimensions. Measure length, width, and height. Compare them to what you asked for in your prompt. I've seen dimensions come back accurate to a tenth of a millimeter, and I've seen them off by 8%. There's no way to predict which you'll get without measuring.</p>
<p>Third: check critical feature dimensions. Hole diameters, slot widths, pocket depths, wall thicknesses. If you asked for M4 clearance holes, they should be about 4.3mm to 4.5mm. If they're 4.0mm, that's not clearance. If they're 5.0mm, that's not M4 anything.</p>
<p>Fourth: check feature positions. Measure from edges to hole centers. Verify bolt patterns. Check symmetry if your prompt asked for it. Position errors are the most common issue I see, more common than size errors. The AI gets the feature right but puts it in approximately the right place rather than exactly.</p>
<p>Fifth: try selecting faces and edges. Can you select individual faces cleanly? Can you pick a single edge for a fillet? If faces seem fused or edges are missing where you'd expect them, the topology of the import might be off. This occasionally happens when the generating kernel produces geometry that's technically valid but topologically messy.</p>
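<p>The measuring step in the checklist reduces to comparing two lists of numbers, which is worth making mechanical. A minimal sketch of the comparison I do by hand with the Inspect tool (the helper and the 0.1mm prototyping tolerance are my own conventions, not a feature of any CAD package):</p>

```python
def check_dims(spec, measured, tol=0.1):
    """Compare measured dimensions (read off the CAD Inspect tool)
    against the dimensions the prompt asked for.
    tol is in mm; 0.1 is a prototyping threshold -- tighten it
    for anything headed to a machine shop."""
    failures = []
    for name, expected in spec.items():
        actual = measured.get(name)
        if actual is None:
            failures.append(f"{name}: not measured")
        elif abs(actual - expected) > tol:
            failures.append(f"{name}: expected {expected}, got {actual}")
    return failures

spec = {"length": 80.0, "width": 50.0, "hole_dia": 4.5}
measured = {"length": 78.5, "width": 50.02, "hole_dia": 4.5}
print(check_dims(spec, measured))  # flags only the 78.5mm length
```

<p>Two minutes of typing numbers into a dict is still faster than discovering the 1.5mm error after the part comes off the printer.</p>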
<h2>Common problems and how to fix them</h2>
<p>Internal faces. This is the one I described in the opening. The geometry looks solid but contains invisible faces that split it internally. In Fusion 360, try the Combine or Stitch command to see if it resolves. In SolidWorks, Import Diagnostics usually catches these. If not, you may need to manually delete the internal face and knit the body back together.</p>
<p>Non-manifold edges. This means an edge is shared by more than two faces, which shouldn't happen in a valid solid. It usually indicates the generating kernel left an artifact. Most CAD tools flag this on import. Fixing it often means deleting the offending faces and patching the gap, which is tedious but doable.</p>
<p>Dimension inaccuracies. The most common issue and the easiest to fix. If a plate is supposed to be 80mm long and came back at 78.5mm, you can usually adjust it with a direct edit or by modifying the imported body. It's not elegant, and it's not how you'd want to build a part for production, but for prototyping it works.</p>
<p>Missing features. The AI forgot a hole, or ignored a fillet you mentioned in the prompt, or decided that one of your six mounting bosses was optional. Adding features to imported geometry is straightforward in Fusion or SolidWorks. Create a new sketch on the appropriate face, draw what's missing, and cut or extrude.</p>
<p>Degenerate faces. Tiny sliver faces or faces with near-zero area that confuse downstream operations. These show up occasionally and tend to cause problems when you try to apply fillets or shells near them. Delete the tiny face, extend an adjacent face to close the gap, and move on. If you've done surface repair in SolidWorks, you already know this dance.</p>
<h2>STEP quality varies between tools</h2>
<p>This is worth saying clearly: not all STEP files are created equal. A STEP file from Zoo.dev and a STEP file from another tool can represent the same shape with very different internal quality.</p>
<p>Zoo's STEP output has been the cleanest in my testing. The faces are well-defined, the topology is consistent, and the files import without warnings in Fusion 360 and SolidWorks almost every time. I attribute this to the fact that Zoo built their own geometric kernel (KittyCAD) from scratch, designed specifically for generating clean B-Rep output.</p>
<p>Other tools that generate geometry through code (OpenSCAD scripts, Python scripts for FreeCAD) and then export to STEP can produce valid but messy files. The STEP is technically correct, but the face structure might be overly complex, with unnecessary splits, redundant edges, or topology that makes subsequent editing awkward. You can work with it, but it's like editing someone else's poorly organized SolidWorks file: technically possible, spiritually draining.</p>
<h2>The STEP-to-edit workflow</h2>
<p>My standard workflow after importing a STEP from text-to-CAD:</p>
<p>Import the STEP. Run through the checklist above. Fix any immediate geometry issues (internal faces, non-manifold edges, degenerate faces). Then decide: is this good enough to edit, or should I re-prompt?</p>
<p>If the geometry is within a few percent of what I asked for and the topology is clean, I edit. Move holes, adjust dimensions, add missing features. Working on top of imported geometry in Fusion 360 is slightly more annoying than working with native features because you don't have a parametric history to roll back. But for prototyping, it's fine.</p>
<p>If the geometry is significantly wrong, if major features are missing or positions are wildly off, I don't try to salvage it. I either re-prompt with better wording or model it from scratch. There's a breakeven point where fixing imported geometry takes longer than just drawing the part, and that point arrives sooner than you'd expect.</p>
<p>For simple parts, the text-to-CAD-to-STEP-to-edit pipeline works. It saves me five to fifteen minutes per part compared to starting from a blank sketch. For complex parts, the pipeline produces a starting point that needs so much work it stops being a shortcut. Knowing where that boundary is for your particular kind of work is the difference between using text-to-CAD productively and using it to generate problems to solve.</p>
<h2>The format is the feature</h2>
<p>I keep coming back to this: the file format is the single most important feature of any text-to-CAD tool. Not the prompt intelligence, not the generation speed, not the viewport rendering. The format. Because the format determines whether the output is the beginning of an engineering workflow or the end of a visual exercise.</p>
<p>STEP means you can work with the geometry. STL means you can print it and hope for the best. Everything else is somewhere between those two poles, usually closer to the "hope for the best" end.</p>
<p>If you're evaluating text-to-CAD tools, start with the output format. If it does STEP, proceed to testing accuracy and quality. If it doesn't, the rest barely matters. The <a href="/posts/text-to-cad-tutorial">text-to-CAD tutorial</a> walks through the full generation-to-STEP-to-edit cycle, and the <a href="/posts/zoo-text-to-cad-review">Zoo text-to-CAD review</a> covers the tool that currently does this best. But the principle is tool-agnostic: STEP is the format that makes text-to-CAD real, and everything else is a demo.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD prompt engineering: how to get useful output</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-prompt-engineering</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-prompt-engineering</guid>
      <pubDate>Fri, 06 Feb 2026 00:00:00 GMT</pubDate>
      <description>The difference between a text-to-CAD prompt that works and one that produces garbage is usually specificity. Here&apos;s what I&apos;ve learned about writing prompts that produce actual parts.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>prompts</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Effective text-to-CAD prompts include: specific dimensions in mm, named geometric features (boss, counterbore, fillet), material context, manufacturing intent, and one part per prompt. Avoid vague descriptions, relative sizing, and multi-part assemblies. More specific prompts consistently produce better geometry.</p>
<p>I spent an embarrassing amount of time last month generating the same bracket over and over with slightly different wording, trying to figure out why some prompts gave me a usable part and others gave me something that looked like the AI had read a description of a bracket written by someone who'd never seen one. The part I wanted was simple. L-shaped, two legs, a few holes, a fillet at the bend. I could model it from scratch in Fusion 360 in eight minutes. But I was stubborn, and I wanted to understand why "make me a bracket" produced junk while a longer, more specific prompt produced something I could actually work with.</p>
<p>What I learned, after about forty prompts and a thermos of coffee I forgot to drink, is that text-to-CAD prompt engineering is mostly about specificity. Not cleverness. Not magic words. Not finding some secret syntax the AI prefers. Just being specific about what you want, in the same way you'd be specific on a drawing or in an email to a machinist. The AI doesn't read your mind. It reads your words. And if your words are vague, your geometry will be vague in exactly the same way.</p>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the tools and technology. This post is about the input side: how to write prompts that produce parts worth importing.</p>
<h2>Why prompts matter more in CAD than in image generation</h2>
<p>If you've used Midjourney or DALL-E, you know that vague prompts can produce interesting results. "A castle at sunset" gives you something pretty. The AI fills in the details and the result is often charming in ways you didn't expect.</p>
<p>Text-to-CAD does not work like this, and it's important to understand why. In image generation, there's no "correct" answer. Any castle at sunset is a valid castle at sunset. In CAD, there is a correct answer, or at least a narrow range of acceptable answers. A bracket that's 40mm tall is not the same part as a bracket that's 50mm tall. A 4.2mm hole is not the same as a 5mm hole. An M3 counterbore is not interchangeable with a through-hole. The AI has to produce specific geometry, and it can only produce the right geometry if you tell it what "right" means.</p>
<p>Every dimension you leave unspecified is a dimension the AI gets to invent. Sometimes it invents well. Usually it doesn't. And unlike an image, you can't just squint at a CAD model and decide it's close enough. Either the bolt pattern fits or it doesn't. Either the wall clears the adjacent component or it interferes.</p>
<h2>The anatomy of a good prompt</h2>
<p>After testing hundreds of prompts across Zoo.dev, CADAgent, and a few other tools, I've arrived at a rough structure that consistently produces better results. It's not a template you have to follow rigidly. It's more like a checklist of things the AI needs to hear from you before it can do a decent job.</p>
<p>A good prompt includes: the overall shape (plate, bracket, enclosure, cylinder, etc.), the bounding dimensions in millimeters, the material or thickness if relevant, each feature by its proper name (hole, counterbore, pocket, slot, chamfer, fillet, boss), the dimensions of each feature, the position of each feature relative to known references (edges, centers, other features), and any symmetry or pattern information.</p>
<p>A bad prompt leaves most of that out and hopes the AI infers it.</p>
<p>Here's a bad prompt: "A bracket with mounting holes."</p>
<p>That tells the AI almost nothing. What shape bracket? How big? How thick? How many holes? What diameter? Where? The AI will produce something bracket-shaped with some holes on it, and the result will be wrong in ways that aren't worth fixing because the starting point is too far from what you actually needed.</p>
<p>Here's a better prompt: "L-shaped bracket, 3mm thick, vertical leg 40mm tall by 30mm wide, horizontal leg 50mm long by 30mm wide. Two 4.2mm through-holes on the vertical leg, centered horizontally, first hole 10mm from the top, second hole 30mm from the top. Two 4.2mm through-holes on the horizontal leg, centered across the width, first hole 10mm from the bend, second hole 40mm from the bend. 2mm fillet on the inside corner of the bend."</p>
<p>That's wordy. It's also the kind of prompt that produces a part you can measure and find mostly correct. The AI knows the shape, the material thickness, every dimension, every feature, and every position. There's very little left for it to guess.</p>
<h2>Use millimeters, always</h2>
<p>I've tested prompts in inches and in millimeters. Millimeters produce more consistent results. My guess is that the training data is predominantly metric, which makes sense given that most CAD models globally use metric units. If you're working in inches, convert before prompting. "25.4mm" is more reliable than "1 inch" even though they're the same dimension.</p>
<p>Specify units explicitly in the prompt. Don't assume the AI defaults to your preferred system. I've had prompts where I gave numbers without units and got back geometry scaled by what appeared to be a random conversion factor. Just say "60mm x 40mm x 3mm" every time and avoid the guessing game.</p>
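<p>A trivial habit-enforcer I use when assembling prompts from measured values, so every number carries explicit mm units and inch inputs get converted before they reach the prompt (my own helper, nothing tool-specific):</p>

```python
MM_PER_INCH = 25.4

def mm(value, unit="mm"):
    """Return a dimension string with explicit mm units,
    converting from inches when needed."""
    if unit in ("in", "inch"):
        value = value * MM_PER_INCH
    return f"{value:g}mm"

print(mm(1, "in"))                        # 25.4mm
print(f"{mm(60)} x {mm(40)} x {mm(3)}")   # 60mm x 40mm x 3mm
```

<p>It's barely code, but it means a prompt can never contain a bare, unitless number that the AI gets to interpret.</p>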
<h2>Name features by their real names</h2>
<p>This makes a bigger difference than I expected. The AI understands CAD-specific vocabulary, and using it produces better results than describing features in plain English.</p>
<p>"A hole with a wider hole on top for a screw head to sit in" is a counterbore. Call it a counterbore. The AI knows what a counterbore is and will produce more accurate geometry if you use the word.</p>
<p>"A sloped edge" is a chamfer. Call it a chamfer and specify the dimension (e.g., "1mm x 45° chamfer on all edges").</p>
<p>"A rounded inside corner" is a fillet. Call it a fillet and specify the radius (e.g., "3mm fillet on the bend").</p>
<p>"A raised cylindrical feature" is a boss. "A flat area cut into the surface" is a pocket. "An elongated hole" is a slot.</p>
<p>Using these terms isn't just about being precise. It seems to activate more specific training data in the model, producing geometry that's closer to what a human CAD user would create for the same feature. When I say "counterbore," the AI generates a stepped hole with reasonable proportions. When I say "a wider hole on top," I sometimes get something that looks like a countersink, or a pocket, or a feature that doesn't match any standard machining operation.</p>
<p>The <a href="/posts/best-prompts-for-text-to-cad">best prompts for text-to-CAD</a> post has more examples of specific feature vocabulary that works well.</p>
<h2>One part per prompt</h2>
<p>This is maybe the most important rule, and the one people most want to break. Do not describe multi-part assemblies in a single prompt. Do not describe two mating parts. Do not describe a box and its lid in one go.</p>
<p>Current text-to-CAD tools generate one solid body at a time. Asking for multiple parts in one prompt forces the AI to either ignore some of them or mash them together into geometry that doesn't make sense as either part individually. I've asked for "a box with a removable lid" and gotten back a single solid that was the box and lid fused together, which defeats the purpose entirely.</p>
<p>Generate each part separately. If you need a box and a lid, write one prompt for the box and one prompt for the lid. Include the mating dimensions in both prompts so they'll fit together. "The box opening is 50mm x 30mm" and "the lid is 51mm x 31mm with a 1mm step" is how you communicate fit between separate prompts.</p>
<h2>Position features relative to known references</h2>
<p>"Holes near the corners" is not a position. "Holes 5mm from each edge, measured to hole center" is a position.</p>
<p>Every feature needs to be located relative to something the AI can resolve: the plate center, an edge, another feature. I use edge references most often because they're unambiguous. "First hole 10mm from the left edge, centered on the width" gives the AI exactly one valid location.</p>
<p>Avoid relative descriptions like "evenly spaced" unless you also give the spacing. "Four holes evenly spaced" tells the AI nothing about the pattern dimensions. "Four holes on a 40mm x 30mm rectangular pattern, centered on the plate" tells it everything.</p>
<p>Center references work well too. "Central bore, 22mm diameter, centered on the plate" is clear. "A hole in the middle" is not, because "the middle" could mean the geometric center, the visual center of mass, or wherever the AI feels like putting it.</p>
<h2>Include symmetry and pattern information</h2>
<p>If your part has symmetry, say so. "Two slots, symmetric about the vertical center plane" produces more consistent results than describing each slot independently. The AI is more likely to generate matching features when you tell it they should match.</p>
<p>For hole patterns, describe the pattern rather than each individual hole. "Four M3 through-holes on a 31mm square pattern, centered on the plate" is better than describing four holes with four separate coordinate sets. Pattern descriptions match how CAD operations work (rectangular pattern, circular pattern), which seems to help the AI produce cleaner geometry.</p>
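<p>A pattern description is unambiguous because it expands to exact coordinates. Here's what "four holes on a 31mm square pattern, centered on the plate" resolves to, sketched for an assumed 50mm square plate with the origin at its bottom-left corner:</p>

```python
def rect_pattern(plate_w, plate_h, pattern_w, pattern_h):
    """Hole centers for a rectangular bolt pattern centered on a plate.
    Origin at the plate's bottom-left corner; all dimensions in mm."""
    cx, cy = plate_w / 2, plate_h / 2
    dx, dy = pattern_w / 2, pattern_h / 2
    return [(cx - dx, cy - dy), (cx + dx, cy - dy),
            (cx - dx, cy + dy), (cx + dx, cy + dy)]

print(rect_pattern(50, 50, 31, 31))
# -> [(9.5, 9.5), (40.5, 9.5), (9.5, 40.5), (40.5, 40.5)]
```

<p>Describing the same four holes with four separate coordinate sets gives the AI four chances to get one wrong. The pattern gives it one.</p>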
<h2>What to include, what to skip</h2>
<p>Include: overall bounding dimensions, material thickness, feature types by name, feature dimensions, feature positions, symmetry, pattern information.</p>
<p>Skip: material name (unless it affects geometry, like sheet metal vs. solid), surface finish, color, tolerances (no current tool handles these), GD&#x26;T, assembly context, manufacturing process notes, and anything about how the part will be used rather than how it should be shaped.</p>
<p>The AI generates geometry. It does not understand manufacturing process, material properties, tolerance stacks, or application context. Telling it "this is for mounting a PCB in an outdoor enclosure rated to IP65" won't change the output. Telling it the exact dimensions, hole positions, and seal groove geometry might.</p>
<h2>Good prompt vs bad prompt: side by side</h2>
<p>Bad: "An electronics enclosure."</p>
<p>What you get: a box, roughly the size the AI feels like making, with no features, no mounting, no consideration of what goes inside or how the lid attaches.</p>
<p>Good: "Rectangular electronics enclosure, open top, outer dimensions 80mm x 50mm x 30mm. Wall thickness 2mm. Four M3 threaded inserts at the corners of the open top face, 4mm from each outer edge, 4.5mm hole diameter, 6mm deep. Two M3 mounting tabs extending 8mm from the bottom of opposite long sides, 3mm thick, with 3.5mm through-holes centered on each tab."</p>
<p>What you get: a box with walls, mounting features, and threaded insert holes in roughly the right places. You'll still need to fix dimensions, but the starting point has all the features you asked for.</p>
<p>Bad: "A gear."</p>
<p>What you get: something that looks like a gear from a cartoon. Decorative tooth profiles. No involute. No usable dimensions. A waste of time.</p>
<p>Good: "Spur gear, module 2, 20 teeth, 14.5 degree pressure angle, 20mm face width, 10mm bore with keyway 3mm wide by 1.5mm deep."</p>
<p>What you get: slightly better, but honestly still not great. Gears are one of the things text-to-CAD reliably fails at because the tooth geometry requires precise involute curves that the AI doesn't generate correctly. This is a case where modeling from scratch or using a gear generator plug-in is always faster. I mention it because knowing where the tools fail is as important as knowing where they succeed.</p>
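<p>If you do try a gear prompt, you can at least check the envelope dimensions of whatever comes back. These are the standard metric spur gear relations (pitch diameter = module x teeth, addendum = module, dedendum = 1.25 x module); the function is my own verification sketch, not something any text-to-CAD tool exposes:</p>

```python
def spur_gear_dims(module, teeth):
    """Reference diameters for a standard metric spur gear, in mm.

    Uses the usual addendum = m and dedendum = 1.25 * m conventions.
    """
    pitch_d = module * teeth
    outside_d = module * (teeth + 2)       # pitch + 2 * addendum
    root_d = pitch_d - 2 * 1.25 * module   # pitch - 2 * dedendum
    return {"pitch": pitch_d, "outside": outside_d, "root": root_d}

# The module-2, 20-tooth gear from the prompt above:
dims = spur_gear_dims(2, 20)
# Even with a wrong tooth profile, the outside diameter should measure 44mm.
```

<p>If the generated blank doesn't even hit these diameters, you know immediately the output is unusable without measuring a single tooth.</p>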
<p>The <a href="/posts/text-to-cad-examples">text-to-CAD examples</a> post shows ten more prompt/output pairs with detailed analysis of what worked and what didn't.</p>
<h2>The iteration workflow</h2>
<p>Even with a good prompt, the first output is often 80-90% correct. The iteration workflow matters.</p>
<p>I generate, download the STEP, import into Fusion 360, and measure everything. If the errors are minor (hole positions off by a millimeter, a fillet radius that's 2mm instead of 3mm), I fix them in CAD and move on. If the errors are structural (wrong topology, missing features, completely wrong proportions), I revise the prompt and regenerate.</p>
<p>Prompt revision is usually about adding detail, not changing approach. If the AI missed a feature, I add more explicit description of that feature. If a dimension is wrong, I sometimes repeat it in the prompt for emphasis, or describe it from two different reference points. "The slot is 15mm long, starting 5mm from the left edge and ending 20mm from the left edge" is redundant, but redundancy sometimes helps.</p>
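<p>One caution on redundancy: make sure the redundant numbers actually agree with each other, because a contradictory prompt is worse than a vague one. A trivial sanity check (the function name is my own):</p>

```python
def span_is_consistent(start, end, length, tol=0.01):
    """True if "starts at start, ends at end, length long" agree (mm)."""
    return abs((end - start) - length) <= tol

# "The slot is 15mm long, starting 5mm from the left edge and
#  ending 20mm from the left edge":
ok = span_is_consistent(5.0, 20.0, 15.0)
```
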
<p>After two or three iterations, you usually have either a usable part or the realization that this particular geometry is beyond what text-to-CAD handles well. Knowing when to stop iterating and just model it yourself is a judgment call. My personal rule: if two revised prompts don't get me to 85% correct, I model from scratch. The time I'd spend on a third and fourth iteration exceeds the time to just draw the thing.</p>
<h2>Prompts that always fail</h2>
<p>Some categories of geometry reliably produce bad results regardless of prompt quality. Knowing these up front saves you time.</p>
<p>Gears, as mentioned. The tooth profiles are never correct. Save yourself the trouble.</p>
<p>Threads. No current text-to-CAD tool generates actual thread geometry. You'll get a cylinder where a thread should be, which is fine as a placeholder but useless if you need actual thread form.</p>
<p>Sheet metal with bend logic. The AI doesn't understand K-factors, bend allowances, or flat patterns. It'll give you a shape that looks folded but won't unfold correctly.</p>
<p>Multi-body parts. Assemblies. Anything with more than one solid body.</p>
<p>Organic surfaces. Splines, lofts, complex sweeps. The AI works best with prismatic geometry: extrusions, holes, pockets, slots, chamfers, fillets. When you leave the world of straight lines and arcs, the output quality drops off a cliff.</p>
<p>Snap fits, press fits, or any feature that depends on specific interference or clearance values. The AI has no concept of fit classes.</p>
<h2>The honest summary</h2>
<p>Writing good text-to-CAD prompts is a skill, and it's a skill worth developing if you model simple mechanical parts regularly. The investment is small: a couple of hours of practice will teach you what works and what doesn't. The payoff is modest but real: a few minutes saved per simple part, and significantly faster iteration when you're exploring design variations.</p>
<p>The skill isn't about tricking the AI or finding magic words. It's about being specific in the same way you'd be specific in a drawing or a machining order. Dimensions. Feature names. Positions. Patterns. Symmetry. If you can describe the part precisely enough for a machinist to make it without asking questions, you can describe it precisely enough for text-to-CAD to produce a useful starting point.</p>
<p>Emphasis on "starting point." The best prompt in the world still produces output that needs checking and usually needs fixing. Text-to-CAD prompt engineering doesn't eliminate the need for CAD skills. It just means your CAD work starts from a better place than a blank sketch.</p>
<p>The <a href="/posts/text-to-cad-tutorial">text-to-CAD tutorial</a> walks through the full process from prompt to finished STEP with a real example. If you want ready-made prompts to study and adapt, the <a href="/posts/best-prompts-for-text-to-cad">best prompts for text-to-CAD</a> post is a catalog of what's worked for me.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD for beginners: start here</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-for-beginners</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-for-beginners</guid>
      <pubDate>Thu, 05 Feb 2026 00:00:00 GMT</pubDate>
      <description>If you&apos;ve never used text-to-CAD and want to know whether it&apos;s worth trying, this is the short honest version.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>beginners</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Beginners should start with Zoo.dev&apos;s free tier or CADAgent for Fusion 360. Write simple prompts with specific dimensions. Expect simple parts (plates, brackets, boxes) to work and complex parts to fail. Text-to-CAD is useful for getting started but not for replacing CAD skills.</p>
<p>A friend of mine who runs a small electronics business asked me last month if he could "just type what he wants and get a 3D model." He'd seen a demo on social media where someone typed a sentence and a bracket appeared on screen, fully formed, ready to print. He wanted to know if that was real. The short answer is yes, sort of. The longer answer took me twenty minutes and involved a lot of sentences that started with "but."</p>
<p>Text-to-CAD is real, it works for simple parts, and it's not going to replace knowing what you're doing. If you've never tried it and you're wondering whether to bother, this is the honest beginner's version.</p>
<h2>What text-to-CAD actually does</h2>
<p>You type a description of a physical part. An AI turns that description into 3D geometry. Not a picture of a part. Not a rendering. Actual solid geometry that you can open in CAD software, measure, edit, and manufacture from.</p>
<p>The key word in that paragraph is "solid geometry." There are a lot of AI tools that generate 3D shapes from text, but most of them produce meshes: collections of tiny triangles that approximate a shape. Meshes are fine for video games and 3D printing. They're not fine for engineering, because you can't easily select a face, add a hole, change a dimension, or do any of the things you'd normally do in a CAD tool. For the full explanation of why this distinction matters, the <a href="/posts/what-is-text-to-cad">what is text-to-CAD</a> post covers it in detail.</p>
<p>Text-to-CAD tools, the ones worth your time, produce B-Rep (Boundary Representation) solid models. These are the same kind of geometry you'd get from SolidWorks, Fusion 360, or any professional CAD program. Real faces, real edges, real topology. The difference is you got there by typing a sentence instead of spending ten minutes sketching and extruding.</p>
<h2>Where to start</h2>
<p>Two options that I'd recommend for beginners:</p>
<p><a href="https://zoo.dev">Zoo.dev</a> has a free tier. You create an account, type a prompt, and get a 3D model in a web browser. No software to install. No CAD knowledge required to generate a part. The output can be exported as a STEP file, which opens in any professional CAD tool. If you just want to see what text-to-CAD feels like, this is the fastest path from curiosity to a generated model.</p>
<p>CADAgent is a free, open-source add-in for Fusion 360. If you already have Fusion installed (the personal use license is free), CADAgent lets you generate models directly inside the Fusion environment. The advantage is that the output has native feature history, which means you can edit it like any other Fusion model. The setup requires an Anthropic API key, and usage costs a small amount per generation.</p>
<p>For this post, I'll assume you're starting with Zoo.dev because it requires the least setup and no existing CAD software.</p>
<h2>Your first part</h2>
<p>Go to <a href="https://zoo.dev">zoo.dev</a>, create a free account, and open the Design Studio. You'll see a text box and a 3D viewport. That's essentially the whole interface.</p>
<p>Start simple. Don't try to generate an engine block. Try something like this:</p>
<p>"Flat rectangular plate, 80mm by 50mm, 5mm thick, with four 5mm through-holes near the corners, 8mm from each edge."</p>
<p>Type that in, click Generate, and wait about fifteen seconds. A 3D model should appear in the viewport.</p>
<p>Rotate it around. Does it look like a rectangular plate with four holes? It probably does. This is the kind of part text-to-CAD handles well: flat, rectangular, with simple features that can be described with dimensions.</p>
<p>Now try exporting it. Click the export button, choose STEP. A file downloads to your computer. If you have Fusion 360 or SolidWorks, open that file. You'll see a solid body with selectable faces, measurable edges, and real geometry you can modify.</p>
<p>If you don't have CAD software, you can also export as STL and open it in a free viewer or 3D printing slicer. But the real value of text-to-CAD is in the STEP export, because that's what makes the output editable.</p>
<p>For a more detailed walkthrough of the Zoo.dev interface and workflow, the <a href="/posts/how-to-use-text-to-cad">how to use text-to-CAD</a> guide covers each step.</p>
<h2>What works for beginners</h2>
<p>Simple parts with clear geometry and explicit dimensions. I'm going to keep repeating "explicit dimensions" because it's the single most important habit for getting good results.</p>
<p>Parts that tend to work:</p>
<p>Rectangular plates with holes. Brackets (L-shaped, U-shaped, flat). Simple boxes and enclosures. Standoffs and spacers. Adapter plates with bolt patterns. Basic cylindrical parts.</p>
<p>These are prismatic shapes: combinations of rectangular and cylindrical geometry with cuts, holes, and fillets. Text-to-CAD tools are good at these because the AI can map your description to a straightforward sequence of CAD operations (sketch a rectangle, extrude it, cut some holes, add fillets).</p>
<p>What tends not to work:</p>
<p>Anything with curves that aren't simple arcs or circles. Gears, turbine blades, organic shapes, aerodynamic profiles. Anything where multiple parts need to fit together with specific clearances. Sheet metal parts that need to unfold. Parts with features that depend on each other in complex ways (snap-fit joints, living hinges, undercuts for molding).</p>
<p>If you're a beginner, stick with the simple stuff. Not because the simple stuff is trivial, but because it's where the tool actually delivers on its promise. A mounting plate with four holes is boring, but it's also genuinely useful, and generating one in thirty seconds instead of modeling it in ten minutes is a real time savings.</p>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> maps out the full range of what works and what doesn't across different tools.</p>
<h2>Things beginners get wrong</h2>
<p>I've watched a few people try text-to-CAD for the first time, and the same mistakes come up.</p>
<p>Being too vague. "Make me a bracket" produces something bracket-shaped but with random dimensions. "Make me an L-bracket, 3mm thick, 50mm legs, with two M4 holes on each leg" produces something you can use. Treat the prompt like a specification, not a conversation.</p>
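<p>One habit that helps: build the prompt from the same numbers you'd put on a drawing. A hypothetical template for the L-bracket case above (the function and phrasing are my own illustration, not any tool's API):</p>

```python
def l_bracket_prompt(thickness, leg, hole_d, holes_per_leg):
    """Compose a specification-style prompt from explicit dimensions (mm)."""
    return (
        f"L-bracket, {thickness}mm thick, {leg}mm legs, "
        f"with {holes_per_leg} {hole_d}mm through-holes on each leg."
    )

prompt = l_bracket_prompt(3, 50, 4.2, 2)
# Every number in the prompt is one the AI no longer has to guess.
```
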
<p>Expecting perfection. The generated part will not be exactly what you asked for. Dimensions might be off by a millimeter or two. Features might shift slightly from where you specified them. This is normal. Text-to-CAD gives you a starting point, not a finished part. Every output needs verification.</p>
<p>Exporting as STL when they should export as STEP. STL is a mesh. STEP is solid geometry. If you want to edit the result in a CAD tool, STEP is the only option that makes sense. STL is for sending to a 3D printer slicer, not for engineering work.</p>
<p>Trying complex parts too early. I get it, the technology seems magical, so why not ask for something ambitious? Because the tool will return something that looks plausible but is wrong in ways that are hard to catch if you don't already know what the correct geometry looks like. A gear with wrong tooth profiles looks like a gear to someone who hasn't designed gears. Start simple. Build intuition for what the tool can and can't do.</p>
<p>Forgetting that this is a first draft. Text-to-CAD generates geometry. It doesn't generate manufacturing intent, tolerances, material specifications, surface finishes, or assembly relationships. If you're planning to actually make the part, the generated model is the beginning of the process, not the end.</p>
<h2>Do you still need to learn CAD?</h2>
<p>Yes.</p>
<p>I know that's not the answer the marketing suggests. But here's the reality: text-to-CAD gives you geometry. Understanding whether that geometry can be manufactured, whether the dimensions are appropriate, whether the material thickness works, whether the holes are in the right places for standard fasteners, that requires knowing what good geometry looks like. And that knowledge comes from actually using CAD software.</p>
<p>Think of text-to-CAD like autocomplete for code. It helps if you already know how to program. It's less helpful, sometimes dangerously so, if you don't, because you can't evaluate whether the output is correct.</p>
<p>If you're a beginner with no CAD experience, text-to-CAD is a great way to get a feel for 3D parts and start thinking in three dimensions. But it's not a substitute for learning Fusion 360 or SolidWorks if you plan to do any serious design work. The <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> comparison can help you pick the right tool to experiment with, and learning a real CAD program alongside it will make you much better at evaluating and fixing the output.</p>
<h2>Where to go from here</h2>
<p>Once you've generated a few simple parts and exported them as STEP files, you'll start to develop a sense for what kind of prompts work and what kind of parts the tool handles well. That intuition is worth more than any tutorial.</p>
<p>If you want to go deeper, the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the full tool landscape, output quality, and workflow integration. The prompt engineering post explains how to write prompts that produce more accurate geometry. And if you decide to get serious about Zoo.dev specifically, the <a href="/posts/how-to-use-text-to-cad">how to use text-to-CAD</a> walkthrough covers the interface and export process in detail.</p>
<p>My honest advice for beginners: try it. Generate five or ten simple parts. Export them. Open them in a free CAD viewer or Fusion 360's personal license. Measure the results. You'll know pretty quickly whether text-to-CAD is useful for the kind of work you do, or whether it's a neat trick that doesn't quite fit your needs yet. Either answer is fine. The technology is early, the tools are improving, and the worst thing you can lose is half an hour of curiosity.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How to use text-to-CAD: a practical starter guide</title>
      <link>https://blog.texocad.ai/posts/how-to-use-text-to-cad</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/how-to-use-text-to-cad</guid>
      <pubDate>Wed, 04 Feb 2026 00:00:00 GMT</pubDate>
      <description>You type a description, the AI generates geometry, and then you fix what it got wrong. That&apos;s the real workflow. Here&apos;s how to actually do it.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>getting-started</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> To use text-to-CAD: choose a tool (Zoo.dev for STEP output, CADAgent for Fusion 360), write a specific prompt describing geometry with dimensions and features, generate the model, inspect the output for accuracy, export as STEP, and edit in your CAD software. Expect to fix things.</p>
<p>Last Tuesday I had fifteen minutes before a call and a bracket I needed roughed out. Nothing fancy. A flat plate with standoffs and four countersunk holes. The kind of part I've modeled a thousand times in Fusion 360, and the kind of part that takes exactly long enough to be annoying when you're in a hurry. So I opened Zoo.dev, typed a sentence, and hit generate. Twelve seconds later I had a STEP file on my desktop. It wasn't perfect. Two of the hole positions were off by about a millimeter and the standoff height was close but not what I asked for. But it was a starting point, and that starting point got me to a finished part in about four minutes instead of twelve.</p>
<p>That's what using text-to-CAD actually looks like. Not magic. Not a finished part falling from the sky. A rough first draft that you clean up, the same way you'd clean up a sketch from a napkin except the napkin is three-dimensional and already has most of the features in the right place.</p>
<p>If you've read the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> and want to know how to actually sit down and do this, here's the practical version.</p>
<h2>Pick your tool</h2>
<p>There are several text-to-CAD tools out there now, and they work differently enough that the choice matters. For a full comparison, the <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> post covers each one in detail. For getting started, you really only need to know about two.</p>
<p><a href="https://zoo.dev">Zoo.dev</a> is the most straightforward option. It's browser-based, it outputs real B-Rep geometry as STEP files, and the free tier gives you enough generations to actually learn the tool before deciding if it's worth paying for. You type a prompt, the AI generates a solid model, and you download the result. No install, no plug-in, no API key required for basic use.</p>
<p>CADAgent is the other one worth trying, especially if you already live in Fusion 360. It's an open-source add-in that generates parametric models directly inside Fusion, complete with a real timeline you can roll back and edit. The catch is you need an Anthropic API key, which means setting up an account and paying per generation. The upside is the output has actual feature history, not just an orphaned solid sitting in the browser tree.</p>
<p>For your first session, I'd start with Zoo. Lower friction, faster feedback loop, and you can evaluate the output in whatever CAD software you already use.</p>
<h2>Write a prompt that's actually specific</h2>
<p>This is where most people fail on the first try, and honestly where I failed too. The instinct is to write something like "make me a bracket" and expect the AI to read your mind. It won't. The AI has no idea what bracket you're imagining. It's going to give you its average bracket, which will look vaguely bracket-shaped and be dimensionally wrong in ways that are hard to fix because the proportions weren't what you wanted in the first place.</p>
<p>Good prompts include dimensions, feature names, and constraints. Bad prompts are vague descriptions that could mean fifty different parts.</p>
<p>Here's a bad prompt: "A mounting bracket for a sensor."</p>
<p>Here's a better one: "An L-shaped mounting bracket, 3mm thick aluminum. Vertical leg 40mm tall, 30mm wide. Horizontal leg 50mm long, 30mm wide. Two 4.2mm through-holes on the vertical leg, centered horizontally, spaced 20mm apart vertically, first hole 10mm from the top. Two 4.2mm through-holes on the horizontal leg, centered across the width, spaced 30mm apart, first hole 10mm from the bend. 2mm fillet on the inside corner of the bend."</p>
<p>The second prompt is longer. It's also the one that produces a usable part. I've written more about this in <a href="/posts/text-to-cad-prompt-engineering">text-to-CAD prompt engineering</a>, but the core principle is simple: every dimension you leave out is a dimension the AI gets to guess, and it will guess wrong often enough to annoy you.</p>
<p>Specify dimensions in millimeters. Name standard features (counterbore, chamfer, fillet, boss, pocket) by their proper names. Describe one part at a time. If you need an assembly, generate each part separately. None of the current tools handle multi-part assemblies well, and trying to describe two mating parts in one prompt produces geometry that looks like the AI had a stroke.</p>
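<p>To see why explicit positions matter, here's what the vertical-leg holes in the bracket prompt above resolve to, computed straight from the stated dimensions (the helper is my own sketch for checking the output, not a tool feature):</p>

```python
def column_holes(width, top_offset, spacing, count):
    """Centers of a vertical column of holes, centered across a leg's width.

    Returns (x, y) pairs in mm, with y measured down from the top edge.
    """
    x = width / 2.0
    return [(x, top_offset + i * spacing) for i in range(count)]

# "Two 4.2mm through-holes on the vertical leg, centered horizontally,
#  spaced 20mm apart vertically, first hole 10mm from the top"
#  on a 30mm-wide leg:
holes = column_holes(30.0, 10.0, 20.0, 2)
# Centers at (15, 10) and (15, 30): fully determined, nothing left to guess.
```
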
<h2>Generate and wait</h2>
<p>This part is the easy part. In Zoo, you paste your prompt into the text field and click generate. The server does its thing for somewhere between five and thirty seconds depending on complexity, and then you get a preview of the model plus download options.</p>
<p>In CADAgent, you type the prompt in the add-in panel inside Fusion 360 and watch the model build itself in real time. Sketches appear, extrusions happen, fillets get applied. It's genuinely entertaining the first few times and then you start paying attention to whether the operations make sense.</p>
<p>Either way, you're waiting for the AI to interpret your words as CAD operations and execute them. The translation from natural language to geometry is where the interesting failures happen, and why the <a href="/posts/text-to-cad-tutorial">text-to-CAD tutorial</a> walks through the full process with a real example.</p>
<h2>Inspect the output before you trust it</h2>
<p>This is the step people skip, and it's the step that matters most.</p>
<p>Whatever the AI generated, do not assume it's correct. Open the file. Measure things. Check the hole diameters. Verify the overall dimensions. Look at the topology. Rotate the model and check for surfaces that shouldn't be there, internal faces, zero-thickness geometry, or features that look right from one angle and wrong from another.</p>
<p>I have a short checklist I run through on every text-to-CAD output:</p>
<ol>
<li>Overall bounding box dimensions: are they what I asked for?</li>
<li>Hole diameters and positions: are they where I specified?</li>
<li>Feature presence: did it include everything I mentioned?</li>
<li>Feature absence: did it add things I didn't ask for?</li>
<li>Topology: is it one clean solid body, or are there extra surfaces hiding inside?</li>
<li>Edge quality: do the fillets and chamfers look clean, or are there tangency breaks?</li>
</ol>
<p>On a simple plate with holes, these checks usually come back clean. On anything more complex, at least one or two will reveal a problem. The holes might be 4mm instead of 4.2mm. A fillet might be missing. The overall height might be 38mm instead of 40mm. These are the kinds of errors that text-to-CAD produces reliably, and they're also the kinds of errors that are easy to fix if you catch them and expensive to miss if you don't.</p>
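<p>The first two checks are easy to script once you've written down the measurements from the imported body. A minimal sketch of the comparison I otherwise do by hand; the spec and measured numbers here are illustrative:</p>

```python
def out_of_spec(spec, measured, tol=0.2):
    """Return the dimensions that deviate from spec by more than tol (mm)."""
    return {
        name: round(measured[name] - spec[name], 3)
        for name in spec
        if abs(measured[name] - spec[name]) > tol
    }

spec     = {"length": 80.0, "width": 50.0, "height": 40.0, "hole_d": 4.2}
measured = {"length": 80.05, "width": 50.0, "height": 38.0, "hole_d": 3.9}
problems = out_of_spec(spec, measured)
# Flags the 38mm height and the 3.9mm holes; the length is within tolerance.
```
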
<h2>Export as STEP</h2>
<p>If you're working in Zoo, you'll download the result as a STEP file. This is the format you want for engineering work. STEP (ISO 10303) preserves B-Rep geometry with real faces and edges that any professional CAD software can import and edit. Avoid STL unless you're going directly to 3D printing and don't care about editing the model further.</p>
<p>If you're in CADAgent, the model is already in Fusion 360, so export isn't an issue. You can save it as a native Fusion file, export to STEP, or work with it directly.</p>
<p>The file format question matters more than people think. I've seen newcomers download an STL from a text-to-CAD tool, try to edit it in SolidWorks, and spend an hour wondering why they can't select a face. An STL is a bag of triangles. You can't fillet a bag of triangles. Get the STEP file.</p>
<h2>Import and fix in your CAD software</h2>
<p>This is where the real work starts, and I mean that in a good way. The text-to-CAD tool gave you a first draft. Now you turn it into a real part.</p>
<p>Open your STEP file in Fusion 360, SolidWorks, or whatever you use. The geometry will appear as an imported body without parametric history (unless you used CADAgent, in which case you already have a timeline). From here, you're doing normal CAD work: adjusting dimensions, adding features, fixing things the AI got wrong.</p>
<p>Common fixes I end up making on text-to-CAD output:</p>
<p>Hole positions. Almost every part I've generated has at least one hole that's off by half a millimeter to a couple of millimeters. I measure, delete the hole, and re-drill it where it should be. This takes about thirty seconds per hole.</p>
<p>Missing features. The AI sometimes drops features from the prompt, especially if the prompt is long. A fillet it didn't apply, a chamfer it forgot about, a counterbore that came out as a through-hole. I add these manually.</p>
<p>Dimensional corrections. Overall dimensions are usually close but not exact. If I asked for 80mm and got 78.5mm, I'll sketch on a face and use the move/extend body command to fix it, or just model the correct geometry and use combine/cut to clean it up.</p>
<p>Topology cleanup. Occasionally the imported body has internal surfaces or non-manifold edges that cause problems downstream. Running a body repair or manually deleting the offending surfaces usually handles this.</p>
<p>The whole import-inspect-fix cycle takes me about five to fifteen minutes on a simple part. On anything complex enough that the AI got most of it wrong, I just start over from scratch in CAD. Knowing when to fix the AI output and when to throw it away is a skill you develop after a few sessions.</p>
<h2>The workflow in practice</h2>
<p>Here's what a realistic text-to-CAD session looks like for me now, after a few months of using these tools.</p>
<p>I have a part in my head. Something simple to moderate: a bracket, an enclosure wall, a mounting plate, a spacer, a cable clip. I open Zoo.dev. I spend about sixty seconds writing a specific prompt with all the dimensions I care about. I generate. I download the STEP. I open it in Fusion 360. I spend two minutes inspecting and three to eight minutes fixing. Total time: five to twelve minutes. Compare that to modeling from scratch, which for the same class of parts takes me eight to twenty minutes depending on complexity.</p>
<p>The savings are real but modest for simple geometry. Where text-to-CAD actually saves more time is when I'm exploring variations. Need to try five different bracket configurations to see which one fits best in the assembly? Five prompts in Zoo take maybe three minutes total. Five models from scratch in Fusion take thirty minutes minimum. That iteration speed is where the tool earns its keep.</p>
<p>For anything involving complex surfacing, tight tolerances, sheet metal with bend logic, or parts that need to mate precisely with other components, I don't bother with text-to-CAD. I model those from scratch because the cleanup time exceeds the generation time, and the AI has no concept of manufacturing constraints. Nobody's taught these models what a K-factor is, and it shows.</p>
<h2>What to expect when you're starting out</h2>
<p>Your first few text-to-CAD attempts will probably produce garbage. Not because the tools are bad, but because writing good prompts is a skill and you don't have it yet. I didn't either. My first prompt was something like "a box with a hole in it" and I got back a box with a hole in it that was dimensioned like the AI had never seen a real object.</p>
<p>Give yourself an afternoon. Try ten or fifteen prompts. Start with the simplest parts you can think of: a rectangular plate, a cylinder with a hole, a basic L-bracket. Get a feel for how the tool interprets your words. Notice which details matter (dimensions, feature names, positions) and which ones the AI ignores (material suggestions, manufacturing intent, anything vague).</p>
<p>Read the <a href="/posts/text-to-cad-prompt-engineering">text-to-CAD prompt engineering</a> post before you sit down. It'll save you the worst of the learning curve.</p>
<p>The gap between "my first prompt" and "I can reliably get useful output" is about two hours of practice. After that, you'll know the tool's limits and you'll know when it's faster to type a prompt versus just modeling the thing yourself. That judgment is the real skill here, not the typing.</p>
<h2>Where text-to-CAD fits in real work</h2>
<p>Text-to-CAD is a starting tool, not a finishing tool. It generates first-draft geometry that you edit into a real part. It's fastest for simple prismatic shapes with standard features. It's useless for complex assemblies, organic surfaces, or anything requiring manufacturing-specific logic.</p>
<p>If you're expecting it to replace knowing how to use CAD, it won't. If you're expecting it to save you ten minutes on a bracket, it probably will. That's not a revolution. It's a time saver that works when the geometry is simple and the expectations are realistic.</p>
<p>I keep a shortcut to Zoo.dev pinned in my browser. I use it maybe three or four times a week, almost always for the same kind of thing: a simple part I don't feel like sketching from scratch when I already know exactly what it should look like. It doesn't replace my CAD skills. It just means I spend less time on the parts that don't need my full attention, and more time on the parts that do.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD examples: 10 prompts and what they produced</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-examples</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-examples</guid>
      <pubDate>Wed, 04 Feb 2026 00:00:00 GMT</pubDate>
      <description>I wrote 10 text-to-CAD prompts ranging from simple to ambitious. Here&apos;s each prompt, what came out, and what I&apos;d change next time.</description>
      <dc:creator>TexoCAD</dc:creator>

      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Ten text-to-CAD prompt examples with results: simple plate (good), L-bracket (good), enclosure with lid (partial), gear (failed), mounting bracket with holes (good with offsets), pipe fitting (poor), phone stand (decent), heat sink (partial), hinge (failed), cable clip (good). Simple prismatic geometry works; complex features don&apos;t.</p>
<p>I cleared a Saturday afternoon for this. The plan was straightforward: write ten text-to-CAD prompts, ranging from embarrassingly simple to deliberately ambitious, run each one through Zoo.dev, download the STEP files, import them into Fusion 360, and document exactly what happened. No cherry-picking. No re-rolling until I got a good result. One prompt, one generation, one honest assessment.</p>
<p>I set up a clean folder on my desktop, made a pot of coffee that I optimistically assumed would last the whole session, and started typing. By prompt six the coffee was gone and by prompt ten I had strong opinions about which kinds of geometry AI can handle and which kinds it should probably leave alone for a few more years.</p>
<p>For the principles behind why some of these worked and others didn't, the <a href="/posts/text-to-cad-prompt-engineering">text-to-CAD prompt engineering</a> post covers the theory. For a catalog of prompts I've tested repeatedly and know work well, see <a href="/posts/best-prompts-for-text-to-cad">best prompts for text-to-CAD</a>. This post is the raw experiment.</p>
<h2>Example 1: simple rectangular plate</h2>
<p>Prompt: "Rectangular plate, 100mm x 60mm x 4mm. Four 5.5mm through-holes, one at each corner, hole centers 8mm from each edge."</p>
<p>Result: good. The plate came back at 99.7mm x 60.2mm x 4.0mm, which is close enough that I'd use it directly for a prototype. All four holes were present, 5.5mm diameter, positioned within about 0.5mm of where I asked. The geometry was a single clean solid body with no internal faces or weirdness. Import into Fusion took two seconds. I measured everything, shrugged, and moved on.</p>
<p>This is text-to-CAD at its best. A simple prismatic part with basic features and explicit dimensions. If all your parts looked like this, you'd save real time every day.</p>
<p>Fix time: zero. I'd use it as-is for a non-critical application.</p>
<h2>Example 2: L-bracket with mounting holes</h2>
<p>Prompt: "L-shaped bracket, 3mm thick. Vertical leg 45mm tall, 35mm wide. Horizontal leg 55mm long, 35mm wide. 3mm fillet on the inside bend. Three 4.2mm through-holes on the vertical leg: centered horizontally, spaced 15mm apart starting 7.5mm from the top edge. Two 4.2mm through-holes on the horizontal leg: centered across the width, 15mm and 40mm from the bend."</p>
<p>Result: good with minor issues. The overall shape and thickness were correct. Leg dimensions were within a millimeter. The bend fillet appeared at approximately the right radius. The vertical leg holes were all present and close to their specified positions, though the spacing was 14.5mm instead of 15mm. The horizontal leg holes were both there but one was about 2mm off from where I asked.</p>
<p>This is typical L-bracket performance. The AI handles the basic shape well and gets hole diameters right, but hole positions drift a bit, especially when there are several of them. I fixed the two position errors in about ninety seconds by re-drilling.</p>
<p>Fix time: about two minutes.</p>
<h2>Example 3: electronics enclosure with separate lid</h2>
<p>Prompt: "Rectangular open-top box, outer dimensions 90mm x 60mm x 35mm. Wall thickness 2.5mm, bottom thickness 2.5mm. Four M3 screw bosses at the inside corners, boss outer diameter 6mm, boss height 30mm from inside bottom, with 2.5mm holes centered on each boss."</p>
<p>Note: I deliberately asked for just the box, not the lid. I've learned from painful experience that asking for "a box with a lid" produces fused garbage. I planned to generate the lid as a separate prompt.</p>
<p>Result: partial success. The box outer dimensions were close: 89.5mm x 60.3mm x 34.8mm. Wall thickness measured 2.5mm on three walls and about 2.2mm on the fourth, which is the kind of inconsistency that tells you the AI generated each wall somewhat independently rather than shelling a solid block. The bosses were the problem. Two appeared at roughly the right positions. The other two were missing entirely. The two that did appear had the correct outer diameter but the holes were 3mm instead of 2.5mm.</p>
<p>I've seen this pattern before. When features need to be precisely located at internal corners of a shell, the AI has trouble resolving the containment relationship. "Inside corners" is a harder concept than "5mm from the edge" because it requires understanding the interior geometry, not just the exterior.</p>
<p>The lid prompt (generated separately as "Flat lid, 90mm x 60mm x 2.5mm, with four 3.2mm through-holes on a pattern matching the box boss positions") produced a plate that was dimensionally correct but the hole pattern didn't match the bosses, because the bosses were in the wrong places to begin with. This is the cascading failure mode of multi-part text-to-CAD: if part A is wrong, part B built to match part A is wrong in a different way.</p>
<p>Fix time: about eight minutes, mostly rebuilding the bosses. At that point, I questioned whether I should have just modeled the box from scratch.</p>
<h2>Example 4: spur gear</h2>
<p>Prompt: "Spur gear, module 2, 20 teeth, 14.5 degree pressure angle, 20mm face width, 10mm bore, keyway 3mm wide by 1.5mm deep."</p>
<p>Result: failed. What came back was a cylinder with bumps around the outside that resembled teeth in the way a child's drawing resembles a photograph. The tooth profile was not an involute curve. The root circles, tip circles, and pitch circles bore no relationship to the specified module. The bore was present and roughly 10mm. The keyway was a rectangular cut that was approximately correct in width but positioned off-center.</p>
<p>I expected this. Gears require precise mathematical curves that text-to-CAD tools don't generate. The involute tooth profile is defined by equations, not by description, and "module 2, 20 teeth" is a specification that needs to be computed, not interpreted. This is a case where a dedicated gear generator (even a free one like the GearGenerator add-in for Fusion 360) produces a perfect result in seconds, and text-to-CAD produces decoration.</p>
<p>Fix time: infinite. You can't fix this output. Start over with a real gear generator.</p>
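<p>For reference, the circles a real module-2, 20-tooth gear has to hit follow from the standard spur gear relations: pitch diameter = module × teeth, addendum = module, dedendum = 1.25 × module, base diameter = pitch diameter × cos(pressure angle). A quick sketch of that arithmetic, assuming ISO-style proportions (the function name is mine, not from any gear library):</p>

```python
import math

def spur_gear_dimensions(module_mm, teeth, pressure_angle_deg):
    """Reference circle diameters for a spur gear, assuming ISO-style
    proportions (addendum = m, dedendum = 1.25 m)."""
    d_pitch = module_mm * teeth                   # pitch diameter
    d_tip = d_pitch + 2 * module_mm               # tip (addendum) diameter
    d_root = d_pitch - 2 * 1.25 * module_mm       # root (dedendum) diameter
    d_base = d_pitch * math.cos(math.radians(pressure_angle_deg))
    return {"pitch": d_pitch, "tip": d_tip, "root": d_root, "base": d_base}

dims = spur_gear_dimensions(2, 20, 14.5)
# pitch = 40.0 mm, tip = 44.0 mm, root = 35.0 mm, base ~= 38.73 mm
```

<p>Five seconds of arithmetic gives you four diameters you can measure against the generated body, which is how quickly the "cylinder with bumps" failure above reveals itself.</p>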
<h2>Example 5: mounting bracket with offset holes</h2>
<p>Prompt: "Flat rectangular bracket, 80mm x 40mm x 3mm. Four 4.2mm through-holes: two on the left half at positions (10, 10) and (10, 30) from the bottom-left corner, two on the right half at positions (70, 10) and (70, 30) from the bottom-left corner. 1mm chamfer on all edges."</p>
<p>Result: good with offsets. The plate dimensions were correct. All four holes were present and 4.2mm diameter. The positions were off by about 1-2mm each, which is consistent with what I've seen on other hole-position tests. The interesting part was the coordinate-style positioning. Using (x, y) coordinates from a corner seemed to work about as well as "mm from each edge" descriptions. The chamfers appeared on most edges but not all. Two edges on the bottom face were missed.</p>
<p>This was a deliberate test of coordinate-style prompting. The result suggests the AI can parse coordinates, but with the same positional drift that affects other reference styles. No magic bullet for positioning accuracy.</p>
<p>Fix time: about three minutes for hole positions and missing chamfers.</p>
<h2>Example 6: pipe fitting adapter</h2>
<p>Prompt: "Cylindrical pipe adapter. One end: outer diameter 25mm, inner diameter 20mm, 15mm long. Other end: outer diameter 32mm, inner diameter 26mm, 15mm long. Transition section between the two ends: 10mm long, smooth taper on outer diameter, stepped inner diameter changing at the midpoint of the transition."</p>
<p>Result: poor. The AI produced a vaguely cylindrical shape with two different diameters, but the transition section was a mess. Instead of a smooth taper on the outside with a stepped bore on the inside, I got something that looked like two cylinders crudely joined with a fillet that was trying to do both jobs at once. The inner bore was continuous rather than stepped. The outer taper existed but wasn't smooth in the way I'd call "machineable."</p>
<p>Pipe fittings involve features that reference each other across a transition, and the AI struggled with the two different things happening (taper outside, step inside) in the same region. The outer and inner profiles need to be generated with different operations that share the same reference axis, and the AI apparently doesn't decompose the problem that way.</p>
<p>Fix time: longer than modeling from scratch. I scrapped it after five minutes of trying to salvage the transition section.</p>
<h2>Example 7: phone stand</h2>
<p>Prompt: "Phone stand. Base plate 80mm x 60mm x 5mm. Angled support rising from one long edge, 60mm wide, 3mm thick, angled at 70 degrees from horizontal, 100mm long along the angle. 5mm lip at the bottom of the angled support, perpendicular to the support surface, to hold the phone. 3mm fillet where the angled support meets the base."</p>
<p>Result: decent. This was the prompt where I expected the AI to struggle with the angle, and it surprised me. The base plate was correct. The angled support appeared at something close to 70 degrees (I measured approximately 68 degrees). It was the right width and roughly the right length. The lip at the bottom was present, which I wasn't confident about, though it was 4mm instead of 5mm. The fillet at the base joint was there and approximately correct.</p>
<p>The overall shape would work as a phone stand. It wouldn't win any design awards, but if I printed it in PLA it would hold a phone. The angle being 2 degrees off doesn't matter for this application. The lip being 1mm short doesn't matter. This is the kind of part where "close enough" is genuinely close enough.</p>
<p>Fix time: one minute to adjust the lip height. I'd use the rest as-is for 3D printing.</p>
<h2>Example 8: heat sink</h2>
<p>Prompt: "Rectangular heat sink base, 40mm x 40mm x 3mm. Nine rectangular fins on the top surface, each 40mm long, 1mm thick, 15mm tall, evenly spaced across the 40mm width."</p>
<p>Result: partial. The base plate was correct. Fins appeared on top, which was good. But only seven fins were generated instead of nine, and the spacing wasn't even. The fins that did appear were approximately 1mm thick and approximately 15mm tall, with two of them visibly shorter than the others. The fin thickness varied between about 0.8mm and 1.2mm.</p>
<p>Heat sinks test the AI's ability to generate repeated thin features, and the result tells you that repetition at fine dimensions is unreliable. The AI seems to lose count or lose dimensional consistency after about six or seven repeated features. For a heat sink you'd actually use, you'd want a fin pattern that's precise enough to calculate thermal resistance, and this isn't it.</p>
<p>Fix time: about seven minutes to delete the uneven fins and recreate them as a proper rectangular pattern. At that point I was basically re-modeling everything above the base plate.</p>
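<p>The fin layout the prompt asks for is easy to compute by hand, which makes the drift easy to spot after import. A minimal sketch, assuming the outer fins sit flush with the base edges (the prompt leaves that ambiguous):</p>

```python
def fin_positions(width_mm, n_fins, fin_thickness_mm):
    """Left-edge x positions for n fins evenly spaced across a base,
    assuming the outermost fins sit flush with the base edges."""
    pitch = (width_mm - fin_thickness_mm) / (n_fins - 1)  # center-to-center
    return [round(i * pitch, 3) for i in range(n_fins)], pitch

positions, pitch = fin_positions(40, 9, 1)
# pitch = 4.875 mm, so the air gap between adjacent fins is 3.875 mm
```

<p>Nine 1mm fins across 40mm works out to a 4.875mm center-to-center pitch, so a single measurement against that pitch exposes both the missing fins and the uneven spacing.</p>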
<h2>Example 9: simple hinge</h2>
<p>Prompt: "Two-piece hinge. Leaf one: 40mm x 30mm x 2mm flat plate with two cylindrical knuckles on one long edge, each knuckle 6mm outer diameter, 3mm inner diameter (for hinge pin), 8mm long, positioned 5mm from each end of the edge. Leaf two: 40mm x 30mm x 2mm flat plate with one cylindrical knuckle centered on the same long edge, 6mm outer diameter, 3mm inner diameter, 14mm long, positioned to interleave with leaf one's knuckles."</p>
<p>Result: failed. I knew this was ambitious. A hinge requires two parts that mate together with precise interleaving geometry, and I asked for it in a single prompt despite my own advice about one part per prompt. I wanted to see what would happen.</p>
<p>What happened was a single solid body that was vaguely hinge-shaped, with knuckle-like cylinders on one edge that were fused to the plate rather than being separate interleaving pieces. There were two plates, but they were joined at the hinge line as one body. The cylinders existed but didn't have bores. The whole thing was art, not engineering.</p>
<p>Even generating the leaves separately would be tricky because the interleaving geometry requires precise positional coordination between two parts. This is the kind of thing that's easy to model in CAD (sketch, revolve, pattern) but hard to describe in text because the relationship between the two parts is geometric, not verbal.</p>
<p>Fix time: not attempted. This is a from-scratch job.</p>
<h2>Example 10: cable clip</h2>
<p>Prompt: "C-shaped cable clip for 6mm cable. Outer diameter 10mm, inner diameter 6.5mm, wall thickness 1.75mm, opening gap 4mm at the top. Flat mounting tab extending 8mm below the clip, 10mm wide, 2mm thick, with a 3.5mm through-hole centered on the tab, 4mm from the bottom edge."</p>
<p>Result: good. The C-profile appeared with the correct proportions. Inner diameter measured 6.5mm. Outer diameter measured about 10.2mm, close enough. The gap was approximately 4mm. The mounting tab was present, correct width, correct thickness, with the hole in the right place. The transition from the curved clip to the flat tab was clean.</p>
<p>Cable clips are one of text-to-CAD's success stories. The geometry is simple, the features are well-defined, and there aren't many things to get wrong. I've printed clips from text-to-CAD output several times and they work fine. The tolerances don't need to be tight for cable management, and the worst that happens if the gap is slightly off is you flex the clip a little more when snapping the cable in.</p>
<p>Fix time: zero. Exported directly for 3D printing.</p>
<h2>What the results say</h2>
<p>Out of ten prompts, three produced output I'd use with zero or minimal fixes (plate, cable clip, phone stand). Two produced output that needed moderate fixes but still beat starting from scratch (L-bracket, mounting bracket). Two produced output that required enough fixing to question the time savings (enclosure, heat sink). Three produced output that was unusable (gear, pipe fitting, hinge).</p>
<p>The pattern is clear, and it's the same pattern I see every time I use these tools. Simple prismatic geometry with holes, chamfers, and fillets works well. Basic brackets and plates are reliable. Features that involve precise mathematical curves (gears), multi-body relationships (hinges), complex transitions (pipe fittings), or many repeated thin elements (heat sink fins) don't work.</p>
<p>If your parts live in the bracket-plate-clip universe, text-to-CAD is a genuine time saver. If your parts live in the gear-fitting-assembly universe, save yourself the trouble and model them in CAD directly. Knowing which universe your part lives in before you start typing is the most important skill in text-to-CAD, and it has nothing to do with prompt engineering.</p>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the broader technology and where it's headed. The <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> post compares the platforms if you want to try this yourself. And the <a href="/posts/best-prompts-for-text-to-cad">best prompts for text-to-CAD</a> post collects the prompts I've found most reliable for the categories that actually work.</p>
<p>My Saturday gave me seven STEP files worth keeping, three worth deleting, and one cold pot of coffee. By the standards of experimental engineering, that's a pretty good ratio. By the standards of text-to-CAD, it's about what you should expect. The tools are useful for the things they're good at and hopeless at the things they're not, and that line between useful and hopeless is drawn exactly where simple geometry ends and real complexity begins.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How to generate CAD from text: the honest version</title>
      <link>https://blog.texocad.ai/posts/how-to-generate-cad-from-text</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/how-to-generate-cad-from-text</guid>
      <pubDate>Tue, 03 Feb 2026 00:00:00 GMT</pubDate>
      <description>You can generate CAD geometry from a text description. The honest part is that you&apos;ll spend more time fixing the output than writing the prompt.</description>
      <dc:creator>TexoCAD</dc:creator>

      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> To generate CAD from text, use a text-to-CAD tool like Zoo.dev or CADAgent. Write a specific prompt with exact dimensions, generate the model, export as STEP, then import into SolidWorks or Fusion 360 to fix inaccuracies. Budget 3x the generation time for cleanup.</p>
<p>Last Tuesday I was trying to show a colleague how text-to-CAD works. I typed a perfectly reasonable prompt into Zoo.dev, something like "rectangular enclosure, 100mm by 60mm by 40mm, 2mm wall thickness, four M3 mounting holes in the corners." The model came back in about fifteen seconds. My colleague looked impressed. Then I opened the STEP file in Fusion 360, measured the wall thickness, and it was 3mm. Not 2mm. I fixed it in about a minute, but the look on his face had already shifted from "wow" to "ah." That shift is the honest version of text-to-CAD in one facial expression.</p>
<p>You can absolutely generate CAD geometry from a text description. The tools exist, they work, and for simple parts they save real time. The honest part is that "generate" is the easy half of the sentence. The other half is "verify, fix, and finish," and that half takes longer than anyone's demo suggests.</p>
<h2>The actual process</h2>
<p>The basic flow is the same regardless of which tool you use:</p>
<ol>
<li>Write a text description of the part you want, with specific dimensions.</li>
<li>Feed it to a text-to-CAD tool.</li>
<li>Wait ten to thirty seconds.</li>
<li>Inspect the result.</li>
<li>Export as STEP.</li>
<li>Import into your real CAD software.</li>
<li>Measure everything that matters.</li>
<li>Fix what's wrong.</li>
<li>Add what's missing.</li>
</ol>
<p>Steps 7 through 9 are where the actual work lives. Steps 1 through 6 are the part that looks good in a demo.</p>
<h2>Choosing a tool</h2>
<p>Right now, in early 2026, your practical options are:</p>
<p><a href="https://zoo.dev">Zoo.dev</a> is the most developed dedicated text-to-CAD platform. It runs on a custom GPU-native geometric kernel, outputs real B-Rep geometry as STEP files, and has a free tier generous enough to test properly. For a full walkthrough, I wrote a <a href="/posts/how-to-use-text-to-cad">how to use text-to-CAD</a> guide, and the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the broader landscape.</p>
<p>CADAgent is an open-source Fusion 360 add-in that generates models inside Fusion itself, which means the output has actual feature history. You bring your own Anthropic API key. It's the most promising approach for integration with a real CAD workflow, because the geometry is created using Fusion's own modeling commands rather than generated externally and imported.</p>
<p>AdamCAD generates parametric models with adjustable dimension sliders. Faster for quick prototyping, but the parametric controls are limited compared to a native feature tree.</p>
<p>CADScribe outputs STEP and STL files and has gotten some traction with the 3D printing crowd. Mixed results on anything beyond simple geometry.</p>
<p>The <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> comparison covers all of these with more detail on pricing, capabilities, and limitations.</p>
<p>For this walkthrough, I'll use Zoo.dev because it's the most accessible starting point and produces the cleanest STEP output.</p>
<h2>Writing a prompt that works</h2>
<p>This is the part that separates a useful generation from a waste of time. The AI responds to specificity. Vague descriptions produce vague parts. Precise descriptions with explicit dimensions produce parts that at least attempt to be what you asked for.</p>
<p>Bad prompt: "make a bracket for a sensor."</p>
<p>Better prompt: "L-bracket, 3mm thick, 45mm tall leg, 35mm base leg. Two M4 clearance holes on the base, spaced 25mm apart, centered on the base width. One M4 clearance hole on the tall leg, centered at 30mm height."</p>
<p>The difference in output quality between those two prompts is enormous. The first one gives you whatever the AI thinks a sensor bracket looks like. The second one gives you something with actual engineering intent behind it.</p>
<p>Rules I've learned after months of this:</p>
<p>Always include units. Always. If you say "50" the AI might interpret that as millimeters, inches, or something in between. Say "50mm."</p>
<p>Name standard features explicitly. "M4 clearance hole" works better than "4mm hole" because the AI maps it to the correct clearance diameter. "Mounting boss" works better than "raised section with a hole."</p>
<p>Describe positions in absolute terms. "15mm from the left edge" beats "near the left edge." "Centered on the 40mm dimension" beats "roughly in the middle."</p>
<p>One part per prompt. Don't try to generate an assembly. Generate each piece separately with compatible dimensions.</p>
<p>The <a href="/posts/text-to-cad-tutorial">text-to-CAD tutorial</a> goes into more detail on prompt structure, and the prompt engineering post explores the nuances of phrasing that consistently produces better geometry.</p>
<h2>What happens after generation</h2>
<p>You've typed your prompt, waited the fifteen seconds, and the viewport shows something that looks roughly like your part. Now the work begins.</p>
<p>Export as STEP. Not STL, not OBJ, not glTF. STEP. This is the format that gives you real solid geometry with selectable faces and measurable edges. Everything else is either a mesh (useful for 3D printing, useless for engineering edits) or a visualization format (useful for nothing you care about right now).</p>
<p>Open the STEP file in Fusion 360, SolidWorks, or whatever you actually model in. The file should import as a solid body. If your CAD software shows a mesh import warning instead of a solid body, something went wrong with the export format.</p>
<p>Now measure. Every dimension you specified in the prompt, check it. I keep a simple mental checklist:</p>
<p>Is the overall size correct? Length, width, height. Check all three.</p>
<p>Are the holes the right diameter? Select the hole edges and verify. M4 clearance should be about 4.3mm to 4.5mm. If it's 4.0mm, that's not clearance, that's a press fit nobody asked for.</p>
<p>Are the features in the right positions? Measure from edges to hole centers. Check spacing between holes. Verify symmetry if you asked for it.</p>
<p>Is it actually a solid? Sometimes the geometry looks closed but has an invisible gap or an internal face that makes it technically a surface body rather than a solid. Check your CAD software's body type indicator.</p>
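<p>The hole check lends itself to a lookup. A sketch using nominal ISO 273 medium-fit clearance diameters; treat the table values as assumptions recalled from the standard and verify them before relying on them:</p>

```python
# Nominal ISO 273 medium-fit clearance hole diameters (mm).
# Assumed values - check against the standard before trusting them.
CLEARANCE_MEDIUM = {"M2.5": 2.9, "M3": 3.4, "M4": 4.5, "M5": 5.5, "M6": 6.6}

def check_clearance(thread, measured_mm, tol=0.25):
    """Flag a measured hole that is at or below the nominal thread
    diameter (a press fit, not clearance) or far from the medium fit."""
    nominal_thread = float(thread[1:])
    if measured_mm <= nominal_thread:
        return "press fit - re-drill"
    if abs(measured_mm - CLEARANCE_MEDIUM[thread]) > tol:
        return "off-spec - check"
    return "ok"

check_clearance("M4", 4.0)   # press fit - re-drill
check_clearance("M4", 4.3)   # ok
```

<p>The tolerance band is a judgment call: 0.25mm accepts anything from a close fit to a medium fit, which matches the 4.3mm-to-4.5mm range that passes inspection above.</p>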
<p>On a typical simple part, I find one or two things that need fixing. A dimension off by a millimeter. A hole that shifted slightly. A missing fillet or chamfer I forgot to include in the prompt. On a more complex part, the fix list grows to the point where I'm essentially rebuilding it, and starting from scratch would have been faster.</p>
<h2>The cleanup reality</h2>
<p>Here's the thing nobody puts in the demo: cleanup is where text-to-CAD time actually goes. The generation takes fifteen seconds. The export and import take another thirty seconds. The measuring and fixing take five to twenty minutes, depending on complexity and accuracy.</p>
<p>For a simple mounting plate with holes on a bolt pattern, cleanup might be just checking dimensions and adding corner radii I forgot to specify. Total saved time versus modeling from scratch: maybe five minutes. Worth it.</p>
<p>For a moderately complex part, say an enclosure with bosses, a lip, screw posts, and vent slots, the generated geometry usually gets the overall shape right but misses the details. The bosses aren't the right height. The lip doesn't have the right clearance for a mating lid. The vent slots are decorative rather than dimensioned. By the time I've fixed all of that, I've spent more time than modeling from scratch would have taken, plus I'm working with imported geometry that doesn't have a clean feature tree.</p>
<p>This is not a condemnation of the technology. It's a calibration of expectations. For the right kind of part, text-to-CAD is genuinely faster. For the wrong kind, it's a scenic detour. Learning to tell the difference is the skill that matters.</p>
<h2>What kinds of parts actually work</h2>
<p>Simple prismatic geometry. If the part is fundamentally a combination of rectangular extrusions, cylindrical holes, fillets, and chamfers, text-to-CAD handles it. Brackets, plates, standoffs, basic housings, spacers, adapter plates with bolt patterns. These are the bread-and-butter parts that make up a surprising amount of mechanical design work, and they're exactly where the tools deliver.</p>
<p>What doesn't work: anything where features have complex relationships. Gears with involute profiles. Sheet metal with bend allowances. Snap-fit features with undercuts. Draft angles for injection molding. Lofted surfaces. Swept profiles. Multi-body assemblies. The tools will attempt some of these and produce geometry that looks plausible at a glance but falls apart the moment you try to manufacture or assemble it.</p>
<p>If you're not sure whether your part is in the "works" category, ask yourself: could I describe every feature with a dimension and a position, without needing to reference curves, surfaces, or relationships between moving parts? If yes, try it. If no, model it by hand.</p>
<h2>The bigger picture</h2>
<p>Text-to-CAD is not a replacement for knowing how to use CAD software. It's a first-draft generator. The useful analogy is autocomplete: it gets you started faster, but you still need to know what a correct sentence looks like.</p>
<p>Where it fits in my workflow: I use it for simple parts that I'd otherwise spend ten minutes sketching and extruding. Sensor brackets, test fixture plates, cable routing clips, mounting adapters. Parts where the geometry is trivial but still takes time to draw from nothing. On those parts, text-to-CAD saves me a few minutes each, which adds up across a week of prototyping work.</p>
<p>Where it doesn't fit: anything going to a machine shop, anything with tolerances that matter, anything that mates with another part in an assembly with clearance fits. For those, I model from scratch in Fusion 360, because I need control over the feature tree, proper constraints, and the ability to update dimensions without reimporting a new STEP file.</p>
<p>The honest version of how to generate CAD from text is: you generate it, you verify it, and you fix it. The generation is fast and occasionally impressive. The verification and fixing are the work. Budget your time accordingly, and the tool is genuinely useful. Expect it to produce finished parts, and you'll spend your afternoon arguing with imported geometry instead of designing.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Best prompts for text-to-CAD: what I&apos;ve learned so far</title>
      <link>https://blog.texocad.ai/posts/best-prompts-for-text-to-cad</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/best-prompts-for-text-to-cad</guid>
      <pubDate>Sun, 01 Feb 2026 00:00:00 GMT</pubDate>
      <description>After hundreds of text-to-CAD prompts, patterns emerge. Specific dimensions beat vague descriptions. Simple geometry beats ambitious complexity. Here are the prompts that actually work.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>prompts</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> The best text-to-CAD prompts specify exact dimensions in mm, name standard features (counterbore, chamfer, fillet radius), describe one part at a time, and include manufacturing context. Example: &apos;Rectangular plate 80x50x5mm with four M4 counterbore holes at corners, 5mm from edges, with 2mm edge chamfers.&apos;</p>
<p>Around prompt number fifty, I started keeping a spreadsheet. Nothing fancy. Just the prompt text, which tool I used, a rating from 1 to 5 on how usable the output was, and a short note about what went right or wrong. I did this because I was losing track of which phrasing produced good results and which phrasing produced the geometric equivalent of a drunk guess. My desk had a growing folder of STEP files labeled things like "bracket_v7_better.step" and "bracket_v8_actually_worse.step" and I needed to stop relying on my memory, which is unreliable at the best of times and completely useless after six hours of staring at imported solids.</p>
<p>After a few hundred prompts, logged and rated, patterns show up. Some types of prompts consistently produce parts worth importing. Others consistently produce garbage, and no amount of rewording fixes them. This post is the good ones. The prompts that have actually worked for me, with enough context that you can adapt them to your own parts.</p>
<p>For the theory behind why these work, the <a href="/posts/text-to-cad-prompt-engineering">text-to-CAD prompt engineering</a> post covers the principles. This is the recipe book.</p>
<h2>The plate prompt</h2>
<p>This is the one I come back to most often, because plates with holes are the bread and butter of fixturing, test rigs, and prototype assemblies. It's also the category where text-to-CAD is most reliable.</p>
<p>Prompt: "Rectangular plate, 80mm x 50mm x 5mm. Four M4 counterbore holes at the corners, hole centers 5mm from each edge. Counterbore diameter 8mm, counterbore depth 3mm, through-hole diameter 4.3mm. 2mm x 45° chamfer on all edges of the top face."</p>
<p>This prompt consistently produces a usable part. The plate dimensions come back close to correct. The counterbores are usually the right diameter and depth. The hole positions are within a millimeter of where they should be, which is close enough to fix quickly. The chamfers sometimes get applied to the wrong edges or only to some edges, which is a common text-to-CAD failure mode, but the overall part is 85-90% correct.</p>
<p>Why it works: every dimension is explicit. The feature type (counterbore) is named correctly. The hole positions are given as distances from edges, which is unambiguous. The chamfer spec uses standard notation.</p>
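<p>Because the hole positions are given as distances from edges, the expected centers are trivially computable, which makes the post-import measurement mechanical rather than judgmental. A small helper (my own naming, purely illustrative):</p>

```python
def corner_hole_centers(length_mm, width_mm, edge_offset_mm):
    """(x, y) centers for four corner holes, each offset from both edges,
    with the origin at the bottom-left corner of the plate."""
    xs = (edge_offset_mm, length_mm - edge_offset_mm)
    ys = (edge_offset_mm, width_mm - edge_offset_mm)
    return [(x, y) for x in xs for y in ys]

corner_hole_centers(80, 50, 5)
# [(5, 5), (5, 45), (75, 5), (75, 45)]
```

<p>Measuring each generated hole center against those four coordinates tells you immediately whether the output landed inside the one-millimeter drift band described above.</p>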
<h2>The L-bracket prompt</h2>
<p>Brackets are the second most common thing I generate with text-to-CAD, and L-brackets specifically are well within the AI's comfort zone.</p>
<p>Prompt: "L-shaped bracket, 3mm thick. Vertical leg 40mm tall by 30mm wide. Horizontal leg 50mm long by 30mm wide. Inside bend radius 3mm. Two 4.2mm through-holes on the vertical leg, centered horizontally, 10mm and 30mm from the top edge. Two 4.2mm through-holes on the horizontal leg, centered across the width, 10mm and 40mm from the bend."</p>
<p>Typical result: the overall shape is correct. The thickness is spot on. Leg dimensions are usually within a millimeter. The holes are close but I've seen the spacing drift by up to 2mm, especially on the horizontal leg where the "from the bend" reference seems to confuse some tools. The bend fillet appears most of the time, though occasionally it comes out as a sharp corner. Fix time is usually under three minutes.</p>
<p>Why it works: the L-shape is described by its legs rather than as an abstract shape. Each leg has its own dimensions. Hole positions reference specific edges. The bend radius is specified.</p>
<h2>The standoff prompt</h2>
<p>Small cylindrical parts test whether the AI can produce clean bodies of revolution, and standoffs are simple enough that the results are usually good.</p>
<p>Prompt: "Cylindrical standoff, 15mm outer diameter, 25mm tall. M4 internal thread (4.2mm through-hole for clearance). 1mm chamfer on both ends of the outer diameter."</p>
<p>The through-hole comes back correct almost every time. The outer diameter is reliable. The height is usually right. The chamfers are the weak point, sometimes only applied to one end or not at all. No tool I've tested actually generates thread geometry, so you get a smooth bore, which is what I expected. Fix time is under a minute if the chamfers need redoing.</p>
<p>Why it works: simple geometry with a single axis of symmetry. All dimensions are absolute. The feature list is short.</p>
<h2>The electronics enclosure prompt</h2>
<p>This is where things get harder, and the prompt has to work harder to compensate.</p>
<p>Prompt: "Rectangular open-top box, outer dimensions 80mm x 50mm x 25mm. Wall thickness 2mm, bottom thickness 2mm. Four M2.5 boss features at the inside corners, boss outer diameter 5mm, boss height 20mm (from inside bottom), with 2.8mm through-holes centered on each boss. Two M3 mounting tabs extending 8mm from the bottom of the long sides, centered on each side, 3mm thick, with 3.5mm through-holes centered on each tab."</p>
<p>This is the longest prompt in my regular rotation, and the results are the most inconsistent. About half the time, the box comes out with correct outer dimensions and wall thickness. The bosses appear most of the time but their positions are sometimes wrong, floating near the corners rather than actually at the corners. The mounting tabs are the most common failure: they either don't appear, appear on the wrong faces, or appear at the wrong dimensions.</p>
<p>On a good generation, this prompt saves me ten minutes of modeling. On a bad generation, I throw it away and model from scratch. The hit rate is about 50-50, which I'll admit makes it borderline whether the prompt is worth bothering with. I keep using it because the good results save real time and the bad results are obvious within thirty seconds of importing.</p>
<p>Why it sometimes works: the dimensions are specific and the features are described in detail. Why it sometimes doesn't: the AI struggles with features that reference the inside of a shell, and boss placement at corners requires understanding containment relationships that current tools handle unreliably.</p>
<h2>The mounting plate prompt</h2>
<p>A simpler version of the plate prompt, optimized for the case where you need a flat part with a specific bolt pattern and a central feature.</p>
<p>Prompt: "Square plate, 60mm x 60mm x 3mm. Central through-hole 22mm diameter, centered. Four 3.2mm through-holes on a 31mm square bolt pattern, centered on the plate. 1mm chamfer on all edges of both faces."</p>
<p>This is my NEMA 17 motor mount prompt, and I've used it enough times that I know its failure modes by heart. The central hole is always correct. The plate dimensions are always close. The bolt pattern comes back within a millimeter of 31mm about 70% of the time, and off by 2-3mm about 30% of the time. The chamfers sometimes only appear on the top face. Fix time: two to four minutes, mostly spent correcting the bolt pattern when it drifts.</p>
<h2>The cable clip prompt</h2>
<p>I keep this one around for quick desk and test rig accessories. Cable management parts are the guilty pleasure of text-to-CAD because they're simple, low-stakes, and satisfying when they work.</p>
<p>Prompt: "Cable clip for 8mm cable. C-shaped profile, 12mm outer diameter, 8.5mm inner diameter, 3mm wall thickness, opening gap 5mm. Base tab extending 10mm below the clip, 12mm wide, 2mm thick, with a 3.5mm mounting hole centered on the tab, 5mm from the bottom edge."</p>
<p>This produces usable geometry about 75% of the time. The C-profile is recognizable. The inner and outer diameters are usually close. The opening gap varies more than I'd like, sometimes 4mm, sometimes 6mm. The base tab is the most reliable part. Overall, it's the kind of part where "close enough" actually is close enough, especially for 3D printing.</p>
<h2>The shelf bracket prompt</h2>
<p>A slightly more complex bracket that tests the AI's ability to handle three orthogonal features.</p>
<p>Prompt: "Right-angle shelf bracket. Vertical plate 60mm tall, 40mm wide, 3mm thick. Horizontal shelf 50mm deep, 40mm wide, 3mm thick. Triangular gusset connecting the vertical plate to the shelf, 30mm along each leg, 3mm thick. Two 5mm through-holes on the vertical plate, centered horizontally, 10mm and 50mm from the bottom. One 5mm through-hole on the horizontal shelf, centered across the width, 25mm from the back edge."</p>
<p>The gusset is what makes this interesting. About half the time, the AI includes it with roughly correct dimensions. The other half, it either omits the gusset entirely or produces something that vaguely resembles a triangle but isn't attached properly to both plates. When the gusset works, the part is genuinely useful. When it doesn't, the bracket is still a valid L-bracket with holes, just missing the reinforcement.</p>
<h2>Prompt patterns that consistently work</h2>
<p>After logging all these prompts and results, here are the patterns I've noticed.</p>
<p>Describe the overall shape first, then features. The AI seems to build the base geometry from the first part of the prompt and add features from the rest. Starting with "rectangular plate, 80mm x 50mm x 5mm" and then adding holes works better than starting with "a part with four holes and chamfers that is 80mm by 50mm."</p>
<p>Use absolute positions, not relative ones. "10mm from the left edge" works better than "evenly spaced" or "near the corners." Every time I use relative language, the AI interprets "near" differently than I do.</p>
<p>Reference edges, not abstractions. "Centered across the 30mm width" works better than "centered horizontally," because "horizontally" depends on orientation. Edge references are unambiguous.</p>
<p>State repeated features as a counted pattern. "Four M4 through-holes" is better than listing four holes individually. Pattern language matches CAD operations and produces more consistent spacing.</p>
<p>State symmetry explicitly. "Two slots, symmetric about the vertical center plane" produces more consistent results than describing two slots with mirrored coordinates.</p>
<p>Keep prompts under about 100 words for simple parts and under about 150 words for complex ones. Beyond that length, the AI starts losing track of earlier details. If your prompt needs to be longer than 150 words, the part might be too complex for current text-to-CAD tools.</p>
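<p>To make these patterns concrete, here's a small Python sketch of a prompt builder that enforces the shape-first order, absolute millimeter dimensions, and the word budget. The helper and its names are my own invention for illustration, not any tool's API.</p>

```python
def build_prompt(base_shape, dims_mm, features, max_words=150):
    """Assemble a text-to-CAD prompt: overall shape and absolute mm
    dimensions first, then one sentence per feature. Hypothetical
    helper for illustration, not a vendor API."""
    dim_text = ", ".join(f"{name} {value}mm" for name, value in dims_mm.items())
    sentences = [f"{base_shape}, {dim_text}."] + [f"{feat}." for feat in features]
    prompt = " ".join(sentences)
    word_count = len(prompt.split())
    if word_count > max_words:
        # Past roughly 150 words, the model starts losing earlier details
        raise ValueError(f"prompt is {word_count} words, over the {max_words}-word budget")
    return prompt

prompt = build_prompt(
    "Rectangular plate",
    {"length": 80, "width": 50, "thickness": 5},
    ["Four M4 through-holes (4.3mm), one 10mm from each corner in both directions",
     "1mm chamfer on all edges of the top face"],
)
```

<p>The point isn't the code. It's that a prompt is worth treating as a structured artifact you can template and reuse, rather than prose freehanded into a text box.</p>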
<h2>Prompt patterns that consistently fail</h2>
<p>Vague sizing. "A small bracket," "a medium-sized plate," "about yay big." The AI has no idea what these mean and neither does your machinist.</p>
<p>Relative positioning without anchors. "The holes should be evenly spaced" without saying how many, what the spacing is, or what they're evenly spaced relative to.</p>
<p>Multi-part descriptions. "A box with a snap-fit lid" always fails. Always. Generate them separately.</p>
<p>Functional descriptions instead of geometric ones. "A bracket that can hold 5kg" tells the AI nothing about geometry. It doesn't know physics. It doesn't know load paths. Describe the shape, not the job.</p>
<p>Complex curves. "An airfoil shape" or "a smooth organic transition" or anything involving splines. Current tools don't generate reliable freeform surfaces from text.</p>
<p>Referencing standards without dimensions. "An M5 counterbore" works sometimes because the AI has seen M5 specifications in training data. "A counterbore for a #10-32 socket head cap screw" leans on a lookup table the model doesn't reliably reproduce, and it usually fails. When in doubt, provide the actual dimensions rather than expecting the AI to look up standards.</p>
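<p>These failure patterns are mechanical enough to lint for before you burn a generation on them. A rough Python sketch; the heuristics and word lists are my own illustrative guesses, not anything validated:</p>

```python
import re

# Words and phrases that signal the failure patterns above; illustrative, not exhaustive.
VAGUE_SIZES = ("small", "medium", "large", "about", "roughly")
UNANCHORED = ("evenly spaced", "near the", "some holes")
MULTI_PART = ("lid", "assembly", "snap-fit", "mating")

def lint_prompt(prompt: str) -> list:
    """Return warnings for prompt patterns that usually fail."""
    p = prompt.lower()
    warnings = []
    if not re.search(r"\d+(\.\d+)?\s*mm", p):
        warnings.append("no explicit mm dimensions")
    for word in VAGUE_SIZES:
        if re.search(rf"\b{word}\b", p):
            warnings.append(f"vague sizing word: '{word}'")
    for phrase in UNANCHORED:
        if phrase in p:
            warnings.append(f"relative position without an anchor: '{phrase}'")
    for word in MULTI_PART:
        if word in p:
            warnings.append(f"possible multi-part description: '{word}'")
    return warnings
```

<p>A clean prompt like the plate example comes back with no warnings; "a small bracket with the holes evenly spaced" trips three of them, which is about right.</p>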
<h2>My personal favorites</h2>
<p>If I had to pick five prompts to demonstrate text-to-CAD to someone who's never tried it, I'd use these:</p>
<ol>
<li>
<p>The plate prompt from above. It works 90% of the time and demonstrates that the tool can produce genuinely useful geometry.</p>
</li>
<li>
<p>The L-bracket prompt. Shows that the AI handles multi-feature parts with position references.</p>
</li>
<li>
<p>"Cylinder, 20mm diameter, 40mm tall, with a 6mm through-hole centered on the axis and a 1mm chamfer on both ends of the outer diameter." Simple body of revolution. Works nearly every time.</p>
</li>
<li>
<p>"Rectangular tube, outer dimensions 30mm x 20mm x 80mm long. Wall thickness 2mm. Open on both ends." Tests shell geometry without complex features. Usually correct.</p>
</li>
<li>
<p>The cable clip prompt. Shows that the AI can handle C-shaped profiles and attached tabs.</p>
</li>
</ol>
<p>For more prompt/output pairs with detailed analysis, the <a href="/posts/text-to-cad-examples">text-to-CAD examples</a> post shows ten prompts and what they actually produced. For the underlying theory, the <a href="/posts/text-to-cad-prompt-engineering">text-to-CAD prompt engineering</a> post explains why these patterns work. And the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> puts all of this in the context of the broader technology and workflow.</p>
<p>The honest bottom line: good prompts produce good starting points, not finished parts. But the difference between a good starting point and a bad one is the difference between a three-minute fix and a ten-minute rebuild. Over a week of prototyping, those minutes add up. My spreadsheet says I've saved about four hours total across two months of regular use. Not life-changing. But not nothing, either. And the prompts keep getting better as I learn what the tools respond to. That's the part that keeps me trying the next one, even when the last one came back looking like the AI had a bad day.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Zoo vs AdamCAD vs CADGPT: which text-to-CAD tool to use</title>
      <link>https://blog.texocad.ai/posts/zoo-vs-adamcad-vs-cadgpt</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/zoo-vs-adamcad-vs-cadgpt</guid>
      <pubDate>Sun, 01 Feb 2026 00:00:00 GMT</pubDate>
      <description>Three tools, three different approaches to AI-assisted CAD. Zoo generates geometry. AdamCAD gives you parametric sliders. CADGPT writes scripts. They&apos;re not really competing.</description>
      <dc:creator>TexoCAD</dc:creator>

      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Zoo.dev generates B-Rep CAD models from text prompts (best for engineering geometry). AdamCAD creates parametric models with adjustable sliders (best for quick simple parts). CADGPT writes automation scripts, not models (best for CAD scripting). Choose based on whether you need geometry, parameters, or code.</p>
<p>I spent a Friday afternoon running the same prompt through three different tools. A simple L-bracket, 3mm aluminum, 40mm legs, four M4 clearance holes on a 25mm spacing. Nothing fancy. The kind of part you'd model in Fusion 360 in about eight minutes, which is how I know what the result should look like.</p>
<p>Zoo.dev gave me a STEP file with real B-Rep geometry. AdamCAD gave me an STL with sliders to adjust the dimensions. CADGPT gave me an AutoLISP script. Three different answers to the same question, and none of them wrong exactly, just aimed at different problems. It was like asking three people for directions and getting a map, a bus schedule, and a bicycle.</p>
<p>That experience convinced me these tools aren't really competing with each other, even though they all get lumped under "text-to-CAD." If you're trying to choose between them, the answer depends entirely on what you're trying to do. So let me save you the Friday afternoon.</p>
<h2>Zoo.dev: the geometry engine</h2>
<p>Zoo.dev is the one I reach for when I need actual engineering geometry. It runs on a GPU-native kernel called KittyCAD, and the output is real B-Rep: STEP, glTF, OBJ, STL, and several other formats. The free tier is generous enough to actually evaluate the tool, which I appreciate after years of CAD vendors hiding everything behind a sales call.</p>
<p>The workflow is simple. You type a description, wait a few seconds, and get a solid body. The quality depends heavily on how specific your prompt is. "A bracket" gets you whatever the AI's idea of a bracket looks like, which is usually vaguely correct and specifically useless. "A 90-degree L-bracket, 3mm thick, 40mm legs, with two 4.2mm clearance holes per leg spaced 25mm apart, 10mm from the edge" gets you something much closer to what you need. <a href="/posts/text-to-cad-guide">Prompt specificity matters</a> more than anything else with this tool.</p>
<p>What Zoo does well: simple mechanical parts. Brackets, enclosures, mounting plates, standoffs. Geometry that's mostly prismatic with standard features. I've gotten results that needed only minor cleanup in Fusion 360 before they were usable. On a good day, it saves me ten to twenty minutes of sketch-extrude-fillet work.</p>
<p>What Zoo does poorly: anything complex. Gears with functional tooth profiles. Assemblies. Sheet metal with bend allowances. Parts with lots of interdependent features. The accuracy drifts as complexity rises, and by the time you're fixing every dimension manually, you might as well have modeled it from scratch.</p>
<p>The biggest advantage of Zoo is the STEP output. A STEP file opens in SolidWorks, Fusion 360, Creo, NX, or basically any real CAD tool. You get faces, edges, and topology you can actually work with. That makes Zoo a starting-point tool for engineers. Not a finishing tool, but a legitimate starting point. I wrote more about how it performs on specific tasks in the <a href="/posts/zoo-text-to-cad-review">Zoo text-to-CAD review</a>.</p>
<h2>AdamCAD: the parametric slider approach</h2>
<p>AdamCAD takes a different angle. You describe a part, the AI generates it, and you get not just an STL file but dimension sliders you can adjust after the fact. Change the height, tweak the hole diameter, adjust the wall thickness. It's parametric in a limited sense, closer to a configurator than a full feature tree, but for simple parts it's surprisingly useful.</p>
<p>The pricing starts at $5.99 a month, which is cheap enough that you don't feel hostile toward the tool before you've even tried it. I tested it with the same bracket prompt, and the output was decent. Not as geometrically precise as Zoo's STEP output, but the ability to drag a slider and change the leg length without re-prompting or reopening my CAD software is genuinely convenient for quick iterations.</p>
<p>Where AdamCAD works: rapid sizing of simple parts. Need a box enclosure and you're not sure if it should be 80mm or 90mm wide? Generate it once, drag the slider, see both options. For hobbyists, for early prototyping, for "I just need a bracket and I don't want to learn SolidWorks," this is probably the most approachable of the three tools.</p>
<p>Where it falls short: the parametric controls are limited. You get sliders for the dimensions the AI decided to expose, which may or may not be the ones you care about. You can't add new features, create relationships between dimensions, or do anything that requires actual constraint logic. And the STL-only output means you're stuck with mesh geometry, which is fine for 3D printing and useless for engineering edits. If you need to select a face, add a fillet, or export a proper drawing, you're out of luck.</p>
<p>The <a href="/posts/adamcad-review">AdamCAD review</a> goes into more detail on specific use cases and where the parametric controls actually help versus where they're just decoration.</p>
<h2>CADGPT: the script writer</h2>
<p>CADGPT is the odd one out, and I think the name causes most of the confusion. It doesn't generate CAD geometry at all. It writes automation scripts. AutoLISP for AutoCAD, Python for other environments. If you tell it to make a bracket, you don't get a bracket. You get code that, if executed in the right environment, should produce a bracket.</p>
<p>This is a fundamentally different proposition. CADGPT is a coding assistant for CAD automation, not a geometry generator. It's closer to GitHub Copilot for AutoLISP than it is to Zoo or AdamCAD.</p>
<p>For the right user, that's actually valuable. If you spend your days writing AutoLISP routines to batch-process drawings, or you need Python scripts to automate FreeCAD operations, or you want to generate OpenSCAD code without typing it character by character, CADGPT can speed that up. I tested it with a request to write an AutoLISP script that draws a configurable bolt pattern on a rectangular plate. The script was functional, needed some tweaks to the error handling, and saved me about twenty minutes of writing it myself.</p>
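<p>For flavor, here's the core of that bolt-pattern script sketched in Python rather than AutoLISP. This is my own illustration of what such a routine has to compute, not CADGPT's actual output:</p>

```python
def bolt_pattern(plate_w, plate_h, pattern_w, pattern_h):
    """Return (x, y) hole centers for a rectangular bolt pattern centered
    on a plate whose origin is its lower-left corner. Python analogue of
    the AutoLISP routine described, written for illustration."""
    cx, cy = plate_w / 2, plate_h / 2
    dx, dy = pattern_w / 2, pattern_h / 2
    # Corners in counterclockwise order from lower-left
    return [(cx - dx, cy - dy), (cx + dx, cy - dy),
            (cx + dx, cy + dy), (cx - dx, cy + dy)]

# NEMA 17 motor mount: 60mm square plate, 31mm square bolt pattern
holes = bolt_pattern(60, 60, 31, 31)
```

<p>The coordinate math is trivial; the value of a script like this is making the pattern configurable so the next plate size is a parameter change, not a redraw.</p>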
<p>But if you're expecting to describe a part and receive a model, CADGPT will disappoint you. You receive text, not geometry. You need a CAD environment to execute that text, and you need enough knowledge to debug the script when it doesn't quite work. That's a different skill set from "describe a part and get a part."</p>
<p>The <a href="/posts/cadgpt-review">CADGPT review</a> covers the scripting capabilities more thoroughly, but the short version is: useful tool, wrong category. It shouldn't be compared with Zoo and AdamCAD because it's solving a different problem.</p>
<h2>The comparison that matters</h2>
<p>Here's the clearest way I can break it down.</p>
<p>If you need engineering geometry you can import, edit, and manufacture: Zoo.dev. The STEP output, the B-Rep quality, and the API access make it the most serious tool for engineers. The output still needs checking and editing, but it's real geometry, not a mesh or a script.</p>
<p>If you need quick simple parts with adjustable dimensions and you're heading straight to 3D printing: AdamCAD. The slider approach is smart for the kind of user who doesn't want or need a full CAD environment. Just don't expect engineering-grade output.</p>
<p>If you need help writing CAD automation scripts: CADGPT. It's a coding tool that happens to know CAD-specific languages. Nothing wrong with that, as long as you know what you're getting.</p>
<p>If you need complex parts, assemblies, tolerances, sheet metal, or anything that requires actual engineering judgment: none of the above. You need traditional CAD software and, ideally, someone who knows how to use it. I'm not being dismissive. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers how these tools fit into professional workflows, and the honest answer is: they fit at the beginning, not the end.</p>
<h2>Where the confusion comes from</h2>
<p>The problem is that all three tools get called "text-to-CAD" even though they do different things. Zoo generates geometry. AdamCAD generates configurable shapes. CADGPT generates code. Calling all of these text-to-CAD is like calling a hammer, a screwdriver, and a tape measure "construction tools." Technically true, practically misleading if you need to drive a nail.</p>
<p>The <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> comparison covers the full field, including tools like CADAgent and CADScribe that take yet other approaches. The space is early and fragmented, which is normal for new technology. But the fragmentation means you need to know what each tool actually does before you pick one, because picking the wrong tool won't just give you a bad result. It'll give you a result in the wrong format for the wrong problem, and you'll blame "text-to-CAD" when you should blame the choice.</p>
<h2>My honest take</h2>
<p>I use Zoo.dev for quick starting geometry when I'm doing simple parts and I want a faster first draft than sketching from scratch. I've looked at AdamCAD for cases where I need to explore dimension ranges quickly without opening Fusion 360. I haven't integrated CADGPT into my workflow because I don't write enough AutoLISP to justify it, but I can see the appeal for someone who does.</p>
<p>None of these tools replaces the other two. None of them replaces traditional CAD. And none of them is as good as the marketing copy suggests. But each one does a specific thing reasonably well, and knowing which thing that is saves you from the frustration of expecting a STEP file and getting a script, or expecting parametric control and getting a dumb solid.</p>
<p>Pick the tool that matches the problem. It sounds obvious, but in a market where every tool is trying to be the everything-tool, obvious advice is the kind that gets ignored the most.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Zoo.dev text-to-CAD: a working review</title>
      <link>https://blog.texocad.ai/posts/zoo-text-to-cad-review</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/zoo-text-to-cad-review</guid>
      <pubDate>Sat, 31 Jan 2026 00:00:00 GMT</pubDate>
      <description>Zoo.dev is the closest thing to a real text-to-CAD tool right now. It generates actual STEP files from text prompts using its own geometric kernel. It&apos;s also not magic.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>zoo</category>
      <category>review</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Zoo.dev (formerly KittyCAD) is an API-first text-to-CAD platform that generates B-Rep geometry as STEP, glTF, OBJ, and STL files using a custom GPU-native geometric kernel. It has a free tier and produces the most engineering-usable output of any dedicated text-to-CAD tool in 2026.</p>
<p>I first tried Zoo.dev on a Tuesday afternoon when I had a sensor bracket to mock up and exactly zero motivation to sketch it from scratch. I typed "rectangular bracket, 60mm by 35mm, 3mm thick, two M4 clearance holes 20mm apart, 10mm from the right edge" into the text-to-CAD prompt box, waited about twelve seconds, and got back a solid body. Not a mesh. Not a render concept. A STEP file I could open in Fusion 360, select faces on, and measure. The hole diameter was 4.3mm, which is correct for M4 clearance. The overall dimensions were within a tenth of a millimeter of what I asked for. I sat there feeling the specific mix of impressed and suspicious that comes from watching a tool do something surprisingly well for the first time.</p>
<p>Then I tried a more complicated part, and the feeling shifted. But we'll get there.</p>
<h2>What Zoo.dev actually is</h2>
<p>Zoo.dev, formerly known as KittyCAD, is an API-first text-to-CAD platform built on a custom GPU-native geometric kernel the team wrote themselves. That last part matters. Most AI 3D tools bolt a language model onto existing mesh-generation pipelines and call it a day. Zoo built their own B-Rep kernel from scratch, designed to run on GPUs, designed to produce the kind of solid geometry that engineers actually use.</p>
<p>The product has two faces. There's the web-based Design Studio where you type prompts and get models, and there's the API (with a Python SDK called <code>kittycad</code>) that lets you integrate text-to-CAD into your own pipeline. Both produce the same output. For the full concept and how Zoo compares to other approaches, I covered that in the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a>.</p>
<h2>The interface and workflow</h2>
<p>Zoo's Design Studio is deliberately minimal. A text box. A 3D viewport. Some format export buttons. That's essentially it.</p>
<p>You type a prompt, wait somewhere between ten and thirty seconds, and get a 3D model in the viewport. You can rotate, zoom, and inspect the geometry. If you like it, you export. If you don't, you rephrase and try again. There's no feature tree, no timeline, no sketch environment. It's a generation tool, not a modeling environment.</p>
<p>This is both the strength and the limitation. Anyone who can describe a part in English can try it. But you can't tweak the output inside Zoo itself. Whatever comes back, you either accept it or re-prompt. If you need to move a hole by 3mm or add a chamfer the AI forgot, you're exporting to Fusion 360 or SolidWorks and doing it there.</p>
<p>I've settled into a workflow where I generate in Zoo, export as STEP, import into Fusion, and fix whatever needs fixing. For simple parts, the fix list is short. For anything moderately complex, the fix list is the whole part. The <a href="/posts/zoo-text-to-cad-tutorial">Zoo text-to-CAD tutorial</a> walks through this workflow step by step.</p>
<h2>What it handles well</h2>
<p>Simple mechanical parts. That's the honest answer, and it's not as dismissive as it sounds. A lot of real engineering work involves simple mechanical parts, and generating them from text instead of sketching them from scratch does save time.</p>
<p>Brackets, mounting plates, standoffs, basic enclosures, spacers, adapter plates with bolt patterns. These are Zoo's sweet spot. If you can describe the part in two or three sentences with specific dimensions, you have a reasonable shot at getting usable geometry back. I've had good results with prompts like "L-bracket, 3mm thick, 50mm legs, four M5 clearance holes on a 30mm square pattern on the base" or "rectangular enclosure 100mm by 60mm by 40mm, 2mm wall thickness, open top, four M3 mounting bosses in the corners."</p>
<p>The geometry comes back as proper B-Rep. Faces are selectable. Edges are real edges, not mesh approximations. The STEP files open cleanly in every CAD tool I've tested.</p>
<p>The trick is learning to write prompts that leave the AI as little room as possible to improvise. Vague prompts produce vague parts. I went into this more in <a href="/posts/text-to-cad-prompt-engineering">text-to-CAD prompt engineering</a>, but the short version is: treat the prompt like you're dimensioning a drawing, not describing it to a friend.</p>
<h2>What it doesn't handle well</h2>
<p>Anything where the relationships between features matter more than the features themselves.</p>
<p>I asked Zoo for a spur gear once. 20 teeth, module 1.5, 8mm bore. What came back had roughly the right number of teeth and roughly the right outer diameter, but the tooth profile was cosmetic. An involute curve has a specific mathematical shape that determines how gears mesh. Zoo's teeth looked gear-shaped but weren't involute by any useful definition. For a render, acceptable. For a gear that needs to mesh with another gear, useless.</p>
<p>Assemblies are out of reach. Zoo generates single bodies. If you need a hinge with a pin, a box with a lid, or two mating parts with clearance fits, you're generating each piece separately and hoping they work together, which they usually don't without manual adjustment.</p>
<p>Sheet metal parts don't work because Zoo doesn't understand bend allowances, K-factors, or flat patterns. You get a shape that looks like folded metal but can't be unfolded because it was never modeled as sheet metal in the first place.</p>
<p>Complex surfacing, organic shapes, lofted features, swept profiles along 3D paths: all either absent or unreliable. The kernel is built for prismatic geometry, and it shows.</p>
<p>Draft angles for molding, thread callouts, GD&#x26;T: none of that exists in the output. The tool produces geometry, not manufacturing intent.</p>
<h2>Output quality and accuracy</h2>
<p>The results are inconsistent enough that any single example is misleading.</p>
<p>On my best tests, dimensions came back within 0.1mm of what I asked for. On my worst, I got a part where the overall width was off by about 8%, which on a 120mm part is nearly 10mm. Not a rounding error. A genuine miss.</p>
<p>The pattern I've noticed: simpler geometry with very explicit dimensions comes back more accurately. Add more features, more relationships between them, and the accuracy degrades. Ask for "four holes equally spaced" and you might get four holes that are almost equally spaced but not quite.</p>
<p>I always measure critical features after importing the STEP file. Always. The output is a starting point, not a finished part. If you treat it as a rough draft that needs checking, you'll be fine. If you treat it as production-ready geometry, you'll have a bad day eventually.</p>
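<p>In practice, "measure everything" can be partly scripted. A sketch of the kind of check I mean, with made-up measured values standing in for what you'd pull off the imported STEP file:</p>

```python
import math

def check_pattern(measured, nominal, tol=0.1):
    """Compare measured hole centers against nominal positions.
    Returns (worst deviation in mm, pass/fail). The measured values
    below are invented for illustration, not real test data."""
    worst = max(math.dist(m, n) for m, n in zip(measured, nominal))
    return worst, worst <= tol

# Nominal: 31mm square bolt pattern centered on a 60mm plate
nominal = [(14.5, 14.5), (45.5, 14.5), (45.5, 45.5), (14.5, 45.5)]
measured = [(14.52, 14.48), (45.47, 14.51), (45.53, 45.46), (14.49, 45.55)]
worst, ok = check_pattern(measured, nominal)
```

<p>A tenth of a millimeter is a reasonable gate for 3D-printed parts; tighten it for machined ones.</p>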
<h2>File formats</h2>
<p>Zoo outputs STEP, glTF, GLB, OBJ, STL, PLY, and FBX. The one that matters for engineering is STEP.</p>
<p>A STEP file from Zoo opens in Fusion 360, SolidWorks, Creo, NX, and FreeCAD as real solid geometry. Selectable faces. Measurable edges. You can add fillets, cut pockets, and apply chamfers like it was native geometry. A lot of "AI 3D tools" output OBJ or STL and call it CAD, which is like calling a photograph of a blueprint a technical drawing. Zoo's STEP output is the cleanest I've seen from any dedicated text-to-CAD tool.</p>
<h2>The API</h2>
<p>This is where Zoo gets interesting for anyone who builds things beyond individual parts.</p>
<p>The Python SDK (<code>kittycad</code>) lets you send text prompts programmatically and get geometry back. You can batch-generate parts, build custom interfaces on top of Zoo's engine, or integrate text-to-CAD into a product pipeline.</p>
<p>I've used the API to generate fifteen bracket variations from a script, each with slightly different dimensions, exported as STEP files for comparison. Twenty minutes to write the script, five minutes to generate all the variants. Doing that by hand in Fusion would have been an hour of sketch-modify-export tedium.</p>
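<p>The shape of that script looks roughly like the sketch below. The actual API call is left as a stub, because the SDK method names would be my guess rather than Zoo's documented interface; the prompt-sweep part is the piece worth showing.</p>

```python
from itertools import product

def variant_prompts(leg_lengths_mm, hole_spacings_mm):
    """Build one prompt per (leg length, hole spacing) combination for a
    bracket sweep. Sending each prompt through the kittycad SDK and
    saving the returned STEP file is deliberately left out; consult
    Zoo's API docs for the real call."""
    template = ("L-bracket, 3mm thick, {leg}mm legs, four M4 clearance "
                "holes (4.3mm) on a {spacing}mm spacing, 10mm from the edge")
    return [template.format(leg=leg, spacing=spacing)
            for leg, spacing in product(leg_lengths_mm, hole_spacings_mm)]

# 3 leg lengths x 5 spacings = 15 bracket variants
prompts = variant_prompts([40, 45, 50], [20, 25, 30, 35, 40])
```

<p>Each variant differs only in the numbers, so the sweep is one loop and one template rather than fifteen rounds of sketch-modify-export.</p>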
<p>The API documentation is good. Not perfect, but better than most developer docs I've dealt with from CAD-adjacent companies. The <a href="/posts/zoo-text-to-cad-api-tutorial">Zoo text-to-CAD API tutorial</a> covers the setup and a few practical examples.</p>
<h2>The KittyCAD kernel</h2>
<p>Zoo's kernel deserves its own mention because it's the thing that makes their output different from tools that generate meshes and hope nobody notices.</p>
<p>The KittyCAD kernel is a GPU-native B-Rep geometric kernel. Most of the industry runs on Parasolid (Siemens) or ACIS (Spatial, a Dassault Systèmes company). Building a new kernel from scratch is a massive undertaking and a bold bet. The advantage is that it was designed for this exact use case without decades of legacy baggage. The downside is that it's newer and less battle-tested than Parasolid, which has been handling B-Rep operations since the 1980s. I've hit a few cases where fillets don't fully resolve or internal geometry appears that shouldn't exist. These are kernel-maturity issues, and they're real today even if they'll improve over time.</p>
<h2>Pricing</h2>
<p>Zoo has a free tier that's generous enough for actual testing. You get a limited number of API calls per month, but enough to evaluate the tool properly before deciding whether it's worth paying for. The paid tiers add more API calls and higher priority.</p>
<p>I won't quote exact prices because they've changed before and they'll change again. Check <a href="https://zoo.dev">zoo.dev</a> for current numbers. The free tier is the right place to start.</p>
<h2>Where it fits in a real workflow</h2>
<p>Zoo is a first-draft generator. It's not a replacement for CAD software. It's the thing that gets you from a blank screen to a rough solid in thirty seconds instead of ten minutes.</p>
<p>My workflow: generate in Zoo, export STEP, import into Fusion 360, measure everything, fix what's off, add missing details, constrain for future edits. On simple parts, Zoo saves me five to fifteen minutes. On anything complex, it saves nothing because I rebuild from scratch anyway.</p>
<p>Where Zoo adds the most value is iteration speed. "What if the bracket was 10mm wider? What if the holes were on a different pattern?" Each variation takes thirty seconds instead of modifying a sketch and hoping the feature tree survives.</p>
<h2>The verdict</h2>
<p>Zoo.dev is the most serious dedicated text-to-CAD tool available right now. The custom kernel produces real B-Rep geometry. The STEP output is usable in professional CAD software. The API is well-documented and the free tier is honest. For simple mechanical parts with specific dimensions, it works. If you want to see how it stacks up against the rest of the field, the <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> comparison covers that.</p>
<p>It's also limited in exactly the ways you'd expect. Complex geometry breaks down. Assemblies don't exist. Manufacturing intent is absent. Accuracy requires verification on every output. The kernel is young and occasionally produces artifacts that a mature solver wouldn't.</p>
<p>If you're a professional engineer, Zoo won't replace your CAD skills. It might save you a few minutes on simple parts and real time on variant generation. If you're someone who needs occasional mechanical parts but doesn't live in CAD software, Zoo is probably the best option that exists today.</p>
<p>I keep coming back to it, which is the most honest endorsement I can give a tool. Not because it's perfect. Because it's useful just often enough that closing the browser tab feels premature. In a space full of demos that promise everything, a tool that delivers something specific and real is worth paying attention to.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD vs generative design: different tools, different jobs</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-vs-generative-design</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-vs-generative-design</guid>
      <pubDate>Fri, 30 Jan 2026 00:00:00 GMT</pubDate>
      <description>People keep mixing these up. Text-to-CAD generates geometry from words. Generative design optimizes geometry under constraints. They solve different problems and they&apos;re not interchangeable.</description>
      <dc:creator>TexoCAD</dc:creator>

      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD generates new CAD geometry from natural language prompts. Generative design optimizes existing geometry under engineering constraints (loads, materials, manufacturing methods) using topology optimization. Text-to-CAD is about creation from description; generative design is about optimization from requirements.</p>
<p>A colleague sent me a LinkedIn post last month where someone described text-to-CAD as "basically generative design but with prompts." I read it twice, hoping I'd misunderstood. I hadn't. It's like saying a microwave is basically an oven but faster. They both involve heat and food, and that's where the resemblance ends.</p>
<p>I see this confusion constantly, and I get why it exists. Both involve AI. Both produce 3D geometry. Both promise to make CAD easier. But they solve completely different problems using completely different methods, and confusing the two will land you with the wrong tool for the job and a deadline that doesn't care about your misunderstanding.</p>
<p>So let me separate these properly, because the difference matters if you actually make things.</p>
<h2>What text-to-CAD does</h2>
<p><a href="/posts/what-is-text-to-cad">Text-to-CAD</a> generates geometry from a description. You type "a rectangular enclosure, 100mm by 60mm by 40mm, with a lid and four M3 mounting holes in the corners" and the AI produces a 3D model that attempts to match what you described. The output is new geometry that didn't exist before. The AI is creating a shape from language.</p>
<p>The tools doing this right now, Zoo.dev, AdamCAD, CADAgent, and a few others, use transformer-based models trained on CAD datasets to predict sequences of modeling operations (sketch, extrude, fillet, hole) from text input. The quality varies. Simple parts come out reasonably well. Complex parts don't. I've covered the specifics in the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a>, but the essential point here is this: text-to-CAD is a creation tool. It makes something from nothing, guided by words.</p>
<p>The input is natural language. The output is geometry. The AI decides the shape, the features, and the dimensions based on what it learned from training data. You're delegating the design to the model, which is both the appeal and the risk.</p>
<h2>What generative design does</h2>
<p>Generative design starts with a problem definition, not a description. You define the design space (the volume the part can occupy), the loads (where forces act and how strong they are), the constraints (where the part attaches, what material it's made of, what manufacturing process will produce it), and the objectives (minimize mass, maximize stiffness, stay within a certain displacement). Then the software runs topology optimization to figure out the best way to distribute material within those constraints.</p>
<p>The output is geometry, yes. But the geometry isn't invented from language. It's calculated from physics. The algorithm removes material that isn't carrying load and keeps material that is. The result often looks organic, like bones or tree branches, because nature solves similar optimization problems and arrives at similar shapes. That's not a coincidence. It's math.</p>
<p>Fusion 360 has generative design built in. Altair Inspire does it. nTopology handles lattice and topology optimization. Siemens NX has it. Most of the major CAD platforms offer some version. The technology is mature compared to text-to-CAD. Companies have been shipping production parts designed with topology optimization for years, particularly in aerospace and automotive where weight reduction has direct cost and performance implications.</p>
<h2>The fundamental difference</h2>
<p>Here's the simplest way I can put it.</p>
<p>Text-to-CAD: "Make me a bracket." The AI decides what the bracket looks like.</p>
<p>Generative design: "Here's where the bracket attaches, here's the load, here's the material, here's the manufacturing method, now find the shape that uses the least material while meeting the structural requirements." The algorithm computes what the bracket should look like.</p>
<p>One is creation from description. The other is optimization from requirements. They're not two versions of the same thing. They're different activities that happen to produce 3D geometry.</p>
<p>A text-to-CAD tool doesn't know anything about loads. It doesn't know what the part is made of. It doesn't understand that a thin wall under a bending load will fail, or that a particular feature is impossible to machine from a certain direction. It generates shape. That's it.</p>
<p>A generative design tool doesn't understand natural language. You can't tell it "make me something that looks cool." You have to define boundary conditions, load cases, constraints, materials, and manufacturing processes. If you skip any of those, the optimization either fails or gives you an unrealistic result.</p>
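<p>To make the contrast concrete, here's a sketch of what each approach actually consumes as input. These types are entirely illustrative, invented for this post; no tool exposes this exact API.</p>
<pre><code># Illustrative only: hypothetical types of mine, not any tool's API.
# The point is the shape of the input each approach needs.
from dataclasses import dataclass

# Text-to-CAD input: a single string. Everything else is inferred.
text_to_cad_input = "a bracket, 3mm thick, 40mm legs, two 5mm holes per leg"

# Generative design input: a fully defined engineering problem.
@dataclass
class Load:
    point_mm: tuple   # where the force acts
    force_n: tuple    # force vector in newtons

@dataclass
class GenerativeStudy:
    design_space_mm: tuple      # bounding volume the part may occupy
    keep_in: list               # regions that must remain (bolt bosses, interfaces)
    loads: list                 # defined load cases
    material: str               # e.g. "aluminum 6061"
    process: str                # e.g. "3-axis CNC milling"
    objective: str              # e.g. "minimize mass"
    max_displacement_mm: float  # stiffness requirement

study = GenerativeStudy(
    design_space_mm=(120, 80, 40),
    keep_in=["bolt boss A", "bolt boss B"],
    loads=[Load(point_mm=(60, 40, 40), force_n=(0, 0, -500))],
    material="aluminum 6061",
    process="3-axis CNC milling",
    objective="minimize mass",
    max_displacement_mm=0.5,
)
</code></pre>
<p>One input is a sentence; the other is a specification. Skip any field in the second one and the optimization has nothing to work with, which is exactly the point.</p>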
<h2>When you'd use text-to-CAD</h2>
<p>Text-to-CAD fits early in the design process, before you know the loads, before you've finalized the mounting points, before you're committed to a manufacturing method. It's for getting a shape on screen quickly so you can react to it, modify it, and decide what to do next.</p>
<p>I use it when I need a starting point. "I need a bracket roughly this size, with holes roughly here, and I'll figure out the details later." For concept exploration, for checking if a form factor makes sense, for generating first-draft geometry that I'll rebuild properly in Fusion 360, text-to-CAD saves time. Not engineering time. Sketching time.</p>
<p>It's also useful for people who don't have CAD skills and need a shape for communication. A hardware startup founder showing a manufacturer what they're thinking. A hobbyist who wants to 3D print a simple enclosure. Someone who needs geometry but doesn't need it to be optimized, just roughly correct.</p>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers specific tools and workflows. The honest summary is: it's a fast, imprecise first pass.</p>
<h2>When you'd use generative design</h2>
<p>Generative design fits later, when you know the engineering requirements. When you have defined load cases, material choices, and manufacturing constraints. When you need the part to be light, stiff, strong, or some combination, and you want the software to explore the design space more thoroughly than you could manually.</p>
<p>The classic use case is a structural bracket where weight matters. Define the bolt locations, apply the loads, set the material to aluminum, constrain the manufacturing to CNC milling with 3-axis access, and let the optimizer run. The result might be a shape you'd never have drawn yourself, but it meets all the structural requirements while using less material than your instinct-driven design would have.</p>
<p>I've used it for a mounting bracket on a test rig where the weight budget was tight and the loads were well-characterized. The generative result saved about 35% mass compared to my hand-designed version. It also looked like an alien artifact, which the machinist found amusing and slightly offensive. But it worked.</p>
<p>Where generative design doesn't help: early concept phases where the requirements aren't defined yet. You can't optimize for loads you haven't calculated. You can't constrain to a manufacturing process you haven't chosen. Generative design needs inputs that text-to-CAD doesn't, and if those inputs are wrong, the output is useless in an expensive way.</p>
<h2>Where people get confused</h2>
<p>The confusion, I think, comes from marketing. Both text-to-CAD and generative design get presented as "AI designs your part for you." And on a very abstract level, that's true. But the kind of "designing" is totally different.</p>
<p>Text-to-CAD is like asking a colleague to sketch you something based on a description. The result depends on their interpretation, their training, their sense of what you probably mean. It might be close. It might be way off. You're trusting their judgment.</p>
<p>Generative design is like giving a structural analyst a fully defined problem and asking them to solve it. The result depends on the inputs, not interpretation. If the inputs are correct, the output is provably good. If the inputs are wrong, the output is provably useless, but at least you know why.</p>
<p>The other source of confusion is that both can produce organic-looking shapes. A text-to-CAD tool might generate a bracket with rounded edges because the training data included a lot of rounds. A generative design tool produces organic shapes because topology optimization tends toward smooth material distributions. They look similar on screen but arrived there by completely different paths.</p>
<h2>Can you combine them?</h2>
<p>This is the interesting question, and the answer is: not directly, but the combination makes sense in theory.</p>
<p>Imagine using text-to-CAD to generate an initial design space and boundary conditions, then feeding that into a generative design solver for optimization. You'd get the speed of text-to-CAD for the early concept and the engineering rigor of generative design for the final shape. Nobody is doing this cleanly yet, but it's not a crazy workflow to imagine. The pieces exist, they're just in different software ecosystems.</p>
<p>What I do in practice is simpler: use text-to-CAD for the starting shape, import it into Fusion 360, set up the generative design study manually, and let the optimizer refine it. The text-to-CAD output serves as a design space reference, not the final geometry. It's two separate steps with a human in between, which is about the level of integration the current tools support.</p>
<h2>The summary that matters</h2>
<p>If you need geometry quickly from a description, and accuracy and optimization aren't critical yet, text-to-CAD is the right tool. If you need structurally optimized geometry that meets specific engineering requirements, generative design is the right tool. They're not interchangeable, they're not in competition, and treating one as a substitute for the other will waste your time.</p>
<p>Text-to-CAD creates. Generative design optimizes. Both produce shapes. The similarity ends there. And the next time someone on LinkedIn calls them the same thing, you'll know they haven't tried to actually make a part with either one. Which, in my experience, is roughly 90% of people posting about AI and CAD on LinkedIn.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD tools comparison 2026</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-tools-comparison</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-tools-comparison</guid>
      <pubDate>Thu, 29 Jan 2026 00:00:00 GMT</pubDate>
      <description>A side-by-side comparison of every text-to-CAD tool I could get my hands on in 2026. Spoiler: the field is thin and the results are uneven.</description>
      <dc:creator>TexoCAD</dc:creator>

      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> In 2026, the main text-to-CAD tools are Zoo.dev (B-Rep STEP output, API-first), AdamCAD (fast parametric with sliders, from $9.99/mo), CADAgent (open-source Fusion 360 add-in), CADGPT (script assistant), CADScribe (limited generator), and Vondy (beginner DXF). Zoo produces the most usable engineering output.</p>
<p>I spent an entire Saturday trying to run the same prompt through every text-to-CAD tool I could find. The prompt was simple on purpose: "A rectangular mounting plate, 120mm by 80mm by 5mm, with four M5 clearance holes on a 100mm by 60mm bolt pattern, 10mm from each edge." Not complex. Not clever. The kind of part you'd model in Fusion 360 in about ninety seconds while your coffee is still warm. I wanted to see what came back from each tool when the input was identical and the expectations were reasonable.</p>
<p>What came back ranged from "actually pretty close" to "I'm not sure what I'm looking at." One tool returned a solid I could open and measure. Another returned something that resembled a mounting plate the way a drawing of a sandwich resembles lunch. A third generated a script instead of geometry. One just timed out.</p>
<p>So here's the honest text-to-CAD tools comparison 2026 edition, written by someone who actually opened every output file and tried to do something useful with it.</p>
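<p>One nice property of that prompt: the edge offsets and the bolt pattern are redundant constraints, so the expected hole positions can be checked with trivial arithmetic. A quick sketch of that check, my own math rather than any tool's output (origin at a plate corner):</p>
<pre><code># Plate 120x80mm, bolt pattern 100x60mm, holes 10mm from each edge.
plate_w, plate_h = 120.0, 80.0
pattern_w, pattern_h = 100.0, 60.0

# A centered 100x60 pattern and a 10mm edge offset describe the same holes:
offset_x = (plate_w - pattern_w) / 2   # 10.0
offset_y = (plate_h - pattern_h) / 2   # 10.0

hole_centers = [
    (offset_x, offset_y),
    (plate_w - offset_x, offset_y),
    (offset_x, plate_h - offset_y),
    (plate_w - offset_x, plate_h - offset_y),
]
print(hole_centers)
# [(10.0, 10.0), (110.0, 10.0), (10.0, 70.0), (110.0, 70.0)]
</code></pre>
<p>Any tool that gets this wrong isn't failing a hard problem. It's failing subtraction.</p>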
<h2>The field, such as it is</h2>
<p>There are not many real text-to-CAD tools. There are a lot of things calling themselves text-to-CAD that are actually text-to-mesh, text-to-script, or text-to-marketing-page. If you filter for tools that take a text prompt and return editable B-Rep geometry or at least parametric CAD output, the list gets short fast.</p>
<p>I tested six tools. Here's the summary before I get into the details.</p>
<table>
<thead>
<tr>
<th>Tool</th>
<th>Output format</th>
<th>Pricing</th>
<th>Best at</th>
<th>Worst at</th>
</tr>
</thead>
<tbody>
<tr>
<td>Zoo.dev</td>
<td>STEP, glTF, OBJ, STL</td>
<td>Free tier (20 min reasoning), paid tiers available</td>
<td>B-Rep solids, API integration, mechanical parts</td>
<td>Complex geometry, assemblies, occasional dimension drift</td>
</tr>
<tr>
<td>AdamCAD</td>
<td>STL, SCAD</td>
<td>$9.99/mo (Standard), $29.99/mo (Pro)</td>
<td>Fast generation, parametric sliders, quick prototyping</td>
<td>Limited feature tree, no native STEP</td>
</tr>
<tr>
<td>CADAgent</td>
<td>Native Fusion 360</td>
<td>Free (bring your own Anthropic API key)</td>
<td>Real parametric history inside Fusion 360</td>
<td>Requires Fusion, API costs add up, early-stage reliability</td>
</tr>
<tr>
<td>CADGPT</td>
<td>AutoLISP/Python scripts</td>
<td>$199/year</td>
<td>CAD scripting assistance, automation</td>
<td>Not a geometry generator despite the name</td>
</tr>
<tr>
<td>CADScribe</td>
<td>STL, STEP</td>
<td>Free (early access)</td>
<td>Simple primitive parts, fast iteration</td>
<td>Anything beyond basic geometry, complex prompts fail</td>
</tr>
<tr>
<td>Vondy</td>
<td>DXF</td>
<td>Varies</td>
<td>Beginner 2D profiles</td>
<td>Not real 3D, limited geometry, no solid output</td>
</tr>
</tbody>
</table>
<p>That table tells most of the story. The rest is texture.</p>
<h2>Zoo.dev</h2>
<p>Zoo is the most complete dedicated text-to-CAD tool I've used. It runs on their own GPU-native geometric kernel, KittyCAD, and outputs real B-Rep geometry. That means when you open the STEP file in Fusion 360 or SolidWorks, you get actual faces and edges. You can select a surface, fillet it, measure it, send it to a machinist. The geometry behaves like geometry.</p>
<p>My test prompt came back with a plate that measured 120.0 by 80.0 by 5.0mm. The holes were the right diameter. The bolt pattern was correct. I have had worse results from Zoo on other tests: dimensions off by a few percent, fillets that didn't survive the import, internal faces that shouldn't exist. But for a straightforward rectangular plate, it nailed it. The free tier gives you 20 minutes of reasoning time per month, which is enough to run a few dozen simple prompts and see if the tool works for your use case.</p>
<p>The API is the real selling point for anyone building a workflow around this. There's a Python SDK, proper documentation, and per-second billing at $0.0083/second after the free credits. If you're a developer integrating text-to-CAD into a product or automation pipeline, Zoo is currently the only serious option. I've written more about it in the <a href="/posts/zoo-text-to-cad-review">Zoo text-to-CAD review</a>.</p>
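<p>If you're estimating whether the API fits your budget, the arithmetic is simple. This sketch uses only the numbers quoted in this post, treats billing as flat per-second (a simplification of mine), and assumes roughly 30 seconds of reasoning per simple prompt, which is a guess; check Zoo's pricing page for the real details.</p>
<pre><code># Back-of-envelope API cost estimate. Rates and free tier are the figures
# quoted in this post; the 30-seconds-per-prompt figure is my assumption.
FREE_SECONDS = 20 * 60      # free tier: 20 minutes of reasoning per month
RATE_PER_SECOND = 0.0083    # USD, after free credits

def monthly_cost(reasoning_seconds):
    billable = max(0, reasoning_seconds - FREE_SECONDS)
    return round(billable * RATE_PER_SECOND, 2)

print(monthly_cost(40 * 30))    # 40 simple prompts -> 1200s, inside free tier: 0.0
print(monthly_cost(200 * 30))   # 200 prompts -> 6000s, 4800s billable: 39.84
</code></pre>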
<p>Where Zoo falls apart is complexity. Ask for a gear, a snap-fit enclosure, or anything with interdependent features and the results go from "useful starting point" to "interesting art project." That's not unique to Zoo, it's the state of the whole field, but it's worth knowing before you expect miracles.</p>
<h2>AdamCAD</h2>
<p>AdamCAD takes a different approach. You type your description, it generates a model, and then it gives you parametric sliders to adjust dimensions after the fact. That's a smart design decision. Instead of trying to get every dimension right on the first pass, it lets you tweak height, width, hole diameter, and spacing in real time without re-prompting.</p>
<p>The Standard plan at $9.99/month gets you 100 generations per week. Pro at $29.99/month gives you effectively unlimited generations and a direct line to the founders, which at this stage of the product probably matters more than the extra generations.</p>
<p>My test plate came back quickly, maybe five seconds. The dimensions were approximate, close enough for prototyping, not close enough for manufacturing. The parametric sliders let me correct the bolt pattern spacing, which was off by about 3mm initially. Export options are STL and SCAD, which means you're getting either a mesh or an OpenSCAD script. No native STEP export, which limits how useful the output is for engineering workflows. If you mostly need geometry for 3D printing, AdamCAD is genuinely fast and convenient. If you need to import into SolidWorks and do real edits, you'll be converting formats and losing parametric data in the process.</p>
<h2>CADAgent</h2>
<p>CADAgent is the most architecturally interesting tool in this comparison. It's an open-source Fusion 360 add-in that uses an LLM to generate modeling commands directly inside Fusion 360's environment. The output isn't a file you download. It's a real parametric model with actual feature history built inside your running instance of Fusion 360.</p>
<p>That matters. A lot. When Zoo generates a STEP file, you get geometry without history. When CADAgent generates a model in Fusion 360, you get a timeline you can roll back, a feature tree you can edit, sketches you can constrain. That's the difference between a snapshot and a living model.</p>
<p>You bring your own Anthropic API key, so the tool itself is free but you're paying for API calls. For my test prompt, it generated the plate correctly, placed the holes, and I could go into the sketch and adjust dimensions like any other Fusion 360 model. The experience felt closer to having a fast junior modeler than to using a separate generation tool.</p>
<p>The catch is reliability. CADAgent is early. It stumbles on prompts that require multiple dependent operations. It sometimes picks the wrong sketch plane or creates redundant bodies. And you need Fusion 360 running, which means you need a Fusion license. But the approach, generating inside a real parametric environment instead of exporting isolated geometry, is clearly the right direction. More on the broader workflow implications in the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a>.</p>
<h2>CADGPT</h2>
<p>I'll be blunt: CADGPT is misnamed. It's an AI assistant for AutoCAD and compatible platforms, not a text-to-CAD geometry generator. You ask it to do something and it writes AutoLISP or Python scripts. That's useful if you want to automate repetitive tasks in AutoCAD, and the scripting capabilities are legitimately helpful for power users who spend their days inside AutoCAD's command line.</p>
<p>At $199/year, it's priced like a productivity tool, which is what it actually is. It includes chat-based assistance, code generation in multiple languages, a calculator, and reference manual lookups. It's a CAD companion, not a CAD creator. When I gave it my mounting plate prompt, it generated an AutoLISP script that would draw the plate inside AutoCAD. Technically accurate, but a fundamentally different thing than generating a STEP file from a text description.</p>
<p>If you work in AutoCAD daily and want AI-assisted scripting, CADGPT is worth looking at. If you want text-to-CAD in the sense of "I describe a part and get geometry back," this isn't it.</p>
<h2>CADScribe</h2>
<p>CADScribe is free, browser-based, and generates models in about 10-15 seconds. For simple geometry, it works. My test plate came back looking roughly correct. The dimensions were close, the holes were there, and I could download an STL.</p>
<p>Where CADScribe breaks down is anything beyond primitives. I tried a follow-up prompt for a part with filleted edges and countersunk holes. It returned a "fail" model, which is at least honest. The iterative refinement works: I could tell it "make the holes 6mm instead" and it adjusted correctly. But the ceiling on complexity is low. Gears, airfoils, compound features: all beyond its current capabilities.</p>
<p>CADScribe feels like a tool that's early enough that judging it harshly seems unfair but recommending it confidently seems irresponsible. It's free, it's fast, and it handles the kind of geometry that would take you thirty seconds to model by hand. For anything that would take you thirty minutes to model by hand, which is where time savings would actually matter, it can't help yet.</p>
<h2>Vondy AI CAD Generator</h2>
<p>Vondy generates DXF files, which means 2D profiles. It's aimed at beginners who need laser-cut or CNC-routed flat parts. For that narrow use case, it's fine. For 3D solid geometry, it's not in the conversation. I mention it because it shows up in text-to-CAD tool lists, but calling it text-to-CAD is generous. It's text-to-2D-profile, which is a different job.</p>
<h2>What the comparison actually tells you</h2>
<p>The honest takeaway from testing all of these back to back is that the text-to-CAD tools comparison 2026 looks a lot like early smartphone comparisons in 2008. One or two tools are clearly ahead, a few are finding interesting niches, and several are more promising than they are useful.</p>
<p>Zoo.dev produces the most usable engineering output. If you need a STEP file you can open in professional CAD software and actually work with, Zoo is the tool. If you need an API, Zoo is the only real option. The free tier is generous enough to evaluate honestly.</p>
<p>AdamCAD is the fastest path from idea to something you can 3D print. The parametric sliders are a genuinely good idea that other tools should copy. The lack of STEP output limits its usefulness for engineering workflows.</p>
<p>CADAgent has the best architecture. Generating inside a real CAD environment produces better, more editable output than any standalone generator. It's just not reliable enough yet to recommend without caveats.</p>
<p>Everything else is either a different product (CADGPT writes scripts, Vondy makes 2D profiles) or too early to rely on (CADScribe handles only primitives).</p>
<p>None of these tools will replace your CAD skills. Every output I got needed some manual work before I'd send it to manufacturing. Dimensions to verify, features to add, geometry to clean up. The value is in the starting point, not the finished product. For a deeper look at how text-to-CAD fits into production work, the <a href="/posts/text-to-cad-for-manufacturing">text-to-CAD for manufacturing</a> post covers the gap between "AI-generated geometry" and "geometry a machinist won't send back."</p>
<p>If you're just getting started with text-to-CAD and want to understand what to expect, the <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> overview covers each tool in more detail. And if you're wondering whether any of this is worth your time, here's my honest answer: for simple parts, yes. For complex parts, not yet. For staying aware of where the tools are heading, absolutely. The field is thin in April 2026, but the direction is clear, and the engineer who understands these tools early will have an advantage when they actually become good.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Is text-to-CAD accurate enough for real parts?</title>
      <link>https://blog.texocad.ai/posts/is-text-to-cad-accurate</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/is-text-to-cad-accurate</guid>
      <pubDate>Wed, 28 Jan 2026 00:00:00 GMT</pubDate>
      <description>I measured text-to-CAD output with calipers (after printing) and compared it to what I asked for. The answer is: sometimes close, sometimes not, and never with the confidence you&apos;d want for production.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>accuracy</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD accuracy varies by tool and geometry complexity. Simple dimensions can be within 1-2mm of the prompt specification, but tolerances, hole positions, and complex features are unreliable. No current text-to-CAD tool produces output accurate enough for production manufacturing without manual verification and editing.</p>
<p>I printed five parts from text-to-CAD output last month. Same tool, same printer, same material. Before slicing, I measured each STEP file in Fusion 360. After printing, I measured each part with calipers on my desk, the cheap digital ones I keep next to a coffee mug that's survived more near-misses than it deserves. The dimensions I asked for, the dimensions in the CAD file, and the dimensions on the physical part were three different numbers. Every single time.</p>
<p>That's not unusual in manufacturing. Printers have their own accuracy issues. But the interesting part was the gap between what I asked for and what the AI generated, before any printing happened. That gap is what this post is about. Not printer calibration. Not slicer settings. The accuracy of the geometry that the AI produces from a text prompt, and whether you can trust it.</p>
<p>The short answer: you can't. Not for production. Not without checking every dimension yourself.</p>
<h2>What "accurate" means in this context</h2>
<p>Accuracy in CAD has layers, and people mix them up constantly. There's the nominal dimension: is the 50mm feature actually 50mm in the model? There's the tolerance: how much variation is acceptable? There's the feature relationship: are two holes really 30mm apart, center to center? There's the geometric accuracy: is a circular hole actually circular, or slightly oval? And there's the manufacturing accuracy: does the physical part match the digital model?</p>
<p>Text-to-CAD only touches the first layer, and it doesn't always nail it. The tools produce nominal geometry with no tolerance data, no GD&#x26;T, and no concept of which dimensions are critical and which ones are free. The AI doesn't know that a bearing bore needs to be within 0.01mm and a cosmetic radius can be off by a full millimeter and nobody cares. It treats every dimension with the same indifference.</p>
<h2>The test I ran</h2>
<p>I used Zoo.dev for this because it outputs STEP files with real B-Rep geometry, which means I can import the files and measure actual faces and edges in Fusion 360 rather than trying to measure triangulated mesh data, which is like measuring a brick wall by counting individual bricks.</p>
<p>I wrote five prompts with specific, unambiguous dimensions:</p>
<ul>
<li>A rectangular plate, 80mm by 50mm by 5mm, with four 4.2mm holes on a 60mm by 30mm bolt pattern centered on the plate.</li>
<li>A cylindrical standoff, 20mm outer diameter, 10mm inner bore, 15mm tall.</li>
<li>An L-bracket, 3mm thick, 40mm legs, with two 5mm holes per leg spaced 25mm apart, 10mm from the edge.</li>
<li>A simple box enclosure, 100mm by 60mm by 30mm, 2mm wall thickness, open top.</li>
<li>A flanged plate, 100mm by 70mm by 4mm, with a 30mm circular boss centered on one face, 15mm tall.</li>
</ul>
<p>I ran each prompt once and measured the result. No cherry-picking, no re-rolling for a better result.</p>
<h2>What I found</h2>
<p>The rectangular plate was close. Width came in at 79.6mm instead of 80mm. Length was 50.1mm. Thickness was 5.0mm. The bolt pattern was the problem: one hole was shifted about 0.8mm from where it should have been. If you're using clearance holes with M4 bolts, you'd probably still get the bolts through. If you're using dowel pins for alignment, forget it.</p>
<p>The cylindrical standoff was the best result. Outer diameter 20.0mm, bore 10.0mm, height 15.0mm. Simple geometry with simple dimensions. This is where text-to-CAD currently lives comfortably.</p>
<p>The L-bracket was mixed. Thickness was 3.0mm. Leg lengths were 39.5mm and 40.2mm, so the two legs didn't even match each other. Hole spacing measured 24.3mm instead of 25mm, which is close but not what I asked for. On a 3D print, you'd never notice. On a machined part bolting to something with fixed holes, you'd notice immediately.</p>
<p>The box enclosure had the right external dimensions within half a millimeter, but the wall thickness varied between 1.8mm and 2.3mm around the perimeter. A consistent 2mm wall was part of the prompt. The AI got the outer box right and let the inner cavity wander a bit. This is the kind of error that's invisible in a viewport rotation and obvious the moment you section the model.</p>
<p>The flanged plate was the worst result. The plate was close to spec, but the circular boss came in at 28mm diameter instead of 30mm, and the height was 14mm instead of 15mm. Both off by enough to matter if the boss is supposed to locate into a mating hole or clear a specific component.</p>
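<p>Laid out side by side, the pattern is easier to see. This is just the measurements above restated, with the error computed for each; no new data:</p>
<pre><code># (requested_mm, measured_mm) pairs, taken straight from the tests above.
results = {
    "plate width": (80.0, 79.6),
    "plate length": (50.0, 50.1),
    "standoff OD": (20.0, 20.0),
    "bracket hole spacing": (25.0, 24.3),
    "boss diameter": (30.0, 28.0),
    "boss height": (15.0, 14.0),
}

for name, (asked, got) in results.items():
    err = abs(asked - got)
    print(f"{name}: asked {asked}mm, got {got}mm, off by {err:.1f}mm")
</code></pre>
<p>The standalone dimensions cluster near zero error. The referenced ones, the hole spacing and the boss, carry all the drift.</p>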
<h2>The pattern I noticed</h2>
<p>Simple, symmetric geometry with few features tends to be accurate. A cylinder with one bore is an easy problem. The AI nails it.</p>
<p>As you add features, especially features that reference other features, accuracy drifts. A bolt pattern requires holes positioned relative to edges and relative to each other. That's a constraint problem, and text-to-CAD tools don't really reason about constraints. They predict positions based on what similar parts in the training data looked like, not based on the relationships you described. The difference between "holes on a 60mm by 30mm pattern" and "holes approximately where you'd expect them based on similar parts" is small in language and large in manufacturing.</p>
<p>Features that require precise relationships (concentric circles, symmetric patterns, features referenced to datums) tend to be less accurate than features that stand alone. Which is unfortunate, because referenced features are exactly the ones that matter most in real assemblies.</p>
<h2>How this compares to manual CAD</h2>
<p>In Fusion 360 or SolidWorks, if I dimension a hole at 4.2mm, it's 4.2mm. Not 4.15mm. Not 4.3mm. Exactly 4.2mm. The software does what I tell it, no more, no less. The accuracy of the model is limited only by the precision of the geometric kernel, which for practical purposes is perfect. If the dimension is wrong, it's because I typed the wrong number, which is a different class of problem and at least one I can fix by correcting a single value.</p>
<p>Text-to-CAD introduces a layer of interpretation between what you ask for and what you get. That interpretation layer is sometimes very good and sometimes off by enough to matter. There's no way to predict in advance which outcome you'll get for a given prompt, which means you have to check every time.</p>
<p>For comparison: I don't measure every feature of a model I built myself in Fusion 360. I trust the software to put things where I told it to. I measure every feature of a text-to-CAD model before I'd do anything with it. That trust gap is the real accuracy problem, not any individual dimension being wrong.</p>
<h2>The caliper test (after printing)</h2>
<p>I printed the plate and the L-bracket on an FDM printer. Prusa MK4, PLA, standard settings. Then I measured the prints.</p>
<p>The plate came off the printer at 79.3mm by 49.8mm by 4.9mm. Some of that shrinkage is the printer, not the model. But the hole that was already misplaced in the CAD file was now further off, because printer inaccuracy stacks on top of model inaccuracy. The total error on that hole position was about 1.1mm from where I originally asked for it. On a prototype to test fit, still usable. On a functional part, I'd need to drill the holes out and accept the slop.</p>
<p>The L-bracket was similar. The already-mismatched leg lengths got slightly worse. The hole spacing, already off by 0.7mm in the model, was off by about 1mm in the print. Again, usable for checking the concept. Not usable for assembling with a mating part that has fixed hole positions.</p>
<p>The takeaway: text-to-CAD inaccuracy and manufacturing process inaccuracy compound. If the model is already off by half a millimeter and the printer adds another half millimeter of error in a random direction, you can easily end up with a part that's 1mm or more away from what you intended. That's fine for some applications and completely unacceptable for others.</p>
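<p>To make the compounding concrete, here's a toy Python sketch comparing worst-case stacking with the root-sum-square estimate used when errors are independent. The 0.5mm figures are the hypothetical round numbers from above, not measurements:</p>

```python
import math

def worst_case(errors_mm):
    """Worst-case stack-up: every error lands in the same direction."""
    return sum(abs(e) for e in errors_mm)

def rss(errors_mm):
    """Root-sum-square stack-up: independent errors in random directions."""
    return math.sqrt(sum(e * e for e in errors_mm))

# Hypothetical numbers matching the scenario above: 0.5mm of model error
# plus 0.5mm of printer error on one hole position.
stack = [0.5, 0.5]
print(f"worst case: {worst_case(stack):.2f} mm")  # 1.00 mm
print(f"RSS:        {rss(stack):.2f} mm")         # ~0.71 mm
```

<p>Worst-case is the pessimistic bound; RSS is what you'd expect on average from independent errors. Which is why half a millimeter of model error plus half a millimeter of printer error usually lands you somewhere between 0.7mm and 1mm off.</p>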
<h2>What this means for different use cases</h2>
<p>For prototyping and concept checking: text-to-CAD accuracy is usually good enough. You're testing form, fit, and general proportions, not hitting tolerances. If the bracket is roughly the right size and the holes are roughly in the right place, you can evaluate the concept. Roughly is the operative word, and for prototyping, roughly is often sufficient.</p>
<p>For 3D printing functional parts: it depends on how functional. A cable clip? Fine. A housing that doesn't need to interface with anything precise? Probably fine. A bracket that bolts to a specific component with specific hole spacing? Check the model first, and adjust as needed. The <a href="/posts/text-to-cad-guide">text-to-CAD for 3D printing</a> workflow always includes a measurement step.</p>
<p>For CNC machining: no. Don't send text-to-CAD output to a machine shop without verifying every dimension. A machinist works to the model, and if the model is wrong, the part is wrong, and now you're paying for material, machine time, and a redo. Measure the STEP file. Fix what's off. Add tolerances. Then send it. The <a href="/posts/ai-cad-for-real-work">manufacturing reality</a> of AI-generated parts is that they need significant human review before they're shop-ready.</p>
<p>For injection molding or sheet metal: not relevant, because text-to-CAD tools don't generate the process-specific features those methods require. Accuracy is a secondary issue when the fundamental <a href="/posts/text-to-cad-limitations">limitations</a> are about missing capabilities rather than dimensional errors.</p>
<h2>How to work with the accuracy you get</h2>
<p>My workflow, which I recommend to anyone using these tools, is simple and non-negotiable:</p>
<ol>
<li>Generate the part with a specific, detailed prompt.</li>
<li>Import the STEP file into your CAD tool.</li>
<li>Measure every dimension you care about.</li>
<li>Fix what's off.</li>
<li>Add tolerances, constraints, and relationships the AI didn't include.</li>
<li>Save the corrected version as your actual model.</li>
</ol>
<p>Treat the AI output as a starting sketch, not a finished part.</p>
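<p>The measurement step is mechanical enough to script. Here's a minimal sketch of the idea in Python; the dimension names, nominals, and 0.1mm acceptance band are all made up for illustration:</p>

```python
# Hypothetical nominal dimensions from the prompt vs. values measured on
# the imported STEP file. Names, numbers, and the 0.1mm band are made up.
NOMINAL_MM = {"width": 80.0, "height": 50.0, "thickness": 5.0}
TOLERANCE_MM = 0.1

def check_dimensions(measured, nominal, tol):
    """Return every dimension whose measured value falls outside nominal +/- tol."""
    return {
        name: (value, nominal[name])
        for name, value in measured.items()
        if abs(value - nominal[name]) > tol
    }

measured = {"width": 79.8, "height": 50.02, "thickness": 5.3}
for name, (got, want) in check_dimensions(measured, NOMINAL_MM, TOLERANCE_MM).items():
    print(f"FIX {name}: measured {got} mm, wanted {want} mm")
```

<p>Anything the check flags goes back into the CAD tool for correction before the model counts as real.</p>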
<p>This adds maybe five to ten minutes per part for simple geometry. On anything complex, you'll spend longer, but on anything complex you'll also be rebuilding most of the model anyway because of the other <a href="/posts/text-to-cad-limitations">limitations</a> beyond accuracy.</p>
<p>The people who get burned are the ones who skip the measurement step. They generate a part, export it, and send it downstream assuming the dimensions are what they asked for. Sometimes they are. Sometimes they're not. "Sometimes" is not an engineering specification.</p>
<h2>The honest verdict</h2>
<p>Text-to-CAD is not accurate enough for production manufacturing. It's close enough for prototyping simple parts. It's inconsistent enough that you should never trust it without verification. And the accuracy gap is narrowing with each generation of these tools, but it's not closed yet and probably won't be for a while.</p>
<p>I'll keep using it for first drafts and quick checks. I'll keep measuring every output before I do anything with it. And I'll keep telling people that the accuracy question isn't really about the numbers. It's about trust. Right now, I trust my Fusion 360 model because I built it. I check my text-to-CAD model because the AI built it. That difference tells you everything about where the technology stands.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD limitations: what nobody tells you</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-limitations</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-limitations</guid>
      <pubDate>Wed, 28 Jan 2026 00:00:00 GMT</pubDate>
      <description>Text-to-CAD tools can generate simple parts. They cannot handle assemblies, tolerances, complex surfaces, or anything that requires actual engineering judgment. Here&apos;s the full list of what breaks.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>limitations</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Current text-to-CAD limitations include: no assembly support, no tolerance or GD&amp;T handling, poor complex surface generation, limited to simple prismatic geometry, no DFM awareness, inconsistent dimensional accuracy, no sheet metal or injection molding features, and inability to handle engineering constraints.</p>
<p>I was trying to explain text-to-CAD to a machinist I work with. Showed him a bracket Zoo.dev generated from a single sentence. He was impressed for about four seconds, the time it took him to rotate the model and notice the internal corner radii were zero. "So it doesn't know cutters exist," he said, and went back to his coffee. That's the whole problem in one sentence, from a guy who's been making parts longer than most of these AI companies have existed.</p>
<p>Text-to-CAD tools are genuinely useful for simple geometry. I've said that before and I mean it. But there's a growing gap between what the marketing implies these tools can do and what they actually deliver when you try to use the output for anything beyond a viewport screenshot or a quick 3D print. I've spent months testing this stuff, and here's the full list of what breaks, what's missing, and what nobody mentions in the demo.</p>
<h2>No assemblies</h2>
<p>This is the one that surprises people most. Current text-to-CAD tools generate single parts. You can describe an enclosure. You can't describe an enclosure with a lid that snaps onto it, a PCB mount inside, a cable gland in the side, and a gasket groove in the rim. You can't describe an assembly of parts that need to fit together with defined relationships.</p>
<p>This matters because most real CAD work is assembly work. A bracket exists in context: it mounts to something, holds something, and clears something else. The dimensions of the bracket depend on the dimensions of the things around it. Without assembly context, a text-to-CAD bracket is a freestanding object that might or might not fit where you need it.</p>
<p>I tried working around this by generating individual parts and assembling them in Fusion 360. It went about as well as you'd expect. The hole patterns didn't line up. The mating surfaces weren't coplanar. One part was 2mm thicker than the other assumed it would be. I spent more time fixing the alignment than I would have spent modeling both parts from scratch with proper assembly constraints.</p>
<h2>No tolerances or GD&#x26;T</h2>
<p>Text-to-CAD tools produce nominal geometry. There are no tolerances. No dimensional tolerances, no geometric tolerances, no surface finish callouts, no fit specifications. The model has dimensions, but those dimensions carry no engineering intent about precision.</p>
<p>This sounds abstract until you try to manufacture something. A 10mm hole is not useful information for a machine shop. A 10mm hole with H7 tolerance tells them exactly what diameter to cut and what surface finish to achieve. A 10mm hole with no tolerance annotation tells them to guess, call you, or apply their house standard, which may not be what you need.</p>
<p>I've never seen a text-to-CAD tool output a model with any tolerance information. Not once. And until they do, every output requires a human to add the engineering data before it's production-ready. The <a href="/posts/is-text-to-cad-accurate">accuracy question</a> is related but separate: even the nominal dimensions aren't always reliable, which makes the tolerance gap even worse.</p>
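<p>For what it's worth, a callout like H7 is just data. Here's a Python sketch of what "10mm H7" encodes: an H-class hole has a lower deviation of zero and an upper deviation equal to the IT7 grade for its size range. The IT7 values below are from the ISO 286 tables for a few common ranges; this is an illustration, not a full implementation:</p>

```python
# IT7 tolerance grades (micrometres) for a few ISO 286 size ranges.
# Partial table for illustration only.
IT7_UM = [  # (range upper bound in mm, IT7 tolerance in micrometres)
    (3, 10), (6, 12), (10, 15), (18, 18), (30, 21), (50, 25),
]

def h7_band(nominal_mm):
    """Return (min, max) diameter in mm for an H7 hole of this nominal size."""
    for upper, it7_um in IT7_UM:
        if nominal_mm <= upper:
            # H class: lower deviation is zero, upper deviation is +IT7.
            return nominal_mm, nominal_mm + it7_um / 1000.0
    raise ValueError("size outside this toy table")

lo, hi = h7_band(10.0)
print(f"10mm H7: {lo:.3f} to {hi:.3f} mm")  # 10.000 to 10.015 mm
```

<p>Fifteen microns of permitted variation. That's the information a machine shop needs and a nominal-only model doesn't carry.</p>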
<h2>Poor complex surfaces</h2>
<p>Flat faces, cylinders, simple fillets. That's roughly the surface vocabulary of current text-to-CAD tools. Ask for a NURBS surface that transitions smoothly between two different cross-sections, or a lofted shape with guide curves, or an organic surface with curvature continuity, and you'll get something that either doesn't work or approximates the surface with faceted geometry that looks smooth from a distance and terrible up close.</p>
<p>I asked Zoo.dev to generate an ergonomic handle. The result looked like a handle in the same way that a balloon animal looks like a dog. The cross-sections didn't flow. The transitions were abrupt. The surface quality was nowhere near what you'd need for an injection-molded grip. For a concept visualization, fine. For tooling, not remotely.</p>
<p>Complex surfaces are hard in manual CAD too. I'm not pretending they're easy. But the AI doesn't have the surfacing vocabulary to handle them, and the training data seems to skew heavily toward prismatic mechanical parts. If your work involves consumer products, ergonomics, or anything with curvature requirements, text-to-CAD isn't in the conversation yet.</p>
<h2>Limited to simple prismatic geometry</h2>
<p>The sweet spot for text-to-CAD is boxes, brackets, plates, simple enclosures, and standoffs. Basically, the kind of geometry you'd create with sketch-extrude-cut-fillet operations. Once you step outside that vocabulary, the results degrade quickly.</p>
<p>Gears are a good example. I asked for a spur gear with specific module, tooth count, and bore diameter. What came back had the right number of teeth (sometimes) but the tooth profile was decorative, not involute. The root radius was wrong. The bore was close but not dimensioned to a standard fit. A gear that doesn't mesh with another gear isn't a gear. It's a decoration.</p>
<p>Springs, cams, helical features, threads, knurling, splines. All of these require specialized knowledge that the current training data doesn't capture well. The AI has seen thousands of brackets in the training set and very few cams. The output quality reflects that distribution.</p>
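<p>The involute profile the gear needed isn't mysterious; it's a parametric curve. Here's a Python sketch that samples one flank from the base circle out to the tip, assuming a standard 20-degree pressure angle and addendum equal to the module:</p>

```python
import math

def involute_points(module_mm, teeth, pressure_angle_deg=20.0, steps=20):
    """Sample one involute flank from the base circle outward.

    The involute of a circle of radius rb is the parametric curve
    x = rb(cos t + t sin t), y = rb(sin t - t cos t). Real gear teeth
    follow this curve, which is why a decorative profile won't mesh.
    """
    pitch_radius = module_mm * teeth / 2.0
    base_radius = pitch_radius * math.cos(math.radians(pressure_angle_deg))
    outer_radius = pitch_radius + module_mm  # addendum = module, standard teeth
    # Parameter value where the involute reaches the outer radius:
    t_max = math.sqrt((outer_radius / base_radius) ** 2 - 1.0)
    points = []
    for i in range(steps + 1):
        t = t_max * i / steps
        x = base_radius * (math.cos(t) + t * math.sin(t))
        y = base_radius * (math.sin(t) - t * math.cos(t))
        points.append((x, y))
    return points

pts = involute_points(module_mm=2.0, teeth=20)
print(f"flank from r={math.hypot(*pts[0]):.3f} mm "
      f"to r={math.hypot(*pts[-1]):.3f} mm")
```

<p>Every point on the curve sits at radius rb·sqrt(1 + t²), which is exactly the kind of relationship a vaguely tooth-shaped extrusion won't satisfy.</p>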
<h2>No DFM awareness</h2>
<p>Design for manufacturability is not a feature you bolt onto a model after the shape exists. It's a set of constraints that inform the shape from the beginning. Wall thickness for injection molding. Draft angles for mold release. Tool access for CNC machining. Bend radii for sheet metal. Relief cuts for bending. Gate locations. Parting lines. Ejection strategies.</p>
<p>Text-to-CAD tools know none of this. They generate shapes that exist in a manufacturing vacuum. The bracket with zero-radius internal corners that my machinist spotted is typical, not exceptional. The AI doesn't know that a 3-axis CNC can't cut a sharp internal corner. It doesn't know that a 0.5mm wall will chatter and deflect during machining. It doesn't know that vertical faces on an injection-molded part need 1 to 3 degrees of draft or the part won't release from the mold.</p>
<p>This matters more than people realize because DFM violations are expensive. A part that looks fine on screen but can't be manufactured without a secondary operation, a more expensive process, or a complete redesign is not a time-saving. It's a time bomb. I covered this in more detail when I tested <a href="/posts/ai-cad-for-real-work">AI-generated parts for real manufacturing</a>, and the results were not pretty.</p>
<h2>Inconsistent dimensional accuracy</h2>
<p>When I say inconsistent, I don't mean "always wrong by the same amount." I mean sometimes the dimensions are close, sometimes they're off by a millimeter, sometimes they're off by five percent, and you can't predict which outcome you'll get. The same prompt on the same tool can produce different dimensions on different days. Consistency is the problem, not just accuracy.</p>
<p>I tested a specific prompt ten times on one tool. Asked for a plate that was 80mm by 50mm by 5mm. Eight of the ten results were within 0.5mm on all dimensions. One was off by 1.2mm on the width. One was off by 2mm on the thickness, which is a 40% error on a 5mm dimension. There's no warning. The model looks fine in the viewport. You only find out when you measure it, which I do with the STEP file in Fusion 360 before I'd ever send anything to manufacturing.</p>
<p>For prototyping and concept work, this inconsistency is tolerable. For production, it's disqualifying. You can't build a manufacturing process on output that might be accurate. It needs to be accurate every time, or the checking time eats the time savings.</p>
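<p>If you want to quantify the inconsistency rather than eyeball it, basic statistics over repeated generations will do. The readings below are illustrative, shaped like my ten-run test rather than copied from it:</p>

```python
import statistics

# Hypothetical thickness readings (mm) from ten generations of the same
# "80 x 50 x 5mm plate" prompt. Illustrative numbers, not my actual data.
thickness_mm = [5.0, 4.98, 5.02, 5.01, 4.99, 5.0, 5.03, 4.97, 5.0, 3.0]
nominal_mm = 5.0

mean = statistics.mean(thickness_mm)
spread = statistics.stdev(thickness_mm)
worst = max(abs(t - nominal_mm) for t in thickness_mm)
print(f"mean {mean:.2f} mm, stdev {spread:.2f} mm, worst error {worst:.2f} mm")
```

<p>Notice that the mean looks almost fine while the worst error is 2mm. Averages hide exactly the outlier that ruins a part, which is why the worst case, not the mean, is the number that matters.</p>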
<h2>No sheet metal support</h2>
<p>Sheet metal in CAD is its own discipline. A proper sheet metal part has bend features, not folded solid bodies. It has K-factors that determine the developed length based on material type and thickness. It has bend relief cuts to prevent tearing. It has a flat pattern that unfolds correctly for laser cutting or punching. The relationship between the 3D folded part and the 2D flat pattern is mathematical, driven by real material properties.</p>
<p>Text-to-CAD gives you a shape that looks like folded metal. It is not sheet metal. There are no bend features. No K-factor. No flat pattern. The geometry is a solid body that happens to resemble something you could bend from sheet, but try to unfold it and you'll get nothing, because the model was never designed with bending in mind.</p>
<p>For anyone who works with sheet metal regularly, this is a dead end. You'd spend more time converting the output into proper sheet metal features than you'd save by generating the shape in the first place.</p>
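<p>The math the tools skip here is well established: the developed length of a flat pattern is the sum of the flat legs plus a bend allowance for each bend, BA = angle × (R + K·T) with the angle in radians. A quick Python sketch, with an assumed K-factor of 0.44 (a common default for mild steel, not a universal constant):</p>

```python
import math

def bend_allowance(angle_deg, inside_radius_mm, thickness_mm, k_factor=0.44):
    """Arc length of the neutral axis through one bend:
    BA = angle * (R + K*T), with the bend angle in radians."""
    return math.radians(angle_deg) * (inside_radius_mm + k_factor * thickness_mm)

# Hypothetical 90-degree bend in 2mm steel with a 2mm inside radius.
ba = bend_allowance(90, inside_radius_mm=2.0, thickness_mm=2.0)
print(f"bend allowance: {ba:.2f} mm")  # ~4.52 mm
```

<p>No K-factor in the model means no correct flat pattern, and no correct flat pattern means the laser cutter gets the wrong blank.</p>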
<h2>No injection molding features</h2>
<p>I touched on this above with DFM, but injection molding deserves its own callout because it's such a common manufacturing process and so completely unsupported by text-to-CAD tools.</p>
<p>A good injection-molded part design accounts for draft angles, uniform wall thickness, gate location, weld line control, sink mark prevention, undercuts and side actions, and mold release. The part geometry is shaped by the process as much as by the function.</p>
<p>Text-to-CAD generates geometry that ignores all of this. The walls vary in thickness. The faces have no draft. Snap fits and ribs are designed without regard for ejection. A tooling engineer I showed some AI-generated enclosure designs to said they'd each need a complete redesign before quoting. Not adjustments. Redesigns. That's not a time saving. That's a liability.</p>
<h2>No engineering constraints</h2>
<p>In real parametric CAD, constraints are the skeleton of the model. A hole is concentric with a boss. A bolt pattern is symmetric about an axis. A wall thickness is linked to the overall width by a ratio. These relationships mean the model can adapt when requirements change. Move the mounting surface and the holes follow. Change the material thickness and the bend radii update.</p>
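<p>A toy Python sketch of what "encoded reason" means: dimensions derived from one driving parameter, so a single change propagates. The 5% wall ratio and the 8mm edge margin are invented for illustration:</p>

```python
# Toy illustration of encoded design intent: dependent dimensions derived
# from one driving parameter. The ratio and margin values are made up.
def bracket_dimensions(overall_width_mm):
    wall = overall_width_mm * 0.05             # wall linked to width by ratio
    hole_spacing = overall_width_mm - 2 * 8.0  # holes follow an 8mm edge margin
    return {"wall": wall, "hole_spacing": hole_spacing}

# Change the driving width and the dependent dimensions follow:
print(bracket_dimensions(80.0))
print(bracket_dimensions(100.0))
```

<p>Text-to-CAD output gives you only the evaluated numbers, never the relationships that produced them.</p>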
<p>Text-to-CAD geometry has no constraints. Features exist at specific coordinates, but there's no encoded reason why. Move one hole and nothing else adjusts. The AI generated the positions based on the prompt and the training data, not based on engineering relationships. The result is geometry that's fragile to any change.</p>
<p>This is why I keep saying text-to-CAD output is a starting point, not a finished model. You import it, measure it, and then rebuild it with proper constraints in your real CAD tool. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> describes the workflow I actually use, and it always involves significant manual rework. The AI gets you a shape. You turn it into a model.</p>
<h2>The feature tree gap</h2>
<p>Related to the constraint problem: most text-to-CAD output has no usable feature tree. Zoo.dev gives you a STEP file that imports as a dumb solid. No history, no features, no timeline you can roll back. CADAgent, which works inside Fusion 360, does generate a feature tree, but it's usually structured in ways that are fragile and hard to modify.</p>
<p>A good feature tree captures design intent. It lets you change one dimension and have the rest of the model update logically. A text-to-CAD output captures shape, period. For a one-off part that never changes, this is fine. For anything that lives in a project with revisions, it means rebuilding.</p>
<h2>What actually works despite all this</h2>
<p>After listing everything that doesn't work, I should be honest about what does. Because something does.</p>
<p>Simple parts for quick evaluation. Concept geometry for design reviews. First drafts of brackets, plates, and enclosures that you plan to refine in traditional CAD. Quick STL files for test prints where dimensional precision isn't critical. Starting shapes for design exploration when you want to react to geometry rather than imagine it.</p>
<p>For all of those, text-to-CAD saves time. Not manufacturing time. Not engineering time. Sketching time. And that's worth something, especially early in a project when the cost of being approximate is low.</p>
<h2>The honest assessment</h2>
<p>Text-to-CAD limitations aren't temporary inconveniences that the next software update will fix. Some of them, like tolerance handling and DFM awareness, require fundamental changes to how these models are trained and what data they're trained on. Others, like assembly support and complex surfaces, are hard AI problems that the research community is working on but hasn't solved commercially.</p>
<p>If you understand the limitations, you can use the tools productively within their actual capabilities. If you don't, you'll generate geometry that looks like a part on screen and turns into a problem the moment it meets material, tooling, or an inspector with calipers.</p>
<p>I use text-to-CAD. I know what it can't do. That knowledge is what makes it useful instead of dangerous. The tools will get better. But right now, the list of what they can't do is longer than the list of what they can, and anyone telling you otherwise is selling something.</p>
]]></content:encoded>
    </item>
    <item>
      <title>CADScribe review: mixed results, honest take</title>
      <link>https://blog.texocad.ai/posts/cadscribe-review</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/cadscribe-review</guid>
      <pubDate>Tue, 27 Jan 2026 00:00:00 GMT</pubDate>
      <description>CADScribe tries to generate CAD models from text descriptions. Sometimes it works. Sometimes it hands you geometry that looks like it gave up halfway through.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>cadscribe</category>
      <category>review</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> CADScribe is a text-to-CAD generator that outputs STL and STEP files from text prompts. Results are inconsistent: simple shapes work reasonably well, but complex prompts produce unreliable geometry. It&apos;s behind Zoo.dev in output quality and engineering usability.</p>
<p>I asked CADScribe for a flanged bearing mount with four bolt holes and a central bore. What came back looked, at first glance, like a flanged bearing mount. The flange was there. The holes were there. The bore existed. I felt a brief moment of optimism, the kind you get right before the geometry collapses under inspection. I opened the STEP export in Fusion 360, selected a face, and noticed the bore wasn't actually centered. It was off by about 1.5 mm to one side, just enough to look right in the viewport and be completely wrong in reality. The bolt holes were evenly spaced, at least. One out of two isn't bad for a ten-second generation. Except it is bad, because "almost right" in CAD is just "wrong with better presentation."</p>
<p>That's been my experience with CADScribe across maybe twenty test prompts. It generates real geometry, sometimes in STEP, sometimes in STL. The output occasionally nails what you asked for. More often, it gets the general idea right and the specifics wrong in ways that require either significant rework or a full rebuild. The tool is free, it's fast, and it's sitting in an uncomfortable middle ground between "impressive demo" and "usable engineering tool."</p>
<h2>What CADScribe does</h2>
<p>CADScribe is a browser-based text-to-CAD generator. You type a description of a part, it generates a 3D model, and you can export the result as STL or STEP. The generation is fast, usually ten to fifteen seconds, and the interface is conversational. You can refine your prompt by telling it things like "make the hole bigger" or "add a chamfer to the top edge," and it'll attempt to update the model.</p>
<p>It also shows dimension sliders on some models, though this feature seemed inconsistent in my testing. Some models got sliders, some didn't, and the ones that did had sliders for only a few parameters. It's a feature that's clearly in development rather than fully baked.</p>
<p>The tool is currently free to use, which puts it in a different competitive position than paid alternatives. You're not risking a subscription to find out whether it works for your use case. You're risking fifteen minutes, and if you're already sitting at your computer procrastinating on a real model, that's time you were going to lose anyway.</p>
<h2>Where it works</h2>
<p>Simple prismatic parts. Boxes. Plates with holes. Basic brackets. Cylindrical features. If the part you need can be described in one sentence and doesn't involve features that reference each other in complex ways, CADScribe has a reasonable chance of giving you something recognizable.</p>
<p>I got a decent rectangular enclosure with corner mounting holes on the first try. The dimensions were within about five percent of what I prompted, the wall thickness was consistent, and the holes were positioned symmetrically. For a 3D print prototype where "close enough" is the actual specification, this was fine. I exported the STL, sliced it, and had a physical part in a couple of hours. The fact that the wall was 2.1 mm instead of the 2 mm I asked for didn't matter for a test fit.</p>
<p>Basic geometric primitives with modifications also work reasonably well. A cylinder with a bore. A plate with a slot. A block with pocketed features. The tool seems to handle single-operation modifications better than compound features, which makes sense given the nature of the underlying model.</p>
<p>Both All3DP and Xometry have published tests of CADScribe. The Xometry evaluation included it among seven text-to-CAD tools and found it generates "fairly accurate CAD files" that are useful for "simple prototyping, educational demos, and visual exploration." That tracks with my experience. CADScribe sits in the "useful for simple things" bucket, not the "ready for engineering" bucket.</p>
<h2>Where it falls apart</h2>
<p>Anything with geometric complexity. The flanged bearing mount I mentioned was a moderate prompt, not even a particularly demanding one, and it came back with misaligned features. A gear request produced something that looked vaguely gear-shaped but had tooth geometry that would mesh with nothing in this or any other universe. A sheet metal bracket came back as a solid extrusion with no bend features, no flat pattern, and no awareness that sheet metal has rules.</p>
<p>The consistency problem is worse than the accuracy problem. I ran the same prompt three times on different days and got three different models. The general shape was similar each time, but the specific dimensions, feature positions, and geometry quality varied. One version of a simple bracket had clean fillets. Another had no fillets at all. The third had fillets on two edges but not the other two. If you're generating geometry that needs to be repeatable, or that other parts will mate to, this inconsistency is a real problem.</p>
<p>The conversational refinement is hit or miss. "Make the holes larger" sometimes scaled the holes. Sometimes it regenerated the entire model with different proportions. Once it kept the original model and added new, larger holes next to the existing ones, which was creative in a way I did not appreciate. The lack of parametric continuity between iterations means each refinement is partly a new generation, and each new generation is a new roll of the dice.</p>
<p>STEP export quality varies. Some exports opened cleanly in Fusion 360 with selectable faces and proper topology. Others had stitched surfaces, zero-thickness walls, or internal faces that made the solid body act like it was held together with optimism. If you're exporting to STEP for downstream engineering work, you need to inspect every file. Not spot-check. Inspect.</p>
<h2>How it compares</h2>
<p>Against <a href="https://zoo.dev">Zoo.dev</a>, CADScribe loses on almost every axis that matters for engineering work. Zoo produces cleaner B-Rep geometry, handles more complex prompts, outputs more reliable STEP files, and has better dimensional accuracy. The gap is significant. Where CADScribe gives you geometry that looks approximately right, Zoo gives you geometry that's closer to actually right. Zoo also has a well-documented API and a Python SDK, which matters for anyone trying to integrate text-to-CAD into a larger workflow.</p>
<p>Against <a href="/posts/adamcad-review">AdamCAD</a>, CADScribe lacks the parametric slider editing that makes AdamCAD's post-generation workflow faster. AdamCAD's OpenSCAD foundation also means the output is inherently parametric and readable as code, while CADScribe's output is generated geometry without transparent logic. For iterative design exploration, AdamCAD's slider approach beats CADScribe's regeneration approach.</p>
<p>Against <a href="/posts/cadgpt-review">CADGPT</a>, the comparison doesn't apply. CADGPT doesn't generate geometry at all. It's a scripting assistant. Different tool, different job.</p>
<p>The <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> comparison covers the full field if you want to see where CADScribe sits relative to everything else.</p>
<h2>The free factor</h2>
<p>The strongest thing CADScribe has going for it is that it's free. In a category where Zoo.dev has a free tier but rate-limits it, and AdamCAD charges $9.99 a month for standard access, CADScribe lets you generate as many models as you want without paying anything. For students, hobbyists, and people who just want to see what text-to-CAD can do before investing in a paid tool, that matters.</p>
<p>Free doesn't fix the accuracy problems. Free doesn't make the STEP exports more reliable. Free doesn't add parametric editing or manufacturing awareness. But free does lower the bar to the point where "try it and see" costs nothing, and for a tool in this category, that's a valid strategy. Many of the people who start with CADScribe will eventually move to Zoo or another tool when they need better output. CADScribe is a gateway, and there's nothing wrong with that.</p>
<h2>The text-to-CAD limitations problem</h2>
<p>CADScribe's weaknesses are not unique to CADScribe. They're the same <a href="/posts/text-to-cad-limitations">limitations that affect every text-to-CAD tool</a> right now: inconsistent accuracy, no manufacturing awareness, no tolerance handling, unreliable feature trees, and output that needs human review before it goes anywhere near a machine shop. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers these broader issues in detail.</p>
<p>Where CADScribe sits specifically is closer to the bottom of the accuracy spectrum among tools that generate actual geometry. Zoo.dev is ahead. AdamCAD is roughly comparable on simple parts but better on iterative refinement. CADAgent, which operates inside Fusion 360, produces output with better feature history because it's working inside a real parametric environment. CADScribe is competing with tools that have technical advantages it doesn't currently match.</p>
<h2>The verdict</h2>
<p>CADScribe generates real 3D models from text prompts, and sometimes those models are usable. For simple geometry, quick prototyping, educational purposes, or just satisfying your curiosity about what text-to-CAD feels like, it's fine. It's free, it's fast, and the barrier to trying it is effectively zero.</p>
<p>For anything that matters, check the output. Measure the features. Inspect the STEP file before you build around it. The geometry CADScribe produces is a starting suggestion, not an engineering document. If you treat it as a rough first draft that needs manual verification and cleanup, you'll be in the right mindset. If you treat it as finished geometry, you're going to have the kind of afternoon that ends with you arguing at a screen about why a hole isn't where it's supposed to be.</p>
<p>I keep CADScribe bookmarked as a quick test tool. When I want to see if a concept makes sense as a shape, it's faster than opening Fusion 360 and sketching from scratch. When I need geometry I can trust, I go somewhere else. That's an honest assessment from someone who wanted it to be better than it is, and who'll check back in six months to see if it got there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>CADGPT review: useful assistant, not a model generator</title>
      <link>https://blog.texocad.ai/posts/cadgpt-review</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/cadgpt-review</guid>
      <pubDate>Mon, 26 Jan 2026 00:00:00 GMT</pubDate>
      <description>CADGPT writes AutoLISP and Python scripts for CAD automation. It does not generate 3D models. If you know what it actually is, it&apos;s occasionally useful.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>cadgpt</category>
      <category>review</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> CADGPT is a chat-based AI assistant that generates AutoLISP and Python scripts for CAD automation and answers design questions. It is not a text-to-CAD model generator. It&apos;s useful for scripting tasks but does not produce 3D geometry from prompts.</p>
<p>I installed CADGPT expecting a tool that would generate 3D geometry from a text prompt. What I got was a chatbot named Elaine that writes AutoLISP code and answers engineering questions. Not the same thing. Not even close to the same thing. But once I stopped being annoyed about the name and started using it for what it actually does, I found something that's occasionally useful in a narrow, specific way that nobody seems to talk about honestly.</p>
<p>The name is the problem. If you search "CADGPT" expecting it to work like text-to-CAD, you're going to be disappointed, confused, or both. I was both, sitting at my desk on a Monday with a cold coffee and a Fusion 360 file I was trying to avoid opening, wondering why the "CAD" tool I'd just downloaded was offering to translate my emails into German instead of generating a bracket.</p>
<h2>What CADGPT actually is</h2>
<p>CADGPT is made by BackToCAD, and it's billed as an "AI Expert System" for CAD and BIM. The 2026 version runs as an add-on compatible with AutoCAD, Revit, CADirect, BricsCAD, and IntelliCAD products. It costs $199 per year, which is a lot for what amounts to a specialized chatbot, and not much if it saves you even a few hours of scripting work.</p>
<p>The core feature is a conversational AI assistant called Elaine. You ask Elaine questions about CAD workflows, engineering calculations, or software commands, and she answers. You can ask her to generate code in AutoLISP, C#, C++, VB, or ObjectARX. You can ask her to solve engineering math problems step by step. You can ask her to write tutorials or translate text. She does not, at any point, generate a 3D model.</p>
<p>I want to be clear about this because the name implies otherwise and I've seen people get confused. CADGPT is not a text-to-CAD tool. It doesn't produce geometry. It doesn't output STEP files. It doesn't create solid bodies. It writes scripts and answers questions. That's a different category of tool, and calling it "CADGPT" is, in my opinion, misleading in a market where people are actively looking for AI that generates CAD models.</p>
<h2>The scripting side</h2>
<p>This is where CADGPT earns whatever goodwill it has. If you work in AutoCAD and you need a custom AutoLISP routine, writing one from scratch is a specific kind of tedious. You need to remember the syntax, look up function calls, debug parentheses nesting that makes you question your life choices, and test repeatedly. CADGPT can generate a working first draft of a LISP routine from a plain English description, and for simple automation tasks, the output is often good enough to use after minor tweaking.</p>
<p>I tested it with a few prompts. "Write an AutoLISP routine that draws a grid of circles at specified spacing." It produced working code. The spacing parameter was in the right place. The loop logic was correct. I had to adjust one variable name that shadowed a built-in, but the bones were solid. For someone who writes LISP routines regularly, this saves maybe ten to fifteen minutes. For someone who doesn't know LISP at all, it's the difference between having a script and not having one.</p>
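<p>For a sense of scale, the loop logic in question fits in a few lines. Here's the same grid computation sketched in plain Python as a stand-in for the generated LISP (the function name and signature are mine, not CADGPT's output):</p>

```python
def circle_grid(rows: int, cols: int, spacing: float, radius: float):
    """Return (center_x, center_y, radius) for a rows x cols grid of circles.

    Mirrors the structure of the LISP routine described above: an outer
    loop over rows, an inner loop over columns, one circle per cell.
    """
    return [
        (col * spacing, row * spacing, radius)
        for row in range(rows)
        for col in range(cols)
    ]

# A 3 x 4 grid at 10 mm spacing yields 12 circles,
# the last one centered at (30, 20).
grid = circle_grid(rows=3, cols=4, spacing=10.0, radius=2.5)
```

<p>The LISP version adds the entity-creation calls on top, but the bones are the same: two nested loops and a spacing parameter in the right place.</p>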
<p>The code generation also covers ObjectARX, C#, and VB, which extends the usefulness to people building more complex AutoCAD plugins. I didn't test these as thoroughly, but the C# output I looked at was syntactically correct and logically reasonable, which is more than I can say for some of the code I've seen humans write at 4 PM on a Friday.</p>
<p>The limitation is the same one you'd find with any LLM-based code generation: the output works for common patterns and breaks on edge cases. Ask for something unusual, a LISP routine that manipulates custom extended entity data in a specific way, or a script that interacts with a third-party API through AutoCAD's netload system, and the generated code starts hallucinating function names or inventing API calls that don't exist. You need to know enough about the scripting language to recognize when the output is wrong, which somewhat defeats the purpose for beginners.</p>
<h2>The calculator and reference tools</h2>
<p>CADGPT includes an engineering calculator that handles trigonometry, load calculations, material-property lookups, and other common engineering math, with step-by-step explanations. It's fine. It works the same way asking ChatGPT a math question works, but with a CAD-flavored wrapper. If you want to calculate the allowable bending stress on a beam without opening a reference table, it'll do that. Whether that's worth $199 a year when you could ask any general-purpose LLM the same question is a different matter.</p>
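<p>To show what that kind of calculation amounts to, here's the textbook bending-stress check as a generic sketch. This is standard mechanics of materials, not CADGPT's implementation:</p>

```python
def rect_section_inertia(b: float, h: float) -> float:
    """Second moment of area of a solid rectangular section: I = b*h^3 / 12 (mm^4)."""
    return b * h**3 / 12.0

def bending_stress(moment: float, c: float, inertia: float) -> float:
    """Maximum bending stress sigma = M*c / I (N/mm^2 for N*mm and mm inputs)."""
    return moment * c / inertia

# A 10 x 20 mm rectangular beam under a 50,000 N*mm moment:
i = rect_section_inertia(10.0, 20.0)        # b*h^3/12 = 6666.67 mm^4
sigma = bending_stress(50_000.0, 10.0, i)   # 75 N/mm^2 at the outer fiber
```

<p>That's the whole trick. The step-by-step explanation Elaine wraps around it is the product; the math itself is free everywhere.</p>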
<p>The reference manual feature pulls answers from CAD software documentation. In theory, this means you can ask "how do I mirror a block in AutoCAD 2026" and get an answer with the right menu path and command name. In practice, the answers are usually correct for common operations and occasionally wrong for obscure ones. I caught it giving me a command sequence that worked in AutoCAD 2024 but had been reorganized in 2026. Not a disaster, but not the kind of reliability you'd want from a $199 expert system.</p>
<p>The rest of the feature list (email generation, text translation, Dall-E prompt generation, web page generation) feels like padding. If I'm paying for a CAD expert system, I don't need it to write my emails. I have twelve other tools that do that, and none of them cost $199 a year.</p>
<h2>What it's not</h2>
<p>CADGPT is not a text-to-CAD tool. It does not belong in the same category as <a href="https://zoo.dev">Zoo.dev</a>, <a href="/posts/adamcad-review">AdamCAD</a>, or <a href="/posts/cadscribe-review">CADScribe</a>. Those tools take a text prompt and produce 3D geometry. CADGPT takes a text prompt and produces scripts, answers, or code. The distinction matters because someone searching for "CADGPT review" in the context of text-to-CAD is going to be misled by the name.</p>
<p>If you want to generate a bracket from a text description, CADGPT can't do that. If you want a script that automates drawing brackets in AutoCAD based on a parameter table, CADGPT might be able to help. Those are fundamentally different needs, and conflating them doesn't help anyone except the people writing the marketing copy.</p>
<p>For a comparison of tools that actually generate geometry, the <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> overview covers the real options. For understanding what text-to-CAD means and how it differs from what CADGPT does, the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> explains the distinction.</p>
<h2>Who this is for</h2>
<p>If you are an AutoCAD power user who writes LISP routines or ObjectARX plugins and you want a faster way to scaffold code, CADGPT has a narrow usefulness. It's a scripting assistant for a specific ecosystem. If that ecosystem is your daily life, saving twenty minutes per routine adds up, and the $199 annual price might justify itself over a year of regular use.</p>
<p>If you use BricsCAD, IntelliCAD, or Revit, the compatibility extends there too, though I tested primarily with AutoCAD workflows and can't speak to how well it handles the quirks of those other platforms.</p>
<p>If you're a Fusion 360 or SolidWorks user, CADGPT has nothing for you. It doesn't integrate with those tools. It doesn't generate Python scripts for Fusion's API, despite what you might hope. It doesn't know what FeatureScript is. Its world is AutoCAD-adjacent, and outside that world, it's a general-purpose chatbot in an expensive suit.</p>
<p>If you're looking for <a href="/posts/ai-cad-automation">AI CAD automation</a> in a broader sense, tools like CADAgent (which operates inside Fusion 360) or even just using Claude or ChatGPT directly to write Fusion 360 API scripts will get you further than CADGPT will. The LLM underneath CADGPT isn't doing anything that a well-prompted general-purpose model can't do for the same scripting tasks, and the general-purpose model is usually free or cheaper.</p>
<h2>The pricing problem</h2>
<p>$199 a year is not a lot of money in the context of professional CAD software. But it's a lot of money for a wrapped LLM chatbot, especially when the underlying capability (generating AutoLISP and answering CAD questions) is available through ChatGPT, Claude, or any other general-purpose LLM for less money or for free.</p>
<p>The argument for CADGPT would be that it's tuned for CAD workflows and integrated into the AutoCAD environment. Fine. But the integration is a sidebar panel, not a deep plugin that reads your drawing and suggests context-aware automation. It doesn't know what you're working on. It doesn't see your layers, blocks, or geometry. It's a chat window that happens to live inside your CAD application instead of in a browser tab. That's a convenience, not a capability.</p>
<p>If BackToCAD added actual drawing awareness, the ability to analyze your current file and suggest automations, generate scripts based on your existing geometry, or even just read the entities in your drawing and offer context-relevant help, that would be worth $199. As it stands, I'm not sure the integration alone justifies the price over a general-purpose alternative.</p>
<h2>The verdict</h2>
<p>CADGPT is a misnamed product that does something narrower and less exciting than its name implies. It's not text-to-CAD. It's an AI scripting assistant for the AutoCAD ecosystem with some general-purpose features bolted on. If you know that going in, and if you write LISP or ObjectARX code regularly, it can save time. The code generation is competent for common tasks, the engineering calculator works, and the CAD reference answers are mostly right.</p>
<p>But the name is going to keep confusing people who are looking for geometry generation, the price is hard to justify against free alternatives that do similar things, and the padded feature list (email generator, web page generator) suggests a product that's not entirely sure what it wants to be.</p>
<p>I'd recommend trying it only if you're an AutoCAD-centric workflow person who wants scripting help inside the application. For everyone else, there are better ways to spend $199.</p>
]]></content:encoded>
    </item>
    <item>
      <title>I tested every text-to-CAD tool. Here&apos;s what actually works.</title>
      <link>https://blog.texocad.ai/posts/best-text-to-cad-tools</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/best-text-to-cad-tools</guid>
      <pubDate>Sun, 25 Jan 2026 00:00:00 GMT</pubDate>
      <description>I ran the same prompts through every text-to-CAD tool I could find. Most of them produced geometry that looked like a dare. A few produced real parts.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>tools</category>
      <category>comparison</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Zoo.dev is the most capable dedicated text-to-CAD tool in 2026, generating real B-Rep STEP files. AdamCAD is fast for simple parametric parts. CADAgent works inside Fusion 360. CADGPT and CADScribe are more limited. None are production-ready, but Zoo gets closest.</p>
<p>I gave up a Saturday afternoon for this. Sat down at the desk around noon with a thermos of coffee, a quiet house, and a list of every text-to-CAD tool I could find. I had three test prompts written out on a sticky note next to the monitor: a flanged bracket with four M5 mounting holes, a simple electronics enclosure with a lid, and a shaft collar with a set screw. Nothing exotic. The kind of parts I've modeled hundreds of times in Fusion 360 and SolidWorks, parts where I'd know immediately if the AI produced something real or something that was just cosplaying as engineering geometry.</p>
<p>By 5 PM, I had seven browser tabs open, three downloaded add-ins, a folder full of STEP files of varying quality, one genuinely impressive result, and the growing suspicion that most "text-to-CAD" tools are using the term more loosely than I'd like. If you've read the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a>, you know the distinction between actual B-Rep geometry and mesh blobs wearing a disguise. That distinction got tested hard. Here's what I found, tool by tool, with the same honesty I'd give a coworker asking "which one should I try first."</p>
<h2>Zoo.dev</h2>
<p>Zoo is the tool I keep coming back to, which is about the highest compliment I give software. It runs on a GPU-native geometric kernel called KittyCAD, and the output is real B-Rep geometry. When I typed my flanged bracket prompt, what came back was a STEP file I could import into Fusion 360 and actually edit. Select a face, add a chamfer, adjust a hole diameter. The feature tree wasn't there (it's not generated inside a parametric history environment), but the geometry itself behaved like geometry a human modeled.</p>
<p>The shaft collar came out well too. Correct topology, clean edges, reasonable proportions. The set screw hole was in the right place. The electronics enclosure was where things got shakier: the lid existed, but the fit between the two halves was more conceptual than precise. I wouldn't hand it to a machinist, but I also wouldn't throw it away. The starting point saved me twenty minutes of sketching, which on a prototype is real time.</p>
<p>Zoo outputs in STEP, glTF, OBJ, STL, and several other formats. The free tier is usable enough to actually evaluate the tool, which is more than I can say for a lot of SaaS products that gate everything behind a sales call. The API is well-documented and there's a Python SDK if you want to script batch generations. Pricing scales with usage, and for hobbyist or early-stage prototyping the free tier covers a surprising amount.</p>
<p>The weaknesses are real, though. Complex prompts produce unreliable results. Ask for a part with specific GD&#x26;T requirements and the tool doesn't know what to do with that information. Internal faces sometimes appear where they shouldn't. Fillets occasionally fail in ways that remind me of SolidWorks on a bad day, except you can't go back and tweak the sketch that caused it. And there's no sense of manufacturing process. Zoo will happily generate a wall thickness that no injection mold on earth could fill.</p>
<p>I've written more about it in the <a href="/posts/zoo-text-to-cad-review">Zoo text-to-CAD review</a>, but the short version: it's the one I'd recommend trying first. If you want to understand how to get better results, the <a href="/posts/text-to-cad-prompt-engineering">text-to-CAD prompt engineering</a> post covers what works.</p>
<h2>AdamCAD</h2>
<p>AdamCAD takes a different approach. Instead of generating a STEP file and calling it done, it gives you a parametric model with adjustable dimension sliders. Type your prompt, get an STL, then use sliders to tweak dimensions after the fact. It's like getting a rough parametric sketch that you can nudge into shape.</p>
<p>For my flanged bracket, this worked surprisingly well. The base geometry appeared fast, faster than Zoo, and the sliders let me adjust the flange width and hole spacing without regenerating from scratch. The shaft collar was decent too. The enclosure was a mess, which seems to be the universal failure mode for all these tools. Enclosures require too many interrelated constraints for current AI to manage.</p>
<p>Pricing starts at $5.99 a month, which is cheap enough that you don't feel robbed if it only saves you time occasionally. The limitation is that the parametric controls are surface-level compared to a real feature tree. You can change a width, but you can't suppress a feature, mirror a pattern, or add a relationship between two dimensions. It's parametric in the way an online box generator is parametric, not in the way SolidWorks is parametric. For quick-iteration prototyping or 3D printing rough-outs, that might be exactly enough. For production parts, you'll still end up rebuilding in proper CAD.</p>
<h2>CADGPT</h2>
<p>I have a small grudge against CADGPT for the name, because it implies something it doesn't do. CADGPT doesn't generate CAD models. It generates automation scripts. Feed it a description and it writes AutoLISP for AutoCAD or Python scripts for other tools. That's a fundamentally different job than producing geometry.</p>
<p>To be fair, if you're an AutoCAD user who writes AutoLISP regularly, having an AI that can draft scripts from natural language descriptions is genuinely useful. I tested it with my bracket prompt and got back a reasonable AutoLISP script that would have produced roughly the right shape if I'd run it in AutoCAD. I didn't have AutoCAD open that afternoon (one of the perks of a Saturday is not launching AutoCAD), so I can't speak to the exact output. But the script itself looked competent.</p>
<p>The issue is category confusion. If you're searching for a text-to-CAD tool expecting to type a sentence and get a part, CADGPT isn't that. It's a code assistant that happens to know CAD scripting languages. It belongs more in the "AI coding assistant" bucket than the "text-to-CAD" bucket. Useful for the right person, misleading for everyone else.</p>
<h2>CADScribe</h2>
<p>CADScribe generates both STL and STEP files, which puts it in the same general territory as Zoo. It's been reviewed by Xometry and All3DP, which gives it some credibility that not every tool in this space has earned. I went in with moderate expectations.</p>
<p>My flanged bracket came back looking reasonable in the preview. Four holes, a flange, approximately correct proportions. When I imported the STEP file into Fusion 360, the geometry was valid but rough. Edge quality was inconsistent, and a couple of surfaces had that slightly-off-normal look that tells you the kernel struggled. The shaft collar was fine. Simple enough geometry that the tool handled it without drama. The enclosure, again, was the weak link. The walls were uneven, the lid was decorative at best, and there was a mystery internal surface that made the solid body count wrong.</p>
<p>For basic mechanical parts that you're going to rebuild in proper CAD anyway, CADScribe gives you a starting point. For anything you'd want to send directly to manufacturing, you'll be doing enough cleanup that the time savings get questionable. It sits in that awkward middle ground where it's too limited for professionals but too technical for beginners.</p>
<h2>CADAgent</h2>
<p>This is the one that made me sit up straighter. CADAgent is a recently released open-source Fusion 360 add-in that generates models directly inside Fusion 360. You bring your own Anthropic API key, install the add-in, and type a prompt. The AI then generates actual Fusion 360 modeling commands: sketch, extrude, fillet, the works. The model builds itself in front of you, with a real timeline you can roll back and edit.</p>
<p>I tested my flanged bracket and watched Fusion 360 create a sketch, extrude it, add holes, and apply fillets. The whole thing took about thirty seconds and produced a model with actual parametric history. I could click on the sketch in the timeline, change a dimension, and the rest of the model updated. That's the thing that separates CADAgent from everything else in this list. The output isn't an orphaned solid sitting in the browser tree. It's a fully parametric model you can work with the way you'd work with anything you modeled yourself.</p>
<p>The catches: it's early. Complex prompts sometimes produce operations that fail partway through, leaving you with a half-built model and a red flag in the timeline. The AI occasionally picks strange construction approaches that a human wouldn't choose, like extruding in an awkward direction and then cutting away material to get the right shape instead of just sketching the right profile. And it requires an Anthropic API key, which means API costs on top of your Fusion 360 subscription. For the shaft collar, the result was clean. For the enclosure, it got about 70% of the way there before one of the shell operations failed and the timeline went red.</p>
<p>If you're already living in Fusion 360, this is worth trying. The parametric output alone puts it in a different category from tools that dump STEP files into your downloads folder and wish you luck.</p>
<h2>Vondy AI CAD Generator</h2>
<p>Vondy outputs DXF files and is aimed squarely at beginners. I tested it mostly out of completeness. The bracket prompt produced a flat 2D profile that was recognizably bracket-shaped, which is about what you'd expect from a DXF generator. No 3D. No B-Rep. No parametric anything. If you need a quick 2D outline for laser cutting and you don't want to open a CAD program, Vondy does that. If you need anything resembling a 3D part, look elsewhere.</p>
<h2>HP AI Text to 3D</h2>
<p>HP's tool is tied to their 3D printing ecosystem. It's browser-based, manufacturing-focused, and clearly built with HP's Multi Jet Fusion printers in mind. The geometry it produces is print-oriented STL, not editable CAD. For someone working entirely within HP's ecosystem who wants to go from a text description to a print-ready file without opening CAD software, there's a use case. For anyone else, it's a sideshow. I tested it, got back an STL of my bracket that was technically printable but not editable, and moved on. If you're interested in that workflow specifically, the <a href="/posts/text-to-cad-for-3d-printing">text-to-CAD for 3D printing</a> post covers the printing angle in more detail.</p>
<h2>What I actually learned</h2>
<p>After five hours, a cold thermos, and a folder of mixed-quality geometry files, here's where I landed.</p>
<p>Zoo.dev is the best general-purpose text-to-CAD tool right now. It produces real B-Rep geometry, supports useful output formats, and the results for simple to moderate parts are genuinely helpful. It's not replacing a CAD engineer, but it's a useful first-draft machine.</p>
<p>CADAgent is the most promising approach, because generating geometry inside a real parametric CAD environment solves problems that standalone tools can't. The parametric history alone changes the equation. It's early, it fails on complex parts, and it requires Fusion 360, but the direction is right.</p>
<p>AdamCAD is fast and cheap for simple parts, especially if you want quick dimensional iteration. CADGPT is a scripting assistant, not a geometry generator, and should be evaluated as such. CADScribe is in the middle of the pack. Vondy and HP's tool are niche.</p>
<p>None of these tools are production-ready for real engineering work. Every single output I generated needed some level of cleanup, from minor dimensional adjustments to complete rebuilds. But the gap between "useless novelty" and "saves me twenty minutes on a prototype" is real, and a few of these tools have crossed it. For a deeper side-by-side breakdown, the <a href="/posts/text-to-cad-tools-comparison">text-to-CAD tools comparison</a> has more detail.</p>
<p>The honest verdict: if you model parts for a living, try Zoo and CADAgent. Give them your easiest part first, not your hardest. See if the output saves you time or just creates a different kind of work. That's the only test that matters, and no demo will answer it for you. I learned that on a Saturday, and my coffee was ice cold by the time I did.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AdamCAD review: fast parametric output, some catches</title>
      <link>https://blog.texocad.ai/posts/adamcad-review</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/adamcad-review</guid>
      <pubDate>Sat, 24 Jan 2026 00:00:00 GMT</pubDate>
      <description>AdamCAD generates parametric 3D models fast and lets you tweak dimensions with sliders. The catch is it&apos;s limited to simpler geometry and the output needs cleanup.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>adamcad</category>
      <category>review</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AdamCAD is a text-to-CAD tool that generates parametric 2D and 3D models with adjustable dimension sliders, starting at $9.99/month. It&apos;s fast for simple parts but limited in geometric complexity and output quality compared to Zoo.dev.</p>
<p>I was two coffees in on a Thursday morning, trying to crank out a quick sensor bracket before a design review, and decided to give AdamCAD a proper test instead of just poking at it for five minutes like I had before. The prompt was straightforward: an L-bracket, 3 mm thick, two mounting holes per leg, 40 mm legs, nothing exotic. AdamCAD gave me something back in about ten seconds. The shape was recognizable. The sliders appeared on the side. I dragged one, watched the leg length update in real time, and thought, okay, this actually works. Then I exported the STEP, opened it in Fusion 360, and spent the next twenty minutes figuring out why one of the hole positions was off by nearly two millimeters and the fillet at the bend looked like it had been applied by someone who'd heard of fillets but never used one.</p>
<p>That's AdamCAD in a nutshell. Fast generation, genuinely useful parametric sliders, and output that gets you most of the way there before quietly falling apart on the details.</p>
<h2>What AdamCAD actually does</h2>
<p>AdamCAD is a browser-based text-to-CAD tool. You type a description, it generates a 3D model, and it gives you sliders to adjust dimensions after generation. The parametric slider idea is the thing that sets it apart from most other tools in this space. Instead of regenerating the entire model every time you want to change a hole diameter or a wall thickness, you just drag a slider and the model updates. It's faster than re-prompting, and it gives you a sense of control that most text-to-CAD tools don't offer.</p>
<p>Under the hood, AdamCAD generates OpenSCAD code, which means the output is genuinely parametric in a way that a dumped STEP file from most AI tools isn't. You can export as STL, STEP, or SCAD. The SCAD export is actually the most interesting, because you get the actual code you can open, read, and edit in OpenSCAD if you want to go further than the sliders allow.</p>
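<p>To make that concrete, here's a hypothetical Python generator for a minimal parametric plate, loosely in the style of a SCAD export. The dimensions, names, and structure here are mine, not AdamCAD's actual output:</p>

```python
def plate_scad(length: float = 60.0, width: float = 40.0,
               thick: float = 3.0, hole_d: float = 5.0) -> str:
    """Emit OpenSCAD source for a rectangular plate with a corner hole pattern.

    Every dimension becomes a named SCAD variable, which is what makes the
    export editable: change one number at the top and re-render.
    """
    return f"""// parametric mounting plate (generated sketch)
length = {length}; width = {width};
thick = {thick}; hole_d = {hole_d};
inset = hole_d * 1.5;

difference() {{
    cube([length, width, thick]);
    // one clearance hole near each corner
    for (x = [inset, length - inset], y = [inset, width - inset])
        translate([x, y, -1])
            cylinder(h = thick + 2, d = hole_d, $fn = 32);
}}
"""
```

<p>The point is the named variables at the top. The sliders in AdamCAD's UI are, as far as I can tell, mapped onto exactly this kind of parameter, which is why some dimensions get sliders and others don't.</p>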
<p>The interface is clean and chat-based. You describe what you want, sometimes go back and forth refining, and the model updates in the viewport. It feels modern. It feels fast. Whether it feels like engineering depends on what you're trying to do with the output.</p>
<h2>Pricing</h2>
<p>AdamCAD Standard runs $9.99 a month and gives you 100 generations per week, which is plenty for testing and moderate use. AdamCAD Pro is $29.99 a month for unlimited generations. There's also a free tier with limited generations if you want to try it before committing. When I first looked at the tool, pricing started at $5.99 a month, but it appears to have changed since then. Either way, compared to a SolidWorks subscription, you're spending pocket change. Compared to Zoo.dev's free tier, you're spending anything at all, which is a harder sell when the free alternative produces competitive output.</p>
<h2>Where it works</h2>
<p>Simple prismatic parts. Brackets, plates, enclosures, standoffs, basic housings. If the part can be described in one or two sentences and doesn't involve surfacing, complex fillets meeting at odd angles, or features that reference each other in non-trivial ways, AdamCAD does a decent job. The slider-based parametric editing is genuinely nice for quick iteration. I changed a bracket leg from 40 mm to 55 mm, adjusted a hole diameter, and watched the model rebuild without having to re-prompt or wait for another generation cycle. That workflow is faster than most competitors.</p>
<p>For quick prototyping, especially if you're headed to a 3D printer and don't need tight tolerances, AdamCAD saves real time. I generated a simple mounting plate for a Raspberry Pi enclosure, adjusted the standoff heights with sliders, exported an STL, and printed it. The holes were close enough. The standoffs were close enough. "Close enough" is the operating phrase here, and for a prototype fixture, it was fine.</p>
<p>The conversational refinement also works reasonably well. You can say things like "make the wall thicker" or "add a slot on the left side" and it usually understands. It's not as precise as typing exact dimensions into a sketch, but for roughing out a concept it gets the job done.</p>
<h2>Where it doesn't</h2>
<p>The parametric sliders are limited to what AdamCAD decides to expose, and that's not always what you need. On one model, I got sliders for overall length and width but nothing for the hole pattern spacing. On another, I could adjust wall thickness but not the fillet radius. The sliders feel curated rather than comprehensive, which makes sense given that they're mapped to OpenSCAD parameters, but it means you hit a wall fairly quickly on anything moderately complex.</p>
<p>The OpenSCAD foundation is both a strength and a limitation. OpenSCAD uses constructive solid geometry, CSG, which is great for boxes, cylinders, and boolean operations but struggles with organic shapes, swept features, and anything that would normally require a spline or a loft in a real parametric tool. Ask AdamCAD for a housing with a complex curved surface and you'll get something that approximates the shape with faceted geometry. It works. It's not beautiful. A machinist would have opinions.</p>
<p>Dimensional accuracy is the bigger issue. Across maybe a dozen test parts, I found that prompted dimensions sometimes drifted by a few percent in the output, occasionally more. That's fine for a prototype jig. It's not fine for anything that mates with other components at specified tolerances. The hole that was supposed to be 5 mm came out as 4.85 mm in one test. The wall that was supposed to be 2 mm measured 1.8 mm at one end. Not catastrophic, but the kind of thing that'll bite you if you trust the output without measuring.</p>
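<p>Measuring is cheap to automate. A quick check like this (a hypothetical helper, using the deviations from my tests) is enough to catch the drift before it bites:</p>

```python
def drift_pct(nominal: float, measured: float) -> float:
    """Percent deviation of a measured dimension from its nominal value."""
    return abs(measured - nominal) / nominal * 100.0

def within_tolerance(nominal: float, measured: float, tol_pct: float = 1.0) -> bool:
    """True if the measured dimension is within tol_pct percent of nominal."""
    return drift_pct(nominal, measured) <= tol_pct

# The two deviations from the tests above:
drift_pct(5.0, 4.85)  # about 3% -> fails a 1% tolerance
drift_pct(2.0, 1.8)   # about 10% -> fails by a wide margin
```

<p>Nothing fancy, just the discipline of comparing output against prompt before the part goes anywhere that matters.</p>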
<p>Feature tree quality is another sore spot, though that's partly an OpenSCAD problem rather than an AdamCAD problem. The generated SCAD code is functional but not elegant. If you open it up to make manual edits, you'll find nested operations, magic numbers, and a structure that reflects how the AI thinks about geometry rather than how a human would organize it. You can work with it, but refactoring the code to make it maintainable is its own project.</p>
<h2>How it compares</h2>
<p>Against <a href="https://zoo.dev">Zoo.dev</a>, which I consider the most capable dedicated text-to-CAD tool right now, AdamCAD is faster for simple parts and the slider workflow is more interactive. But Zoo produces cleaner B-Rep geometry, handles more complex prompts, and outputs STEP files that behave better in downstream CAD tools. If I need a quick throwaway bracket, AdamCAD gets it done. If I need starting geometry for a real project, I'm going to Zoo.</p>
<p>Against <a href="/posts/cadgpt-review">CADGPT</a>, the comparison doesn't quite work because CADGPT is an automation assistant, not a geometry generator. They're solving different problems.</p>
<p>Against <a href="/posts/cadscribe-review">CADScribe</a>, AdamCAD's parametric sliders give it a meaningful edge for iterative work. CADScribe generates geometry from prompts too, but adjusting the output means re-prompting rather than dragging a slider. For quick exploration, that speed difference matters.</p>
<p>For a broader view of where all these tools sit relative to each other, the <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> comparison covers the full field.</p>
<h2>The OpenSCAD angle</h2>
<p>This is the part that interests me most, honestly. Because AdamCAD generates OpenSCAD code, the output is transparent. You can see exactly what the AI did, read the logic, and modify it. That's not true of tools that generate geometry inside a proprietary kernel and hand you a STEP file with no history. If you know OpenSCAD, or you're willing to learn enough to read the scripts, AdamCAD gives you more control than most of its competitors.</p>
<p>The flip side is that you inherit OpenSCAD's limitations. No history-based parametric modeling in the Fusion 360 or SolidWorks sense. No feature tree you can roll back. No sketch-and-extrude workflow. OpenSCAD is a programming language for geometry, and if that's not your thing, the SCAD export is just a curiosity.</p>
<p>For the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> readers who want to understand where AdamCAD fits in the larger picture: it's a fast, lightweight tool for simple parametric parts. It's not trying to replace your main CAD package, and it shouldn't.</p>
<h2>The verdict</h2>
<p>AdamCAD does what it says. It generates parametric models from text, it lets you tweak them with sliders, and it gets you from idea to exportable geometry faster than most alternatives. For simple parts, quick prototyping, and early-stage exploration, it earns its subscription price. The slider-based editing is the best implementation of post-generation parametric control I've seen in any text-to-CAD tool.</p>
<p>The catches are real, though. The geometric complexity ceiling is low. The dimensional accuracy needs checking. The OpenSCAD foundation limits what kinds of shapes you can produce. And the output, while usable, rarely survives contact with a real engineering workflow without cleanup.</p>
<p>If you're a maker, a student, or someone who needs fast rough geometry and plans to refine it elsewhere, AdamCAD is worth trying. If you're an engineer who needs reliable dimensions and clean feature trees, it's a starting point at best, and you should budget time for the cleanup that's coming. I keep it bookmarked for quick jobs. I don't rely on it for anything that has to be right the first time.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD workflows and tools</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-workflows-and-tools</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-workflows-and-tools</guid>
      <pubDate>Fri, 23 Jan 2026 00:00:00 GMT</pubDate>
      <description>A practical look at text-to-CAD workflows from prompt to export, covering the tools that exist, the ones that work, and the uncomfortable amount of manual cleanup still involved.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>workflows</category>
      <category>tools</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD workflows follow a loop of prompting, generating, reviewing, editing, and exporting. The tools that matter right now are Zoo.dev for B-Rep output, AdamCAD for quick parametric parts, CADAgent inside Fusion 360, and OpenSCAD paired with an LLM. None of them replace knowing CAD, but some of them save real time on the right kind of geometry.</p>
<p>Text-to-CAD workflows start with a prompt and end with an editable model you can actually manufacture from, but the middle is where things get honest. The tools exist. Some of them produce real B-Rep geometry. The workflow is prompt, generate, review, clean up, export, and probably prompt again because the first result had the wall thickness of a credit card.</p>
<p>I was sitting at my desk last Tuesday, second coffee going cold, trying to see if I could get a usable electronics enclosure out of <a href="https://zoo.dev">Zoo.dev</a> faster than I could model one from scratch in Fusion 360. The enclosure was not complicated. Rectangular box, lid with screw bosses, a cutout for a USB-C port, standoffs for a PCB. The kind of part I've modeled hundreds of times and could probably sketch in my sleep, which is exactly why it felt like a fair test. If text-to-CAD can't beat me on a part I find boring, it's not going to help on the parts I find interesting.</p>
<p>The answer, for what it's worth, was mixed. The AI got me about 70% of the way in maybe two minutes. Then I spent another twenty minutes fixing the things it got wrong. Whether that counts as "faster" depends on how you do the math and how much you enjoy arguing with yourself.</p>
<p>That experience, more than any feature list or product demo, is what text-to-CAD workflows actually feel like right now. And I think anyone considering these tools deserves to hear that before they reorganize their process around a technology that still needs a human holding its hand through most of the interesting decisions.</p>
<h2>The workflow loop nobody shows in demos</h2>
<p>Every text-to-CAD tool, regardless of what it's called or who sells it, follows the same basic loop. The marketing materials like to present this as a straight line: type a prompt, get a model, done. In practice it's a circle, and you will go around it more than once.</p>
<p>The loop looks like this: you write a prompt describing the part you want. The tool generates geometry. You look at the geometry and discover what the tool misunderstood. You refine the prompt or edit the model directly. You export. You check the export in your actual CAD environment. You find more problems. You fix those. Eventually you have something usable, or you give up and model it yourself.</p>
<p>I've gone through this loop dozens of times now with different tools, and the ratio of "generated geometry I kept" to "generated geometry I replaced" varies wildly depending on part complexity. Simple prismatic shapes with clear dimensions? The tools do fine. Anything with internal features, draft angles, thin walls near each other, or organic transitions? You're back to doing the work yourself, just with a worse starting point than a blank sketch.</p>
<p>The prompt stage deserves more attention than it usually gets. I've written about this in detail in my <a href="/posts/text-to-cad-prompt-engineering">text-to-CAD prompt engineering</a> guide, but the short version is: you need to think like you're writing a work order for someone who has never seen your part, has no spatial reasoning, and takes every word literally. Dimensions matter. Material context helps. Vague words like "small" or "sturdy" are useless. If you say "box with a hole," you'll get a box with a hole, and nothing else. No fillets, no wall thickness, no mounting features, no consideration of how the thing gets made. The <a href="/posts/how-to-use-text-to-cad">how to use text-to-CAD</a> guide covers the practical side of this in more detail.</p>
<h2>The tools that actually exist</h2>
<p>The text-to-CAD space is young enough that the list of tools worth talking about fits on one hand. There are others, but I'm focusing on the ones I've actually used or that produce output worth discussing. If you want the broader picture, the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the full range.</p>
<p><strong>Zoo.dev</strong> is the one most people encounter first. It runs on the KittyCAD geometric kernel, which is GPU-native and generates actual B-Rep (Boundary Representation) geometry, not mesh blobs. The output can be exported as STEP, glTF, OBJ, STL, and a few other formats. The STEP export is the one that matters if you're doing real work, because STEP is what your downstream CAD tool can actually digest as editable geometry.</p>
<p>I've had decent results with Zoo for mechanical parts: brackets, simple housings, mounting plates, that kind of thing. Where it struggles is anything requiring relationships between features. Tell it you want a mounting plate with four M4 clearance holes on a bolt pattern, and it might get the holes but space them wrong, or get the spacing right but forget the clearance. It's inconsistent in a way that means you always have to check. For a deeper walkthrough, the <a href="/posts/zoo-text-to-cad-tutorial">Zoo text-to-CAD tutorial</a> covers the specifics.</p>
<p>Zoo also has an API, which is where things get more interesting if you're a developer. The Python SDK (<code>kittycad</code>) lets you script prompt-to-STEP pipelines, which opens up batch generation and integration into automated workflows. The <a href="/posts/text-to-cad-api">text-to-CAD API</a> post goes into this properly.</p>
<p><strong>AdamCAD</strong> takes a different approach. It generates parametric geometry with adjustable sliders, which means you get a model and can then tweak dimensions without re-prompting. The output is mostly STL, which limits its usefulness for downstream parametric editing, but for quick prototyping and 3D printing workflows it's fast and surprisingly capable. At $5.99 a month for the basic tier, it's cheap enough to try without thinking about it.</p>
<p><strong>CADAgent</strong> is the newest entry and the one I find most promising. It's an open-source Fusion 360 add-in (released March 2026, GitHub: er-fo/CADAgent) that generates models directly inside Fusion 360 using an Anthropic API key you provide. This matters because the output lives natively in your Fusion timeline. No import step, no format conversion, no prayer that the STEP file will behave. You describe the part, CADAgent writes the Fusion operations, and the result shows up in your feature tree like you modeled it yourself. It's early and limited, but the approach is right.</p>
<p><strong>OpenSCAD paired with an LLM</strong> is the path that gets the least attention and might have the most long-term potential. OpenSCAD is already code-based, which means an LLM can generate its scripts directly. You describe a part in natural language, the LLM writes OpenSCAD code, you run it, you see the result, you iterate. With the OpenSCAD MCP server adding a visual feedback loop, the LLM can actually see what it generated and correct itself. I wrote a full piece on this: <a href="/posts/openscad-ai">OpenSCAD + AI</a>. If you're comfortable with code and want full control over the geometry, this is the workflow I'd recommend exploring first.</p>
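<p>To show why the code-based path is attractive, here is the kind of parametric script this workflow produces. The Python wrapper, helper name, and dimensions are my own illustration, not any particular LLM's output; the point is that the result is readable, diffable OpenSCAD source.</p>

```python
# Sketch: the kind of parametric OpenSCAD an LLM might emit for a mounting
# plate. Helper name and default dimensions are illustrative only.
def plate_scad(length=80, width=50, thickness=4, hole_d=4.2, margin=6):
    """Emit OpenSCAD source for a plate with four corner clearance holes."""
    holes = "\n".join(
        f"    translate([{x}, {y}, -1]) "
        f"cylinder(h={thickness + 2}, d={hole_d}, $fn=32);"
        for x in (margin, length - margin)
        for y in (margin, width - margin)
    )
    return (
        "difference() {\n"
        f"  cube([{length}, {width}, {thickness}]);\n"
        f"{holes}\n"
        "}\n"
    )


print(plate_scad())
```

<p>Save the output as <code>plate.scad</code> and render it with <code>openscad -o plate.stl plate.scad</code>. Changing a hole diameter is a one-line edit and a re-render, not a re-prompt.</p>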
<p><strong>FreeCAD with AI-assisted Python macros</strong> follows similar logic. FreeCAD's Python API is well-documented enough that current LLMs can generate usable macros for common operations. It's not a polished product. You're basically asking ChatGPT or Claude to write FreeCAD Python scripts, then running them and fixing what breaks. But for open-source users who already know FreeCAD, it works surprisingly well for repetitive geometry.</p>
<h2>Prompt engineering is the skill nobody expected to need</h2>
<p>I spent ten years learning how to communicate design intent through sketches, dimensions, and GD&#x26;T. Now I'm also learning how to communicate design intent through sentences typed into a text box, which is a sentence I would have found absurd in 2023.</p>
<p>The uncomfortable truth is that prompt quality is the single biggest factor in text-to-CAD output quality. Better prompts produce better parts. Vague prompts produce garbage. This is true across every tool I've tested.</p>
<p>A few things I've learned the hard way:</p>
<p>Start with overall dimensions. "A rectangular enclosure, 120mm long, 80mm wide, 40mm tall, with 2mm wall thickness" gives the tool something to work with. "A box for my Arduino" does not, because the tool has no idea which Arduino you mean, how much clearance you want, or whether "box" means open-top, lidded, snap-fit, or screw-closed.</p>
<p>Specify features in order of importance. The tool will try to include everything, but when features conflict with each other geometrically, the ones mentioned later tend to get mangled. If the mounting holes matter more than the cosmetic fillets, say the mounting holes first.</p>
<p>Use manufacturing language when you can. "3mm fillets on all external edges" is better than "rounded edges." "M4 counterbore holes, 8mm diameter, 4mm deep" is better than "screw holes." The tools have been trained on engineering data, and engineering vocabulary gets better results than conversational descriptions. My <a href="/posts/text-to-cad-prompt-engineering">text-to-CAD prompt engineering</a> guide has a lot more on this, including prompt templates that work.</p>
<p>Be explicit about what you don't want. If you want a solid body with no internal voids, say so. If you want a shell with uniform wall thickness, say that. Left to their own devices, these tools make assumptions, and the assumptions are not always the ones a person with manufacturing experience would make.</p>
<p>And accept that multi-step prompting usually works better than a single long prompt. Describe the base shape first. Get that right. Then add features. Get those right. Then refine. Trying to specify an entire complex part in one prompt is like trying to explain an assembly to someone using only one sentence. You can do it, but the result will be confusing for everyone involved.</p>
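<p>If you find yourself writing the same kind of prompt repeatedly, the advice above is mechanical enough to encode: overall dimensions first, features in priority order, manufacturing vocabulary, explicit exclusions. This little helper is my own convention, not any tool's required input format:</p>

```python
# A tiny prompt builder encoding the advice above. The structure is my own
# convention for illustration, not any tool's required input format.
def build_prompt(shape, dims_mm, wall_mm=None, features=(), exclusions=()):
    """Assemble a text-to-CAD prompt: dimensions first, features by priority."""
    parts = [
        f"A {shape}, "
        + " x ".join(f"{d}mm" for d in dims_mm)
        + (f", with {wall_mm}mm uniform wall thickness" if wall_mm else "")
        + "."
    ]
    for f in features:      # most critical feature first
        parts.append(f + ".")
    for x in exclusions:    # be explicit about what you don't want
        parts.append(f"Do not add {x}.")
    return " ".join(parts)


print(build_prompt(
    "rectangular enclosure", (120, 80, 40), wall_mm=2,
    features=["Four M4 counterbore holes, 8mm diameter, 4mm deep, one per corner",
              "3mm fillets on all external edges"],
    exclusions=["internal ribs", "text or logos"],
))
```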
<h2>File formats: where the workflow gets real</h2>
<p>The moment you export from a text-to-CAD tool, you enter the territory I've been complaining about for years: file format interoperability. The <a href="/posts/text-to-cad-tutorial">text-to-CAD tutorial</a> covers the step-by-step of this, but here's the short version of what to expect.</p>
<p>STEP is the format you want for downstream editing. It carries B-Rep geometry that most professional CAD tools can import as solids. When Zoo.dev exports STEP, the result usually imports cleanly into Fusion 360 or SolidWorks as a dumb solid, meaning you get the geometry but not a feature tree. You can still add features on top of it, take measurements, cut sections, and do the kind of work you'd do with any imported body. But you can't go back and change the AI's sketch dimensions, because there are no sketch dimensions. It's a solid lump that happens to be the right shape.</p>
<p>STL is what you get from most tools when STEP isn't available. It's a mesh. It's fine for 3D printing. It's mostly useless for parametric editing. If you import an STL into SolidWorks, you get a mesh body that you can look at, measure, and maybe convert to a solid if you enjoy frustration and have a high tolerance for approximation artifacts.</p>
<p>glTF and OBJ are for visualization. They're what you'd use if you want to drop the geometry into a web viewer, a render engine, or a game. They're not for manufacturing.</p>
<p>DXF matters if you're working in 2D or doing laser cutting and CNC routing. Some tools (Vondy, for example) output DXF directly, which is useful for flat parts but doesn't help with 3D.</p>
<p>The practical advice is simple: if the tool can export STEP, use STEP. If it can't, you're either 3D printing the STL directly or you're rebuilding the geometry in your CAD tool using the AI output as a visual reference. Both are valid workflows, just don't pretend the second one is "AI-generated CAD" when what you actually did was trace over a robot's homework.</p>
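<p>Because mislabeled exports happen, it helps that these formats are distinguishable from their first bytes: a STEP file opens with the literal <code>ISO-10303-21;</code>, an ASCII STL starts with <code>solid</code>, and a binary STL is an 80-byte header plus a triangle count that predicts the exact file size. A quick sniffer, as a sketch:</p>

```python
# Heuristic format check for a CAD export, based on well-known file headers:
# STEP (ISO 10303-21) begins with "ISO-10303-21;", ASCII STL with "solid",
# and binary STL is 80 header bytes + uint32 triangle count + 50 bytes/facet.
import struct


def sniff_cad_export(data: bytes) -> str:
    head = data[:100].lstrip()
    if head.startswith(b"ISO-10303-21;"):
        return "STEP (B-Rep, import as a solid)"
    if head.startswith(b"solid"):
        return "ASCII STL (mesh)"
    if len(data) >= 84:
        (tri_count,) = struct.unpack_from("<I", data, 80)
        if len(data) == 84 + 50 * tri_count:
            return "binary STL (mesh)"
    return "unknown"
```

<p>Treat it as a heuristic: some binary STLs have headers that begin with "solid" too, which is why the size check exists at all.</p>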
<h2>Editing AI output: the part they skip in the demo</h2>
<p>This is where text-to-CAD gets real, and where the demos stop being helpful. Every demo I've seen shows the prompt going in and the shiny model coming out. Nobody shows the twenty minutes after, when you're trying to figure out why the boss features are 0.3mm off-center, or why the wall is 1.5mm thick on one side and 2.1mm on the other, or why there's a tiny internal face that shouldn't exist but is going to ruin your fillet operation.</p>
<p>Editing AI-generated geometry is different from editing your own work. When you model something yourself, you know the construction logic. You know which sketch drives which feature. You know that the shell came before the boss, and that changing the draft angle will ripple into the parting line. AI-generated models have no such logic. They're just geometry. The "feature tree" in CADAgent output is better than nothing, but it's still not your feature tree.</p>
<p>My usual approach with imported AI geometry is to treat it like I'd treat any dumb import from a client: useful as a reference, not trustworthy as a starting point for parametric design. I'll measure it, verify the critical dimensions, and then decide whether to modify the import directly (adding features, cutting material, repairing faces) or use it as an underlay while I rebuild the part properly.</p>
<p>For simple parts, modifying the import is fast and fine. For anything going to manufacturing with tolerances, I rebuild. Every time. Because I've learned the hard way that "close enough" geometry from any source, AI or otherwise, has a habit of being exactly wrong in the one spot that matters.</p>
<h2>Fitting text-to-CAD into an existing workflow</h2>
<p>The question I keep getting from other CAD users is not "which tool is best" but "where does this fit." And the honest answer is: it fits in the early stages, and it fits for certain kinds of parts, and it does not fit everywhere.</p>
<p>Where text-to-CAD works well right now: quick concept models for review. Early-stage prototyping where you need a physical shape fast and don't care about feature trees. Generating starting geometry for simple parts that would be tedious to model from scratch but don't require precision. Creating reference models to sanity-check proportions before committing to a full parametric build. Batch-generating variations of a basic shape for comparison.</p>
<p>Where it does not work well: anything with tight tolerances. Assemblies. Parts with complex internal features. Anything that needs a clean feature tree for future revision. Geometry that must conform to specific manufacturing constraints like draft angles, parting lines, or minimum bend radii. In other words, most of what professional CAD users spend their time on.</p>
<p>The workflow I've settled into is this: I use text-to-CAD the way I used to use hand sketches on scrap paper. It's for getting the rough shape out of my head and into something I can look at, spin around, and evaluate before I commit to the real modeling work. It's a thinking tool, not a production tool. Not yet.</p>
<p>For OpenSCAD users, the workflow is tighter because the LLM output is code, not geometry. You can read the code, understand it, modify it, version-control it, and parametrize it properly. This is closer to a real production workflow, and it's why I think the <a href="/posts/openscad-ai">OpenSCAD + AI</a> path is underrated.</p>
<h2>What the vendors are doing</h2>
<p>The major CAD vendors are all adding AI features, but most of them are not doing text-to-geometry yet. Autodesk announced Neural CAD at AU 2025, which would generate editable 3D geometry from text prompts inside Fusion. It's in development. Dassault is shipping AI companions (AURA and LEO) in SolidWorks 2026, plus an assembly structure designer that takes text input. PTC has an AI Advisor in Onshape. Siemens has NX AI Chat.</p>
<p>These are mostly copilot-style features: AI that helps you use the existing tools faster, not AI that replaces the tools. The distinction matters. A copilot that suggests the right command when you type "extrude this face 10mm" is useful. It's not the same as generating an entire part from a description. The vendors know this, which is why most of them are taking the cautious path. The startups are the ones trying to skip ahead, and the results are predictably mixed.</p>
<p>I expect the vendor features to matter more in the long run, because they'll be integrated with the existing parametric environment. A Fusion 360 text-to-CAD feature that generates timeline operations, not just geometry, would be genuinely useful. CADAgent is a prototype of what that looks like, and even in its early state, the output is more editable than anything I've gotten from a standalone tool.</p>
<h2>Where this is going</h2>
<p>I'm not going to predict the future, because every time someone in CAD predicts the future they end up looking foolish within eighteen months. But I'll say what I see.</p>
<p>Text-to-CAD right now is where 3D printing was around 2012. The technology works in constrained cases. The output is rough. The tools are young. The hype is significantly ahead of the practical reality. But the direction is clear enough that ignoring it seems unwise.</p>
<p>The tools will get better at understanding engineering intent. The output quality will improve. The integration with existing CAD environments will get tighter. And at some point, probably sooner than I'm comfortable with, the prompt-to-part loop will be fast enough and accurate enough that it changes how people start new designs.</p>
<p>It won't replace knowing how to model. It won't replace understanding manufacturing constraints. It won't replace the judgment calls that make the difference between a part that works on screen and a part that works in the real world. But it will change the first ten minutes of a lot of design sessions, and for a field where the first ten minutes often involve staring at a blank sketch and wishing someone else would draw the boring bit, that's not nothing.</p>
<p>My advice, if you're a working CAD user curious about this: try <a href="https://zoo.dev">Zoo.dev</a> or <a href="https://github.com/er-fo/CADAgent">CADAgent</a> on a part you already know how to model. Don't start with your hardest project. Start with something boring. See how far the tool gets. See what it misses. Fix what it gets wrong. That twenty minutes will teach you more about text-to-CAD than any demo reel or product announcement, and you'll know exactly where it fits in your workflow, if it fits at all.</p>
]]></content:encoded>
    </item>
    <item>
      <title>What text-to-CAD is, and what it isn&apos;t</title>
      <link>https://blog.texocad.ai/posts/what-is-text-to-cad</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/what-is-text-to-cad</guid>
      <pubDate>Fri, 23 Jan 2026 00:00:00 GMT</pubDate>
      <description>Text-to-CAD turns a typed description into actual editable CAD geometry. It&apos;s not text-to-3D, it&apos;s not generative design, and it&apos;s not magic. Here&apos;s what it really is.</description>
      <dc:creator>TexoCAD</dc:creator>

      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD is AI that converts natural language prompts into editable B-Rep CAD models (STEP, SCAD, or native CAD files), not meshes. It produces real parametric geometry you can fillet, chamfer, and dimension, unlike text-to-3D tools that output amorphous mesh blobs.</p>
<p>The first time I tried text-to-CAD, I was sitting at my desk with half a lunch break left and a bracket I didn't feel like modeling by hand. I typed something like "L-bracket with two mounting holes, 3mm thick, 40mm legs" into Zoo's text-to-CAD tool, hit enter, and watched the screen think for about fifteen seconds. What came back was an actual L-bracket. Not a render. Not a concept sketch. An actual solid body with fillets, holes, and a STEP file I could open in Fusion 360. I rotated it, measured it, selected a face, and it behaved like real geometry. I also noticed the holes were slightly too close to the edge for any machinist who values their end mills, but that's a separate conversation.</p>
<p>Text-to-CAD is AI that takes a written description and produces editable CAD geometry from it. Not a mesh. Not a point cloud. Not a pretty picture. Actual B-Rep solid models you can open in real CAD software, dimension, modify, and send to manufacturing.</p>
<p>That one sentence is the whole idea, and also the whole reason it matters.</p>
<h2>B-Rep vs mesh, and why you should care</h2>
<p>If you've spent any time in CAD, you already know the difference between a solid body and a mesh, even if you don't think about it in those terms. A solid body in SolidWorks or Fusion 360 has faces, edges, and vertices that the software understands as geometric entities. You can select a face and extrude it. You can fillet an edge. You can add a hole with a specific diameter and the software knows it's a hole, not just a collection of triangles that happen to form a circle-shaped opening.</p>
<p>That's B-Rep, or Boundary Representation. It's how professional CAD has worked for decades. The geometry is defined by mathematical surfaces and their boundaries, not by a skin of tiny triangles approximating a shape.</p>
<p>A mesh is the other thing. It's what you get from text-to-3D tools, from 3D scanning, from game engines, and from most of the AI-generated 3D content that's been making the rounds on social media. A mesh is a bag of triangles. It can look like a bracket, but it doesn't know it's a bracket. Try to fillet an edge on a mesh import in SolidWorks and you'll get a look from the software that roughly translates to "I don't know what you're talking about."</p>
<p>This is the single most important distinction in the text-to-CAD conversation. When a text-to-CAD tool generates a B-Rep model, you get something you can work with in an engineering context. When a text-to-3D tool generates a mesh, you get something that looks nice in a viewport and becomes a problem the moment you need to do anything useful with it. I've imported enough STL files from various "AI 3D generators" to know how that afternoon goes. You spend more time converting the mesh into a solid than it would have taken to model the part from scratch.</p>
<p>A STEP file from a text-to-CAD tool opens in your CAD software as geometry with selectable faces and real edges. An OBJ file from a text-to-3D tool opens as one fused lump that the feature tree treats like a foreign object. If your work ends at "look at this cool shape," the mesh is fine. If your work continues into tolerancing, manufacturing, assembly, or any revision more sophisticated than rotating the camera, you need B-Rep. For a more detailed breakdown of why this matters, I wrote about it in <a href="/posts/text-to-cad-vs-text-to-3d">text-to-CAD vs text-to-3D</a>.</p>
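<p>The "bag of triangles" point can be made concrete: a mesh doesn't even know whether it encloses a volume until you rebuild that fact yourself by counting edges. In a watertight mesh, every edge is shared by exactly two triangles. A B-Rep kernel carries that topology explicitly; with a mesh, you reconstruct it:</p>

```python
# A mesh is just triangles; even "is this a closed solid?" has to be
# reconstructed by counting edges. Watertight means every edge appears
# in exactly two triangles.
from collections import Counter


def is_watertight(triangles):
    """triangles: list of (i, j, k) vertex-index triples."""
    edges = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted(e))] += 1
    return all(count == 2 for count in edges.values())


# A tetrahedron: four triangles, six edges, each used twice -> closed.
tet = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(is_watertight(tet))       # True
print(is_watertight(tet[:-1]))  # False: the open face leaves boundary edges
```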
<h2>What text-to-CAD is not</h2>
<p>This part matters because the terms are getting mixed up constantly, and the confusion is not accidental. Vendors love blurry categories. Blurry categories let you claim adjacent territory without doing the actual work.</p>
<p>Text-to-CAD is not text-to-3D. Text-to-3D tools like Meshy, Tripo, and the various diffusion-model-based generators produce meshes for games, animation, and concept art. They are not trying to produce engineering geometry. They are not trying to produce files you can machine from. The output looks like a 3D object on screen, which is where the confusion starts, but the internal representation is completely different. Calling a mesh "CAD" is like calling a photograph of a blueprint "engineering documentation." It resembles the thing without being the thing.</p>
<p>Text-to-CAD is not generative design. Generative design, the kind Autodesk and Siemens have been shipping for years, starts with constraints and loads and uses topology optimization to propose organic-looking shapes. It's a different problem. Generative design asks "what shape best satisfies these forces?" Text-to-CAD asks "can you build me the thing I just described?" Generative design outputs tend to look like bones or coral. Text-to-CAD outputs tend to look like the parts a normal engineer would model. I covered the differences more thoroughly in <a href="/posts/text-to-cad-guide">text-to-CAD vs generative design</a>.</p>
<p>Text-to-CAD is not a CAD copilot. The major vendors are all adding AI assistants to their existing tools. SolidWorks has AURA and LEO. Onshape has an AI Advisor. Siemens NX has a Design Copilot. Autodesk is working on an Assistant for Fusion. These operate inside an existing workflow, suggesting commands, answering questions, or automating repetitive tasks. They don't generate geometry from a blank prompt. They're more like a knowledgeable colleague looking over your shoulder than a tool that builds the first version of the part for you.</p>
<p>The differences matter because each of these approaches solves a different problem, and pretending they're all the same thing helps nobody except the people writing press releases.</p>
<h2>How it actually works, briefly</h2>
<p>The technical details of <a href="/posts/how-text-to-cad-works">how text-to-CAD works</a> deserve their own post, but the short version is this: a text-to-CAD system takes your natural language input, interprets it as a sequence of CAD operations, and executes those operations to produce a solid model.</p>
<p>Some tools do this by generating code. Zoo's system, for example, uses a GPU-native geometric kernel called KittyCAD and can produce models through an API that generates geometry server-side. CADAgent, an open-source Fusion 360 add-in released in March 2026, uses an LLM to generate modeling commands that execute directly inside Fusion 360's environment. Other approaches generate OpenSCAD scripts, which are then compiled into geometry.</p>
<p>The academic foundation comes from work like the Text2CAD paper that got a spotlight at NeurIPS 2024. That research introduced an end-to-end framework using a transformer-based network to generate parametric CAD sequences from text, trained on roughly 660,000 annotations mapped to about 170,000 models from the DeepCAD dataset. The commercial tools build on these ideas, though most keep their exact architectures fairly quiet.</p>
<p>What all of these approaches share is the goal of producing geometry as a sequence of operations, not as a prediction of what a surface should look like. That operation-based approach is what makes the output editable. A chamfer generated as a CAD operation is a chamfer you can modify. A chamfer approximated by a mesh is just geometry that happens to look chamfered until you zoom in.</p>
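<p>The operations-as-data idea fits in a few lines. The op names and the volume bookkeeping below are a toy of my own invention, not any tool's internal format, but they show why a replayable sequence is editable in a way a frozen surface isn't: you change one parameter and re-run, instead of pushing triangles around.</p>

```python
# Toy version of operation-based geometry: a part is a replayable list of
# ops. The op names and volume math are illustrative only.
import math


def replay(ops):
    """Replay a toy op list and return the resulting solid volume in mm^3."""
    volume = 0.0
    thickness = 0.0
    for op, params in ops:
        if op == "box":
            l, w, t = params
            volume += l * w * t
            thickness = t
        elif op == "hole":  # through-hole: subtract a cylinder of diameter d
            d = params
            volume -= math.pi * (d / 2) ** 2 * thickness
    return volume


plate = [("box", (100, 60, 5)), ("hole", 6), ("hole", 6)]
v1 = replay(plate)
plate[1] = ("hole", 8)  # the "editable chamfer" move: change one op, replay
v2 = replay(plate)
print(v2 < v1)  # enlarging the hole removed more material
```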
<h2>Where the technology actually stands</h2>
<p>I'll be honest about this part because I think the demos are running ahead of the reality, and that's a familiar pattern in CAD software.</p>
<p>As of early 2026, text-to-CAD works reasonably well for simple to moderately complex prismatic parts. Brackets, enclosures, simple housings, plates with hole patterns, standoffs, basic mechanical components. If you can describe it in a sentence or two and it doesn't involve complex surfacing, organic shapes, or multi-body assemblies, you have a decent shot at getting something useful back. The output usually needs cleanup. Dimensions might be close but not what you specified. Features might be placed approximately. The topology of the model might be suboptimal for later editing. But the starting point is often better than nothing, especially for prototyping or quick-iteration work.</p>
<p>For anything more complex, the technology gets shaky fast. Ask for a gear with specific module and tooth count, a snap-fit enclosure with proper draft angles, or a sheet metal part with bend reliefs, and you'll start seeing the limits. Multi-part assemblies are mostly out of reach. Tolerances are not handled meaningfully. And the tools have no sense of manufacturing process, so they'll happily generate geometry that looks plausible but would make a machinist reach for a chair to sit down in.</p>
<p>The dedicated tools like <a href="https://zoo.dev">Zoo.dev</a>, AdamCAD, and CADAgent are the most honest about what they can do. The major CAD vendors are adding AI features more cautiously, with Autodesk's Neural CAD and Dassault's AURA still largely in development or early rollout. PTC's Onshape AI Advisor is live but focuses on workflow assistance rather than geometry generation from scratch. For a rundown of what's available and how they compare, see <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a>.</p>
<h2>Who this is actually useful for, right now</h2>
<p>If you're a professional engineer working on production parts with real tolerances, text-to-CAD is not replacing your workflow. Not yet. The output isn't precise enough, the feature trees aren't clean enough, and the lack of manufacturing awareness means you'd spend as much time fixing the output as you'd save.</p>
<p>Where it's genuinely useful today is in the early stages of design. Concept exploration, quick prototyping, generating starting geometry that you'll refine by hand, or producing simple parts for non-critical applications like 3D-printed fixtures and jigs. It's also useful for people who know what they want but don't have deep CAD skills. A hardware startup founder who needs a rough enclosure model to discuss with a contract engineer. A maker who wants a mounting bracket without learning Fusion 360 from scratch.</p>
<p>I think of it like a first draft. Nobody publishes a first draft, but a first draft is better than a blank page. Text-to-CAD gives you a first draft of geometry. What you do with it still depends on knowing what good geometry looks like.</p>
<h2>Where this is going</h2>
<p>Text-to-CAD will get better. The training data is improving, the integration with existing CAD tools is getting tighter, and CADAgent's direct Fusion 360 plugin shows the direction of travel. Within a few years, I expect simple parts to be fairly reliable and moderate-complexity parts to work with human review.</p>
<p>But I don't think it replaces knowing how to use CAD any more than autocomplete replaces knowing how to write. The tool generates geometry. Understanding whether that geometry can be manufactured, whether the feature tree will survive the next revision, whether the tolerances make sense, that's still on you. And honestly, that's always been the hard part.</p>
<p>The bracket I generated on my lunch break? I used it as a starting point. Changed the hole positions, added a gusset the AI hadn't thought of, and exported a STEP file to a machinist who didn't know or care where the first version came from. Not magic. Not useless. Somewhere in between, which is where most useful tools live before the marketing catches up to them.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD vs traditional CAD</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-vs-traditional-cad</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-vs-traditional-cad</guid>
      <pubDate>Thu, 22 Jan 2026 00:00:00 GMT</pubDate>
      <description>Traditional CAD makes you build every feature by hand but gives you total control. Text-to-CAD is faster on the first draft but gives you geometry you might not trust. Here&apos;s where each one wins.</description>
      <dc:creator>TexoCAD</dc:creator>

      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD generates geometry from natural language prompts in seconds but with limited control and accuracy. Traditional CAD requires manual feature-by-feature modeling but gives full parametric control, manufacturing precision, and reliable output. Text-to-CAD currently works best as a starting-point tool, not a replacement.</p>
<p>Last Tuesday I modeled a simple mounting bracket in Fusion 360. Two legs, four holes, a couple of fillets, nothing exciting. Took me about twenty minutes, including the two I spent staring at the screen trying to remember if the bolt pattern was 60mm or 65mm. Then I typed roughly the same description into Zoo.dev's text-to-CAD tool and got a bracket back in about thirty seconds. I put the two files side by side on my second monitor, the one with the wobbly stand I keep meaning to replace, and just looked at them.</p>
<p>From three feet away, they looked the same. Up close, they were different objects living in different realities.</p>
<h2>The thirty-second version and the twenty-minute version</h2>
<p>The bracket from Zoo.dev had the right general shape. The legs were close to what I asked for. The holes existed. But the fillet radii were not what I specified, the wall thickness varied slightly between legs for no reason, and the hole spacing was off by about 1.5mm. Not visible on screen. Very visible when you try to bolt it to an aluminum extrusion.</p>
<p>The Fusion 360 bracket was exactly what I drew. Every dimension was what I chose. Every feature was where I put it. The feature tree was clean enough that I could go back and change the leg height without the rest of the model throwing a tantrum. Not because Fusion is perfect, it absolutely isn't, but because I built the model with constraints that I understood and controlled.</p>
<p>That comparison is basically the entire text-to-CAD vs traditional CAD conversation, compressed into one bracket and two cups of coffee.</p>
<h2>Speed vs control, and why you can't have both yet</h2>
<p>Text-to-CAD is fast. Absurdly fast for simple geometry. You type a description, wait a few seconds, and get a solid body. If you just need a rough shape to check proportions or test a fit, that speed is genuinely valuable. I've used it to knock out quick fixture concepts while waiting for a file to export. It fills dead time well.</p>
<p>Traditional CAD is slow by comparison. You sketch, constrain, extrude, add features one at a time, check your work, and occasionally argue with a fillet that has decided today is the day it develops personal standards. But every minute you spend building the model is a minute the model is learning what you actually want. Constraints capture design intent. Parametric dimensions mean you can change your mind later without starting over. The slowness isn't waste. It's information being embedded into the geometry.</p>
<p>Text-to-CAD gives you speed but skips the intent. You get a shape that approximates your description. What you don't get is the reasoning behind the shape. There are no sketch constraints linking the holes to the edges. No equations tying one dimension to another. No relationships saying "this hole pattern is symmetric about this axis" or "this wall is always 1.5 times the fillet radius." The geometry just is, and if you need to change it, you're either re-prompting from scratch or rebuilding in traditional CAD anyway.</p>
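<p>To make "intent" concrete, here's a toy sketch of the kind of relationship a parametric model captures and a raw imported shape doesn't. The names are invented for illustration; this isn't any real CAD API, just the idea of a driving dimension with derived ones that follow it:</p>

```python
# Hypothetical sketch of design intent as parameters: change one driving
# dimension and the derived ones update. Illustrative names only.
from dataclasses import dataclass

@dataclass
class BracketParams:
    fillet_radius: float           # mm, the driving dimension
    hole_edge_offset: float = 8.0  # mm, holes stay tied to the edge

    @property
    def wall_thickness(self) -> float:
        # The captured relationship: wall is always 1.5x the fillet radius
        return 1.5 * self.fillet_radius

p = BracketParams(fillet_radius=2.0)
print(p.wall_thickness)  # 3.0
p.fillet_radius = 4.0    # change your mind later...
print(p.wall_thickness)  # 6.0 -- the wall follows, no rebuild
```

<p>A dumb solid has neither the property nor the rule. You get the 3.0, but not the 1.5x.</p>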
<p>If you want the longer explanation of how text-to-CAD generates geometry and why the output looks the way it does, the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the technology and tools in detail.</p>
<h2>The feature tree problem</h2>
<p>This is where the gap between the two approaches really shows. In traditional CAD, the feature tree is the model. It's the history of decisions you made, in order, with each feature depending on the ones before it. A well-built feature tree is like a recipe: you can adjust one ingredient and the rest adapts. A badly built one is a hostage negotiation, but at least it's your hostage negotiation. You know where the bodies are buried.</p>
<p>Most text-to-CAD output has no usable feature tree. Zoo.dev gives you a STEP file, which imports into Fusion 360 as a dumb solid. No features. No history. No parameters you can tweak. It's geometry, not a model. CADAgent, which runs inside Fusion 360 directly, is better here because it generates actual modeling operations that show up in the timeline. But even then, the feature tree it builds is rarely how a human would have structured it. The dependencies are fragile. Change one sketch and you're more likely to get a cascade of errors than a clean update.</p>
<p>For a part you're going to build once and never touch again, this doesn't matter much. For anything that lives in a project with revisions, where a client changes the bolt spacing or the enclosure needs to be 5mm taller six weeks later, a model without a clean feature tree is a model you'll end up rebuilding.</p>
<h2>Accuracy and manufacturing readiness</h2>
<p>I'll be direct about this because it's the part that separates "cool demo" from "useful tool."</p>
<p>Traditional CAD gives you the dimensions you put in. If you draw a 4.2mm hole, you get a 4.2mm hole. You can add tolerances, specify fits, annotate features for GD&#x26;T, and export drawings that a machine shop can actually quote from. The geometry is exactly as precise as you make it. No surprises, unless you made a mistake, which is a different problem and at least one that's your fault.</p>
<p>Text-to-CAD gives you dimensions that are approximately what you asked for. Sometimes very close. Sometimes not. I've had holes come back 0.3mm off from what I specified, which is fine for a 3D print and completely unacceptable for a press-fit bushing. There's no tolerance handling. No GD&#x26;T awareness. No understanding of whether a feature is functional or decorative. The AI treats a clearance hole and a press-fit bore the same way, which is to say, it treats them both as circles with a number attached.</p>
<p>Sheet metal is another place where text-to-CAD falls apart. A proper sheet metal part in SolidWorks has bend radii, K-factors, relief cuts, and a flat pattern that actually unfolds correctly. Text-to-CAD will give you a shape that looks bent, but ask for the flat pattern and you'll get a blank stare from the software, because the model was never designed with bending in mind.</p>
<p>If you want to understand what <a href="/posts/what-is-text-to-cad">text-to-CAD</a> can and can't produce right now, the accuracy and format issues are covered there.</p>
<h2>Where text-to-CAD wins</h2>
<p>It's not all bad news. There are real situations where text-to-CAD saves time, and pretending otherwise would be dishonest.</p>
<p>Concept exploration is the obvious one. When you're early in a design and need to see ten different bracket configurations before committing to one, typing ten prompts is faster than building ten models. The output isn't production-ready, but it doesn't need to be. You're checking proportions, testing ideas, seeing if a shape even makes sense before investing the time to model it properly. I've used this to settle arguments with a colleague about whether a particular mounting approach would even fit in the available space. Beat sketching it on a napkin.</p>
<p>Quick prototyping for <a href="/posts/text-to-cad-for-3d-printing">3D printing</a> is another strong case. Additive manufacturing is tolerant of imperfect geometry. If the mesh is watertight and the dimensions are in the right neighborhood, you can print it, hold it in your hand, and decide if the concept is worth refining. I've printed text-to-CAD brackets to test fit against real hardware, and for that purpose they're perfectly fine.</p>
<p>First-draft geometry also has value. Starting from an 80% shape and fixing it is sometimes faster than starting from a blank sketch, especially for simple parts. Not always. But for a mounting plate with a hole pattern, or a basic electronics enclosure, getting the starting point for free is worth something.</p>
<p>People who aren't full-time CAD users benefit the most. A hardware startup founder who needs a rough model to discuss with a manufacturer. A hobbyist who wants a bracket but doesn't want to learn parametric modeling. A mechanical engineer who needs a quick concept model for a design review but doesn't want to spend an hour in Creo for a throwaway part. Text-to-CAD lowers the floor, and that matters.</p>
<h2>Where traditional CAD wins</h2>
<p>Everything else. Production parts. Toleranced designs. Assemblies where parts need to fit together with real constraints. Sheet metal. Injection-molded parts with draft angles and gate locations. Weldments. Anything where a machinist, mold maker, or inspector needs to trust the geometry.</p>
<p>Traditional CAD also wins on revision. A parametric model built with proper intent survives changes. A text-to-CAD output doesn't, because there's no intent encoded in it. Change one thing and you're starting the conversation over. In a project with three rounds of client revisions and a last-minute change to the mounting interface, that difference is worth hours.</p>
<p>Complex assemblies are completely out of reach for text-to-CAD right now. Mating conditions, interference checks, motion studies, assembly-level configurations: none of this exists in the text-to-CAD world. If your part lives alone, fine. If it lives in an assembly with forty other components and needs to play nice with all of them, you're in traditional CAD territory, no discussion.</p>
<p>The question of <a href="/posts/will-ai-replace-cad-designers">whether AI will replace CAD designers</a> gets asked a lot. The short answer is: not with the current technology, and not for the work that actually requires engineering judgment.</p>
<h2>The hybrid workflow that actually makes sense</h2>
<p>The interesting answer isn't "text-to-CAD or traditional CAD." It's both, used for what each does well.</p>
<p>My current approach for appropriate projects: generate a first draft with text-to-CAD, import the STEP file into Fusion 360, and then model it properly. Rebuild the feature tree, fix the dimensions, add the constraints and relationships the AI couldn't know about. The text-to-CAD output serves as a 3D reference, like tracing over a rough sketch. Sometimes it saves ten minutes. Sometimes it saves nothing because the output was too far off to be useful. I don't force it.</p>
<p>The more mature version of this workflow, the one I think we'll see in a couple of years, uses text-to-CAD for initial geometry and AI assistants inside traditional CAD tools for the refinement. Autodesk is heading in this direction with Neural CAD. Dassault's AURA and LEO features in SolidWorks 2026 are pointing the same way. The <a href="/posts/ai-cad-workflow">AI CAD workflow</a> is less about replacing the traditional process and more about accelerating the boring parts of it.</p>
<p>For now, text-to-CAD is a fast, imprecise first pass. Traditional CAD is the reliable, slow, detailed finish. Neither one is going away. The engineers who'll be most productive are the ones who stop treating it as a competition and start treating it as a toolbox with more than one tool in it. Though I'll admit, when my Fusion 360 bracket came out perfect on the first try and the AI bracket needed three rounds of edits, the old-fashioned way did feel a little smug.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD vs text-to-3D: why the difference matters</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-vs-text-to-3d</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-vs-text-to-3d</guid>
      <pubDate>Wed, 21 Jan 2026 00:00:00 GMT</pubDate>
      <description>One gives you editable engineering geometry. The other gives you a bag of triangles. The distinction matters more than most people realize.</description>
      <dc:creator>TexoCAD</dc:creator>

      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD generates editable B-Rep geometry (STEP files with real edges, faces, and feature history) while text-to-3D generates mesh models (STL/OBJ triangle soups). CAD output can be filleted, dimensioned, and manufactured; mesh output typically cannot without extensive rework.</p>
<p>Last Tuesday I watched a colleague import an AI-generated mesh into SolidWorks with the confidence of someone who has never been hurt by an STL file. He'd typed a prompt into one of those text-to-3D tools, something about a sensor housing with snap fits, and the preview looked great. Clean edges on screen. He dragged the OBJ into an assembly, and within about three seconds the mood shifted. SolidWorks treated the import like a lump of concrete dropped onto the feature tree. No selectable faces. No real edges. Just a dense, triangulated blob refusing to participate in engineering. He tried to add a fillet to the snap-fit lip and got an error that basically said "I don't know what you're pointing at." I sipped my lukewarm coffee and said nothing, which is my version of sympathy.</p>
<p>That moment captures the entire text-to-CAD vs text-to-3D distinction better than any diagram could. Both approaches start with a text prompt and end with a 3D shape on your screen. But what's inside that shape, and what you can do with it afterward, is completely different. If your work ends at "rotate it in a viewport," you might never notice. If your work continues into dimensioning, manufacturing, or the next revision, you'll notice immediately, and it will cost you an afternoon.</p>
<h2>The geometry under the hood</h2>
<p>A text-to-CAD tool produces B-Rep geometry. B-Rep stands for Boundary Representation, and it's the same kind of math that SolidWorks, Fusion 360, NX, and every other serious CAD program uses internally. A B-Rep solid has faces defined by mathematical surfaces, edges defined by the intersection of those surfaces, and vertices where edges meet. Select a face and the software knows it's a face. Ask for a fillet on an edge and the software knows which edge you mean and how to roll a radius along it. You can measure exact distances, compute real volumes, cut pockets, and export a STEP file that a machinist can work from without asking what went wrong.</p>
<p>A text-to-3D tool produces a mesh. A mesh is a skin of triangles draped over an approximated shape. It can look identical to B-Rep geometry in a viewport, the way a photograph of a sandwich looks identical to a sandwich until you try to eat it. But there's no face information, no edge topology, no mathematical surface definition. Just triangles, thousands or millions of them, stitched together into something that vaguely resembles geometry. Import that into SolidWorks and the software sees one monolithic body made of facets. You can rotate it. You cannot fillet it, chamfer it, or dimension a hole diameter.</p>
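<p>You can put a number on "approximation," too. When a mesh replaces a circular hole with an inscribed N-gon, the worst-case gap between the true circle and the flat chords is the sagitta, r * (1 - cos(pi/N)). A quick back-of-the-envelope sketch, plain Python, no CAD library involved:</p>

```python
import math

def chordal_deviation(radius_mm: float, segments: int) -> float:
    """Max gap (sagitta) between a true circle and its inscribed N-gon."""
    return radius_mm * (1 - math.cos(math.pi / segments))

# A 5 mm radius hole tessellated with 24 segments, a common mesh default:
dev = chordal_deviation(5.0, 24)
print(f"{dev:.4f} mm")  # ~0.0428 mm -- invisible on screen, real in a press fit
```

<p>More segments shrink the error but never eliminate it; a B-Rep cylinder has no error to shrink.</p>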
<p>If you want a deeper explanation of <a href="/posts/what-is-text-to-cad">what text-to-CAD actually is</a> and how it produces editable models, I've covered that separately. The short version: text-to-CAD tools generate sequences of CAD operations (sketch, extrude, fillet, chamfer) that produce real parametric solids. Text-to-3D tools predict what a surface should look like and approximate it with triangles. Same screen, very different file.</p>
<h2>What you can actually do with each</h2>
<p>With B-Rep output from a text-to-CAD tool, you can do everything you'd do with a model you built by hand. Select a face and offset it. Add a counterbore. Shell the part. Apply draft angles for molding. Measure true diameters and wall thicknesses. Export to STEP and hand it to a machine shop. Generate a drawing with real dimensions. Roll back the feature tree and edit an earlier sketch. The geometry behaves like CAD geometry because it is CAD geometry.</p>
<p>With mesh output from a text-to-3D tool, you can render it, 3D print it (if the mesh is watertight, which isn't guaranteed), and look at it. That's roughly where the useful list ends. You can try to convert a mesh to B-Rep using tools like SolidWorks' mesh-to-solid or Fusion 360's mesh workspace, and I have tried this many times, usually on a Friday afternoon when my judgment is weakest. The conversion is lossy, slow, and produces geometry that looks hungover. Faces that should be planar come back warped. Cylindrical holes become polygonal approximations. You spend longer cleaning up the converted geometry than you would have spent modeling the part from scratch. The <a href="/posts/brep-vs-mesh-ai-generation">B-Rep vs mesh comparison</a> covers the technical breakdown, but the practical summary is: mesh-to-solid conversion is a last resort, not a workflow.</p>
<h2>The file format tells you everything</h2>
<p>This is the fastest way to figure out whether a tool is doing text-to-CAD or text-to-3D with better branding. Look at the output format.</p>
<p>STEP (ISO 10303) is the standard exchange format for B-Rep geometry. If a tool outputs STEP, you're getting real engineering geometry that opens in any professional CAD program with selectable faces and real edges. STEP is what machine shops expect. It's what mold makers expect. It's what your downstream workflow needs.</p>
<p>STL is triangulated mesh. It's the lingua franca of 3D printing, and that's about where its usefulness ends for engineering work. You cannot dimension an STL. You cannot fillet an STL edge. You cannot generate a manufacturing drawing from an STL. A tool that only outputs STL is a text-to-mesh tool, regardless of what it calls itself.</p>
<p>OBJ, FBX, glTF, PLY are all mesh or visualization formats. They exist for rendering, games, and animation. They are not engineering formats.</p>
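<p>If you're ever unsure what a tool actually handed you, the first bytes of the file usually settle it: a STEP Part 21 file opens with the literal header <code>ISO-10303-21;</code>, an ASCII STL with the word <code>solid</code>, and OBJ files are line-based text records. A rough sniffing heuristic, not a parser (binary STL, with its 80-byte arbitrary header, needs more care than this):</p>

```python
def sniff_geometry_format(first_bytes: bytes) -> str:
    """Rough format guess from a file's opening bytes. Heuristic sketch only."""
    head = first_bytes.lstrip()
    if head.startswith(b"ISO-10303-21;"):
        return "STEP (B-Rep, editable engineering geometry)"
    if head.startswith(b"solid"):
        return "ASCII STL (triangle mesh)"
    if head.startswith((b"v ", b"# ", b"o ")):
        return "likely OBJ (triangle mesh)"
    return "unknown / possibly binary mesh"

print(sniff_geometry_format(b"ISO-10303-21;\nHEADER;"))
# -> STEP (B-Rep, editable engineering geometry)
```
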
<p>OpenSCAD's .scad format is an interesting case because it's code that compiles into geometry. An AI generating an OpenSCAD script is technically generating a parametric model, which is closer to the CAD side. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the full range of output formats and what they're each good for.</p>
<p>The file format question isn't pedantic. A colleague once sent me a "CAD model" that turned out to be a high-resolution OBJ from a text-to-3D tool. It had 400,000 triangles and looked gorgeous in the viewport. It was completely useless for the tolerance analysis I needed to do. Forty minutes trying to extract a clean diameter measurement from what was essentially a polygon sculpture. That's time I'm not getting back.</p>
<h2>The tools on each side</h2>
<p>The text-to-CAD side is smaller and more serious. <a href="https://zoo.dev">Zoo.dev</a> is the most established dedicated tool, running on a GPU-native kernel called KittyCAD, outputting real B-Rep geometry in STEP and other formats. AdamCAD generates parametric models with adjustable dimension sliders. CADAgent is an open-source Fusion 360 add-in that generates models inside Fusion's own environment, which means the output has actual feature history. These tools are trying to solve the hard problem: generating geometry that engineers can work with. The <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> post covers each one in more detail.</p>
<p>The text-to-3D side is larger and louder. Meshy, Tripo, DreamFusion, Point-E, Magic3D. These tools produce impressive visual 3D content: game assets, concept visualization, character models, product renders. The meshes can look stunning. They're also engineering dead ends. Nobody is going to CNC machine a Meshy output. That's not a criticism of those tools. They're doing exactly what they're designed to do. The problem is when someone mistakes what they do for what CAD needs to do.</p>
<h2>When mesh is fine</h2>
<p>I don't want to give the impression that mesh output is always wrong. It's not. Mesh is the right format for plenty of real work.</p>
<p>Game development lives on meshes. Every character, environment, and prop in a modern game is mesh geometry, and text-to-3D tools are genuinely useful for accelerating asset creation. Architectural visualization, product renders, VFX, concept art, 3D-printed figurines, physical prototypes where exact dimensions don't matter much. All fine with mesh output. The geometry doesn't need to be editable in a parametric sense. It just needs to look right and, in the case of printing, be watertight.</p>
<p>The trouble starts when someone tries to cross the boundary. When a mesh that was fine for rendering gets handed to an engineer who needs to add mounting features. When a 3D-printed prototype that "worked fine" needs to become an injection-molded production part. When the cool-looking housing from the text-to-3D demo needs real snap fits, real wall thicknesses, and real draft angles. That's where mesh output hits a wall, and the wall is made of manufacturing reality.</p>
<h2>When you need CAD</h2>
<p>If the geometry is going to be manufactured with any process that cares about precision (CNC, injection molding, sheet metal, casting), you need B-Rep. Period. Machinists work from STEP files. Mold designers work from STEP files. Nobody in manufacturing is working from OBJ files, and if someone tells you they are, check whether they're actually in manufacturing.</p>
<p>If the part needs to mate with other parts, you need B-Rep. Assembly relationships, interference checks, and tolerance stack-ups require real geometric data.</p>
<p>If the part will go through revisions (and every real part goes through revisions), you need geometry you can edit without rebuilding from zero. A B-Rep model with a proper feature tree can survive a dimension change. A mesh cannot. You change one dimension on a mesh by rerunning the prompt and hoping the AI gives you something close to what you had before, which it usually doesn't.</p>
<p>The <a href="/posts/text-to-cad-file-formats">text-to-CAD file formats</a> post goes deeper on which formats support which workflows, but the decision tree is short: if the part needs to be manufactured, edited, or mated with precision, you need text-to-CAD output. If the part needs to look good on a screen, text-to-3D is fine.</p>
<h2>The marketing blur</h2>
<p>The thing that irritates me most about this space is how deliberately the line gets blurred. Every week there's another announcement about an AI tool that "generates 3D models from text," and every week I check the output format and find OBJ or FBX. The demos look amazing. And then you try to open the result in Fusion 360 and you're back to staring at a triangle soup wondering why you trusted a demo.</p>
<p>Calling mesh output "CAD" is like calling a photograph of a floor plan "architecture." It resembles the thing. It is not the thing.</p>
<p>If you're evaluating any text-to-geometry tool, the first question to ask is: "what format is the output, and can I select a face in my CAD software?" If the answer is yes, you're looking at text-to-CAD. If the answer is "it exports to OBJ and STL," you're looking at text-to-3D, no matter what the landing page says.</p>
<p>I keep a STEP file and an OBJ file of roughly the same bracket on my desktop. Same shape, same proportions. One came from Zoo.dev, the other from a text-to-3D tool. The STEP file is 47 kilobytes. The OBJ is 3.2 megabytes. One opens with a dozen selectable faces and clean, measurable edges. The other opens as a single imported body with 68,000 triangles. Same bracket. Two completely different futures. The small, boring file is the one I can actually build something from.</p>
<p>That's the whole argument, really. Text-to-CAD gives you geometry with a future. Text-to-3D gives you geometry with a viewport. Both have their place, but if you confuse the two, you'll find out the hard way, usually on a Tuesday afternoon with a deadline and a mesh that won't cooperate.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD meaning: plain English, no marketing</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-meaning</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-meaning</guid>
      <pubDate>Tue, 20 Jan 2026 00:00:00 GMT</pubDate>
      <description>Text-to-CAD means typing a description and getting actual editable CAD geometry back. Not a render. Not a mesh. Real geometry you can fillet, dimension, and send to a machine shop.</description>
      <dc:creator>TexoCAD</dc:creator>

      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD means using AI to convert a natural language description into editable CAD geometry, typically B-Rep solids output as STEP or native CAD files. Unlike text-to-3D (which outputs meshes), text-to-CAD produces engineering-grade geometry with real edges, faces, and parametric features.</p>
<p>Somebody in a product review last month asked, mid-sentence, "What does text-to-CAD even mean?" and the room split into two camps. One side started explaining language models. The other side started talking about STEP files. Both were technically right and neither was helping. The product manager's eyes glazed over around the thirty-second mark. I know that look. I've caused it before, usually while trying to explain why STL is not the same as STEP to someone who just wants a bracket.</p>
<p>So here's the plain-language answer, the one I wish someone had given in that meeting while my coffee was still warm.</p>
<h2>The short version</h2>
<p>Text-to-CAD means you type a description of a part, in regular words, and an AI generates real, editable CAD geometry from it.</p>
<p>Not a render. Not a mesh for a video game. Not a concept image. Actual solid geometry with faces, edges, and features that you can open in professional CAD software, measure, modify, and send to manufacturing. The kind of geometry an engineer would build by hand with sketches, extrudes, and fillets, except the AI builds the first version for you based on what you wrote.</p>
<p>That's it. That is the entire meaning. Everything else is details, and the details matter, but the core idea fits in one sentence.</p>
<h2>The "text" part</h2>
<p>The "text" in text-to-CAD is just a natural language prompt. You write what you want in English (or whatever language the tool supports) and the system interprets it.</p>
<p>This can be as simple as "L-bracket with two mounting holes" or as specific as "rectangular enclosure, 120mm by 80mm by 40mm, 2mm wall thickness, four M3 mounting holes on a 100mm by 60mm bolt pattern, with a snap-fit lid." The more specific you are, the closer the output lands to what you actually need. Vague prompts produce vague parts. I learned this the hard way after describing a "small housing" and getting back something that could have been a phone case or a coffin for a hamster.</p>
<p>The prompting part is surprisingly important. It's less like chatting with an assistant and more like writing a work order for a junior colleague who's skilled but has no context. You have to specify dimensions, material thickness, feature placement, and constraints, or the AI fills in the blanks with its own guesses. Sometimes those guesses are reasonable. Sometimes they're the kind of thing that makes a machinist close their eyes and breathe slowly. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers this in more detail.</p>
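<p>One way to see why the specific enclosure prompt works better: it pins down every number the AI would otherwise guess. Written out as explicit parameters, purely illustrative and with no real text-to-CAD API involved, the prompt already contains its own hole positions:</p>

```python
# The specific enclosure prompt, restated as explicit parameters.
# Illustrative sketch only; nothing here calls a real tool.
length, width, height = 120.0, 80.0, 40.0  # mm
wall = 2.0                                 # mm
bolt_pattern = (100.0, 60.0)               # mm, hole-center spacing

# Four M3 holes, symmetric about the enclosure center:
cx, cy = length / 2, width / 2
dx, dy = bolt_pattern[0] / 2, bolt_pattern[1] / 2
holes = [(cx + sx * dx, cy + sy * dy) for sx in (-1, 1) for sy in (-1, 1)]
print(holes)  # [(10.0, 10.0), (10.0, 70.0), (110.0, 10.0), (110.0, 70.0)]
```

<p>"Small housing" leaves all six of those numbers to the model's imagination. The specific prompt leaves it none.</p>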
<h2>The "CAD" part (this is where it matters)</h2>
<p>The "CAD" in text-to-CAD is the part most people gloss over, and it's the part that actually determines whether the output is useful or decorative.</p>
<p>CAD geometry, properly speaking, means B-Rep: Boundary Representation. It's the mathematical description of a solid using surfaces, edges, and vertices that CAD software understands as real geometric entities. When you select a face in SolidWorks and extrude it, that's B-Rep at work. When you fillet an edge in Fusion 360, the software knows what an edge is because the model is B-Rep.</p>
<p>A mesh is the other thing. A mesh is a bag of triangles that approximate a shape. Meshes are what game engines use, what 3D scanners produce, and what most "AI 3D" tools on social media generate. A mesh can look exactly like a bracket, but try to select one face and add a chamfer and you'll discover it's not a bracket in any engineering sense. It's a triangle sculpture of a bracket. Pretty to look at, useless to machine from.</p>
<p>Text-to-CAD tools produce B-Rep. The output is typically a STEP file (the standard exchange format for solid geometry) or native CAD files that open in professional software with real feature trees. You can dimension it. You can tolerance it. You can export a drawing from it. You can hand it to a CNC programmer without an apology.</p>
<p>Text-to-3D tools produce meshes. OBJ files, FBX files, STL at best. These are fine for rendering, animation, and 3D printing if you don't care too much about precision. They are not fine for engineering. The <a href="/posts/text-to-cad-vs-text-to-3d">text-to-CAD vs text-to-3D comparison</a> goes deeper on this, but the short version is: if the output is a mesh, it's not CAD. Full stop.</p>
<h2>What text-to-CAD produces</h2>
<p>When a text-to-CAD tool works correctly, you get something like this:</p>
<ul>
<li>A solid body you can open in Fusion 360, SolidWorks, or any CAD software that reads STEP</li>
<li>Selectable faces and edges</li>
<li>Measurable dimensions (sometimes accurate, sometimes approximate)</li>
<li>Geometry you can modify: add fillets, cut pockets, move holes, change thicknesses</li>
<li>A file that's one edit session away from being manufacturing-ready, rather than a complete rebuild away</li>
</ul>
<p>For simple parts, brackets, plates, basic enclosures, standoffs, the results can be genuinely useful as starting geometry. You'll still check every dimension and probably fix a few features, but you're editing a part instead of building one from nothing. For a fuller picture of how this fits into real work, the <a href="/posts/what-is-text-to-cad">what is text-to-CAD</a> post covers the practical side.</p>
<h2>What text-to-CAD does not mean</h2>
<p>This part needs saying because the terms are getting blurred constantly, mostly by people selling things.</p>
<p>Text-to-CAD is not rendering. It does not generate images of parts. It generates the parts themselves, as editable 3D solids.</p>
<p>Text-to-CAD is not text-to-3D. Text-to-3D tools like Meshy and Tripo produce meshes for games and concept art. They look impressive in a demo reel. They are useless in a tolerance stack. Different technology, different output, different purpose.</p>
<p>Text-to-CAD is not generative design. Generative design starts with loads and constraints and uses topology optimization to suggest organic shapes. It answers "what shape can handle these forces?" Text-to-CAD answers "build me the thing I described." One looks like coral. The other looks like a normal part. Both have their place, but they're solving different problems.</p>
<p>Text-to-CAD is not a chatbot inside your CAD software. The major vendors are all adding AI assistants that help you use existing tools faster. That's useful, but it's workflow automation, not geometry generation from a blank prompt. The <a href="/posts/how-text-to-cad-works">how text-to-CAD works</a> post breaks down the technical distinctions.</p>
<h2>Why any of this matters</h2>
<p>Because the words people use shape what they expect, and mismatched expectations waste everyone's time. I've watched people get excited about "AI-generated CAD" after seeing a mesh demo, only to be disappointed when the output can't be edited. I've also watched people dismiss text-to-CAD as hype after trying a text-to-3D tool and getting a mesh blob, not realizing that the actual text-to-CAD tools produce something fundamentally different.</p>
<p>Text-to-CAD means AI that produces real CAD geometry from words. The "real CAD geometry" part is load-bearing. Without it, you just have another mesh generator with a better name. With it, you have something that lets you skip from a written description to an editable solid without touching a sketch tool. Whether that solid is good enough for your particular job is a separate question, but the meaning itself is simple.</p>
<p>Type what you want. Get geometry you can actually engineer with. That's text-to-CAD.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Text-to-CAD: the working engineer&apos;s guide</title>
      <link>https://blog.texocad.ai/posts/text-to-cad-guide</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/text-to-cad-guide</guid>
      <pubDate>Mon, 19 Jan 2026 00:00:00 GMT</pubDate>
      <description>I&apos;ve been testing text-to-CAD tools for months now. Some of them generate real B-Rep geometry you can actually edit. Most of them don&apos;t. Here&apos;s what works, what&apos;s hype, and what matters if you do real engineering work.</description>
      <dc:creator>TexoCAD</dc:creator>

      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD is a category of AI tools that generate editable CAD models, typically B-Rep geometry with feature trees, from natural language text prompts. Unlike text-to-3D tools that output meshes, text-to-CAD produces STEP, SCAD, or native parametric files you can open, edit, and manufacture from in professional CAD software.</p>
<p>The first time I asked an AI to make me a flanged bracket, it came back looking almost right. Good proportions. Mounting holes in reasonable places. Fillets that seemed intentional. I sat there with my second coffee going cold, feeling a little unsettled, like the software had read my mind. Then I opened the file in Fusion 360 and the whole illusion collapsed. No feature tree. No parametric dimensions. Just a mesh blob wearing a convincing disguise. I couldn't edit a single hole diameter without rebuilding the entire thing from scratch.</p>
<p>That was about a year ago, and it taught me something useful about this whole text-to-CAD space: the distance between "looks like a CAD model" and "is a CAD model" is enormous. Text-to-CAD is AI that generates real, editable CAD geometry from a text description. Not meshes. Not point clouds. Not pretty screenshots. Actual B-Rep solids with faces, edges, and feature history that you can fillet, chamfer, dimension, and hand to a machine shop without apologizing. That distinction matters more than anything else in this conversation, and most of the marketing around AI-generated 3D conveniently forgets to mention it.</p>
<p>I've spent months testing the tools, reading the papers, importing the outputs, and trying to make parts from the results. This is what I know so far.</p>
<h2>What text-to-CAD actually is</h2>
<p>Text-to-CAD means you type a description of a part and the AI generates parametric CAD geometry. The output should be a B-Rep (Boundary Representation) solid, the same kind of geometry you'd get from sketching and extruding in SolidWorks or Fusion 360. Proper B-Rep means real faces, real edges, real topology. You can select a face, apply a fillet, measure a distance, cut a pocket, or export a STEP file that a CNC programmer won't throw back at you.</p>
<p>This is fundamentally different from <a href="/posts/text-to-cad-vs-text-to-3d">text-to-3D</a>, which typically outputs triangle meshes, the kind of geometry used for game assets, rendering, and visual effects. A mesh from Meshy or Tripo might look like a bracket, but try selecting one face to add a chamfer and you'll understand why mesh geometry and engineering geometry are different species. I've covered the technical distinction in more detail in the <a href="/posts/text-to-cad-vs-text-to-3d">text-to-CAD vs text-to-3D comparison</a>, but the short version is: meshes approximate shape, B-Rep defines it. Manufacturing cares about the difference even when marketing doesn't.</p>
<p>If you want the full concept explained from scratch, the <a href="/posts/what-is-text-to-cad">what is text-to-CAD</a> post covers it without assuming you already know what B-Rep means.</p>
<h2>How the technology works under the hood</h2>
<p>The academic foundation for this stuff comes from a NeurIPS 2024 spotlight paper called "Text2CAD," which introduced the first end-to-end framework for generating parametric CAD models from natural language. The researchers used roughly 660,000 text annotations on the DeepCAD dataset of about 170,000 models. The architecture is a transformer-based autoregressive network: a BERT encoder processes the text prompt, and a CAD sequence decoder generates a series of modeling operations (sketches and extrusions, in the paper's case) rather than raw coordinates.</p>
<p>That sequence-of-operations approach is what makes this different from image generation or mesh generation. The AI isn't predicting pixels or triangles. It's predicting CAD commands. In theory, the output is a feature tree you can roll back, edit, and rebuild, the same workflow a human CAD user would follow.</p>
<p>In practice, the commercial tools are still catching up to the research. Most of the available tools use some variation of this approach: take a text prompt, map it to a sequence of CAD operations, execute those operations in a geometric kernel, and output a solid. Some tools generate code (OpenSCAD scripts, Python scripts for FreeCAD, AutoLISP for AutoCAD). Others generate operations directly inside a proprietary kernel. The <a href="/posts/how-text-to-cad-works">how text-to-CAD works</a> post goes deeper on the architecture and the different implementation strategies, but the key thing to understand is that the quality of the output depends heavily on how well the AI learned the grammar of CAD operations, not just the shape of the result.</p>
<h2>The tools that exist right now</h2>
<p>I'll be blunt: the field is early. Most of these tools are useful for specific things and terrible at others. None of them will replace a competent CAD user on a real project. Some of them can save you time on simple geometry if you know their limits. The <a href="/posts/best-text-to-cad-tools">full tool comparison</a> covers each one in more detail, but here's an honest overview of where things stand.</p>
<p>Zoo.dev is the most serious dedicated text-to-CAD tool right now. It runs on a GPU-native geometric kernel called KittyCAD and outputs real B-Rep geometry in STEP, glTF, OBJ, STL, and several other formats. The API is well-documented, there's a Python SDK, and the free tier is generous enough to actually test things. I've gotten decent results for simple mechanical parts: brackets, enclosures, standoffs, basic housings. When it works, the output is a real solid you can import into Fusion 360 or SolidWorks and actually edit. When it doesn't work, you get geometry that looks plausible but has weird internal faces, missing fillets, or dimensions that suggest the AI was guessing. More on that in the accuracy section.</p>
<p>AdamCAD generates parametric models with adjustable dimension sliders, which is a smart approach. You get an STL plus controls to tweak dimensions after generation, starting at $5.99 a month. It's faster than Zoo for quick prototyping, but the parametric controls are limited compared to a real feature tree in SolidWorks.</p>
<p>CADGPT is misnamed, in my opinion, because it doesn't actually generate CAD models. It writes automation scripts: AutoLISP for AutoCAD, Python for other tools. That's a different job. Useful if you want AI-assisted scripting, but it's not text-to-CAD in the way the name implies.</p>
<p>CADScribe generates STL and STEP files and has been tested by Xometry and All3DP. The results are mixed on anything beyond simple geometry. For a basic box with mounting holes, fine. For anything with real constraints (fillets meeting at compound angles, shell features, or draft), you'll be rebuilding most of it.</p>
<p>CADAgent is the most interesting newcomer. It's an open-source Fusion 360 add-in released in March 2026 that generates models in real time inside Fusion 360 itself. You bring your own Anthropic API key. The fact that it operates inside a real parametric CAD environment means the output has actual feature history, which is what separates this from tools that generate geometry in isolation and hope for the best. I've been testing it and the results are promising for simple to moderate parts.</p>
<p>Vondy's AI CAD Generator outputs DXF files and is aimed at beginners. HP's AI Text to 3D is focused on their 3D printing ecosystem. Both are limited to simpler geometry.</p>
<h2>What the big CAD vendors are doing</h2>
<p>The dedicated text-to-CAD startups aren't the only story. The major vendors are all bolting AI onto their existing platforms, with varying levels of ambition and varying levels of actually shipping anything.</p>
<p>Autodesk announced Neural CAD at AU 2025, which aims to generate editable 3D geometry from text prompts directly inside Fusion. They've also shown Text to Command ("extrude this face by 1 inch") and an Autodesk Assistant feature. As of Q1 2026, most of this is still in development or exploratory. I've seen the demos. They look good. Demos always look good. I'll judge it when I can break it in production.</p>
<p>Dassault Systèmes is shipping AI features in SolidWorks 2026 under the names AURA and LEO. There's an Assembly Structure Designer that takes text prompts, Design Inspection using natural language queries, and automated drawing generation. Most features are scheduled to ship by July 2026. The assembly structure approach is interesting because assemblies are where CAD workflows actually get complicated. Single-part modeling is the easy problem.</p>
<p>PTC's Onshape has an AI Advisor that went live in October 2025, offering real-time guidance while you model. They also have LLM-powered FeatureScript autocomplete, which is clever because FeatureScript is already a programming language, making it a natural fit for language models. Creo has generative design and an AI assistant in beta.</p>
<p>Siemens is building NX AI Chat and a Design Copilot for both NX and Solid Edge. The Solid Edge automated drawing feature, which claims to auto-generate 80% of views, would be genuinely useful if the claim holds up. Drawing creation is one of the most tedious parts of CAD work and one of the easiest to get wrong.</p>
<p>The <a href="/posts/ai-in-cad-software">AI in CAD software</a> post tracks all of this in more detail, with specifics on what's actually shipping versus what's still a conference slide.</p>
<h2>What text-to-CAD is good at right now</h2>
<p>Simple, well-described mechanical parts. That's the honest answer. If you need a rectangular enclosure with four corner mounting holes, a lid with snap fits, a basic L-bracket, or a standoff with specific thread dimensions, the current tools can often generate something usable. The key word is "well-described." Vague prompts produce vague parts. <a href="/posts/text-to-cad-prompt-engineering">Prompt engineering for text-to-CAD</a> is a real skill, and it matters a lot more than people realize.</p>
<p>The sweet spot right now is rapid prototyping and first-draft geometry. You need a quick bracket to hold a sensor? A simple enclosure to test fit? A mounting plate with a specific bolt pattern? Text-to-CAD can get you a starting point in seconds instead of minutes. You'll almost always need to edit the result, but starting from an 80% solution is faster than starting from a blank sketch.</p>
<p>For <a href="/posts/text-to-cad-for-3d-printing">3D printing</a> specifically, the tools are more forgiving because additive manufacturing is more tolerant of imperfect geometry than machining or molding. If the mesh is watertight and the dimensions are close, you can print it and see what happens. That's not how you'd approach a CNC part, but for prototyping it's fine.</p>
<h2>What text-to-CAD is bad at</h2>
<p>Complex geometry with interdependent features. Multi-body parts. Assemblies. Anything where the relationship between features matters more than the features themselves.</p>
<p>I asked Zoo.dev to generate a gear once. I got something that looked like a gear, in the same way a drawing of a sandwich looks like lunch. The tooth profile was decorative, not functional. The root radius was wrong. The bore was close but not dimensioned to any standard. For a render, fine. For a print that needs to mesh with another gear, useless without complete rework.</p>
<p>Sheet metal is another weak spot. A good sheet metal part has bend allowances, K-factors, relief cuts, and flat-pattern logic baked in. Text-to-CAD tools don't understand any of that. They'll give you a shape that looks like folded metal but won't unfold correctly because it was never designed with bending in mind.</p>
<p>Draft angles for injection molding. Undercuts. Thin-wall considerations. Thread specifications. GD&#x26;T. None of the current tools handle these, and these are exactly the things that separate "a 3D shape" from "a part someone can manufacture." The <a href="/posts/text-to-cad-for-manufacturing">text-to-CAD for manufacturing</a> post covers this gap in more detail, but the short version is: if you need manufacturing-ready output, you need a human in the loop. Period.</p>
<p>Tolerance handling is nonexistent. I have never seen a text-to-CAD tool output a model with proper tolerances. The dimensions are nominal at best, approximate at worst. For prototyping, this is fine. For anything going into production, you're adding tolerances manually anyway, so the time savings shrink.</p>
<h2>The file format question</h2>
<p>This matters more than it sounds. What a text-to-CAD tool outputs determines whether the result is useful or just pretty.</p>
<p>STEP (ISO 10303) is the gold standard for B-Rep exchange. If a tool outputs STEP, you can open that file in SolidWorks, Fusion 360, Creo, NX, or almost anything else and get real geometry with faces and edges you can work with. Zoo.dev outputs STEP, which is one of the reasons I take it seriously.</p>
<p>STL is triangulated mesh data. It's fine for 3D printing. It's useless for engineering edits. You can't select a face, you can't measure a true diameter, and you can't add a fillet to a mesh edge without converting it back to B-Rep first, which is a lossy, painful process that usually produces garbage. Tools that only output STL are not really text-to-CAD. They're text-to-mesh with better marketing.</p>
<p>OpenSCAD's .scad format is interesting because it's already code. An LLM generating an OpenSCAD script is generating a parametric model by definition, one you can open, read, edit, and rebuild. The downside is that OpenSCAD's modeling approach (CSG, constructive solid geometry) has real limitations for complex organic shapes.</p>
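<p>To make the "code is the model" point concrete, here's a minimal sketch: a hand-written Python function that emits OpenSCAD source for a plate with corner holes. The function name and parameters are invented for illustration, but the output is valid OpenSCAD. The point is that every dimension stays a parameter you can change and re-render, which is exactly what a baked STL can't do.</p>

```python
# Illustrative only: generate OpenSCAD source for a rectangular plate
# with four corner clearance holes. Whether this text comes from a
# script or an LLM, the result is parametric by construction.
def plate_scad(length, width, thick, hole_d, margin):
    holes = "\n".join(
        f"    translate([{x}, {y}, -1]) cylinder(h={thick + 2}, d={hole_d}, $fn=64);"
        for x in (margin, length - margin)
        for y in (margin, width - margin)
    )
    return (
        "difference() {\n"
        f"    cube([{length}, {width}, {thick}]);\n"
        f"{holes}\n"
        "}\n"
    )

# 120 x 80 x 6 plate, M5 clearance holes inset 10mm from each edge
print(plate_scad(120, 80, 6, 5.3, 10))
```

<p>Change one argument and regenerate, and you have a new part. Try that with a triangle mesh.</p>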
<p>DXF is 2D. Useful for laser cutting and CNC routing, not for 3D parts.</p>
<p>glTF, OBJ, PLY, FBX are all mesh or visualization formats. They have their uses but they're not engineering formats.</p>
<p>The takeaway: ask what format the tool outputs before you get excited. If the answer is "STL only," temper your expectations accordingly.</p>
<h2>Accuracy and dimensional reliability</h2>
<p>This is the part where I get a little uncomfortable making general statements, because accuracy varies wildly between tools, between prompts, and between the complexity of what you're asking for.</p>
<p>I've tested Zoo.dev with specific dimensional prompts like "a rectangular plate 120mm by 80mm by 6mm with four M5 clearance holes on a 100mm by 60mm bolt pattern, 10mm from each edge." Sometimes the dimensions come back within a millimeter of what I asked for. Sometimes they're off by five percent, which might not sound like much until you try to bolt it to something real.</p>
<p>The pattern I've noticed: the more specific and constrained your prompt, the better the accuracy. "A bracket" gives you whatever the AI's average bracket looks like. "A 90-degree bracket, 3mm thick aluminum, 40mm legs, with two 4.2mm holes on each leg spaced 20mm apart" gives you something much closer to what you need. This is why <a href="/posts/text-to-cad-prompt-engineering">prompt engineering</a> matters so much for text-to-CAD specifically.</p>
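<p>As a throwaway illustration (the helper and its template are made up, not any tool's API), the gap between those two prompts is really the gap between free text and filled-in parameters:</p>

```python
# Hypothetical prompt builder: names and wording are invented. Pinning
# down each dimension explicitly is what moves the output toward the
# part you actually need instead of the model's "average" bracket.
def bracket_prompt(leg_mm, thick_mm, hole_mm, spacing_mm):
    return (
        f"A 90-degree bracket, {thick_mm}mm thick aluminum, {leg_mm}mm legs, "
        f"with two {hole_mm}mm holes on each leg spaced {spacing_mm}mm apart."
    )

print(bracket_prompt(40, 3, 4.2, 20))
```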
<p>But even with perfect prompts, I wouldn't trust the dimensional output for anything going to manufacturing without measuring it myself. Every STEP file I've gotten from any text-to-CAD tool, I import it, measure the critical features, and adjust. It's a starting point, not a finished part. That's not a dealbreaker for the technology. It's just the honest state of things.</p>
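<p>My measure-everything habit amounts to something like this hypothetical check, with every name and the 2% threshold chosen purely for illustration:</p>

```python
# Hypothetical post-import sanity check: compare features measured in
# your CAD package against the nominal dimensions from the prompt.
def check_dims(nominal, measured, tol_pct=2.0):
    """Return (name, nominal, measured, %error) for out-of-tolerance features."""
    failures = []
    for name, nom in nominal.items():
        err_pct = abs(measured[name] - nom) / nom * 100
        if err_pct > tol_pct:
            failures.append((name, nom, measured[name], round(err_pct, 1)))
    return failures

nominal  = {"plate_length": 120.0, "plate_width": 80.0, "hole_dia": 5.3}
measured = {"plate_length": 119.6, "plate_width": 84.1, "hole_dia": 5.3}
print(check_dims(nominal, measured))  # flags plate_width, roughly 5% off
```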
<h2>Where this sits in a real workflow</h2>
<p>Text-to-CAD is not a replacement for CAD software. It's a starting tool. Think of it like a sketch on a napkin, except the napkin is three-dimensional and already close to the right dimensions.</p>
<p>My current workflow when I use it: generate a first draft with a text-to-CAD tool, import the STEP into Fusion 360, measure and fix dimensions, add features the AI missed (fillets, chamfers, holes, mounting features), apply proper constraints, and then treat it like any other part. On simple geometry, this saves me maybe five to fifteen minutes compared to starting from scratch. On complex geometry, it saves nothing because I end up rebuilding the whole thing anyway.</p>
<p>Where I see the real value long-term is in iteration speed. You describe a concept, get geometry instantly, modify the description, get a new version. That feedback loop is fast in a way that sketch-extrude-fillet-undo-try-again isn't. For exploration, for "what would this look like if," for quickly testing five different bracket configurations before committing to one, text-to-CAD is genuinely useful right now.</p>
<p>For production work, it's supplementary. For prototyping and exploration, it's starting to earn its place.</p>
<h2>The B-Rep vs mesh distinction, one more time</h2>
<p>I keep coming back to this because the market keeps trying to blur it. Every week I see another "AI generates 3D models from text!" announcement that turns out to be mesh generation. Meshes are fine for games, animation, VFX, and concept visualization. They are not fine for engineering.</p>
<p>A mesh is a bag of triangles. A B-Rep solid is a mathematically defined shape with faces, edges, vertices, and topology. You can compute exact volumes, areas, center of mass. You can apply manufacturing operations. You can generate toolpaths. You can check interference with mating parts. You can export drawings with real dimensions. None of that works reliably with mesh data.</p>
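<p>A quick sketch of the "exact volumes" point: facet a cylinder into an n-sided prism, which is roughly what an STL stores, and compare against the B-Rep value. The numbers here are illustrative, not from any particular tool:</p>

```python
import math

# A faceted cylinder (an inscribed regular n-gon prism) always
# under-reports volume; the B-Rep definition gives the exact value.
def faceted_cylinder_volume(r, h, n):
    # volume of a regular n-gon prism inscribed in the cylinder
    return 0.5 * n * r * r * math.sin(2 * math.pi / n) * h

exact = math.pi * 25**2 * 10  # B-Rep: pi * r^2 * h, for r=25, h=10
for n in (16, 64, 256):
    approx = faceted_cylinder_volume(25, 10, n)
    print(n, round(100 * (exact - approx) / exact, 3))  # % volume missing
```

<p>The error shrinks as you add triangles, but it never reaches zero, and every downstream calculation inherits it.</p>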
<p>If someone tells you their tool does "text-to-CAD" and the output is OBJ or FBX, they're doing text-to-3D with aspirational naming. The distinction isn't pedantic. It's the difference between getting a model you can work with and getting a model you have to recreate.</p>
<h2>Will this replace CAD users?</h2>
<p>No. Not soon. Possibly not ever, depending on how you define "replace."</p>
<p>Text-to-CAD will change what CAD work looks like. It will automate some of the repetitive geometry creation that takes time but not much thought. Standard brackets, simple enclosures, mounting plates, basic structural shapes. That stuff will get faster.</p>
<p>But the hard parts of CAD, the parts where engineering judgment, manufacturing knowledge, tolerance analysis, assembly relationships, and plain experience matter, those are not what text-to-CAD solves. I still don't trust any of these tools to generate a part I'd send to a machine shop without checking every feature myself. A machinist I work with puts it this way: "I don't care how the model was made. I care if the model is right." Fair enough.</p>
<p>The more interesting question is whether text-to-CAD makes CAD more accessible to people who don't currently use it. Industrial designers who think in shapes but don't want to learn sketch constraints. Hardware engineers who need a quick enclosure but not a SolidWorks license. Hobbyists and small-shop people who just need a bracket and don't have time to learn a feature tree. That's where the real impact might be, not in replacing experienced users but in expanding who can generate usable geometry at all.</p>
<h2>My honest assessment</h2>
<p>Text-to-CAD is real, it's early, and it's being oversold by about 60%. The core technology works. B-Rep generation from text prompts is a solved problem in the academic sense and a partially solved problem in the commercial sense. Zoo.dev, CADAgent, and a few others produce genuinely useful output for simple parts. The major vendors are paying attention and adding AI features to their platforms, which tells you the direction even if the shipping timeline is optimistic.</p>
<p>What's missing is reliability, complexity handling, manufacturing awareness, and dimensional precision. These are exactly the things that separate a demo from a tool you'd actually bet a project on. They'll improve. The research is moving fast. The training data is growing. But right now, in April 2026, text-to-CAD is a productivity aid for simple geometry and a curiosity for everything else.</p>
<p>I use it. I don't rely on it. That's probably the right balance for the next year or two. If you want to start experimenting, Zoo.dev's free tier is the best place to begin. Learn to write <a href="/posts/text-to-cad-prompt-engineering">good prompts</a>. Understand the <a href="/posts/how-text-to-cad-works">output formats</a>. Set your expectations at "useful starting point" rather than "finished part" and you'll be in the right neighborhood.</p>
<p>The tools will get better. The question for working engineers isn't whether to adopt text-to-CAD. It's when the output quality crosses the line from "interesting experiment" to "saves me real time on real work." For simple parts, that line is already here. For everything else, I'm watching, testing, and keeping my Fusion 360 shortcuts warm.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How text-to-CAD actually works</title>
      <link>https://blog.texocad.ai/posts/how-text-to-cad-works</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/how-text-to-cad-works</guid>
      <pubDate>Sun, 18 Jan 2026 00:00:00 GMT</pubDate>
      <description>The short version: an AI reads your prompt and tries to output real CAD geometry instead of a mesh blob. The longer version involves transformers, B-Rep kernels, and a lot of duct tape.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>how-it-works</category>
      <category>b-rep</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> Text-to-CAD works by feeding a natural language prompt into a trained neural network that outputs a sequence of CAD operations (sketches, extrusions, fillets) rather than pixels or mesh triangles. The AI generates B-Rep geometry using learned patterns from datasets like DeepCAD&apos;s 170,000 parametric models.</p>
<p>Text-to-CAD works by converting a written prompt into a sequence of CAD modeling operations, like sketch, extrude, fillet, and chamfer, that a geometric kernel then executes into real B-Rep geometry. The AI doesn't draw a shape. It writes a recipe for building one.</p>
<p>I figured this out the hard way. I'd been using Zoo's text-to-CAD tool for a few weeks, getting results that ranged from surprisingly useful to quietly wrong, and I couldn't tell why the same kind of prompt would produce a clean bracket one day and a cursed lump the next. So I did what I always do when software annoys me enough: I went and read the research papers, sitting at my desk at ten o'clock on a Tuesday night with one dead monitor and a browser full of arxiv tabs. What I found was both more interesting and more fragile than I expected.</p>
<h2>The pipeline, from English to geometry</h2>
<p>The general architecture behind text-to-CAD follows a pattern that will look familiar if you've paid any attention to how large language models work, except the output isn't text. It's a sequence of parametric CAD commands.</p>
<p>Here's the basic flow. You type a prompt: "rectangular enclosure, 80mm by 50mm by 30mm, 2mm wall thickness, four M3 mounting holes on the corners." That prompt goes into a text encoder, typically a BERT-style transformer, which converts your words into a dense numerical representation that captures the meaning, dimensions, and spatial relationships you described. Think of it as translating English into math that the next stage can actually use.</p>
<p>That encoded representation then feeds into a decoder, usually an autoregressive transformer that generates a sequence of CAD operations one step at a time. Not triangles. Not voxels. Operations. "Create sketch on XY plane. Draw rectangle 80mm by 50mm. Extrude 30mm. Shell to 2mm wall. Place hole, M3 clearance, at position (5, 5). Repeat at corners." Each operation in the sequence is a token, and the model predicts the next token based on everything that came before, the same way a language model predicts the next word in a sentence.</p>
<p>The difference that matters: when GPT predicts the next word, a bad prediction gives you a weird sentence. When a CAD sequence decoder predicts the wrong operation, you get geometry that intersects itself, a fillet on an edge that doesn't exist, or an extrusion that collapses the model into something that would make a topology professor weep. CAD geometry has rules. Hard rules. Surfaces have to be watertight. Faces have to connect. Boolean operations have to produce valid solids. There's no "close enough" in B-Rep the way there is in mesh approximation.</p>
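<p>A toy way to see those hard rules, with every name invented and no resemblance to a real kernel: treat the decoder's output as a list of operations and execute them in order. A plausible-looking token in the wrong place doesn't make a weird part. It makes no part:</p>

```python
# Toy "kernel": executes a predicted operation sequence and enforces
# ordering rules the way real geometry does. Names are illustrative.
def execute(ops):
    state = {}
    for name, params in ops:
        if name == "sketch_rect":
            state["profile_area"] = params["w"] * params["l"]
        elif name == "extrude":
            if "profile_area" not in state:
                raise ValueError("extrude with no sketch to extrude")
            state["volume"] = state["profile_area"] * params["depth"]
        elif name == "fillet":
            if "volume" not in state:
                raise ValueError("fillet with no solid edges to fillet")
    return state

good = [("sketch_rect", {"w": 80, "l": 50}), ("extrude", {"depth": 30})]
print(execute(good)["volume"])      # 120000: an 80 x 50 x 30 block

bad = [("extrude", {"depth": 30})]  # a plausible token, an invalid sequence
try:
    execute(bad)
except ValueError as err:
    print("rejected:", err)
```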
<h2>What the Text2CAD paper actually showed</h2>
<p>The most important academic work in this space is the Text2CAD paper that got a spotlight at NeurIPS 2024. I've seen it cited by every vendor in the space, usually with the inconvenient parts left out.</p>
<p>The researchers built the first end-to-end framework for generating parametric CAD models from natural language. They used the <a href="/posts/deepcad-dataset">DeepCAD dataset</a>, which contains roughly 170,000 parametric CAD models, and annotated it with about 660,000 text descriptions at varying levels of detail and skill. Some annotations read like an engineer's spec. Others read like a beginner describing a shape they saw once. That range was deliberate, because real users don't all talk like SolidWorks power users.</p>
<p>The architecture uses a BERT encoder for the text side and a transformer-based autoregressive network for the CAD sequence side. The model learns to map natural language descriptions to sequences of sketch and extrude operations that, when executed, produce the described geometry. It's trained end-to-end, meaning the text understanding and the CAD generation learn together rather than being bolted on separately.</p>
<p>The results were genuinely impressive for what they are. The model could generate recognizable mechanical parts from text descriptions, with correct topology and editable feature history. But "recognizable" and "dimensionally accurate" are not the same thing, and "editable" and "production-ready" aren't either. The <a href="/posts/text2cad-paper">Text2CAD paper</a> is an academic proof of concept, not a shipping product. Most of the commercial tools build on similar ideas but add their own layers of engineering on top, and none of them are particularly transparent about how much duct tape is involved.</p>
<h2>B-Rep generation vs mesh generation</h2>
<p>Most AI 3D tools (Meshy, Tripo, diffusion-model generators) produce meshes: bags of triangles that approximate a surface. A mesh can look like a bracket, but it doesn't know it's a bracket. You can't select a face. You can't fillet an edge. Import one into SolidWorks and the software treats it like a foreign object that wandered in from a game engine.</p>
<p>Text-to-CAD produces <a href="/posts/brep-vs-mesh-ai-generation">B-Rep geometry</a>. Boundary Representation. Mathematically defined surfaces and edges, the same kind your CAD software creates when you sketch and extrude. Real faces, real edges, topology the software understands. You can measure, modify, and export a STEP file a machine shop will accept.</p>
<p>This is also why text-to-CAD is harder than text-to-3D. Mesh triangles just need to look right from a distance. B-Rep geometry means every operation has to produce a mathematically consistent solid. One bad boolean, one self-intersecting surface, one unclosed sketch, and the model fails. I've seen outputs that look perfect in the viewport and explode the moment you try to fillet an edge.</p>
<h2>The operation sequence is the key idea</h2>
<p>What separates text-to-CAD from other AI 3D approaches is that the output is a sequence of operations, not a surface prediction.</p>
<p>A human in Fusion 360 builds a part step by step: sketch, dimension, extrude, add a hole, fillet edges, shell the body. The feature tree records this history. You can roll back, change a dimension, watch the rest update. A text-to-CAD model generates that same kind of sequence. "Sketch on XY. Rectangle, origin-centered, 80x50. Pad 30mm. Fillet edges, 2mm radius. Pocket, circular, 3.2mm, position (5, 5, 30)." The geometric kernel executes these in order, producing a solid with a real feature tree.</p>
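<p>A toy sketch of why that recorded history matters (all names invented, nothing like a real feature tree): replay the same operations after editing one parameter, and the downstream result updates the way a rolled-back feature tree would:</p>

```python
import math

# Toy replay of an operation history. Editing one op and re-executing
# is the essence of parametric rebuild; everything here is illustrative.
def replay(history):
    volume = 0.0
    for op, p in history:
        if op == "pad":        # sketch a rectangle and extrude it
            volume = p["w"] * p["l"] * p["depth"]
        elif op == "pocket":   # cut a circular through-hole
            volume -= math.pi * (p["d"] / 2) ** 2 * p["depth"]
    return volume

history = [("pad",    {"w": 80, "l": 50, "depth": 30}),
           ("pocket", {"d": 3.2, "depth": 30})]
before = replay(history)
history[0] = ("pad", {"w": 80, "l": 50, "depth": 40})  # edit one dimension
# (pocket depth left at 30 purely to keep the toy simple)
after = replay(history)
print(round(after - before))   # 40000: only the pad's contribution changed
```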
<p>This is why the kernel matters. Zoo uses KittyCAD, a GPU-native kernel built for AI-driven geometry generation. Other tools generate OpenSCAD code, Python scripts for FreeCAD, or commands that run inside Fusion 360 like CADAgent does. The kernel has to execute whatever the AI generates and produce valid geometry at the end. When the kernel and the AI disagree about what's geometrically possible, you get the silent failures that make this technology maddening to debug.</p>
<h2>Why this is genuinely hard</h2>
<p>I want to be clear about something, because the demos make this look easier than it is. Generating valid parametric CAD geometry from text is harder than generating images or meshes.</p>
<p>CAD geometry has constraints that images and meshes don't. Every sketch needs to be fully constrained or the extrusion is ambiguous. Every boolean operation (cut, join, intersect) needs to produce a valid solid, not a self-intersecting mess. Fillets and chamfers can only be applied to edges that actually exist in the current state of the model, and whether a fillet succeeds depends on the surrounding geometry in ways that are difficult to predict without actually trying it. I've been using SolidWorks for over a decade and I still get surprised by which fillets fail and which don't. Expecting a neural network to get this right every time is optimistic.</p>
<p>The training data problem is real too. The DeepCAD dataset has 170,000 models, which sounds like a lot until you compare it to the billions of images used to train Stable Diffusion or the trillions of text tokens used for GPT. CAD data is scarce because most of it is proprietary. Companies don't publish their parametric models. The models that do exist in public datasets tend toward simple mechanical parts. So the AI has seen a lot of brackets and boxes and housings, and not many gears, snap fits, sheet metal parts, or complex multi-body assemblies. It generates what it's been trained on, and the training data has holes you could drive a forklift through.</p>
<p>Then there's evaluation. How do you measure whether a generated CAD model is "good"? The Text2CAD paper uses metrics like coverage and constraint satisfaction. But those don't capture what an engineer cares about: Is the feature tree clean? Can I edit it without breaking everything? Would a machinist accept this without calling me? Those questions don't have neat mathematical answers, which makes it hard to train a model to optimize for them.</p>
<h2>What this means for the output you actually get</h2>
<p>When you use a text-to-CAD tool today, you're getting predictions from a neural network trained on a small dataset of simple CAD models, run through a geometric kernel that enforces validity but can't fix bad predictions. Good outputs happen when your prompt aligns with what the model has seen in training and when the operation sequence is geometrically valid. Bad outputs happen when any of that breaks down. And it breaks down quietly.</p>
<p>You don't get an error saying "the fillet failed because the adjacent face conflicts with the shell." You get a model that looks fine and turns out to have internal faces, zero-thickness walls, or dimensions 15% off from what you asked for. I've learned to measure every critical feature on every model I get from these tools, the same way I measure parts from a shop. Trust, but verify, except I don't trust yet.</p>
<h2>Where this actually stands</h2>
<p>The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the practical side of using these tools. The <a href="/posts/what-is-text-to-cad">what is text-to-CAD</a> post explains the concept without assuming you know what B-Rep means. What I've tried to explain here is the machinery: why it works when it works, and why it breaks when it breaks.</p>
<p>The core technology is real. Transformers can learn to generate valid CAD operation sequences from text. The commercial tools have turned that into something usable, with varying reliability. But there's a real gap between "the AI can generate a plausible sequence of CAD operations" and "the AI can generate the right sequence for your part with correct dimensions and a feature tree you'd want to edit."</p>
<p>A CNC machine can cut any shape you tell it to, but the shape is only as good as the program. Text-to-CAD is the same deal. The kernel builds whatever the AI tells it to. The question is whether the AI is telling it the right thing. For simple parts with clear descriptions, often yes. For anything with real complexity, the answer is a polite version of "sort of, but check everything." The architecture is sound. The training data is growing. The kernels are improving. I'll be watching, importing STEP files, and keeping my calipers on the desk.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Can AI actually design CAD models?</title>
      <link>https://blog.texocad.ai/posts/can-ai-design-cad-models</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/can-ai-design-cad-models</guid>
      <pubDate>Sat, 17 Jan 2026 00:00:00 GMT</pubDate>
      <description>Sort of. AI can generate simple CAD geometry from text prompts, and the results are getting less terrible. But &apos;design&apos; is a strong word for what it&apos;s actually doing.</description>
      <dc:creator>TexoCAD</dc:creator>

      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI can generate basic CAD models from text prompts using tools like Zoo.dev and CADAgent, producing real B-Rep geometry (not just meshes). But it cannot yet handle complex assemblies, tight tolerances, or manufacturing constraints. It generates geometry, not engineering design.</p>
<p>Last Tuesday I showed a colleague an enclosure I'd generated with Zoo.dev's text-to-CAD tool. Decent wall thickness, snap-fit tabs, four corner standoffs for a PCB, a slot for a USB connector. The whole thing had taken about thirty seconds of typing and fifteen seconds of waiting. He rotated it on screen, nodded a few times, and then asked the question I'd been trying not to ask myself: "But who designed it?"</p>
<p>I didn't have a good answer. I typed a prompt. The AI produced a shape. The shape looked like something a person would design. But nobody had sat there thinking about draft angles, or checking if the snap fits would actually engage, or worrying about whether the USB slot was wide enough for the connector plus the cable strain relief. The AI skipped all of that because the AI doesn't know any of that exists. It just made geometry that looked plausible, and plausible is a dangerous neighbourhood to live in when you're trying to make physical objects.</p>
<p>That afternoon stuck with me, partly because the enclosure really did look good on screen, and partly because my coffee was already cold by the time I'd finished measuring all the ways it wouldn't work. The question "can AI design CAD models" has a complicated answer, and most of the internet is giving you the simple one.</p>
<h2>What AI can actually generate</h2>
<p>The honest answer is that AI can generate simple CAD geometry, and it's getting better at it faster than I expected. Tools like Zoo.dev, CADAgent, and a handful of others can take a text description and produce real B-Rep solid models. Not meshes. Not renders. Actual solids with faces, edges, and topology you can open in Fusion 360 or SolidWorks, select a face, add a fillet, export a STEP file. That part is real, and it matters.</p>
<p>The sweet spot is prismatic parts with clear descriptions. Brackets. Mounting plates. Simple enclosures. Standoffs. Cable clips. The kind of geometry that an experienced CAD user would knock out in fifteen minutes but that still takes fifteen minutes nobody wants to spend. If you type "L-bracket, 3mm aluminum, 50mm legs, two M5 clearance holes per leg on a 30mm spacing," the current tools will usually give you something close. Not perfect, but close enough to be a useful starting point.</p>
<p>I've been <a href="/posts/text-to-cad-guide">testing text-to-CAD tools</a> for months now, and the pattern holds: the simpler and more specific the prompt, the better the result. A rectangular plate with a bolt pattern comes out fine. A cylindrical standoff with a counterbore works more often than not. A box enclosure with mounting bosses lands in the right neighbourhood. You learn to describe things the way a careful machinist would read a drawing, leaving nothing ambiguous, specifying every dimension that matters.</p>
<p>Where things start to go sideways is the moment you need features that relate to each other in ways the AI can't infer from a sentence. A snap fit that needs to flex a specific amount. A wall that tapers for draft. A pocket that references the position of a mating part. The AI doesn't understand mechanical context. It understands shapes that tend to appear near other shapes, which is a very different thing.</p>
<h2>The difference between generating geometry and designing a part</h2>
<p>This is the part that matters, and the part that most AI hype conveniently skips over.</p>
<p>Generating geometry means producing a 3D solid that matches a description. Designing a part means understanding why the geometry should be a certain way, what forces act on it, how it will be manufactured, what tolerances it needs, how it mates with adjacent parts, what happens when the material shrinks or the temperature changes or the bolt gets overtorqued by an assembler having a bad Monday.</p>
<p>AI does the first thing. It does not do the second thing. Not even a little.</p>
<p>When I model a bracket in Fusion 360, I'm making dozens of small decisions that never appear in the final geometry. I'm choosing a wall thickness based on the material and the loads. I'm positioning holes relative to edges with enough meat to avoid cracking. I'm adding fillets not because they look nice but because stress concentrations kill parts. I'm thinking about whether a CNC mill can reach that pocket, or whether the bend radius works for the sheet metal brake in the shop down the road. Every feature has a reason that lives outside the model itself.</p>
<p>AI-generated geometry has none of that embedded knowledge. The bracket looks like a bracket. The holes are in plausible locations. The fillets exist. But the reasoning behind each feature is absent, and that reasoning is what separates a shape from an engineered part. A machinist I've worked with for years once described an AI-generated STEP file as "a part that had never met a tool." He wasn't wrong.</p>
<h2>What AI cannot do yet</h2>
<p>The list is long, and it maps pretty directly to the things that make CAD work actually hard.</p>
<p>Complex assemblies are out. The current tools work on single bodies. Ask for an assembly with mating constraints, fasteners, clearances, and an assembly sequence, and you'll get either an error or a single blob that vaguely suggests multiple parts fused together. Real assemblies are about relationships between parts, and relationships require the kind of engineering judgment AI doesn't have.</p>
<p>Tolerances don't exist in AI output. No dimensional tolerances, no GD&#x26;T, no fit classes, no surface finish callouts. The geometry arrives as nominal dimensions that are approximately correct if you're lucky. For prototyping, this is workable. For anything going to a supplier with a purchase order, you're adding all the engineering data yourself. I've covered the <a href="/posts/is-text-to-cad-accurate">accuracy problem</a> in detail elsewhere, but the short version is: don't send AI-generated dimensions to a machine shop without measuring everything yourself first.</p>
<p>Design for manufacturing is completely absent. Draft angles for injection molding. Bend allowances for sheet metal. Tool access for CNC pockets. Minimum wall thickness for the process. Gate locations. Ejection considerations. Weld lines. None of this is in the AI's vocabulary. The geometry might look manufacturable on screen, but the shop floor has a way of revealing what the viewport hid.</p>
<p>Organic and complex surfaces barely work. Swept profiles, lofted blends, variable-radius fillets, anything that requires smooth G2 continuity across a surface network. These are hard for experienced CAD users. For AI, they're basically impossible right now. If your part has freeform surfaces, you're modeling them yourself.</p>
<p>The theme across all of these is the same: AI can approximate shape, but it cannot approximate the thinking that went into the shape. And in engineering, the thinking is the design. The shape is just its shadow.</p>
<h2>An honest look at where things stand</h2>
<p>I want to be fair, because the technology is real and dismissing it entirely would be dishonest. I use text-to-CAD in my own workflow. Not for production parts, but for getting started. If I need a quick bracket concept to show a client, I'll generate one in thirty seconds rather than spending fifteen minutes in Fusion. If I want to explore five different enclosure proportions before committing, text-to-CAD lets me iterate at the speed of typing rather than the speed of sketching and extruding.</p>
<p>The tools I take most seriously are Zoo.dev, which outputs real B-Rep as STEP files through a well-documented API, and CADAgent, an open-source Fusion 360 add-in that generates models with actual feature history inside a real parametric environment. Both of these produce geometry you can genuinely work with, not just look at. The major CAD vendors are also adding AI features to their platforms, with Autodesk, Dassault, PTC, and Siemens all building various forms of AI-assisted modeling into their existing tools. Most of that is still early, but the direction is clear enough.</p>
<p>The honest assessment: for simple, well-described parts, text-to-CAD saves real time. For moderate complexity, it gives you a starting point that needs significant rework. For anything complex, it saves nothing because you end up rebuilding the model from scratch anyway. That's not failure. That's just early technology being early.</p>
<h2>Where this is heading</h2>
<p>The research is moving quickly. The Text2CAD paper that got a spotlight at NeurIPS 2024 showed that sequence-based generation of CAD operations from text is a viable approach, and the commercial tools are catching up. If you want to understand the technical architecture behind <a href="/posts/how-text-to-cad-works">how text-to-CAD works</a>, the key insight is that these systems predict CAD operations, not raw geometry, which is why the output is editable at all. Training datasets are growing. Integration with real CAD environments is getting tighter. Within a year or two, I expect simple-to-moderate parts to be fairly reliable straight from a prompt.</p>
<p>The harder problems, the ones that require manufacturing awareness, tolerance reasoning, and multi-part thinking, will take longer. Maybe much longer. There's talk of bolting DFM validation onto AI output, which would catch the worst errors before they reach a shop. There's work on training models with manufacturing context, not just geometry. Both of those would help. Neither of those is shipping today.</p>
<p>I think the most likely near-term future is a hybrid one. AI generates the starting geometry. A human engineer adds the intelligence: tolerances, manufacturing constraints, assembly relationships, the stuff that turns a shape into a product. The <a href="/posts/ai-vs-human-cad-design">comparison between AI and human CAD design</a> is less about competition and more about figuring out which parts of the work benefit from automation and which parts still need a brain that's been yelled at by a machinist.</p>
<h2>So, can AI design CAD models?</h2>
<p>It can generate them. It cannot design them. That distinction sounds pedantic until you try to manufacture the output, at which point it becomes the only thing that matters.</p>
<p>"Design" implies intent, constraint awareness, and engineering judgment. AI has none of those. It has pattern recognition trained on existing geometry, and it uses that to produce shapes that statistically resemble real parts. Sometimes the resemblance is close enough to be useful. Sometimes it's close enough to be dangerous, which is worse.</p>
<p>If you're exploring concepts, generating quick geometry for discussion, or producing simple parts for non-critical applications, AI text-to-CAD is a real tool that saves real time. If you're engineering a product that needs to work, fit, survive, and be manufactured reliably, AI gives you a starting sketch at best. The <a href="/posts/text-to-cad-limitations">text-to-CAD limitations</a> are not temporary inconveniences. They're reflections of how much implicit knowledge goes into every part a good engineer models.</p>
<p>I'll keep using these tools. I'll keep being surprised when they get something right and unsurprised when they miss. But I'm not calling what they do "design" until the output can survive a conversation with a machinist without anyone reaching for a chair. We're not there yet. We're getting closer. And my Fusion 360 shortcuts aren't going anywhere.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AI CAD for real work: manufacturing, accuracy, and limits</title>
      <link>https://blog.texocad.ai/posts/ai-cad-for-real-work</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-cad-for-real-work</guid>
      <pubDate>Fri, 16 Jan 2026 00:00:00 GMT</pubDate>
      <description>I took AI-generated CAD output and tried to actually make parts from it. CNC, 3D printing, injection molding. Here&apos;s what happened, what broke, and where the gap between demo and production still lives.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>manufacturing</category>
      <category>accuracy</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI CAD tools can generate geometry that looks correct on screen, but most output still fails basic manufacturing checks: missing tolerances, non-manufacturable features, broken topology, and no DFM awareness. The technology is useful for early concepts and simple parts, not production-ready engineering.</p>
<p>AI-generated CAD output is not ready for production manufacturing in most cases, and anybody who tells you otherwise hasn't tried to actually make the parts. I spent a long week taking text-to-CAD output from three different tools, exporting STEP files, and sending them to a local machine shop and a 3D print service. The machinist called me after two hours. Not to ask a question. To tell me the geometry was, in his words, "theoretically a part." The wall on one side was 0.3 mm thick. A pocket had no tool access. Two holes were positioned where they'd intersect with a fillet that didn't exist in some views. The 3D printer fared slightly better, in the way that a C-minus is slightly better than failing.</p>
<p>That experience shaped everything in this post. I've spent over a decade in CAD, starting in AutoCAD, living in SolidWorks for years, now mostly working in Fusion 360, and I wanted to give <a href="/posts/text-to-cad-guide">text-to-CAD</a> a fair shot in the context where it actually has to perform: making physical things. Not rendering them. Not spinning them in a browser. Making them, out of material, with tools that don't care about your demo.</p>
<h2>The gap between demo geometry and real parts</h2>
<p>Every text-to-CAD demo I've seen follows the same script. Someone types a prompt. A 3D model appears. The audience makes impressed noises. The demo ends before anyone asks whether the part can be machined, molded, printed, or even dimensioned.</p>
<p>That gap is where real engineering lives, and it's where AI CAD currently falls apart.</p>
<p>The geometry that comes out of tools like Zoo.dev, AdamCAD, or CADAgent looks like a part. It has faces, edges, volumes. It exports to STEP or STL. In a viewport, it passes the squint test. But passing the squint test is not engineering. A bracket that looks like a bracket but has no defined tolerances, no consideration for tool access, no draft angles, and wall thicknesses that change arbitrarily from face to face is not a bracket. It's a sculpture with mechanical ambitions.</p>
<p>I'm not saying this to be cruel. I'm saying it because <a href="/posts/is-text-to-cad-accurate">the accuracy problem</a> is the single biggest obstacle between AI CAD and real work, and most of the conversation around these tools pretends it doesn't exist.</p>
<h2>Dimensional accuracy: what the numbers actually look like</h2>
<p>When I talk about accuracy, I don't mean "does the shape look roughly right." I mean: if you ask for a 50 mm x 30 mm x 10 mm block with a 6 mm hole centered on the top face, do you get those dimensions?</p>
<p>The honest answer is: sometimes, sort of. The tools I tested got the gross dimensions within a few percent on simple prompts. A box is usually close to the right size. A cylinder is usually close to the right diameter. But "close" in manufacturing is not a compliment. A 6 mm hole that comes out as 5.7 mm doesn't fit the M6 bolt. A 50 mm dimension that's actually 49.2 mm means the part doesn't mate with the assembly. And these are the easy cases, single features on simple geometry.</p>
<p>Once you add complexity (holes on curved surfaces, pockets with fillets, features that reference other features), the dimensional drift gets worse. I measured one output where the prompted hole diameter was 8 mm and the generated geometry measured 7.4 mm on one axis and 7.8 mm on the other. Not a circle. An oval pretending to be a circle. The STL triangulation didn't help, but the underlying B-Rep was also off.</p>
<p>For reference, a typical CNC machining tolerance is plus or minus 0.1 mm for standard work, tighter for precision fits. The AI outputs I tested were off by 0.3 to 0.8 mm on individual features, and that's before you talk about GD&#x26;T, surface finish, or feature relationships. Nobody is holding position tolerance on a hole that the AI placed by vibes.</p>
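<p>The "measure everything" habit can be semi-automated. Here's a minimal, hypothetical check that compares measured dimensions against the nominal values from a prompt; the feature names and numbers mirror the examples above and are illustrative, not from a real inspection report.</p>

```python
# Sketch of a dimensional sanity check for AI-generated geometry:
# compare measured feature sizes against the nominal values you prompted
# for, flagging anything outside a tolerance band. Names and values are
# illustrative; a real workflow would pull measurements from CAD or calipers.

def check_features(nominal: dict, measured: dict, tol: float = 0.1) -> list:
    """Return the features whose deviation exceeds +/- tol (mm)."""
    return [name for name, nom in nominal.items()
            if abs(measured[name] - nom) > tol]

nominal  = {"length": 50.0, "width": 30.0, "hole_dia": 6.0}
measured = {"length": 49.2, "width": 30.05, "hole_dia": 5.7}

print(check_features(nominal, measured))  # ['length', 'hole_dia']
```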
<h2>Tolerances: the thing AI doesn't know exists</h2>
<p>Here is a fact that should make any mechanical engineer uncomfortable: current text-to-CAD tools do not generate tolerances. Not dimensional tolerances. Not geometric tolerances. Not surface finish callouts. Nothing.</p>
<p>The model arrives as nominal geometry. It has no concept of fit classes, no awareness that a bearing bore needs to be H7, no understanding that a mating surface might need to be flat within 0.05 mm. The AI generates shapes. It does not generate engineering intent.</p>
<p>This matters more than most people outside manufacturing realize. A part without tolerances is a suggestion, not a specification. You can't quote it, inspect it, or hold a supplier accountable to it. Every shop I've worked with would look at a toleranceless model and either guess, call you, or add their own standard tolerances, which may or may not match what you needed.</p>
<p>I have fixed this kind of mess before, usually while reheating the same coffee for the third time. The fix is always manual. You take the AI output, import it into your real CAD tool, and add all the engineering data yourself. Which raises the question: if you're doing all the engineering work anyway, how much time did the AI actually save?</p>
<h2>Can you CNC machine AI-generated parts?</h2>
<p>Short answer: not directly.</p>
<p>Longer answer: the geometry might cut, but the part definition won't survive a real machining workflow without significant rework. Here's why.</p>
<p>CNC machining needs more than a 3D shape. It needs tool access. It needs radii that match available cutters. It needs walls thick enough to not chatter or deflect. It needs features positioned so a vise can hold the stock and a spindle can reach the cut. It needs draft considerations if the part goes into a fixture. It needs a drawing, or at least a model with PMI, that tells the machinist what matters.</p>
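<p>One of those constraints, cutter radius versus internal corner radius, is simple enough to state in code. A hedged sketch: the end mill list is a common metric set, and the 10% margin is illustrative (real shops choose cutters based on much more than corner fit).</p>

```python
# One machinability rule from the list above: an internal pocket corner can
# only be milled if the corner radius is at least the cutter's radius, with
# some margin so the tool isn't cutting at full engagement in the corner.
# Cutter diameters and margin below are illustrative assumptions.

END_MILL_DIAMETERS_MM = [3.0, 4.0, 6.0, 8.0, 10.0, 12.0]

def largest_usable_cutter(corner_radius_mm: float, margin: float = 1.1):
    """Largest listed end mill whose radius * margin still fits the corner,
    or None if no cutter fits (e.g. a sharp internal corner)."""
    usable = [d for d in END_MILL_DIAMETERS_MM
              if (d / 2) * margin <= corner_radius_mm]
    return max(usable) if usable else None

print(largest_usable_cutter(3.5))  # 6.0 (radius 3.0 * 1.1 = 3.3 fits in 3.5)
print(largest_usable_cutter(0.0))  # None: sharp corner, no end mill can cut it
```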
<p>AI-generated geometry ignores all of this. I sent a text-to-CAD bracket to my usual shop with a STEP file and nothing else. The response was educational. The internal pocket had sharp corners, which means no end mill can cut them without EDM or a secondary operation. One wall was 0.4 mm thick on a part that was supposed to be aluminum, which would flex like a beer can. Two holes were positioned so close to an edge that the material between them would likely crack during machining. And the overall shape, while parametric in the source tool, arrived as dumb geometry with no feature tree, no sketch references, and no way to adjust anything without essentially remodeling it.</p>
<p>A machinist who's been doing this for thirty years told me the geometry "looked like it was designed by someone who had seen a part but never held one." That stuck with me.</p>
<p>If you want to understand this problem better, I wrote about <a href="/posts/ai-cad-for-cnc-machining">AI CAD for CNC machining</a> in more detail. The short version: text-to-CAD can give you a starting shape for CNC work, but turning that shape into a machinable part is still a manual job, and it's not a small one.</p>
<h2>3D printing: the most forgiving test, and it still struggles</h2>
<p>3D printing is supposed to be the easy case. FDM, SLA, SLS, and similar processes are more tolerant of weird geometry than subtractive methods. No tool access issues. No cutter radius limits. Less concern about workholding. If AI CAD were going to succeed anywhere, <a href="/posts/text-to-cad-for-3d-printing">3D printing</a> should be it.</p>
<p>And it does work, sometimes, for simple things. I got a few box-shaped enclosures and basic bracket-like parts to print successfully on an FDM printer. The dimensions were close enough for a prototype. The shapes were printable. If your bar for success is "plastic object that exists and roughly resembles what you asked for," text-to-CAD can clear it.</p>
<p>But "roughly resembles" is a low bar, and even 3D printing has rules. Overhangs need support or design consideration. Wall thickness needs to be consistent and above minimum for the process. Bridging distances matter. Hole orientations affect accuracy. Print direction affects strength. None of this information is encoded in AI-generated output.</p>
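<p>Some of these rules are mechanical enough to check automatically. Here's a sketch of the classic 45-degree overhang rule applied to hand-written face normals; a real checker would pull normals from the mesh or B-Rep, and the threshold is a rule of thumb, not a constant of nature.</p>

```python
import math

# One FDM printability rule from the list above: a downward-facing surface
# needs support if it tilts more than ~45 degrees from vertical. Build
# direction is +Z; for a unit face normal, the tilt from vertical is
# asin(-nz) when nz < 0. Normals below are hand-written illustrations.

def needs_support(normal, max_overhang_deg: float = 45.0) -> bool:
    nz = normal[2]
    # A surface tilted theta from vertical has nz = -sin(theta),
    # so it needs support once nz drops below -sin(threshold).
    return nz < -math.sin(math.radians(max_overhang_deg))

print(needs_support((0.0, 0.0, -1.0)))       # True: horizontal ceiling
print(needs_support((1.0, 0.0, 0.0)))        # False: vertical wall
print(needs_support((0.0, -0.906, -0.423)))  # False: ~25 deg overhang, prints fine
```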
<p>One part I tested had a floating internal ledge that would have required support material inside a closed cavity. The AI didn't model drain holes. It didn't consider print orientation. It just generated a shape that worked in the viewport and called it done. That's fine for a concept render. It's not fine for anyone trying to actually press "print" and get a usable result.</p>
<p>The gap is smaller here than with CNC, but it still exists. And for production 3D printing, where you need consistency, dimensional stability, and process-aware design, the gap is larger than the FDM prototype crowd might expect.</p>
<h2>Injection molding: not even close</h2>
<p>I almost feel bad including this section because it's so lopsided. Injection molding is one of the most constraint-heavy manufacturing processes in common use. Draft angles. Uniform wall thickness. Gate location. Parting lines. Undercuts. Ejection. Sink marks. Weld lines. Shrinkage compensation. Material flow analysis. Every one of these factors needs to be considered during part design, not after.</p>
<p>Text-to-CAD tools have zero awareness of any of this.</p>
<p>I took a text-to-CAD generated enclosure, the kind of thing you'd injection mold for a consumer product, and showed it to a tooling engineer. He didn't even open it in CAD. He looked at the render for about ten seconds and pointed out three problems: no draft on any vertical face, variable wall thickness that would cause differential cooling and warpage, and a snap-fit feature that was geometrically impossible to eject from a two-part mold without a side action.</p>
<p>To be fair, most junior engineers also don't know injection molding constraints when they start. The difference is that junior engineers learn. Current AI CAD tools don't have the training data, the feedback loop, or the physics awareness to learn DFM constraints in any meaningful way. The geometry comes out looking like a part that forgot it needed to be manufactured.</p>
<p>If you're doing injection molding, text-to-CAD is not your tool. Not yet. Possibly not for a long time.</p>
<h2>Sheet metal: a mixed bag</h2>
<p>Sheet metal CAD is a specialized domain. You need bend radii, K-factors, flat pattern calculations, relief cuts, hem considerations, and awareness of what a brake can actually do. The part in 3D needs to unfold into a flat pattern that can be laser-cut or punched, then bent into shape without tearing, buckling, or springing back into the wrong angle.</p>
<p>I tested one text-to-CAD tool's attempt at a simple L-bracket in sheet metal. It generated a solid body that looked like a bent piece of metal, but it was actually a solid extrusion with no sheet metal definition. No bend features. No flat pattern. No awareness of material thickness as a driving parameter. It was a picture of a sheet metal part, not a sheet metal part.</p>
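<p>For contrast, here's the kind of math a real sheet metal definition carries and the generated solid lacked: the bend allowance that relates the bent part to its flat pattern, BA = theta (R + K t). The K-factor default below is an illustrative mid-range value; real values come from the material and the tooling.</p>

```python
import math

# Bend allowance: BA = theta * (R + K * t), with theta in radians, R the
# inside bend radius, t the material thickness, and K the K-factor (where
# the neutral axis sits, ~0.33-0.5 depending on material and tooling).
# The 0.44 default and part dimensions below are illustrative.

def bend_allowance(angle_deg: float, inside_radius: float,
                   thickness: float, k_factor: float = 0.44) -> float:
    return math.radians(angle_deg) * (inside_radius + k_factor * thickness)

# Flat-pattern length of a 90-degree L-bracket in 3 mm stock: two flange
# lengths measured to the bend tangent points, plus the bend allowance.
leg_a, leg_b = 50.0, 50.0  # mm
ba = bend_allowance(90, inside_radius=2.0, thickness=3.0)
print(round(leg_a + leg_b + ba, 2))  # 105.22 mm
```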
<p>This is a pattern I keep seeing. AI CAD tools generate geometry that resembles the manufacturing output without understanding the manufacturing process. The visual fidelity is decent. The engineering fidelity is missing.</p>
<h2>Where text-to-CAD actually fits in real workflows</h2>
<p>After all this complaining, let me be honest about where these tools do something useful. Because they do. It's just not the thing the demos promise.</p>
<p>Text-to-CAD is genuinely helpful for early-stage concept exploration. If you need to see a rough shape quickly, explore a few form options, or generate starting geometry that you'll rebuild properly in a real parametric tool, <a href="/posts/text-to-cad-guide">text-to-CAD</a> can save time. Not manufacturing time. Design thinking time.</p>
<p>I've used it for:</p>
<ul>
<li>Generating starting geometry for simple brackets and mounting plates, then importing into Fusion 360 to add real dimensions, fillets, and hole patterns</li>
<li>Exploring enclosure form factors quickly before committing to a parametric model</li>
<li>Creating visual stand-ins for early assembly mockups where exact dimensions don't matter yet</li>
<li>Generating geometry for non-critical fixtures and jigs that will be iterated anyway</li>
</ul>
<p>In those cases, the workflow is: prompt the AI, get a rough shape, import it, throw away most of the geometry, and rebuild the part properly. The AI saves maybe 15 to 30 minutes of initial sketching and extrusion on simple parts. It does not save the engineering.</p>
<p>The problem with <a href="/posts/text-to-cad-limitations">text-to-CAD limitations</a> is not that the tools are useless. It's that they're being marketed as more capable than they are, and people who don't know manufacturing are believing the marketing. A concept model that looks like a machined part is not a machined part, in the same way that a photo of food is not dinner.</p>
<h2>The DFM problem nobody is solving</h2>
<p>Design for manufacturability is not a checklist you apply after the geometry exists. It's a way of thinking about geometry while you create it. You choose wall thicknesses because of the molding process. You position holes relative to edges because of the tooling constraints. You add draft because the part needs to come out of the mold. You avoid sharp internal corners because the cutter has a radius. You think about how the part will be fixtured, inspected, and assembled while you're still sketching.</p>
<p>AI CAD tools don't think this way because they don't have a manufacturing model. They have a geometry model trained on existing CAD datasets, and those datasets don't typically include the manufacturing context that drove the design decisions. The AI can learn that brackets tend to have holes in certain positions, but it can't learn why those holes are positioned the way they are in relation to a specific manufacturing process.</p>
<p>This is a fundamental gap, not a software version gap. Until AI CAD tools are trained on manufacturing process data alongside geometry data, or until they can run DFM checks against their own output, the parts they generate will look right and be wrong. Not always catastrophically wrong. Sometimes just expensively wrong.</p>
<p>A tooling engineer I've worked with for years put it simply: "The geometry is the easy part. The hard part is knowing what you can't see in the model." He was talking about constraints, tolerances, process limits, and material behavior. He was also, I suspect, talking about experience. The kind of knowledge you build by having parts come back wrong and learning why.</p>
<h2>What this means for CAD careers</h2>
<p>There's a question floating around engineering forums and LinkedIn posts that amounts to: <a href="/posts/will-ai-replace-cad-designers">will AI replace CAD designers</a>? My answer is no, but it will change what the job looks like, and the reason goes directly back to manufacturing.</p>
<p>The parts of CAD work that AI can already do are the parts that require the least engineering judgment: generating a basic shape from a description, creating starting geometry, roughing out a concept. These tasks are real, but they're not where most of the value or difficulty in CAD work lives.</p>
<p>The hard part of being a CAD designer is everything else. Knowing that a 1 mm wall will warp in ABS. Knowing that a 90-degree internal corner will crack under cyclic loading. Knowing that the beautiful swept surface you just created will require a five-axis mill and triple the machining cost. Knowing that the assembly looks great in the exploded view but can't actually be assembled in that order because the fastener access is blocked.</p>
<p>That judgment is what makes a CAD professional worth paying, and it's exactly the knowledge AI CAD tools don't have. The people who should be worried are the ones whose entire job is tracing shapes from reference images or recreating simple geometry from sketches. That work is going to get automated. The people who understand <a href="/posts/text-to-cad-for-manufacturing">manufacturing constraints</a>, tolerancing, assembly design, and real-world trade-offs are going to be more valuable, not less, because someone needs to clean up after the AI.</p>
<h2>Where this technology might go</h2>
<p>I'm skeptical but not cynical. The current state of AI CAD for manufacturing is poor, but the rate of improvement in AI generally is fast enough that writing it off permanently would be foolish.</p>
<p>The most plausible near-term improvements I see:</p>
<p>DFM validation layers that check AI output against manufacturing rules before the user ever sees it. This doesn't require the AI to understand manufacturing. It just requires a rule engine bolted onto the output. Several CAD companies are already working on this, and it's probably the fastest path to making AI-generated geometry more useful.</p>
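<p>As a deliberately simplified picture of what such a layer looks like, here is a toy rule engine in Python. The rule names, the thresholds, and the idea of checking a flat metrics dict rather than real B-Rep geometry are all illustrative assumptions, not any vendor's implementation.</p>

```python
# Toy sketch of a rule-engine DFM layer (hypothetical rules and thresholds;
# a real checker would interrogate B-Rep geometry, not a metrics dict).

def check_min_wall(part):
    # Thin walls warp or short-shot; the 1.0 mm floor is illustrative.
    if part["min_wall_mm"] < 1.0:
        return f"wall {part['min_wall_mm']} mm is below the 1.0 mm minimum"

def check_internal_corners(part):
    # Sharp internal corners can't be cut by a round end mill.
    if part["min_internal_radius_mm"] == 0:
        return "zero-radius internal corner: add a fillet for machining"

RULES = [check_min_wall, check_internal_corners]

def validate(part):
    """Run every rule and collect violations instead of stopping at the first."""
    return [msg for rule in RULES if (msg := rule(part))]

# An AI-generated part that looks fine on screen and fails both checks.
print(validate({"min_wall_mm": 0.6, "min_internal_radius_mm": 0}))
```

<p>The point of the sketch is the architecture: the rules sit downstream of generation and know nothing about the AI, which is exactly why this approach can ship sooner than models that actually understand manufacturing.</p>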
<p>Process-specific training data. If you train an AI on parts that were actually manufactured (with their manufacturing context, tolerances, and process parameters), the output should get more realistic over time. The bottleneck is data. Most manufacturing data is proprietary, messy, and locked inside company PLM systems.</p>
<p>Hybrid workflows where the AI handles initial geometry and a human engineer handles everything else. This is basically what I described above, and it works today if you set expectations correctly. The AI is a drafting assistant, not an engineer.</p>
<p>What I don't see happening soon is AI that can replace the full design-for-manufacturing loop. That requires understanding physics, process constraints, cost trade-offs, supplier capabilities, assembly sequences, and inspection methods. It requires the kind of knowledge that comes from having a machinist hand you a ruined part and explain, with visible disappointment, what went wrong.</p>
<h2>The honest summary</h2>
<p>AI CAD tools generate geometry. They do not generate engineered parts. The difference matters every time you try to make something physical from the output. For concept work, visualization, and early-stage exploration, these tools offer real time savings. For anything that will be machined, molded, printed at production quality, or assembled into a product that needs to work, the output requires significant manual rework by someone who understands manufacturing.</p>
<p>The gap between the demo and the shop floor is not a bug that the next software update will fix. It's a reflection of how much engineering knowledge goes into a real part beyond its shape. Shape is necessary but not sufficient. Until AI CAD tools understand that distinction, they'll keep producing parts that look right on screen and arrive wrong in the mail.</p>
<p>My advice: use these tools where they help, which is early and rough. Don't trust them where it matters, which is everywhere else. And if your machinist calls you two hours after receiving your AI-generated STEP file, answer the phone. You're going to learn something.</p>
]]></content:encoded>
    </item>
    <item>
      <title>AI in CAD software: what&apos;s real and what&apos;s a slide deck</title>
      <link>https://blog.texocad.ai/posts/ai-in-cad-software</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/ai-in-cad-software</guid>
      <pubDate>Fri, 16 Jan 2026 00:00:00 GMT</pubDate>
      <description>Every vendor is shipping AI features now, or at least demoing them with confident lighting. Here&apos;s what Fusion 360, SolidWorks, Onshape, Siemens, and PTC are actually delivering in 2026, and where the line sits between a working feature and a well-rehearsed promise.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>ai-cad</category>
      <category>vendor-features</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> AI in CAD software in 2026 spans four categories: text-to-geometry generation, copilot assistants, generative design optimization, and AI-powered automation. SolidWorks 2026, Fusion 360, Onshape, Siemens NX, Creo, and Solid Edge all ship or preview AI features, but maturity varies widely. Most shipping features are assistants and automation aids, not geometry generators.</p>
<p>I watched an AI demo at Autodesk University last year where somebody typed a sentence into Fusion 360 and a 3D model appeared. It looked like a kitchen appliance. The geometry was clean, the surfaces were reasonable, and the crowd made a small appreciative noise, the kind people make when magic tricks work. I wrote down the feature name and went looking for it the next day in my actual copy of Fusion. It wasn't there. The feature was still in development. The demo was real, the timeline was not, and I'd spent twenty minutes poking around the interface like a person trying to find a restaurant that closed last year.</p>
<p>AI in CAD software is real, but the distance between what gets demoed on stage and what shows up in your installed copy is still measured in quarters, sometimes years. As of early 2026, every major CAD vendor has announced, previewed, or shipped something with AI in the name. Some of it works. Some of it is useful. Some of it is a press release with a roadmap attached. The trick is knowing which is which before you reorganize your workflow around a feature that doesn't exist outside a conference hall.</p>
<p>If you want the full picture of how text-to-CAD fits into this, the <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers the dedicated tools. This post is about what the big vendors are doing inside their own platforms, where the hype stops, and where the useful parts begin.</p>
<h2>Four types of AI showing up in CAD</h2>
<p>Before going vendor by vendor, it helps to name the categories, because "AI in CAD" has become a bucket that holds very different things.</p>
<p>The first is text-to-geometry generation: you type a description, the software builds a 3D model. This is the flashiest, the most demo-friendly, and the least mature. Autodesk's Neural CAD is the most visible example from a major vendor. The <a href="/posts/best-text-to-cad-tools">best text-to-CAD tools</a> post covers the dedicated startups doing this too.</p>
<p>The second is the copilot or assistant pattern. This is an AI chatbot built into the CAD interface that answers questions, troubleshoots errors, explains features, or executes commands from natural language. Onshape AI Advisor, Creo AI Assistant, Siemens Design Copilot, and Autodesk Assistant all fit here. The <a href="/posts/ai-cad-copilot">AI CAD copilots</a> piece goes deeper on this pattern.</p>
<p>The third is generative design, which is older and more established than the other categories. You define constraints, loads, materials, and manufacturing methods, and the software generates optimized shapes that a human would not have drawn. Autodesk and PTC both ship mature tools here. This is not new, but vendors now call it AI because the branding is better.</p>
<p>The fourth is AI-powered automation: features like automatic drawing generation, smart assembly snapping, error diagnosis, and design inspection. These are the least exciting to watch in a demo and often the most useful in practice. SolidWorks 2026 and Solid Edge 2026 are both shipping real tools in this category right now.</p>
<p>The confusion comes when vendors mix these together in a single announcement, which is most of them. A press release that mentions "AI-powered design" might mean generative topology optimization, or it might mean a chatbot that links to help articles. Reading carefully has become an engineering skill of its own.</p>
<h2>Autodesk Fusion 360: big ambitions, early innings</h2>
<p>Autodesk has the most ambitious AI story in CAD right now, and also the widest gap between what's been shown and what's available to use. Three things matter here.</p>
<p><a href="https://www.autodesk.com/solutions/autodesk-ai/neural-technology">Neural CAD</a> is Autodesk's text-to-geometry feature. You type a prompt like "create a contemporary air fryer" and the system generates native, editable 3D geometry inside Fusion. Not a mesh blob, not a render, but actual B-Rep geometry with a feature tree you can modify. That is genuinely impressive in concept, and the demos I've seen produce surprisingly decent starting shapes. The problem is that Neural CAD is still in development. It was announced at AU 2025 and remains exploratory, with release timing subject to change, which in vendor language means "don't plan your Tuesday around it."</p>
<p>Text to Command is the more practical sibling. Instead of generating geometry from scratch, it lets you describe operations in plain English and the software executes them. "Extrude this face by 1 inch." "Add a 0.5 mm chamfer to all edges." "Split this body with my construction plane." You can even save multi-step sequences as reusable prompts. This is part of Autodesk Assistant, the conversational AI layer built into the platform. It's less dramatic than generating a whole model, but honestly more useful if you already know what you're building and just want to stop clicking through menus. It is also still in development.</p>
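<p>The pattern is easy to picture as a thin translation layer between phrasing and structured operations. This toy parser is my own illustration, not Autodesk's implementation; the operation names and the mm normalization are assumptions for the sketch.</p>

```python
import re

# Toy illustration of the text-to-command pattern: map a narrow set of
# phrasings onto structured operations a geometry kernel could execute.

PATTERNS = [
    (re.compile(r"extrude .* by ([\d.]+) ?(mm|inch)", re.I), "extrude"),
    (re.compile(r"add a ([\d.]+) ?(mm|inch) chamfer", re.I), "chamfer"),
]

def parse_command(text):
    """Return a structured op dict, normalizing inches to millimeters."""
    for pattern, op in PATTERNS:
        m = pattern.search(text)
        if m:
            value, unit = float(m.group(1)), m.group(2).lower()
            if unit == "inch":
                value *= 25.4  # normalize to mm
            return {"op": op, "value_mm": value}
    return None  # unrecognized phrasing: ask the user to rephrase

print(parse_command("Extrude this face by 1 inch"))
# → {'op': 'extrude', 'value_mm': 25.4}
```

<p>A production system would use a language model rather than regexes, but the output side looks the same: commands the kernel already knows how to run, which is why this is safer ground than generating geometry from scratch.</p>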
<p>For a detailed breakdown, the <a href="/posts/fusion-360-ai-features">Fusion 360 AI features</a> post tracks what's available and what's still roadmap.</p>
<p>Autodesk also has mature generative design tools that have shipped for years. Generative Design in Fusion uses cloud compute to explore hundreds of design options under constraints. This is the part that actually works today, is battle-tested, and occasionally produces geometries that look like organic sculpture and perform better than the hand-designed version. The catch is it outputs mesh-like topology-optimized shapes that often need manual rebuilding as solid bodies, which is exactly the kind of detail demos leave out.</p>
<p>My read on Autodesk is that they're building toward something genuinely different, but most of the interesting stuff is not in your copy of Fusion yet. The generative design works. The assistant features are coming. Neural CAD is a research project with good marketing. If you're making purchasing decisions, use what ships, not what demos well.</p>
<h2>Dassault SolidWorks 2026: the most features, the fastest shipping</h2>
<p>SolidWorks 2026 is the most aggressive AI rollout from any major CAD vendor this year, and the unusual part is that most of it is actually shipping. The February 2026 release (FD01) included the first batch, with more features arriving through summer 2026. SolidWorks CEO Manish Kumar stated publicly that everything shown was working, not speculative, which is either refreshing honesty or a very confident bet.</p>
<p>The headline features are AURA and LEO, virtual AI companions embedded in the SolidWorks interface. AURA handles general assistance, guidance, and Q&#x26;A. LEO handles more design-specific tasks. In practice, they function as an integrated copilot layer similar to what other vendors are building, but Dassault went further and built specific tools around them.</p>
<p>AI-powered Drawing Generation lets you create drawings from text prompts with customizable templates and standards. If you have ever spent a Friday afternoon producing six standard views of a bracket because someone needs the documentation before Monday, this is the feature that might actually save you real time. It is in beta as of FD01.</p>
<p>AI-powered What's Wrong Analysis diagnoses model errors with AI-guided root-cause analysis. Instead of staring at a red feature and guessing which sketch reference broke, the system traces the failure and suggests fixes. I have lost more hours to detective work on broken feature trees than I'd like to admit, so this one appeals to me personally even if the execution is still being refined.</p>
<p>The Assembly Structure Generator creates assembly structures from text prompts. Design Inspection, Material Manager, and Project Planner are all in beta or arriving by summer 2026.</p>
<p>The <a href="/posts/solidworks-ai-features-2026">SolidWorks 2026 AI features</a> post has the full breakdown with dates.</p>
<p>What makes SolidWorks 2026 interesting is the breadth. Most vendors have one or two AI features they push hard. Dassault is shipping ten, and <a href="https://www.engineering.com/10-ai-tools-coming-to-solidworks-in-2026/">Engineering.com counted them</a>. Whether all ten work well is a different question, but at least you can install them and find out, which puts Dassault ahead of vendors still showing slides.</p>
<h2>PTC Onshape AI Advisor: the quiet practical one</h2>
<p>Onshape's approach to AI is less flashy but arguably smarter for the current moment. The AI Advisor is a real-time guidance tool embedded in the Onshape design environment, powered by Amazon Bedrock and built on official Onshape documentation. It shipped in October 2025 and has been updated since, most recently in February 2026 when it was integrated into the help system.</p>
<p>What it does is straightforward: you ask it questions about Onshape features, modeling techniques, or troubleshooting, and it gives you answers drawn from verified sources. Step-by-step recommendations, best practices, error resolution. It doesn't generate geometry. It doesn't execute commands. It teaches and assists.</p>
<p>The <a href="/posts/onshape-ai-advisor">Onshape AI Advisor</a> post covers the specifics.</p>
<p>PTC's roadmap for Onshape AI is more ambitious. They've talked about agent workflows, FeatureScript code generation, model metadata interaction, and AI-assisted rendering. But for now, what ships is a guidance tool, and honestly that might be the right thing to ship first. An AI that helps you use the software better is less exciting than one that builds geometry, but it's also less likely to generate a model that looks right and is secretly garbage. PTC has said publicly that effective AI in CAD relies on existing intelligent automation features, not replacing proven geometry engines, which sounds like a philosophy and a dig at competitors at the same time.</p>
<p>The thing I respect about Onshape's approach is that it is cloud-native, which means AI features can ship faster, reach every user at once, and improve without waiting for an annual release cycle. When PTC does ship geometry generation or agent workflows, the infrastructure is already there. Whether they ship them fast enough to matter is the question.</p>
<h2>PTC Creo AI Assistant: error troubleshooting first</h2>
<p>Creo's AI story is separate from Onshape's, because PTC runs them as different product lines with different architectures.</p>
<p>The Creo AI Assistant arrived in beta with Creo 13, available in Creo+ (the SaaS version) since September 2025, with on-premises Creo 13 following around May 2026. In its current form, it focuses on error troubleshooting. When something fails, the assistant pulls relevant information from PTC's support knowledge base, contextualizes the error, and presents solutions in a side panel without requiring you to leave the application or open a browser to dig through support articles.</p>
<p>That's a narrow scope, and PTC is upfront about it being a first iteration. But for Creo users who have spent time searching help forums at 3 p.m. trying to understand why a sweep failed with a cryptic error code, having the answer surfaced inside the tool is a genuine quality-of-life improvement. PTC plans to expand the assistant's capabilities in future versions.</p>
<p>Creo also has mature generative design tools. Creo GTO (Generative Topology Optimization) and GDX are built into Creo 7+ with a separate license. These let engineers define design requirements, materials, and manufacturing constraints, then generate multiple manufacture-ready alternatives. Companies like Cummins have used them for weight reduction and sustainability work. This is proven technology that predates the current AI hype cycle.</p>
<h2>Siemens: NX AI Chat and Solid Edge Design Copilot</h2>
<p>Siemens is running AI features across two product lines, and the Solid Edge side is further along in shipping real tools.</p>
<p>Solid Edge 2026 launched with the Design Copilot, an AI chatbot built into the interface that pairs a generative model with retrieval-augmented generation (RAG). It understands natural language, responds in the user's language, and generates follow-up questions based on conversation context. It's available across all Solid Edge tiers, which is notable because some vendors gate AI features behind premium plans.</p>
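<p>RAG is worth a quick concrete picture, since it's the machinery behind most of these copilots. This toy version uses word overlap where a real system would use embedding search, and the snippets are stand-ins; it is a sketch of the pattern, not Siemens' implementation.</p>

```python
# Toy sketch of the RAG pattern behind documentation copilots
# (word-overlap scoring stands in for a real embedding search).

DOCS = [
    "Magnetic Snap Assembly recognizes constraints and snaps parts into place.",
    "Automatic Drawing Creation generates orthogonal and isometric views.",
    "Sheet metal flat patterns unfold a bent part for cutting.",
]

def retrieve(question, k=1):
    """Rank docs by words shared with the question; return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_prompt(question):
    # Retrieved passages ground the model's answer in real documentation,
    # which is what keeps a copilot from inventing features.
    context = "\n".join(retrieve(question))
    return f"Answer using only this documentation:\n{context}\n\nQ: {question}"

print(build_prompt("How do I snap parts into an assembly?"))
```

<p>The grounding step is the whole trick: the generative model only ever sees verified documentation, which is why these assistants hallucinate less than a bare chatbot would.</p>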
<p>But the more practical Solid Edge features are the automation tools. Magnetic Snap Assembly recognizes constraints and snaps parts into correct positions automatically, making assembly reportedly up to nine times faster for supported geometry types. Automatic Drawing Creation uses AI to generate 70 to 80 percent of a 2D drawing automatically, including orthogonal, broken, and isometric views with dimensioning. If you've ever hand-placed thirty dimensions on a drawing that should have been trivial, that last number is the one that matters.</p>
<p>On the NX side, Siemens has NX AI Chat under active development. Details are thinner here. NX is the enterprise tool, and enterprise features tend to ship quietly to large accounts before they get public documentation. What's been shown is a conversational assistant similar in concept to what other vendors offer.</p>
<p>Siemens' overall AI strategy leans on its broader Xcelerator platform and industrial AI initiatives, which means CAD-specific features sometimes get bundled into larger announcements about digital twins and manufacturing intelligence. The useful CAD features are there if you dig past the enterprise language.</p>
<h2>What's actually shipping vs. what's a roadmap</h2>
<p>Here's the honest tally as of April 2026.</p>
<p>SolidWorks 2026 has the most AI features you can actually install and use today. Drawing generation, error analysis, assembly structuring, and the AURA/LEO companions are in beta or shipping. More arrives by July.</p>
<p>Solid Edge 2026 is shipping the Design Copilot, Magnetic Snap Assembly, and automatic drawing creation. These are available now.</p>
<p>Onshape AI Advisor is live and has been since October 2025. It's a guidance tool, not a geometry generator, but it works and it's been updated steadily.</p>
<p>Creo AI Assistant is in beta, available in Creo+ and coming to on-premises Creo 13 by mid-2026. Creo's generative design tools are mature and shipping.</p>
<p>Fusion 360 has generative design shipping. Neural CAD and Text to Command are in development, announced but not available to regular users. Autodesk Assistant is the framework that will deliver them, but the timeline is still "exploratory."</p>
<p>NX AI Chat is under development with limited public information.</p>
<p>If you need AI features working in your CAD tool today, SolidWorks 2026 and Solid Edge 2026 are the most tangible options. If you're willing to wait and bet on potential, Fusion 360's roadmap is the most ambitious. If you want a stable, shipping assistant right now, Onshape AI Advisor is the safest choice, understanding that it doesn't generate geometry.</p>
<h2>The pattern worth watching</h2>
<p>Step back and look at all six vendors together, and a pattern appears.</p>
<p>No major vendor is actually shipping text-to-geometry generation yet. Autodesk is working on it. The <a href="/posts/best-text-to-cad-tools">dedicated text-to-CAD tools</a> from startups like Zoo.dev are further along than the big vendors on this specific problem. That should tell you something about how hard the problem is.</p>
<p>What's actually shipping, from nearly everyone, is the assistant/copilot pattern. Chatbots trained on documentation, error diagnosis tools, natural-language command execution, and contextual help. This is less exciting than typing "design me a gearbox" and watching it appear, but it's also the kind of thing that might save you real time on a Wednesday afternoon when your model breaks and the help forum is useless.</p>
<p>The automation features are the sleeper category. Automatic drawing generation from SolidWorks and Solid Edge, smart assembly snapping, AI error diagnosis. These don't make good keynote demos. They make good Tuesdays. If I had to bet on which AI features actually change daily CAD work first, it's these, not the generative stuff. The person who doesn't have to manually place forty dimensions on a standard three-view drawing is going to feel the difference long before anyone's typing prose into a prompt box and getting a usable manifold.</p>
<p>Generative design remains the most mature AI capability in CAD, but it has been around for years and it solves a specific problem: structural optimization under constraints. It's real, it works, and it produces strange-looking parts that perform well. It is also not what most people mean when they say "AI in CAD" in 2026. The conversation has moved to natural language interaction and automated workflows, even though generative design is still the part with the most production mileage.</p>
<h2>What this means if you actually use CAD for work</h2>
<p>If you're a working designer or engineer wondering how much of this matters to your actual job right now, the honest answer is: some of it, but probably less than the announcements suggest.</p>
<p>The assistant and copilot tools are worth trying if your vendor has shipped them. Onshape AI Advisor is good for learning the software faster. SolidWorks' error analysis could save you real debugging time. Solid Edge's drawing automation could cut boring documentation work significantly. None of these will redesign your part for you, but they might reclaim an hour here and there, which adds up.</p>
<p>Text-to-geometry generation from a major vendor is not ready for production use. Autodesk's Neural CAD is promising but unavailable. If you want to experiment with text-to-CAD today, the standalone tools are where the action is. The <a href="/posts/text-to-cad-guide">text-to-CAD guide</a> covers what works and what doesn't.</p>
<p>Generative design is worth learning if you haven't already, especially for lightweighting, structural optimization, or exploring design spaces you wouldn't reach manually. Fusion 360 and Creo both have mature implementations.</p>
<p>For everything else, watch the shipping dates, not the announcements. A feature that arrives in your copy of the software is worth ten features that arrive in a keynote video. I have been burned enough times by CAD roadmaps to know that "coming soon" sometimes means "coming never," and "exploratory" sometimes means "we showed it once and it worked and we're not sure about the second time."</p>
<p>My advice is simple. Use what ships. Test what's in beta. Ignore what's only in slides. And keep your actual workflow running on features you can rely on, because the AI stuff is coming, genuinely, but it's coming at the speed of real engineering, not at the speed of the demo reel. The gap between those two speeds is where disappointment lives, and I have spent enough late afternoons in that gap to suggest you pack a sandwich.</p>
]]></content:encoded>
    </item>
    <item>
      <title>What Is CAD, and Who Uses It?</title>
      <link>https://blog.texocad.ai/posts/what-is-cad-and-who-uses-it</link>
      <guid isPermaLink="true">https://blog.texocad.ai/posts/what-is-cad-and-who-uses-it</guid>
      <pubDate>Thu, 15 Jan 2026 00:00:00 GMT</pubDate>
      <description>CAD exists because redrawing the same idea by hand every time reality shows up with a correction is a stupid way to build anything. Here&apos;s what it actually is, who lives in it, and why file formats still ruin weeks.</description>
      <dc:creator>TexoCAD</dc:creator>
      <category>beginners</category>
      <category>workflows</category>
      <category>history</category>
      <content:encoded><![CDATA[<p><strong>Quick answer:</strong> CAD, or computer-aided design, is software used to create, revise, document, and share precise 2D drawings and 3D models for products, buildings, and infrastructure. Architects, drafters, engineers, industrial designers, and manufacturing teams use it because it turns design intent into geometry and documentation other people can actually build from.</p>
<p>Years ago I changed a hole callout late in the day, the kind of change that looks harmless until it isn't. One view updated, one didn't, the DXF still had the old geometry, and a machinist asked me which version he was supposed to trust. I remember staring at the screen, cold coffee on the desk, thinking this was a stupid way to lose half an hour over one damned hole. CAD exists because redrawing the same idea by hand every time you learn something new is an even stupider way to build anything.</p>
<p>CAD stands for computer-aided design. If you want the clean official wording, <a href="https://www.autodesk.com/solutions/cad-software">Autodesk</a> describes it as digital design and drafting that replaces manual hand drawing, and <a href="https://www.ptc.com/en/technologies/cad">PTC</a> describes it as a way to create 2D drawings and 3D models of real-world products before they are manufactured. Both definitions are accurate enough. In normal human terms, CAD is the software you use to draw, model, dimension, revise, document, and sometimes simulate the thing you actually need built.</p>
<p>That last bit matters because CAD is not just "drawing on a computer." People say that when they have never had to fix a broken assembly reference at 4:40 p.m. It can be a floor plan. It can be a welded frame assembly. It can be a plastic enclosure with snap fits you will regret the second you get cocky about wall thickness. It can be a sheet metal flat pattern, a piping layout, a road corridor, a fixture plate, or a piece of tooling that exists purely to hold another part still while something louder happens to it.</p>
<h2>What CAD Actually Does</h2>
<p>At the simplest level, CAD gives you precise geometry. Lines are the right length. Holes are concentric when you tell them to be. A radius is actually the radius you typed, not whatever your pencil decided after coffee number three. That sounds basic, but basic is good when somebody else will be cutting metal from your drawing.</p>
<p>Past that, good CAD helps you manage relationships. In 2D work, that might mean dimensions, annotations, layers, blocks, title blocks, and drawing sheets that don't need to be rebuilt from scratch every time a change comes in. I started in old-school 2D drafting, and I still think people underrate how much pain a clean block library and a sane title block can save you. In 3D work, it usually means parts, assemblies, constraints, mates, feature trees, and drawings generated from the model. You build the geometry once, then reuse it in views, sections, exploded assemblies, bills of materials, toolpaths, or downstream documentation.</p>
<p>This is why people get attached to parametric modeling even when it behaves like a moody coworker. If the model is set up properly, you can change a sketch dimension, rebuild the part, and watch the drawing update instead of fixing five separate files by hand. That is the sales pitch, anyway. The real experience is more mixed. Sometimes the update works beautifully. Sometimes one missing edge reference turns the feature tree red and the whole thing becomes a hostage situation. If you've used SolidWorks, Fusion 360, Inventor, or anything else with history, you already know the look.</p>
<p>Still, the value is real. CAD makes revision work less painful, lets teams check fit and interference before cutting metal or ordering material, and gives manufacturing or construction something more useful than a napkin sketch and false confidence.</p>
<h2>2D CAD, 3D CAD, and the Confusion Between Them</h2>
<p>When people first hear "CAD," they often picture a 3D model spinning on a screen with dramatic lighting like it is auditioning for a launch video. That's part of it, but it isn't the whole thing. A lot of CAD work is still plain 2D drafting. Floor plans, details, schematics, panel layouts, site plans, wiring diagrams, fabrication drawings, and shop drawings are all still the job.</p>
<p>3D CAD adds another layer. Instead of just showing views of an object, you build the object itself as digital geometry. That lets you check size, volume, fit, clearance, and assembly order before anything becomes expensive. It also means you can make drawings from the model, export data for CAM, rendering, simulation, or 3D printing, and hand a more complete definition of the part to the next poor soul in line.</p>
<p>People love to argue about whether a specific tool is "real CAD," whether BIM is separate, whether mesh modeling counts, whether direct modeling is better than history-based modeling, and other arguments that are fun right up until a supplier sends back the wrong file format and your afternoon catches fire. For a beginner, the useful distinction is simpler. If the software helps define a product, part, building element, layout, or system accurately enough that someone else can manufacture, build, inspect, or approve it, you're in CAD territory.</p>
<h2>Why CAD Matters in Practice</h2>
<p>Paper drawings were not useless. They were just stubborn. Every design change meant more manual work. Reuse was clumsy. Coordination was slow. If one view got updated and another didn't, good luck to whoever had to machine the part or pour the concrete.</p>
<p>CAD fixes a lot of that. You can copy proven geometry instead of redrawing it. You can keep standard parts in libraries. You can create families of parts with small controlled changes instead of making twenty nearly identical files by hand. You can check whether two components crash into each other before the prototype tells you in a more expensive language. A tool that saves five minutes in a demo can save two hours in a real job too, but only if the workflow still holds together after revision three and a supplier import.</p>
<p>It also improves documentation, which is not glamorous but is where money quietly disappears. A nice-looking model is fine. A model tied to dimensions, tolerances, section views, part numbers, and manufacturing notes is better. In manufacturing, that data may keep moving downstream into CAM, inspection, and other systems. <a href="https://www.nist.gov/programs-projects/product-definitions-smart-manufacturing">NIST's work on product definitions for smart manufacturing</a> is dry reading, but it makes the point clearly: model-based standards such as STEP and QIF matter because design data has to survive the trip from engineering to the people and machines that do the actual work. That trip is where a lot of "looks good to me" models get exposed.</p>
<p>That survival part is not automatic. File exchange is where a lot of clean plans go to die. Native files are great until the other company uses a different system. Neutral formats help, but they don't carry everything equally well. I still don't trust imported geometry that arrives looking too clean. A pretty model that loses feature history, tolerances, or associativity on export is still a problem, just a shiny one.</p>
<h2>Who Uses CAD?</h2>
<p>More people than non-CAD folks usually realize. It isn't reserved for mechanical engineers in expensive chairs pretending the render is the hard part.</p>
<p><a href="https://www.autodesk.com/solutions/cad-software">Autodesk's overview of CAD users</a> runs from architects and civil engineers to product designers, production engineers, construction teams, and automotive designers, which gives you a decent sense of how wide the category really is. The common thread is not the software brand. It is the need to define something precisely enough that another person can trust it.</p>
<p>Architects use CAD every day. The <a href="https://www.bls.gov/ooh/architecture-and-engineering/architects.htm">U.S. Bureau of Labor Statistics</a> says architects use computer-aided design and drafting along with building information modeling for designs and construction drawings. That tracks with reality. Even when the office is deep into BIM, there is still a ton of drawing production, coordination, revision management, and detail work that lives in a CAD-shaped world.</p>
<p>Civil engineers and infrastructure teams use CAD for roads, grading, utilities, drainage, alignments, survey-based layouts, and construction documentation. This is not the glamorous side of digital design, but it is the side that decides whether water goes where it should and whether a contractor can read what you meant. That counts for a lot.</p>
<p>Drafters are one of the clearest examples because CAD is basically in the job description. The <a href="https://www.bls.gov/ooh/architecture-and-engineering/drafters.htm">BLS page for drafters</a> describes them as the people who turn engineers' and architects' designs into technical drawings. If you've ever received a drawing package that was actually readable, somebody did that work properly. Drafters often sit right in the middle between design intent and buildable information.</p>
<p>Mechanical engineers and product designers rely on CAD to develop parts, assemblies, housings, brackets, fixtures, sheet metal parts, molded components, and all the tedious little supporting pieces that make a product real. A nice render might win the meeting, but the real value is in fit checks, tolerances, interference detection, manufacturing handoff, and revision control. I have watched people admire a glossy render and then go very quiet when the left and right halves of an enclosure would not close because one boss was 0.7 mm too proud. The render was innocent. The model was not.</p>
<p>Industrial designers use CAD too, though often with a different emphasis. According to the <a href="https://www.bls.gov/ooh/arts-and-design/industrial-designers.htm">BLS page for industrial designers</a>, 3D CAD software is increasingly used to turn two-dimensional ideas into models. That makes sense. Industrial design sits in the awkward but important space between something looking right and something being manufacturable. Surface quality, ergonomics, proportion, and visual intent matter, but eventually the object still has to survive material thickness, draft angle, fastening, tooling, and assembly.</p>
<p>Manufacturing and production teams use CAD data downstream even when they are not the people creating the original model. Toolmakers, CNC programmers, fixture designers, inspectors, and manufacturing engineers all depend on accurate geometry and documentation. Once a model feeds CAM or inspection planning, CAD stops being just a design tool and becomes part of the production system. A machinist once explained this to me with a scrap part and a look of deep disappointment. The model said one thing, the drawing implied another, and the part that came off the machine was the only honest participant in the conversation.</p>
<p>Construction professionals use CAD-derived information as well, especially when drawings, coordination models, and revisions pass between architects, engineers, trades, and site teams. Again, the boring part is the important part. People are trying to build from this information in weather, under schedule pressure, with subcontractors who have seen enough vague drawings for one lifetime.</p>
<p>From hands-on experience, smaller shops use it too. Cabinet makers, sign shops, fabrication shops, custom motorcycle builders, furniture designers, and one-person product businesses all end up in CAD if they want repeatability. You can absolutely sketch a bracket on cardboard to get through the afternoon. If you want to make ten more next month and have them still fit, CAD starts looking less optional.</p>
<h2>What People Get Wrong About CAD</h2>
<p>The first mistake is thinking CAD is the same thing as a picture. It isn't. A drawing can be a picture. A CAD model is supposed to define geometry precisely enough that decisions can be made from it. The more serious the job, the less room there is for vibes.</p>
<p>The second mistake is assuming 3D means the work is smarter by default. It doesn't. You can build a gorgeous model that is impossible to machine, impossible to mold, miserable to inspect, or one tiny edit away from exploding into errors. The software can help you think clearly. It can also help you create very accurate nonsense. "Possible" and "usable in production" are different things, and CAD is full of people learning that the annoying way.</p>
<p>The third mistake is underestimating file management. CAD has a reputation for precision, and that part is earned. It also has a long history of broken references, missing fonts, wrong versions, bad exports, outdated drawings, and linked files that vanish when somebody drags a folder to the desktop like they're cleaning a kitchen drawer. If you've worked with external references, assembly dependencies, or supplier data from another system, you already know the emotional texture here. File formats matter more than most people want to admit, right up until they ruin the week.</p>
<p>Another common misunderstanding is that CAD removes the need for engineering judgment. It doesn't. The computer will let you model a part with walls too thin to mold, holes too close to the edge, impossible tool access, or a weldment that only works in a universe where heat distortion took the day off. CAD can represent bad decisions very efficiently.</p>
<h2>CAD Before It Was CAD</h2>
<p>Before people were rotating shaded models on a second monitor, drafting meant boards, T-squares, triangles, French curves, erasing shields, and a level of patience I do not possess. Revisions were physical. If you moved one hole, changed one wall thickness, or shifted one dimension chain, you were not "updating the model." You were redrawing views, cleaning up notes, checking title blocks, and hoping the copy going to the shop matched the copy on your desk. There is a reason older drafters can still sound a little haunted when they talk about vellum.</p>
<p>That manual world mattered because it set the problem CAD was trying to solve. Engineers and drafters did not need a prettier pencil. They needed a way to make changes without recreating the whole drawing set every time reality showed up with one more correction. They needed reuse, consistency, and some protection from the kind of clerical errors that sneak in when your fifth revision still smells faintly of ammonia print fluid and panic.</p>
<h2>MIT, Light Pens, and the First Big Break</h2>
<p>By the late 1950s and early 1960s, people at MIT were already treating computer-aided design as a serious engineering problem, not a science-fiction party trick. The most famous milestone from that era is Ivan Sutherland's 1963 <code>Sketchpad</code>, which <a href="https://lemelson.mit.edu/resources/ivan-sutherland">MIT's Lemelson program</a> describes as a system that let users create graphic images directly on a display using a light pen. That matters because <code>Sketchpad</code> was not just digital drawing. It introduced the idea that geometry on screen could be manipulated directly, constrained, reused, and made to behave according to rules.</p>
<p>That sounds normal now because modern CAD stole the good ideas and buried the weirdness. At the time, it was a big shift. Instead of feeding a machine coordinates and waiting politely, you could point at geometry, drag it around, define relationships, and let the computer keep track of some of the logic. If you have ever snapped a line horizontal, constrained a sketch, or reused a block or symbol, you are living downstream of that moment whether you know it or not.</p>
<h2>When CAD Was Expensive Enough to Need a Very Good Reason</h2>
<p>The next phase was not democratic. Early commercial CAD in the 1960s and 1970s lived on mainframes and high-end systems that cost real money, needed specialist staff, and made sense mostly for aerospace, automotive, defense, and other industries where design errors were brutally expensive. This was not software you casually installed because you had a free afternoon and a stubborn bracket to design.</p>
<p>A lot of the progress in that era came from companies that had strong reasons to care about geometry, especially large assemblies and complex surfaces. Car bodies, aircraft structures, tooling, and production drawings all benefit when you can define shape more accurately and revise it with less chaos. The catch was that early CAD often improved one pain while introducing three new ones: hardware cost, specialist training, and systems that felt about as welcoming as a submarine hatch. Powerful, yes. Friendly, not especially.</p>
<h2>AutoCAD Put CAD on More Desks</h2>
<p>The big cultural shift came when CAD moved onto personal computers. Autodesk marked <a href="https://forums.autodesk.com/t5/community-blog/autocad-turns-40-how-time-flies/ba-p/10901693">AutoCAD's 40th birthday</a> in 2022, which puts its beginning in 1982. <code>AutoCAD</code> was not the first CAD system, but it was one of the major reasons CAD stopped being something only giant companies could afford to care about. Suddenly a lot more offices could draft digitally without buying a room full of hardware that looked like it belonged in a missile program.</p>
<p>This is where many people first met CAD in a practical sense: command lines, layers, blocks, plotters, floppy disks, and drawings that still carried a lot of the logic of manual drafting. It was still mostly 2D for a lot of users, and that should not be treated like a lesser phase. Good 2D CAD changed documentation work in a huge way. Reuse got easier. Editing got faster. Standards got more repeatable. Also, plenty of us learned that deleting the wrong block definition can ruin a morning just as efficiently on a computer as on paper.</p>
<h2>Parametric Modeling Changed the Job</h2>
<p>Another major leap came in the late 1980s. <a href="https://www.ptc.com/en/blogs/cad/a-quick-history-of-ptc-and-ptc-creo">PTC's own history of Creo</a> points to the 1988 launch of <code>Pro/ENGINEER</code> as the first commercially successful parametric, associative, feature-based solid modeling system. That sentence is dense and slightly ugly, but the idea underneath it is important. Instead of drawing views and manually editing each one, you could build a part as features with dimensions and relationships that described design intent.</p>
<p>That changed the job. A hole was not just circles on paper anymore. It was a feature tied to references, dimensions, and downstream geometry. Change the parameter and the rest of the model could follow. Drawings could update from the part. Assemblies could react. Families of parts became more manageable. This was the point where CAD began to act less like electronic drafting and more like a model of the object itself.</p>
<p>It also introduced a new category of suffering, because parametric history is useful in the same way an old Honda is useful. It will absolutely get you where you need to go, right up until one tiny thing breaks and suddenly you are on the floor with a flashlight asking where your afternoon went. Still, once engineers had tasted associative 3D modeling, there was no real going back.</p>
<h2>SOLIDWORKS and the Spread of 3D CAD</h2>
<p>If <code>Pro/ENGINEER</code> proved the parametric idea, <code>SOLIDWORKS</code> helped spread 3D CAD much further. The company's 30-year retrospective says the <a href="https://blogs.solidworks.com/products/solidworks/30-years-of-solidworks-a-legacy-of-innovation">1995 release of SOLIDWORKS</a> was the first professional-grade 3D CAD tool built natively for Windows. That mattered for the same reason <code>AutoCAD</code> mattered earlier. It lowered the barrier. You did not need a fancy UNIX workstation and a tolerance for pain just to build solid models professionally.</p>
<p>This is one of the reasons the 1990s feel so important in CAD history. 3D modeling moved from being elite and specialized toward being normal engineering practice. Assemblies, drawings, configurations, feature trees, and mainstream PC hardware became part of ordinary product-development work. The software was never simple, despite what anniversary marketing likes to imply, but it became more reachable. More engineers could learn it. More small and mid-sized companies could justify it. More design work became natively 3D instead of 2D drawings trying to impersonate 3D thinking.</p>
<h2>Cloud CAD Changed the Argument, Not the Need</h2>
<p>By the 2010s, the next big fight was no longer "should this be 2D or 3D?" It was "why are we still passing files around like cursed attachments?" That is the context for cloud-native CAD. Onshape founder Jon Hirschtick wrote that he started the company from scratch because design teams had become distributed and because traditional CAD was not built for a cloud, web, and mobile world. You can read that argument directly in <a href="https://www.onshape.com/en/blog/why-we-started-from-scratch-in-the-cad-business">Onshape's own explanation</a>, and whether or not you like the business model, he was not wrong about the problem.</p>
<p>Cloud CAD did not magically fix CAD. Nothing does. But it changed some important assumptions. Files stopped being the center of the universe. Versioning and collaboration could live inside the platform instead of being bolted on through shared drives and somebody's overworked PDM setup. Browser access, easier updates, and real-time collaboration became part of the pitch. Later, PTC's <a href="https://www.ptc.com/en/blogs/cad/a-quick-history-of-ptc-and-ptc-creo">2019 acquisition of Onshape</a> made it even clearer that cloud delivery was not a side experiment anymore.</p>
<p>Of course, new convenience came with new arguments. Subscription licensing became a sore point. People worried, not unreasonably, about lock-in, uptime, data control, and what happens when the internet decides today is a character-building exercise. Some of the old pain disappeared. Some of it simply changed clothes.</p>
<h2>CAD as Working Infrastructure</h2>
<p>So that is the short version of a long story. CAD went from manual drafting to interactive graphics, from rare institutional systems to PC drafting, from feature-based 3D modeling to cloud collaboration. Different industries moved at different speeds, and none of these eras replaced the previous one cleanly. Even now, the real world is a messy overlap of 2D drawings, 3D models, neutral exports, PDFs, markups, revision tables, and one person on the team who still trusts paper more than the server.</p>
<p>CAD is the working language between an idea and a thing. Sometimes that thing is a machined part. Sometimes it's a building detail, a site layout, a mold tool, a fixture, a bracket, or a set of drawings that tell somebody where to drill, cut, weld, cast, print, or inspect. The software category is broad because the jobs are broad.</p>
<p>That is also why so many different people use it. Architects need it. Drafters need it. Engineers need it. Industrial designers need it. Manufacturing teams need the data coming out of it. Small shops end up needing it once repeatability starts mattering. Even when the tools differ, the underlying job is the same: turn design intent into something specific enough that another person can trust it.</p>
<p>My view is simple. CAD is not magic, and it is not automatically elegant just because the model spins nicely on screen. It is infrastructure. It is where design gets precise enough to argue with, fix, quote, machine, inspect, and eventually build. That is why it matters, and that is why so many people wind up living in it whether they planned to or not. You can call it drafting, modeling, BIM, product design, detailing, or digital product definition if you want to sound expensive. Most days it still comes down to this: making geometry honest enough that somebody else can do their job without swearing at you.</p>
]]></content:encoded>
    </item>
  </channel>
</rss>