Fusion 360 Neural CAD: what Autodesk is actually building
Neural CAD is Autodesk's attempt at text-to-geometry inside Fusion 360. It was announced at AU 2025. As of early 2026, you still can't use it.
Quick answer
Neural CAD is an Autodesk research project for generating editable 3D geometry from text prompts inside Fusion 360. Announced at Autodesk University 2025, it is not yet publicly available. It aims to produce parametric, editable output rather than mesh, but no shipping date has been confirmed.
I've been checking the Fusion 360 updates page roughly once a week since November, the way you check a tracking number for a package that still says "label created." Neural CAD was announced at Autodesk University 2025 in Nashville with the kind of energy that makes you think shipping is imminent. It's now April 2026, and the feature is still somewhere between "active research" and "coming to a product near you." The update page keeps showing me improvements to the sketch environment and new thread profiles. Useful stuff. Not what I'm refreshing the page for.
Neural CAD is Autodesk's name for a new type of AI model trained to generate real CAD geometry from text prompts. If it ships the way they've described it, it would be the first text-to-CAD capability built directly into a major professional CAD platform. That's a big deal. The problem is the "if."
What Autodesk said at AU 2025
At Autodesk University 2025, Mike Haley from Autodesk Research described Neural CAD as "completely reimagining the traditional software engines that create CAD geometry." The claim is that these are new AI foundation models trained specifically to reason about CAD objects, not general-purpose language models bolted onto existing geometry kernels.
The key technical claim: Neural CAD generates BREP (Boundary Representation) geometry from text prompts. That means real solid geometry with mathematical surfaces, edges, and vertices. Not mesh. Not triangulated approximations. The same kind of representation that Fusion's parametric engine works with natively. The demo showed someone typing something like "create a contemporary air fryer" and getting back an editable 3D model in the Fusion canvas.
Autodesk positioned this as fundamentally different from what existing text-to-CAD tools do. Most of those tools, Zoo.dev included, generate geometry externally and hand you a STEP file to import. Neural CAD would generate geometry inside Fusion itself, integrated with the timeline, the parametric history, and the rest of the design environment. That integration is what makes the promise compelling. It's also what makes it hard to ship.
Why this is technically hard
Generating editable BREP geometry from text is a different and harder problem than generating mesh from text. The text-to-CAD vs text-to-3D distinction matters here.
A mesh model is a bag of triangles. You can approximate any shape with enough triangles, and the AI doesn't need to understand much about engineering to produce one. The output looks like a thing. It's not a useful engineering artifact, but it looks right in a viewport.
BREP geometry requires the AI to produce mathematically precise surfaces that meet at exact edges, form valid solid bodies, and behave correctly when you try to add features to them. A BREP model isn't just a shape. It's a data structure that encodes topology: which faces are adjacent, which edges bound which faces, which surfaces are planar versus cylindrical versus freeform. If any of those relationships are wrong, the model breaks the moment you try to fillet an edge or shell the body.
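The topology bookkeeping described above can be sketched in a few lines. This is an illustrative toy, not Autodesk's actual kernel data structures: it shows why B-rep validity is a property of the face/edge/vertex graph rather than of how the shape looks in a viewport. A watertight solid requires every edge to be shared by exactly two faces; a bag of triangles carries no such obligation.

```python
from dataclasses import dataclass

# Toy B-rep topology (illustrative only, not a real CAD kernel):
# the model is a graph of faces, edges, and vertices, and validity
# is a property of that graph.

@dataclass(frozen=True)
class Vertex:
    x: float
    y: float
    z: float

@dataclass(frozen=True)
class Edge:
    start: Vertex
    end: Vertex

@dataclass
class Face:
    surface_type: str           # "planar", "cylindrical", "freeform", ...
    boundary: tuple[Edge, ...]  # edges bounding this face

def _canonical(edge: Edge) -> Edge:
    """Treat an edge and its reversal as the same edge."""
    a = (edge.start.x, edge.start.y, edge.start.z)
    b = (edge.end.x, edge.end.y, edge.end.z)
    return edge if a <= b else Edge(edge.end, edge.start)

def is_closed_solid(faces: list[Face]) -> bool:
    """True if every edge is used by exactly two faces -- the minimal
    watertightness condition a solid body must satisfy before features
    like fillets or shells can even be attempted."""
    use_count: dict[Edge, int] = {}
    for face in faces:
        for edge in face.boundary:
            key = _canonical(edge)
            use_count[key] = use_count.get(key, 0) + 1
    return all(n == 2 for n in use_count.values())
```

Drop one face from a closed body and `is_closed_solid` goes false, even though the remaining faces would still render as a recognizable shape. That gap between "renders fine" and "topologically valid" is exactly where AI-generated geometry tends to break.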
The parametric piece is even harder. If Autodesk wants Neural CAD output to participate in Fusion's timeline, the generated geometry needs to be expressed as a sequence of modeling operations that can be rolled back, edited, and replayed. That's not just generating a shape. It's generating a construction history that produces the shape. The difference is like the difference between giving someone a finished cake and giving them a recipe that produces the cake. One is dramatically more complex than the other.
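The recipe-versus-cake distinction can be made concrete. In this sketch (the operation names and parameters are invented for illustration, not Fusion's actual feature API), the model is not a stored shape but a list of parametric operations; editing one parameter and replaying the history regenerates everything downstream, which is what timeline participation demands of generated geometry.

```python
import math
from dataclasses import dataclass

# Illustrative construction history (names invented for this sketch):
# the "model" is the operation list, and the shape is whatever falls
# out of replaying it with the current parameter values.

@dataclass
class Op:
    name: str
    params: dict

def replay(history: list[Op]) -> dict:
    """Re-execute the history to produce the final state."""
    state = {"volume_mm3": 0.0}
    for op in history:
        p = op.params
        if op.name == "extrude_rect":
            state["volume_mm3"] += p["w"] * p["h"] * p["depth"]
        elif op.name == "hole":
            state["volume_mm3"] -= math.pi * (p["dia"] / 2) ** 2 * p["depth"]
    return state

history = [
    Op("extrude_rect", {"w": 80.0, "h": 50.0, "depth": 5.0}),
    Op("hole", {"dia": 4.2, "depth": 5.0}),
]
plate = replay(history)

# Edit a single upstream dimension and replay: the downstream
# operations re-execute against the new value.
history[0].params["depth"] = 8.0
thicker = replay(history)
```

Generating the final `plate` dict directly is the cake; generating `history` is the recipe. An AI that only produces the former gives you a dead body in the canvas, which is precisely the limitation Neural CAD claims to avoid.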
Whether Autodesk has solved this or plans to ship something more limited than the AU demo suggested, I don't know. The research blog posts are encouraging. The absence of a shipping date is less encouraging.
What exists today vs what was announced
As of April 2026, here's the reality:
The Autodesk Assistant is live in Fusion 360 as a Tech Preview. It can execute existing commands via natural language (that's the Text to Command capability). It can create basic geometry, apply modeling features, and answer questions about your model. This is real, shipping, and usable today, with the usual caveats about Tech Preview reliability.
Neural CAD for geometry, the text-to-BREP generation, is not available. It's not in Tech Preview. There's no beta access that I'm aware of. The Fusion Roadmap 2026 references "neural CAD experiences that turn natural language prompts into editable design geometry" as something the team is working toward. The language is aspirational, not a commitment.
This distinction matters because people conflate the two. The Autodesk Assistant doing command execution is impressive but incremental. It's a smarter command line. Neural CAD generating novel geometry from prompts is a different category of capability, and it's the one that hasn't arrived yet.
How it compares to what's already available
If you want text-to-CAD geometry right now, tools like Zoo.dev already generate BREP solids from text prompts and output STEP files you can import into Fusion 360 or any other CAD software. They work today. The geometry is imperfect and needs cleanup, as I've written about in the accuracy post, but the basic capability exists.
The difference Neural CAD promises is integration. An external tool generates a STEP file that arrives in Fusion as a dumb imported body with no feature history. Neural CAD, as described, would generate geometry that's native to Fusion's modeling environment. You could roll back the timeline, edit a sketch dimension, add features, and the generated geometry would participate in the parametric workflow like any other operation.
That's a real advantage if it works. The gap between "import a STEP file and work with it" and "have native parametric geometry generated inside your design environment" is significant. It's the difference between getting a block of rough-cut material and getting a partially finished part on your machine with the fixtures already set up.
But promises about integration don't count until they ship. Zoo.dev, for all its limitations, works today. Neural CAD doesn't. That's the current state, and it's the only state that matters for anyone trying to get work done this week.
The research angle
Autodesk Research has published enough about their AI work to suggest this isn't vaporware. They have teams working on geometry understanding, CAD-specific foundation models, and the intersection of machine learning with parametric modeling. The research is real. The engineering challenge of turning research into a product feature that works reliably for millions of users is where things get slow.
I've seen enough AI demos that looked great in a controlled setting and fell apart the moment real users with real models started poking at them. A demo that generates a clean air fryer from a carefully crafted prompt is one thing. A production feature that handles "flange bracket, 3mm thick, four M4 holes, 60mm bolt pattern, with a stiffening rib down the middle, and make it look like the one from the Johnson project but smaller" is another thing entirely. The second prompt is closer to how engineers actually talk, and it's the kind of prompt that makes AI systems produce interesting garbage.
What I'm watching for
When Autodesk does ship something under the Neural CAD name, here's what I'll be testing:
Can it produce geometry that survives a fillet? Not just looks good in the viewport, but actually has valid topology that Fusion's modeling tools can work with.
Can it hit prompted dimensions? If I say 50mm, I want 50mm, not 48.7mm.
Does the output have a real feature timeline? Can I roll back and edit the AI-generated operations, or is it an imported body with a single "Neural CAD" node?
Does it handle engineering language? Not marketing prompts like "create a contemporary air fryer" but engineering prompts like "rectangular plate, 80x50x5mm, four 4.2mm through holes on a 60x30mm bolt pattern."
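The dimensional-accuracy check above is easy to make mechanical. This is a hedged sketch, not real inspection tooling: the measured values are hand-entered stand-ins for whatever a measurement workflow actually reports, and the tolerance is my own choice, not a standard.

```python
# Compare dimensions measured from generated geometry against the
# prompted values, with an explicit tolerance. Measured values here
# are hand-entered stand-ins for real inspection output.

def check_dimensions(prompted: dict[str, float],
                     measured: dict[str, float],
                     tol_mm: float = 0.05) -> dict[str, bool]:
    """Per-dimension pass/fail: 48.7mm against a prompted 50mm fails,
    49.98mm passes at a 0.05mm tolerance."""
    return {name: abs(measured[name] - target) <= tol_mm
            for name, target in prompted.items()}

prompted = {"plate_w": 80.0, "plate_h": 50.0, "hole_dia": 4.2}
measured = {"plate_w": 79.98, "plate_h": 48.7, "hole_dia": 4.21}
report = check_dimensions(prompted, measured)
```

The point of writing the check down is that "close enough" stops being a vibe. A generated part either lands inside tolerance on every prompted dimension or it doesn't, and that's a pass/fail table I can publish.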
The honest take
Neural CAD is the most interesting thing Autodesk has announced in years. The idea of generating native, editable, parametric BREP geometry from text prompts inside a professional CAD environment is genuinely compelling. If Autodesk pulls it off, it changes how text-to-CAD fits into real workflows in a way that external tools can't match.
But it's not here. The feature page doesn't have a download button. The roadmap doesn't have a date. The AU demo was six months ago, and the follow-up has been silence punctuated by improvements to unrelated parts of Fusion. That's normal for complex software development, and it's also normal for features that got announced before they were ready.
I'll keep refreshing the update page. I'll keep my expectations calibrated to what I can actually open and use. And I'll keep using external text-to-CAD tools for the work I need done today, because waiting for the perfect integrated solution is a luxury that shipping deadlines don't allow. When Neural CAD arrives, I'll test it the same way I test everything: with a real part and no mercy. Until then, it's a promising research project with excellent marketing attached.