Fusion 360 AI features: what's shipping and what's vapor
Autodesk has announced a lot of AI features for Fusion 360. Some of them are real. Some of them are slide deck material. Here's the honest status.
Quick answer
Fusion 360 AI features in 2026 include the Autodesk Assistant (shipping), generative design (shipping), and announced-but-not-yet-available features like Neural CAD (text-to-geometry) and Text to Command (natural language operations). Most AI features are still in development or limited preview.
I watched Autodesk's AU 2025 keynote from my home office while eating leftover pizza, which felt appropriate because the presentation was about half substance and half reheated promises. Neural CAD. Text to Command. An AI assistant that would become your "thought partner." The audience applauded. I wrote down the feature names and put question marks next to most of them. Six months later, sitting in front of the actual software, I can report that some of those question marks have turned into checkmarks and some have turned into longer question marks.
Fusion 360 in 2026 has more AI features than it did a year ago. That's undeniable. Whether the features matter to your actual work depends entirely on which ones you use and how high your expectations are set. Here's the honest inventory.
Autodesk Assistant: the one that actually shipped
The Autodesk Assistant is the most visible AI addition to Fusion 360, and it's the one I've spent the most time with. It lives in a panel on the right side of the screen, accessible from a button in the upper-right corner. You type something in natural language, and it tries to do something useful.
As of March 2026, it's in Tech Preview. That label matters. It means the feature is live, you can use it, but Autodesk is telling you up front that it will occasionally behave like an intern who's enthusiastic but still learning the org chart.
What the Assistant actually does right now: it can create basic geometry. Extrude, fillet, chamfer, hole, shell, split. It can create sketches with dimensions. It can apply materials and appearances. It can generate circular and rectangular patterns. It can answer questions about your model, things like volume, surface area, and geometry identification. Since the March 2026 update, it also handles some CAM tasks: creating manufacturing setups, renaming operations, selecting tools. That's real functionality.
What it feels like in practice: faster than clicking through menus for simple stuff, and genuinely confusing for anything that requires context. I told it to "extrude this face by 10mm" and it worked perfectly. I told it to "add a pocket to the top of the bracket, 20mm square, centered, 3mm deep" and it picked the wrong face, extruded in the wrong direction, and produced something that looked like a Cubist interpretation of my request. That cycle of prompt, fail, rephrase, succeed is the actual workflow right now.
Autodesk recommends a three-part prompt formula: state your goal, identify the target, and list constraints. That level of specificity helps, and it tells you something about the current state of AI in CAD: the tool is most useful when you already know exactly what you want and you're just looking for a faster way to ask for it.
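For illustration, a prompt that follows that three-part formula might look like this (the part and face names here are hypothetical, not from Autodesk's documentation):

```text
Goal:        Add a rectangular pocket.
Target:      The top face of the bracket body.
Constraints: 20mm square, centered on the face, 3mm deep.
```

Spelling out the target and the constraints is what separates this from the "add a pocket" style of prompt that sends the Assistant off to guess.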
For a deeper look at the command-execution side specifically, the Text to Command post covers that in more detail.
Neural CAD: announced, demoed, not available
This is the one that got the biggest applause at AU 2025. Neural CAD is Autodesk's term for a new generative AI foundation model trained specifically to reason about CAD geometry. The idea is that you type a description, something like "create a contemporary air fryer," and the AI generates native, editable BREP geometry directly inside Fusion's canvas. Not mesh. Not a screenshot. Real solid geometry with faces and edges you can select and modify.
The demo at AU looked impressive, the way all demos look impressive when curated by people whose job is to make demos look impressive. Mike Haley from Autodesk Research described it as "completely reimagining the traditional software engines that create CAD geometry."
As of April 2026, Neural CAD is not publicly available in Fusion 360. You can't use it. There's no button for it. The roadmap says "neural CAD experiences" are coming, with language about turning "natural language prompts into editable design geometry," but no shipping date has been confirmed. The gap between the announcement and the reality is currently about six months wide and showing no signs of closing quickly.
I have a separate post on what Neural CAD is and what it means if you want the full breakdown. The short version: the technology is genuinely interesting, the ambition is real, and the shipping status is "soon" in the same way that "soon" has meant "eventually" in software for the last forty years.
Generative design: shipping, but a different thing
Generative design is the AI feature Fusion 360 has had for a while, and it's the one most people confuse with text-to-CAD even though they're solving completely different problems.
Generative design takes a set of constraints (loads, materials, keep-out zones, manufacturing methods) and generates organic-looking shapes that satisfy all of them. The output typically looks like a bone or a reef structure, optimized for weight and stiffness but not for looking like a normal bracket. It's topology optimization dressed up in a more approachable interface.
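To make the "inputs, not descriptions" point concrete, a generative design problem definition amounts to something like the sketch below. This is a hypothetical, simplified illustration of the kind of information you provide; every field name is invented, and Fusion's actual study setup is done through its UI, not a config file:

```json
{
  "objective": "minimize mass",
  "loads": [
    { "face": "mount_hole_A", "force_N": 500, "direction": [0, 0, -1] }
  ],
  "constraints": { "safety_factor": 2.0, "material": "AlSi10Mg" },
  "preserve_geometry": ["mount_hole_A", "mount_hole_B"],
  "obstacle_geometry": ["clearance_zone"],
  "manufacturing": { "method": "additive", "min_thickness_mm": 1.5 }
}
```

Notice there's no "make it look like a bracket" field. The solver only knows physics and constraints, which is exactly why the output looks like a reef instead of a bracket.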
It works. I've used it for lightweighting parts where the geometry doesn't need to be conventional and the manufacturing method can handle organic shapes. The results are genuinely useful when the constraints are well-defined.
It's available as the Generative Design Extension, an add-on you pay for on top of your Fusion subscription. The pricing has changed enough times that I'll just say "check the current Autodesk page" rather than write a number that'll be wrong by next Tuesday.
The reason I mention it here is that people searching for "Fusion 360 AI features" often have generative design in mind, and it's the one AI feature in Fusion that has genuine production history behind it. Companies have shipped parts designed with it. It's real in a way that the newer AI features aren't yet.
That said, generative design is not text-to-CAD. You're not typing a description and getting a bracket back. You're defining an engineering problem with specific inputs and getting a shape that solves it. The text-to-CAD vs generative design comparison explains the difference in more detail, but the quickest way I can put it: text-to-CAD is "build me what I described," generative design is "show me what the physics wants."
Text to Command: the useful middle ground
Text to Command is the feature that gets the least attention but might end up being the most practical. Instead of generating geometry from scratch, it lets you operate on existing geometry using natural language. "Extrude this face by 1 inch." "Add a 0.5mm chamfer to all edges." "Split this body with my construction plane."
It's essentially a natural language interface layered on top of Fusion's existing command system. You describe what you want to do, and the AI translates that into the appropriate Fusion command and executes it. It's part of the Autodesk Assistant, which means it's available in Tech Preview right now.
I've been using it for the kind of operations where I know what I want but can't remember which menu it's buried in. Fusion has a lot of commands. Nobody remembers where all of them live. Typing "revolve this sketch around this axis" is faster than hunting through the Create menu when you use revolve twice a year.
The limitations are the same as the Assistant overall: it works well for simple, well-specified operations and struggles with ambiguity or multi-step workflows. You can save multi-step sequences as reusable prompts, which is a nice touch, but the execution reliability drops as the complexity goes up.
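The "translate, then execute" structure is easy to sketch in miniature. The toy below is purely illustrative: Fusion's actual Text to Command uses an AI model rather than regexes, and none of these function or command names come from Autodesk. It just shows why well-specified prompts succeed and ambiguous ones fall through:

```python
import re

# Toy natural-language-to-command mapper (conceptual sketch only).
# Each "command" pairs a name with a pattern that extracts parameters.
COMMANDS = {
    "extrude": re.compile(r"extrude .* by (?P<value>[\d.]+)\s*(?P<unit>mm|in(?:ch)?)"),
    "chamfer": re.compile(r"(?P<value>[\d.]+)\s*(?P<unit>mm|in(?:ch)?) chamfer"),
}

def parse(prompt: str):
    """Return (command, value, unit), or None when no pattern matches."""
    text = prompt.lower()
    for name, pattern in COMMANDS.items():
        match = pattern.search(text)
        if match:
            return name, float(match.group("value")), match.group("unit")
    # An ambiguous prompt falls through; the real Assistant would ask
    # you to rephrase, which matches the prompt-fail-rephrase cycle.
    return None

print(parse("Extrude this face by 10 mm"))        # → ('extrude', 10.0, 'mm')
print(parse("Add a 0.5mm chamfer to all edges"))  # → ('chamfer', 0.5, 'mm')
print(parse("make it look nicer"))                # → None
```

The real system replaces the regexes with a model, but the shape of the problem is the same: a prompt either resolves cleanly to a command plus parameters, or it doesn't, and reliability drops as the mapping gets less direct.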
I wrote a full assessment of Text to Command as its own post because it deserves its own evaluation separate from the bigger AI hype.
What's on the roadmap but not shipping
The Fusion Roadmap 2026 mentions several AI-related items:
Neural CAD experiences for text-to-geometry. Status: coming, no date.
Expanded Assistant capabilities across more workspaces. Status: partially shipping, more coming.
AI-powered renderings via Microsoft Azure OpenAI. Status: announced, not widely available.
AutoConstrain for drawings. Status: announced, unclear timeline.
How this compares to the competition
The broader AI in CAD software landscape is moving quickly, and Fusion 360 is in a peculiar position. It has more AI features than most of its competitors, but fewer fully shipped ones than the marketing suggests.
SolidWorks 2026 shipped AI companions (AURA and LEO) in beta with its FD01 release in February 2026. Those are at a similar maturity level to the Autodesk Assistant.
Zoo.dev and other dedicated text-to-CAD tools already let you generate BREP geometry from text prompts today. They're specialized tools, not integrated into a full CAD platform, but they're shipping something that Fusion's Neural CAD has only demoed.
The advantage Fusion has is integration. If Neural CAD ships and works well inside the Fusion environment, with access to the timeline and the parametric engine, that's a fundamentally different proposition from importing STEP files. The question is when, and how much of the demo translates to reality.
The honest scorecard
Here's where each Fusion 360 AI feature stands as of April 2026:
Autodesk Assistant (natural language interface): shipping in Tech Preview. Works for simple operations. Useful but inconsistent.
Text to Command (operate on existing geometry via text): shipping as part of the Assistant. The most practically useful AI feature in Fusion right now.
Generative design (topology optimization): shipping as a paid extension. Proven, mature, and genuinely useful for the right problems.
Neural CAD (text-to-geometry generation): announced, demoed, not available. The most exciting promise and the biggest gap between announcement and shipping.
AI-powered rendering: announced, limited availability. Nice for presentations, not relevant to engineering work.
AutoConstrain for drawings: announced, unclear timeline.
Autodesk is doing real AI work. The research behind Neural CAD is genuinely interesting, and the Autodesk Assistant is a real product you can use today. But there's a gap between what Autodesk talks about and what Autodesk ships, and if you're making decisions based on the keynote rather than the current feature set, you'll be disappointed.
I keep Fusion 360 as my primary CAD tool. I use the Assistant when it saves me a menu hunt. I don't plan my workflows around AI features that haven't shipped yet. When Neural CAD becomes something I can open and use on a Monday morning, I'll test it with a real part, a set of calipers, and low expectations. Until then, it's a slide deck.