Fusion 360 Text to Command: natural language meets feature trees
Text to Command lets you tell Fusion 360 what to do in plain English instead of clicking through menus. It's a different idea from text-to-CAD, and it solves a different problem.
Quick answer
Fusion 360 Text to Command is an Autodesk feature that translates natural language instructions into CAD operations (e.g., "extrude this face by 10mm"). Unlike text-to-CAD, it doesn't generate geometry from scratch. It operates on existing models and works as a natural language interface to Fusion 360's command system.
I spend an embarrassing amount of time in Fusion 360 looking for commands I've used maybe three times in my life. The revolve tool is under Create, which makes sense, but the split body tool is under Modify, which also makes sense but not until you think about it. The circular pattern is somewhere I know I'll find it faster if I stop thinking and just let my eyes scan. The search bar helps, but typing "split" into the search bar and then clicking the result and then clicking the plane and then clicking the body isn't exactly the frictionless workflow the marketing team imagines.
Text to Command is Autodesk's answer to this particular brand of menu fatigue. Instead of navigating the ribbon or searching for a command name, you type what you want to do in plain English. "Split this body with my construction plane." "Extrude this face by 10mm." "Add a 0.5mm chamfer to all edges." The Autodesk Assistant interprets the instruction and executes the corresponding Fusion command.
It's not text-to-CAD. It doesn't generate geometry from a blank canvas. It operates on what's already there. That distinction matters, and it's one that people keep mixing up.
What it actually does
Text to Command is part of the Autodesk Assistant, which sits in a docked panel on the right side of the Fusion window. You open it, type an instruction, and the Assistant figures out which Fusion command you're asking for, sets the parameters, and executes it.
As of the March 2026 update, the supported operations in the design workspace include:
Geometry creation and modeling features: extrude, fillet, chamfer, hole, shell, split, revolve. The basics.
Sketch creation with dimensions. You can say "create a rectangle, 40mm by 20mm" and it produces a sketch with the right dimensions applied.
Patterns: circular and rectangular.
Primitives: spheres, toruses, coils.
Material and appearance assignment. "Assign stainless steel to this body" works as expected.
Design queries: ask about volume, surface area, identify geometry types, count features. Useful when you want a quick answer without opening the measure tool.
Since March 2026, it also handles manufacturing workspace tasks: creating manufacturing setups, generating toolpaths, selecting tools, batch renaming operations. The CAM side is newer and less polished than the design side, but it's there.
The execution model is straightforward. You type a command in natural language. The Assistant interprets it. It either executes immediately or, if you follow Autodesk's recommended workflow, proposes the steps and waits for your confirmation before executing. The Ask, Confirm, Execute pattern. I prefer the confirmation step because watching an AI extrude in the wrong direction without asking is the kind of surprise I've had enough of in my life.
Where it works well
Simple, well-specified operations on clearly identifiable geometry. That's the sweet spot.
"Extrude this face by 15mm." Works. No ambiguity about which face, no ambiguity about the operation.
"Fillet all edges of this body, 2mm radius." Works. Identical to what you'd get clicking through the dialog manually.
"Assign aluminum 6061 to this body." Works. Faster than navigating the material library.
For commands I use rarely, Text to Command is genuinely faster than hunting through menus. I use the loft tool maybe once a month. I can never remember whether it's under Create or under Surface or under some contextual menu that only appears when you've already made specific selections. Typing "loft between these two profiles" is easier than the menu search, and the Assistant handles it.
The reusable prompts feature is a nice touch. If you have a multi-step sequence you repeat, you can save it as a prompt and replay it later. This is basically a macro system with natural language as the input format. Less flexible than Fusion's API scripting, but lower barrier to entry. A designer who doesn't write Python can still automate a three-step sequence by saving a prompt.
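Conceptually, a saved prompt is just an ordered list of instructions replayed through the same interpreter. A hypothetical sketch of the idea, with the `run_instruction` callback standing in for Text to Command itself (nothing here is Autodesk's actual format):

```python
# Hypothetical: a reusable prompt modeled as an ordered list of
# natural-language steps, replayed through whatever executes them.
from dataclasses import dataclass, field

@dataclass
class SavedPrompt:
    name: str
    steps: list[str] = field(default_factory=list)

    def replay(self, run_instruction) -> list[str]:
        """Run each step in order and collect the results."""
        return [run_instruction(step) for step in self.steps]

# A three-step sequence a non-programmer might save once and reuse.
finish_part = SavedPrompt("finish-part", [
    "add a 0.5mm chamfer to all edges",
    "fillet the two top corners, 2mm radius",
    "assign aluminum 6061 to this body",
])

log = finish_part.replay(lambda step: f"executed: {step}")
```

The contrast with API scripting is that the "program" here is three English sentences, which is exactly why the barrier to entry is lower and the ceiling is lower too.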
Where it falls apart
Ambiguity is the killer.
"Add a pocket to the top of this part." Which face is "the top"? How big? How deep? Square or rectangular or circular? The Assistant has to guess, and its guesses are wrong often enough that you learn to be very specific or to just use the command palette instead.
"Make this thinner." Thinner how? Shell it? Scale it? Modify a specific dimension? The Assistant will pick one interpretation, and it might not be yours. I told it to "make the walls thinner" on a box and it shelled the body, which was correct for what I wanted but was a coin flip between that and offsetting individual faces.
Multi-step operations with dependencies are unreliable. "Extrude this face by 10mm, then add a 5mm hole centered on the new face" is two operations with a dependency: the second one needs the result of the first one. Sometimes the Assistant handles this. Sometimes it gets confused about which face is the "new face" and drills the hole somewhere unexpected. The recommended approach is to do one step at a time, confirm each one, and build up the sequence manually. Which works, but it's not much faster than just clicking the commands.
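One way to see why "the new face" is fragile: the second step only works if the system tracked what the first step created, and "most recent" is a brittle way to resolve a reference. A toy sketch of that bookkeeping, with every name invented for illustration:

```python
# Toy model of chained operations: each operation records what it
# created, so later steps can resolve references like "the new face".
# All names are invented; this is not Fusion's internal model.

history = []  # list of (operation, created_entity) records

def run(operation: str, creates: str) -> str:
    history.append((operation, creates))
    return creates

def resolve(reference: str) -> str:
    """Resolve "the new face" to the most recently created entity."""
    if reference == "the new face" and history:
        return history[-1][1]
    raise LookupError(f"can't resolve {reference!r}")

run("extrude face_3 by 10mm", creates="face_7")   # step 1 creates a face
target = resolve("the new face")                  # step 2 resolves to face_7
run(f"drill 5mm hole in {target}", creates="hole_1")
# Resolving "the new face" again would now return hole_1, not face_7:
# exactly the kind of confusion that puts the hole somewhere unexpected.
```

Doing one step at a time with confirmation sidesteps the problem because you, not the resolver, pick the target each time.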
Context awareness has limits. The Assistant doesn't always understand spatial relationships the way you'd describe them to a colleague. "Put a hole near the left edge" requires the AI to know what "left" means in your current view orientation, how "near" translates to a dimension, and which edge you mean. A human colleague would ask clarifying questions. The Assistant sometimes asks, sometimes guesses, and sometimes produces something that makes you wonder if you're speaking the same language.
How it compares to text-to-CAD
This is the comparison that confuses people, and it's worth being very clear about it.
Text-to-CAD means generating geometry from nothing. You describe a part and the AI creates it, taking you from a blank canvas to a finished (or at least started) model. That's what tools like Zoo.dev do today, and what Autodesk's Neural CAD aims to do inside Fusion eventually. The text-to-CAD guide covers this in detail.
Text to Command means operating on existing geometry using natural language instead of menus. You already have a model. You want to modify it. Instead of finding the chamfer tool in the ribbon, you type "chamfer these edges." The AI translates your words into Fusion commands.
One is a generative tool. The other is an interface tool. Both use natural language. They solve fundamentally different problems.
The prompting tax
There's an overhead to Text to Command that's easy to miss. To get reliable results, you need to be specific. Autodesk's recommended formula is: "I want to [GOAL] on [TARGET]. Constraints: [UNITS], [DON'T CHANGE X]."
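Filled in, the formula looks something like this (a made-up example, not a prompt from Autodesk's documentation):

```text
I want to add a 2mm chamfer on the four top edges of Body1.
Constraints: millimeters, don't change the overall height.
```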
That's a lot of typing for an extrude. If you already know the command, it's faster to just use it. The time savings come when you don't know the command, when the command is buried in a menu, or when you want to chain operations and save the sequence for later. For power users who know Fusion cold, Text to Command is a curiosity. For occasional users who switch between CAD platforms and can never remember where anything is, it's more useful.
The verdict
Text to Command is a genuinely useful feature that works well within a narrow band of complexity. For simple, clearly specified operations on existing geometry, it's faster and more pleasant than menu navigation. For anything ambiguous, multi-step, or context-dependent, it's unreliable enough that you'll want to keep your mouse-and-menu skills sharp.
It's shipping now as part of the Autodesk Assistant Tech Preview. It's free with your Fusion subscription. There's no reason not to try it. There's also no reason to restructure your workflow around it until the reliability improves.
The broader picture is that Text to Command, Neural CAD, and the rest of the Fusion 360 AI features are all pieces of a larger bet Autodesk is making on natural language as an interface for design software. Some of those pieces work today. Some are still being assembled. Text to Command is the piece that works, within its limits, and I use it most days. It hasn't changed how I design. It's changed how I find the Split Body tool, which on some mornings is enough.