The future of CAD and AI: what I actually expect
Vendors promise a lot. Research papers promise more. Here's what I think will actually ship, actually work, and actually matter in the next five years.
Quick answer
In the next 2-3 years: better AI assistants inside existing CAD tools, improved text-to-CAD accuracy for simple parts, and AI-powered search/recommendation in PLM systems. In 5 years: parametric AI generation for simple part families, AI-assisted DFM checking, and natural language CAD editing. Full autonomous design is 10+ years away, if ever.
Everything between those near-term improvements and full autonomy is a gradient of vendor promises, research prototypes, and cautious optimism from people like me who have been burned by too many roadmap slides that never turned into shipping features.
I sat through four vendor presentations last quarter. Every single one had an AI slide. Every single one used the phrase "AI-powered" next to something that was either a search bar with a chatbot skin, a generative geometry demo running on a curated prompt, or a bullet point about a feature that doesn't exist yet. I took notes. I also took a photo of my coffee, which I'd been holding for forty minutes without drinking because the presentations kept promising things that made me want to ask uncomfortable questions.
Here's what I actually expect to happen, organized by how far out it is and how much confidence I have.
The next one to two years: incremental and real
The near-term future of AI in CAD isn't dramatic. It's useful. The changes that are already shipping or nearly shipping are modest, practical improvements that make existing workflows faster without fundamentally changing what a CAD designer does.
AI assistants in major CAD tools will get better at command discovery and operation suggestions. Autodesk, Siemens, PTC, and Dassault have all shipped some version of an AI assistant inside their CAD software. Right now, these assistants are somewhere between a fancy search bar and a junior colleague who sometimes gives useful advice. They'll improve. They'll get better at understanding context, suggesting next operations based on what you've already done, and automating repetitive actions like applying standard features.
This isn't exciting. It's the AI equivalent of a better autocomplete, but in a CAD menu structure that has a thousand commands spread across fifty toolbars. Finding the right command faster is genuinely valuable, especially for occasional users who don't have the shortcut keys memorized. I use Fusion 360 daily and I still discover commands I forgot existed.
Text-to-CAD accuracy will improve for simple parts. The tools I test today (Zoo.dev, AdamCAD, and others) are already better than they were a year ago at hitting prompted dimensions and generating cleaner topology. I expect that trend to continue. Simple brackets, plates, and enclosures will get more reliable. The dimensional accuracy gap will narrow from "usually close, sometimes wrong" to "almost always close, occasionally wrong." That's meaningful for concept work, even if it still isn't good enough for production.
AI-powered search and retrieval in PLM systems will become standard. Finding similar parts, suggesting reuse before modeling from scratch, identifying duplicate geometry across a company's part library. This is boring, high-value work that AI handles well, and it's starting to ship in enterprise tools. It won't make headlines, but it'll save real time for companies with large libraries.
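Under the hood, this kind of reuse search is often just similarity ranking over shape descriptors. Here's a minimal sketch, assuming each library part carries precomputed metadata; the part records, field names, and crude descriptor are all illustrative, not any vendor's API:

```python
import math

def descriptor(part):
    """Crude shape descriptor: sorted bounding-box dims plus scale terms.
    Sorting the box dims makes the comparison orientation-insensitive."""
    dims = sorted(part["bbox_mm"])
    return dims + [part["volume_mm3"] ** (1 / 3), part["area_mm2"] ** 0.5]

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def find_similar(query, library, top_k=3):
    """Rank library parts by descriptor similarity to the query part."""
    q = descriptor(query)
    scored = [(cosine(q, descriptor(p)), p["name"]) for p in library]
    return sorted(scored, reverse=True)[:top_k]

# Hypothetical part library with precomputed geometric metadata.
library = [
    {"name": "bracket-90", "bbox_mm": [40, 40, 3], "volume_mm3": 4200, "area_mm2": 3400},
    {"name": "plate-a",    "bbox_mm": [100, 60, 2], "volume_mm3": 12000, "area_mm2": 12600},
    {"name": "bracket-91", "bbox_mm": [42, 38, 3], "volume_mm3": 4100, "area_mm2": 3300},
]
query = {"name": "new-bracket", "bbox_mm": [41, 39, 3], "volume_mm3": 4150, "area_mm2": 3350}

for score, name in find_similar(query, library):
    print(f"{name}: {score:.3f}")
```

A production system would replace the hand-built descriptor with learned geometric embeddings, but the retrieval skeleton (embed, compare, rank) is the same.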
Three to five years: the interesting middle ground
This is where my predictions get less certain and more interesting. The next few years are where the gap between what's possible in research and what's usable in production will either narrow or stay stubbornly wide.
Parametric AI generation for simple part families. Right now, text-to-CAD produces dumb solids with no feature tree. The research on generating parametric models (geometry with constraints, sketch relationships, and editable feature histories) is active and making progress. I expect that within three to five years, at least one tool will be able to generate a simple parametric bracket that you can edit by changing dimensions in a feature tree rather than re-prompting from scratch.
This matters more than it sounds. A dumb STEP file is a dead end. A parametric model is a starting point you can live with. The difference between those two things determines whether AI output integrates into a real design workflow or stays a party trick.
AI-assisted DFM checking. Not DFM-aware generation (that's harder), but automated checking of geometry against manufacturing rules. "This wall is too thin for injection molding." "This internal corner needs a radius for CNC milling." "This overhang angle needs support for SLA printing." Rule-based DFM checking already exists in some tools. Adding AI to make it smarter, more contextual, and easier to use is a natural next step.
I'll believe it's real when my machinist stops calling me about AI-generated parts with impossible features. That hasn't happened yet, but I can see the path.
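The rule-based core of such a checker is simple enough to sketch. This assumes features have already been extracted from the geometry; the feature records, rule thresholds, and process names below are illustrative placeholders, not real tool data:

```python
# Hypothetical process rules; real values depend on material and machine.
RULES = {
    "injection_molding": {"min_wall_mm": 1.0, "min_internal_radius_mm": 0.0},
    "cnc_milling":       {"min_wall_mm": 0.5, "min_internal_radius_mm": 1.5},
}

def check_dfm(features, process):
    """Check extracted feature records against one process's rules."""
    rules = RULES[process]
    violations = []
    for f in features:
        if f["type"] == "wall" and f["thickness_mm"] < rules["min_wall_mm"]:
            violations.append(
                f"wall '{f['id']}' is {f['thickness_mm']} mm, below the "
                f"{rules['min_wall_mm']} mm minimum for {process}")
        if (f["type"] == "internal_corner"
                and f["radius_mm"] < rules["min_internal_radius_mm"]):
            violations.append(
                f"corner '{f['id']}' radius {f['radius_mm']} mm needs "
                f">= {rules['min_internal_radius_mm']} mm for {process}")
    return violations

# The same geometry fails different rules under different processes.
features = [
    {"id": "w1", "type": "wall", "thickness_mm": 0.6},
    {"id": "c1", "type": "internal_corner", "radius_mm": 0.0},
]
print(check_dfm(features, "injection_molding"))  # thin wall flagged
print(check_dfm(features, "cnc_milling"))        # sharp corner flagged
```

The hard part isn't this lookup table; it's reliably extracting "wall" and "internal corner" features from arbitrary B-Rep geometry, which is where the AI would actually earn its keep.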
Natural language CAD editing. Instead of finding the right command, selecting the right face, and entering the right parameters, you say "make this wall 2 mm thicker" or "add M4 holes on a 40 mm bolt circle on the top face" and the tool does it. This is an extension of the AI assistant concept, but applied to editing rather than just command discovery. Fusion 360's timeline-based architecture seems well-suited for this. SolidWorks is probably thinking about it too.
The tricky part is disambiguation. "Make this wall thicker" is simple when there's one wall. When there are forty walls and the AI needs to figure out which one you mean from context, it gets hard. But for specific, well-described edits, I think this will work within the next few years.
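The disambiguation problem can be made concrete with a toy parser. This sketch assumes a tiny fixed grammar and a flat list of feature names (both hypothetical; a real tool would ground an LLM against the model's feature tree). The point is that resolving a target fails cleanly when more than one candidate matches:

```python
import re

def parse_edit(command):
    """Parse a narrow family of commands like 'make wall_front 2 mm thicker'.
    Hypothetical grammar for illustration only."""
    m = re.fullmatch(r"make (\w+) ([\d.]+) mm (thicker|thinner)", command)
    if not m:
        return None
    target, amount, direction = m.groups()
    delta = float(amount) * (1 if direction == "thicker" else -1)
    return {"op": "offset_thickness", "target": target, "delta_mm": delta}

def resolve(edit, feature_names):
    """Disambiguation step: the edit applies only if exactly one
    feature name matches the spoken target."""
    matches = [n for n in feature_names if n.startswith(edit["target"])]
    if len(matches) == 1:
        return matches[0]
    raise ValueError(f"ambiguous target '{edit['target']}': {matches}")

names = ["wall_front", "wall_back", "boss_1"]
edit = parse_edit("make wall_front 2 mm thicker")
print(resolve(edit, names))            # unambiguous: applies cleanly

vague = parse_edit("make wall 2 mm thicker")
try:
    resolve(vague, names)
except ValueError as e:
    print(e)                           # 'wall' matches two features
```

In practice the resolver would use selection state, viewport context, and geometric cues rather than string prefixes, but the failure mode is the same: when two walls match, the tool has to ask, not guess.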
Five to ten years: speculation territory
Everything beyond five years is guessing, and I want to be honest about the difference between prediction and speculation. Here's what I'd bet on, with low confidence.
Multi-part AI generation. Generating simple assemblies where parts have defined relationships, clearances, and mating conditions. Not a full assembly of a hundred parts, but maybe a two- or three-part enclosure with a lid that actually fits, snap fits that actually snap, and internal mounting features that align with a PCB outline. This is hard because it requires the AI to understand spatial relationships between parts, not just geometry within a single part.
Simulation-informed generation. AI that generates geometry and then checks it against basic structural or thermal simulation, iterating until the design meets a performance target. This is generative design with an AI front end, and some version of it exists today in Fusion 360's generative design tools. Making it accessible through natural language and connecting it to AI-generated starting geometry is plausible in the five to ten year range.
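The generate-check-iterate loop behind simulation-informed generation can be sketched with a closed-form formula standing in for the fast approximate solver. The cantilever bending-stress formula is textbook; every parameter value and the stepping strategy here are illustrative assumptions:

```python
def max_stress_mpa(t_mm, force_n=100.0, length_mm=50.0, width_mm=20.0):
    """Stand-in for fast approximate simulation: peak bending stress of a
    rectangular cantilever, sigma = 6*F*L / (b * t^2)."""
    return 6 * force_n * length_mm / (width_mm * t_mm ** 2)

def size_thickness(allowable_mpa=30.0, t_mm=1.0, step_mm=0.5):
    """Generate-check-iterate: thicken the section until the check passes.
    A real loop would mutate geometry and re-mesh, not bump one scalar."""
    while max_stress_mpa(t_mm) > allowable_mpa:
        t_mm += step_mm
    return t_mm

t = size_thickness()
print(f"selected thickness: {t} mm, stress: {max_stress_mpa(t):.1f} MPa")
```

The whole argument for this class of tool is in that `while` loop: the iteration only becomes practical when each check costs milliseconds, which is why fast approximate solvers, not full FEA, are the enabling piece.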
Process-aware geometry. AI that knows the part will be injection-molded and generates draft angles, uniform walls, and gate-friendly geometry from the start. This requires training on manufacturing process data alongside geometric data, and the data pipeline is the bottleneck. Most manufacturing data is proprietary and poorly structured. Companies that solve the data problem will have a real advantage.
What I don't expect even at the ten-year horizon is fully autonomous design. The idea that you describe a product and an AI engineers the complete solution, with all tolerances, manufacturing considerations, assembly sequences, and cost trade-offs, is so far from current capabilities that I'd put it firmly in the "maybe someday" category. It's not just a scaling problem. It's a knowledge representation problem that the field hasn't solved.
What needs to happen technically#
For any of these predictions to come true, a few technical problems need solutions. Parametric generation is the biggest. Current text-to-CAD models generate B-Rep output or fragile construction history. Producing clean parametric models with editable feature trees requires AI that understands CAD operations as meaningful sequences, not just paths to a final shape. The DeepCAD research line is promising but not production-ready.
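Why a replayable operation sequence matters can be shown with a toy feature history. This is volume arithmetic, not a B-Rep kernel, and the operation schema is invented for illustration, but it demonstrates the edit-and-regenerate property a dumb STEP file lacks:

```python
import math

def regenerate(history):
    """Replay a feature history into a part volume (mm^3).
    Each step depends on earlier steps, which is exactly what makes
    an operation sequence more than a path to a final shape."""
    volume = 0.0
    thickness = 0.0
    for op in history:
        if op["op"] == "extrude":
            volume += op["w_mm"] * op["d_mm"] * op["h_mm"]
            thickness = op["h_mm"]  # later holes cut through this depth
        elif op["op"] == "hole":
            volume -= op["count"] * math.pi * (op["dia_mm"] / 2) ** 2 * thickness
    return volume

history = [
    {"op": "extrude", "w_mm": 60, "d_mm": 40, "h_mm": 3},
    {"op": "hole", "count": 4, "dia_mm": 4},
]
base = regenerate(history)

history[0]["h_mm"] = 5       # edit one parameter in the "feature tree"
thicker = regenerate(history)  # replay updates the holes too, no re-prompt
```

Note that changing the extrude height also deepens the holes on replay; a model that only outputs final geometry would need a fresh generation, and you'd hope the new result resembled the old one.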
DFM integration requires manufacturing process data that AI models are not currently trained on. That data exists inside companies but is rarely structured, annotated, or shareable. Assembly reasoning requires spatial and functional understanding that current models lack entirely. And simulation integration needs fast approximate solvers, because running full FEA on every generated iteration is too slow to be practical.
Vendor roadmap versus reality#
When a vendor says "AI-powered" at a conference, divide the promised capability by four and add two years. That's roughly what will ship and when. I'm not being cynical. I'm pattern-matching from a decade of watching CAD vendor keynotes.
Autodesk is probably closest to useful AI integration, with Fusion 360 AI features that are incremental but real. Siemens has deep technology but ships user-facing features slowly. PTC and Dassault are moving, but enterprise CAD moves at enterprise speed, and their customers are conservative about new workflows. The most interesting work might come from startups that don't carry legacy code or legacy business models. Zoo.dev's approach is different from what the big vendors are doing, and that diversity is healthy.
My personal bets#
If I had to bet on what matters most in the next five years, I'd put my money on three things.
First, AI-powered search and reuse in enterprise PLM. This is unsexy and incredibly valuable. Companies waste enormous amounts of engineering time redesigning parts that already exist somewhere in their system. AI that surfaces existing designs before you start modeling will save more total hours than text-to-CAD geometry generation. It'll just never make a flashy demo.
Second, natural language editing inside existing CAD tools. Not generation from scratch, but modification of existing geometry through conversational commands. This is closer to how designers actually work. You don't start from nothing every day. You modify, adapt, and iterate. An AI that's good at helping with that process is more useful than one that generates a first draft you'll throw away.
Third, DFM validation on AI-generated output. A safety net that catches the worst manufacturing violations before the geometry leaves the design environment. This doesn't require the AI to understand manufacturing. It requires a checking layer that knows the rules. It's achievable, practical, and would immediately make every text-to-CAD tool more useful for real work.
What I'm not betting on#
Full autonomous design. Prompt in, engineered product out. The complexity of real engineering is so far beyond current AI that I'd be surprised to see this in my career.
AI replacing CAD software. AI will live inside CAD tools, augment them, and change how people interact with them. But the fundamental need for a precision geometric modeling environment isn't going away.
The death of parametric modeling. Feature trees capture design intent. They make models adaptable. AI generation without parametric structure produces disposable geometry. Parametric modeling will coexist with AI generation, not lose to it.
Where this leaves working designers#
If you're a CAD designer watching the AI space, my advice is simple: keep doing good work, learn the tools as they ship, and don't panic about a future that vendor slides promise but can't deliver yet.
The next five years will bring tools that make you faster at some parts of your job. They won't make you unnecessary. The parts AI can't do (design intent, manufacturing awareness, assembly thinking, tolerance specification, client communication) are what make you valuable. Those skills are worth developing more than learning to write better prompts.
The AI will keep getting better. What I don't expect is to walk into my office one morning and find that a language model has figured out how to design a multi-part injection-molded assembly with proper draft angles, tolerance stacks, and a tooling cost estimate. When that day comes, I'll be impressed, worried, and immediately suspicious of the tolerance callouts. Until then, the future of CAD and AI is incremental improvement. That's fine. It's how most useful technology actually progresses.