Is your design data safe with text-to-CAD tools?
You're sending geometry descriptions to cloud APIs. Your prompts describe proprietary parts. The output passes through someone else's servers. Here's what you should know about data safety.
Quick answer
Text-to-CAD tools process prompts on cloud servers. Zoo.dev's API processes your text descriptions server-side. Most tools don't store generated geometry long-term, but your prompts describe proprietary designs. Check each vendor's data retention policy, NDA compliance, and whether prompts are used for model training. Self-hosted options are extremely limited.
Your text-to-CAD prompts describe proprietary geometry, and every one of them leaves your machine. That's the short version. The longer version involves reading privacy policies written by lawyers who get paid by the clause, and I've done that so you don't have to. I spent a Tuesday evening going through the data handling documentation for every text-to-CAD tool I could find, which is not how I'd normally choose to spend an evening, but a client had asked me point-blank whether their enclosure designs were safe with these tools, and I didn't have a good answer. Now I do. It's complicated.
The thing about text-to-CAD that makes data safety different from, say, using a cloud CAD tool like Onshape is the nature of the input. When you use Onshape, you're uploading geometry you've already created. When you use text-to-CAD, you're describing geometry that doesn't exist yet, in plain language that spells out dimensions, features, and sometimes the exact purpose of the part. A prompt like "cylindrical housing for a pressure sensor, 22mm OD, 1.5mm wall, with a sealed cable gland on one end" tells the server quite a lot about what you're building. If that's a proprietary product under development, your prompt is a design disclosure in a text box.
What actually leaves your machine
When you send a prompt to a text-to-CAD tool, the following data typically goes to the vendor's servers:
- Your text prompt: the geometry description, dimensions, features, and any design intent you've written.
- Optional parameters like output format, units, and material hints.
- Your account credentials or API key for authentication.
- Metadata: timestamps, IP address, session information, the usual web request payload.
The server processes this, runs inference on a large AI model, generates geometry, and sends the result back. Depending on the tool, the generated geometry might also be stored temporarily or permanently on the server side.
What doesn't leave your machine is any existing CAD data on your local drive, your feature trees, your assembly files, or your manufacturing drawings. Text-to-CAD tools don't reach into your file system. The risk is specifically about what you type into the prompt and what the server does with it afterward.
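To make the picture concrete, here is roughly what a text-to-CAD request looks like on the wire. The endpoint shape, field names, and header values below are illustrative, not any specific vendor's schema; the point is that the prompt, the optional parameters, and your credentials all travel together in one request.

```python
import json

# Illustrative request body for a hypothetical text-to-CAD API.
# Field names are invented for this sketch; real vendors differ.
payload = {
    "prompt": "cylindrical housing for a pressure sensor, "
              "22mm OD, 1.5mm wall, sealed cable gland on one end",
    "output_format": "step",   # optional parameters are transmitted too
    "units": "mm",
}
headers = {
    "Authorization": "Bearer YOUR_API_KEY",  # identifies your account
    "Content-Type": "application/json",
}

body = json.dumps(payload)
# Everything above, plus the IP address and timestamps added in transit,
# is what the vendor's servers receive.
print(body)
```

Notice that the prompt itself is the payload. There is no way to send a "less revealing" version of the request without sending a less revealing description of the part.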
Vendor policies: what I found
I went through the published data policies for the main text-to-CAD tools. Here's what they say, and what they don't say.
Zoo.dev processes prompts server-side through their KittyCAD geometry kernel. Their privacy policy states they collect usage data including prompts and generated outputs. The key question for enterprise users is whether prompts are used to train future models. As of early 2026, Zoo's terms allow them to use data to improve their services, which is standard language that could include model training. For organizations with strict IP policies, that ambiguity is a problem. Zoo does not currently offer a self-hosted deployment option, which I covered in the text-to-CAD self-hosted post.
AdamCAD runs generation on their cloud servers. Their terms of service describe standard data collection. The parametric model generation happens server-side, and the results are sent back to your browser. I couldn't find a published statement specifically addressing whether prompts are used for model training, which in practice means you should assume they might be.
CADScribe generates STEP and STL files server-side. Their privacy documentation is limited. For a tool that handles geometry descriptions, the lack of a detailed data retention policy is itself a data point, and not a reassuring one.
For tools that rely on third-party AI providers (like CADAgent, which uses the Anthropic API), your prompts also pass through the AI provider's infrastructure. Anthropic's commercial API terms state they don't train on API inputs by default, but that's Anthropic's policy, not the tool developer's. The prompt travels through two sets of servers and two sets of policies.
The training data question
This is the one that makes engineers uncomfortable. If a vendor uses your prompts to train their AI model, your design descriptions become part of the model's learned knowledge. Not in a way where someone can extract your exact prompt, but in a way where the patterns, dimensions, and design approaches you described influence future outputs.
Is that a real risk? For a single prompt describing a generic bracket, probably not. The model has seen thousands of brackets. Yours doesn't meaningfully change anything. For a prompt describing a proprietary mechanism with unusual geometry and specific dimensional relationships, the calculus is different. The more unique your design, the more identifiable the training signal.
Most major AI providers offer opt-out mechanisms for training data use on their commercial API tiers. OpenAI's API does not use inputs for training by default. Anthropic's API has the same policy. Google's enterprise Gemini offerings have similar commitments. But these policies apply to the AI provider, not necessarily to the text-to-CAD tool built on top of them. A tool developer could, in principle, log your prompts and use them for their own fine-tuning regardless of the underlying AI provider's policy.
The practical advice: read the tool's own data policy, not just the AI provider's. Ask explicitly whether prompts are used for model training. Get the answer in writing if your IP requires it.
NDA and IP implications
If you're working under an NDA, describing a client's product geometry in a text prompt sent to a third-party cloud service could be a breach. I'm not a lawyer, and this isn't legal advice, but I've been in enough contract reviews to know that "reasonable measures to protect confidential information" probably doesn't include typing that information into a cloud API with default privacy settings.
The industries where this matters most are exactly the ones you'd expect. Defense contractors operate under ITAR and CUI regulations that explicitly restrict where technical data can be processed and stored. Sending a prompt describing a defense component to a cloud API server, especially one that might process data outside the US, is a compliance problem, not a preference.
Medical device companies working under FDA 21 CFR Part 820 and ISO 13485 have design control requirements that include controlling access to design outputs. A text-to-CAD prompt that describes a device component's geometry is arguably a design input, and sending it to an unvalidated cloud service creates a documentation and compliance gap.
Aerospace suppliers operating under AS9100 face similar traceability and information security requirements. Automotive suppliers with TISAX certification have information security obligations that cover product design data.
For all of these industries, the question isn't whether text-to-CAD is useful (it might be), it's whether the data handling meets regulatory requirements. In most cases today, it doesn't, because the tools weren't designed with regulated industries in mind.
What you can actually do about it
The most secure approach is to not use cloud text-to-CAD tools for proprietary designs at all. That's the honest answer, even if it's not the exciting one. For non-sensitive work (concept exploration, generic parts, educational use, designs that aren't under IP protection), the data risk is minimal and the tools are useful.
For sensitive work, you have a few options. The self-hosted route using OpenSCAD with a local LLM keeps everything on your machine, but the output quality is significantly lower than cloud tools. Running a local LLM to generate FreeCAD or build123d scripts is another option, with similar quality trade-offs.
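To make the self-hosted route concrete, here is the shape of a local pipeline: a local LLM (run via something like llama.cpp or Ollama) emits OpenSCAD source that never leaves your machine. The template function below is a stand-in for the model's output, and the dimensions and part are hypothetical; it only illustrates that the whole round trip can stay local.

```python
def housing_scad(od: float, wall: float, height: float) -> str:
    """Return OpenSCAD source for a simple open-topped cylindrical housing.

    Stands in for what a locally hosted LLM would generate from a prompt;
    nothing in this pipeline touches the network.
    """
    inner_d = od - 2 * wall  # inner diameter left after the wall thickness
    return (
        "$fn = 96;\n"
        "difference() {\n"
        f"    cylinder(h = {height}, d = {od});\n"
        f"    translate([0, 0, {wall}])\n"
        f"        cylinder(h = {height}, d = {inner_d});\n"
        "}\n"
    )

scad = housing_scad(od=22.0, wall=1.5, height=40.0)
print(scad)  # save as housing.scad and render locally with OpenSCAD
```

The quality gap the text describes shows up exactly here: a cloud tool would turn a vague prompt into clean B-Rep geometry, while the local route gives you script-level primitives you still have to steer yourself.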
If you must use a cloud tool for sensitive work, take these steps:

- Read the vendor's data retention policy and confirm in writing how long prompts and outputs are stored.
- Ask whether prompts are used for model training and whether you can opt out.
- Use the API rather than a web interface when possible; API terms sometimes offer better data handling commitments than consumer-facing products.
- Avoid including project names, client names, or product identifiers in your prompts. Describe geometry abstractly when you can.
- Keep a log of what prompts you've sent and to which service. If you ever need to demonstrate due diligence to a client or auditor, that record matters.
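The last two habits, scrubbing identifiers and keeping a record, are easy to automate. This sketch assumes you maintain your own watchlist of sensitive terms; the redaction behavior and the log format are illustrative, not a feature of any vendor's tooling.

```python
import hashlib
import re
from datetime import datetime, timezone

# Hypothetical watchlist: your own client and project identifiers.
SENSITIVE_TERMS = ["AcmeCorp", "Project Falcon"]

def scrub(prompt: str) -> str:
    """Redact project/client identifiers before a prompt leaves the machine."""
    for term in SENSITIVE_TERMS:
        prompt = re.sub(re.escape(term), "[REDACTED]", prompt,
                        flags=re.IGNORECASE)
    return prompt

def log_entry(service: str, prompt: str) -> str:
    """One due-diligence record line: when, which service, and a hash
    of what was sent (so the log itself never stores the prompt)."""
    digest = hashlib.sha256(prompt.encode()).hexdigest()[:16]
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return f"{stamp}\t{service}\tsha256:{digest}"

clean = scrub("Project Falcon sensor housing, 22mm OD, 1.5mm wall")
print(clean)  # identifiers gone, geometry description kept
print(log_entry("example-vendor", clean))
```

Hashing the prompt in the log is a deliberate choice: the record proves what was sent and when without turning your own log file into a second copy of the design disclosure.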
If your organization has a data classification system, text-to-CAD prompts for proprietary designs should be treated at whatever level your geometry data falls under. If your STEP files are "confidential," the prompts that describe those STEP files should be "confidential" too.
The gap between convenient and safe
The frustrating part of this whole topic is that the tools that produce the best results are the ones that require the most trust. Zoo.dev generates the best B-Rep geometry, and it's cloud-only. The self-hosted options that protect your data completely produce noticeably worse output. There's no tool in 2026 that gives you cloud-quality text-to-CAD generation with on-premise data security.
That gap will close eventually. Local AI models are getting larger and more capable. The open-source geometry kernels (OpenCascade, build123d) are mature enough to handle the generation side. The bottleneck is the language model quality for local inference, and that's improving faster than most other parts of this stack.
Until then, the data safety question for text-to-CAD comes down to a trade-off that every team has to make for themselves. The convenience of typing a prompt and getting usable geometry in fifteen seconds is real. The risk of sending proprietary design descriptions to a cloud service is also real. Pretending either side of that trade-off doesn't exist is how organizations end up surprised, either by slow workflows or by uncomfortable questions from a compliance auditor who found out where their design data went.
My habit is simple: I use cloud text-to-CAD tools freely for personal projects, concept work, and anything that isn't under an NDA. For client work with IP sensitivity, I model the part myself in Fusion 360 and keep my prompts to myself. It's less exciting than the demo reel, but my clients' geometry stays where it belongs, which is on my machine and nowhere else.