AEC Hub -- Field Guide

The Subscription Killer

A Field Guide to Claude for AEC

Almost every AEC firm pays for SaaS tools that wrap workflows they could now own internally. The cost of building firm-specific software has collapsed. This guide is what you need to know to start building.

  • $500–2K per seat per year
  • 27–59% of firms using AI
  • 200K-token context window
  • 1 weekend to first tool
aechub.org -- Published May 2026 -- Tags: ai, aec, automation
01
Start Here
The third path almost nobody is talking about

Everyone is talking about AI. Almost nobody is telling you what to actually do about it.

You know the drill. Every vendor in your inbox has wedged "AI-powered" into their tagline. Your LinkedIn feed is a parade of think pieces. Three SaaS tools you already pay for want to charge you extra for the AI tier. And somewhere in the middle of all of it, you're supposed to figure out what any of this actually means for the way your firm works on a Tuesday morning.

Wading through it is exhausting. So most people pick one of two paths. They either ignore the whole thing and hope it's a phase. Or they buy three more tools and call it a strategy.

There is a third path. Almost nobody is talking about it. The companies trying to sell you tools have no incentive to.

Here it is: you can build your own. This is the subscription killer.

Not a small thing. Not a side feature on something you already pay for. Actual tools. Tailored to the way your firm works. Connected to your data. Solving the friction your team complains about every Friday afternoon. The thing the past-project-finder vendor wants $1,800 a seat for, except your version actually works on your file structure, with your conventions, on the projects you have actually done.

Does it take work? Yes. Does it take testing? Absolutely. Does it take a few false starts? It does. But once it clicks, you own it. The running cost is a rounding error compared to the parade of vendor invoices you're paying right now. And the real upside is bigger than the money. You finally get to point a tool at the things that actually consume time inside your firm: the 20-minute hunt for an old detail, the two hours building proposal precedents, the Friday afternoon reconciling a drawing schedule. Things no SaaS company will ever optimize for, because they don't know your firm exists.

That is the unlock. That is what most people are missing while they argue about which chatbot is best.

But to get there you have to learn a few basics. Not all of them. Not the deep ones. Just enough to know how the pieces fit together. The three modes of Claude. How tokens and context windows actually work. How to write a prompt that doesn't waste your day. How to verify the output, because (and we'll come back to this a lot) Claude is wrong sometimes, and confidently wrong is the worst kind of wrong. How Skills, Plugins, and MCP servers stop being buzzwords and start being Lego pieces you can stack into something useful.

That is what this guide is. Not a sales pitch for AI. The context and the framework you need to go build something real.

Here is the goal: by the time you finish reading, pick one annoying thing your firm does every week. Spend a Saturday on it. End the weekend with a working tool. It does not have to be a big tool. It does not even have to be a useful tool the first time. The first one is a test. The point of the test is what happens in your head while you're doing it. An entire mental model shifts. You stop asking "which AI app should we buy?" and start asking "what do we want to build?"

That is where the real value lives. Not in adopting another shiny tool. In building bespoke tools nobody else has, on data nobody else has, for a firm nobody else runs.

There is a boatload more to this: open source models you can run locally, evaluation frameworks that let you trust the output, ways to ship tools across your team. Future AEC Hub guides will go deep on each. This one gets you off on the right foot.

Four Things To Take With You
  • The power has moved. You have your data. You have your workflows. The vendors don't.
  • Out of the box, AI is generic. The real win is in the firm context layer you build on top of it.
  • Verify everything. Especially the stuff that looks correct.
  • The boring problems are the gold. Whatever annoying thing your team does every week: that's the tool waiting to be built.
02
A Useful Lens Before You Start
Anthropic's AI Fluency framework. Twenty minutes that pays for itself

You can skip the philosophy and start clicking. That works fine for a while. But Anthropic publishes a short framework called AI Fluency that's worth twenty minutes of your time, because it names the thing that separates the firms quietly winning at AI from the firms quietly getting burned by it.

Two parts.

The Four Pillars describe what good AI work looks like: Effective (you got the outcome), Efficient (it didn't burn your week), Ethical (you'd defend the choices to a client), Safe (you accounted for failure modes, including the AI's own).

The 4Ds describe what you actually do.

The 4Ds
  • Delegation: choosing what you do, what Claude does, what you do together. Not everything should be handed off. Some things should be partly handed off. That choice shapes the work.
  • Description: communicating clearly enough for Claude to act. Vague prompts, vague output. This is a skill. It's learnable.
  • Discernment: evaluating what comes back. Was it right? Is the reasoning sound? Are the references real? This is the hard one.
  • Diligence: the responsibility layer. Verification, audit trails, who reviewed what, when.

Here is the part most teams miss. They get good at Description (writing prompts) and they enthusiastically Delegate (use Claude for everything), and then they ship work containing errors nobody caught. The teams that win lead with Discernment and Diligence. Everything else flows from there.

Read more at anthropic.com/ai-fluency. Pass it around your team.

03
Context Is the Moat
The firm context layer compounds. Vendors can't copy it

Here is the part the vendor demos hide.

Out of the box, Claude knows everything about everything and nothing about your firm. It will draft a fee proposal. Just not your fee proposal. It will write a spec section. Just not in your firm's voice, with your firm's structure, citing your firm's standards. It will analyze a drawing set. Just not the way your team has decided to analyze drawing sets after twenty years of figuring out what actually matters.

The vendor demo never shows you this. The vendor demo shows you a generic output that looks impressive because you have no baseline to compare it to. Take it back to your firm and the gap shows up immediately. It's close, but it's not yours. It's not how you would have written it. It's missing the thing your principal always asks for. It cites the wrong standard. It uses a layout your firm stopped using three years ago.

This is where almost everyone gets stuck. They try ChatGPT or Claude raw, get a result that's "kind of okay," conclude AI isn't ready, and go back to their old workflows. Or worse, they conclude AI is amazing because the output is pretty, and they ship work that quietly doesn't reflect the firm's standards.

Both responses are wrong. The right move is to operationalize your firm's context.

What does that mean in practice? You are not training the model. Anthropic trained the model. What you train is the system around it. You feed Claude your standards, your conventions, your spec format, your QA protocols, your project setup checklist, your design principles, the way your principals like deliverables written, your internal terminology. You do this through every mechanism this guide covers next: project instructions, CLAUDE.md files, Skills, Plugins. You do it once well. You maintain it deliberately. And from that point on, it shows up in every single piece of work that flows through Claude.

A useful way to think about it: you are not adopting AI. You are building your firm's AI context layer. The model is the engine. The context layer is the chassis you bolt around it that makes it actually work for your practice.

That layer is the moat. It is the thing a competitor cannot copy by signing up for the same Claude plan. It encodes how your firm actually works: the lessons, the standards, the patterns, the "don't ever do this again" notes from past projects. Every project that flows through it makes it a little smarter, because you encode what you learned that round. The layer compounds.

Most firms will skip this work. They will keep using AI raw and wondering why the output is mediocre. They will keep paying SaaS vendors for tools that bake in someone else's idea of how a firm should work. Don't be most firms.

The One Operational Decision
Someone at your firm owns the context layer. Maintaining it is part of their actual job. Every time the firm learns something (a new standard, a better template, a recurring mistake) that lesson gets folded back into the layer. That is the practice. That is what separates the firms quietly winning from the firms quietly spinning.

The how (CLAUDE.md, Skills, Plugins, the whole machinery) comes in Section 07. First, the basics.

04
The Three Claudes
Chat, Cowork, Code. Pick the right surface for the task

Here is the thing most people miss. Claude is not one product. It is three different surfaces stacked on top of one account, and each one is good at a different shape of work. Most users live in Chat and wonder why they are underwhelmed. Don't be most users.

Claude Chat
The Assistant

Conversation. Drafting, summarizing, research, second opinions on design decisions. Browser or desktop. If your task is a conversation, this is the surface.

Claude Cowork
The Operator

Multi-step jobs on your local files. Reorganizing archives, generating deliverables across many inputs, auditing folders. Desktop only. If your task is a job, this is the surface.

Claude Code
The Builder

Writes, modifies, and reviews software. Terminal, desktop, or claude.ai/code. Where firm-specific tooling lives. If your task is a tool, this is the surface.

The rule, one more time: conversation → Chat. Job → Cowork. Tool → Code. Memorize it. Picking the right surface is the first practical skill.

05
How Claude Works Under the Hood
Models, tokens, context window. Three things you have to know

You do not need to be an AI engineer to use Claude well. You do need to know three things: the model, the tokens, and the context window. Get these wrong and you will spend your week wondering why answers got worse.

Models. Think of the model picker as a dial. Bigger model means smarter answer and more tokens burned. As of May 2026 the dial has three settings: Opus 4.7 for deep reasoning and complex multi-step problems, Sonnet 4.6 as the everyday default, and Haiku 4.5 for fast, cheap, simple tasks. Sonnet handles most professional work well. Crank it up to Opus for complex analysis, multi-document review, or hard code work. Crank it down to Haiku for bulk, repeatable tasks where speed matters more than depth. When in doubt, take the latest Sonnet.

Tokens. Tokens are the unit Claude reads, writes, and bills in. A useful approximation: roughly four characters of typed English is one token, so a single page of a typical spec is around 750 tokens. Your prompts cost tokens. Claude's replies cost tokens. Anything attached to the conversation costs tokens. Tokens are how you pay; they are also how Claude thinks. Run out and the work stops mid-sentence.
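A back-of-envelope budgeting helper makes the arithmetic concrete. This uses the rough four-characters-per-token rule of thumb, not Anthropic's actual tokenizer, so treat the numbers as estimates for deciding what to attach, nothing more:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters of English text per token.

    A budgeting heuristic only. Real counts vary with content:
    code, tables, and dense technical text run heavier.
    """
    return max(1, len(text) // 4)

# Sanity-check a document before attaching it to a conversation.
spec_page = "x" * 3000             # a typical typed page is ~3,000 characters
print(estimate_tokens(spec_page))  # → 750
```

Run a document through this before uploading it; if the estimate is a large fraction of the 200,000-token window, trim it first.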

Paid plans operate on a five-hour rolling window of token usage. Max plans add a weekly cap on top of that. Anthropic does not publish exact figures per plan, but the shape matters: agentic surfaces (Cowork and Code) consume tokens far faster than Chat does, sometimes by an order of magnitude or more, because each "step" in an agent's plan is itself a model call. A few heavy Cowork sessions can flatten a Pro plan in an afternoon. If anyone in your firm is using Cowork or Code as a daily driver, give them Max. Pro is for exploring. Max is for working.

The context window. This is Claude's working memory for a single conversation. Standard is 200,000 tokens; there is a 1 million token beta on Sonnet for API users. Everything lives inside that window: your messages, Claude's replies, attached files, project instructions, connector results.

The window is sacred space. Treat it that way. As it fills with stale messages and old files, Claude starts to lose the thread, ignore earlier instructions, and produce strange output. The community calls this "context rot." It is the single biggest reason intermediate users get bad answers without understanding why. The fix is simple: start fresh conversations when you switch topics, upload only what is relevant, and prefer Markdown over PDFs for static reference material. A heavy PDF can eat 15,000 tokens by itself. Convert it once, save the tokens forever.

Pricing. Pro is $20 per month. Max starts at $100 per month and scales to $200 per month for the highest-usage tier. Team plans are per-seat with a five-seat minimum and add SSO and admin controls. Larger firms negotiate Enterprise contracts. Always confirm at claude.com/pricing.

One-time setup. Walk through Settings before your first real conversation. On free accounts, your chats are used to train the model unless you opt out. On paid consumer accounts, training is off by default for chat content. Make the choice consciously, especially if you handle anything under client NDA.

While you are in there, write your personalization paragraph. This is one block of text that travels with you into every conversation and every project. It is worth thinking about. Aim for short and pointed rather than exhaustive. A scaffold to start from:

Personalization Scaffold
  • Context: [your role] at an AEC firm working on [project types or technical domain].
  • Working style: I prefer pushback over agreement. If you spot a flaw in my reasoning, say so plainly. Avoid praise and avoid hedging.
  • Output style: skip preamble; lead with the answer. No em dashes. Cite sources for any factual claim I would have to check.
  • When a request is ambiguous, ask a clarifying question instead of guessing.
  • Before generating code, drawings, or long deliverables, confirm with me first.

Tweak the wording to match how you actually want to be addressed. This setup pays for itself in the first week and compounds from there.

06
Working With Claude Well
Description, Discernment, Diligence as practical skills

Time to actually use the thing. This is where the 4Ds from Section 02 (Description, Discernment, Diligence) show up as practical skills. The principles below are the ones most worth knowing for AEC work. Anthropic publishes deeper prompting guidance at docs.anthropic.com if you want to go further.

Description: How to Actually Prompt

Think step by step. The simplest reliable upgrade. Adding "think step by step" or "walk me through your reasoning" to a prompt produces meaningfully better output on anything analytical. For code compliance review, structural assessments, cost reasoning, or any task with branching logic, structured chain-of-thought is non-negotiable.

Use XML tags for structured prompts. Claude is specifically trained to recognize XML structure. When a prompt has multiple components (instructions, context, examples, formatting requirements), tags like <context>, <task>, <examples>, and <format> produce more reliable results than dumping everything in prose. This sounds technical; it isn't. It is just a way of putting labels on the parts of your request.
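A sketch of what that labeling looks like in practice. The tag names and the helper function are illustrative, not an official schema; the point is only that each part of the request gets an explicit label instead of being mixed into prose:

```python
def build_prompt(context: str, task: str, examples: list[str], fmt: str) -> str:
    """Assemble an XML-tagged prompt from labeled parts."""
    example_block = "\n".join(f"<example>\n{e}\n</example>" for e in examples)
    return (
        f"<context>\n{context}\n</context>\n\n"
        f"<task>\n{task}\n</task>\n\n"
        f"<examples>\n{example_block}\n</examples>\n\n"
        f"<format>\n{fmt}\n</format>"
    )

prompt = build_prompt(
    context="Firm standard: all details follow the 2023 naming convention.",
    task="Review the attached drawing list for naming inconsistencies.",
    examples=["A-501 Entry Detail -- correct", "A501_entry -- flag this"],
    fmt="A numbered list of flagged names, each with a one-line reason.",
)
print(prompt)
```

The same structure works typed by hand in the chat box; the function just keeps it consistent when a prompt gets reused.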

Show, don't just tell. Examples are more powerful than instructions. Rather than describing how you want a fee proposal structured, paste in two finished proposals and say "match this style." This is called few-shot prompting in the literature and it is one of the highest-leverage prompting techniques.

Put long context at the top. For documents over roughly 20,000 tokens, place them near the start of your prompt, above your instructions. This dramatically improves Claude's ability to locate and use the information. Counterintuitive, worth knowing.

Give Claude a role. "You are a senior project architect reviewing this drawing set for code compliance" is a better prompt opener than "review this drawing set." The role primes the response.

Iterate. Your first prompt is not the right prompt. Get over it and rerun. Treat prompting as a software development cycle: define what good output looks like, write a draft prompt, see where it fails, fix the prompt, run again. The pros iterate three or four times before they trust the output. The amateurs hit send and ship.

Discernment: How to Evaluate Output

This is the harder half and the more important one.

Quote grounding. When asking Claude to work with a long document (a spec, a contract, a building code section), the most effective accuracy upgrade is to require Claude to extract direct quotes first, then perform the task. "Find every clause in this contract that addresses change order procedure. Quote each one verbatim with its section number, then summarize." The quotes are auditable; the summary becomes much harder to fabricate.
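The quotes are auditable precisely because they can be checked mechanically. A minimal verbatim-checker sketch; whitespace is normalized before matching because line wrapping differs between the source document and Claude's output:

```python
def verify_quotes(source: str, quotes: list[str]) -> list[str]:
    """Return the quotes that do NOT appear verbatim in the source text."""
    def norm(s: str) -> str:
        return " ".join(s.split())

    flat = norm(source)
    return [q for q in quotes if norm(q) not in flat]

contract = """Section 7.2  Change Orders. The Contractor shall submit
a written change order request within ten (10) days."""

extracted = [
    "The Contractor shall submit a written change order request"
    " within ten (10) days.",
    "Change orders may be submitted verbally.",  # fabricated
]
print(verify_quotes(contract, extracted))
# → ['Change orders may be submitted verbally.']
```

Anything this flags either came from a different document or came from nowhere; both cases go back to a human.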

Citation and retraction. Have Claude cite sources for every factual claim. Then check that each cited source actually exists and contains the claim attributed to it. If it doesn't, the claim must be retracted. This is the single highest-leverage check against hallucination on knowledge work.

Best-of-N. For high-stakes outputs, run the same prompt two or three times and compare. If the answers diverge, the model is guessing. Consistency is not proof Claude is right, but inconsistency is proof something is wrong. Free signal. Use it.
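A cheap way to operationalize the comparison: collect the N answers and score agreement. A minimal sketch, with deliberately crude normalization; real divergence usually still needs a human eye to judge which run, if any, was right:

```python
from collections import Counter

def consistency(answers: list[str]) -> float:
    """Fraction of runs agreeing with the most common answer.

    1.0 means every run agreed. Anything lower is a signal to
    dig in before trusting the output -- not proof of an error,
    but proof the model is not certain.
    """
    normed = [" ".join(a.lower().split()) for a in answers]
    top_count = Counter(normed).most_common(1)[0][1]
    return top_count / len(normed)

runs = [
    "Load path is adequate.",
    "load path is adequate.",
    "Load path fails at grid C.",
]
print(consistency(runs))  # roughly 0.67 -- two of three runs agree
```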

Adversarial review. After Claude produces something, open a fresh conversation, paste the output, and ask Claude to tear it apart. Find the errors. Find the omissions. Flag the unsupported claims. Claude is often better at criticizing existing work than producing it from scratch. Lean on that.

Diligence: Human-in-the-Loop and Audit Trails

Anthropic's own research on agent autonomy is worth knowing. The data shows that the large majority of AI agent tool calls in mature production environments still have a human in the loop, and almost all have some safeguard layered in: restricted permissions, approval requirements, or explicit review gates. That is the operating norm at the frontier of AI use. It should be the operating norm in your firm.

Treat these as non-negotiable for AEC work:

Non-Negotiables for AEC Work
  • Anything that goes to a client gets reviewed by a qualified human first.
  • Anything that touches code-of-record drawings, structural calculations, sealed work, or permit applications gets the normal human review plus a verification pass for AI-introduced errors.
  • Cost estimates, schedules, and quantity takeoffs get a sanity check by someone who can spot a wrong-by-an-order-of-magnitude error.
  • Any text containing specific facts, citations, or reference numbers gets those facts spot-checked before the document leaves the firm.

The real risk is not that someone catches Claude making things up once and learns to distrust it. The real risk is the workflow that almost always works. The team gets comfortable. The verification step gets dropped. The failure mode shows up six months later in front of a client. Build the gates in early, write them down, and do not let comfort erode them.

Anthropic's official guidance on reducing hallucinations and building evals lives at docs.anthropic.com/en/docs/test-and-evaluate and anthropic.com/engineering/demystifying-evals-for-ai-agents.

07
The Four Mechanisms
Where firm context actually lives. Projects, Skills, Connectors, Plugins

Back to the moat. Section 03 made the case that firm context is the thing that separates a firm using AI well from a firm using ChatGPT raw. This is the section where we get into how you actually operationalize that context. Four mechanisms do the heavy lifting. Learn these and the rest of Claude falls into place.

Projects and CLAUDE.md

Projects are how you scope context. In Chat, a Project is a cloud workspace with its own custom instructions and attached files. In Cowork and Code, a Project is a folder on your computer, and the equivalent of Project instructions is a Markdown file at the project root called CLAUDE.md.

If you remember one thing from this guide, remember CLAUDE.md. It is the most underused mechanism in the entire toolchain, and it is the primary place your firm's context gets encoded. Plain text. Sits at the root of your project folder. Claude reads it automatically every single time it works inside that project.

What goes in it: your firm conventions. Your standards references. Your spec format. Your project-naming rules. Your QA protocol. The way your principals like deliverables written. The names of the people on the project. The patterns you want followed. The mistakes Claude made last time and how to avoid them. Anything a new hire would need to be briefed on before they touched a deliverable.

Treat it as a living briefing document that gets smarter every week. Every project that ends becomes a chance to update the firm's master CLAUDE.md with what was learned. That is what compounding firm context looks like in practice. (Use CLAUDE.local.md for personal overrides you don't want in a shared repo.)
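What that looks like on the page: a minimal skeleton, with bracketed placeholders standing in for your firm's specifics and purely illustrative example lines, might be:

```markdown
# CLAUDE.md -- [Firm Name] master context

## Conventions
- Sheet numbering follows [your standard], e.g. A-501 for details.
- Specs cite the section number in every reference.

## Deliverable style
- Lead with the conclusion; principals read the first paragraph only.
- No unverified code citations: quote the clause or flag it for review.

## Known failure modes
- [date]: Claude used the pre-2023 title block. Current template: [path].
```

Short headings, short bullets, updated whenever the firm learns something. That's the whole format.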

Why Markdown? Because PDFs are a tax on your context window. A typical PDF carries layout instructions, font tables, embedded images, and structural metadata that an LLM has to wade through to find your actual content. A Markdown version of the same document strips all of that out and keeps only the meaning. The savings are enormous; the loss is invisible. For firm reference material that gets reused across projects (standards, specifications, design guides, naming conventions), convert once, save it as Markdown, and stop fighting your tokens.

Skills

A Skill is a reusable Markdown bundle that captures how to do a recurring task. Here is the trigger: if you find yourself typing the same instructions three times, that's a Skill. Stop retyping. Save it once. Use it forever.

Examples: a Skill that encodes your firm's writing voice for technical reports. A Skill that captures your protocol for a design review. A Skill that codifies how to assemble a fee proposal. Claude can pick a Skill automatically when it spots a match. You can also invoke one explicitly by typing / in the chat box. Skills live in the cloud or locally; local Skills do not roam between machines.

Connectors and MCP

Connectors are the bridge from Claude to other software. Anthropic ships official connectors for the common SaaS layer: Gmail, Google Calendar, Google Drive, Notion, Asana, Canva. Behind them is an open protocol called MCP, the Model Context Protocol. MCP is the standard way for AI tools to talk to other software and was originally proposed by Anthropic. The connector library covers SaaS; MCP covers everything else, including the design software your firm actually uses. Section 08 covers the AEC MCP ecosystem.
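Wiring a community MCP server into the desktop app is a short JSON entry in the app's config file (commonly `claude_desktop_config.json`). A sketch of the shape, assuming a hypothetical `rhino-mcp` package launched with `uvx`; check Anthropic's current MCP documentation for the exact file location and schema, which can change between releases:

```json
{
  "mcpServers": {
    "rhino": {
      "command": "uvx",
      "args": ["rhino-mcp"]
    }
  }
}
```

Restart the app after editing and the server's tools show up in the conversation.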

Plugins

Plugins are how you ship a workflow. Where a Skill is one instruction set, a Plugin wraps several pieces (Skills, connectors, slash commands, MCP configurations) into something a colleague can install with a single click. Anthropic publishes official Plugins for common business functions; firms build their own for the workflows nobody else has. Plugins live only in Cowork. They don't appear in regular Chat. The practical use case: your firm has figured out how to do a tricky workflow well, and you want every project team to use the same approach without retraining everyone.

A Note on Artifacts

When Claude produces something with structure (a document, a slide deck, a spreadsheet, a small web app, a diagram), it returns it as an Artifact rather than dumping it into chat. Supported formats include Word, PowerPoint, Excel, PDF, HTML, React, SVG, Markdown, and Mermaid diagrams. In Chat, Artifacts are stored in the cloud. In Cowork, they are local files in your project folder; on April 22, 2026, Cowork added a Live Artifacts feature that gives you the cloud Artifact experience while keeping the file local. Useful when you cannot send client work to a cloud service.

08
Connecting Claude to AEC Software
The state of the AEC MCP ecosystem in 2026

This is where things get fun for AEC. Until very recently, "talking to your Revit model" or "scripting Rhino in plain English" was a research-paper fantasy. As of 2026, it is a Tuesday afternoon. The AEC MCP ecosystem is still young (most of these servers appeared in late 2025 or early 2026) but it is real, and a working setup is feasible today.

Rhino and Grasshopper. Community MCP servers exist and are usable. They open a two-way connection between Claude and a running Rhino instance, letting Claude inspect geometry, manage layers, query and manipulate objects, and inspect or drive Grasshopper components in natural language. Reference projects: grasshopper-mcp and rhino-mcp.

Revit. Autodesk has shipped an official AEC Data Model MCP server, and Revit 2027 has built-in MCP support. With it, Claude can read element categories, parameters, geometry, and spatial data, and can drive operations like finding and tagging elements or generating views and schedules. See Autodesk's APS blog post on the AEC Data Model MCP server.

Dynamo. Autodesk's Dynamo team has signaled MCP support on the public roadmap. The current direction lets Claude become an executable step inside a Dynamo workflow rather than just an external assistant.

Civil 3D and AutoCAD. Both have working community MCP implementations. The AutoCAD community server gives Claude direct access to AutoLISP execution. The Civil 3D servers expose project data and let Claude create, modify, and delete elements through automation.

Figma. Figma ships an official MCP server. It gives Claude design context (components, variables, layout, FigJam content) and the ability to write to the canvas or generate code from frames. For firms that use Figma for diagrams, design briefs, or research presentations, this is genuinely useful. See Figma's MCP guide.

A note on stability. Official MCP servers from large vendors will mature steadily through 2026 and 2027. Community servers move faster but break more often. For firm-wide deployments, prefer the official path. For prototypes and experiments, the community path is fine.

There is also a separate browser path. Claude for Chrome is an extension that lets Claude operate inside a tab the way a human would: clicking, filling forms, extracting data. It is slower and less reliable than a real MCP connection, but it works on permitting portals, supplier websites, manufacturer catalogs, and other AEC-relevant sites that have no API.

09
Killing Subscriptions, Building Tools
The audit, the playbook, and what to actually build first

This is the part the rest of the guide builds toward.

Before we get into the playbook, let's get concrete. Here are tools that AEC firms have already built or could realistically build in a weekend. None of them replace your core software. They sit on top of it.

What Firms Are Building
  • A past-project finder. "Where's the entry detail we drew on the Henderson project in 2021?" That question costs your team twenty minutes every time it comes up. Point Claude at your project archive, build a thin layer that indexes drawings and references, and now anyone in the office types a question and gets a link back. Total cost: a Saturday and the project folders you already have.
  • A precedent puller for proposals. Incoming RFP looks like the museum project from 2019? Claude scans your past proposals, finds the relevant precedents, and assembles a starting draft with the right examples plugged in. The pursuit team gets back the half-day they used to spend digging.
  • A spec consistency checker. Compares what you're writing against your firm's standards library and flags anything that contradicts. Not a replacement for review. A faster first pass.
  • A weekly project status digest. Reads project folders, calendars, and the RFI log, produces a half-page summary. PMs stop chasing for updates. Principals stop flying blind.
  • A meeting-to-task tool. Notes go in. Action items get extracted. Tasks land in Asana with the right owners and dates. Twenty minutes back per meeting.
  • A drawing schedule QA tool. Checks that every drawing referenced in the schedule actually exists, that numbering is consistent, that revision dates line up.

Notice what's on this list and what isn't. None of these replace Revit. None replace Rhino, Bluebeam, AutoCAD, your ERP, or your accounting software. The core stays. What you're building is the connective tissue between those tools, plus the small custom workflows nobody else is going to build because they don't know how your firm operates.
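To make "connective tissue" concrete, here is what the drawing schedule QA tool from the list above might look like as a first pass. Everything firm-specific is a stand-in: the `A-101`-style numbering and the `<number> <title>.pdf` filename convention are hypothetical, so swap in your own patterns:

```python
import re
import tempfile
from pathlib import Path

def audit_schedule(schedule: list[str], drawing_dir: Path) -> dict[str, list[str]]:
    """Check a drawing schedule against the files actually on disk.

    Flags schedule entries that break the numbering convention, and
    entries whose drawing file is missing from the folder.
    """
    number = re.compile(r"^[A-Z]{1,2}-\d{3}$")
    on_disk = {f.name.split(" ")[0] for f in drawing_dir.glob("*.pdf")}
    return {
        "bad_numbers": [s for s in schedule if not number.match(s)],
        "missing_files": [s for s in schedule
                          if number.match(s) and s not in on_disk],
    }

# Tiny demo against a throwaway folder.
demo = Path(tempfile.mkdtemp())
(demo / "A-101 Floor Plan.pdf").touch()
report = audit_schedule(["A-101", "A-102", "a101"], demo)
print(report)  # → {'bad_numbers': ['a101'], 'missing_files': ['A-102']}
```

Thirty lines, no vendor, and it encodes a convention only your firm knows. That is the shape of most of the tools on the list.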

The Subscription Audit

Before you build, run a quick audit. Pull every tool the firm pays for. For each one, ask: is this load-bearing software (Revit, Rhino, Bluebeam, AutoCAD, ERP), or is it a thin layer over a recurring workflow (a fee proposal generator, a one-feature QA tool, a spec-template service, a niche project add-on)?

The thin layers are the candidates. Most firms find they're paying somewhere between $500 and $2,000 per seat per year for tools that wrap a workflow they could now own internally. Multiply by seat count. That's the prize. Two or three subscriptions retired pays for the in-house build many times over in year one alone.

And here's the part nobody talks about. When you kill a subscription, you don't just save the line item. You also stop being dependent on a vendor's roadmap, their pricing changes, their data export policies, their security incidents, their sunset announcements. You own the tool. You own the data. You change it whenever you want.

Where Claude Code Fits

Claude Code is the agent that writes, modifies, and reviews software. You describe what you want in plain language. Claude plans the work, writes the code, runs tests, fixes what breaks, keeps going. Runs in a terminal, in the desktop app, or in a browser at claude.ai/code.

You still need a baseline. Know what a code repository is. Know how Git works. Know roughly what your tool will run on. Without that, you'll get useful output but you won't catch when Claude is wrong, and the result will suffer.

Inside firms that have started building, the people leading it are usually not full-time developers. They're technology directors, design technology specialists, computational designers, or operations leads who learned the basics. "Person who can clearly specify a workflow and review what Claude builds" is becoming a recognizable role inside AEC firms. Anthropic's own engineering team describes the most successful internal pattern as treating Claude Code as a thought partner, not a code generator. That framing applies just as well outside engineering.

The Weekend Playbook

Notice that most of these steps are about capturing context, not writing code. That is intentional. The tool is the easy part. The firm context layer is the moat.

The Weekend Playbook
  1. Pick one annoying thing. Not the sexiest workflow. The most boring, repetitive, friction-laden one your team does every week. Boring is where the leverage is.
  2. Open a Cowork project in a folder dedicated to that workflow. Write a CLAUDE.md that captures the firm context for this workflow: the inputs, the outputs, the conventions, the standards it has to respect, the people involved, the things your firm has decided over the years that nobody else would know. This is the most valuable artifact in the whole exercise.
  3. Run the workflow once with Claude assisting. Note what works and what breaks. Pay attention to the moments where Claude does something almost right; that's a missing piece of firm context.
  4. Update CLAUDE.md with the corrections. Encode what Claude got wrong and why. Run it again. After three or four iterations, the loop stabilizes.
  5. Define the verification gate. Before this becomes a firm tool, write down: who reviews the output, what they check for, what triggers a rebuild. Skip this step and you've built a liability, not a tool.
  6. Move from Cowork to Claude Code. Have Claude Code wrap the workflow as a small repeatable tool with a clear interface, a short README, and a way to run it without rewriting prompts. The CLAUDE.md you wrote in step 2 ships with the tool.
  7. Bundle as a Plugin and deploy across the team. Make sure the firm context the tool depends on is documented and owned. As the firm learns more, that context gets updated, and every team using the tool benefits the next time they run it.
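
The CLAUDE.md in step 2 is plain Markdown, and it doesn't need to be elaborate. Here is a minimal sketch for a hypothetical drawing-schedule workflow; every convention shown is an invented example, not a standard:

```markdown
# Drawing Schedule Reconciliation — Firm Context

## Inputs
- Sheet list exported from Revit (CSV, one row per sheet)
- The issued drawing register (Excel, maintained by the PM)

## Conventions (ours, nobody else's)
- Sheet numbers follow A-101 style: discipline letter, dash, three digits.
- "Issued for Construction" is abbreviated IFC in the register, never spelled out.
- Sheets marked VOID stay in the register but must never appear in a new issue.

## Outputs
- A discrepancy list: sheets in one source but not the other, title mismatches.

## Verification
- The project architect reviews the discrepancy list before anything is reissued.
```

Notice that most of this file is decisions your firm has already made. Writing it down is the work; Claude just reads it.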

Your first tool will take longer than expected. Your fifth will be fast. You're not just building tools. You're building the firm context layer those tools sit on top of. That is the part that compounds.

One scope warning. Don't start with the most ambitious workflow. Start with something small, bounded, and low-risk, where verification is easy and failure is cheap. Build trust in the process before pointing it at anything sealed, billable, or contractually sensitive.
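
A tool at the end of step 6 can be tiny. Here is a minimal Python sketch of the kind of wrapper Claude Code might produce for a drawing-schedule check; the naming convention and the sheet numbers are invented for illustration:

```python
import re

# Hypothetical firm convention: discipline letter, dash, three digits (e.g. A-101).
SHEET_PATTERN = re.compile(r"^[A-Z]-\d{3}$")

def reconcile(model_sheets, register_sheets):
    """Compare the sheet list exported from the model against the issued
    register. Returns a dict of discrepancies for a human to review —
    the tool flags, it never decides."""
    model = set(model_sheets)
    register = set(register_sheets)
    return {
        "malformed": sorted(s for s in model | register if not SHEET_PATTERN.match(s)),
        "in_model_only": sorted(model - register),
        "in_register_only": sorted(register - model),
    }

if __name__ == "__main__":
    report = reconcile(
        model_sheets=["A-101", "A-102", "S-201"],
        register_sheets=["A-101", "A-103", "S201"],
    )
    for issue, sheets in report.items():
        print(f"{issue}: {', '.join(sheets) or 'none'}")
```

Twenty lines, no dependencies, and it encodes one firm convention that no vendor product would ship with. That is the shape most first tools take.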

10
Practical Setup and Field Guide Checklist
What to do this week, and what to keep doing

If You Are Starting Today

Install the Claude desktop app from claude.com/download on the machines where you actually work. The browser version misses Cowork and most of the AEC integration story.

Subscribe to Pro for personal exploration. If your firm is committing to the toolchain, move to Max or Team within the first month.

Spend an hour on your one-time setup: privacy settings, personalization paragraph, your first Project with a CLAUDE.md, one Skill that captures your writing voice. This setup compounds.

Designate a context owner. Pick the person at your firm responsible for maintaining the firm's shared CLAUDE.md, Skills, and Plugins. This is not a side-of-desk task. As the firm learns, the context layer needs to absorb those lessons or the work doesn't compound. Without an owner, the layer rots. With an owner, it gets sharper every quarter.

Pick one AEC integration to start with based on the software your firm uses most. For a Rhino-heavy practice, install one of the Rhino MCP servers and run a single small experiment. For a Revit-heavy practice, set up the AEC Data Model MCP. Do not try to wire up everything at once.
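
For desktop installs, MCP servers are typically registered in a JSON config file (claude_desktop_config.json), each entry naming the command that launches the server. The server name and module below are placeholders, not a real package; check the README of whichever Rhino MCP server you choose for the actual values:

```json
{
  "mcpServers": {
    "rhino": {
      "command": "python",
      "args": ["-m", "rhino_mcp_server"]
    }
  }
}
```

One entry, one restart of the app, and Claude can see the tools that server exposes. Resist adding a second entry until the first one has earned its place.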

When you find yourself doing the same prompt or pattern three times, capture it as a Skill or a Plugin. That is the firm context layer growing.
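
A Skill is a folder with a SKILL.md at its root: YAML frontmatter telling Claude when the Skill applies, then plain instructions. A sketch for a hypothetical proposal-precedents Skill, with invented workflow details:

```markdown
---
name: proposal-precedents
description: Assembles past-project precedent pages for proposals in the firm's format and tone.
---

When asked for proposal precedents:
1. Pull candidate projects from the past-projects index.
2. Use the firm's one-paragraph format: scope, size, delivery method, one distinctive fact.
3. Flag any project older than ten years for a partner to confirm before it ships.
```

The prompt you typed three times becomes a file the whole firm inherits. That is the mechanical difference between a habit and a context layer.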

Anthropic Academy

Anthropic publishes a course library at anthropic.com/learn. Three courses are particularly relevant for AEC technology leaders:

  • AI Fluency: Framework and Foundations. The 4Es and the 4Ds in depth. The right starting point for anyone leading AI adoption inside a firm. Worth running through as a leadership team, not just as individuals.
  • Claude API Development Guide. Prompting, tool use, function calling, and agentic systems. Useful for the design technology specialists who will actually be building.
  • Model Context Protocol (MCP) Fundamentals. How to build MCP servers and clients. The right course for whoever in your firm will be writing or maintaining custom MCP integrations against your design software.

If your firm is taking this seriously, designate two or three people to complete these in the first quarter and report back to leadership on what they would change in firm practice.

Field Guide Checklist

Before any AI-assisted work leaves the firm

  • The output has been reviewed by a human qualified to catch errors.
  • Specific facts, citations, code references, and reference numbers have been spot-checked against sources.
  • For high-stakes outputs (sealed work, permit applications, cost commitments), there is a documented verification gate someone owns.
  • The Project's CLAUDE.md reflects what was learned in this round, so the next time is better.

Across the firm

  • Privacy and training settings have been set deliberately on every account, especially free ones.
  • A short, honest internal policy exists describing what client data can and cannot go into Claude.
  • A context owner is named. The firm CLAUDE.md, Skills, and Plugins are maintained as an actual ongoing practice, not a side-of-desk task.
  • Two or three people have completed AI Fluency or equivalent training.
  • The subscription audit has been run at least once, and the candidates for replacement are documented.

Closing

The actual situation in 2026 is simpler than the AI hype cycle makes it sound. Strip away the noise and you're left with this:

  • The cost of building firm-specific software has collapsed.
  • You own your data and your workflows.
  • Vendors don't have either.
  • Therefore, the layer on top of your core tools is yours to build, if you want it to be.

That's the whole pitch. No revolution. No "transformation." Just a quiet, very real shift in who gets to build what.

You don't have to take any of this on faith. Pick one boring workflow. Spend a Saturday on it. See what happens. Worst case, you lose an afternoon. Best case, you've started the muscle that lets your firm own the tooling layer for the next decade.

Future AEC Hub guides will go deeper: hands-on integration walkthroughs, case studies of firms killing subscriptions, deep dives on Claude Code for AEC, and templates ready to drop into your practice. A live AEC Hub course is in development for fall 2026. Join the waitlist on the AEC Hub site.

Until then: the tools that matter to your firm are the ones you start building this weekend.



Need help applying this to your firm?

We advise AEC firms on technology and AI strategy. Tool audits start at $750. The fee credits 100% toward any deeper engagement.

See advisory services