Why AI governance matters, even for non-tech industries
In January 2026, the operations director of a mid-size Portland construction firm (90 employees, commercial and residential projects across Southern Maine) reached out to us. The problem she described was one we hear often: "I just found out three of our project managers are using ChatGPT to write client proposals. I don't know if that's fine, a problem, or a disaster."
Thirty days later they had a workable policy, a trained team, and clear answers. Here's exactly how we did it.
Week 1 — Discovery
We started with a short anonymous survey to all 90 staff. We asked three questions: What AI tools are you using at work? What do you use them for? What are you unsure about?
The results were eye-opening for the leadership team:
- 47% of staff were using at least one AI tool regularly — not just project managers
- Tools in use included ChatGPT, Microsoft Copilot, Grammarly, and Otter.ai for meeting notes
- Staff were using AI for proposals, client emails, invoice descriptions, safety documents, and even subcontractor agreements
- The top "unsure" question was: "Is it okay to paste client information into these tools?"
Week 2 — Risk Mapping
We held a two-hour session with leadership to classify the actual risks. Not every AI use is risky — drafting a general company update is fine. Pasting a client's full property details and contract terms into ChatGPT is not.
We mapped their workflows into three tiers:
- Green: AI use is fine with any tool
- Yellow: AI use is okay with enterprise tools only
- Red: no AI use without explicit approval
This tier system became the foundation of the policy.
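For teams that want to make a tier system like this self-serve (say, a lookup on the company intranet), the mapping can be sketched as a simple table with a safe default. Everything below is a hypothetical illustration: the task categories and the `tier_for` helper are our invention for this sketch, not the firm's actual policy.

```python
# Hypothetical sketch of a three-tier AI-use lookup.
# Task categories are illustrative, not the firm's real policy.
TIERS = {
    "company_update": "green",            # fine with any tool
    "internal_email": "green",
    "proposal_draft": "yellow",           # enterprise tools only
    "client_email": "yellow",
    "contract_terms": "red",              # no AI without explicit approval
    "client_property_details": "red",
}

def tier_for(task: str) -> str:
    """Return the tier for a task; unknown tasks default to 'red' (ask first)."""
    return TIERS.get(task, "red")

print(tier_for("company_update"))          # green
print(tier_for("subcontractor_agreement")) # red -- not listed, so ask first
```

The design choice worth copying is the default: anything not explicitly classified falls into the red tier, which mirrors the policy's "ask-first" rule for edge cases.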
Week 3 — Policy Drafting
We wrote a one-page policy in plain language. No legalese. The entire document was 400 words. It covered:
- Approved tools: Microsoft 365 Copilot (licensed), Grammarly (business edition)
- Prohibited: Free public AI tools for anything involving client data, pricing, or contracts
- Data rules: green/yellow/red tier definitions with concrete examples from their actual work
- Review requirement: Any client-facing document drafted with AI must be reviewed by a second person before sending
- Ask-first contact: The operations director, for any edge cases
Week 4 — Rollout and Training
On the first Friday of week 4, we ran a 45-minute all-hands training. We walked through the policy, showed real examples of each tier, answered questions, and gave everyone a laminated one-page reference card.
The key move: we brought the anonymous survey results into the training. Staff saw that AI use was common, that their questions were shared by colleagues, and that the policy wasn't punitive — it was clarity.
Results After Three Months
We checked back at the 90-day mark. The results:
- Zero incidents of sensitive data being shared with free AI tools
- AI usage actually increased from 47% to 73% — because staff felt safe to use it openly
- Three staff members had brought AI policy questions to the operations director, exactly as intended
- The firm added the policy to their client-facing documentation, which became a differentiator in two competitive bid situations
The Takeaway
AI governance doesn't have to be slow, expensive, or complicated. For a company with no prior policy and active AI use across the team, we went from zero to a fully rolled-out policy in 30 days.
If your organization is in a similar place — AI being used informally, no written policy, questions that aren't getting clear answers — this is the kind of engagement we specialize in. Book a free discovery call and we'll scope what 30 days would look like for your team.