Example Scenario

Example: How a Portland construction firm could build AI policy in 30 days.

A practical example of how a mid-size trades company could move from no policy and informal AI use to a workable framework.

Why AI governance matters, even for non-tech industries

Imagine the operations director of a mid-size Portland construction firm — 90 employees, commercial and residential projects across Southern Maine — realizing several project managers are already using ChatGPT to write client proposals. The question is one we hear often: is that fine, a problem, or a serious risk?

Thirty days later, a firm like this could have a workable policy, a trained team, and clearer answers. Here's how that process can work.

Week 1 — Discovery

Start with a short anonymous survey to staff. Ask three questions: What AI tools are you using at work? What do you use them for? What are you unsure about?

In a scenario like this, the results can be eye-opening for the leadership team:

  • 47% of staff are using at least one AI tool regularly — not just project managers
  • Tools in use include ChatGPT, Microsoft Copilot, Grammarly, and Otter.ai for meeting notes
  • Staff use AI for proposals, client emails, invoice descriptions, safety documents, and even subcontractor agreements
  • The top "unsure" question: "Is it okay to paste client information into these tools?"

Week 2 — Risk Mapping

Next, hold a two-hour session with leadership to classify the actual risks. Not every AI use is risky — drafting a general company update is fine. Pasting a client's full property details and contract terms into ChatGPT is not.

Map the workflows into three tiers: green (AI use fine with any tool), yellow (AI use okay with enterprise tools only), and red (no AI use without explicit approval). This tier system becomes the foundation of the policy.

Week 3 — Policy Drafting

Then write a one-page policy in plain language. No legalese. A starter document can stay close to 400 words and cover:

  • Approved tools: Microsoft 365 Copilot (licensed), Grammarly (business edition)
  • Prohibited: Free public AI tools for anything involving client data, pricing, or contracts
  • Data rules: green/yellow/red tier definitions with concrete examples from the company's actual work
  • Review requirement: Any client-facing document drafted with AI must be reviewed by a second person before sending
  • Ask-first contact: The operations director, for any edge cases

Week 4 — Rollout and Training

In week 4, run a 45-minute all-hands training. Walk through the policy, show role-specific examples of each tier, answer questions, and give everyone a one-page reference guide.

The key move: bring the anonymous survey results into the training. Staff see that AI use is common, that their questions are shared by colleagues, and that the policy exists to provide clarity, not to punish.

What Success Could Look Like After Three Months

At the 90-day mark, the target outcomes would be:

  • No known incidents of sensitive data being shared with free public AI tools
  • More open, trackable AI use because staff feel safe asking questions
  • Staff bringing AI policy questions to the designated internal contact
  • A client-facing explanation of responsible AI practices for proposals or vendor questionnaires

The Takeaway

AI governance doesn't have to be slow, expensive, or complicated. For a company with no prior policy and active AI use across the team, a practical 30-day sprint can create real clarity.

If your organization is in a similar place — AI being used informally, no written policy, questions that aren't getting clear answers — this is the kind of engagement we specialize in. Book a free discovery call and we'll scope what 30 days would look like for your team.

Ready to build your AI policy?

We can help you create a practical, enforceable AI acceptable-use policy tailored to your organization. Start with a free discovery call.