What AI Your Employees Are Already Using — And What to Do About It
40% of U.S. employees now use AI at work, and most business owners have no idea which tools are in use, what data is going into them, or whether any of it is safe. A plain-English guide to ChatGPT, Claude, Microsoft Copilot, Google Gemini, Manus, and the shadow AI problem — written for owners who want to understand before they decide.
Not Legal Advice: This article is for informational purposes only. Consult a licensed attorney for legal guidance specific to your business.
Here is something worth knowing: 40% of U.S. employees now use AI tools at work, according to Gallup's 2025 workplace survey. That number has nearly doubled in two years. And in most small businesses, the owner has no idea which tools their employees are using, what data those tools are processing, or whether any of it is safe.
This isn't a criticism of employees. AI tools are genuinely useful, and employees are using them to get more done. The problem is that most of this adoption is happening without any guidance, policy, or oversight from the business owner. Employees are making their own decisions about what's appropriate — and those decisions are often based on convenience rather than security. This guide is for business owners who want to understand what's actually happening before they decide what to do about it.
The Tools Your Employees Are Most Likely Using
ChatGPT, made by OpenAI, is by far the most widely adopted AI tool in the workplace. Pew Research found that 28% of employed adults use ChatGPT at work — up from just 8% two years ago. It is available free at chatgpt.com, requires no corporate approval, and can be used for almost anything: drafting emails, summarizing documents, writing job descriptions, creating marketing copy, answering customer questions, and much more. The free version is what most employees are using, and its default data settings are more permissive than the paid business plans: conversations are retained and may be used to improve OpenAI's models unless the user opts out.
Claude, made by Anthropic, is ChatGPT's closest competitor and is gaining ground rapidly in professional settings. It is particularly popular for tasks that require careful reasoning, long document analysis, and nuanced writing. Employees in legal, finance, and research roles often prefer Claude for its tendency to be more precise and less prone to confident-sounding errors. Like ChatGPT, it is available free at claude.ai.
Microsoft Copilot is embedded directly into Microsoft 365 — the suite that includes Word, Excel, Outlook, Teams, and PowerPoint. If your business uses Microsoft 365, your employees may already have access to Copilot without any additional purchase, depending on your subscription tier. Copilot can draft emails in Outlook, summarize Teams meetings, generate Excel formulas, and create PowerPoint presentations. Because it is integrated into tools employees already use daily, adoption tends to happen quietly and quickly.
Google Gemini is Google's AI assistant, integrated into Gmail, Google Docs, Google Sheets, and Google Meet. If your business uses Google Workspace, Gemini may already be available to your employees. It works similarly to Copilot — drafting emails, summarizing documents, generating content — but within the Google ecosystem. Google has been rolling out Gemini features progressively across Workspace tiers since 2024.
Manus is an autonomous AI agent — a newer category of tool that can independently browse the web, write and execute code, manage files, and complete multi-step tasks without constant human direction. Where ChatGPT and Claude answer questions and generate content, Manus and similar agents can actually do things: research a topic and produce a report, build a spreadsheet from web data, or draft and send communications. Employees in operations, research, and administrative roles are beginning to experiment with these tools, often without their employers' knowledge.
Perplexity is an AI-powered search engine that answers questions by synthesizing information from the web in real time, with citations. It is popular with employees who need quick research answers and find traditional search engines too slow or too cluttered with ads. It is also used to fact-check AI-generated content, which is a genuinely useful application.
What Your Employees Are Using These Tools For
The OpenAI workplace adoption report, published in January 2026, analyzed how employees across industries actually use ChatGPT at work. The four most common use cases in the first 90 days of adoption are writing, research, programming, and data analysis — in that order. Writing dominates: drafting emails, creating reports, writing job descriptions, generating marketing copy, and producing customer communications. Research is second: answering questions, summarizing documents, and synthesizing information from multiple sources.
For small businesses specifically, the most common scenarios are: drafting customer-facing communications (emails, proposals, responses to complaints), creating internal documents (policies, procedures, meeting summaries), researching competitors or market trends, generating social media content, and answering operational questions that would otherwise require a phone call or web search. These are all legitimate productivity uses — and they all carry risk if the employee is entering sensitive business or customer data to get the job done.
The risk is not that employees are doing something wrong. It is that they are doing something useful without understanding the implications. An employee who pastes a customer complaint into ChatGPT to draft a response is trying to help the business. They are not thinking about data retention policies, model training consent, or CCPA compliance. That is the owner's job — which is why this guide exists.
The Shadow AI Problem
Axios reported in May 2025 that absent clear policies, workers are taking an 'ask forgiveness, not permission' approach to AI tools — risking workplace friction and costly mistakes. This phenomenon has a name in enterprise IT: shadow AI, the use of AI tools that have not been reviewed, approved, or even acknowledged by the business.
Shadow AI is not unique to large companies. A 2025 Clutch survey found that 74% of respondents use AI at work, while Gallup reports that only 30% of employees say their organization has any guidelines or formal policy for AI use. That gap — between widespread adoption and near-total absence of governance — is where most small business AI risk lives.
The practical implication is that you almost certainly have employees using AI tools you don't know about, for purposes you haven't considered, with data you haven't evaluated. The first step is simply to ask. Have a direct conversation with your team about what tools they are using and what they are using them for. You may be surprised by the answer — and the conversation itself signals that you are paying attention.
What to Do With This Information
Understanding what your employees are using is the first step. The second step is deciding what to do about it — and the answer is not to ban everything. Banning AI tools that employees find genuinely useful will not stop them from using those tools; it will just drive the behavior underground and make it harder to manage. The goal is governance, not prohibition.
A practical governance approach for a small business has three components. First, an approved tools list: a short list of the AI tools your business has evaluated and decided are acceptable for business use, with any conditions attached (for example, 'ChatGPT Team plan only — not the free version'). Second, a data classification rule: a clear statement of what categories of data can never be entered into any AI tool, regardless of whether it is on the approved list. Customer PII, financial records, health information, legal correspondence, and trade secrets should always be on this list. Third, a reporting process: a simple way for employees to flag AI-related concerns or incidents, so that problems surface before they become crises.
The Employee AI Safety Course covers all of this material in plain English, with practical exercises designed for employees who are not technical. Module 3 specifically addresses the approved tools framework and the data classification exercise. The AI Workplace Policy Kit gives you the written policy template, the approved tools list template, and the employee acknowledgment form — everything you need to turn this conversation into a documented governance program.
Your employees are already using AI. The question is whether they are doing it safely. The answer starts with a conversation, followed by a short approved list, a clear data rule, and a training session. None of that requires a lawyer or an IT department — it requires a decision to take it seriously.