5-Step Security Checklist for Using AI Tools Safely
Using AI tools without thinking about security is like leaving your front door unlocked because you're only going out for five minutes. Usually fine. Occasionally a problem you really wish you'd avoided.
This checklist takes fifteen minutes to complete and covers the most important bases. Work through it once, then keep it somewhere you can revisit as your AI tool usage evolves.
✅ Step 1: Audit What You're Sharing
Go through your recent AI tool conversations. Be honest about what kinds of information you've pasted in:
- Full names and contact details of clients or colleagues
- Financial figures or business data
- Health or personal information
- Confidential documents or contracts
- Login credentials (this should never happen, but it does)
If you see patterns of oversharing, establish a personal rule: if it's sensitive, anonymize it before it goes into a prompt. Replace names with "Client A," figures with approximate ranges, and identifying details with placeholders.
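If you handle this kind of cleanup often, it can be worth scripting. Here's a minimal sketch of that anonymization rule in Python; the `anonymize` helper, the placeholder labels, and the patterns are illustrative assumptions, not an exhaustive redaction tool:

```python
import re

def anonymize(text: str, names: list[str]) -> str:
    """Redact common sensitive patterns before pasting text into a prompt.
    Patterns here are illustrative only -- extend them for your own data."""
    # Replace each known name with a generic placeholder: Client A, Client B, ...
    for i, name in enumerate(names):
        placeholder = f"Client {chr(ord('A') + i)}"
        text = text.replace(name, placeholder)
    # Mask email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Mask exact dollar figures
    text = re.sub(r"\$[\d,]+(?:\.\d{2})?", "[APPROX. FIGURE]", text)
    return text

print(anonymize(
    "Email jane@acme.com about the $42,500 invoice from Jane Doe.",
    ["Jane Doe"],
))
```

A script like this won't catch everything (addresses, account numbers, context clues), so treat it as a first pass, not a guarantee.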
Action: Set a reminder to do this audit once a month.
✅ Step 2: Check Your Data Training Settings
Every major AI provider offers a setting that controls whether your conversations are used to train future models. The privacy-protecting choice is almost never the default, so you have to opt out yourself.
- ChatGPT: Settings → Data Controls → "Improve the model for everyone" — turn off.
- Claude: Check Anthropic's privacy settings for your account type.
- Google Gemini: Google Account → Data & Privacy → "Gemini Apps Activity" — turn off or set to auto-delete.
Action: Spend five minutes finding and adjusting these settings in every AI tool you use regularly.
✅ Step 3: Use Strong, Unique Passwords and 2FA
AI tools with memory features accumulate sensitive information over time. Protecting access to your account matters.
- Use a password manager (Bitwarden is excellent and free) to generate and store strong, unique passwords for each tool.
- Enable two-factor authentication (2FA) on every AI platform that offers it. Even if your password is compromised, your account stays protected.
Action: Enable 2FA on your three most-used AI tools today.
✅ Step 4: Understand What Third-Party Integrations Can See
Many AI tools offer integrations with your calendar, email, documents, and CRM. Each integration is a permission you're granting.
Before connecting an AI tool to your accounts:
- Review exactly what permissions it's requesting
- Check whether you actually use and benefit from the integration
- Revoke any integrations you're not actively using
Action: Go to your Google or Microsoft account's "Third-party app access" page and remove any AI integrations you don't recognize or actively use.
✅ Step 5: Use the Right Tool for the Context
Not all AI tools have equal data protection standards. There's a meaningful difference between:
- Consumer free tiers: Convenient, capable, but your data may be used for training and protection guarantees are minimal.
- Paid personal plans: Usually include opt-out options and stronger terms.
- Business/Enterprise plans: Offer contractual data protection guarantees, typically with a commitment that your data won't be used for training.
If you're using AI tools for work — especially with client data, confidential business information, or anything regulated — the enterprise tier is worth the cost.
Action: Identify which tools you use for sensitive work tasks and check whether you should be on a paid or business plan.
Keep It Simple
Security doesn't have to be complicated. These five steps address the most common risks for the vast majority of AI tool users. Do them once, revisit them quarterly, and you'll be significantly better protected than most.
The goal isn't perfect security — it's sensible security. Know what you're sharing, control what you can, and use tools that are appropriate for the sensitivity of your work.