The AI Question You're About to Face
Your vendor just announced they've "added AI" to their product. Maybe it's your CRM promising smarter lead scoring. Your support platform offering automated responses. Your analytics tool with "AI-powered insights." The press release is glowing. The demo is impressive.
But somewhere in your brain, a question is forming: What exactly does this mean for our data?
You're right to ask. AI features fundamentally change the risk profile of any vendor relationship. Data that was once stored and processed is now potentially being used to train models, shared with third-party AI providers, or analyzed in ways your original contract never contemplated. And most vendor security questionnaires haven't caught up.
Why Traditional Vendor Assessment Falls Short
Your standard security questionnaire asks about encryption, access controls, and incident response. These questions still matter—but they miss what makes AI different.
Traditional software is deterministic: given the same input, you get the same output. AI systems are probabilistic. They learn, adapt, and can produce unexpected results. They often rely on massive datasets that may include your data. And they increasingly involve third-party foundation models from OpenAI, Anthropic, or Google that your vendor doesn't fully control.
When a vendor says "we use AI," they often mean they've integrated someone else's AI. Your data may flow to OpenAI's API, Anthropic's Claude, or Google's Gemini—companies that weren't in your original vendor assessment. Ask explicitly: who actually processes the data when AI features are used?
Databricks' AI Security Framework identifies 62 distinct risks across the AI lifecycle—from data poisoning during training to prompt injection during inference. Gartner's TRiSM (Trust, Risk, and Security Management) framework emphasizes that AI requires continuous governance, not point-in-time assessment. Traditional vendor reviews catch maybe a third of these risks.
The Questions That Actually Matter
Forget the 300-question security assessment for a moment. When evaluating a vendor's AI capabilities, these ten questions cut to the heart of what you need to know:
1. What type of AI/ML model powers this feature?
Is it a proprietary model they built, a fine-tuned open-source model, or an API call to a foundation model provider? Each has different risk implications. A vendor using GPT-4 via API has different data handling than one running a local model.
2. Where does our data go when AI features are used?
Get specific: Does data leave their infrastructure? Is it sent to third-party AI providers? Is it stored separately from non-AI processing? The answer should be a clear data flow diagram, not marketing language.
3. Is our data used to train or improve models?
This is the question. Many AI providers use customer data for model training by default. You need an explicit, contractual answer—not an assumption based on a privacy policy that can change.
4. What testing has the model undergone?
Ask about red-teaming, adversarial testing, and bias evaluation. A mature vendor can describe specific tests performed and results. Vague answers like "extensive testing" are red flags.
5. What are the known limitations and failure modes?
Every AI system has them. A vendor who can't articulate limitations either doesn't understand their own system or isn't being honest. Both are problems.
6. How do you handle model updates and changes?
AI models evolve. How will you be notified of significant changes? Can you opt out of updates that might affect your use case? What's the rollback process if an update causes issues?
7. What happens to AI outputs—who owns them?
If the AI generates content, analyses, or recommendations using your data, who owns that output? Can the vendor use aggregated insights? This matters for IP and competitive reasons.
8. How is the AI feature protected from adversarial attacks?
Prompt injection, data poisoning, and model extraction are real threats. Ask about specific defenses. If they look confused, that's your answer.
9. Can we audit AI decision-making?
For any AI that affects your customers or operations, you need explainability. Can you understand why the AI made a specific recommendation? Is there logging? Can you contest or override decisions?
10. What's your AI incident response process?
When (not if) something goes wrong with the AI—hallucinations, bias incidents, security breaches—how will you be notified? What's the remediation process? Is there an AI-specific incident playbook?
Red Flags That Should Stop the Deal
During your evaluation, certain responses should trigger serious concern. These aren't minor issues to work around—they're signals that a vendor isn't ready for enterprise AI deployment.
When asked about model architecture, training data sources, or third-party providers, the vendor claims confidentiality. Legitimate competitive concerns exist, but complete opacity about how your data is processed is unacceptable.
Questions about bias testing, security controls, or compliance are deflected with vague references to AI capabilities. This suggests the vendor doesn't actually understand or control their own system.
Basic capabilities like audit logging, data opt-out, or model explainability are promised for future releases. You're being asked to accept current risk for hypothetical future protections.
The vendor provides clear data flow documentation, names their AI sub-processors, explains their testing methodology, acknowledges limitations, and has written policies for AI governance.
Ask for written AI documentation: data processing agreements that specifically cover AI, model cards or system descriptions, third-party AI provider contracts, and AI-specific security testing results. If these don't exist, the vendor's AI governance is immature—regardless of how impressive the demo was.
Contract Terms You Need
Standard vendor contracts were written before AI. Even recent templates often miss critical protections. Here are the clauses that should be in any agreement involving AI features:
The question is not just what AI can do, but who is responsible when it makes mistakes.
Verifying Vendor Claims
Vendors make claims. Your job is to verify them. Here's how to pressure-test what you're told:
Request Evidence, Not Assertions
"We take security seriously" means nothing. Ask for SOC 2 reports covering AI systems, penetration test results for AI features, bias audit reports, and red team findings. Mature vendors have this documentation. Immature ones make excuses.
Test the AI Yourself
Before signing, get sandbox access. Try edge cases. Attempt prompt injection. Feed it unusual inputs. See how it handles errors. A few hours of adversarial testing reveals more than a hundred questionnaire responses.
Talk to Their Security Team
Request a call with the vendor's security or AI governance team—not just sales engineers. Ask technical questions about architecture, controls, and incident history. Their depth of knowledge (or lack thereof) tells you how seriously they take this.
Check Their Sub-Processors
Get the list of third-party AI providers. Research each one independently. Review their security practices, data handling policies, and incident history. Your vendor's security is only as strong as their weakest AI partner.
A Framework for AI Vendor Risk
Not every AI feature carries the same risk. Use this framework to prioritize your evaluation effort:
High-risk combinations (sensitive data + third-party AI + automated decisions) require extensive due diligence. Lower-risk features (metadata analysis + local processing + human review) may warrant lighter evaluation—but still need the contract protections.
Core vendor announced AI-powered features for patient data analysis. Initial excitement turned to concern when security team asked basic questions and got evasive answers. Vendor couldn't name their AI sub-processors or confirm data training policies.
Rather than reject the feature outright, they created a structured evaluation using this framework. Insisted on written documentation, conducted adversarial testing in sandbox, and negotiated AI-specific contract amendments. Evaluation took 6 weeks.
When to Walk Away
Sometimes the right answer is no. Consider walking away from a vendor's AI features—or the vendor entirely—if:
They refuse to provide written AI documentation or policies. They can't or won't name their AI sub-processors. Your data will be used for model training without meaningful opt-out. They have no AI incident response process. Basic security questions are met with marketing responses. Contract negotiations on AI terms are refused.
AI capabilities are valuable, but not at any cost. A vendor who can't demonstrate responsible AI governance today isn't likely to develop it tomorrow—and you'll be left managing the consequences.
Sometimes "no AI" is the right answer. If a vendor can't meet your requirements for AI governance, ask if the AI features can be disabled entirely. Many products work fine without AI enhancements—and you eliminate a category of risk you can't adequately assess or control.
The Bottom Line
AI is transforming enterprise software, and vendors are racing to add AI features to everything. Some are doing it responsibly with proper governance, transparency, and security controls. Others are bolting on AI to check a marketing box without understanding the implications.
Your job as a buyer is to tell the difference.
Ask the hard questions. Demand documentation. Verify claims. Negotiate proper contract terms. And don't let impressive demos override careful due diligence.
The vendors who can answer your questions clearly and provide the protections you need are the ones worth partnering with. The ones who can't are telling you something important about how they'll handle your data— and your trust.