
The “Passive AI” Conundrum

There is a lot of guidance available for enterprises that are building and deploying AI / GenAI based applications for their own use. As such enterprises go through the life cycle of define, pilot, build, deploy, and run, most now treat AI governance, guardrails, ethical use, and security as first-class requirements. But there is a real risk of enterprises getting blind-sided when AI is passively adopted rather than actively built. Say an enterprise has been using an ITSM / ITOM platform, and the vendor has infused AI / GenAI into that platform, so these features have arrived in the enterprise as part of routine platform updates. What is the enterprise then supposed to do as part of its own risk management and its obligations under local laws?

1. The enterprise blind-spot

Below are some of the reasons enterprises get blind-sided by passive adoption.

Auto-enabled features — Platform vendors ship AI features as default-on in updates, without explicit enterprise opt-in.

Feature flag sprawl — AI capabilities tucked inside release notes that platform admins approve without escalating to security, legal, or compliance teams.

Shadow AI by procurement — Business units renew platform contracts without realizing the new tier includes GenAI capabilities.

No AI inventory — Most enterprises lack a formal register of where AI is operating, so passive infusion goes undetected.

Vendor framing — Vendors may market these as “productivity enhancements” or “intelligent automation,” not as “AI systems requiring governance”.

2. Does governance obligation still apply?

To analyze this, we have to look at two scenarios:

  1. The enterprise is using the AI / GenAI features that arrived as part of the third-party platforms it has deployed or updated.
  2. The enterprise never uses these AI / GenAI features, even though they arrived as part of a platform update.

The sections below explore the subject using ITSM/ITOM platforms and the EU AI Act as examples. Many countries are looking to the EU AI Act while formulating their own versions of such acts.

3. When AI / GenAI features are used

Regulators, auditors, and legal frameworks do not care who built the AI. What matters is:

“Is your organization using AI to make or influence decisions that affect people, data, or operations?”

If the answer is yes — whether you built it or a vendor shipped it — you are accountable.

Governance Dimensions That Still Apply (and Why)

| Governance Dimension | Why It Still Applies |
| --- | --- |
| Regulatory & Legal Compliance | EU AI Act, GDPR, HIPAA, etc. hold the data controller / deployer responsible, not just the builder |
| Data Privacy & Minimization | An ITSM platform may process ticket data containing PII, employee info, or sensitive operational data |
| Bias & Fairness | AI-driven ticket routing, prioritization, or auto-resolution can embed bias in IT service delivery |
| Transparency & Disclosure | Employees/users interacting with AI-driven chatbots or auto-responders may not know it’s AI |
| Access Controls | AI features may access broader data sets than originally scoped for human agents |
| Audit Trails | AI-influenced decisions (e.g., auto-closing incidents) need to be traceable |
| Human-in-the-Loop | High-stakes actions (change approvals, security incident handling) should not be fully AI-automated without oversight |
| Vendor Risk Management | The AI models powering these features (often LLMs from OpenAI, Google, etc.) introduce third-party and fourth-party risk |
| Model Transparency | Enterprises need to know which model, trained on what data, with what guardrails the vendor is using |

The Specific Risk Vectors in ITSM/ITOM Platforms

  • Incident Auto-Resolution — AI closing tickets without human review could mask real outages or SLA breaches
  • Change Risk Scoring — AI scoring change requests may introduce opaque risk decisions in critical infrastructure changes
  • Knowledge Article Generation — GenAI auto-generating KB articles could propagate incorrect or hallucinated solutions
  • Virtual Agents / Chatbots — Employees or customers unknowingly interacting with GenAI that has no content guardrails
  • Predictive Alerting (ITOM) — AI-driven alert suppression or correlation could cause genuine incidents to be missed
  • Workforce & Sentiment Analytics — The EU AI Act classifies AI used in employment, HR, and worker management as “High-Risk.” Some platforms infuse AI into workforce productivity scoring. Such infusion could transition the entire platform into the High-Risk category, requiring much stricter documentation and human oversight.

What Enterprises Must Do — Even as Passive Adopters

Immediate Actions

  • AI Feature Inventory Audit — Scan all active platforms for AI/GenAI features that are live, even if not intentionally enabled (a scripted sketch of such a scan follows this list).
  • “Opt-Out” Clause — Procurement teams should negotiate “Governance-First” defaults, where all AI features must be “Off” until a formal Data Protection Impact Assessment (DPIA) is completed.
  • Vendor Interrogation — Formally ask vendors: What AI is in your platform? What data does it access? What model powers it? What are your guardrails?
  • Feature Flag Review — Work with platform admins to identify and consciously enable/disable AI features rather than accept defaults.
  • Data Flow Mapping — Understand what data (PII, operational, financial) the AI features are processing.
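
To make the inventory audit actionable, here is a minimal sketch of how such a scan might be scripted. It assumes, purely for illustration, that each platform exposes an admin REST endpoint listing installed features with a name, description, and enabled flag; the endpoint paths, field names, and keyword list are placeholders to adapt to your vendor's actual API.

```python
# Sketch of an AI feature inventory scan across platform admin APIs.
# Hypothetical assumptions: each platform exposes a REST endpoint that
# lists installed features with a name, description, and enabled flag.
# Adapt endpoint paths, field names, and keywords to your vendor's API.
import re

import requests

AI_KEYWORDS = re.compile(
    r"\b(AI|GenAI|LLM|copilot|virtual agent|predictive|intelligent)\b",
    re.IGNORECASE,
)

PLATFORMS = {
    # Placeholder admin endpoints; replace with your real ones.
    "itsm": "https://itsm.example.com/api/admin/features",
    "itom": "https://itom.example.com/api/admin/features",
}


def scan_platform(name: str, url: str, token: str) -> list:
    """Return features whose name or description suggests AI capability."""
    resp = requests.get(
        url, headers={"Authorization": f"Bearer {token}"}, timeout=30
    )
    resp.raise_for_status()
    findings = []
    for feature in resp.json().get("features", []):
        text = f"{feature.get('name', '')} {feature.get('description', '')}"
        if AI_KEYWORDS.search(text):
            findings.append({
                "platform": name,
                "feature": feature.get("name"),
                # Default-on features shipped in updates surface here.
                "enabled": feature.get("enabled"),
            })
    return findings


if __name__ == "__main__":
    for platform, url in PLATFORMS.items():
        for row in scan_platform(platform, url, token="REPLACE_ME"):
            print(row)
```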

Governance Adaptations

  • Extend AI governance policy to cover third-party and vendor-infused AI, not just internally built AI
  • Third-Party AI Risk Addendum — Add AI-specific clauses to vendor contracts covering model transparency, data usage, liability, and audit rights. Without these, the platform vendor uses LLMs for its GenAI functionality while the enterprise has no direct contract with the model provider, yet remains liable for the output.
  • Designated AI Owner per Platform — Assign accountability for each platform’s AI features, not just the platform overall
  • Include platform AI in your AI Risk Register — Even if you didn’t build it, it must be catalogued

Ongoing Controls

  • Monitor vendor release notes specifically for AI feature additions — make this a mandatory step in patch/upgrade approvals
  • Test AI outputs periodically — spot-check AI-generated content, routing decisions, and auto-resolutions for accuracy and bias (a sampling sketch follows this list)
  • User disclosure — Ensure employees know when they are interacting with AI-driven features
  • Establish a right to override — Ensure humans can always override AI-driven ITSM/ITOM decisions
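
Periodic output testing can start as simple random sampling of AI closures for human QA. Below is a minimal sketch, assuming (hypothetically) that resolved tickets can be exported as records with a field that distinguishes AI closures from human ones.

```python
# Sketch of a periodic spot-check on AI auto-resolutions.
# Hypothetical assumption: resolved tickets export as dicts with a
# "resolved_by" field distinguishing AI closures from human closures.
import random

SAMPLE_RATE = 0.05  # review 5% of AI closures; tune to your risk appetite


def select_for_review(tickets: list, rate: float = SAMPLE_RATE) -> list:
    """Randomly sample AI-closed tickets and queue them for human QA."""
    ai_closed = [t for t in tickets if t.get("resolved_by") == "ai"]
    if not ai_closed:
        return []
    k = max(1, int(len(ai_closed) * rate))
    return random.sample(ai_closed, k)


tickets = [
    {"id": "INC001", "resolved_by": "ai"},
    {"id": "INC002", "resolved_by": "human"},
    {"id": "INC003", "resolved_by": "ai"},
]
for t in select_for_review(tickets):
    print(f"Queue {t['id']} for human accuracy/bias review")
```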

The Regulatory Reality

  • The EU AI Act explicitly holds deployers (not just developers) responsible for high-risk AI use — passive adoption is not a defence.
  • GDPR / data protection laws require organizations to understand and control how personal data is processed — including by vendor AI.
  • Sector regulators (FCA, HIPAA, RBI, etc.) are increasingly issuing guidance that AI accountability rests with the regulated entity, regardless of whether AI was built in-house or embedded by a vendor.

“Passive adoption does not mean passive accountability.” The moment an enterprise’s operational processes are influenced by AI — even vendor-infused AI — that enterprise has an active governance obligation.

The enterprises that fail to recognize this will face regulatory exposure, audit findings, reputational risk, and operational failures — not because they built AI, but simply because they didn’t notice it had arrived.

4. When AI / GenAI features are NOT used

The obligations involved here are genuinely nuanced — it is not a clean yes or no. It depends on several factors.

The Core Legal Question: What Triggers Obligation?

The EU AI Act is the most relevant law here, and its trigger is very specific.

Under the EU AI Act, a “deployer” is defined as a natural or legal person, public authority, agency or other body using an AI system under its authority — except where the AI system is used in the course of a personal non-professional activity. [Ref.1,2]

The operative word is “using.” This creates the first important fork in the road.

Scenario Breakdown: Three Distinct Situations

Situation 1 — Features Are Present But Genuinely, Provably Disabled / Not Activated

This is the most favourable scenario for the enterprise.

For minimal-risk AI systems, the EU AI Act does not impose any obligations on providers or deployers. [Ref.2,3]

Under the EU AI Act, risk classification is based on how AI is used, not which product or vendor provides it. A general-purpose AI tool starts as “minimal risk” but becomes “high-risk” based on how your organization deploys it.
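
As a mental model (not the Act's actual decision procedure), this use-based classification can be sketched as a function of deployment context rather than of product. The category names below are illustrative and do not reproduce the Act's Annex III list.

```python
# Deliberately simplified sketch: the same AI feature maps to different
# risk tiers depending on how the enterprise deploys it. The category
# strings are illustrative and do not reproduce the Act's Annex III.
HIGH_RISK_USES = {
    "employment_or_worker_management",    # e.g. productivity scoring
    "critical_infrastructure_operation",  # e.g. change risk scoring
}


def classify_use(feature: str, use_case: str, activated: bool) -> str:
    """Classify by deployment context, not by product or vendor."""
    if not activated:
        return f"{feature}: not deployed; inventory and AI literacy still apply"
    if use_case in HIGH_RISK_USES:
        return f"{feature}: high-risk use; full deployer obligations apply"
    return f"{feature}: minimal/limited risk; transparency duties may apply"


print(classify_use("virtual agent", "internal_it_helpdesk", activated=True))
print(classify_use("sentiment analytics", "employment_or_worker_management",
                   activated=True))
```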

So if a feature is shipped but never activated, never processing data, and never influencing decisions, there is a strong argument that the enterprise is not a “deployer” under the Act and faces no specific EU AI Act obligations for that feature. However — and this is critical — the enterprise still carries three residual obligations even in this scenario:

  • AI Literacy (Article 4) — From 2 February 2025, AI system providers and deployers must ensure that their employees and contractors using AI have an adequate degree of AI literacy, by implementing training. Failing to ensure compliance with AI literacy obligations comes with particular risks in all EU Member State jurisdictions where director or managerial liability regimes apply. (Ref.4)
  • AI Inventory Obligation — Regulators and auditors increasingly expect enterprises to know what AI they have, even if unused. Not knowing is itself a governance failure.
  • GDPR Vigilance — Even a “disabled” feature may be passively collecting or logging data in the background without the enterprise’s knowledge.

Situation 2 — Features Are Present, Not Deliberately Configured, But Running Passively in the Background

This is the most dangerous scenario and the one most enterprises actually find themselves in.

Survey data reveals that 99% of organizations have experienced financial losses from AI-related risks, with average losses of $4.4 million per company. The most common risks are non-compliance with regulations (57%) and biased outputs (53%). (Ref.5)

Here the enterprise may technically be a “deployer” without knowing it, because the AI feature — even if not deliberately used — may be:

  • Processing personal data from ITSM tickets (names, issues, employee info) in the background
  • Auto-classifying or auto-routing incidents without explicit human instruction
  • Generating suggested responses that employees rely on without realising it is AI

Under GDPR, third-party AI tools complicate compliance. You remain liable for their compliance failures. Data Protection Impact Assessments (DPIAs) are mandatory when processing is “likely to result in high risk to rights and freedoms.” AI systems almost always trigger this requirement. (Ref.6)

In this case, GDPR obligations apply regardless of whether the enterprise intentionally “used” the AI — if personal data flowed through the feature, the enterprise is the data controller and accountable.

Situation 3 — Features Are Present, and Some Employees Are Using Them Without Formal Organisational Approval

This is the shadow AI problem, and it is extremely common in ITSM platforms. Technically, shadow AI is active adoption by unauthorized users; but this situation is the consequence of failing to govern passive adoption (the vendor pushed the feature, and employees started using it before the company could block it).

Employees using unapproved AI tools outside governance oversight create blind spots and risks — a governance gap that exposes enterprises to data breaches, compliance violations, intellectual property loss, and reputational damage. (Ref.7)

In this scenario, full deployer obligations apply — the enterprise cannot claim it “didn’t use” the AI if its employees were using it, even informally. The enterprise is legally the deployer.

What The Regulations Actually Say About “Non-Use”

| Regulation | Obligation if AI Not Used? | Key Nuance |
| --- | --- | --- |
| EU AI Act | No specific obligations if genuinely not used | AI literacy (Article 4) applies to anyone who could use it |
| GDPR | Applies if personal data is processed, even passively | Background data processing by “unused” features still triggers GDPR |
| NIST AI RMF | Voluntary; no obligation even if used | Best practice recommends inventorying all AI, including unused |
| ISO 42001 | Only applies if you’ve adopted it as a standard | Certification requires knowing your full AI landscape |
| Sector Regulations (HIPAA, FCA, RBI) | Vary by jurisdiction | Many require a complete AI inventory regardless of usage |

The Practical Reality: “Not Using” Is Harder to Prove Than You Think

The AI inventory process must be embedded as an ongoing operational practice rather than a sprint deliverable. The registry must capture the use case and intended purpose, the data types and personal data categories processed, the system owner and accountable executive, the vendor or provider if third-party, the deployment context and affected population, and a preliminary risk classification. (Ref.8)
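
A register entry capturing those fields might look like the sketch below. The field names are my own rather than a published schema, and the last two fields (activation state and decision record) are additions that anticipate the evidence requirements discussed next.

```python
# Sketch of a register entry covering the fields Ref.8 calls for.
# Field names are my own, not a published schema; the last two fields
# are additions anticipating the evidence requirements discussed next.
from dataclasses import dataclass


@dataclass
class AIRegisterEntry:
    system_name: str
    use_case: str               # intended purpose
    data_categories: list       # data types, incl. personal data categories
    system_owner: str
    accountable_executive: str
    vendor: str                 # or "in-house" if built internally
    deployment_context: str
    affected_population: str
    risk_classification: str    # preliminary, e.g. "minimal" or "high"
    activated: bool = False     # provable on/off state
    decision_record: str = ""   # link to the formal use/non-use decision


entry = AIRegisterEntry(
    system_name="ITSM virtual agent",
    use_case="Employee IT helpdesk triage",
    data_categories=["ticket text", "employee names", "asset IDs"],
    system_owner="IT Service Desk Lead",
    accountable_executive="CIO",
    vendor="Platform vendor (LLM-backed)",
    deployment_context="Internal service desk",
    affected_population="All employees",
    risk_classification="limited",
)
print(entry)
```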

This means that to credibly claim “we are not using this AI feature,” an enterprise must be able to demonstrate and document that:

  • The feature is explicitly disabled or not configured
  • No data flows through it
  • No employee has access to or is using it
  • There is a formal record of this decision

Without that documentation, a regulator or auditor would be fully justified in treating the enterprise as a deployer by default.
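
One way to operationalise that default is to treat "deployer" as the verdict unless every item on the list above is evidenced. A minimal sketch, with hypothetical evidence keys:

```python
# Sketch: treat "deployer by default" as the verdict unless all four
# non-use conditions are evidenced. Evidence keys are hypothetical.
def non_use_defensible(evidence: dict) -> bool:
    """All four conditions from the list above must hold, with records."""
    required = (
        "feature_disabled_config",    # explicit disabled/not-configured proof
        "no_data_flow_attestation",   # e.g. log review showing no traffic
        "no_user_access_review",      # access review showing no usage
        "formal_decision_record",     # documented decision not to use
    )
    return all(evidence.get(key) for key in required)


evidence = {
    "feature_disabled_config": "change request CHG0421",
    "no_data_flow_attestation": "log review, June cycle",
    "no_user_access_review": None,    # missing: access review not done
    "formal_decision_record": "GRC ticket 881",
}
print("defensible" if non_use_defensible(evidence) else "deployer by default")
```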

The Bottom Line — Three Verdicts

If the feature is genuinely, provably, and verifiably disabled, with documentation: Most EU AI Act obligations do not apply. However, AI literacy, GDPR data flow vigilance, and inventory obligations still apply.

If the feature is present but not actively monitored or controlled: The enterprise is in a grey zone that regulators will not view favourably. GDPR almost certainly applies. The enterprise is exposed.

If employees are using it even informally: The enterprise is fully a “deployer” under the EU AI Act and all associated obligations apply — regardless of whether leadership was aware.

Now imagine the complexity of this when you factor in the AI laws of multiple countries.

The safest posture, regardless of scenario, remains the same: know what AI you have, formally decide what you will and will not use, document that decision, and govern accordingly. The absence of a decision is itself a governance failure.

The EU AI Act is used here specifically to draw out the requirements, as it is one of the most stringent AI laws. If you look at a country like India, the indications are that its Digital India Act, which will also govern the AI space, will take a pro-innovation, techno-legal approach. Though it will not go easier on compliance, it will definitely establish a “differently regulated” environment.

Conclusion

When AI / GenAI comes into an enterprise not as part of its own planning but as a platform enhancement from a vendor, then whether the owning department plans to leverage it or to avoid it, the department should always bring it to the attention of the security, legal, and compliance teams to seek guidance. As the enterprise grows into a multi-continental operation, compliance becomes more complex, exposing the enterprise to business risks. Don’t surprise your legal and compliance team!

References

  1. EU AI Act – Article 3 Definitions: https://artificialintelligenceact.eu/article/3/
  2. EU AI Act Risk Categories: https://www.modelop.com/ai-governance/ai-regulations-standards/eu-ai-act
  3. EU AI Act Compliance by Sentra: https://www.sentra.io/learn/eu-ai-act-compliance-what-enterprise-ai-deployers-need-to-know
  4. Training obligations by Latham & Watkins: https://www.lw.com/en/insights/upcoming-eu-ai-act-obligations-mandatory-training-and-prohibited-practices
  5. AI Governance by Secure Privacy: https://secureprivacy.ai/blog/ai-governance
  6. GDPR for AI by Nexos: https://nexos.ai/blog/gdpr-ai/
  7. Guide to AI Governance by Liminal: https://www.liminal.ai/blog/enterprise-ai-governance-guide
  8. EU AI Act Implementation Sprint by Secure Privacy: https://secureprivacy.ai/blog/eu-ai-act-implementation-guide