ChatGPT access control gains new oversight
A new tenant control feature for ChatGPT has been added to eScan’s Enterprise DLP platform, targeting a growing problem inside enterprises: staff using personal AI accounts and, in the process, creating blind spots for compliance teams.
The update focuses on a simple but persistent loophole. Even when an organisation pays for ChatGPT Enterprise licences, employees can still sign in through personal Google, Apple or Microsoft accounts. Once that happens, the enterprise loses visibility. No logs. No oversight. No assurance about what data moves across the boundary.
The data sovereignty challenge
The problem is not abstract. The release points to examples such as confidential design data and client documents surfacing in personal AI accounts. In each case, the information flowed into systems where the company had no ability to retrieve or audit it.
This risk widens as generative AI becomes part of daily work. Tools such as ChatGPT and Claude are already seen as productivity boosters. Some reports show knowledge workers saving significant time with AI assistance. Even so, data security and privacy continue to slow adoption, with many enterprises still cautious about wider deployment.
How the new control works
eScan’s Enterprise DLP already supports tenant control across platforms including Google Workspace, Microsoft 365, Dropbox, Atlassian, Slack and Webex. The logic is straightforward. If an employee tries to log in with personal credentials, the system blocks the attempt. Access is granted only when corporate domain credentials are used.
The new feature extends this model to ChatGPT. When an employee tries to access the service, the DLP engine checks the authentication source. Personal Google, Apple and Microsoft accounts are not allowed. Only accounts tied to the organisation’s ChatGPT Enterprise or Business workspace pass through.
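eScan has not published implementation details, but the general pattern of tenant control at a gateway reduces to inspecting the sign-in attempt and allowing only identities from an allowlisted corporate domain or tenant. The minimal sketch below illustrates that pattern only; the hostnames, names and domain-based check are illustrative assumptions, not eScan's actual logic or API.

```python
from dataclasses import dataclass

# Illustrative allowlists: the corporate tenant domain(s) and the AI
# sign-in endpoints the gateway watches. All values are hypothetical.
ALLOWED_DOMAINS = {"example.com"}
AI_LOGIN_HOSTS = {"auth.openai.com", "chatgpt.com"}

@dataclass
class SignInAttempt:
    host: str      # destination of the intercepted request
    identity: str  # account submitted at the login form or SSO redirect

def should_allow(attempt: SignInAttempt) -> bool:
    """Allow only corporate-tenant identities to reach AI login endpoints."""
    if attempt.host not in AI_LOGIN_HOSTS:
        return True  # not an AI sign-in: out of scope for this control
    domain = attempt.identity.rsplit("@", 1)[-1].lower()
    # Personal Google/Apple/Microsoft accounts fail this check; accounts
    # on the organisation's domain pass through to the enterprise workspace.
    return domain in ALLOWED_DOMAINS

# The personal account is blocked; the corporate one passes.
assert not should_allow(SignInAttempt("auth.openai.com", "dev@gmail.com"))
assert should_allow(SignInAttempt("auth.openai.com", "dev@example.com"))
```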
The effect is twofold:
Employees continue using AI tools for legitimate work.
Security teams retain full visibility through enterprise-level compliance interfaces.
The corporate domain remains the boundary. Conversations and shared information stay inside the governed workspace, allowing audits and enforcement of data handling policies.
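The audit side of such a control amounts to structured logging of every allow or block decision so compliance teams can review activity later. A hypothetical record, with illustrative field names building on the gate sketched above, might look like this:

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit record emitted for every intercepted AI sign-in,
# whether allowed or blocked. Field names are illustrative only.
log = logging.getLogger("dlp.audit")
logging.basicConfig(level=logging.INFO)

def audit_sign_in(host: str, identity: str, allowed: bool) -> None:
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "host": host,
        "identity": identity,
        "decision": "allow" if allowed else "block",
    }))

# Example: a blocked personal-account attempt leaves an auditable trail.
audit_sign_in("auth.openai.com", "dev@gmail.com", allowed=False)
```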
Why the timing matters
Enterprises are still defining their AI governance models. Forecasts suggest that organisations lacking structured AI data controls could see higher rates of breaches. This reinforces a shift away from banning AI outright. The focus now is on guiding usage into controlled environments.
Govind Rammurthy, CEO of eScan, said the feature addresses a concern many CIOs raise: employees will use AI tools regardless of the rules, so the question is whether that activity happens in monitored corporate systems or through personal accounts that cannot be governed.
Looking ahead
The ChatGPT tenant control is now part of eScan’s Enterprise DLP platform, with support for more AI tools planned. As AI systems become embedded across workflows, enterprises are likely to face stronger expectations to protect data sovereignty across all such platforms.
The update signals a broader shift. AI usage is no longer the challenge. Unmanaged AI usage is. And that distinction is shaping the next phase of enterprise security planning.