A number of countries have banned DeepSeek, China’s answer to ChatGPT. The primary concern is that the tool may collect sensitive private data and hand it over to Chinese agencies under Chinese law, which requires that such data be surrendered to the authorities in cases of security and other threats.
Countries that have Banned DeepSeek
Italy – Banned it to comply with EU data protection laws. The Italian Data Protection Authority (DPA) investigated DeepSeek’s data collection practices and had the app removed from app stores in the country.
Taiwan – Banned DeepSeek AI across all public sector organisations, including public schools, state-owned enterprises and critical infrastructure. The Ministry of Digital Affairs cited concerns about cross-border data transmission and information leaks as the reason for the ban.
Australia – The Australian government has banned its employees from using this AI chatbot on government devices. Home Affairs Minister Tony Burke stated that a national intelligence assessment found the AI platform to pose an unacceptable security risk.
South Korea – South Korea has joined the list of countries banning this AI tool. The defence ministry has blocked access to it on its internet-connected military computers. This action was taken after the country’s Personal Information Protection Commission requested clarity on DeepSeek’s management of user information.
United States – American institutions have restricted the use of DeepSeek, and Texas was the first state to ban the Chinese AI app. A large number of companies in the US have asked their employees not to use DeepSeek.
The US Navy, US Congress, Pentagon and NASA have enforced the ban.
The Case of India
India has pledged to protect the personal data of individuals and to follow international standards in this regard.
India has gone further than other nations in this regard: the Ministry of Finance, including the Department of Economic Affairs, has banned the use of all AI tools, including ChatGPT, on official platforms and devices across all its departments.
A PIL has been filed in a Delhi court seeking a complete ban on these tools, especially DeepSeek, over concerns that sensitive data could leak to other countries, particularly China. The case is currently under judicial hearing.
Indian IT Business Leaders Speak
We interacted with some IT business leaders in India to understand what they think about the issue. Following are their comments –
As a startup working with AI and ONDC integration, I believe the conversation around DeepSeek needs to shift from blanket bans to establishing clear certification standards. Many MSMEs like us need affordable AI solutions to stay competitive, but we also understand the security concerns.
What would really help is if the government could work with industry bodies to create a certification programme for trusted deployments – similar to how we have ISO certifications or SOC2 compliance. Local technology providers could get certified to host and manage these open-source models securely, giving small businesses a way to use AI tools while ensuring data stays protected within India.
We’re already seeing this model work with other critical infrastructure – like how payment gateways need RBI certification. A similar approach for AI deployments would give businesses clear guidelines while addressing national security concerns. This would be especially valuable for sectors like healthcare and finance where data sensitivity is crucial.
--Dipankar Sarkar, CTO and Co-founder of KiranaPro
It’s a tricky spot. Governments want to keep data safe. In a world where information is basically gold, nobody wants to accidentally hand over the keys to the vault. It’s like locking your front door at night, except instead of burglars, you’re worried about hackers.
But here’s the catch – DeepSeek is open-source. It’s not locked down. Anyone can grab it, tweak it and use it however they want. So even if a government bans the official website, the model still spreads – running locally, in the cloud, or baked into other tools.
So, do these bans really work, or are they just for show? In a world where digital borders are fading, it all boils down to trust. One thing’s for sure – this debate isn’t wrapping up anytime soon.
--Rashmi Kulkarni, Co-Founder & Director, IndoAI Technologies
I believe the recent bans on DeepSeek by governments in the US, Italy and Australia reflect a growing tension between national security and the open, collaborative nature of AI development. While I understand the concerns about data security and the potential exposure of sensitive information to foreign entities, I question whether banning access to an open-source model like DeepSeek is the most effective solution. Open-source technologies are inherently decentralised, meaning they can be run locally, integrated into private systems, or hosted in secure cloud environments. This makes traditional bans largely symbolic, as they fail to address the underlying risks while creating a false sense of security.
I see the open-source nature of DeepSeek as both a challenge and an opportunity. On one hand, it widens access to cutting-edge AI, empowering developers and researchers to innovate without barriers. On the other hand, it highlights the need for a more nuanced approach to regulation. Instead of outright bans, governments should focus on building robust cybersecurity frameworks and fostering international collaboration to mitigate risks. A proactive approach that emphasises transparency, accountability and ethical AI development would be far more effective than reactive measures.
Moreover, I worry that these bans could stifle innovation and create a fragmented AI ecosystem. AI thrives on global collaboration, and restricting access to tools like DeepSeek risks isolating countries from the broader advancements in the field. This could lead to inefficiencies, duplicated efforts and a lack of interoperability between systems. In the long run, such fragmentation could hinder AI’s potential to address global challenges like healthcare and climate change.
In my view, the future of AI depends on our ability to balance innovation with accountability. Instead of reacting with bans, we should work together to create a global framework that ensures AI remains a force for good, driving progress while safeguarding security and ethical principles. Collaboration, not isolation, is the path forward.
--Avinash Mada, the Founder and CEO, Freedom with AI
Banning DeepSeek on government equipment is, prima facie, a precautionary measure; however, it solves only half of the problem at hand. Open-source models such as DeepSeek can be installed locally or behind private clouds, making it complicated to enforce a complete ban. Banning DeepSeek is not a long-term strategic approach to proper AI governance.
The real concern isn’t just where an AI model is built but how it handles sensitive data. Governments should implement stricter endpoint security, define clear compliance standards and conduct rigorous risk assessments for any AI tool used in critical infrastructure. Additionally, fostering international collaboration on AI regulations can help create a more transparent and security-focused ecosystem.
With increasing adoption of AI, reactive bans will be of little help; only a proactive, policy-driven framework that puts security, accountability and data sovereignty at the core is sustainable.
--Siddharth Ugrankar, Founder, Qila
As LLMs evolve and agentic AI systems leverage distillation techniques for their direct consumption, the real risk lies not just in the foundational model but in how the end application or AI agent is developed and deployed. Organisations must move beyond geopolitical bans and instead focus on ensuring that AI systems are free from bias, discrimination, stereotyping, hatred, abuse, profanity and prompt injection at the agentic AI level.
To mitigate these risks, every enterprise AI system should undergo rigorous assessment tailored to its organisational context. At St. Fox, we have built frameworks within our product that allow enterprises to systematically assess agentic AIs for such AI-based enterprise risks.
Banning foundational models based on geopolitical allegiance alone may not be the most optimal approach. Instead, enterprises must take a proactive approach to AI security and governance, ensuring that the AI models they adopt – regardless of origin – meet the highest fairness and transparency standards. There should be no enterprise AI adoption without AI security.
--Sunil Tak, CEO & Co-Founder, St. Fox
The growing restrictions on DeepSeek across Asia and beyond signal a critical shift in how nations perceive AI security. Governments are recognising that AI tools, while powerful, can also be conduits for data leakage and cyber threats. At 63SATS, we believe that national security and data sovereignty should never be compromised. The risks posed by AI applications storing sensitive prompts and personal data in foreign jurisdictions demand stringent regulatory oversight. India’s stance aligns with global concerns – ensuring that AI innovations do not become backdoors for espionage or cyber interference. As AI adoption accelerates, enterprises and governments must prioritise trust, transparency and security in AI deployments. The future of AI is not just about intelligence but also about integrity and resilience. It’s time for a robust, sovereign AI framework that safeguards national interests while fostering innovation.
--Neehar Pathare, MD, CEO & CIO, 63SATS
The global move to ban DeepSeek on government devices, spearheaded by the U.S., Italy and now Australia, underscores the escalating concerns surrounding data security and the potential exposure of sensitive information, particularly involving foreign-developed AI.
Reports indicate that its R1 model delivers performance comparable to leading counterparts like OpenAI’s GPT-4 but at a fraction of the development cost. This efficiency can potentially widen AI access, especially for startups and MSMEs aiming to leverage cutting-edge technology without incurring prohibitive expenses.
I recognise the importance of embracing advanced AI tools to drive business growth. However, it’s imperative to balance this enthusiasm with a vigilant assessment of potential risks. For Indian businesses, particularly MSMEs striving for digital transformation, this presents a dilemma. The allure of affordable, high-performance AI solutions must be weighed against the potential vulnerabilities associated with data breaches and unauthorised access. In sectors handling sensitive information – such as finance, healthcare, and critical infrastructure – the stakes are even higher.
While the intention behind these bans is understandable – protecting national security interests – the open-source nature of DeepSeek presents a unique challenge. Simply restricting access to the website or app doesn’t eliminate the risk, as the model can still be deployed locally, through Cloud environments, or integrated into various applications.
For India, this situation serves as a critical reminder of the importance of a balanced approach to AI adoption. We must not stifle innovation, but we also cannot afford to compromise on data security. It is crucial for Indian enterprises and government entities to carefully evaluate the security implications of using open-source LLMs.
Ultimately, the DeepSeek situation highlights the need for greater transparency and accountability in the AI sector. It also emphasises the importance of developing indigenous AI capabilities that align with our security and data privacy standards. By remaining vigilant and prioritising data security, Indian businesses can harness the benefits of AI while safeguarding their interests and contributing to national security. As countries across the globe grapple with these challenges, India has an opportunity to lead the way by fostering responsible innovation that safeguards our national interests and promotes a secure digital economy.
--Mukul Goyal, Co-founder, Stratefix Consulting
Governments in the US, Italy and Australia have moved to ban access to DeepSeek, a Chinese-developed LLM, on government devices. These bans stem from concerns over data security and the potential exposure of sensitive information to the PRC. However, blocking access to DeepSeek is not as simple as restricting its website and mobile app – because the model is open-source, it can still be run locally, through cloud environments, or integrated into various applications.
DeepSeek’s rising success is raising alarm in various nations because the matter is one of national security. AI chatbots, including DeepSeek, use user information to strengthen their models. The concern is that, under China’s strict cybersecurity laws, the Chinese government can demand this data at any time. Many countries have issued strict directives barring government workers from installing and using DeepSeek.
The ban on DeepSeek will not resolve the issue, as the model can be run locally on devices or through cloud environments and other frameworks, which government workers can download on personal devices, allowing sensitive information to be shared. Additionally, India must take immediate action to promote education and protect IPRs in AI models, which are much needed for start-ups to grow globally, and also strengthen its rules to detect any unauthorised app that gives access to sensitive information.
DeepSeek has the capability to record commands submitted to the chatbot and device performance, and can even track usage of competitor apps such as ChatGPT and Gemini. Data security is a critical issue in the global tech industry, and India’s stance reflects broader efforts to protect privacy and sensitive information.
The rise of DeepSeek highlights that cost-effective AI models can be developed. It also gives India a valuable opportunity to strengthen its role in AI innovation and research and to build its own robust AI ecosystem.
--Navkar Jain, Co-Founder, Plus91Labs
AI has become omnipresent in all walks of life. However, discussions on AI are often incomplete without copious references to misinformation, data breaches, and fake news. Students and employees in private and government are leaning heavily on LLMs such as ChatGPT. While using these advanced models, unsuspecting users do not realise the significant risk of inadvertently sharing sensitive information. In many cases, unauthorised access could lead to national security concerns. Unfortunately, DeepSeek has its origins in a politically hostile country. There are concerns about how DeepSeek could be accessing sensitive user data, prompts, device information, keystrokes, user behaviour and location, among others. It can also potentially access app usage, device performance and interactions with other users. There is a lack of transparency regarding data storage and processing. The country should focus on building its own AI ecosystem so that the citizens do not end up using technology that is unsafe. As AI continues to evolve, balancing innovation, security and ethical considerations will remain a challenge.
--Rishi Agrawal, CEO and Co-Founder, Teamlease Regtech
Reflections
The above comments show that the prime concern of the commentators is that DeepSeek, being an open-source tool, may be downloaded and used regardless of any ban. Similar enforcement challenges apply to ChatGPT.
The important point to note here is that the ban is not just on the tool’s access to a particular platform; it also applies to individuals attempting to download and use these tools. Thus, if an individual downloads and uses these tools on another platform, such as a personal network, the individual could be liable for prosecution for anti-State activities and for breaching personal data privacy laws.
Awareness needs to be spread that the ban on a tool that may attempt to steal sensitive private data is for the security of the people and of the State. Especially in the case of India, China has not always been a friendly neighbour; hence, India needs to take extra caution in this regard.
Some commentators have welcomed the ban and expressed their intent to comply with it.
The best option for India would be to develop its own AI tools for the benefit of its people. India’s large industry conglomerates can come forward to invest in the development of indigenous AI tools, so that Indians do not have to depend upon imported tools and technologies. The government can collaborate with interested industrialists on this project.