AI Safety Connect debates on risks, digital divide and industry implications

AI Safety Connect at the India AI Impact Summit spotlighted AI governance, digital divide concerns, frontier risks and future-of-work impact, highlighting how safety standards and verification could create new opportunities for industry.

Bharti Trehan

At a time when Indian enterprises, startups and system integrators are accelerating AI adoption, the question facing the IT channel ecosystem is no longer just about deployment, but about accountability, safety and long-term governance.

During the India AI Impact Summit week in New Delhi, AI Safety Connect (AISC) convened a strategic media briefing bringing together global AI governance experts to examine what “AI safety” really means for India, the Global South and the future of advanced AI systems. For channel partners building AI-led solutions across healthcare, BFSI, education and public services, the discussion has direct implications: standards, compliance, evaluation and verification are fast becoming part of the AI value chain.

The panel featured:

  • Eugene Yiga, Communications Lead, AI Safety Connect (Moderator)

  • Nicolas Miailhe, Co-Founder, AI Safety Connect

  • Renata Dwan, Director of Tech Diplomacy, Simon Institute for Long-term Governance

  • Mark Brakel, Global Director of Policy, Future of Life Institute

  • Stephen Clare, Lead Writer, 2026 International AI Safety Report

  • Cyrus Hodes, Co-Founder, AI Safety Connect

India’s governance moment: Innovation and responsibility must move together

Opening the discussion, Nicolas Miailhe, Co-Founder of AI Safety Connect, framed India’s distinctive position in global AI governance.

He emphasised that the summit is not about choosing between innovation and regulation:

“This is not innovation against safety. We live together, we’ll be able to reap the benefits of innovation if we make them reliable.”

Miailhe argued that today’s AI leaders are no longer startups experimenting at the edge:

“These aren’t startups… they are industrialists and with great powers comes great responsibility.”

For India, he argued, the conversation must move beyond abstract catastrophic risk to real societal impacts: livelihoods, the future of work, and access to healthcare and education. He pointed to the need to “deliver risk management, protect our kids, protect our families, our markets,” especially as AI systems are now deployed at massive scale with “billions of users.”

Crucially for India’s digital economy, he described safety not as a constraint but as a growth engine:

“You should also see safety as an immense opportunity for innovation.”

He added that stringent implementation of safety standards can localise algorithmic value, create jobs and generate exportable capabilities: an important signal for Indian MSPs, ISVs and system integrators building AI services.

Digital divide and AI concentration: A global reality

Renata Dwan, Director of Tech Diplomacy at the Simon Institute for Long-term Governance, shifted the focus to structural imbalances in the AI ecosystem.

She highlighted that AI harms are not theoretical: from deepfakes influencing political discourse to AI systems being used for health advice in middle-income countries, the risks are immediate.

Dwan noted: “Forty per cent of ChatGPT’s users come from middle-income countries… and 40% use it for health advice.”

She questioned whether these systems are trained on diseases prevalent in India, Nigeria or Indonesia, underscoring the risk of misalignment with local realities.

On the concentration of power, she was explicit:

“87% of notable AI models are coming from two countries - US and China.”

She further noted that “91% of venture capital funding is going to high-income countries” and “seven per cent of global data centres are located in the Global South.”

For countries across Asia and Africa, she described this as a structural “digital divide” that will “force us to rethink every single model of development.”

Her key message was clear: the Global South must move from passive adoption to active demand-setting.

“The South cannot just be receivers and consumers of the technology.” Safety, in her framing, becomes an enabler: “Safety is one of our key tools… it will make our societies digitally capable.”

For channel partners in India, this suggests an emerging role in evaluation frameworks, model audits, multilingual standards and sector-specific validation before AI is embedded into digital public infrastructure.

Middle powers and market leverage

Mark Brakel, Global Director of Policy at the Future of Life Institute, addressed what he called “the elephants in the India AI Summit room.”

He cautioned against focusing only on economic upside while ignoring risk exposure: if critical IT infrastructure were automated by AI systems controlled by external entities, the economic implications could be severe.

Brakel also challenged assumptions that open source is a complete solution:

“Ninety per cent of all global computing is in the hands of China and the United States.”

Even open ecosystems may still depend on infrastructure controlled by a few actors. He also warned that relinquishing control may increase misuse risks.

However, he emphasised that middle powers, including India, have leverage when they act collectively: “If we band together as demand-side countries outside of China and the United States, there is a lot that is possible.”

For India’s IT channel ecosystem, this reinforces the importance of participating in standards-setting, procurement frameworks and compliance regimes that shape AI deployment at scale.

Scientific evidence: Capabilities rising, risks expanding

Stephen Clare, Lead Writer of the 2026 International AI Safety Report, provided a scientific snapshot of current AI capabilities and risks.

He noted: “The opportunities and the challenges of AI have become very real.”

More than a billion people are now using advanced AI systems, and adoption is accelerating across sectors. At the same time, the report documented:

  • Continued capability gains in frontier systems

  • Heavy investment in data centres

  • Increasing use of AI in real-world cyber operations

  • Deepfakes becoming “extremely realistic”

Clare pointed to what he called an “evidence dilemma”: technology evolves rapidly, while public evidence and regulatory understanding lag behind.

He highlighted that some companies introduced additional safeguards because testing “could no longer rule out the possibility that people use these new models to develop very serious weapons, including bio weapons.”

For policymakers and enterprises, this creates difficult choices: act early with incomplete information or wait and risk harm.

The report’s aim, he said, is to separate “what we know” from “what we don’t know,” so decisions can be made proportionate to risk.

Coordination before a crisis

Closing the session, Cyrus Hodes, Co-Founder of AI Safety Connect, described AISC’s mission as building “infrastructure for advanced coordination.”

He stressed the need for structured, recurring dialogue: “We believe the tempo should be accelerated for the discussion.”

AISC plans to convene stakeholders at least twice a year to bridge gaps between frontier labs, policymakers and civil society, including dialogue channels between the US, China and other regions.

The objective is to establish coordination mechanisms before advanced AI systems cross critical capability thresholds.

What this means for India’s IT channel ecosystem

For India’s channel partners, this discussion goes beyond policy rhetoric.

As AI systems move into regulated sectors such as healthcare, financial services and public infrastructure, evaluation, verification, reporting, compliance mapping and localisation will become billable services. Safety frameworks will not remain abstract; they will translate into:

  • Model testing and audit services

  • Data governance consulting

  • AI risk assessment offerings

  • Compliance advisory for cross-border deployments

  • Sector-specific AI validation frameworks

If, as Miailhe stated, “safety is an immense opportunity,” Indian partners can position themselves not just as AI implementers but as AI assurance providers.

The India AI Impact Summit week demonstrated that AI safety is no longer a fringe discussion about distant superintelligence. It is about livelihoods, the future of work, digital sovereignty and market accountability.

For the channel ecosystem, the message is clear: the next phase of AI growth in India will be shaped not only by innovation, but by who builds the trust infrastructure around it.
