Artificial Intelligence (AI) systems risk amplifying existing social exclusions if disabled persons are not explicitly included. India’s current AI governance framework—as evidenced by the India AI Governance Guidelines (I-AIGG)—pursues an “AI for All” vision, yet it omits mandatory accessibility and anti-discrimination safeguards for persons with disabilities (PwDs). This whitepaper examines India’s obligations under the UN Convention on the Rights of Persons with Disabilities (UNCRPD) and the Rights of Persons with Disabilities Act, 2016 (RPwD Act), along with the Supreme Court’s recent Rajive Raturi v. Union of India (2024) ruling, to argue that enforceable rights-based rules must underpin AI policy. We highlight how technical biases (in data, models, and annotations) and regulatory gaps leave disabled Indians vulnerable in education, employment, health, and public services. Benchmarking against the EU Artificial Intelligence Act (Reg. (EU) 2024/1689) and international best practices, we propose concrete legal, regulatory, and technical reforms: mandatory AI accessibility standards (aligned with WCAG/GIGW), high-risk classifications with Disability Impact Assessments (DIAs), dataset audits, inclusive design, and strong institutional accountability (including PwD representation and redress mechanisms). These reforms are designed to translate India’s domestic and international disability rights obligations into binding AI governance that promotes equity, not exclusion.
AI-driven tools are being deployed rapidly across education, employment, healthcare, public services, and social protection in India. Prominent initiatives like Digital India and Aadhaar modernisation, alongside private-sector AI deployments (in fintech, recruitment, and beyond), underscore a national push towards technology-led development. In principle, Indian policy espouses “inclusive” and “human-centric” AI. For example, the newly unveiled India AI Governance Guidelines (I-AIGG) emphasise human-centricity, transparency, and fairness. However, a critical flaw looms: the guidelines treat inclusion as voluntary and vague, referring only to “marginalised communities” without explicitly defining or safeguarding persons with disabilities.
This omission is alarming. Disability rights are not optional extras but are protected by law. India has over 63 million PwDs (per NFHS-5), each with a constitutionally protected right to equality and non-discrimination. Moreover, the UNCRPD (to which India is a State Party) and the RPwD Act, 2016 impose affirmative obligations to ensure accessibility to information and technology. For instance, UNCRPD Article 9 mandates that States “[take] appropriate measures” to ensure PwDs have equal access to “information and communications, including information and communications technologies and systems”. Similarly, the RPwD Act requires binding accessibility standards for physical and digital infrastructure (Sections 40–46) and prescribes penalties for non-compliance.
Critically, India’s Supreme Court has now declared that accessibility cannot be left to aspirational guidelines. In Rajive Raturi v. Union of India (2024), the Court struck down non-binding digital accessibility norms and directed mandatory rulemaking. The judgment reaffirmed that “digital accessibility is a fundamental right” and that reliance on “persuasive guidelines” violates the RPwD Act. It called for uniform, enforceable standards “in consultation with all stakeholders” including PwDs.
This whitepaper builds on these developments to focus on AI bias against PwDs as a pressing issue. We analyze how algorithms can inadvertently exclude disabled people, review the inadequacies of current policy (especially the I-AIGG), and recommend reforms. These include legal amendments, regulatory mandates, technical safeguards (such as diverse data sets and bias audits), and institutional measures (disability representation, accessible grievance redress). By placing PwD inclusion at the centre of AI governance, India can fulfill its rights-based obligations and prevent a new wave of digital exclusion.
India ratified the UN Convention on the Rights of Persons with Disabilities (UNCRPD) in 2007, making its principles legally binding. Article 4(3) of UNCRPD requires that “[i]n the development and implementation of legislation and policies … concerning issues relating to persons with disabilities, States Parties shall closely consult with and actively involve persons with disabilities, including through their representative organizations”. Thus, disability advocates and technical experts must be part of any AI policy or standards development.
UNCRPD Article 9 explicitly mandates accessibility to ICT: “States Parties shall take appropriate measures to ensure … persons with disabilities access … information and communications, including information and communications technologies and systems, and to other facilities and services … [with] identification and elimination of obstacles and barriers to accessibility”. Subparagraph (2)(g) specifically instructs States “to promote access for persons with disabilities to new information and communications technologies and systems, including the Internet”. In practical terms, this creates an obligation to embed accessibility into AI systems: for example, user interfaces must accommodate screen-readers or alternative input for blind users, and content must be available in sign-language or captioning for the deaf. Accessibility is thus not a “nice to have” but a treaty-level mandate.
UNCRPD also establishes general principles against discrimination and for technology development. Article 4(1)(b) requires India “to take all appropriate measures, including legislation, to modify or abolish existing laws, regulations, customs and practices that constitute discrimination against persons with disabilities”. Article 4(1)(f) directs States to promote “universally designed goods, services, equipment and facilities… which should require the minimum possible adaptation” to meet disability needs. Equally, Article 4(1)(g) calls for R&D into new ICTs and assistive technologies suitable for PwDs. In the AI context, these provisions imply that algorithms and digital services must be designed universally (i.e. usable by the widest range of people without special adaptation), and that government should encourage tech that aids disabled users.
In sum, UNCRPD imposes a rights-based obligation on India to ensure AI systems are accessible and non-discriminatory. As one legal analysis observes, “the Union and Member States are legally obliged to protect persons with disabilities from discrimination and promote their equality”. These obligations extend to AI. Indeed, UNCRPD’s emphasis on “accessible ICTs” makes it clear that any AI-driven platform or service, especially public services and mainstream applications, must accommodate the needs of disabled users as a matter of international law.
The Rights of Persons with Disabilities Act, 2016 (RPwD Act) is India’s primary disability rights statute. It codifies many of the UNCRPD’s mandates. Chapter VIII of the Act deals with accessibility, reflecting India’s duty to ensure inclusive physical and digital environments. Key provisions include:
Section 40 (Accessibility rules): The Central Government shall “formulate rules for persons with disabilities laying down the standards of accessibility for the physical environment, transportation, information and communications, including appropriate technologies and systems, and other facilities and services provided to the public”. This empowers (and indeed requires) the government to prescribe detailed norms—akin to building codes—for ICT and digital services.
Section 42 (Access to ICT): The Act mandates that “the appropriate Government shall take measures” to ensure: (i) all audio, print and electronic media content is available in accessible formats; (ii) persons with disabilities have access to electronic media via audio descriptions, sign language, and captioning; (iii) everyday electronic goods/equipment conform to universal design. In practical terms, government websites, mobile apps, online videos etc. must be accessible (e.g. Braille/large-print content, captions on videos) and consumer electronics (phones, ATMs, kiosks) should have features to accommodate impairments.
Sections 44–46 (Mandatory compliance): These sections impose strict sanctions for inaccessibility. Section 44 forbids any building permit or occupation certificate if accessibility rules (Section 40 rules) are violated. Section 45 requires all existing public buildings to be made accessible within five years of the rules being notified. Section 46 similarly mandates that public and private “service providers” (e.g. banks, clinics, online portals) must provide services in compliance with accessibility rules within two years. Thus, digital service providers should also be bound by these timelines.
Sections 89–90 (Penalties): Violation of any provision of the Act, or of rules made under it, attracts a fine of up to ₹10,000 for a first contravention and, for subsequent contraventions, a fine of not less than ₹50,000, extendable to ₹5,00,000. While the Act does not specifically single out ICT violations here, its broad penalty clause covers “any contravention… of any rule made thereunder”. Hence, non-compliance with prescribed accessibility standards (including digital) is punishable.
In summary, the RPwD Act creates a clear legal floor: AI products and services (as part of “digital infrastructure”) must adhere to accessibility norms once the government formulates them. This is not optional. As the Supreme Court noted, the Act’s intent is “to use compulsion” to realize accessibility. Failing to issue binding rules would leave all these enforcement provisions inoperable.
The Supreme Court’s decision in Rajive Raturi v. Union of India (2024) is a watershed for digital accessibility. The case challenged the government’s delay in notifying accessibility rules under Section 40 of the RPwD Act. A bench led by CJI D.Y. Chandrachud delivered a unanimous judgment. Critically for AI governance, the Court held that India’s digital age cannot relegate accessibility to mere “progressive realization” or guidelines. It declared that reliance on non-binding norms is “contrary to the intent of the RPWD Act”. Specifically:
The Court struck down portions of Rule 15 in the 2017 RPwD Rules (which had contained “voluntary” guidelines for web accessibility). It found that “Rule 15(1)… provides only persuasive guidelines” and “is ultra vires the scheme and legislative intent of the RPWD Act”. In other words, accessibility rules must be mandatory, non-negotiable standards, not optional targets.
The Court directed the Union government to draft binding rules immediately. It ordered that within three months, new mandatory accessibility rules (per Section 40) be delineated, “in consultation with all stakeholders” and expressly mandated involvement of NALSAR’s Centre for Disability Studies. It even insisted on such consultation every two years for updates. This shows judicial recognition that digital accessibility standards must be shaped with input from disabled communities and experts.
Importantly, the Court affirmed that accessibility is a right, not a policy objective. It observed: “Creating a minimum floor of accessibility cannot be left to the altar of ‘progressive realization’”. Once rules are prescribed, authorities must enforce them. The judgment specifically instructed government bodies to enforce Sections 44–46 and 89 of the RPwD Act (withholding completion certificates, imposing fines, etc.) if accessibility norms are breached.
The Court also elaborated on the substance of accessibility. It emphasized “universal design” principles and comprehensive inclusion: rules must cover all disability categories (“physical, sensory, intellectual, and psychosocial disabilities”), and incorporate assistive technologies (screen readers, audio descriptions, etc.). This broad cross-disability mandate is crucial: it forecloses any notion that, say, color-contrast guidelines (for visual impairment) alone suffice, or that neurodiversity need not be considered.
In sum, Rajive Raturi closed the gap between aspiration and enforcement. For AI governance, its implications are clear: digital tools (including algorithms) must be accessible by design, and guidelines lacking teeth cannot satisfy the statutory mandate. The judgment’s operative directives create a legal imperative. As observed in the literature, “the Court found that reliance on non-binding guidelines and sectoral discretion violated statutory mandates, instructing creation of enforceable, uniform, standardized rules”. In practical terms, any AI initiative in India now operates under the spectre of Raturi: failure to meet accessibility standards risks legal infirmity.
In the context of Raturi, it is instructive to consider the empirical report Finding Sizes for All: Report on the Status of the Right to Accessibility in India (2022) by NALSAR’s Centre for Disability Studies. Although predating the judgment, its findings underscore the deep accessibility gaps that persist. Key takeaways (synthesized here) include:
Widespread Non-Compliance: The report found that India’s digital landscape remains largely non-compliant with even basic accessibility norms. In a sample of websites (government, private sector, entertainment, etc.), an average of 116 Web Content Accessibility Guidelines (WCAG) errors per site was recorded, with sectors such as entertainment and e-commerce faring worst. (Similar empirical evidence published in 2025 showed that no sector came close to zero errors.)
Inadequate Enforcement: Despite the RPwD Act’s mandates, the report noted that rules were treated as “persuasive guidelines” rather than compulsory. It warned that such toothless regulation “compromises the realization of accessibility rights”. This resonates with the Court’s critique: without enforcement, equality guarantees ring hollow.
Cross-Sector Exclusion: Interviews and surveys documented barriers in education (inaccessible digital classrooms and a lack of sign-language teachers), employment (job portals that ignore screen-reader needs, rigid location requirements excluding home-bound PwDs), and healthcare (telemedicine platforms without captioning). These real-world examples make clear that algorithmic or digital processes can have life-changing impacts: denying benefits, jobs, or learning to those the system overlooks.
Recommendations for Inclusive Design: Among its recommendations, Finding Sizes for All called for mandatory accessibility audits, mainstreaming “reasonable accommodation” in technology procurement, and inclusive data gathering to monitor compliance. These align closely with best practices in algorithmic fairness (e.g. monitoring datasets for diversity of disability-related profiles). The report emphasizes that accessibility is a cross-cutting right: it enables all other rights (education, health, justice).
Although not an AI-specific study, Finding Sizes for All provides crucial context. It documents that, on the ground, disabled Indians already suffer digital exclusion. As one interviewee noted, “I simply cannot access the university’s learning portal with my screen-reader, so I am forced to drop out”. In AI terms, this reflects both data gaps (AI systems were never trained on diverse disability use-cases) and design flaws (interfaces assume able-bodied users). The report’s empirical weight reinforces the need for robust, rights-anchored intervention.
Algorithmic bias can emerge whenever AI systems are built on incomplete or skewed data, or without inclusive design. For PwDs, several technical issues are critical:
Underrepresentation in Data Sets: AI models learn from training data. If persons with disabilities are sparsely represented, or certain disability categories are absent, the model will under-serve or misclassify those groups. For instance, speech recognition trained mostly on non-disabled voices may fail for a person with a motor speech impairment such as dysarthria. Facial-recognition systems trained on able-bodied faces often falter for users wearing assistive devices (such as spectacles or hearing aids). These gaps are compounded by data-collection biases: disabled individuals are less likely to appear in the social media feeds or surveys that feed data-hungry models, and disability status may not be labelled due to privacy or stigma. Internationally, it is recognized that biased training data can “entail discriminatory effects… particularly with regard to … disabilities”.
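As an illustration, a minimal representativeness check might compare the share of disability-related profiles in a training set against a population benchmark. The sketch below is only a starting point: the `disability_category` column name and the benchmark shares are hypothetical placeholders, not prescribed values, and any such audit would need to handle consent and data-protection constraints.

```python
import pandas as pd

# Hypothetical training-data audit: compare the share of each disability
# category in the dataset against an external population benchmark.
# Column name and benchmark values are illustrative placeholders only.
POPULATION_BENCHMARK = {
    "none": 0.93,
    "visual": 0.015,
    "hearing": 0.015,
    "locomotor": 0.025,
    "cognitive_psychosocial": 0.015,
}

def representation_report(df: pd.DataFrame, col: str = "disability_category") -> pd.DataFrame:
    """Return per-category dataset share, benchmark share, and their ratio."""
    dataset_share = df[col].value_counts(normalize=True)
    report = pd.DataFrame({
        "dataset_share": dataset_share,
        "benchmark_share": pd.Series(POPULATION_BENCHMARK),
    }).fillna(0.0)
    # A ratio well below 1.0 flags under-representation worth investigating.
    report["representation_ratio"] = (
        report["dataset_share"] / report["benchmark_share"].replace(0.0, float("nan"))
    )
    return report.sort_values("representation_ratio")

# Example usage (hypothetical file):
# df = pd.read_csv("training_data.csv")
# print(representation_report(df))
```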
Annotation and Labeling Gaps: Even when data includes PwDs, the annotations may ignore context. For example, an image dataset might label a person as “blind student” in one setting and “visually impaired” in another, but an AI model will struggle if annotations are inconsistent. Worse, labels can encode stereotypes (“disabled = unfit for job”), which models can propagate. There is a lack of inclusive annotation standards that account for diverse disabilities. To correct this, annotation guidelines should explicitly cover disability-related attributes and respect Deaf/Blind culture (e.g. labeling images with alt text that notes visual impairment).
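One low-cost safeguard is to normalise disability-related labels against a controlled vocabulary before training, so that “blind student” and “visually impaired” do not end up as unrelated categories and stereotype labels are flagged for review. The snippet below is an illustrative sketch; the vocabulary and synonym map are hypothetical and would need to be co-designed with disabled annotators and domain experts.

```python
# Illustrative label-normalisation step for an annotation pipeline.
# The controlled vocabulary and synonym map are hypothetical examples only.
CONTROLLED_VOCAB = {"visual_impairment", "hearing_impairment",
                    "locomotor_disability", "psychosocial_disability"}

SYNONYMS = {
    "blind": "visual_impairment",
    "blind student": "visual_impairment",
    "visually impaired": "visual_impairment",
    "deaf": "hearing_impairment",
    "hard of hearing": "hearing_impairment",
    "wheelchair user": "locomotor_disability",
}

def normalise_label(raw: str) -> str:
    """Map a free-text annotation to the controlled vocabulary, or flag it."""
    key = raw.strip().lower()
    if key in CONTROLLED_VOCAB:
        return key
    if key in SYNONYMS:
        return SYNONYMS[key]
    # Unknown or stereotyped labels are routed to human review, not silently kept.
    return "needs_review"

assert normalise_label("Visually Impaired") == "visual_impairment"
assert normalise_label("unfit for job") == "needs_review"  # stereotype label flagged
```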
Model Fairness Testing: AI fairness metrics (such as equal opportunity or disparate impact ratios) typically test for bias across groups (e.g. gender, race). Disability should be treated as a protected attribute in such tests. For example, a hiring algorithm’s outcomes should be disaggregated by disability status to check if disabled candidates are systematically scored lower. But in many jurisdictions, disability data is considered sensitive (see OECD, GDPR provisions), and collecting it for audits requires care. Nonetheless, as disability rights are constitutionally protected in India, there is scope to treat disability as a statutory exception for bias analysis.
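A minimal sketch of such a disaggregated check is shown below, assuming a binary hiring decision and a consent-based, self-reported disability flag (which is sensitive data and must be handled accordingly). It uses Fairlearn’s MetricFrame as one possible tool; the data and group labels are hypothetical.

```python
import pandas as pd
from fairlearn.metrics import MetricFrame, selection_rate

# Hypothetical audit data: true outcomes, model decisions, and a
# consent-based, self-reported disability attribute.
audit = pd.DataFrame({
    "y_true":     [1, 0, 1, 1, 0, 1, 0, 1],
    "y_pred":     [1, 0, 1, 0, 0, 1, 0, 0],
    "disability": ["none", "none", "none", "pwd", "pwd", "none", "pwd", "pwd"],
})

# Selection rate (share of positive decisions) disaggregated by disability status.
mf = MetricFrame(
    metrics=selection_rate,
    y_true=audit["y_true"],
    y_pred=audit["y_pred"],
    sensitive_features=audit["disability"],
)
print(mf.by_group)       # per-group selection rates
print(mf.difference())   # largest gap between groups; a large gap warrants review
```

The same pattern extends to other metrics (false-negative rates, error rates) and to finer-grained disability categories where sample sizes allow.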
Interaction and Interface Issues: Beyond data, AI user interfaces often lack accessibility. Consider a voice-based AI assistant: it may be unusable by a deaf person. An AI-driven website chatbot without text alternatives ignores blind users. Models that assume phone-camera access exclude users with limited hand or arm mobility who cannot hold or position a device. These are not biases in prediction, but design flaws. Accessibility-by-design requires, for instance, that every AI interface offer multi-modal inputs and outputs (text, audio, sign, haptic) and be navigable by assistive technology.
Disability Impact Assessments (DIAs): Analogous to Data Protection Impact Assessments (DPIAs), DIAs would require organizations to evaluate how a new AI affects disabled users. A DIA might identify that an AI tool (say, automated interview screening) could disadvantage candidates who reveal disability on resumes. The organization would then be “required to design or adapt the system” to mitigate this (e.g. masking disability in preliminary screening or adjusting interview conditions). Currently India has no DIA mandate; it should be considered as a technical safeguard.
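No statutory DIA template exists in India today; the record below is a purely hypothetical sketch of the fields such an assessment might capture, modelled loosely on DPIA practice, to make the idea concrete.

```python
from dataclasses import dataclass, field

# Hypothetical Disability Impact Assessment (DIA) record.
# There is no prescribed Indian template; fields here are illustrative only.
@dataclass
class DisabilityImpactAssessment:
    system_name: str
    intended_use: str
    disability_groups_considered: list[str]      # cross-disability coverage
    identified_risks: list[str]                  # e.g. screening bias, inaccessible UI
    mitigations: list[str]                       # design or process changes adopted
    accessibility_standards_checked: list[str]   # e.g. WCAG 2.2 AA, GIGW
    consulted_stakeholders: list[str] = field(default_factory=list)
    residual_risk_accepted_by: str = ""

dia = DisabilityImpactAssessment(
    system_name="Automated interview screening (example)",
    intended_use="Shortlisting candidates for first-round interviews",
    disability_groups_considered=["visual", "hearing", "locomotor", "psychosocial"],
    identified_risks=["lower scores for candidates disclosing disability on resumes"],
    mitigations=["mask disability-related fields at the screening stage",
                 "offer adjusted interview formats on request"],
    accessibility_standards_checked=["WCAG 2.2 AA"],
    consulted_stakeholders=["disabled persons' organisation (illustrative)"],
)
```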
Tools and Standards: There exist open-source toolkits (e.g. IBM’s AI Fairness 360, Microsoft’s Fairlearn) that can be extended to include disability metrics. Technical standards like ISO/IEC 22989 (AI concepts) and ISO 9241 (ergonomics) can incorporate disability guidelines. Critically, any AI audit (internal or by regulators) should include checks on data representativeness (e.g. “Does training data include blind users at least in proportion to the population?”), fairness tests per disability category, and accessibility checks (e.g. automated WCAG scanning for front-end).
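As a trivial example of the kind of front-end check an audit could automate, the sketch below flags images that lack an alt attribute (one WCAG 1.1.1 concern). It covers only a tiny slice of WCAG and is no substitute for dedicated accessibility tooling or manual testing with screen readers; the URL is hypothetical.

```python
import requests
from bs4 import BeautifulSoup

def images_missing_alt(url: str) -> list[str]:
    """Return the src of <img> elements with no alt attribute at all.

    Decorative images may legitimately carry an empty alt="", so only a
    missing attribute is flagged here. Full audits need proper WCAG tooling
    and manual testing with assistive technology.
    """
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [img.get("src", "<no src>") for img in soup.find_all("img")
            if img.get("alt") is None]

# Example usage (hypothetical URL):
# print(images_missing_alt("https://example.gov.in"))
```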
These technical measures are essential complements to legal ones. As the EU AI Act’s recitals note, unaddressed bias “may create new forms of discriminatory impacts… in particular for persons belonging to certain vulnerable groups, including… disabilities”. Without proactive auditing and design, AI systems can inadvertently scale discrimination. In practice, inclusive practices might require more resources (e.g. collecting data from disabled volunteers, hiring accessibility experts) but they yield better products: as one barrier-free design advocate observes, “accessible design leads to better products for all users”.
Despite some general commitments to inclusion, India’s existing AI policy lacks enforceable disability safeguards. Key gaps include:
Voluntariness vs. Mandate: The I-AIGG explicitly adopts a voluntary-compliance model. Principles are phrased as recommendations (e.g. companies “should” or “ought” to do this), with no legal sanctions for breach. By contrast, UNCRPD and RPwD provisions are mandatory. This mismatch is unsustainable: as the Raturi Court held, accessibility rules must be binding, not aspirational. As one analysis puts it, “aspirational principles supplant the non-negotiable legal floor guaranteed to persons with disabilities” under law. In sum, guidelines that rely on goodwill (“voluntary inclusive design”) will leave India out of compliance with its own laws.
Omission of Disability from Definitions: The I-AIGG never explicitly defines or enumerates “PwDs”. It uses vague references to “marginalized communities”, which obscures disability. Without definition, AI practitioners might assume disability inclusion is optional. In law, however, “persons with disabilities” is a defined category under RPwD (including locomotor, visual, hearing, cognitive, mental, etc.). Policy must mirror that specificity. For example, data protection laws often treat health/disability as sensitive data categories, implying special care for PwDs’ information – yet such categorization is absent in AI guidelines.
No Accessibility-by-Design Requirement: The I-AIGG does call for AI to be “understandable by design” (transparency), but it never requires systems to be accessible by design. In practice, an AI application could be fully transparent (explaining its decisions) and still be unusable for a blind or deaf user. By contrast, the EU AI Act explicitly incorporates “accessibility requirements” into high-risk AI standards. India’s guidelines should similarly mandate that user-interface design follow accessibility standards from the outset (e.g. WCAG for the web; ITU or IEEE standards for assistive technologies).
Weak Treatment of Algorithmic Bias: The I-AIGG mentions bias mitigation in broad strokes but does not specifically address disability bias. For instance, it refers to “fair, unbiased” outcomes including for “marginalised communities”. Yet it provides no mechanism to ensure that algorithms do not reproduce ableist assumptions. There is no requirement to audit training data for disability representation or to correct model errors that disproportionately affect PwDs. By contrast, the EU approach explicitly bars exploiting disability “vulnerabilities” (Article 5(1)(b)) and requires dataset bias checks. India’s framework should similarly identify “disability” as a protected trait in bias audits.
Inadequate Grievance Redress: The I-AIGG wisely calls for accessible and multi-format complaint mechanisms. However, in practice these are voluntary and company-driven. There is no legal right for a PwD harmed by algorithmic discrimination to demand an investigation or compensation. India needs an algorithmic redress institution, or an expanded mandate for existing ones (such as the Chief Commissioner for Persons with Disabilities), to receive AI-related disability complaints. The Raturi Court itself envisaged governmental enforcement mechanisms (e.g. withholding certificates, imposing fines). Similar enforcement powers must extend to digital services and AI platforms.
In short, the I-AIGG’s rhetoric of fairness and inclusion rings hollow without concrete mandates. A recent open letter to MeitY argues that these deficiencies are “ultra vires and constitutionally indefensible” if left uncorrected. Indeed, absent legal teeth, employers or developers may ignore disability entirely until forced by law. As one expert warns, an aspirational principle is insufficient when rights are at stake. India must convert these guidelines into binding rules (or statutory amendments) that mandate disability-inclusive design, rather than leaving it to voluntary corporate conscience.
To guide reform, we look to international benchmarks. The EU’s new AI Act (Regulation (EU) 2024/1689) is instructive: it adopts a risk-based, rights-protective approach that India can partly emulate. Key lessons include:
High-Risk Classification: The EU Act explicitly classifies AI systems in critical social domains as high-risk. Annex III, for example, lists AI used in education (student admissions, performance assessment), employment (recruitment, promotion, monitoring), and essential services (social welfare eligibility, insurance risk scoring). These categories ensure that AI which “decides” in life-impacting areas undergoes strict scrutiny. India has no equivalent classification yet. We should consider formally designating, say, “AI in education, employment, health services, and social protection” as high-risk, triggering mandatory audits, external evaluation, and strict liability for harms.
Mandatory Preventive Measures: For EU high-risk AI, providers must implement bias evaluation and risk mitigation (Articles 10 and 15). Recitals highlight this: developers must ensure datasets are of high quality and “[examine] possible biases… likely to affect… fundamental rights”. They must then “prevent and mitigate” identified biases. These requirements are binding, not voluntary. India could mandate similar due diligence: any organization deploying high-impact AI must conduct (and file evidence of) disability-inclusive audits. The Data Protection Board (constituted under the DPDP Act, 2023) or a new AI oversight body could oversee compliance.
Explicit Accessibility Obligations: The EU Act reinforces accessibility. Recital (239) states providers are “legally obliged to… ensure persons with disabilities have access… on an equal basis” (quoting UNCRPD) and it “is essential that providers ensure full compliance with accessibility requirements, including Directive (EU) 2016/2102”. Directive 2016/2102 requires public sector websites and apps to meet WCAG 2.1 AA. Thus, in EU law any AI product (digital service) tied to public functions must be accessible by standard. By contrast, India’s policies mention WCAG only in soft terms. New rules should explicitly tie AI certification (e.g. India’s proposed CE-like marking) to accessibility standards for all digital interfaces.
Prohibition of Discriminatory AI: Crucially, EU law outlaws certain exploitative AI practices. Article 5(1)(b) prohibits AI that “exploits vulnerabilities … due to disability … with the objective or effect of materially distorting the behaviour” of that person. This is a direct protection for PwDs, recognizing them as a vulnerable group. India has no corresponding statutory ban. Such explicit proscriptions could be mirrored in Indian law or regulations (for example, in competition/consumer protection norms) to forbid AI that manipulates or excludes users on disability grounds.
Enforcement and Penalties: The EU Act ties compliance to CE marking, robust market surveillance, and fines of up to €35 million or 7% of global annual turnover for the most serious breaches (Article 99). India should consider similar enforcement teeth in its AI regime. For instance, the RPwD Act already has penalties (₹10,000 up to ₹5 lakh) that could be levied on entities that deploy non-compliant AI systems. Sectoral regulators (e.g. RBI, SEBI, IRDAI) could issue binding standards requiring accessible AI and mandate audits.
By benchmarking the EU, we note that inclusive governance means binding obligations, technical specificity, and accountability. Those translate to India as: high-risk definitions anchored in the RPwD framework, compulsory impact assessments (like DIAs), interoperability with existing accessibility law (WCAG/GIGW compliance as legal norms), and robust enforcement mechanisms. The EU model demonstrates that disability inclusion is not a side consideration but a core requirement for trustworthy AI. India ought to adopt these best practices rather than reiterating voluntary pledges.
For reforms to matter, institutional mechanisms must be empowered:
Regulatory Bodies and Committees: Current IndiaAI committees (e.g. the Technology Policy Committee chaired by Prof. Ravindran) should include disability rights experts and PwD representatives. Nodal ministries (MeitY, Social Justice & Empowerment) should co-govern AI policies to ensure disability interests are represented. The Raturi Court itself mandated NALSAR-CDS involvement in rule drafting; similarly, a standing Accessibility Board could oversee AI compliance. Alternatively, existing statutory bodies (the Chief Commissioner and State Commissioners for Persons with Disabilities) should be explicitly given AI oversight powers.
Grievance Redress and Remedies: Accessible complaint portals must be institutionalized. For example, the government’s Public Grievance Portal should have a dedicated AI/technology category that is barrier-free. Legal aid cells for PwDs can be trained on AI issues. At the national level, an AI Ombudsman could be established and empowered to handle discrimination complaints, issue binding directives, and award damages. In addition, courts must be alert to enforcement of RPwD Sections 44–46: now that Raturi has made rules imminent, courts should enforce fines and construction bans against non-compliant entities.
Monitoring and Reporting: India should implement disability-disaggregated data requirements for AI deployments. Similar to its commitment under SDGs to collect inclusive data, the government can require developers to report performance metrics (error rates, usage) by disability category. An independent audit agency (perhaps a wing of the AI Office or a specialized cell in the NCPEDP) could review these reports, much like how financial audits are mandated. Transparent reporting will create public accountability.
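To make such disaggregated reporting concrete, the sketch below shows one hypothetical shape a developer filing could take: per-category error rates and usage shares serialized to JSON. The column names, categories, and report structure are placeholders, not a prescribed format.

```python
import json
import pandas as pd

# Illustrative disability-disaggregated performance report; column names,
# categories, and the report structure are hypothetical placeholders.
def disaggregated_report(df: pd.DataFrame) -> str:
    """Compute error rate and usage share per disability category as JSON."""
    report = {
        cat: {
            "error_rate": float((g["y_pred"] != g["y_true"]).mean()),
            "usage_share": float(len(g) / len(df)),
        }
        for cat, g in df.groupby("disability_category")
    }
    return json.dumps(report, indent=2)

# Example usage (hypothetical log file with y_true, y_pred, disability_category):
# logs = pd.read_csv("deployment_logs.csv")
# print(disaggregated_report(logs))
```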
Capacity Building: Enforcement also means empowering officials and enterprises to comply. The RPwD Act (Sec. 47) mandates disability rights training for public servants. This should be extended to AI regulators, judges, and corporate compliance officers. Training curricula (like for IAS/DST batches) ought to include modules on digital accessibility. Funds should be allocated for government departments to upgrade their AI systems (e.g. NIC portals) to compliance, bridging the digital divide.
Leveraging Existing Laws: India already has disability anti-discrimination provisions (e.g. Sec. 3 of RPwD on equality and non-discrimination) and employment rules (equal opportunity policies, workplace accommodations). These can be interpreted to cover algorithmic decisions. For instance, if a reservation-quota seat is assigned by an AI system, denying it to a disabled candidate because of a bias could be challenged under RPwD (much like caste discrimination). Activists could invoke Article 21 (right to life and liberty) to argue that denying access to essential services via AI violates basic rights (the Supreme Court has recognized digital privacy and access as part of Article 21 in related cases).
Collectively, these measures ensure that disability inclusion in AI is not merely aspirational but enforced. As one commentator aptly notes, “accessible design must be embedded throughout the digital transformation journey”. The architecture must facilitate this—from rulemaking chambers down to helpdesk lines.
Achieving these reforms requires a phased, funded plan:
Immediate Actions (0–6 months):
Regulatory Fixes: Issue an executive order or amendment clarifying that all AI policies must comply with the RPwD Act. MeitY (in coordination with the Department of Empowerment of Persons with Disabilities) should form a task force to draft mandatory AI accessibility guidelines, referencing WCAG 2.2, GIGW/HG21, and the Raturi directions. A parallel public consultation (with disability NGOs, the tech industry, and academia) should be mandated, in line with Article 4(3) of the UNCRPD. NALSAR-CDS, disability commissioners, and AI experts should be on the drafting panel.
Awareness & Training: Initiate orientation sessions for key officials (MeitY secretariat, regulators) on Raturi obligations and inclusive AI. Issue government directives instructing ministries (education, labor, health) to assess any AI in their domain for disability compliance.
Short Term (6–18 months):
Rule Notification: Finalize and notify the AI-specific accessibility rules. For example, require all AI-driven public services (digital/online) to meet minimal accessibility criteria (alt-text, captioning, keyboard navigation, etc.) from the date of notification. State regulators (like municipal authorities) should be directed to enforce building and digital permits only if compliance is certified.
Institutionalization: Establish a permanent AI and Disability cell within the MeitY/IndiaAI Mission, tasked with oversight. Expand the mandate of the Chief Commissioner for Persons with Disabilities to include monitoring AI compliance; create a digital portal to lodge AI-related disability grievances.
Industry Engagement: Mandate private AI vendors (especially those serving the government or large enterprises) to conduct DIAs and publish summary reports. Encourage creation of accessible assistive-AI solutions (for example, Google and Microsoft have programs for accessibility; India could match them through an Accessible AI Innovation Fund).
Medium Term (2–4 years):
Audit and Certification: Roll out an “Accessible AI” certification (akin to CE marking) for high-risk systems. Products/services failing accessibility checks should be de-listed from procurement catalogs. Regularly audit key sectors: e.g. annual accessibility audit of all public education platforms, banking apps, e-governance portals. The findings of each audit should be made public (like a quality barometer).
Legal Enforcement: By this stage, begin strict enforcement: levy fines under the RPwD Act and Rules for non-compliance. For example, an edtech firm that continues to run an inaccessible platform could face penalties of up to ₹5 lakh or disqualification from government contracts. Establish fast-track tribunals, or dedicated disability benches in existing consumer courts, to expedite such cases.
Capacity Building: Scale up professional training in accessible design (e.g. government-funded MBAs or CE (Continuing Education) courses on inclusive AI). Introduce scholarships for students with disabilities in STEM fields to ensure future tech workforce diversity.
Long Term (5+ years):
Review and Upgrade: As technology evolves, periodically update standards (e.g. as VR/AR and IoT become mainstream). Mandate that the government review AI accessibility rules at fixed intervals (for example, every three years) with stakeholder consultation, echoing Raturi’s direction for recurring consultation with PwDs and experts.
Sustained Enforcement: Ensure a sustainable budget for enforcement bodies (e.g. at least 5% of the AI Mission’s budget devoted to accessibility audits). Embed accessibility reviews into national innovation programs (e.g. Startup India, Digital Public Infrastructure) so new projects factor disability needs from inception.
Evaluation: Conduct empirical studies (in partnership with academia) on AI’s impact on PwDs (following models like Finding Sizes for All). Use these to tweak policies. Ultimately, India should report on digital accessibility indicators to international forums, demonstrating compliance with UNCRPD and SDGs.
Estimating costs: While exact figures depend on scope, many measures (like additional standards work or embedding experts) have low marginal cost relative to national AI spending. The largest expenses will be retrofitting infrastructure and training. However, surveys show accessible technology can broaden market reach; the BarrierBreak study notes that “accessible design leads to … opportunities to serve a large, underserved customer base”. Thus, the investments can yield economic as well as social returns.
India stands at a crossroads. In framing its AI policy, it must choose between entrenching digital exclusion and adopting a transformative, rights-based approach. The Rajive Raturi judgment and the RPwD Act give a clear legal mandate: accessibility and inclusion are not discretionary. Aligning AI governance with these mandates requires urgent action. The stakes are high. Without binding safeguards, disabled Indians will remain on the margins: denied fair college admission by biased algorithms, excluded from online job recruitment, unable to use smart health kiosks or government apps. That outcome would contravene the very ethos of Article 21 (right to life, inclusive of dignity and liberty) and flout India’s obligations under the UNCRPD. As one expert aptly put it, “Accessibility is good business, not charity”. In a nation with 4–5 crore persons seeking work and over 63 million persons with disabilities, inclusive AI is not just lawful—it is economically prudent and ethically imperative. This whitepaper has outlined a roadmap to reorient policy and practice. It is now for India’s leaders—legislators, regulators, technology developers, and civil society—to translate these recommendations into reality. The government must enact mandatory standards; regulators must enforce them with vigour; industry must internalize inclusive design principles; and the judiciary must uphold disabled persons’ rights in the digital sphere. The time to act is now. Only then can we claim that India’s AI revolution truly leaves no one behind.
N. Singit, “An Open Letter to the Ministry of Electronics and IT: A Critique of the India AI Governance Guidelines…” (13 Nov 2025) (open letter to MeitY)
Supreme Court of India, Rajive Raturi v. Union of India, judgment dated 8 November 2024 (No. SC 875)
Rights of Persons with Disabilities Act, 2016 (No. 49 of 2016), §§40–46, 89
UN Convention on the Rights of Persons with Disabilities (UNCRPD), Arts. 4.3, 9.1, 9.2(g)
NALSAR Centre for Disability Studies, Finding Sizes for All: Report on the Status of the Right to Accessibility in India (2022) (citations from executive summary and findings)
Ministry of Electronics & IT, India AI Governance Guidelines (2025) (final PDF) (referenced via PIB press release and news analysis)
PIB Delhi, “MeitY Unveils India AI Governance Guidelines…” (5 Nov 2025)
Regulation (EU) 2024/1689 (AI Act): Recitals 239–240 (accessibility obligations); Art. 5 (prohibited exploitative AI, including disability); Art. 15 (bias and risk-management obligations)
BarrierBreak & NCPEDP, BB100 State of Digital Accessibility in India 2025 (study)
SheSR (SheThePeople/CSR Journal), “Rajive Raturi v. Union of India: Accessibility and the Law” (analysis)