Available for invited lectures, workshops and policy dialogues on accessibility, design and governance.
This article examines the intersections of Artificial Intelligence, accessibility, and disability-led design, asking who is included when machines define what counts as valid human expression. It argues that AI prototypes often prioritise machinic legibility over genuine access, reinforcing bias and tokenistic participation. By centring disabled knowledge and lived experience in AI development, the piece calls for systems that adapt to human diversity rather than demand conformity to data-driven norms.
This article explores how user prompts shape AI responses, particularly on questions of disability, accessibility, and inclusion. It argues that bias often enters through the framing of questions themselves, revealing social assumptions about persons with disabilities. Through examples of good and bad prompting, it illustrates disability etiquette, rights-based language, and accountability, and encourages users to treat prompting as a tool to challenge ableism rather than reinforce it.
India’s AI policy cannot afford to treat disability inclusion as an afterthought. In light of the Supreme Court’s Rajive Raturi judgment and NALSAR’s Finding Sizes for All report, India must move from voluntary guidelines to mandatory standards. Drawing lessons from the EU AI Act, this article argues for disability-inclusive, rights-based AI governance that centres accessibility, universal design and accountability.
This whitepaper offers a rigorous legal and technical critique of MeitY’s India AI Governance Guidelines through the lens of disability rights. Building on the open letter and the Supreme Court’s Rajive Raturi ruling, it exposes how the guidelines fail to address AI bias, accessibility, and enforceable inclusion. Drawing on the RPwD Act, UNCRPD, and the NALSAR “Finding Sizes for All” report, it outlines actionable reforms to embed accessibility and anti-discrimination into India’s AI policy framework.