Good Accessibility, Handcuffed Creativity: AI-Generated UIs Between Accessibility Guidelines and Practitioners’ Expectations

Abstract: The emergence of AI-powered UI generation tools presents both opportunities and challenges for accessible design, but their ability to produce truly accessible outcomes remains underexplored. In this work, we examine the effects of different prompt strategies through an evaluation of ninety interfaces generated by two AI tools across three application domains. Our findings reveal that, while these tools consistently achieve basic accessibility compliance, they rely on homogenized design patterns, which can limit their effectiveness in addressing specialized user needs. Through interviews with eight professional designers, we examine how this standardization impacts creativity and challenges the design of inclusive UIs. Our results contribute to the growing discourse on AI-powered design with (i) empirical insights into the capabilities of AI tools for generating accessible UIs, (ii) identification of barriers in this process, and (iii) guidelines for integrating AI into design workflows in ways that support both designers’ creativity and design flexibility.

Authors: Alexandra-Elena Guriță, Radu-Daniel Vatavu

Conference: 2025 ACM Designing Interactive Systems Conference (DIS ’25)

Publication: Association for Computing Machinery, New York, NY, USA, 1197–1209

Link: https://doi.org/10.1145/3715336.3735691
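
Note: The abstract does not reproduce the prompt strategies that were compared. As a purely illustrative sketch, with hypothetical wording and a hypothetical application domain, the contrast between an accessibility-agnostic and an accessibility-oriented prompt might look as follows in TypeScript:

    // Hypothetical prompt pair; the paper's actual prompts and
    // application domains are not reproduced here.
    const accessibilityAgnostic =
      'Generate a web interface for booking medical appointments.';
    const accessibilityOriented =
      'Generate a web interface for booking medical appointments. ' +
      'Conform to WCAG 2.1 AA: text contrast of at least 4.5:1, ' +
      'interaction targets of at least 44x44 CSS pixels, semantic ' +
      'landmarks, and programmatically labeled form controls.';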

Gesture-A11Y: A Large-Scale Hub for Accessible Gesture Input

Abstract: We introduce Gesture-A11Y, a large-scale, web-based hub serving as a searchable database and tool to assist researchers and practitioners in identifying gestures that align with the abilities and preferences of users experiencing visual or motor disabilities. Gesture-A11Y is the result of an eight-year-long research effort, during which we collected over 22,000 records of touch, motion, on-wheelchair, and on-body gestures performed by users with various abilities, along with their preferences for gesture input across various devices and contexts of use. We offer Gesture-A11Y as an open-source tool to drive more accessible and inclusive gesture interaction design.

Authors: Mihail Terenti, Laura-Bianca Bilius, Ovidiu-Ciprian Ungurean, Radu-Daniel Vatavu

Conference: W4A ’25, the 22nd International Web for All Conference

Publication: ACM, New York, NY, USA, 2025

Recognition: Recipient of the Accessibility Challenge Judges Award

Link: https://dx.doi.org/10.1145/3744257.3744280
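
Note: The abstract does not describe Gesture-A11Y's data model or API. A minimal TypeScript sketch of what a record in such a searchable database might look like, with all type names and fields assumed for illustration only:

    // Hypothetical record shape and client-side filter; the tool's
    // actual schema is not documented in the abstract.
    type Modality = 'touch' | 'motion' | 'on-wheelchair' | 'on-body';
    interface GestureRecord {
      modality: Modality;
      device: string;            // e.g., 'smartphone', 'smartwatch'
      userAbilities: string[];   // e.g., ['low vision'], ['motor impairment']
      gestureName: string;
      preferenceScore?: number;  // elicited user preference, if recorded
    }
    const touchGesturesForLowVision = (records: GestureRecord[]) =>
      records.filter(r =>
        r.modality === 'touch' && r.userAbilities.includes('low vision'));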

Empowering Accessible Gesture Input Design with Gesture-A11Y

Abstract: Understanding end-user performance with gesture input is essential for designing intuitive and effective interactions. Unfortunately, open gesture datasets are scarce, in particular those addressing users with impairments, which hinders advancements in accessible and inclusive user interface design for devices and applications featuring gesture interactions. To address this, we introduce Gesture-A11Y, a web-based tool backed by a large database, designed to help identify gestures that align with users’ abilities and preferences. Gesture-A11Y offers access to over 22,000 records of touchscreen, mid-air, on-body, and on-wheelchair gestures performed by users with various visual and/or motor abilities, along with their preferences and perceptions of gesture input across different mobile and wearable devices.

Authors: Mihail Terenti, Laura-Bianca Bilius, Ovidiu-Ciprian Ungurean, Radu-Daniel Vatavu

Conference: W4A ’25, the 22nd International Web for All Conference

Publication: ACM, New York, NY, USA

Recognition: Nominated for the Best Communication Paper Award

Link: https://dx.doi.org/10.1145/3744257.3744267

When LLM-Generated Code Perpetuates User Interface Accessibility Barriers, How Can We Break the Cycle?

Abstract: The integration of Large Language Models (LLMs) into web development workflows has the potential to revolutionize user interface design, yet their ability to produce accessible interfaces remains underexplored. In this paper, we present an evaluation of LLM-generated user interfaces against the accessibility criteria from the Web Content Accessibility Guidelines (WCAG 2.1), comparing the output of ChatGPT and Claude under two distinct prompt types: accessibility-agnostic and accessibility-oriented. Our evaluation approach, consisting of automated testing, expert evaluation, and LLM self-reflection, reveals that accessibility-oriented prompts increase success counts and reduce violation rates across WCAG criteria, but persistent barriers remain, particularly in semantic structure. We argue that advancing accessible user interface development through LLM-generated code requires not just enhanced prompting but deeper semantic understanding and context awareness in these systems. We use our findings to suggest opportunities for future work.

Authors: Alexandra-Elena Gurita, Radu-Daniel Vatavu

Conference: W4A ’25, the 22nd International Web for All Conference

Publication: ACM, New York, NY, USA

Link: https://dx.doi.org/10.1145/3744257.3744266
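
Note: The abstract mentions automated testing without naming a tool. One common way to automate WCAG checks over generated markup is axe-core; below is a minimal TypeScript sketch using @axe-core/puppeteer. The specific tooling here is an assumption for illustration, not the paper's documented pipeline.

    // Audits a string of (e.g., LLM-generated) HTML against WCAG 2.x A/AA rules.
    import puppeteer from 'puppeteer';
    import { AxePuppeteer } from '@axe-core/puppeteer';

    async function auditGeneratedUi(html: string) {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.setContent(html);          // load the generated markup
      const results = await new AxePuppeteer(page)
        .withTags(['wcag2a', 'wcag2aa'])    // restrict to WCAG 2.x A/AA rules
        .analyze();
      await browser.close();
      // Summarize each violation: rule id, severity, offending node count
      return results.violations.map(v =>
        ({ id: v.id, impact: v.impact, nodes: v.nodes.length }));
    }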

Insights and Implications of Evaluating Accessibility Compliance in AI-Generated Web Interfaces

Abstract: The recent availability of AI-powered user interface generation tools demands a clear understanding of their capacity to produce accessible designs that comply with established standards. To this end, this paper presents the results of an evaluation of fifty interfaces generated with five public AI-based design tools, compared against WCAG 2.1 criteria (e.g., text contrast and the size of interaction targets), using both general prompts and accessibility-focused prompts. Our analysis reveals a moderate level of violation severity (𝑀=0.47) on a scale from 0 (none) to 4 (critical), with text contrast (𝑀=1.08) and target size (𝑀=0.86) as the main, yet easily remediable, issues. Contrary to our expectations, accessibility-oriented prompts did not improve compliance, but rather reduced it (𝑀=0.54 versus 𝑀=0.39). Nevertheless, three of the tools examined showed better results through dialogue and iterative refinement.

Authors: Alexandra-Elena Gurita, Radu-Daniel Vatavu

Conference: ACM Web Conference 2025 (WWW Companion ’25), April 28–May 2, 2025, Sydney, NSW, Australia

Publication: ACM, New York, NY, USA

Link: https://dl.acm.org/doi/10.1145/3701716.3715552
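
Note: Text contrast, the most frequent issue found, is defined precisely by WCAG 2.1 (success criterion 1.4.3). A self-contained TypeScript sketch of the standard relative-luminance and contrast-ratio computation; the formula is WCAG's, the code itself is illustrative:

    // WCAG 2.1 contrast ratio between two sRGB colors.
    function channelToLinear(c8: number): number {
      const c = c8 / 255;
      return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
    }
    function relativeLuminance([r, g, b]: [number, number, number]): number {
      return 0.2126 * channelToLinear(r)
           + 0.7152 * channelToLinear(g)
           + 0.0722 * channelToLinear(b);
    }
    function contrastRatio(fg: [number, number, number],
                           bg: [number, number, number]): number {
      const l1 = relativeLuminance(fg);
      const l2 = relativeLuminance(bg);
      return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
    }
    // AA requires at least 4.5:1 for normal text (3:1 for large text):
    contrastRatio([119, 119, 119], [255, 255, 255]); // ~4.48, narrowly fails AA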

Breaking Bad (Design): Challenging AI User Interface Accessibility Guardrails

Abstract: What happens when we prompt AI to create “bad” design? To find out, we challenged four AI-driven design tools to create user interfaces that explicitly violate established accessibility criteria, only to discover them as prisoners of their usability-oriented training. This finding raises a critical question: How can we develop AI that understands accessibility deeply enough to know when to comply and when to thoughtfully challenge established design principles? Through systematic attempts to subvert the AI tools into following our requests, we found them both rigid and limited: capable of reproducing accessible patterns, but incapable of thoughtful deviation when context demanded it. By adopting the lens of intentional inaccessibility as an investigation method, we raise questions about the nature of design intelligence that demand reconsideration of how design knowledge is integrated into AI-driven design tools.

Authors: Alexandra-Elena Gurita, Radu-Daniel Vatavu

Conference: Extended Abstracts of the 2025 CHI Conference on Human Factors in Computing Systems (CHI EA ’25)

Publication: Association for Computing Machinery, New York, NY, USA, Article 624, 1–7

Link: https://doi.org/10.1145/3706599.3716220

Distal-Haptic Touchscreens: Understanding the User Experience of Vibrotactile Feedback Decoupled from the Touch Point

Abstract: In this study, we examine the user experience of distal haptics for touchscreen input through confirmatory vibrations of on-screen touches at various on-body locations. To this end, we introduce the Distal Haptics Continuum, a conceptual framework of haptic feedback delivery across the body, organized along the dimensions of Body Laterality and Proximity to the touch point. Our results, from three experiments involving 45 participants and 16 locations across the hand, arm, and whole body, reveal a strong preference for distal haptics over no haptics at all, despite the spatial decoupling from the touch point, with the index finger yielding the highest user experience. We also identify additional on-body locations (the adjacent fingers, wrist, and abdomen) that unlock distinctive design opportunities. Building on these insights, which demonstrate the effectiveness of haptic feedback even when delivered away from the touch point, we outline implications for integrating various on-body locations, well beyond the index finger, into the user experience of touchscreen input.

Authors: Mihail Terenti, Radu-Daniel Vatavu

Conference: CHI Conference on Human Factors in Computing Systems 2025 (CHI ’25)

Publication: Association for Computing Machinery, New York, NY, USA, Article 500, 1–19

Link: https://doi.org/10.1145/3706598.3713474
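
Note: A minimal TypeScript encoding of the Distal Haptics Continuum's two dimensions, as described in the abstract; the type names and levels below are assumptions for illustration, not the paper's own vocabulary:

    // Illustrative data model for where vibrotactile feedback is delivered.
    type BodyLaterality = 'ipsilateral' | 'contralateral'; // same vs. opposite side of the touching hand
    type Proximity = 'at-touch-point' | 'same-hand' | 'same-arm' | 'elsewhere-on-body';
    interface FeedbackSite {
      location: string;          // e.g., 'index finger', 'wrist', 'abdomen'
      laterality: BodyLaterality;
      proximity: Proximity;
    }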

Intermanual Deictics: Uncovering Users’ Gesture Preferences for Opposite-Arm Referential Input, from Fingers to Shoulder

Abstract: We examine intermanual deictics, a distinctive class of gesture input characterized by an intermanual structure, asymmetric postural-manipulative articulation, and a deictic nature, drawing from both on-skin and bimanual mid-air gestures. To understand user preferences for gestures featuring these characteristics, we conducted a large-sample end-user elicitation study with 75 participants, who proposed intermanual deictics involving the opposite palm, forearm, and upper arm. Our results reveal a strong preference for physical-contact gestures primarily performed with the index finger, with strokes (62.4%) and touch input (28.8%) being most common, complemented by some preference for non-contact gestures (5.2%). We report similar agreement rates across gestures elicited in the three arm regions, averaging 26.3%, with higher agreement between the forearm and upper arm. We also present a consensus set of sixty gestures for effecting generic commands in interactive systems, along with design principles encompassing multiple practical implications for interactions that incorporate intermanual deictics.

Authors: Radu-Daniel Vatavu, Bogdan-Florin Gheran

Conference: CHI Conference on Human Factors in Computing Systems 2025 (CHI ’25)

Publication: Association for Computing Machinery, New York, NY, USA, Article 283, 1–16

Link: https://doi.org/10.1145/3706598.3713474
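
Note: Agreement rates in gesture elicitation studies are conventionally computed with the formula of Vatavu and Wobbrock (CHI 2015); that this paper uses the same formula is an assumption. A minimal TypeScript sketch:

    // Agreement rate for one referent: groupSizes holds the sizes of groups
    // of identical gesture proposals, e.g., [3, 1, 1] for 5 participants.
    // Assumes at least two proposals in total.
    function agreementRate(groupSizes: number[]): number {
      const total = groupSizes.reduce((sum, size) => sum + size, 0);
      const sumOfSquares = groupSizes.reduce(
        (sum, size) => sum + (size / total) ** 2, 0);
      return (total / (total - 1)) * sumOfSquares - 1 / (total - 1);
    }
    agreementRate([5]);          // 1.0: everyone proposed the same gesture
    agreementRate([1, 1, 1, 1]); // 0.0: no two proposals alike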