Enhancing User Trust and Control in AI-Driven Interactive Human-Computer Interfaces
AI assistants have become an integral part of our day-to-day activities, yet users still hesitate to trust AI assistants and applications. This research examines prior literature on transparency, user control, error mitigation, and user experience, and translates these findings into a design framework (Heu-Kano-Adaptive Transparency), which we use to build an AI meeting scheduler. The design process includes deriving Kano-based features and evaluating their impact on user trust through heuristic questions. The prototype is built with React on the frontend and Supabase on the backend. To evaluate the system, we conducted user testing with both qualitative and quantitative measures. The expected contributions include (a) a replicable design framework for building trustworthy applications, especially conversational tools; (b) an interface component that supports "right-sized" explanations; (c) error-mitigation techniques that build user trust; and (d) an open dataset from the user testing.
Keywords:
Topic(s): Computer Science
Presentation Type: Oral Presentation
Session: -5
Location: MG 1000
Time: 9:30