arxiv:2401.03217

Understanding Large-Language Model (LLM)-powered Human-Robot Interaction

Published on Jan 6, 2024

Abstract

Large language models (LLMs) hold significant promise for improving human-robot interaction, offering advanced conversational skills and versatility in managing diverse, open-ended user requests across various tasks and domains. Despite this potential to transform human-robot interaction, very little is known about the distinctive design requirements for utilizing LLMs in robots, which may differ from text and voice interaction and vary by task and context. To better understand these requirements, we conducted a user study (n = 32) comparing an LLM-powered social robot against text- and voice-based agents, analyzing task-based requirements in conversational tasks, including choose, generate, execute, and negotiate. Our findings show that LLM-powered robots elevate expectations for sophisticated non-verbal cues and excel in connection-building and deliberation, but fall short in logical communication and may induce anxiety. We provide design implications both for robots integrating LLMs and for fine-tuning LLMs for use with robots.
