Date on Master's Thesis/Doctoral Dissertation

8-2017

Document Type

Master's Thesis

Degree Name

M.S.

Department

Computer Engineering and Computer Science

Degree Program

Computer Science, MS

Committee Chair

Nasraoui, Olfa

Committee Co-Chair (if applicable)

Popa, Dan O.

Committee Member

Popa, Dan O.

Committee Member

Altiparmak, Nihat

Author's Keywords

android app; latent semantic analysis; traded control; android robot; nursing assistant robot; speech engine

Abstract

Physical Human-Robot Interaction (pHRI) is unavoidable for a human user working with assistive robots. pHRI involves several design choices, such as the interface, the type of control scheme, and the modes of interaction. The research presented in this thesis concentrates on a health-care assistive robot called the Adaptive Robot Nursing Assistant (ARNA). An assistive robot in a health-care environment must perform routine tasks while remaining aware of its surroundings. Operating such a robot purely by teleoperation would be tedious for some patients, as it demands a high level of concentration, can cause cognitive fatigue, and imposes a learning curve before the robot can be teleoperated efficiently. This thesis develops a Human-Machine Interface (HMI) framework that integrates a decision-making module, an interaction module, and a tablet interface module. The framework uses traded control, in which the robot makes the decisions about planning and executing a task while the user only has to specify the task through a tablet interface. Preliminary experiments conducted as part of this thesis show that the traded-control approach allows a novice user to operate the robot as efficiently as an expert user. Prior research has shown that during a conversation with a speech interface, a user feels disengaged if the answers received are not in the context of the conversation. This thesis therefore explores different ways of implementing a speech interface that can reply to arbitrary conversational queries from the user. A speech interface was developed by building a semantic space from a Wikipedia corpus using Latent Semantic Analysis (LSA). This gave the speech interface a wide knowledge base and the ability to maintain a conversation in the context intended by the user. The interface was implemented as a web service and deployed on two different robots to demonstrate its portability and the ease of integrating it with other robots. A tablet application was also developed that combines the speech interface with an on-screen button interface for executing tasks with the ARNA robot. The tablet application can access video feeds and sensor data from the robots, assist the user with decision making during pick-and-place operations, monitor the user's health over time, and provide conversational dialogue during patient-sitting sessions. This thesis presents the software and hardware framework that enables a patient-sitter HMI, together with experimental results from a small number of users demonstrating that the concept is sound and scalable.
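
As an illustration of the LSA approach summarized above, the sketch below builds a small semantic space with TF-IDF weighting and truncated SVD, then retrieves the passage closest in meaning to a spoken query. This is a minimal sketch, not the thesis code: the tiny in-memory corpus stands in for the Wikipedia corpus, and the scikit-learn calls, the rank of the decomposition, and the function names are illustrative assumptions.

# Minimal LSA sketch (illustrative only, not the thesis implementation):
# build a semantic space and return the passage most similar to a query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in corpus; the thesis built its space from Wikipedia articles.
passages = [
    "A nursing assistant helps patients with walking, sitting, and daily tasks.",
    "Blood pressure is measured with a cuff and reported in millimeters of mercury.",
    "A mobile robot uses its sensors to navigate around obstacles in a hospital ward.",
    "Physical therapy exercises can reduce joint pain and improve mobility.",
]

# 1. Term-document matrix with TF-IDF weighting.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(passages)

# 2. LSA: truncated SVD projects documents into a low-rank semantic space.
#    The rank (n_components) is an assumption; a Wikipedia-scale corpus
#    would use a much larger value.
lsa = TruncatedSVD(n_components=3, random_state=0)
doc_vectors = lsa.fit_transform(tfidf)

def reply_context(query: str) -> str:
    """Return the passage whose LSA vector is closest to the query."""
    query_vec = lsa.transform(vectorizer.transform([query]))
    scores = cosine_similarity(query_vec, doc_vectors)[0]
    return passages[scores.argmax()]

if __name__ == "__main__":
    # A speech engine would pass the recognized utterance here and use the
    # retrieved passage to keep the dialogue in the user's intended context.
    print(reply_context("Can the robot help me get out of bed and walk?"))

In the setting described in the abstract, a query path of this kind would be exposed as a web-service endpoint, so that the tablet client on any robot could call it over the network rather than hosting the semantic space locally.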
