Trust is a complex social-emotional concept, and its formation is strongly affected by non-verbal behavior. Social robots, like any other social entities, are expected to maintain a level of trustworthiness during their interactions with humans. In this sense, we examined how a set of factors, including emotional expression, small talk, and embodiment, influences the way people infer the trustworthiness of a robot. To examine these factors, we performed a series of experiments using two robots, NAO and Emys, with and without physical embodiment respectively. To measure trust, we used two metrics: a trust questionnaire and the amount the participants donated. The results suggest that these factors significantly influence the level of trust: people trusted Emys differently depending on its facial expressions and on whether it made small talk, and people donated differently to NAO depending on its emotional gestures and on whether it made small talk. Furthermore, trust levels differed significantly with embodiment when comparing the Emys experiments with the NAO experiments.
In this study, we address the level of trust that a human displays during an interaction with a robot under different circumstances. The influencing factors considered are the robot's facial expressions during the interaction and its ability to make small talk. To examine these influences, we ran an experiment in which a robot tells a story to a participant and then asks for help in the form of a donation. The experiment was implemented in four different scenarios in order to examine the two influencing factors on trust. The results showed that the highest level of trust was gained when the robot started with small talk and displayed facial expressions congruent with the emotion expected from the story.
Mood, as one of the human affective states, plays a vital role in human-human interaction, especially because of its long-lasting effects. In this paper, we introduce an approach in which a companion robot capable of mood detection is employed to detect and report the mood state of a person to his/her partner, preparing the partner for upcoming encounters. Such a companion robot could be used at home or at work to improve the interaction experience for couples, partners, family members, etc. We implemented the proposed approach using a vision-based method for mood detection and tested it in an experiment and a follow-up study. Descriptive and statistical analyses were performed on the gathered data. The results show that this type of information can have a positive impact on partners' interactions.
A great deal of scientific evidence suggests a close relationship between mood and human cognitive processes in everyday tasks. In this study, we investigated the feasibility of determining mood from gaze, one of the cognitive processes that can be recorded during interaction with computers. To do so, we designed a feature vector composed of typical gaze patterns and piloted the approach on a dataset we gathered, consisting of 145 samples from 30 people. A supervised machine learning technique was employed for mood classification and recognition. The results of this pilot test suggest that, even at this initial stage, the approach is quite promising and opens further research paths for improvement through multi-modal recognition and information fusion. A multi-modal approach would add the information provided by our previously developed camera-based mood extraction approach and/or information gained from EEG signals. Further analysis of the feature extraction process will be performed to enhance model accuracy by enriching the feature set of each modality.
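The pipeline described in this abstract, a feature vector computed from gaze patterns fed into a supervised classifier, can be sketched as follows. The specific features (fixation count, mean fixation duration, mean saccade length) and the nearest-centroid classifier are illustrative stand-ins; the paper does not specify its exact feature set or learning algorithm.

```python
import math

def gaze_features(fixations):
    """Compute a simple gaze feature vector from a list of fixations.

    Each fixation is (x, y, duration_ms). The features here (fixation
    count, mean fixation duration, mean saccade length) are an
    illustrative subset, not the study's exact feature set.
    """
    n = len(fixations)
    mean_dur = sum(f[2] for f in fixations) / n
    # Saccade length: Euclidean distance between consecutive fixations.
    saccades = [
        math.hypot(b[0] - a[0], b[1] - a[1])
        for a, b in zip(fixations, fixations[1:])
    ]
    mean_saccade = sum(saccades) / len(saccades) if saccades else 0.0
    return [float(n), mean_dur, mean_saccade]

class NearestCentroid:
    """Minimal supervised classifier: a sample is assigned the label
    of the closest class centroid in feature space."""

    def fit(self, X, y):
        self.centroids = {}
        for label in set(y):
            rows = [x for x, lab in zip(X, y) if lab == label]
            self.centroids[label] = [
                sum(col) / len(rows) for col in zip(*rows)
            ]
        return self

    def predict(self, x):
        return min(
            self.centroids,
            key=lambda lab: math.dist(x, self.centroids[lab]),
        )
```

Any off-the-shelf supervised learner (SVM, random forest, etc.) could replace the centroid classifier; the point is only the feature-vector-plus-classifier structure.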
Determining the mood of a person is an important step in human-robot interaction. In this paper, we propose a human-inspired approach in which changes in emotions, elicited through emotion induction, are used to determine a person's mood. Emotion induction, which can be performed through robot actions or by showing video clips, stimulates changes in the person's emotions, reducing the observation time needed to estimate mood. The changes in emotions, which are biased by the person's mood, can then be used by a robot to determine that mood. To do so, we induced happy emotions by showing a comical clip and measured the intensities of the happy and sad emotions as well as of the neutral state. We then extracted a feature set, including both time-domain and frequency-domain features, and used it to determine the person's mood. The approach has been implemented and shows better results than a no-emotion-induction baseline. Based on the classification results, the approach distinguishes good from bad moods with an accuracy of 91.5% and a mean absolute error of 0.1. The neutral state, however, was not well distinguished.
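Extracting time- and frequency-domain features from an emotion-intensity time series, as this abstract describes, might look like the sketch below. The chosen features (mean, variance, dominant frequency via a naive DFT) are assumptions for illustration, not the paper's actual feature set.

```python
import math

def mood_features(intensity, fs=1.0):
    """Extract simple time- and frequency-domain features from an
    emotion-intensity time series sampled at fs Hz.

    Illustrative feature set: signal mean and variance (time domain)
    plus the dominant frequency (frequency domain, via a naive DFT).
    """
    n = len(intensity)
    mean = sum(intensity) / n
    var = sum((v - mean) ** 2 for v in intensity) / n

    # Magnitude of each positive-frequency DFT bin (skip the DC bin).
    mags = []
    for k in range(1, n // 2 + 1):
        re = sum(v * math.cos(2 * math.pi * k * i / n)
                 for i, v in enumerate(intensity))
        im = -sum(v * math.sin(2 * math.pi * k * i / n)
                  for i, v in enumerate(intensity))
        mags.append(math.hypot(re, im))
    dominant_hz = (mags.index(max(mags)) + 1) * fs / n

    return {"mean": mean, "variance": var, "dominant_freq_hz": dominant_hz}
```

In practice an FFT routine would replace the O(n²) DFT loop; it is written out here only to keep the example dependency-free.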
Social power, the potential for social influence, is a pervasive social process in human interactions. At the same time, recent advances in social robotics raise the question of whether a social robot can be used as a persuasive agent. To date, different attempts have been made using several approaches to tackle this research question. However, few studies have looked at the concept of social power in human-robot interaction (HRI) and how it can benefit the development of persuasion skills. This is precisely the goal of the work described here. In this text, we briefly report the results of our recent advances toward this objective and suggest future directions.
Social power is defined as one's ability to influence another to do something which s/he would not do without the presence of such power. Different theories classify alternative ways to achieve social power, such as providing a reward, using coercion, or acting as an expert. In this work, we explored two types of persuasive strategies based on social power (specifically reward and expertise) and created two social robots that employ these strategies. To examine their effectiveness, we performed a user study with 51 participants using two social robots in an adversarial setting in which both robots try to persuade the user toward a concrete choice. The results show that even though the two strategies caused the robots to be perceived differently in terms of competence and warmth, both were similarly persuasive.
The ability to determine mood is one of the fundamental challenges in affective computing. In this paper, we present a novel approach for mood detection via emotional variations. In this approach, mood is treated as a lower-magnitude, more stable (i.e., low-frequency) emotion that can be detected using emotion detection techniques. A Bayes classifier is applied to a feature vector composed of statistical aspects of the emotion intensities. In our implementation, two emotions, happiness and sadness, together with the neutral state, were targeted to determine the good, bad, and neutral moods of subjects respectively. The obtained correct classification rate (CCR) is 91.1%, with a mean error of 0.09 and a variance of 4.9 when discriminating good mood vs. neutral.
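A minimal sketch of the classification step described here, a Bayes classifier over statistical features of emotion intensities, is given below. The particular statistics (mean, variance, min, max) and the Gaussian-likelihood, equal-prior formulation are assumptions for illustration; the paper only states that statistical aspects of the intensities are used.

```python
import math

def intensity_stats(series):
    """Statistical features of one emotion-intensity channel:
    mean, variance, min, max (an illustrative subset)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series) / n
    return [mean, var, min(series), max(series)]

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class, per-feature normal
    likelihoods, with equal class priors assumed."""

    def fit(self, X, y):
        self.params = {}
        for label in set(y):
            rows = [x for x, lab in zip(X, y) if lab == label]
            self.params[label] = []
            for col in zip(*rows):
                mu = sum(col) / len(col)
                var = sum((v - mu) ** 2 for v in col) / len(col)
                # Floor the variance to avoid division by zero.
                self.params[label].append((mu, max(var, 1e-9)))
        return self

    def predict(self, x):
        def log_lik(label):
            return sum(
                -0.5 * math.log(2 * math.pi * var)
                - (v - mu) ** 2 / (2 * var)
                for v, (mu, var) in zip(x, self.params[label])
            )
        return max(self.params, key=log_lik)
```

A full pipeline would concatenate `intensity_stats` vectors for the happy, sad, and neutral channels before fitting.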
Recent advances in social robotics raise the question of whether a social robot can be used as a persuasive agent. To date, a body of work has addressed this research question using various approaches, ranging from the use of non-verbal behavior to the exploration of different embodiment characteristics. In this paper, we investigate the role of social power in making social robots more persuasive. Social power is defined as one's ability to influence another to do something which s/he would not do without the presence of such power. Different theories classify alternative ways to achieve social power, such as providing a reward, using coercion, or acting as an expert. In this work, we explored two types of persuasive strategies based on social power (specifically reward and expertise) and created two social robots that employ these strategies. To examine their effectiveness, we performed a user study with 51 participants using two social robots in an adversarial setting in which both robots try to persuade the user toward a concrete choice. The results show that even though the two strategies caused the robots to be perceived differently in terms of competence and warmth, both were similarly persuasive.
In this paper, we propose a feature-based model to recognize emotions from the touch patterns of individuals playing a game on a typical tablet. Novel features, such as angular velocity/acceleration, angle, curl, area, and the number of strokes within a time window, are introduced, and the gold standard of the data is determined automatically from the subjects' facial expressions. The results show that the approach is promising: the model is able to recognize all six basic emotions with a performance of 71.92% ± 0.51, and the recognition of valence and arousal reaches correlation coefficients of 0.76 and 0.78 respectively.
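Extracting this kind of feature vector from raw touch data might look like the sketch below. The stroke representation (lists of timestamped points) and the concrete definitions (bounding-box area, mean speed, mean turning angle) are illustrative assumptions, approximating but not reproducing the paper's exact features.

```python
import math

def touch_features(strokes):
    """Feature vector from touch strokes in one time window.

    Each stroke is a list of (x, y, t) samples. The features (stroke
    count, mean speed, bounding-box area, mean turning angle) are
    illustrative; the paper's exact definitions may differ.
    """
    xs = [p[0] for s in strokes for p in s]
    ys = [p[1] for s in strokes for p in s]
    area = (max(xs) - min(xs)) * (max(ys) - min(ys))

    speeds, angles = [], []
    for s in strokes:
        # Speed between consecutive samples of the same stroke.
        for a, b in zip(s, s[1:]):
            dt = b[2] - a[2] or 1e-9  # guard against zero timestamps
            speeds.append(math.hypot(b[0] - a[0], b[1] - a[1]) / dt)
        # Turning angle across each triple of consecutive samples.
        for a, b, c in zip(s, s[1:], s[2:]):
            h1 = math.atan2(b[1] - a[1], b[0] - a[0])
            h2 = math.atan2(c[1] - b[1], c[0] - b[0])
            angles.append(abs(h2 - h1))

    mean_speed = sum(speeds) / len(speeds) if speeds else 0.0
    mean_turn = sum(angles) / len(angles) if angles else 0.0
    return [float(len(strokes)), mean_speed, area, mean_turn]
```

Vectors like these, labeled via the facial-expression gold standard, would then be fed to any standard classifier.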