MSc Projects

The ability to determine mood is one of the fundamental challenges in affective computing. This motivated me to investigate different approaches to automatic mood detection. With this aim, we used machine learning techniques to set up data-driven models, and we proposed two methods to recognize mood in either short- or long-term interactions.
In addition, a great deal of scientific evidence suggests a close relationship between mood and human cognitive processes in everyday tasks. Hence, in a second step, we investigated the relationship between mood and gaze using a Tobii X-20 eye tracker.
Finally, considering the importance of mood detection in Human-Robot Interaction, we carried out a user study to examine the approaches in application.

Method 1

Emotional features

This approach detects mood through emotional variations. Mood is treated as a low-magnitude, more stable, i.e. low-frequency, emotion that can be detected using emotion detection approaches. A Bayes classifier is applied to a feature vector composed of statistical aspects of the intensity of the emotions. The approach was implemented targeting two emotions, happiness and sadness, plus the neutral state, to determine the good, bad, and neutral mood of subjects respectively. The obtained Correct Classification Rate (CCR) is 91.1%, with a mean error of 0.09 and a variance of 4.9, discriminating good mood vs. neutral. Find more details here.
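To make the pipeline concrete, here is a minimal sketch of how such a classifier could be set up; the specific statistics and the use of scikit-learn's GaussianNB are illustrative assumptions, not the original implementation.

```python
# Illustrative sketch only (assumed pipeline, not the original code):
# summarize per-frame emotion intensities with simple statistics and
# classify mood with a naive Bayes model.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def mood_features(intensities):
    """intensities: (n_frames, 3) array of happy/sad/neutral scores for one session."""
    return np.concatenate([
        intensities.mean(axis=0),  # average intensity of each emotion
        intensities.std(axis=0),   # variability of each emotion
        intensities.max(axis=0),   # peak intensity of each emotion
    ])

def train_mood_classifier(sessions, labels):
    """sessions: list of per-session intensity arrays; labels: good/bad/neutral."""
    X = np.vstack([mood_features(s) for s in sessions])
    return GaussianNB().fit(X, labels)
```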


Method 2

Inductive approach

In this approach, we propose a human-inspired method in which changes in emotions, elicited through emotion induction, are used to determine the mood of a person. The emotion induction, which can be done through robot actions or by showing video clips, stimulates changes in the person's emotions, reducing the observation time needed to estimate mood. Consequently, the changes in the emotions, which are biased by the person's mood, can be used by a robot to determine that mood. To do so, we induced happy emotions by showing a comical clip and measured the intensities of the happy and sad emotions as well as the neutral state. We then extracted a feature set, including both time- and frequency-domain features, which is used to determine the mood of the person. The approach was implemented and compared to a no-emotion-induction approach, showing better results. Based on the classification results of a pilot study, the approach is able to distinguish good vs. bad moods with an accuracy of 91.5% and a mean absolute error of 0.1; however, the neutral state was not well distinguished. This inductive method was then applied in a final study in which 358 subjects participated. From a selected set of features we obtained 88.2% classification accuracy for all three moods (good, bad, and neutral), with a mean absolute error of 0.1 and a variance of 0.2. Find more details here.
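A hedged sketch of what such a time/frequency feature set could look like is shown below; the sampling rate, the 0.5 Hz cut-off, and the specific statistics are assumptions for illustration only.

```python
# Illustrative feature extraction for the induction-based approach (assumed):
# time-domain statistics plus simple frequency-domain energy measures over
# the emotion-intensity signal recorded while the clip is shown.
import numpy as np

def time_frequency_features(signal, fs=10.0):
    """signal: 1-D emotion intensity over time; fs: sampling rate in Hz (assumed)."""
    signal = np.asarray(signal, dtype=float)
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    low = spectrum[freqs < 0.5].sum()    # slow, mood-like variation
    high = spectrum[freqs >= 0.5].sum()  # faster, emotion-like variation
    return np.array([
        signal.mean(), signal.std(),     # time-domain statistics
        signal.max() - signal.min(),     # size of the response to induction
        low, high, low / (high + 1e-9),  # frequency-domain energies and ratio
    ])
```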


Method 3

Mood and Gaze

In this study, we investigated the feasibility of determining mood from gaze, one of the human cognitive processes that can be recorded during interaction with computers. To do so, we designed a feature vector composed of typical gaze patterns and piloted the approach on a dataset we gathered, consisting of 145 samples from 30 people. A supervised machine learning technique was employed to classify and recognize mood. The results of this pilot test suggest that, even at this initial stage, the approach is quite promising and opens further research paths for improvement through multi-modal recognition and information fusion. A multi-modal approach would employ the added information provided by our previously developed camera-based mood extraction approach and/or information gained from EEG signals. Further analysis will be performed in the feature extraction process to enhance model accuracy by enriching the feature set of each modality.
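As an illustration of what a gaze-based feature vector might contain, the sketch below uses common fixation and saccade statistics; the actual features used in the study are not reproduced here, and the choice of an SVM classifier is an assumption.

```python
# Illustrative gaze feature vector and classifier (assumptions, not the study's code).
import numpy as np
from sklearn.svm import SVC

def gaze_features(fixation_durations, saccade_amplitudes, pupil_sizes):
    """Each argument is a 1-D array recorded during one interaction session."""
    return np.array([
        np.mean(fixation_durations), np.std(fixation_durations), len(fixation_durations),
        np.mean(saccade_amplitudes), np.std(saccade_amplitudes),
        np.mean(pupil_sizes), np.std(pupil_sizes),
    ])

# A standard supervised classifier stands in for the one used in the pilot.
mood_classifier = SVC(kernel="rbf")
```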


In application

Mood detection for Social Robots

Mood, as one of the human affects, plays a vital role in human-human interaction, especially due to its long-lasting effects. In this work, we introduce an approach in which a companion robot capable of mood detection is employed to detect and report the mood state of a person to his/her partner, preparing the partner for upcoming encounters. Such a companion robot could be used at home or at work and would improve the interaction experience for couples, partners, family members, etc. We implemented the proposed approach using a vision-based method for mood detection and tested it in an experiment and a follow-up study. Descriptive and statistical analyses were performed on the gathered data. The results show that this type of information can have a positive impact on the interaction of partners.




PhD Projects

My PhD project investigates social power dynamics in Human-Robot Interaction. Social power is defined as one's ability to influence others to do something they would not do otherwise. Different theories classify alternative ways to achieve social power, such as providing rewards, using coercion, or acting as an expert. After conceptualizing social power to allow its implementation in social agents, we studied how those power strategies affect persuasion when using robots. Specifically, we attempted to design persuasive robots by creating persuasive strategies inspired by social power.

Step 1

Social Power for Social Agents

Initially, considering the significant impact of social power on social interaction, and its acknowledged role in the believability of social agents (e.g. social robots), we proposed a conceptualization of social power for agents' decision making. In this conceptualization, we argued that the ability to reason and plan in the presence of social power enhances the social believability of agents, leading to more rational interactions. With this aim, we proposed a computational model of social power inspired by a well-known theory that identifies five bases of social power (reward, coercion, expert, referent, legitimate). The proposed model enables agents to process and generate behavior in the face of these five bases and make rational decisions. Robots designed based on this model could be beneficial in a wide variety of social interactions (e.g. as personal companions) by exhibiting social behaviors under different power-related circumstances. However, further investigation is required to test the model in application within a user study and to further examine its expressiveness. Find more details here.
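The sketch below gives a toy flavor of such a model: the five bases, their situation-specific strengths, and a simple aggregation an agent could reason over. The linear form and the example weights are assumptions for illustration, not the published formalization.

```python
# Toy illustration of reasoning over the five power bases (assumed form).
from dataclasses import dataclass

@dataclass
class PowerBase:
    name: str        # reward, coercion, expert, referent, or legitimate
    strength: float  # how strong this base is in the current situation (0..1)
    weight: float    # how much the target is affected by this base (0..1)

def social_power(bases):
    """Aggregate the bases into a single power estimate over a target."""
    return sum(b.strength * b.weight for b in bases)

bases = [PowerBase("reward", 0.8, 0.6), PowerBase("expert", 0.5, 0.9)]
print(social_power(bases))  # a higher value means compliance is more likely
```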


Second step

Trust in HRI

The conceptualization revealed a factor common to most of the five power bases: trust. Hence, in another attempt, we investigated different factors influencing the trust that human users put in social robots. Trust can help reduce social complexity, mainly in cases where cooperation is necessary. With this aim, we examined the influence of a set of factors (gender, emotional representation, making small talk (ST), and embodiment) that may affect the trustworthiness of a robot. To do so, we designed a set of user studies in which a robot asked human subjects to make donations to fix a malfunctioning part of its body. We used two different metrics: a trust questionnaire and the amount of donations. The results show significant differences in trust depending on the robot's facial expressions and on whether or not it made ST. In the same sense, people tended to donate significantly different amounts when the robot performed different emotional gestures and when it did or did not make ST. Furthermore, trust levels differed significantly between the experiment using NAO (a fully embodied robot) and the one using Emys (a robotic head), indicating that embodiment is another factor that influences trust. A final result also showed that the gender of the participants leads to significant differences in trust levels with respect to embodiment.
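As a sketch of the kind of comparison reported above, the snippet below contrasts donation amounts between two conditions with an independent-samples t-test; the data are placeholders and the specific test is an assumption, not necessarily the reported analysis.

```python
# Hedged sketch: compare donations with vs. without small talk (placeholder data).
from scipy import stats

donations_with_st = [2.0, 1.5, 3.0, 2.5, 2.0]     # placeholder amounts
donations_without_st = [1.0, 0.5, 1.5, 1.0, 0.5]  # placeholder amounts

t, p = stats.ttest_ind(donations_with_st, donations_without_st)
print(f"t = {t:.2f}, p = {p:.3f}")
```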


Final Step

In the third attempt, to investigate how individuals perceive robots in power, we designed two user studies with powerful robots, that is, robots equipped with power resources. Since social power endows individuals with the ability to influence, we designed scenarios in which the robot attempted to persuade the users. The link between power and persuasion has long been investigated in social psychology, and different theories exist regarding this link; for instance, recent evidence suggests that a higher level of power leads to higher persuasion. Although other approaches to making robots persuasive are viable (and have been used in other studies), we used social power strategies, which have been neglected in this field. Initially, we selected the reward, coercion, and expert strategies due to their applicability in creating more believable scenarios.


Competitive Robots

In the first study, we investigated the role of social power in persuasive social robots. We explored two types of persuasive strategies based on social power (specifically reward and expertise) and created two social robots that employ such strategies. To examine the effectiveness of these strategies, we performed a user study with 51 participants using two social robots in an adversarial setting in which both robots tried to persuade the user toward a concrete choice (3 coffee capsules hidden in 3 boxes). In our design, one robot attempted to persuade the users to select its coffee by giving them information about the good quality of its capsule, while the other robot tried to influence the users by giving them a reward. A third coffee, not promoted by either robot, served as the control condition. We considered five dependent variables: the selected coffee, the preferred robot, the robots' persuasiveness, robot perception, and how likely participants would be to comply with each robot in the future. The independent variable was the power strategy used by the robots. The results showed that although each strategy caused the robots to be perceived differently in terms of competence and warmth, both were similarly persuasive.
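One possible way to analyze the choice data is sketched below: a chi-square goodness-of-fit test checking whether the three capsules were chosen equally often. The counts are placeholders and this particular test is an assumption, not necessarily the analysis reported for the study.

```python
# Hedged sketch: were the three capsules (expert robot, reward robot, control)
# chosen equally often by the 51 participants? Counts below are placeholders.
from scipy.stats import chisquare

choice_counts = [22, 21, 8]          # expert, reward, control (placeholder)
chi2, p = chisquare(choice_counts)   # expected frequencies default to uniform
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```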

Reward/Coercion Strategies

Similarly, the second study [20] was designed to investigate the persuasiveness of social robots using two persuasive strategies inspired by social power. In this design, we used a single robot in two different conditions, plus a control condition, and two coffee capsules with different rankings. In the first condition (reward power strategy), the robot tried to persuade the users to opt for the lower-ranked coffee by giving them a reward (a pen). In the second condition (coercive power strategy), the robot first gave a pen to the users as a reward for participating in the experiment, but later required the pen as payment for the higher-ranked coffee (a punishment for not complying). In the control condition, the robot did not use any persuasive strategy and the users were free to select either of the two capsules. We measured the users' personality, robot perception, and the social power of the robot (using the Social Power Scale). The results indicated that, in both conditions, the robot succeeded in persuading the users to select the less desirable choice over the better one. However, no difference was found in the perception of the robot between the two strategies, nor in the social power level. The results suggest that social robots are capable of persuading users, especially those who are new to social robots. However, the collected data did not show any significant differences in the other measured variables, which we aim to investigate in a future study.

Repeated Interactions

The two previous studies indicated that social power endows social robots with persuasiveness; however, this effect was tested only in a single attempt. It is not clear whether the effect of social power on persuasion remains constant over a series of interactions, or whether it decays or even strengthens over time. Furthermore, in the first step we proposed a formalization for modeling social power for social agents. The model indicates that social power has a linear relationship with the identified parameters; for instance, an increase in the level of rewards leads to higher social power, and hence a higher-valued reward leads to greater compliance. Also, the relationship between power level and persuasion is not clear: in the case of reward power, for instance, does a higher reward lead to higher persuasion?

Hence, the last study aims to evaluate the model in application, specifically over a series of repeated interactions. To do so, we will use the proposed formalization of social power with different values for the identified parameters to investigate how the model behaves under different circumstances.