Here is a nice story to ponder, about robots that at least appear to express human emotions.
It prompts some philosophical questions, including the following:
If something appears to have emotions, does it have them?
What is the significance of the fact that the robot uses human forms of display, while not being a human?
Is there an underlying universal language of emotions (like a language of thought) which is then translated into the local language of communication, or is the language of communication the only language, so that emotions expressed in different local languages of communication would be incommensurable?
Would such a robot be a moral client, either straight out of the box or after you had lived with it for a while? Would it be alright to turn it off when you went away on holiday, and then never turn it back on again?
I do not have answers, and do not expect ever to have more than provisional answers, but it is good when a philosophers' thought experiment turns out to have a counterpart in a real-world one.