Most proposals for emotion in robots involve the addition of a separate ‘emotion module’ – some sort of bolted-on affective architecture that can influence other abilities such as perception and cognition. The idea would be to give the agent access to an enriched set of properties, such as the urgency of an action or the meaning of facial expressions. These properties could help to determine issues such as which visual objects should be processed first, what memories should be recollected, and which decisions will lead to better outcomes.
For more than two millennia, Western thinkers have separated emotion from cognition – emotion being the poorer sibling of the two. Cognition helps to explain the nature of space-time and sends humans to the Moon. Emotion might save the lioness in the savannah, but it also makes humans act irrationally with disconcerting frequency.
In the quest to create intelligent robots, designers tend to focus on purely rational, cognitive capacities. It’s tempting to disregard emotion entirely, or to include only as much of it as seems necessary. But without emotion to help determine the personal significance of objects and actions, I doubt that true intelligence can exist – not the kind that beats human opponents at chess or the game of Go, but the sort of smarts that we humans recognise as such. Although we can refer to certain behaviours as either ‘emotional’ or ‘cognitive’, this is really a linguistic short-cut. The two can’t be teased apart.
What counts as sophisticated, intelligent behaviour in the first place? Consider a crew of robots on a mission to Mars. To act intelligently, the robots can’t just scuttle about taking pictures of the environment and collecting dirt and mineral samples. They’d need to be able to figure out how to reach a target destination, and come up with alternative tactics if the most direct path is blocked. If pressed for time, they’d have to know which materials matter most to the expedition, and prioritise those.