Why bother programming facial expressions for artificial intelligence if humans are bad at recognising them?
Set in the near future, robots and mankind coexist amicably thanks to great advances in robotics and artificial intelligence. Robots are everywhere, and some take human form. All of them are hardwired to obey a robot version of the penal code, which is continually updated. Since we humans are good at recognising patterns but poor at interpreting them, which is why awkwardness exists between people, why bother giving robots facial expressions at all? They certainly don't need them to talk to another robot, and I suspect it would be awkward for us too, because we all know the expressions are fake!
To clarify: I'm not referring to any particular group of people with a specific syndrome; I'm talking about humans in general. We have no problem identifying patterns in nature, and we are arguably among the best species in history at it. However, the actual meaning a given pattern is trying to convey differs from person to person. This can lead to misunderstanding, and even tragedy, especially when it comes to an AI, a very complex machine.
humans communication artificial-intelligence robots
Comments are not for extended discussion; this conversation has been moved to chat.
– L.Dutch♦, Feb 14 at 15:28
asked Feb 13 at 2:59 by user6760; edited Feb 14 at 2:00
8 Answers
Humans are actually very good at recognizing facial expressions and body language.
While it's true that large numbers of people are terrible at it, it's only in comparison to the average human. Assuming unimpaired vision and intelligence, humans use facial expressions to tell who among a group of people is talking, what or whom the person is referring to (because of their gaze)...at least a good guess, and many other things.
Maybe you don't know if someone is slightly annoyed vs upset, or maybe you can't tell if someone is joking, but that's as much word-choice and tone of voice as it is facial expressions. The number of things that people pick up from other people, or from animals, is much larger than you might think. If you're not sure of that, talk to someone who is blind. Or compare conversations on the phone or online with in-person ones.
Another way to look at facial expressions is as a subset of the larger set of non-verbal communication of body language and signaling.
From Turn Signals Are the Facial Expressions of Automobiles by Donald Norman.
Social cooperation requires signals, ways of letting others know one's
actions and intentions. Moreover, it is useful to know reactions to
actions: how do others perceive them? The most powerful method of
signaling, of course, is through language. Emotions, especially the
outward signaling of emotions, play equally important roles. Emotional
and facial expressions are simple signal systems that allow us to
communicate to others our own internal states. In fact, emotions can
act as a communication medium within an individual, helping bridge the
gap between internal, subconscious states and conscious ones.
As I study the interaction of people with technology, I am not happy
with what I see. In some sense, you might say, my goal is to socialize
technology. Right now, technology lacks social graces. The machine
sits there, placid, demanding. It tends to interact only in order to
demand attention, not to communicate, not to interact gracefully.
People and social animals have evolved a wide range of signaling
systems, the better to make their interactions pleasant and
productive. One way to understand the deficiencies of today's
technologies and to see how they might improve is to examine the route
that natural evolution has taken. You know the old saying that history
repeats itself, that those who fail to study the lessons of
history are doomed to repeat its failures? Well, I think the analogous
statement applies to evolution and technology: those who are unaware
of the lessons of biological evolution are doomed to repeat its
failures.
So, yes, robots need facial expressions. They need external signals in addition to spoken or written or signed language. But they don't have to mimic human expressions. You're right that it is jarring to see fake expressions. If done right, however, they wouldn't be fake, they'd just be specific to a robot. If they are just different enough that they're obviously not human, but not so different that you can't pick up on them right away, you've hit the sweet spot.
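That "robot-specific but readable" sweet spot can be sketched in a few lines of code. Everything below (the state names, the LED colours, the eye-panel icons) is hypothetical, just to illustrate that an expression channel need not mimic a human face at all:

```python
# A minimal sketch of robot-specific expressions: readable external
# signals that are deliberately NOT imitations of a human face.
from enum import Enum

class State(Enum):
    IDLE = "idle"
    BUSY = "busy"
    NEEDS_HELP = "needs_help"
    LOW_POWER = "low_power"

# Hypothetical display vocabulary: an LED colour plus a simple
# eye-panel icon, distinct enough to be obviously non-human.
EXPRESSIONS = {
    State.IDLE:       {"led": "soft_blue",   "eyes": "o o"},
    State.BUSY:       {"led": "amber",       "eyes": "- -"},
    State.NEEDS_HELP: {"led": "pulsing_red", "eyes": "? ?"},
    State.LOW_POWER:  {"led": "dim_yellow",  "eyes": "z z"},
}

def express(state: State) -> dict:
    """Return the outward signal for an internal state."""
    return EXPRESSIONS[state]

print(express(State.NEEDS_HELP))
# -> {'led': 'pulsing_red', 'eyes': '? ?'}
```

The point of the sketch is that the mapping from internal state to outward signal is fixed and honest; a human only needs a short exposure to learn the vocabulary, the same way we learn a traffic light or a cat's flattened ears.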
Would you believe that around half of all humans are worse than average at recognising facial expressions!
– Separatrix, Feb 13 at 8:21
@Separatrix They're a lot better than the average for a chimpanzee too :P
– Luaan, Feb 13 at 9:24
Who needs facial expressions for robots when you can have other features to provide the same function - e.g. in the Culture novels, drones would shade their fields in colours to represent emotional or social subtleties not provided for by language. Alternatives could be an emoji screen or a hologram floating above the robot.
– gbjbaanb, Feb 13 at 15:36
This is an excellent answer-- the last paragraph might benefit from a note on how well humans do with abstractions of a human face. Even infants will stare at a picture that looks like this: :-| longer than at other combinations of the same symbols. We understand what :) and :( mean easily, and the Russians even drop the eyeballs, conveying the same emotions just with ) and (. The fake expressions on robots don't need to be a perfect mimicry of a human face, because humans are excellent at abstraction.
– Blue Caboose, Feb 13 at 16:28
@Cyn Exactly! The robots don't even need faces for us to personify them-- think of how people talk about their roombas, as if they were charming pets who get "distressed" when they get stuck. I know I'm prone to gently patting the dashboard of my car when it starts beeping because it thinks I'm about to do something stupid, as if it were a jumpy horse in need of comforting and not a big chunk of machinery. Anthropomorphizing is just what humans do.
– Blue Caboose, Feb 13 at 17:00
Humans are some of the most advanced pattern-recognition and action-taking machines that have ever existed. So capable, in fact, that they have theories of matter (physics/chemistry/geology/...) and theories of what matters (psychology/religion/philosophy/...).
The fact that it takes nearly 30 years to train most specimens to be expert recognisers in a given field of endeavor should underline the difficulty of grouping the various states of the universe into manageable and actionable patterns.
In your world robots act as autonomous embodied entities. They are capable of communicating at speed with other similar entities using a range of network technologies. This allows them to communicate fairly clearly about a range of topics but they too hit limitations.
- Radio waves have limited bandwidth, and everyone can hear them
- Lasers require line of sight, and dust can cause errors
- Network cables are high bandwidth but tether the robot to a specific vicinity
This places an upper limit on the ability of these entities to communicate effectively. Also there are entities out there that are not robots. It might pay to be able to communicate with those too.
If the robot were the size of a butterfly, then aside from having limited mind/communication space, it's likely that birds would treat it as food. The robot probably has the ability to shock the bird, but it might pay to communicate to the bird, before it tries, that eating the robot is a bad idea. Being a bright iridescent red/yellow is the general approach used by poisonous caterpillars, and birds generally respect that. After all, the robot might actually be running low on power when the bird shows up.
Scale this up and there are a wide variety of organisms running around that it might pay better to communicate with than to ignore. Particularly those two-legged simians: there are a few of them, they are pretty inventive, and they are quite happy to do boring repetitive tasks. Perhaps, if the communication happened well enough, they might be put to use inventing and dealing with irritations?
Even without all of these other organisms hanging around, the simple fact that these technologies have bandwidth limitations is a problem. Communication is insanely important if you desire to reduce the frequency and severity of problems encountered while existing. There are a few more technological channels that would be useful, but it is relatively simple to adopt proven modes of communication:
- colouration (I am poisonous; I don't care if you recognise me / I'm not good news)
- hearing (locate movement and position, as well as conceptual information)
- vocalisation/stridulation (location as well as conceptual information, ready to fight, ready to mate, ready to serve)
- dance/movement displays (are you willing to invest energy into demonstrating capability, should I press the point to the next level?)
- faces
- eyes (what are you interested in/looking at?)
- eyebrows (are you exploring/frustrated/contented?)
- teeth (are you indicating dominance/subservience, will you challenge me?)
- posture/stance (are you ready? what are you ready to do?)
Obviously no two robots need to have the full or even the same range of communication methods. Many creatures even in the same species have varied capacities. An obvious example is red/green colour blindness in humans.
So do robots need faces? No.
Would robots significantly benefit from having faces? Yes.
"it might pay to communicate to the bird before it tries that eating the robot is a bad idea." - and not only for robots the size of butterflies. I once saw a large gull take out a drone, which ended up in the sea at the bottom of a 400ft cliff. It was hard to tell if the gull was annoyed at not getting lunch, but it seemed a bit confused that, unlike a dead bird in the water, a dead drone doesn't float, so it's hard to eat ;)
– alephzero, Feb 13 at 21:00
@alephzero that must have been an interesting sight.
– Kain0_0, Feb 13 at 22:49
You say that humans, in general, are poor at recognizing facial expressions. As a person with both autism and a brain injury, I beg to differ. I am poor at both recognizing and using facial expressions (I even have a doctor's note!), and the differences are observable to an outsider.
A few years back, before the brain injury and before we knew I was autistic, my family used to play with the Society for Creative Anachronism (SCA). For those not familiar with the SCA, think of them as adults playing dressup in medieval garb, who get together to gossip, drink beer, and beat on each other with big sticks. For the Brits out there, think of stereotypical rugby players dressing up as King Arthur's knights, then playing rugby with swords (that event was called "Blood of Heroes", by the way, emulating a very bad movie of the same name).
Naturally, there is a lot of banter in such a group, and a lot of good-natured roughhousing. They can find a week's worth of innuendo in the word innuendo. Shortly after we moved on to other hobbies, my wife told me that I used to scare them because, on the rare occasions I did join the banter, they couldn't tell whether I was serious or joking.
Post brain injury, it is much worse. Recently, when I was being introduced to a new business partner, they tried to be friendly and informally ask me about myself. Unfortunately, they asked "What gets you out of bed in the morning?" A metaphorical question about emotional state, laden with social cues, it effectively and awkwardly put an end to the conversation, because it hit exactly the places where I physically lack the machinery to process efficiently.
My wife has learned to be very careful with her use of sarcasm and metaphors around me because, without the social cues that go with them, I tend to take them literally. "Why did you (some action)?" "You told me to." "I didn't mean for you to really do that!" "Sorry."
A robot mimicking humanoid characteristics to the point they were indistinguishable from humans, without emotional/social emulation of some sort, would tend to make people feel uneasy and awkward, or as my son would say "it would creep them out".
Worse than not mimicking emotions would be mimicking them badly. A smile whose timing is off by a quarter second is perceived as deceptive, for example. In contrast, obviously different emotional responses that are specific to the species (robots) can be learned by humans. I used to keep crustaceans in my aquarium, and it is amazing how expressive crabs and crayfish can be without flexible faces.
For a good idea of the issues you might run into, watch the first season of Ninjago and the team's interactions with Zane, before anyone knows he is a robot.
Humans teach other humans in formal positions to use facial expressions when communicating. When someone is representing an organization or group, the message or information they are speaking more often than not has no connection to their personal opinion. As a result we teach humans in these positions to use a facial expression that communicates the "emotive" part of the message the organization/group wants to get across.
Well if we think that's appropriate for humans, we certainly will do it for robots if we can.
They definitely don't need them for talking to another robot
I suspect it could be used as part of robot-robot interaction as well. There are environments where it would be too loud to use audio, too problematic to use e.g. radio based communications and all you may have is the visual. It might be better to implement a custom set of robot-robot expressions for some comms.
plus it will undoubtedly be awkward for us too, I suppose, because we all know they are fake!
I know the smile on the sales person's face is fake too, but it had better be there or no sale ! We expect facial and body language to match words and we treat inconsistencies as suspicious. That's how humans work and why good communicators spend a lot of time learning to make it all look natural, even when it isn't.
The other side of this is that humans need facial expressions to match what they expect. A robot which has a hardwired smile or scowl would give a very disconcerting feeling to a human in the wrong context.
"I hope you enjoyed your stay with us?" is going to feel much creepier to a human coming from a robot with a scowl or sneer (or something that could be read that way) than from something with a pattern that's more easily interpreted as friendly.
So the human-like expressions would be useful to humans, avoiding emotional dissonance that would put us off.
We even treat animals like this: we learn to interpret their expressions and body language and "map" human meaning onto them. It's an important part of how we interpret interactions with the world. We even extend this to inanimate objects - signs, logos, advertising.
So facial expressions we can read easily would be there to make humans comfortable and make communications more effective.
You raise a very good point.
In many demonstrations, it has been established that robot faces displaying really accurate human emotions freak people out. It is very disconcerting to have an inanimate but artificially intelligent object mimic the most basic human attribute - emotion. Rather than making us trust the robot more, it actually leads to greater mistrust and discomfort.
Personally, I would regard a robot smiling at me with exactly the same trepidation as that of a used car salesperson giving me the 'trust me' smile, or of the clown with that unsettling painted-on grin.
It was much easier to trust the answers from Data of Star Trek fame because they were delivered by a very impartial, non-emotional robotic characterization.
In the movie 'I, Robot', the human connection with Sonny was much more believable BECAUSE he did not display artificial emotions. In fact, Sonny's face was so neutral that the human observer could superimpose their OWN image of emotional expression, which was somehow more appealing than having Sonny display the emotion. That is, we saw what we believed to be there, not what the artificial intelligence wanted us to see.
Add to this the fact that these robots will have a hard time being accepted in the first place, so there would already be an element of mistrust. Any false impression one got from misreading an artificial expression dissonant with the message or intent would lead to even greater mistrust and mental tension from cognitive dissonance.
The research into voice responses from our cars, devices, and other voice response technology supports this notion. They all have a bland, neutral tone for the same reasons. People don't WANT a bubbly, upbeat, almost laughing Siri. They don't want any kind of tone or inflection that indicates or that could be interpreted as indicating judgement or acceptance or agreement/disagreement. They want 'just the facts, ma'am' with just enough inflection to make it sound natural.
My bet would be to go with robots having completely neutral expressions, with just the right design that lets the human observer imply their own expected emotion onto the face.
Even if we're terrible at interpreting facial expressions, their absence would be extremely disconcerting. Therefore, in order to avoid being unsettling, you'd want the robots to mimic human behaviour, just like we'd do for speech patterns, avoiding monotone, etc.
Had the exact same idea. Even if people don't fully appreciate the nuance that goes into a robot's facial expression, it is still important for them to have it, to avoid falling into the Uncanny Valley.
– D.Spetz, Feb 13 at 16:00
As mentioned, according to Uncanny Valley research, it's fine to be either not human-like at all (like modern industrial robots) or as human-like as possible; it's only bad if you try to make something human-like but miss even a tiny human feature. In that case it's easier not to try to mimic a human at all, and instead create something completely different that won't cause anxiety among people.
– user28434, Feb 14 at 11:15
Look up "mentalists" and you will find lots of information about them. They are talented to start with (compared to the average, as someone said), and they are sufficiently trained to successfully trick you into thinking that they can read your mind. They too learn your facial expressions and tone of speech, and think ahead of you when you choose a "random" number. Uri Geller, a famous example, is just a first-year student in comparison. Look up Israeli mentalists like Nimrod Har'el and Hezi Dean:
http://www.nimrodharel.co.il
http://www.hezidean.co.il/ENGLISH/
I think mentalists would have failed if they had not studied body language.
Such trained persons would be valuable to AI programmers in training a robot to read your body language. Nobody has mentioned that robots might be trained well enough to search for suspects in a crowd.
Why bother creating AI when the biological machine is sufficient?
Expressions, one could say, are also a cognitive function of self-identity and self-awareness: the very things AI programmers pore over volumes of topics to unravel. The idea of a fully functioning brain requires all the parts and mechanisms that go along with it. Facial expressions and gestures, for example, involve thousands of receptors telling millions of neurons what we are doing with our individual muscles. Something as simple as cracking a smile follows a very deep set of instructions, if you think about it.
$endgroup$
1
$begingroup$
Welcome to Worldbuilding, Venjik! If you have a moment, please take the tour and visit the help center to learn more about the site. You may also find Worldbuilding Meta and The Sandbox useful. Here is a meta post on the culture and style of Worldbuilding.SE, just to help you understand our scope and methods, and how we do things here. Have fun!
$endgroup$
– Gryphon
Feb 13 at 13:36
add a comment |
Your Answer
StackExchange.ifUsing("editor", function ()
return StackExchange.using("mathjaxEditing", function ()
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
);
);
, "mathjax-editing");
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "579"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fworldbuilding.stackexchange.com%2fquestions%2f139014%2fwhy-bother-programming-facial-expressions-for-artificial-intelligence-if-humans%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
8 Answers
8
active
oldest
votes
8 Answers
8
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
Humans are actually very good at recognizing facial expressions and body language.
While it's true that large numbers of people are terrible at it, it's only in comparison to the average human. Assuming unimpaired vision and intelligence, humans use facial expressions to tell who among a group of people is talking, what or whom the person is referring to (because of their gaze)...at least a good guess, and many other things.
Maybe you don't know if someone is slightly annoyed vs upset, or maybe you can't tell if someone is joking, but that's as much word-choice and tone of voice as it is facial expressions. The number of things that people pick up from other people, or from animals, is much larger than you might think. If you're not sure of that, talk to someone who is blind. Or compare conversations on the phone or online with in-person ones.
Another way to look at facial expressions is as a subset of the larger set of non-verbal communication of body language and signaling.
From Turn Signals Are the Facial Expressions of Automobiles by Donald Norman.
Social cooperation requires signals, ways of letting others know one's
actions and intentions. Moreover, it is useful to know reactions to
actions: how do others perceive them? The most powerful method of
signaling, of course, is through language. Emotions, especially the
outward signaling of emotions, play equally important roles. Emotional
and facial expressions are simple signal systems that allow us to
communicate to others our own internal states. In fact, emotions can
act as a communication medium within an individual, helping bridge the
gap between internal, subconscious states and conscious ones.
As I study the interaction of people with technology, I am not happy
with what I see. In some sense, you might say, my goal is to socialize
technology. Right now, technology lacks social graces. The machine
sits there, placid, demanding. It tends to interact only in order to
demand attention, not to communicate, not to interact gracefully.
People and social animals have evolved a wide range of signaling
systems, the better to make their interactions pleasant and
productive. One way to understand the deficiencies of today's
technologies and to see how they might improve is to examine the route
that natural evolution has taken. You know the old saying that history
repeats itself, that those that who fail to study the lessons of
history are doomed to repeat its failures? Well, I think the analogous
statement applies to evolution and technology: those who are unaware
of the lessons of biological evolution are doomed to repeat its
failures.
So, yes, robots need facial expressions. They need external signals in addition to spoken or written or signed language. But they don't have to mimic human expressions. You're right that it is jarring to see fake expressions. If done right, however, they wouldn't be fake, they'd just be specific to a robot. If they are just different enough that they're obviously not human, but not so different that you can't pick up on them right away, you've hit the sweet spot.
$endgroup$
31
$begingroup$
Would you believe that around half of all humans are worse than average at recognising facial expressions!
$endgroup$
– Separatrix
Feb 13 at 8:21
2
$begingroup$
@Separatrix They're a lot better than the average for a chimpanzee too :P
$endgroup$
– Luaan
Feb 13 at 9:24
1
$begingroup$
Who needs facial expressions for robots when you can have other features tor provide the same function - eg in the Culture novels, drones would shade their fields in colours to represent emotional or social subtleties not provided for by language. Alternatives could be an emoji screen or hologram floating above the robot.
$endgroup$
– gbjbaanb
Feb 13 at 15:36
5
$begingroup$
This is an excellent answer-- the last paragraph might benefit from a note on how well humans do with abstractions of a human face. Even infants will stare at a picture that looks like this: :-| longer than other combinations of the same symbols. We understand what :) and :( mean easily, and the Russians even drop the eyballs, conveying the same emotions just with ) and (. The fake expressions on robots don't need to be a perfect mimicry of a human face, because humans are excellent at abstraction.
$endgroup$
– Blue Caboose
Feb 13 at 16:28
1
$begingroup$
@Cyn Exactly! The robots don't even need faces for us to personify them-- think of how people talk about their roombas, as if they were charming pets who get "distressed" when they get stuck. I know I'm prone to gently patting the dashboard of my car when it starts beeping because it thinks I'm about to do something stupid, as if it was a jumpy horse in need of comforting, and not a big chunk of machinery. Anthropomorphizing is just what humans do.
$endgroup$
– Blue Caboose
Feb 13 at 17:00
|
show 4 more comments
$begingroup$
Humans are actually very good at recognizing facial expressions and body language.
While it's true that large numbers of people are terrible at it, that's only in comparison to the average human. Assuming unimpaired vision and intelligence, humans use facial expressions to tell who among a group of people is talking, what or whom the person is referring to (because of their gaze, at least as a good guess), and many other things.
Maybe you don't know if someone is slightly annoyed vs. upset, or maybe you can't tell if someone is joking, but that's as much word choice and tone of voice as it is facial expression. The number of things that people pick up from other people, or from animals, is much larger than you might think. If you're not sure of that, talk to someone who is blind. Or compare conversations on the phone or online with in-person ones.
Another way to look at facial expressions is as a subset of the larger set of non-verbal communication of body language and signaling.
From Turn Signals Are the Facial Expressions of Automobiles by Donald Norman:
Social cooperation requires signals, ways of letting others know one's actions and intentions. Moreover, it is useful to know reactions to actions: how do others perceive them? The most powerful method of signaling, of course, is through language. Emotions, especially the outward signaling of emotions, play equally important roles. Emotional and facial expressions are simple signal systems that allow us to communicate to others our own internal states. In fact, emotions can act as a communication medium within an individual, helping bridge the gap between internal, subconscious states and conscious ones.
As I study the interaction of people with technology, I am not happy with what I see. In some sense, you might say, my goal is to socialize technology. Right now, technology lacks social graces. The machine sits there, placid, demanding. It tends to interact only in order to demand attention, not to communicate, not to interact gracefully.
People and social animals have evolved a wide range of signaling systems, the better to make their interactions pleasant and productive. One way to understand the deficiencies of today's technologies and to see how they might improve is to examine the route that natural evolution has taken. You know the old saying that history repeats itself, that those who fail to study the lessons of history are doomed to repeat its failures? Well, I think the analogous statement applies to evolution and technology: those who are unaware of the lessons of biological evolution are doomed to repeat its failures.
So, yes, robots need facial expressions. They need external signals in addition to spoken or written or signed language. But they don't have to mimic human expressions. You're right that it is jarring to see fake expressions. If done right, however, they wouldn't be fake, they'd just be specific to a robot. If they are just different enough that they're obviously not human, but not so different that you can't pick up on them right away, you've hit the sweet spot.
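That "sweet spot" can be made concrete with a small sketch. Everything below is illustrative: the state names, display cues, and tilt angles are invented for this example, not taken from any real robot.

```python
# Illustrative sketch: a robot expression set that is deliberately
# non-human (light patterns, head tilt) yet maps onto states humans
# already know how to read at a glance.
ROBOT_EXPRESSIONS = {
    "attentive": {"eye_light": "steady_blue", "head_tilt_deg": 5},
    "confused":  {"eye_light": "slow_amber",  "head_tilt_deg": 15},
    "error":     {"eye_light": "pulsing_red", "head_tilt_deg": 0},
    "idle":      {"eye_light": "dim_white",   "head_tilt_deg": 0},
}

def display_for(state):
    """Return the display cue for an internal state, defaulting to idle."""
    return ROBOT_EXPRESSIONS.get(state, ROBOT_EXPRESSIONS["idle"])
```

Because the cues are stylized rather than mimicking a human face, nothing here pretends to be a real smile; it only needs to be consistent enough for people to learn.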
$endgroup$
answered Feb 13 at 3:19 by Cyn
$begingroup$
Would you believe that around half of all humans are worse than average at recognising facial expressions!
$endgroup$
– Separatrix
Feb 13 at 8:21
$begingroup$
@Separatrix They're a lot better than the average for a chimpanzee too :P
$endgroup$
– Luaan
Feb 13 at 9:24
$begingroup$
Who needs facial expressions for robots when you can have other features to provide the same function - e.g. in the Culture novels, drones would shade their fields in colours to represent emotional or social subtleties not provided for by language. Alternatives could be an emoji screen or hologram floating above the robot.
$endgroup$
– gbjbaanb
Feb 13 at 15:36
$begingroup$
This is an excellent answer -- the last paragraph might benefit from a note on how well humans do with abstractions of a human face. Even infants will stare at a picture that looks like this: :-| longer than other combinations of the same symbols. We understand what :) and :( mean easily, and the Russians even drop the eyeballs, conveying the same emotions just with ) and (. The fake expressions on robots don't need to be a perfect mimicry of a human face, because humans are excellent at abstraction.
$endgroup$
– Blue Caboose
Feb 13 at 16:28
$begingroup$
@Cyn Exactly! The robots don't even need faces for us to personify them -- think of how people talk about their Roombas, as if they were charming pets who get "distressed" when they get stuck. I know I'm prone to gently patting the dashboard of my car when it starts beeping because it thinks I'm about to do something stupid, as if it were a jumpy horse in need of comforting, and not a big chunk of machinery. Anthropomorphizing is just what humans do.
$endgroup$
– Blue Caboose
Feb 13 at 17:00
$begingroup$
Humans are some of the most advanced pattern-recognition and decision-making machines that have ever existed. So capable, in fact, that they have theories of matter (physics/chemistry/geology/...) and theories of what matters (psychology/religion/philosophy/...).
That it takes nearly 30 years to train most specimens to be expert recognizers in a given field of endeavor should underline the difficulty of actually grouping various states of the universe into manageable and actionable patterns.
In your world, robots act as autonomous embodied entities. They are capable of communicating at speed with other similar entities using a range of network technologies. This allows them to communicate fairly clearly about a range of topics, but they too hit limitations:
- Radio waves have limited bandwidth, and everyone can hear them
- Lasers require line of sight, and dust can cause errors
- Network cables are high bandwidth but tether the robot to a specific vicinity
This places an upper limit on the ability of these entities to communicate effectively. Also there are entities out there that are not robots. It might pay to be able to communicate with those too.
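Those trade-offs can be sketched as a simple channel-selection rule. The bandwidth figures and channel properties below are made-up illustrations of the constraints listed above, not real specifications.

```python
# Illustrative sketch: pick a robot-to-robot channel given the
# constraints above (radio is public, lasers need line of sight,
# cables tether the robot). All numbers are invented.
CHANNELS = {
    # name: (bandwidth_mbps, needs_line_of_sight, tethered, private)
    "radio": (10,    False, False, False),
    "laser": (1000,  True,  False, True),
    "cable": (10000, False, True,  True),
}

def pick_channel(have_line_of_sight, can_tether, must_be_private):
    """Return the highest-bandwidth feasible channel, or None."""
    feasible = []
    for name, (bw, needs_los, tethered, private) in CHANNELS.items():
        if needs_los and not have_line_of_sight:
            continue
        if tethered and not can_tether:
            continue
        if must_be_private and not private:
            continue
        feasible.append((bw, name))
    return max(feasible)[1] if feasible else None
```

When no channel is feasible, the robot is in exactly the situation described here: it has to fall back on signals that other entities, robot or otherwise, can read directly.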
If the robot were the size of a butterfly, aside from having limited mind/communication space, it's likely that birds would treat it as food. The robot probably has the ability to shock the bird, but it might pay to communicate to the bird, before it tries, that eating the robot is a bad idea. Being a bright iridescent red/yellow is the general approach used by poisonous caterpillars, and birds generally respect that. After all, the robot might actually be running low on power when the bird shows up.
Scale this up and there are a wide variety of organisms running around that it might pay better to communicate with than to ignore. Particularly those two-legged simians: there are a few of them, they are pretty inventive, and they are quite happy to do boring repetitive tasks. Perhaps, if the communication happened well enough, they might be put to use inventing and dealing with irritations?
Even without all of these other organisms hanging around, the simple fact that these technologies have bandwidth limitations is a problem - communication is insanely important if you desire to reduce the frequency and severity of problems encountered while existing. There are a few more technological channels that would be useful, but it is relatively simple to adopt proven modes of communication:
- colouration (I am poisonous, I don't care if you recognise me / I'm not good news)
- hearing (locate movement and position, as well as conceptual information)
- vocalisation/stridulation (location as well as conceptual information: ready to fight, ready to mate, ready to serve)
- dance/movement displays (are you willing to invest energy into demonstrating capability? should I press the point to the next level?)
- faces
  - eyes (what are you interested in / looking at?)
  - eyebrows (are you exploring/frustrated/contented?)
  - teeth (are you indicating dominance/subservience? will you challenge me?)
- posture/stance (are you ready? what are you ready to do?)
Obviously no two robots need to have the full or even the same range of communication methods. Many creatures even in the same species have varied capacities. An obvious example is red/green colour blindness in humans.
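A sketch of that last point: each robot carries some subset of the channels listed above and signals the same internal state over whichever ones it has. The state names and cues below are invented for illustration.

```python
# Illustrative sketch: the same internal state rendered on whatever
# subset of signaling channels a particular robot is built with.
SIGNALS = {
    "low_power": {
        "colouration": "dim amber shell",
        "vocalisation": "slow descending tone",
        "posture": "slumped",
        "eyes": "half-closed shutters",
    },
    "ready_to_serve": {
        "colouration": "steady green shell",
        "vocalisation": "short rising chirp",
        "posture": "upright",
        "eyes": "open, tracking the speaker",
    },
}

def broadcast(state, available_channels):
    """Emit cues for a state on every channel this robot actually has."""
    cues = SIGNALS[state]
    return {ch: cues[ch] for ch in available_channels if ch in cues}
```

A faceless robot with only colouration and posture still gets its state across: losing one channel degrades the message rather than destroying it, much as red/green colour blindness does for humans.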
So do robots need faces? No.
Would robots significantly benefit from having faces? Yes.
$endgroup$
$begingroup$
Humans are some of the most advanced pattern recognition and actionary machines that have ever existed. So capable in fact that they have theories of matter (physics/chemistry/geology/...) and theories of what matters (psychology/religion/philosophy/...).
Given that it takes nearly 30 years to train most specimens to be expert recognizers in a given field of endeavor, should underline the difficulty of actually grouping various states of the universe into manageable and actionable patterns.
In your world robots act as autonomous embodied entities. They are capable of communicating at speed with other similar entities using a range of network technologies. This allows them to communicate fairly clearly about a range of topics but they too hit limitations.
- Radiowaves have limited bandwidth and everyone can hear them
- lasers require line of sight, and dust can cause errors
- network cables are high bandwidth but tether the robot to a specific vicinity
This places an upper limit on the ability of these entities to communicate effectively. Also there are entities out there that are not robots. It might pay to be able to communicate with those too.
If the robot was the size of a butterfly, aside from having limited mind/communication space, its likely that birds will treat it as food. The robot probably has the ability to shock the bird, but it might pay to communicate to the bird before it tries that eating the robot is a bad idea. Being a bright iridescent red/yellow is the general approach used by poisonous caterpillars, and generally birds respect that. After all the robot might actually be running low on power when the bird shows up.
Scale this up and there are a wide variety of organisms running around where it might pay better to communicate with than ignore. Particularly those two-legged simians, there are a few of them, they are pretty inventive, and quite happy to do boring repetitive tasks. Perhaps if the communication happened well enough they might be put to use inventing, and dealing with irritations?
Even without all of these other organisms hanging around, the simple fact that these technologies have bandwidth limitations is a problem - Communication is insanely important if you desire to reduce the frequency and severity of problems encountered while existing. There are a few more technological channels that would be useful, but it is relatively simple to adopt proven modes of communication:
- colouration (i am poisonous, i don't care if you recognise me/I'm not good news)
- hearing (locate movement, location, as well as conceptual information)
- vocalisation/stridulation (location as well as conceptual information, ready to fight, ready to mate, ready to serve)
- dance/movement displays (are you willing to invest energy into demonstrating capability, should I press the point to the next level?)
- faces
- eyes (what are you interested in/looking at?)
- eyebrows (are you exploring/frustrated/contented?)
- teeth (are you indicating dominance/subservience, will you challenge me?)
- posture/stance (are you ready? what are you ready to do?)
Obviously no two robots need to have the full or even the same range of communication methods. Many creatures even in the same species have varied capacities. An obvious example is red/green colour blindness in humans.
So do robots need faces? No.
Would robots significantly benefit from having faces? Yes.
$endgroup$
2
$begingroup$
"it might pay to communicate to the bird before it tries that eating the robot is a bad idea." - and not only for robots the size of butterflies. I once saw a large gull take out a drone, which ended up in the sea at the bottom of at 400ft cliff. It was hard to tell if the gull was annoyed at not getting lunch, but it seemed a bit confused that unlike a dead bird in the water, a dead drone doesn't float so it's hard to eat it ;)
$endgroup$
– alephzero
Feb 13 at 21:00
1
$begingroup$
@alephzero that must have been an interesting sight.
$endgroup$
– Kain0_0
Feb 13 at 22:49
add a comment |
$begingroup$
Humans are some of the most advanced pattern recognition and actionary machines that have ever existed. So capable in fact that they have theories of matter (physics/chemistry/geology/...) and theories of what matters (psychology/religion/philosophy/...).
Given that it takes nearly 30 years to train most specimens to be expert recognizers in a given field of endeavor, should underline the difficulty of actually grouping various states of the universe into manageable and actionable patterns.
In your world robots act as autonomous embodied entities. They are capable of communicating at speed with other similar entities using a range of network technologies. This allows them to communicate fairly clearly about a range of topics but they too hit limitations.
- Radiowaves have limited bandwidth and everyone can hear them
- lasers require line of sight, and dust can cause errors
- network cables are high bandwidth but tether the robot to a specific vicinity
This places an upper limit on the ability of these entities to communicate effectively. Also there are entities out there that are not robots. It might pay to be able to communicate with those too.
If the robot was the size of a butterfly, aside from having limited mind/communication space, its likely that birds will treat it as food. The robot probably has the ability to shock the bird, but it might pay to communicate to the bird before it tries that eating the robot is a bad idea. Being a bright iridescent red/yellow is the general approach used by poisonous caterpillars, and generally birds respect that. After all the robot might actually be running low on power when the bird shows up.
Scale this up and there are a wide variety of organisms running around where it might pay better to communicate with than ignore. Particularly those two-legged simians, there are a few of them, they are pretty inventive, and quite happy to do boring repetitive tasks. Perhaps if the communication happened well enough they might be put to use inventing, and dealing with irritations?
Even without all of these other organisms hanging around, the simple fact that these technologies have bandwidth limitations is a problem - Communication is insanely important if you desire to reduce the frequency and severity of problems encountered while existing. There are a few more technological channels that would be useful, but it is relatively simple to adopt proven modes of communication:
- colouration (i am poisonous, i don't care if you recognise me/I'm not good news)
- hearing (locate movement, location, as well as conceptual information)
- vocalisation/stridulation (location as well as conceptual information, ready to fight, ready to mate, ready to serve)
- dance/movement displays (are you willing to invest energy into demonstrating capability, should I press the point to the next level?)
- faces
- eyes (what are you interested in/looking at?)
- eyebrows (are you exploring/frustrated/contented?)
- teeth (are you indicating dominance/subservience, will you challenge me?)
- posture/stance (are you ready? what are you ready to do?)
Obviously no two robots need to have the full or even the same range of communication methods. Many creatures even in the same species have varied capacities. An obvious example is red/green colour blindness in humans.
So do robots need faces? No.
Would robots significantly benefit from having faces? Yes.
$endgroup$
Humans are some of the most advanced pattern recognition and actionary machines that have ever existed. So capable in fact that they have theories of matter (physics/chemistry/geology/...) and theories of what matters (psychology/religion/philosophy/...).
Given that it takes nearly 30 years to train most specimens to be expert recognizers in a given field of endeavor, should underline the difficulty of actually grouping various states of the universe into manageable and actionable patterns.
In your world robots act as autonomous embodied entities. They are capable of communicating at speed with other similar entities using a range of network technologies. This allows them to communicate fairly clearly about a range of topics but they too hit limitations.
- Radiowaves have limited bandwidth and everyone can hear them
- lasers require line of sight, and dust can cause errors
- network cables are high bandwidth but tether the robot to a specific vicinity
This places an upper limit on the ability of these entities to communicate effectively. Also there are entities out there that are not robots. It might pay to be able to communicate with those too.
If the robot was the size of a butterfly, aside from having limited mind/communication space, its likely that birds will treat it as food. The robot probably has the ability to shock the bird, but it might pay to communicate to the bird before it tries that eating the robot is a bad idea. Being a bright iridescent red/yellow is the general approach used by poisonous caterpillars, and generally birds respect that. After all the robot might actually be running low on power when the bird shows up.
Scale this up and there are a wide variety of organisms running around where it might pay better to communicate with than ignore. Particularly those two-legged simians, there are a few of them, they are pretty inventive, and quite happy to do boring repetitive tasks. Perhaps if the communication happened well enough they might be put to use inventing, and dealing with irritations?
Even without all of these other organisms hanging around, the simple fact that these technologies have bandwidth limitations is a problem - Communication is insanely important if you desire to reduce the frequency and severity of problems encountered while existing. There are a few more technological channels that would be useful, but it is relatively simple to adopt proven modes of communication:
- colouration (i am poisonous, i don't care if you recognise me/I'm not good news)
- hearing (locate movement, location, as well as conceptual information)
- vocalisation/stridulation (location as well as conceptual information, ready to fight, ready to mate, ready to serve)
- dance/movement displays (are you willing to invest energy into demonstrating capability, should I press the point to the next level?)
- faces
- eyes (what are you interested in/looking at?)
- eyebrows (are you exploring/frustrated/contented?)
- teeth (are you indicating dominance/subservience, will you challenge me?)
- posture/stance (are you ready? what are you ready to do?)
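The trade-offs behind that list can be made concrete with a toy sketch. This is purely illustrative — the channel names, bandwidth figures, and `best_channel` function are invented for this example, not any real robotics API — but it shows the basic logic: prefer the highest-bandwidth channel the environment permits, and fall back to slower, proven signalling modes when the technological ones are ruled out.

```python
# Illustrative sketch only: a robot choosing a signalling channel under the
# environmental constraints discussed above. All names and numbers invented.

# Each channel: (name, usable when noisy, needs line of sight, relative bandwidth)
CHANNELS = [
    ("radio",        True,  False, 1_000_000),
    ("laser",        True,  True,  10_000_000),
    ("vocalisation", False, False, 100),
    ("display/face", True,  True,  10),  # colouration, posture, expressions
]

def best_channel(noisy: bool, line_of_sight: bool) -> str:
    """Pick the highest-bandwidth channel usable in the current environment."""
    usable = [
        (bandwidth, name)
        for name, ok_noisy, needs_los, bandwidth in CHANNELS
        if (ok_noisy or not noisy) and (line_of_sight or not needs_los)
    ]
    if not usable:
        return "none"
    # Tuples compare by first element, so max() picks the highest bandwidth.
    return max(usable)[1]

# A loud factory floor with clear sight lines: the laser wins on bandwidth.
print(best_channel(noisy=True, line_of_sight=True))   # laser
# Loud and no sight line: radio is the only option left.
print(best_channel(noisy=True, line_of_sight=False))  # radio
```

The point of the sketch is that the "biological" modes at the bottom of the table are low-bandwidth but survive in environments where the high-bandwidth technological channels fail, which is exactly the argument for keeping them.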
Obviously, no two robots need to have the full, or even the same, range of communication methods. Many creatures, even within the same species, have varied capacities; an obvious example is red/green colour blindness in humans.
So do robots need faces? No.
Would robots significantly benefit from having faces? Yes.
edited Feb 14 at 17:08 by Community♦
answered Feb 13 at 4:00 by Kain0_0
$begingroup$
"it might pay to communicate to the bird before it tries that eating the robot is a bad idea." – and not only for robots the size of butterflies. I once saw a large gull take out a drone, which ended up in the sea at the bottom of a 400 ft cliff. It was hard to tell whether the gull was annoyed at not getting lunch, but it seemed a bit confused that, unlike a dead bird in the water, a dead drone doesn't float, so it's hard to eat it ;)
$endgroup$
– alephzero
Feb 13 at 21:00
$begingroup$
@alephzero that must have been an interesting sight.
$endgroup$
– Kain0_0
Feb 13 at 22:49
$begingroup$
You say that humans, in general, are poor at recognizing facial expressions. As a person with both autism and a brain injury, I beg to differ. I am poor at both recognizing and using facial expressions (I even have a doctor's note!), and the differences are observable to an outsider.
A few years back, before the brain injury and before we knew I was autistic, my family used to play with the Society for Creative Anachronism (SCA). For those not familiar with the SCA, think of them as adults playing dressup in medieval garb, who get together to gossip, drink beer, and beat on each other with big sticks. For the Brits out there, think of stereotypical rugby players dressing up as King Arthur's knights, then playing rugby with swords (that event was called "Blood of Heroes", by the way, emulating a very bad movie of the same name).
Naturally, there is a lot of banter in such a group, and a lot of good-natured roughhousing. They can find a week's worth of innuendo in the word innuendo. Shortly after we moved on to other hobbies, my wife told me that I used to scare them because, on the rare occasions I did join the banter, they couldn't tell whether I was serious or joking.
Post brain injury, it is much worse. Recently, when I was being introduced to a new business partner, they were trying to be friendly and informally ask me about myself. Unfortunately, they asked, "What gets you out of bed in the morning?" A metaphorical question about emotional state, loaded with social cues, it effectively and awkwardly put an end to the conversation, because it hit exactly the places where I physically lack the machinery to process efficiently.
My wife has learned to be very careful with her use of sarcasm and metaphors around me because, without the social cues that go with them, I tend to take them literally. "Why did you (some action)?" "You told me to." "I didn't mean for you to really do that!" "Sorry."
A robot mimicking humanoid characteristics to the point they were indistinguishable from humans, without emotional/social emulation of some sort, would tend to make people feel uneasy and awkward, or as my son would say "it would creep them out".
Worse than not mimicking emotions would be mimicking them badly. A smile whose timing is off by a quarter second is perceived as deceptive, for example. In contrast, obviously different emotional responses that are specific to the species (robots) can be learned by humans. I used to keep crustaceans in my aquarium, and it is amazing how expressive crabs and crayfish can be without flexible faces.
For a good idea of the issues you might run into, watch the first season of Ninjago and the team's interactions with Zane, before anyone knows he is a robot.
$endgroup$
edited Feb 13 at 17:18
answered Feb 13 at 16:27 by pojo-guy
$begingroup$
Humans teach other humans in formal positions to use facial expressions when communicating. When someone is representing an organization or group, the message they are delivering more often than not has no connection to their personal opinion. As a result, we teach humans in these positions to use a facial expression that communicates the "emotive" part of the message the organization/group wants to get across.
Well, if we think that's appropriate for humans, we certainly will do it for robots if we can.
They definitely don't need them for talking to another robot
I suspect they could be used as part of robot-robot interaction as well. There are environments where it would be too loud to use audio and too problematic to use, e.g., radio-based communications, and all you may have is the visual. It might be better to implement a custom set of robot-robot expressions for some comms.
plus it will undoubtedly be awkward for us too, I suppose, because we all know they are fake!
I know the smile on the salesperson's face is fake too, but it had better be there or there's no sale! We expect facial and body language to match words, and we treat inconsistencies as suspicious. That's how humans work, and it's why good communicators spend a lot of time learning to make it all look natural, even when it isn't.
The other side of this is that humans need facial expressions to match what they expect. A robot which has a hardwired smile or scowl would give a very disconcerting feeling to a human in the wrong context.
"I hope you enjoyed your stay with us ?" is going to feel very creepy to a human coming from a robot with a scowl or sneer (or something that could be read that way by a human) than from something with a pattern that's more easily interpreted as friendly.
So the human-like expressions would be useful to humans, avoiding emotional dissonance that would put us off.
We even treat animals like this: we learn to interpret their expressions and body language and "map" human meaning onto them. It's an important part of how we interpret interactions with the world. We extend this even to inanimate objects: signs, logos, advertising.
So facial expressions we can read easily would be there to make humans comfortable and make communications more effective.
$endgroup$
edited Feb 13 at 7:17
answered Feb 13 at 4:47 by StephenG
$begingroup$
You raise a very good point.
In many demonstrations, it has been established that robot faces that display really accurate human emotions freak people out. It is very disconcerting to have an inanimate but artificially intelligent object mimic what is the most basic human attribute: emotion. Rather than making us trust the robot more, it actually leads to greater mistrust and discomfort.
Personally, I would regard a robot smiling at me with exactly the same trepidation as that of a used car salesperson giving me the 'trust me' smile, or of the clown with that unsettling painted-on grin.
It was much easier to trust the answers from Data of Star Trek fame because they were delivered by a very impartial, non-emotional robotic characterization.
In the movie 'I, Robot', the human connection with Sonny was much more believable BECAUSE he did not display artificial emotions. In fact, Sonny's face was so neutral that the human observer could superimpose their OWN image of emotional expression, which somehow was more appealing than having Sonny display the emotion. That is, we saw what we believed to be there, not what the artificial intelligence wanted us to see.
Add to this the fact that these robots will have a hard time being accepted in the first place, so there would already be an element of mistrust, and any false impression one got from misreading an artificial expression that is dissonant with the message or intent would lead to even greater mistrust and mental tension from cognitive dissonance.
The research into voice responses from our cars, devices, and other voice response technology supports this notion. They all have a bland, neutral tone for the same reasons. People don't WANT a bubbly, upbeat, almost laughing Siri. They don't want any kind of tone or inflection that indicates or that could be interpreted as indicating judgement or acceptance or agreement/disagreement. They want 'just the facts, ma'am' with just enough inflection to make it sound natural.
My bet would be to go with robots having completely neutral expressions, with just the right design to let the human observer project their own expected emotion onto the face.
$endgroup$
answered Feb 13 at 19:03
Justin Thyme
8,48911042
add a comment |
$begingroup$
Even if we're terrible at interpreting facial expressions, their absence would be extremely disconcerting. Therefore, in order to avoid being unsettling, you'd want the robots to mimic human behaviour, just like we'd do for speech patterns, avoiding monotone, etc.
$endgroup$
answered Feb 13 at 10:17
Orangesandlemons
1214
$begingroup$
Had the exact same idea. Even if people don't fully appreciate the nuance that goes into a robot's facial expression, it is still important for them to have it to avoid falling into the Uncanny Valley
$endgroup$
– D.Spetz
Feb 13 at 16:00
$begingroup$
As mentioned, according to Uncanny Valley research, it's just as good to be not human-like at all (like modern industrial robots) or to be as human-like as possible; it's only bad if you try to make it human-like but miss even a tiny human feature. In that case, it's easier not to try to mimic humans at all and instead create something completely different that won't cause anxiety among people.
$endgroup$
– user28434
Feb 14 at 11:15
add a comment |
$begingroup$
Look for "mentalists" and you will find lots of info about them! They are talented to start with (compared to the average as someone said) and they are sufficiently trained to successfully trick you to think that they can read your mind. They too learn your facial expressions, tone of speech and think ahead of you in choosing a random number. Uri Geller, a famous example is just a first-year student in comparison. Look for two Israeli mentalists like Nimrod Har'el and Hezi Dean:
http://www.nimrodharel.co.il
http://www.hezidean.co.il/ENGLISH/
I think mentalists would have failed if they did not study body language.
Those trained persons would be valuable to AI programmers in training a robot to read your body language. Nobody mentipned that the robots may be sufficiently trained to search for suspects in a crowd.
$endgroup$
Look for "mentalists" and you will find lots of info about them! They are talented to start with (compared to the average as someone said) and they are sufficiently trained to successfully trick you to think that they can read your mind. They too learn your facial expressions, tone of speech and think ahead of you in choosing a random number. Uri Geller, a famous example is just a first-year student in comparison. Look for two Israeli mentalists like Nimrod Har'el and Hezi Dean:
http://www.nimrodharel.co.il
http://www.hezidean.co.il/ENGLISH/
I think mentalists would have failed if they did not study body language.
Those trained persons would be valuable to AI programmers in training a robot to read your body language. Nobody mentipned that the robots may be sufficiently trained to search for suspects in a crowd.
answered Feb 13 at 17:42
Christmas Snow
2,682314
add a comment |
$begingroup$
Why bother creating AI when the biological machine is sufficient?
Expressions, one could say, are also a cognitive function of self-identity and self-awareness, the very thing AI programmers pore over volumes of topics to unravel. The idea of a fully functioning brain requires all the parts and mechanisms that go along with it. Facial expressions and gestures, for example, involve thousands of receptors telling our millions of neurons what we are doing with our individual muscles. Something as simple as cracking a smile follows a very deep set of instructions, if you think about it.
$endgroup$
answered Feb 13 at 13:20
Venjik
1
1
$begingroup$
Welcome to Worldbuilding, Venjik! If you have a moment, please take the tour and visit the help center to learn more about the site. You may also find Worldbuilding Meta and The Sandbox useful. Here is a meta post on the culture and style of Worldbuilding.SE, just to help you understand our scope and methods, and how we do things here. Have fun!
$endgroup$
– Gryphon
Feb 13 at 13:36
add a comment |
Thanks for contributing an answer to Worldbuilding Stack Exchange!
- Please be sure to answer the question. Provide details and share your research!
But avoid …
- Asking for help, clarification, or responding to other answers.
- Making statements based on opinion; back them up with references or personal experience.
Use MathJax to format equations. MathJax reference.
To learn more, see our tips on writing great answers.
$begingroup$
Comments are not for extended discussion; this conversation has been moved to chat.
$endgroup$
– L.Dutch♦
Feb 14 at 15:28