Anthropomorphic User Interface
Anthropomorphism, according to the Collins Concise English Dictionary, is the projection of human traits and characteristics onto inanimate or non-human objects, such as cars or our pets. Anthropomorphic computer interfaces are not a new idea, at least not in the entertainment industry: for decades, films have depicted complex artificial computer intelligences. The main reason for employing anthropomorphism is to create an emotionally responsive relationship between computer interfaces and their users.
Even though this would seem to be of great value in creating more interactive user interfaces, the question it raises is whether it is in the user's best interest to project human traits onto computer interfaces. The argument here is not new; it simply brings together a few notable sources from philosophy, rhetoric and interaction design. Ben Shneiderman questions the need for computers to "talk" just like people. He argues that people have a primitive, childish drive to anthropomorphize computers, and that giving the appearance of intelligence and autonomy can deceive, confuse and mislead users. Users may be led to believe that computers can think and understand them without realizing the actual capabilities of their machines, which leads to disappointment. Even with these reservations, Shneiderman is certainly right about a couple of things: anthropomorphism is both primitive and pervasive among users. However, both of these traits can be highly advantageous when designing an artificially intelligent user interface.
On pervasiveness, consider the perspective of Daniel Dennett, a contemporary philosopher, on how anthropomorphism is used in coordination with artificial intelligence. For example, understanding how mammals perceive various things can be used to design better traps for them. The same strategy is employed in chess-playing computers: the computer does not take the user's knight because it realizes there is a line of ensuing play that would lead to losing its rook, which must not happen.
Even so, Shneiderman shows great concern that applying the anthropomorphic strategy to artificially intelligent computers is not always successful. The actual problem, however, has little to do with anthropomorphism itself. People are not frustrated with their machines because they fail to be as human as expected, but because they fail to do what users thought they would do. Take the example of a user who purchases an artificially intelligent car but has to repeat voice commands several times before being understood, or is forced to park it manually, to his or her disappointment after advertisements led consumers to believe the car could do absolutely anything. Such outcomes frustrate users. This does not mean that artificial intelligence has failed, for there is still a lot of hope for better anthropomorphic computer developments.
This means that even though there is a limitation of design (not hardware), it can be solved over time as artificial intelligence technologies evolve. Take the example of biometric security technologies that recognize a voice in order to grant users access. Suppose an authorized user comes home with a sore throat and needs to enter his or her biometrically secured home: entry will be denied because the voice signatures do not match. This would not be the case if a real human were in charge. Computers do not show sympathy or emotion; they work only as they are programmed, in contrast to a human being, who is free to judge what is right in such a case. Such shortcomings do not mean that artificial intelligence is insignificant, but that more effort needs to be put into it in order to make its relationship with users more sophisticated and responsive. Even so, there has been a lot of progress towards creating more emotionally responsive computer systems, especially in the field of robotics.
In conclusion, it is important that users and developers not lock themselves into anti-anthropomorphism. Even though Shneiderman explains that there are weaknesses in giving full autonomy to artificially intelligent machines, many advantages would also follow, as long as developers find progressive ways to make such machines more responsive to human emotions, which is what most of them currently lack.
References

Morgan, J. (2013). Anthropomorphism on Trial. Retrieved from http://usabilityetc.com/articles/anthropomorphism-on-trial/

Randy, H. & Paula, L. (2011). (Anti-)Anthropomorphism and Interface Design.