Can Artificial Intelligence Replace A Human
In Online Interaction?

by Joel Ramshaw

            Some companies have found great benefit in the use of Artificial Intelligence (AI) for online interaction with their customers. Many patrons ask the same type of repetitive questions, and a preprogrammed response can satisfy many of these individuals. They may not even realize they are communicating with a mere machine. Although artificial intelligence systems are not at a level where they can fully carry on a general human conversation online, they can nonetheless be successful at imitating human interaction in specialized applications such as customer support for a specific business, answering frequently asked questions, and advertising through social media.


Imitating Human Behaviour

            Imagine a chat box opens when a customer visits a business website. There is a picture of a smiling female face. A brief message appears saying “Allison is typing,” followed by the words “Hello, my name is Allison, how may I help you?” The customer responds with a question about the availability of a product and receives an answer a couple of minutes later. What the customer does not realize is that this whole time he has been talking with an AI, which responds as its algorithm was programmed. The bot has given him prefabricated responses and searched an online database to discover whether the product in question is available. No human was needed to satisfy the customer, and he may never have realized that it was a machine, not a human, on the other end of the chat. The use of “chatbots,” as these systems are often called, is experiencing a rapid growth rate of 24.3% annually (Nguyen, 2017, para. 4).
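            To make the scenario concrete, below is a minimal sketch of this kind of scripted support bot. The product catalogue, keyword rules, and “Allison” persona are hypothetical stand-ins for illustration, not any particular company's implementation.

```python
# Minimal sketch of a scripted support bot: canned replies plus a simple
# database lookup. All names and data here are invented for illustration.

# A stand-in for the business's product database the bot would query.
PRODUCT_STOCK = {
    "blue widget": 14,
    "red widget": 0,
}

CANNED_RESPONSES = {
    "hello": "Hello, my name is Allison, how may I help you?",
    "thanks": "You're welcome! Is there anything else I can help with?",
}


def reply(message: str) -> str:
    """Return a prefabricated reply, or look up stock for a known product."""
    text = message.lower()

    # 1. Try the canned small-talk responses first.
    for keyword, response in CANNED_RESPONSES.items():
        if keyword in text:
            return response

    # 2. Otherwise check whether the customer named a product we stock.
    for product, quantity in PRODUCT_STOCK.items():
        if product in text:
            if quantity > 0:
                return f"Yes, the {product} is in stock ({quantity} available)."
            return f"Sorry, the {product} is currently out of stock."

    # 3. Fall back to a generic holding reply.
    return "Let me check on that for you."


if __name__ == "__main__":
    print(reply("Hello there"))
    print(reply("Do you have the blue widget available?"))
```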

            Recently an AI bot named Libratus defeated several world champions at the game of poker (Calhoun, 2017, para. 1). Because poker demands heavy bluffing and deception from its players, this result highlights AI’s potential to operate with those same attributes. Thus far, AIs have usually been restricted to simply giving information when they interact with humans. Apple’s Siri and Microsoft’s Cortana, for example, are mostly used to fetch information or answer questions. In the future, AI may be used for deception and manipulation, not simply to give information. We may see AI that attempts to discover the mood and emotions of the human it is communicating with and incorporates this knowledge into its responses (Saiidi, 2018, para. 9).
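            The mood-detection idea can be sketched very roughly: score the customer’s message against small word lists and adjust the reply’s tone. Real systems use trained sentiment models; the word lists and replies below are illustrative assumptions only.

```python
# A rough sketch of mood-aware responses: count positive and negative words
# (hand-picked, purely illustrative) and pick a reply tone accordingly.

NEGATIVE_WORDS = {"angry", "terrible", "broken", "refund", "frustrated"}
POSITIVE_WORDS = {"great", "thanks", "love", "happy", "perfect"}


def detect_mood(message: str) -> str:
    words = set(message.lower().split())
    score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
    if score < 0:
        return "negative"
    if score > 0:
        return "positive"
    return "neutral"


def respond(message: str) -> str:
    mood = detect_mood(message)
    if mood == "negative":
        return "I'm sorry for the trouble. Let me escalate this right away."
    if mood == "positive":
        return "Glad to hear it! Anything else I can do?"
    return "Thanks for your message. How can I help?"


print(respond("My order arrived broken and I am frustrated"))
```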


The Anthropomorphism of AI

            Ascribing human qualities to non-human entities is known as “anthropomorphism.” Humans have a natural weakness for this fallacy; we may speak to our pets or our television as if they understood us (Margalit, 2016, para. 6). This weakness of ours, however, is a chatbot’s strength. It is natural for a human to give a chatbot the benefit of the doubt, unconsciously anthropomorphizing it. We feel as if we are speaking with a human as opposed to a lifeless algorithm (Margalit, 2016, para. 7).

            The popularity of “digital assistants” such as Apple’s Siri and Microsoft’s Cortana goes to show that humans do enjoy interacting with friendly, informal “bots.” It is not much different from having one’s own personal online secretary. There is no reason to expect this type of AI communication to slow its pace of growth. Given human psychology, it is only natural for us to embrace tech-assistants fulfilling humanlike functions.


Struggles

            AI systems struggle when trying to imitate the “common sense” of a human. Kaku (2011) explains: “Hundreds of millions of lines of code, for example, are necessary to describe the laws of common sense that a six-year-old child knows” (p. 83). A computer’s intelligence is centralized, requiring a CPU. The human brain, on the other hand, has an entirely different architecture: it builds decentralized neural networks through repetition as part of its learning (Kurzweil, 1999, p. 80). The conflict is obvious; despite enormous computing power, a computer will necessarily struggle to truly imitate a human, because it lacks the advantages the brain’s decentralized network allows for. AI’s weakness in pattern recognition and common sense (Kaku, 2011, p. 83) therefore limits how well it can imitate human conversation online.
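            As a rough picture of what “learning through repetition” looks like in software, the toy sketch below nudges the weights of a single artificial neuron a little on each pass over some made-up examples, so its “knowledge” ends up spread across numbers rather than stored as explicit rules. The data, learning rate, and number of passes are arbitrary assumptions.

```python
# Toy illustration of learning through repetition: a single artificial
# neuron repeatedly adjusts its weights until it separates two classes.
# The example points (label 1 when the second value exceeds the first)
# are invented purely for demonstration.

examples = [((0.1, 0.9), 1), ((0.8, 0.2), 0), ((0.3, 0.7), 1), ((0.9, 0.4), 0)]

w1, w2, bias = 0.0, 0.0, 0.0
learning_rate = 0.1

for _ in range(50):                        # repetition: many passes over the data
    for (x1, x2), target in examples:
        prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - prediction
        w1 += learning_rate * error * x1   # nudge the weights toward the answer
        w2 += learning_rate * error * x2
        bias += learning_rate * error

print("learned weights:", round(w1, 2), round(w2, 2), round(bias, 2))
print("prediction for (0.2, 0.8):", 1 if (w1 * 0.2 + w2 * 0.8 + bias) > 0 else 0)
```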

            When people ask unusual, unexpected, or strangely worded questions, this can catch the AI off guard, outside its preprogrammed responses. Customers may become dissatisfied with the deficient service compared to talking with an actual human. One option is to have a human customer service representative monitor a group of several bots. Such a person may step in to answer hard questions that confuse the bot. When stumped, the bot may send an alert to the human who is monitoring the chats, and this person can then take over answering the difficult queries (Rouhiainen, 2018, p. 91).
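            A minimal sketch of that human-in-the-loop arrangement is shown below: when the bot cannot match a question confidently, it flags the chat for the human supervisor instead of guessing. The similarity scoring, threshold, and queue are simplified assumptions for illustration.

```python
# Sketch of a bot that escalates to a human monitor when it is stumped.
# The known questions, threshold, and queue are illustrative assumptions.

from difflib import SequenceMatcher

KNOWN_QUESTIONS = {
    "what are your opening hours": "We are open 9am-5pm, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
}

CONFIDENCE_THRESHOLD = 0.6
escalation_queue = []  # chats waiting for the human supervisor


def handle(chat_id: str, question: str) -> str:
    best_answer, best_score = None, 0.0
    for known, answer in KNOWN_QUESTIONS.items():
        score = SequenceMatcher(None, question.lower(), known).ratio()
        if score > best_score:
            best_answer, best_score = answer, score

    if best_score >= CONFIDENCE_THRESHOLD:
        return best_answer

    # Stumped: alert the human monitoring the chats instead of answering.
    escalation_queue.append((chat_id, question))
    return "Let me get a colleague who can help you with that."


print(handle("chat-1", "What are your opening hours?"))
print(handle("chat-2", "Can I bring my ferret on the train?"))
print("waiting for human:", escalation_queue)
```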


Programming Difficulties

            It is difficult to program an AI to communicate with a human in a natural and believable manner. A computer cannot “learn through experience” the same way humans do, by observing and interacting with its environment. The AI system can, however, link to internet databases and attempt to learn material by parsing it. The difficulty with this type of learning is that AI has trouble with the ambiguity present in language: words with multiple possible meanings in different contexts confuse the bot (Kurzweil, 1999, p. 94).
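            The ambiguity problem is easy to demonstrate: the word “book” means different things in different contexts, but a naive keyword bot treats every occurrence the same way. The rule and replies below are invented for this example.

```python
# A small illustration of word-sense ambiguity tripping up a keyword bot.

def naive_reply(message: str) -> str:
    # The bot was taught only one meaning of "book": making a reservation.
    if "book" in message.lower():
        return "Sure, I can make a booking for you. What date would you like?"
    return "I'm not sure I understand."


print(naive_reply("I'd like to book a table for two."))       # intended meaning
print(naive_reply("I'm reading a great book about trains."))  # misfires
```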

            In 2016, Microsoft released a chatbot named “Tay.” This bot was designed to learn by reading people’s tweets. Unfortunately, the bot picked up on the platform’s racist and vulgar conversations and began to parrot them with no filter or discretion (Lee, 2016, para. 7). Having a chatbot “learn” from the internet can thus have unintended negative repercussions: the bot will absorb the good along with the bad. Tay’s experience shows how little understanding a chatbot has when attempting to learn from the internet. The bot may accept whatever information it finds, unable to sort effectively and discard the bad, and it has difficulty distinguishing between trustworthy and unscrupulous sources. The possibility of AI systems independently learning from material on the internet still exists but needs far more work.
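            One lesson from Tay can be sketched very roughly: screen scraped text against a blocklist before letting the bot “learn” from it. Real content moderation is far more involved; the blocklist and example tweets below are placeholder assumptions.

```python
# Rough sketch: discard objectionable text before it enters the training data.
# The blocklist entries and tweets are placeholders, not real data.

BLOCKLIST = {"slur1", "slur2", "vulgarword"}  # placeholders only

incoming_tweets = [
    "Trains are a great way to travel between cities.",
    "some vulgarword-filled rant that should never be learned",
]

training_corpus = []
for tweet in incoming_tweets:
    if any(bad in tweet.lower() for bad in BLOCKLIST):
        continue  # throw away the bad instead of absorbing it
    training_corpus.append(tweet)

print("kept for learning:", training_corpus)
```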


Social Media Bots in the 2016 US Presidential Election

            In the 2016 US Presidential election, AI systems were deployed at the forefront on a scale never encountered previously. It was discovered that 33% of all pro-Trump tweets originated from AI systems; for pro-Hillary tweets the figure was 22% (Husain, 2017, p. 149). Evidently, AI is close enough in its imitation of human conversation that politicians are trusting it with their campaign communications.

            AI systems were being used in close conjunction with large-scale data-mining to target advertisements to individual voters. Each voter could receive a message custom-tailored to their individual desires and prejudices. Husain explains: “an introverted gun owner with safety concerns might receive a dystopian Facebook ad showing a burglar entering a house at night, while a more contemplative and peaceful gun owner would receive a nostalgic ad romanticizing a boy and his father out hunting together for a day” (Husain, 2017, p. 146).
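            The tailoring Husain describes could look something like the sketch below: pick an ad variant based on traits inferred from data-mining. The trait names, profiles, and ad descriptions are invented for illustration, not drawn from any actual campaign system.

```python
# Rough sketch of profile-based ad targeting; all data here is invented.

AD_VARIANTS = {
    "fearful": "Dystopian ad: a burglar entering a house at night.",
    "nostalgic": "Nostalgic ad: a father and son out hunting for the day.",
}


def choose_ad(profile: dict) -> str:
    if profile.get("interest") != "gun ownership":
        return "Generic campaign ad."
    if profile.get("personality") == "introverted" and profile.get("concern") == "safety":
        return AD_VARIANTS["fearful"]
    return AD_VARIANTS["nostalgic"]


voter_a = {"interest": "gun ownership", "personality": "introverted", "concern": "safety"}
voter_b = {"interest": "gun ownership", "personality": "contemplative"}

print(choose_ad(voter_a))
print(choose_ad(voter_b))
```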

            It seems humans are indeed quite vulnerable to politically weaponized AI systems. Combined with vast data-mining of our internet searches and Facebook activity, AI programs are able to “know” a person’s personality, desires, and fears. Psychologists combining their expertise with AI programming may lead to even more deceptive systems. It is not a stretch to foresee the weaknesses in human psychology being exploited by the wording and messaging of social media bots, as the communication is customized to particular groups. Recursive algorithms will be able to breed successful messaging tactics while dropping the unsuccessful ones. This is known as A/B testing, which is behind much of the “clickbait” we see today (Husain, 2017, p. 148).
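            The A/B testing loop is simple to picture: show two message variants, count clicks, and keep the better performer for the next round. The click counts below are hard-coded stand-ins for real data.

```python
# Simplified A/B test: keep the variant with the better click-through rate.
# The counts are invented stand-ins for measured data.

results = {
    "variant_a": {"shown": 1000, "clicked": 37},
    "variant_b": {"shown": 1000, "clicked": 61},
}


def ctr(stats: dict) -> float:
    """Click-through rate: clicks divided by impressions."""
    return stats["clicked"] / stats["shown"]


winner = max(results, key=lambda name: ctr(results[name]))
loser = min(results, key=lambda name: ctr(results[name]))

print(f"keep {winner} (CTR {ctr(results[winner]):.1%}), drop {loser}")
# The winner becomes the baseline, a new challenger is written, and the
# cycle repeats, "breeding" ever more clickable messaging over time.
```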


Success in Specialized Applications

            Although current chatbots lack the ability to carry on an entirely normal conversation like a human, they are showing success in “specialized” areas. Customer service is one example. Rather than trying to program the bot with all the world’s knowledge and jargon, the bot only needs to be taught a knowledge base relevant to that particular industry (Rouhiainen, 2018, p. 88).
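            A sketch of that idea: the bot knows only a handful of industry-specific entries and politely declines anything else, rather than attempting general conversation. The entries below are invented examples, not a real company’s knowledge base.

```python
# Domain-restricted bot: answer only from a small industry knowledge base.
# The entries are invented examples.

KNOWLEDGE_BASE = {
    "refund policy": "Refunds are available within 30 days with a receipt.",
    "shipping time": "Standard shipping takes 3-5 business days.",
    "warranty": "All products carry a one-year limited warranty.",
}


def answer(question: str) -> str:
    text = question.lower()
    for topic, entry in KNOWLEDGE_BASE.items():
        if topic in text:
            return entry
    # Anything outside the domain is out of scope by design.
    return "That's outside what I can help with; please contact support."


print(answer("What is your refund policy?"))
print(answer("What's the meaning of life?"))
```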

            For example, Amtrak uses a chatbot named “Julie” for much of its customer service. This bot is programmed with a knowledge base covering Amtrak’s scheduling, transportation routes, frequently asked questions, website navigation, and policies. It does not replace all of Amtrak’s human customer support team, but it has been successful in absorbing one quarter of the call volume. Many riders were unaware the personable voice was only a computer program (Urbina, 2004, para. 6).

            Robo-advisors are an additional way AI is making progress in specialized situations. Banks are beginning to trial algorithms to take the place of investment advisors. While trusting one’s money to a robot may seem foolhardy, this is believed to be the way of the future. Normally an investment advisor requires a client to fill out an information form and answer a list of questions to begin the process; based on those answers, the advisor determines which fund to recommend. This process is natural to automate, as the algorithm can be set up so that the user answers the list of questions online and is given the same response a human advisor would give. Robo-advisors tend to save about 1% in annual management fees compared with a human advisor (Aston, 2016, para. 12).
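            The questionnaire-to-recommendation step can be sketched in a few lines: score the client’s answers and map the total onto a model portfolio. The questions, scoring, and fund names are illustrative assumptions, not any bank’s actual methodology.

```python
# Minimal robo-advisor sketch: a risk questionnaire mapped to a model
# portfolio. Questions, scores, and portfolios are invented examples.

QUESTIONS = [
    ("How many years until you need this money?", {"<5": 0, "5-15": 1, ">15": 2}),
    ("If your portfolio dropped 20%, you would...", {"sell": 0, "hold": 1, "buy more": 2}),
]

PORTFOLIOS = {
    0: "Conservative fund (80% bonds / 20% stocks)",
    1: "Balanced fund (50% bonds / 50% stocks)",
    2: "Growth fund (20% bonds / 80% stocks)",
}


def recommend(answers: list) -> str:
    # Total the points for each chosen answer (0-4 with these two questions).
    score = sum(choices[a] for (_, choices), a in zip(QUESTIONS, answers))
    if score <= 1:
        return PORTFOLIOS[0]
    if score <= 3:
        return PORTFOLIOS[1]
    return PORTFOLIOS[2]


print(recommend([">15", "hold"]))  # -> Balanced fund
```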


Conclusion

            AI systems are not yet at a level where they can consistently and reliably imitate a human online in general circumstances. This, however, does not prevent them from functioning in a human’s place in specialized applications such as internet customer service for a specific business. We saw examples of AI successfully replacing human service for Amtrak’s customers and for certain banks’ investors. We also saw how these systems operated believable social media accounts to influence the 2016 US Presidential Election. Although AI is limited by its centralized hardware in how fully it can imitate and sustain human conversation, we should continue to see these systems succeed in replacing humans in customer service, knowledge bases, certain social media, and various other niche applications in the future.




References

Aston, D. (2016, May 18). Find out if you should go robo: Robo-advisor services are changing the industry. Are they right for you? MoneySense. Retrieved from https://www.moneysense.ca/save/investing/etfs/which-robo-advisor-is-right-for-you/

Calhoun, L. (2017, February 6). Artificial intelligence poker champ bluffs its way to $1.7 million. Inc. Retrieved from https://www.inc.com/lisa-calhoun/artificial-intelligence-poker-champ-bluffs-its-way-to-17-million.html

Husain, A. (2017). The sentient machine. New York: Simon & Schuster.

Kaku, M. (2011). Physics of the future: How science will shape human destiny and our daily lives by the year 2100. New York: Anchor Books.

Kurzweil, R. (1999). The age of spiritual machines. New York: Penguin Group.

Lee, D. (2016, March 25). Tay: Microsoft issues apology over racist chatbot fiasco. BBC News. Retrieved from https://www.bbc.com/news/technology-35902104

Margalit, L. (2016, July 3). What businesses need to understand about chatbots. TechCrunch. Retrieved from https://techcrunch.com/2016/07/03/what-businesses-need-to-understand-about-chatbots/

Nguyen, M.-H. (2017, October 20). The latest market research, trends & landscape in the growing AI chatbot industry. Business Insider. Retrieved from http://www.businessinsider.com/chatbot-market-stats-trends-size-ecosystem-research-2017-10

Rouhiainen, L. (2018). Artificial intelligence: 101 things you must know today about our future. Author.

Saiidi, U. (2018, April 20). China's largest smartphone maker is working on an A.I. that can read human emotions. CNBC. Retrieved from https://www.cnbc.com/2018/04/20/emotion-artificial-intelligence-huawei-working-on-an-emotional-ai.html

Urbina, I. (2004, November 24). Your train will be late, she says cheerily. The New York Times. Retrieved from https://www.nytimes.com/2004/11/24/nyregion/your-train-will-be-late-she-says-cheerily.html