Grammatical slips and stiff phrasing used to give bots away, but modern bots have largely overcome that. The more reliable giveaway today is the limited scope of expertise a bot is programmed with: a bot, no matter how advanced, typically talks about one subject at a time, so steering the conversation off-topic will usually expose it.
We all know that bots are taking over the Internet, especially in the area of online customer support. However, few people know that the opposite industry is also booming: many people are being paid to act as bots! In a world facing a rising fear of “robots replacing humans”, some tech start-ups that boast about their artificial intelligence have discovered that, on a smaller scale, humans are a cheaper, easier and better alternative to building a bot that can do the task.
Sometimes there is no AI bot managing human queries. The AI is simply a mockup powered behind the scenes by humans—in pursuit of the “fake it till you make it” approach to win over investors or customers. Other times, a software bot is combined with actual human employees—who intervene if the bot fails to reach a resolution or simply can’t perform a given task. This approach is called pseudo AI or hybrid AI.
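To make this concrete, here is a minimal sketch in Python of how such a hybrid setup might route queries. The confidence threshold, function names and canned replies are all invented for illustration, not any company's actual implementation:

```python
# Hypothetical sketch of a "hybrid AI" router: the bot answers
# high-confidence queries itself; everything else is quietly
# escalated to a human agent.
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    confidence: float  # 0.0 to 1.0, as reported by the bot's model

def bot_answer(query: str) -> BotReply:
    # Stand-in for a real NLU model; returns canned replies here.
    if "hours" in query.lower():
        return BotReply("We are open 9am to 9pm daily.", 0.92)
    return BotReply("Sorry, I did not understand that.", 0.20)

def human_answer(query: str) -> str:
    # Stand-in for forwarding the query to a human agent's queue.
    return f"[human agent] Let me look into: {query!r}"

CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff for escalation

def handle_query(query: str) -> str:
    reply = bot_answer(query)
    if reply.confidence >= CONFIDENCE_THRESHOLD:
        return reply.text          # the bot handles it
    return human_answer(query)     # the bot failed, a human steps in

print(handle_query("What are your hours?"))
print(handle_query("Can you cancel my order from last Tuesday?"))
```

From the customer's side, both branches look like one seamless "AI" conversation.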
Blurring The Line Between Bot And Human
Many tech companies intentionally try to blur the line between bots and humans. For example, consider the case of Cloudsight.ai, which markets its image recognition system to “leverage the best of human and machine intelligence”. Cloudsight's management has openly acknowledged that trickier images (those the software cannot reliably process) are sent to human employees, and that this human-software collaboration makes Cloudsight's technology even smarter. Moreover, thanks to an inbuilt delay of several seconds, it is hard to tell whether a given photo was labeled by the software or by a human.
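The delay trick itself is simple to picture in code. The sketch below is only an assumption of how such masking could work, not Cloudsight's actual system: every reply is padded to the same minimum latency, so a near-instant machine label and a slower human label arrive after the same wait:

```python
import random
import time
from typing import Optional

MIN_RESPONSE_SECONDS = 5.0  # assumed fixed delay that masks who answered

def machine_label(image_id: str) -> Optional[str]:
    # Stand-in for the recognition model; None means "too tricky, escalate".
    return None if image_id.endswith("tricky") else "a golden retriever"

def human_label(image_id: str) -> str:
    time.sleep(random.uniform(1.0, 3.0))  # simulate a human worker's response time
    return "a dog wearing sunglasses"

def label_image(image_id: str) -> str:
    start = time.monotonic()
    label = machine_label(image_id) or human_label(image_id)
    # Pad every reply to the same minimum latency, so fast machine
    # answers and slower human answers look identical to the user.
    remaining = MIN_RESPONSE_SECONDS - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)
    return label

print(label_image("photo_001"))         # answered by the model
print(label_image("photo_002_tricky"))  # answered by a "human"
```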
Many companies see humans masquerading as AI bots as just a temporary bridge until bots become more competent; others are embracing hybrid AI as a customer support method that combines AI's scalability with human competence. Some advertise these as “hybrid AI bots”, and if they work according to plan, it will become nearly impossible to tell whether you are dealing with a bot or a human.
So, how can you tell whether the agent you're chatting with is a bot pretending to be human, or a human masquerading as an intelligent bot?
Evolution Of AI Bots
A few years back, when online bots were nascent, the easiest way to tell if the person on the other side was human was by looking for grammatical imperfections and language-related nuances. Remember, bots of the past were coded with template dialogues, delivered whenever a specific condition was triggered. Bot speech was conspicuously stiff and formal. In early Turing tests, spelling mistakes were an easy indication that the speaker was indeed human. Things have changed, however, with the rapid progress in machine learning and artificial intelligence in recent years. Many bots powered by advanced AI are no longer confined to programmer-defined rules; they also learn from their interactions and from the data sets those interactions generate. These data sets are loaded with casual speech, regional dialects and other nuances that make bots sound far more human.
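A toy version of such an old-style template bot might look like this; the triggers and canned responses are invented for illustration:

```python
# Toy rule-based bot in the style of early template-driven agents:
# a keyword trigger fires a fixed, formally worded response.
TEMPLATES = {
    "refund":   "Your refund request has been registered. A representative will contact you shortly.",
    "shipping": "Your order will be dispatched within 3-5 business days.",
    "hello":    "Greetings. How may I assist you today?",
}

def reply(message: str) -> str:
    text = message.lower()
    for trigger, template in TEMPLATES.items():
        if trigger in text:  # condition triggered -> canned dialogue
            return template
    return "I am sorry, I did not understand your request."  # stiff fallback

print(reply("hello there"))
print(reply("Where's my shipping info?"))
print(reply("lol u r funny"))  # off-script input exposes the bot
```

The stilted, one-size-fits-all phrasing is exactly what made these bots so easy to spot.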
Increasingly, programmers, especially in the data science domain, are deliberately adding these quirks to help bots pass as human. Last year, Google's Duplex AI made headlines for its ability to convincingly imitate nuances of human speech, such as “ums” and “uhs”, making the bot sound eerily human.
Telltale Signs Of A Bot
Although sophisticated algorithms can now imitate some nuances of human speech, they still have a long way to go before they are as proficient with language as humans. Fortunately, there are certain telltale signs that a given text is bot-generated.
The biggest sign lies in the limited scope of expertise that bots are programmed with. A bot, no matter how advanced, typically talks about one subject at a time. To test this, I chatted with a customer service agent of a grocery store on Facebook Messenger, ostensibly to sort out my grocery requirements, but really to infer whether the agent was a human or a bot. The interaction went like this:
Grocery Store: Hello! I’m here to assist you in finding the correct ingredients and recipe.
Me: Tell me a recipe for guacamole?
Grocery Store: [replied with a recipe for guacamole]
Me: Can I use green peas to make guacamole?
Grocery Store: [replied with a recipe for green pea guacamole]
Me: Do you have a recipe that uses avocado? Not guacamole, please.
Grocery Store: [replied with a recipe for avocado salsa with cilantro and olives]
So far, the conversation had gone fine, and the agent did not identify itself explicitly as a bot. Although it had aptly handled queries related to recipes and ingredients, I now tried to deviate from the core topic to determine if it was a bot or a human:
Me: Can you share a technique to confirm if avocado is ripe?
Grocery Store: [replied with a recipe for edamame guacamole]
Me: Have you watched Star Wars?
Grocery Store: [replied with a recipe for sautéed shrimp with polenta and manchego]
So, it was clearly a bot, as it gave completely haywire responses to anything that was not related to food or the grocery store.
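There is a plausible mechanical explanation for this haywire behavior: many such bots simply match the user's message against their catalog and return the closest item, with no “I don't know” branch. The toy matcher below uses crude word overlap and invented recipe data, rather than any real bot's logic, but it reproduces the failure: every input, even “Star Wars”, gets mapped to its least-dissimilar recipe:

```python
# Toy "nearest recipe" matcher: it scores every recipe by word overlap
# with the user's message and ALWAYS returns the best match, because
# there is no "I don't know" branch -- hence the haywire replies.
RECIPES = {
    "guacamole": "avocado lime onion cilantro salt",
    "green pea guacamole": "green peas lime onion cilantro",
    "avocado salsa": "avocado tomato cilantro olives",
    "sauteed shrimp with polenta": "shrimp polenta manchego garlic",
}

def best_recipe(message: str) -> str:
    words = set(message.lower().strip("?!. ").split())
    def overlap(name: str) -> int:
        recipe_words = set((name + " " + RECIPES[name]).split())
        return len(words & recipe_words)
    # max() always picks *something*, even when every score is zero.
    return max(RECIPES, key=overlap)

print(best_recipe("Can I use green peas to make guacamole?"))  # sensible match
print(best_recipe("Have you watched Star Wars?"))              # nonsense match
```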
For an AI algorithm to emulate a human successfully, it needs to be specialized. Popular algorithms used in journalism, like Wordsmith and Heliograf, can smartly read data tables and turn them into documents or news feed articles. These algorithms are popular and successful because the task is basically formulaic: reading data from spreadsheets and converting it into relevant stock phrases or sentences. For example, by looking at the data, such an algorithm could write: “Barcelona defeated Real Madrid in a close game on Saturday, 1-0.”
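The formula is essentially fill-in-the-blanks. Here is a minimal sketch of that idea, with invented field names and phrasing rules rather than Wordsmith's or Heliograf's actual code, that would produce the sentence above from one row of match data:

```python
# Minimal data-to-text sketch: pick a stock phrase based on the
# score margin, then fill it in from a structured match record.
match = {  # one "spreadsheet row" of structured data (invented)
    "winner": "Barcelona", "loser": "Real Madrid",
    "winner_goals": 1, "loser_goals": 0, "day": "Saturday",
}

def describe_margin(diff: int) -> str:
    if diff <= 1:
        return "a close game"
    if diff <= 3:
        return "a comfortable win"
    return "a rout"

diff = match["winner_goals"] - match["loser_goals"]
sentence = (f"{match['winner']} defeated {match['loser']} in "
            f"{describe_margin(diff)} on {match['day']}, "
            f"{match['winner_goals']}-{match['loser_goals']}.")
print(sentence)  # Barcelona defeated Real Madrid in a close game on Saturday, 1-0.
```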
It’s not that scintillating, but it does a decent enough job of summarizing the game. However, even an AI algorithm as smart as Heliograf fails when it encounters information that doesn't fit neatly into its prescribed tables. Did a cat run onto the field in the middle of the game? Did the goalkeeper make an unbelievable stop that kept the game alive? AI algorithms can only report what fits into the spreadsheets or similar table-like structures they draw from.
Also, bots tend to have a poor memory. Beyond standard formulaic text, algorithms have a hard time churning out stories that make sense. Characters get mixed up, plots contort, and conversations turn repetitive, because the algorithm struggles to keep track of everything going on in a coherent manner. That's when you realize you've encountered yet another poor bot that has failed at being human!
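As a final illustration, here is a toy stateless bot; the replies are invented for this example, but the core flaw is real: it stores no conversation history, so it cannot refer back to anything you said a moment ago:

```python
# A stateless bot keeps no conversation history, so it cannot refer
# back to anything the user said in an earlier message.
def stateless_reply(message: str) -> str:
    text = message.lower()
    if "name" in text:
        return "I'm your shopping assistant!"  # same canned line every time
    return "How can I help you with your groceries?"

print(stateless_reply("My name is Priya."))  # -> "I'm your shopping assistant!"
print(stateless_reply("What's my name?"))    # -> same reply; "Priya" was never stored
```

Ask it anything that depends on the previous turn, and the illusion collapses.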