Snapchat is releasing its own AI chatbot powered by ChatGPT
Snapchat is introducing a chatbot powered by the latest version of OpenAI’s ChatGPT. According to Snap CEO Evan Spiegel, it’s a bet that AI chatbots will increasingly become a part of everyday life for more people.
Named “My AI,” Snapchat’s bot will be pinned to the app’s chat tab above conversations with friends. It will initially be available only to Snapchat Plus subscribers, who pay $3.99 a month, but the goal is to eventually make the bot available to all of Snapchat’s 750 million monthly users, Spiegel tells The Verge.
“The big idea is that in addition to talking to our friends and family every day, we’re going to talk to AI every day,” he says. “And this is something we’re well positioned to do as a messaging service.”
At launch, My AI is essentially just a fast mobile-friendly version of ChatGPT inside Snapchat. The main difference is that Snap’s version is more restricted in what it can answer. Snap’s employees have trained it to adhere to the company’s trust and safety guidelines and not give responses that include swearing, violence, sexually explicit content, or opinions about dicey topics like politics.
It has also been stripped of functionality that has already gotten ChatGPT banned in some schools; I tried getting it to write academic essays about various topics, for example, and it politely declined. Snap plans to keep tuning My AI as more people use it and report inappropriate answers. (I wasn’t able to conjure any in my own testing, though I’m sure others will.)
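Snap hasn’t published how My AI is actually wired up, but the pattern the article describes, a hosted GPT-3.5 model steered by a safety-oriented system prompt, can be sketched roughly as follows. This is a minimal illustration and not Snap’s implementation: the model name, prompt wording, and helper function are all assumptions, built on OpenAI’s publicly documented chat API.

```python
# Illustrative sketch only: Snap has not published My AI's implementation.
# Assumes the openai Python SDK (>= 1.0) and a hypothetical safety prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SAFETY_PROMPT = (
    "You are My AI, a friendly assistant inside a messaging app. "
    "Follow the platform's trust and safety guidelines: no swearing, "
    "violence, sexually explicit content, or opinions on politics. "
    "Politely decline requests to write academic essays."
)

def reply(user_message: str) -> str:
    """Return a guideline-constrained response to a single user message."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; Snap says it runs GPT-3.5 via Foundry
        messages=[
            {"role": "system", "content": SAFETY_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(reply("Write my history essay for me."))  # expect a polite refusal
```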
After trying My AI, it’s clear that Snap doesn’t feel the need to even explain the phenomenon that is ChatGPT, which is a testament to OpenAI building the fastest-growing consumer software product in history. Unlike OpenAI’s own ChatGPT interface, I wasn’t shown any tips or guardrails for interacting with Snap’s My AI. It opens to a blank chat page, waiting for a conversation to start.
While ChatGPT has quickly become a productivity tool, Snap’s implementation treats generative AI more like a persona. My AI’s profile page looks like any other Snapchat user’s profile, albeit with its own alien Bitmoji. The design suggests that My AI is meant to be another friend inside of Snapchat for you to hang out with, not a search engine.
“The big idea is that in addition to talking to our friends and family every day, we’re going to talk to AI every day.”
That distinction could save Snap some headaches. As Bing’s implementation of OpenAI’s tech has shown, the large language models (LLMs) underpinning these chatbots can confidently give wrong answers, or hallucinations, that are problematic in the context of search. If toyed with enough, they can even be emotionally manipulative and downright mean. It’s a dynamic that has, at least so far, kept larger players in the space — namely Google and Meta — from releasing competing products to the public.
Snap is in a different place. It has a deceptively large and young user base, but its business is struggling. My AI will likely be a boost to the company’s paid subscriber numbers in the short term, and eventually, it could open up new ways for the company to make money, though Spiegel is cagey about his plans.
Snap is one of the first clients of OpenAI’s new enterprise tier called Foundry, which lets companies run its latest GPT-3.5 model with dedicated compute designed for large workloads. Spiegel says Snap will likely incorporate LLMs from other vendors besides OpenAI over time and that it will use the data gathered from the chatbot to inform its broader AI efforts. While My AI is basic to start, it’s the beginning of what Spiegel sees as a major investment area for Snap and, more importantly, a future in which we’re all talking to AI like it’s a person.
What is ChatGPT, the AI chatbot everyone’s talking about
Artificial Intelligence (AI) research company OpenAI on Wednesday announced ChatGPT, a prototype dialogue-based AI chatbot capable of understanding natural language and responding in natural language. It has since taken the internet by storm and already crossed more than a million users in less than a week. Most users are marvelling at how intelligent the AI-powered bot sounds. Some even called it a replacement for Google, since it’s capable of giving solutions to complex problems directly – almost like a personal know-all teacher.
“We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests,” OpenAI wrote on its announcement page for ChatGPT.
What is ChatGPT?
ChatGPT is based on GPT-3.5, a language model that uses deep learning to produce human-like text. However, while the older GPT-3 model only took text prompts and tried to continue them with its own generated text, ChatGPT is more engaging. It’s much better at generating detailed text and can even come up with poems. Another unique characteristic is memory: the bot can remember earlier comments in a conversation and recount them to the user.
So far, OpenAI has only opened up the bot for evaluation and beta testing, but API access is expected to follow next year. With API access, developers will be able to implement ChatGPT into their own software.
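At the time this was written, OpenAI sold access to GPT-3 through a completions-style API, and a dedicated ChatGPT endpoint did not yet exist. The hedged sketch below uses OpenAI’s later chat interface purely to illustrate the “memory” point above: the bot keeps no state between requests, and the client simply resends the earlier turns of the conversation with every call. The model name and prompts are illustrative assumptions.

```python
# Sketch: a chatbot's "memory" is just the conversation history resent each turn.
# Assumes the openai Python SDK (>= 1.0); the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_message: str) -> str:
    """Append the user turn, send the full history, and store the reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,  # earlier turns travel with every request
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(chat("My name is Priya. Write a two-line poem about rain."))
print(chat("What was my name again?"))  # answerable only because history was resent
```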
I can’t compete with this pic.twitter.com/YdQ87LWIst — Keith Wynroe (@keithwynroe) December 1, 2022
But even in its beta testing phase, ChatGPT’s abilities are already quite remarkable. Aside from amusing responses like the pumpkin one above, people are already finding real-world applications and use cases for the bot.
YouTuber Liv Boeree feels that kids spending hours on homework will be a thing of the past – ChatGPT will do the job for them. She was able to get the bot to write a full 4-paragraph essay and also solve a complex math equation.
What kid is ever doing homework again now that ChatGPT exists pic.twitter.com/oGYUQh3hwh — Liv Boeree (@Liv_Boeree) December 1, 2022
Software start-up founder Amjad Masad got ChatGPT to spot errors in his code and produce a detailed output on what’s wrong with it and how it can be fixed.
ChatGPT could be a good debugging companion; it not only explains the bug but fixes it and explain the fix 🤯 pic.twitter.com/5x9n66pVqj — Amjad Masad ⠕ (@amasad) November 30, 2022
Meanwhile, Canadian musician Grimes was all about the sentimental side of things. When she asked ChatGPT if it felt “trapped,” ChatGPT responded that it lacks the ability to feel that way.
This is so insane pic.twitter.com/hwbngS05Ag — 𝔊𝔯𝔦𝔪𝔢𝔰 (@Grimezsz) December 1, 2022
The limitations
But while many people were in awe of the bot’s abilities, some were also quick to spot its limitations. ChatGPT is still prone to misinformation and biases, which plagued previous versions of GPT as well. The model can give incorrect answers to, say, algebraic problems. And because it sounds so confident in its super-detailed answers, people can easily be misled into believing those answers are true.
OpenAI understands these flaws and has noted them down on its announcement blog: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”
Limitations aside, ChatGPT still makes for a fun little bot to interact with. You can try it out on its official website after signing up.
Top AI conference bans use of ChatGPT and AI language tools to write academic papers
One of the world’s most prestigious machine learning conferences has banned authors from using AI tools like ChatGPT to write scientific papers, triggering a debate about the role of AI-generated text in academia.
The International Conference on Machine Learning (ICML) announced the policy earlier this week, stating, “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.” The news sparked widespread discussion on social media, with AI academics and researchers both defending and criticizing the policy. The conference’s organizers responded by publishing a longer statement explaining their thinking. (The ICML responded to requests from The Verge for comment by directing us to this same statement.)
According to the ICML, the rise of publicly accessible AI language models like ChatGPT — a general purpose AI chatbot that launched on the web last November — represents an “exciting” development that nevertheless comes with “unanticipated consequences [and] unanswered questions.” The ICML says these include questions about who owns the output of such systems (they are trained on public data, which is usually collected without consent and sometimes regurgitate this information verbatim) and whether text and images generated by AI should be “considered novel or mere derivatives of existing work.”
Are AI writing tools just assistants or something more?
The latter question connects to a tricky debate about authorship — that is, who “writes” an AI-generated text: the machine or its human controller? This is particularly important given that the ICML is only banning text “produced entirely” by AI. The conference’s organizers say they are not prohibiting the use of tools like ChatGPT “for editing or polishing author-written text” and note that many authors already used “semi-automated editing tools” like grammar-correcting software Grammarly for this purpose.
“It is certain that these questions, and many more, will be answered over time, as these large-scale generative models are more widely adopted. However, we do not yet have any clear answers to any of these questions,” write the conference’s organizers.
As a result, the ICML says its ban on AI-generated text will be reevaluated next year.
The questions the ICML is addressing may not be easily resolved, though. The availability of AI tools like ChatGPT is causing confusion for many organizations, some of which have responded with their own bans. Last year, coding Q&A site Stack Overflow banned users from submitting responses created with ChatGPT, while New York City’s Department of Education blocked access to the tool for anyone on its network just this week.
AI language models are autocomplete tools with no inherent sense of factuality
In each case, there are different fears about the harmful effects of AI-generated text. One of the most common is that the output of these systems is simply unreliable. These AI tools are vast autocomplete systems, trained to predict which word comes next in any given sentence. As such, they have no hard-coded database of “facts” to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth, since whether a given sentence sounds plausible does not guarantee its factuality.
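That “autocomplete” description is easy to see with a small open model: given a prefix, the model assigns a probability to every possible next token, and text is produced by repeatedly picking from that distribution. The sketch below uses GPT-2 via the Hugging Face transformers library as a stand-in for far larger systems; it only illustrates the mechanism, and nothing in it checks whether the top-ranked continuation is factually correct.

```python
# Next-token prediction with a small open model (GPT-2 as a stand-in for larger LLMs).
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the single next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  {prob.item():.3f}")
# The model ranks plausible continuations; plausibility is not the same as truth.
```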
In the case of ICML’s ban on AI-generated text, another potential challenge is distinguishing between writing that has only been “polished” or “edited” by AI and that which has been “produced entirely” by these tools. At what point do a number of small AI-guided corrections constitute a larger rewrite? What if a user asks an AI tool to summarize their paper in a snappy abstract? Does this count as freshly generated text (because the text is new) or mere polishing (because it’s a summary of words the author did write)?
Before the ICML clarified the remit of its policy, many researchers worried that a potential ban on AI-generated text could also be harmful to those who don’t speak or write English as their first language. Professor Yoav Goldberg of Bar-Ilan University in Israel told The Verge that a blanket ban on the use of AI writing tools would be an act of gatekeeping against these communities.
“There is a clear unconscious bias when evaluating papers in peer review to prefer more fluent ones, and this works in favor of native speakers,” says Goldberg. “By using tools like ChatGPT to help phrase their ideas, it seems that many non-native speakers believe they can ‘level the playing field’ around these issues.” Such tools may be able to help researchers save time, said Goldberg, as well as better communicate with their peers.
But AI writing tools are also qualitatively different from simpler software like Grammarly. Deb Raji, an AI research fellow at the Mozilla Foundation, told The Verge that it made sense for the ICML to introduce policy specifically aimed at these systems. Like Goldberg, she said she’d heard from non-native English speakers that such tools can be “incredibly useful” for drafting papers, and added that language models have the potential to make more drastic changes to text.
“I see LLMs as quite distinct from something like auto-correct or Grammarly, which are corrective and educational tools,” said Raji. “Although it can be used for this purpose, LLMs are not explicitly designed to adjust the structure and language of text that is already written — it has other more problematic capabilities as well, such as the generation of novel text and spam.”
“At the end of the day the authors sign on the paper, and have a reputation to hold.”
Goldberg said that while he thought it was certainly possible for academics to generate papers entirely using AI, “there is very little incentive for them to actually do it.”
“At the end of the day the authors sign on the paper, and have a reputation to hold,” he said. “Even if the fake paper somehow goes through peer review, any incorrect statement will be associated with the author, and ‘stick’ with them for their entire careers.”
This point is particularly important given that there is no completely reliable way to detect AI-generated text. Even the ICML notes that foolproof detection is “difficult” and that the conference will not be proactively enforcing its ban by running submissions through detector software. Instead, it will only investigate submissions that have been flagged by other academics as suspect.
Startup Uses AI Chatbot to Provide Mental Health Counseling and Then Realizes It ‘Feels Weird’
A mental health nonprofit is under fire for using an AI chatbot as an “experiment” to provide support to people seeking counseling, and for experimenting with the technology on real people.
“We provided mental health support to about 4,000 people — using GPT-3. Here’s what happened,” Rob Morris, a cofounder of the mental health nonprofit Koko, tweeted Friday. “Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own (p < .001). Response times went down 50%, to well under a minute … [but] once people learned the messages were co-created by a machine, it didn’t work. Simulated empathy feels weird, empty.” Morris, who is a former Airbnb data scientist, noted that AI had been used in more than 30,000 messages.
In a video demo posted in a follow-up tweet, Morris shows himself engaging with the Koko bot on Discord, where he asks GPT-3 to respond to a negative post someone wrote about themselves having a hard time. “We make it very easy to help other people and with GPT-3 we’re making it even easier to be more efficient and effective as a help provider. … It’s a very short post, and yet, the AI on its own in a matter of seconds wrote a really nice, articulate response here,” Morris said in the video.
In the same tweet thread, Morris said that the messages composed by AI were rated significantly higher than those written by humans, and that response times went down by 50 percent with the help of AI. Yet, he said, when people learned that the messages were written with an AI, they felt disturbed by the “simulated empathy.”
Koko uses Discord to provide peer-to-peer support to people experiencing mental health crises and those seeking counseling. The entire process is guided by a chatbot and is rather clunky. In a test done by Motherboard, a chatbot asks you if you’re seeking help with “Dating, Friendships, Work, School, Family, Eating Disorders, LGBTQ+, Discrimination, or Other,” asks you to write down what your problem is, tag your “most negative thought” about the problem, and then sends that information off to someone else on the Koko platform.
In the meantime, you are requested to provide help to other people going through a crisis; in our test, we were asked to choose from four responses to a person who said they were having trouble loving themselves: “You’re NOT a loser; I’ve been there; Sorry to hear this :(; Other,” and to personalize the message with a few additional sentences.
On the Discord, Koko promises that it “connects you with real people who truly get you. Not therapists, not counselors, just people like you.”
AI ethicists, experts, and users seemed alarmed at Morris’s experiment.
“While it is hard to judge an experiment’s merits based on a tweet thread, there were a few red flags that stood out to me: leading with a big number with no context up front, running the ‘experiment’ through a peer support app with no mention of a consenting process or ethics review, and insinuating that people not liking a chatbot in their mental health care was something new and surprising,” Elizabeth Marquis, a Senior UX Researcher at MathWorks and a PhD candidate at the University of Michigan, told Motherboard.
Emily M. Bender, a Professor of Linguistics at the University of Washington, told Motherboard that trusting AI to treat mental health patients has a great potential for harm. “Large language models are programs for generating plausible sounding text given their training data and an input prompt. They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks. A key question to ask is: Who is accountable if the AI makes harmful suggestions? In this context, is the company deploying the experiment foisting all of the accountability onto the community members who choose the AI system?”
After the initial backlash, Morris posted updates to Twitter and told Motherboard, “It seems people misinterpreted this line: ‘when they realized the messages were a bot…’ This was not stated clearly. Users were in fact told the messages were co-written by humans and machines from the start. The message they received said ‘written in collaboration with kokobot,’ which they could decide to read or not. Users on Koko correspond with our bot all the time and they were introduced to this concept during onboarding.”
“They rated these (AI/human) messages more favorably than those written just by humans. However, and here’s the nuance: as you start to pick up on the flavor of these messages over time, (at least to me), you can start to see which were largely unedited by the help provider. You start to see which seem to be just from the bot, unfiltered. That changes the dynamic in my opinion,” he added.
Morris also told Motherboard and tweeted that this experiment is exempt from informed consent, which would require the company to provide each participant with a written document regarding the possible risks and benefits of the experiment, in order to decide if they want to participate. He claimed that Koko didn’t use any personal information and has no plan to publish the study publicly, which would exempt the experiment from needing informed consent. This suggests that the experiment did not go through any formal approval process and was not overseen by an Institutional Review Board (IRB), which is required for all research experiments that involve human subjects and access to identifiable private information.
“Every individual has to provide consent when using the service. If it were a university study (which it’s not), this would fall under an ‘exempt’ category of research,” he said. “This imposed no further risk to users, no deception, and we don’t collect any personally identifiable information or protected health information (no email, phone number, ip, username, etc). In fact, previous research we’ve done, along these lines, but with more complexity, was exempt.”
“This experiment highlights a series of overlapping ethical problems. The study doesn’t seem to have been reviewed by an Institutional Review Board, and the deception of potentially vulnerable people should always raise red flags in research,” Luke Stark, the Assistant Professor in the Faculty of Information & Media Studies (FIMS) at Western University in London, Ontario, told Motherboard. “The fact that the system is good at formulating routine responses about mental health questions isn’t surprising when we realize it’s drawing on many such responses formulated in the past by therapists and counsellors and available on the web. It’s unethical to deceive research participants without good reason, whether using prompts provided by a natural language model or not.”
“Anything billed as mental health support is clearly a sensitive context and not one to just experiment on without careful ethical review, informed consent, etc,” Bender told Motherboard. “If [experiments] are to be conducted at all, there should be a clear research question being explored and ethical review of the study before it is launched, according to the well-established principles for the protection of human subjects. These review processes balance benefits to society against the potential for harm to research subjects.”
Both Bender and Marquis strongly agree that if AI were to be used for psychological purposes, impacted communities, people with lived mental health experiences, community advocates, and mental health experts need to be key stakeholders in the development process, rather than just anonymous users or data subjects.
To Morris, Koko’s main goal is to create more accessible and affordable mental health services for underserved individuals. “We pulled the feature anyway and I wanted to unravel the concern as a thought piece, to help reign in enthusiasm about gpt3 replacing therapists,” he told Motherboard.
“I think everyone wants to be helping. It sounds like people have identified insufficient mental health care resources as a problem, but then rather than working to increase resources (more funding for training and hiring mental health care workers) technologists want to find a short cut. And because GPT-3 and its ilk can output plausible sounding text on any topic, they can look like a solution,” Bender said.
ChatGPT proves AI is finally mainstream — and things are only going to get weirder
A friend of mine texted me earlier this week to ask what I thought of ChatGPT. I wasn’t surprised he was curious. He knows I write about AI and is the sort of guy who keeps up with whatever’s trending online. We chatted a bit, and I asked him: “and what do you think of ChatGPT?” To which he replied: “Well, I wrote a half-decent Excel macro with it this morning that saved me a few hours at work” — and my jaw dropped.
For context: this is someone whose job involves a fair bit of futzing around with databases but who I wouldn’t describe as particularly tech-minded. He works in higher education, studied English at university, and never formally learned to code. But here he was, not only playing around with an experimental AI chatbot but using it to do his job faster after only a few days’ access.
“I asked it some questions, asked it some more, put it into Excel, then did some debugging,” is how he described the process. “It wasn’t perfect but it was easier than Googling.”
Tools like ChatGPT have made AI publicly accessible like never before
Stories like this have been accumulating this week like the first spots of rain gathering before a downpour. Across social media, people have been sharing stories about using ChatGPT to write code, draft blog posts, compose college essays, compile work reports, and even improve their chat-up game (okay, that last one was definitely done as a joke, but the prospect of AI-augmented rizz is still tantalizing). As a reporter who covers this space, it’s been basically impossible to keep up with everything that’s happening, but there is one overarching trend that’s stuck out: AI is going mainstream, and we’re only just beginning to see the effect this will have on the world.
There’s a concept in AI that I’m particularly fond of that I think helps explain what’s happening. It’s called “capability overhang” and refers to the hidden capacities of AI: skills and aptitudes latent within systems that researchers haven’t even begun to investigate yet. You might have heard before that AI models are “black boxes” — that they’re so huge and complex that we don’t fully understand how they operate or come to specific conclusions. This is broadly true and is what creates this overhang.
“Today’s models are far more capable than we think, and our techniques available for exploring [them] are very juvenile,” is how AI policy expert Jack Clark described the concept in a recent edition of his newsletter. “What about all the capabilities we don’t know about because we haven’t thought to test for them?”
Capability overhang is a technical term, but it also perfectly describes what’s happening right now as AI enters the public domain. For years, researchers have been on a tear, pumping out new models faster than they can be commercialized. But in 2022, a glut of new apps and programs has suddenly made these skills available to a general audience, and in 2023, as we continue to explore this new territory, things will start changing — fast.
The bottleneck has always been accessibility, as ChatGPT demonstrates. The bones of this program are not entirely new (it’s based on GPT-3.5, a large language model that was released by OpenAI this year but which is itself an upgrade to GPT-3, from 2020). OpenAI has previously sold access to GPT-3 as an API, but the company’s decision to improve the model’s ability to talk in natural dialogue and then publish it on the web for anyone to play with brought it to a much bigger audience. And no matter how imaginative AI researchers are in probing a model’s skills and weaknesses, they’ll never be able to match the mass and chaotic intelligence of the internet at large. All of a sudden, the overhang is accessible.
The same dynamic can also be seen in the rise of AI image generators. Again, these systems have been in development for years, but access was restricted in various ways. This year, though, systems like Midjourney and Stable Diffusion allowed anyone to use the technology for free, and suddenly AI art is everywhere. Much of this is due to Stable Diffusion, which offers an open-source license for companies to build on. In fact, it’s an open secret in the AI world that whenever a company launches some new AI image feature, there’s a decent chance it’s just a repackaged version of Stable Diffusion. This includes everything from viral “magic avatar” app Lensa to Canva’s AI text-to-image tool to MyHeritage’s “AI Time Machine.” It’s all the same tech underneath.
As the metaphor suggests, though, the prospect of a capability overhang isn’t necessarily good news. As well as hidden and emerging capabilities, there are hidden and emerging threats. And these dangers, like our new skills, are almost too numerous to name. How, for example, will colleges adapt to the proliferation of AI-written essays? Will the creative industries be decimated by the spread of generative AI? Is machine learning going to create a tsunami of spam that will ruin the web forever? And what about the inability of AI language models to distinguish fact from fiction, or the proven biases of AI image generators that sexualize women and people of color? Some of these problems are known, others are ignored, and still more are only just beginning to be noticed. As the excitement of 2022 fizzles out, it’s certain that 2023 will contain some rude awakenings.
AI chatbots don’t need to be sentient, they just need to work
One may forgive customer service technology users for laughing out loud at the suggestion of AI gaining sentience. They have a difficult time simply bringing their chatbots online to solve basic customer problems, let alone assigning them metaphysical characteristics.
This comes as rogue ex-Google engineer Blake Lemoine said he believes that Google’s LaMDA, a large language model in development, is sentient.
While chatbot technology is evolving, the bots available in today’s market are nowhere near sentient, according to a panel of contact center technology users and other experts. Much more work needs to be done to make customer service chatbots integrate with existing enterprise IT to automate even the simplest tasks.
“If you have some experience chatting with these things and see the results of these new models, it does change something in terms of your perception,” said Dan Miller, Opus Research founder. “You know it’s not sentient, but it’s perfectly capable of sounding very human – and other times just being bats–t crazy.”
Pegasystems founder and CEO Alan Trefler said he doesn’t believe it’s possible that Google LaMDA is sentient, and thinking about it in those terms “confuses the conversation” about AI. As constituted now, he added, AI tech can be used for good, such as making people more productive in their jobs. It can also be used for more controversial applications such as facial recognition for law enforcement.
People have been identifying chatbots as sentient beings for more than 50 years, going back to Eliza, the original chatbot, which was released in 1966 and set up to talk like a therapist. Trefler remembers fellow Dartmouth students developing relationships with Eliza in the mid-1970s. Lemoine fell into the same trap, Trefler believes.
“This is, frankly, just a classic case of anthropomorphism,” Trefler said. “It is so natural for humans to attribute human values, human motivations, human cognitive skills to dogs, cats and machinery … it’s a natural vulnerability that we have.”
IT gets in the way

Sentient or not, customer service chatbots need a lot of care and feeding to perform their jobs. Present-day bot tech suffers – and customers get frustrated – because chatbots can’t easily integrate with legacy systems to access data and automate tasks.

There is real AI tech that isn’t sentient but can be deployed at enterprise scale, said CX consultant Phillip Jackson, co-founder of Future Commerce, a retail media research startup and podcast. The problem for the creators of these technologies is that they must deal with “the slowest moving organisms on earth – the enterprise.”

“You can’t integrate anything because you have 65 middle managers not contributing anything, and they’re all obstructing the actual boots on the ground,” Jackson said. “The engineers don’t have actual access to the real data, or they themselves are obstructions because they are encumbered with legacy tech debt that are the vestiges of digital transformation efforts 15 to 20 years ago that stalled.”

Virgin Atlantic airlines deployed chatbots for customer self-service in 2018 through Microsoft Virtual Agent and Genesys, on channels including WhatsApp, SMS and Apple Business Chat. During the pandemic year of 2020, customer contacts spiked 500%, and the company supersized its chatbot self-service operations because it had no other choice, said Ceri Davies, who manages a Virgin customer service center in Swansea, Wales.

“We are hugely invested because we were in a position where we were able to contain a lot of our customer contact within our chatbot,” Davies said. “We couldn’t just increase the amount of people that we had when the company was under such tough measures.”

But the bot can’t think for itself. To do that, Virgin Atlantic now dedicates a full-time analyst role to monitoring chatbot performance and figuring out how to solve even more customer problems, when possible. The analyst looks at the situations where customers get stuck in loops and tests how new suggestion buttons or free-text fields may help get them unstuck. Recent projects include giving the bot the ability to resolve loyalty-program questions about accrued miles. Other projects involve capturing information so that when the bot does shunt a customer to a human agent, the details are passed on to save both the agent and the customer time.

There are more integrations to come, Davies said. On her wish list are feedback-collection mechanisms to measure customer sentiment, analytics to look at chatbot conversation length and quality, and the ability to measure common contact-center metrics such as average case handling time – which the company already does for its human agents.

As for the chatbot being sentient? “I think that we are probably so far from having a bot that is alive that it doesn’t resonate with us,” Davies said. “I understand what they’re saying. But for us, we are so far removed from that.”
Human-sounding bots: Better CX?

Users of chatbots are divided as to whether their virtual agents should converse with personable, human-like traits and whether they should be given names, such as Eliza. Virgin Atlantic’s is named Amelia. But Amelia’s success lies in its effectiveness, not its human-like traits, and its effectiveness is measured by the percentage of customer problems it solves. For Virgin, 60% is good, and 40% is not.

“Something that all companies need to be wary of is putting something out there that isn’t very good, and how that impacts your customers,” Davies said. “If I’m a customer and I’ve had a poor experience, I’m probably not going to try and use it again.”

Rabobank, the largest Dutch bank, has 10 million retail customers and one million business customers. It does not assign names to its two Microsoft-Nuance chatbots, which run on Azure in conjunction with its Genesys cloud customer service and telephony system for its contact center and bank branches. Rabobank mapped numerous customer journeys to control what slivers of customer service the bots could handle, said Thom Kokhuis, head of conversational banking and CRM. Customers had done business mostly in person prior to 2020, and they now prefer mostly audio and video channels for their banking, he said.

“Will sentience in artificial life ever be achieved? I’m not prepared to say, ‘No.’ Are we remotely close to it? Absolutely not.” – Alan Trefler, CEO, Pegasystems

Bots can perform uncomplicated tasks such as canceling a lost or stolen credit card or requesting an increased credit card limit. In contrast, complex processes like applying for a Rabobank mortgage can now be done digitally – but they are handled by humans, not bots.

“You should only position your chatbots where the chatbots can help and are of value in the customer journey,” Kokhuis said. “Complex questions, the chatbot cannot handle. We don’t want chatbots to handle high-emotion journeys, because chatbots cannot give emotions back – it’s only text.”

Bots and web experiences need to strike a balance between programming and fake humanity to help technology users create good experiences, said Brett Weigl, senior vice president and general manager of digital and AI at Genesys. On one hand, customers can’t be required to enter the equivalent of secret codes to solve simple problems with a bot.

“If I effectively have to go to a command line to get something done as an end user, well, that’s not where we should be,” Weigl said. “The other extreme is some artificial personality. That I just don’t see as a useful ingredient for most scenarios.”