Air Canada found liable for chatbot's bad advice on plane tickets

brningpyre

Summary

Air Canada has been ordered to pay compensation to a grieving grandchild who claimed they were misled into purchasing full-price flight tickets by an ill-informed chatbot.

 

Quotes

Quote

In an argument that appeared to flabbergast a small claims adjudicator in British Columbia, the airline attempted to distance itself from its own chatbot's bad advice by claiming the online tool was "a separate legal entity that is responsible for its own actions."


"This is a remarkable submission," Civil Resolution Tribunal (CRT) member Christopher Rivers wrote.
"While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot."

 

My thoughts

It's a pretty wild argument to suggest that the chatbot hosted on your website is a separate legal entity. I think my only issue is that this was only in B.C. small claims court, so we may need a bigger legal case to establish an actual legal precedent.

 

Sources

 https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416


Who are they suggesting is liable? A 3rd party that provided the tech for the chatbot? Maybe there's something to that (although I'd expect AC to be liable, and then they can try and go chase the 3rd party). 

 

Or are they actually suggesting that you need to just sue the chatbot directly? lol like wtf.


1 minute ago, Holmes108 said:

Who are they suggesting is liable? A 3rd party that provided the tech for the chatbot? Maybe there's something to that (although I'd expect AC to be liable, and then they can try and go chase the 3rd party). 

 

Or are they actually suggesting that you need to just sue the chatbot directly? lol like wtf.

Wouldn't it be hilarious if they tried to cross-examine a chatbot, though?


Anyone using a website chatbot for anything deserves to be misled.

I cannot think of one instance where I see those as anything other than a pop-up ad on the corner of my screen.


It's interesting that we're finally getting some legal precedent set on this. Along with some of the other stuff that has been happening in the space, I bet more companies will think twice about just slapping AI/LLM chatbots all over their sites.


Chatbots need to go. They have never provided anything useful that you couldn't already find yourself; they just act as a way to discourage you from talking to a human by making it incredibly hard.

Gaming PC: Ryzen 5 5600x, 32GB, GTX 1080

NAS & Home server: i7-7700k, 16GB

M2 Pro Macbook for everything else


On 2/16/2024 at 8:40 AM, 8tg said:

Anyone using a website chatbot for anything deserves to be misled.

I cannot think of one instance where I see those as anything other than a pop-up ad on the corner of my screen.

oh gawd, let me tell you my experience with Skip the Dishes "customer support"

 

I'm 100% confident that it's always a chatbot. You know why? Because I ordered something when the store was open, but the customer support seemed to think a driver had already picked it up. The problem was the order had been pending for 4+ hours, well into the time the store should have been closed. So I was like "fine, I'll wait", but after an hour nothing had happened, and when I checked back the customer support clearly could not understand anything I was saying until I said "refund".

 

This is the problem with "chatbots": they are fed a very limited amount of information and possess no means of thinking about the situation, no means of defusing a situation. And they will pretty much offer wrong information the second you ask something they don't have any context for.

 

They basically invent answers that are wrong the second you ask about anything they weren't trained to answer.

 


On 2/16/2024 at 12:40 PM, 8tg said:

Anyone using a website chatbot for anything deserves to be misled.

I cannot think of one instance where I see those as anything other than a pop-up ad on the corner of my screen.

It shouldn't be on the customer to go out of their way to make sure that the help they are getting, provided by the company on their official website no less, is accurate and competent. If the customer got bad, third party information, that would be a whole other story, but this is Air Canada providing this tool as "help," and said "help" directly screwed over a customer...

Razer Blade 14 (RZ09-0508x): AMD Ryzen 9 8945HS, nVidia GeForce RTX 4070, 64 GB 5200 DDR5, Win11 Pro [Workhorse/Gaming]

Apple MacBook Pro 14 (A2442): Apple M1 Pro APU (8 Core), 16 GB Unified Memory, macOS Sonoma 14.3 [Creative Rig]

Samsung GalaxyBook Pro (NP930QDB-KF4CA): Intel Core i7-1165G7, 16 GB DDR4, Win11 Pro [WinTablet]

HP Envy 15-k257ca: Intel Core i5 5200U, nVidia GeForce 840M, 16GB 1600 DDR3, Win7 Pro [Retro]

Toshiba Satellite A70-S2591:  Intel Pentium 4 538, ATI Radeon 9000 IGP, 1.5 GB DDR RAM, WinXP Pro [Antique]


For those like me who read the OP and didn't understand what was going on, this is what happened:

 

1) Some guy wants to buy an airplane ticket because his grandma has died.

2) Air Canada lets customers fly at a low price if the flight is booked because of death-related things with close family members.

3) The guy asked the chatbot how to get the reduced price and was told to buy a regular ticket and then apply for a refund afterward.

4) The guy booked the full-price flight and then when trying to apply for the refund was rejected because that's not how you should do it. What he should have done was call them and book the flight in a special way that would have made the flight cheaper upfront. 

5) The guy decided to sue Air Canada.

 

Unfortunate situation but luckily it seems like the guy was compensated for it.

 

 

 

On 2/17/2024 at 3:51 AM, leadeater said:

Companies are liable for bad advice and actions of employees, chatbots are no exception 🤦‍♂️

 

Not even a good try.

That's what I find the most interesting in all of this. Air Canada tries to argue that they aren't responsible for what their employees do. 

Quote

Air Canada argued that it can't be held liable for information provided by one of its "agents, servants or representatives" — including a chatbot.

 

The story involving a chatbot is kind of irrelevant because it seems like Air Canada would have had the same position if it was a customer representative who gave a customer incorrect information (which happens quite a lot I might add). 


1 hour ago, LAwLz said:

For those like me who read the OP and didn't understand what was going on, this is what happened:

 

1) Some guy wants to buy an airplane ticket because his grandma has died.

2) Air Canada lets customers fly at a low price if the flight is booked because of death-related things with close family members.

3) The guy asked the chatbot how to get the reduced price and was told to buy a regular ticket and then apply for a refund afterward.

4) The guy booked the full-price flight and then when trying to apply for the refund was rejected because that's not how you should do it. What he should have done was call them and book the flight in a special way that would have made the flight cheaper upfront. 

5) The guy decided to sue Air Canada.

 

Unfortunate situation but luckily it seems like the guy was compensated for it.

 

 

 

That's what I find the most interesting in all of this. Air Canada tries to argue that they aren't responsible for what their employees do. 

 

The story involving a chatbot is kind of irrelevant because it seems like Air Canada would have had the same position if it was a customer representative who gave a customer incorrect information (which happens quite a lot I might add). 

Did you read the article? It was about the bot telling the customer that they could get it refunded:

Moffatt provided the CRT with a screenshot of the chatbot's words: "If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form." 

 

And Air Canada denied the request:

But when they contacted Air Canada to get their money back, they were told bereavement rates don't apply to completed travel — something explained on a different part of their website. 

 

So it IS about a chatbot providing false information. 

And yes, given it's a tool implemented by the company, the company is obviously liable. They can then choose to go after the bot provider, but that should be of no concern to the customer.

mITX is awesome! I regret nothing (apart from when picking parts or having to do maintenance *cough*cough*)


On 2/17/2024 at 12:40 AM, 8tg said:

Anyone using a website chatbot for anything deserves to be misled.

I cannot think of one instance where I see those as anything other than a pop-up ad on the corner of my screen.

What a terrible take. Blame the person, not the company? I get that you don't like chatbot help, but it should be taken out on Air Canada, not the customer. After all, like many sites, they push the AI help first and tend to hide proper contact channels.

 

And not everyone is as tech savvy as the typical user here. They may not even know that the chatbot is some random AI. If I sat my mom down in front of it, she'd likely think there is a real person on the other end.


1 hour ago, DeerDK said:

Did you read the article? It was about the bot telling the customer that they could get it refunded:

Yes I did. Did you not read my post?

 

1 hour ago, DeerDK said:

So it IS about a chatbot providing false information. 

I never said it wasn't...

 

 

1 hour ago, Blue4130 said:

What a terrible take. Blame the person, not the company? I get that you don't like chatbot help, but it should be taken out on Air Canada, not the customer. After all, like many sites, they push the AI help first and tend to hide proper contact channels.

 

And not everyone is as tech savvy as the typical user here. They may not even know that the chatbot is some random AI. If I sat my mom down in front of it, she'd likely think there is a real person on the other end.

I agree that it's bad to blame the customer for this. The blame is on the company.

I don't think we should throw the baby out with the bathwater though. We don't just throw everything out as soon as it makes a single mistake. It's not like I yell at a fast food worker and demand they be replaced by robots just because one human got my order wrong one time.

 

The issue here is that Air Canada didn't take responsibility for the error, and that they also said they wouldn't take responsibility if one of their humans had made the error. They should be held responsible for what their employees (humans or bots) do. 


5 minutes ago, LAwLz said:

Yes I did. Did you not read my post?

 

I never said it wasn't...

 

 

I agree that it's bad to blame the customer for this. The blame is on the company.

I don't think we should throw the baby out with the bathwater though. We don't just throw everything out as soon as it makes a single mistake. It's not like I yell at a fast food worker and demand they be replaced by robots just because one human got my order wrong one time.

 

The issue here is that Air Canada didn't take responsibility for the error, and that they also said they wouldn't take responsibility if one of their humans had made the error. They should be held responsible for what their employees (humans or bots) do. 

My apologies. I somehow got your whole message backwards. I was convinced you were arguing that OP just booked a full-price flight and the rest happened as an afterthought. Must have had my thoughts somewhere else.

My bad.

 

3 hours ago, LAwLz said:

The story involving a chatbot is kind of irrelevant because it seems like Air Canada would have had the same position if it was a customer representative who gave a customer incorrect information (which happens quite a lot I might add). 

I guess this played a part in my reading. I would still say that the chatbot is a core aspect, since this was what they claimed should exculpate them.
I do agree that it wouldn't fly in either case, though.

mITX is awesome! I regret nothing (apart from when picking parts or having to do maintenance *cough*cough*)


I expect a lot of companies that have deployed GPT-based customer support are going to end up regretting it soon.

Anything that is hosted on or accessed through your website is going to be considered official policy. In particular any form of customer support chat: even if you have terms of use saying that anything the support chat says is garbage and should be ignored, the courts are going to find that this is non-binding, and you are responsible for honouring what your customer support offers, regardless of whether it is a human or a chatbot.


7 hours ago, LAwLz said:

The issue here is that Air Canada didn't take responsibility for the error, and that they also said they wouldn't take responsibility if one of their humans had made the error. They should be held responsible for what their employees (humans or bots) do. 

In most of the world, if a company representative in a customer support role were to clarify the refund policy, the company would be compelled to fulfil it, so long as the clarification is reasonable and not obviously incorrect.

Some employers (bad ones) would then try to extract that cost from the human (or, I suppose, the agency that runs the chatbot service), but the correct thing to do here is to look at your training and guidance, and maybe your refund policy itself...


22 minutes ago, hishnash said:

I expect a lot of companies that have deployed GPT based customer support are going to end up regretting it soon.   

While I agree, there's no mention in the article of this being a GPT or any form of neural network.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


5 hours ago, Sauron said:

While I agree, there's no mention in the article of this being a GPT or any form of neural network.

In the end it does not matter from a legal perspective. You're right that this case might not have been a modern ML-based solution; however, for it to produce an incorrect but valid-looking sentence on its own (not just select a canned response) suggests some form of LLM, or at least an LM (if not large).

However, a lot of start-ups have been created in the last two years that are based on using LLMs to replace traditional canned-response branching chatbot services. The selling point of these is that they can rewrite content to feel more approachable to the user, but it will mean that a LOT of them end up making up company policies all the time.


I have had this sort of argument within groups and companies, where you have an "AI" bot supporting people. They hype up this bot and how it's "AI" and how it will support people so much faster and whatever. When I bring up questions like "what if it finds incorrect info or its platform is circumvented?", nobody wants to answer who is responsible. Is it the bot maker, the company implementing it, or the admin in charge of feeding it data? If it's the company that created the bot, does their initial agreement or a clause let them off the hook? Since there isn't a fallible human to simply re-educate or scapegoat, companies are going to find that scapegoating gets more and more uphill lol.


On 2/19/2024 at 4:28 AM, DeerDK said:

Did you read the article? It was about the bot telling the customer that they could get it refunded:

Moffatt provided the CRT with a screenshot of the chatbot's words: "If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form." 

 

And Air Canada denied the request:

But when they contacted Air Canada to get their money back, they were told bereavement rates don't apply to completed travel — something explained on a different part of their website. 

 

So it IS about a chatbot providing false information. 

And yes, given it's a tool implemented by the company, the company is obviously liable. They can then choose to go after the bot provider, but that should be of no concern to the customer.

I mean, the word "kindly" from the chatbot should have been a red flag that it was a scam, clearly. 😂


On 2/26/2024 at 11:06 AM, Touch My Hamm said:

I have had this sort of argument within groups and companies. Where you have an "ai" bot supporting people. They hype up this bot and how it's "AI" and it will support people so much faster and whatever.

Cause time and time again "this will make THING, better, FASTER, cheaper"

 

The Big Lie of “Good, Fast, Cheap” | by Benek Lisefski | The ...

https://medium.com/swlh/the-big-lie-of-good-fast-cheap-fb8905818250

 

This Venn diagram is always true, and doubly so for generative AI. A proper "good" and "fast" LLM that gives high-quality answers would require literally curating every piece of input that goes into it. "Cheap and fast" is what nearly every "chatbot" is.

 

Companies want to make everything cheap, and just like every goddamned software upgrade, software plateaued around 1995. Every office feature you could ever want already existed by 1995. Spell check? Grammar check? Graphs? Clip art? All this stuff has been around for 30 years. What has changed that justifies software upgrades? New OS? Why?

 

It becomes clear over time that every justification to upgrade tends to be the opposite. For example, look at the LTT forum. Does any upgrade to Invision improve anything? No. The OS wants an update, then PHP wants an update, and then because PHP is updated the forum needs to be updated. Or sometimes it's top-down, where Invision wants you to run the latest PHP, which means you need to upgrade PHP, and in turn the underlying SQL database due to micro-aggressive changes to how it works out of the box, and in turn upgrade the OS because system libraries require an upgrade.

 

Every "this will make it faster, better" requires a huge time sink and cost. You're often better off ignoring all insistence by the OS and the CMS to upgrade, unless it's a critical security problem, because what I've seen is a lot of pointless "fixed security bugs" patch notes with no explanation of what they actually fix. Why should I trust these changes?

 

Everyone needs to walk back from Chrome/Firefox version inflation. This has utterly ruined stability of websites globally. Everything is dictated by Google now.

 


I find it funny how much effort is spent on implementing a solution that is completely dead-on-arrival, either because it's useless or, as demonstrated, a liability. I'd argue that any customer interaction that can't be conclusively modeled in a flowchart should always be handled by a person who can reason their way to a proper solution. That in turn means that if you have processes that can be modeled in a flowchart, you do that. You don't implement a chatbot to walk through that, you just have a guided FAQ form where you click the appropriate responses until the desired outcome is reached. And if anything falls out of the expected guardrails, you immediately switch to a real person to help.

 

I really don't get this fascination companies have with introducing these chatbots, when all you really need is a competent search function on your website to let people find the necessary information, the aforementioned interactive FAQ that allows for a few branching options along the way, and some human staff to handle the rest. I can't imagine a scenario where a chatbot improves on anything I've ever had to do on a website.

 

And the idea to shift blame away by claiming the chatbot is a separate legal entity is just bonkers. But hey, we've all been fooled into believing corporations are people in order to make it easier for owners and executives to escape prosecution. In their mind, it was obviously worth the attempt to see if they could do one better.

 

