
How generative AI can help and challenge companies looking to improve productivity


Jurisage CEO Aaron Wenner speaks with Alberta Primetime Host Michael Higgins about how businesses can effectively implement generative AI to increase efficiency, and how to mitigate any risks associated with its use.

Michael Higgins: Air Canada was ordered to compensate a B.C. man earlier this year because its chatbot gave him inaccurate information about applying for bereavement fares after travelling to a funeral.

The airline later denied the claim and argued the chatbot was responsible for its own actions.

B.C.'s Civil Resolution Tribunal disagreed, eventually ruling in the passenger's favour. Is the case of the airline being held responsible for the actions of its chatbot a blip on the radar, or a window into the road ahead as more and more businesses look to AI?

Aaron Wenner: I think absolutely a window into the road ahead. I think that generative AI offers some really extraordinary potential, some really extraordinary opportunities for making things more efficient and for helping generate text in creative ways.

But it comes with a lot of potential liability, as that case showed, especially when it comes to delegating what was traditionally a human job to, effectively, a robot.

And there are a lot of risks that companies do need to take into account. As they say, with great power comes great responsibility, and these technologies, while very exciting, are still quite new.

So it is incumbent on all of us to figure out how best to use them, in what types of frameworks they can be productive, and in what types of scenarios they are potentially risky.

Michael Higgins: Maybe on that point, what level of risk is there in the pervasive use of AI? What do businesses need to take away from the Air Canada example?

Aaron Wenner: Businesses need to take away that these are robots, not people, and even if they were people, the company, the organization, still bears overall responsibility for what its human employees do and for what its delegated technologies do.

And so that means thinking very carefully about what these tools are going to be doing, to what extent they are interacting with the public, to what extent they are being put in situations where decisions need to be made, and how we are going to mitigate the risk.

Is it a matter of training these machines better? Is it a matter of putting in guardrails or constraints? Is it a matter of putting a human in the loop?

Probably all of those things are necessary.

The human in the loop, I think, is the most important one. These technologies are force multipliers: you can do a lot more than you could before, but it's really helpful to have a sensible, thinking human somewhere in the mix to make sure that the tools don't go off the rails.

Michael Higgins: In terms of humans in the loop, where does this leave consumers and maybe confidence in the actions of the business community?

Aaron Wenner: I think it's fair to say that you can expect to see chatbots and these types of technologies coming to a phone call near you or a website near you quite soon.

I think that the general public does need to be aware of, and concerned about, what types of representations are being made, but that would still be true if there were a human on the other end. So there's a bit of a balance here: being aware that you might be interacting with a chatbot, but also being aware that the fundamental liabilities, and the legal relationship between you and the company you're dealing with, have not changed.

I think that in the decision from the Civil Resolution Tribunal, the company, Air Canada in this case, was held responsible for what its chatbot said in the same way as if an employee had misrepresented what its policies were.

Michael Higgins: Is there any point in which a chatbot is a separate legal entity and as such responsible for its own actions?

Aaron Wenner: That’s a great question, and probably beyond my pay grade.

I think it's safe to say that we're not nearly at the place, from a technology perspective, where that's a reasonable concern.

The other thing is that the law already has solid frameworks governing the relationship between employer and employee, and what an employee does in relation to the general public.

So first of all, technologically, we're not there yet; we're not even close to being there yet. And even if we were, there are some pretty good frameworks already out there that we can rely on.

So even if the technology were there, it wouldn't dramatically change the overall level of responsibility that a company owes to its employees or to its consumers.

Michael Higgins: What's your company's approach, then, to integrating generative AI into the legal profession, and how is it different?

Aaron Wenner: We operate in a slightly different space. We don't provide services to the general public.

Our company helps lawyers reuse information they've already written: information scattered within their law firm's file systems, perhaps a document here, a document there. It allows them to access and reuse that information so that they can work faster and more efficiently, and serve more clients with excellent quality and excellent results.

So for us, we've always thought of our technology, and forgive me for this, as kind of like a robotic exoskeleton. We're not talking about robots that will replace what a human can do. We're talking about using technology, using tools that help lawyers get more effective at the tasks they were already doing, and that's, I think, a great place for generative AI to be.

We can help lawyers write better submissions to court by helping them find language they might have used in the past in similar submissions.
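To make that idea concrete, here is a minimal sketch of that kind of "find similar past language" retrieval, using plain TF-IDF similarity in Python. Jurisage's actual retrieval method is not public; the document snippets, the query, and the scoring approach below are illustrative assumptions only.

```python
# A minimal sketch of retrieving similar past language with TF-IDF similarity.
# Everything here (the snippets, the draft query) is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical snippets drawn from a firm's past submissions.
past_submissions = [
    "The applicant seeks leave to appeal the tribunal's costs order.",
    "The respondent's delay in disclosure prejudiced the moving party.",
    "The standard of review for questions of law is correctness.",
]

draft_sentence = "What standard of review applies to this question of law?"

# Vectorize the firm's past text together with the lawyer's draft query.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(past_submissions + [draft_sentence])

# Rank past passages by cosine similarity to the draft, most similar first.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for score, text in sorted(zip(scores, past_submissions), reverse=True):
    print(f"{score:.2f}  {text}")
```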

That's a great use of technology because it keeps the human in the loop.

What it does is it keeps the lawyers in control, it helps them solve a real meaningful problem without going beyond its guardrails, and it's always based on trusted data.

And so those are the parameters that we built for ourselves when we decided we wanted to use generative AI and these technologies, and we think we've seen excellent results because of that.

Michael Higgins: Okay, so no chatbots then?

Aaron Wenner: No chatbots for sure. I think chatbots are actually really interesting as a use case.

We had a lot of conversations with our customers, and we found that, to a one, they despised interacting with chatbots.

What they found is that it was difficult to have to teach the robot to do a thing, and if you have to teach the robot to do a thing, you may as well write all those instructions for a person.

And so when we built our technology, we said, look, chatbots are great; they represent one way of interacting with generative AI, but there are other ways, where you can hide the chatbot behind user interfaces that are, perhaps, more familiar to our customers.

So, simply by clicking on a drop-down, you can pick, for example, the kind of summary of a legal decision that you might want to get.

Our technology can take that drop-down selection and match it with a really good prompt that we've developed, so that the end result is a lawyer clicks a button and gets a really good result.
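As a rough illustration of that pattern, here is a minimal sketch in Python: each drop-down option maps to a curated prompt template that is filled in and sent to a language model. The option names, the prompt wording, and the `llm` callable are all hypothetical stand-ins, not Jurisage's actual prompts or model integration.

```python
# A minimal sketch of the drop-down-to-prompt pattern described above.
# The options, prompts, and `llm` callable are hypothetical placeholders.
from typing import Callable

# Each UI option maps to a curated, pre-tested prompt template.
PROMPT_TEMPLATES = {
    "Plain-language summary": (
        "Summarize the following legal decision in plain language, covering "
        "the facts, the issues, and the holding:\n\n{document}"
    ),
    "Key holdings": (
        "List the key holdings of the following legal decision as numbered "
        "points, citing paragraph numbers where possible:\n\n{document}"
    ),
}

def run_selection(option: str, document: str, llm: Callable[[str], str]) -> str:
    """Turn a drop-down choice into a finished prompt and send it to a model.

    The lawyer only clicks a button; the prompt engineering is done once,
    up front, by the tool's developers.
    """
    prompt = PROMPT_TEMPLATES[option].format(document=document)
    return llm(prompt)
```

The design point is that the prompt lives with the tool, not the user: the lawyer's interaction is a single familiar click, and the prompt can be tested and version-controlled like any other piece of the product.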

One of the things that chatbots require you to do, as I mentioned, is create a prompt, a series of instructions for the large language model to work with, and there's an art to that. It's an art that, quite frankly, lawyers don't need to learn.

Lawyers are better off using their fundamental skills of writing, thinking, and providing good representation for clients. If we can abstract away the need to interact with a chatbot, then we can effectively cut out the middleman and give our customers a really good result, a really good user interface, and a really good set of capabilities, without them having to get in the middle of that, sort of, chatbot interaction.

The other thing on chatbots is that they do represent a risk for law firms.

Anybody can type anything into a chatbot, and that information is going to go outside the firm's security perimeter.

And so by removing chatbots we can make it easier for our customers, who make up most of the large law firms in Canada at this stage, to adopt this technology more comfortably, knowing that a major security risk is closed off at the outset.
