
WEX: A Spectrum of AI Experimentation

By Steven Melendez | July 17, 2023

Portland, Maine-based WEX is a provider of business financial services, including fuel payment cards for vehicle fleets, employee benefit administration tools, and payment management systems. Karen Stroup joined the company in 2022 as its first Chief Digital Officer, focused on pursuing online commerce and other digital opportunities across WEX’s divisions. In 2022, WEX had $2.4 billion in revenue.

She spoke to InnoLead about WEX’s recent projects and ongoing experiments integrating artificial intelligence throughout the business.

What are some of the big experiments and big projects you’re doing involving artificial intelligence today?

Karen Stroup, Chief Digital Officer, WEX

If I take a step back, we're fundamentally starting from a place where AI is no longer a nice-to-have. It's absolutely essential that we lean into AI to evolve as a company, to drive revenue growth, to stay ahead of our competition, to meet our customer needs, and to reimagine ways of working.

I would say we’re probably two-thirds experimentation and learning, [and] one third scale. And we’re scaling in areas where the applications of AI are known, they’re safer, and they’re more proven use cases. And so think about things like risk and fraud and credit — [those] are really big parts for a payments business.

We’re actively using AI and our credit models to understand and evaluate risk, to understand and proactively monitor that credit line that we have extended to you. If we see things that suggest you may be having some struggles, we may lower your credit limit, or [the opposite] if you’re doing great — so we’ll use AI to help make those decisions. And then as we make decisions…we have to provide transparency and rationale. So we need AI to help communicate back the rationale behind that decision.

The same thing goes for fraud. Obviously, with the…pace of all the change in technology and AI, it’s great for industry, but the fraudsters and the bad actors have also really latched on to those tools. And so we’re using AI to try to keep our drivers on the road. If you imagine a truck driver who is filling up on average $1,500 every time they go to the pump, if we shut off the card because there’s a fraud audit, it’s a tricky situation for that driver, right?

We’re trying to make sure that we don’t have massive fraud, and…that we prevent fraud proactively.

And then from an experimentation perspective, we really are leaning into experimentation with a desire to learn — learn not only how the technology works, but as an organization, what is the art of the possible? How could we experiment with it? How do we need to work differently as an organization as we play with AI? And we are doing that across every function.


With those experiments, do you coordinate those centrally, or do different units of the company run their own experiments?

I think there’s a spectrum. And I actually love the spectrum of experimentation in terms of driving innovation. We have just launched a new committee, and the AI Center of Excellence. We are actively encouraging people to come and bring use cases, and we have office hours where people can come and pitch ideas and get help in thinking about things.

[In] sales [and] marketing operations, we know that there’s a potential for AI there. [The technology team] is obviously experimenting with a lot of the new emerging tools to help build code and automate QA more effectively. So some is very intentional and top-down. But we want to encourage that culture of experimentation.

We have a hackathon where one of the themes is AI, and people can pitch any idea.

And then I organically hear about these cool ideas that are just happening, unscripted. And so we’re trying to encourage that playfulness and experimentation.

And as these ideas surface, whether internally or at the hackathons, how do you test those out? What do those experiments look like?

I like to create the lowest bar [for] an experiment. So we had—I won’t give the specifics on it—but we had a group that for a new product innovation wanted to use ChatGPT to create a customer benefit. And literally, they just said, “Can we take a day and code this and see if it works?” And so they worked all day, they didn’t do any other meetings, and they had a working [proof of concept] by the end of the day. And it’s super cool.

And then they took the work and POC to the sales team and said, “Hey, if we could do this, would that add value to you?” The sales team was like, “Oh my gosh, this is amazing. Yes, I spent hours trying to get this data.”

I think the hurdle we have is we don’t want year-long experiments, where we don’t find out anything until 12 months have passed. How do you break down the problem into quick hypotheses, and how quickly can you test them to validate them? The important part of this is testing it with the customers, whether it be internal customers or external customers. Is it really driving the intended result? And are there any unintended consequences?

Do you have a formal process for evaluating that, or is it on a case-by-case basis?

We have an AI governance committee, so there’s a certain bar where we say, “Hey, these are not okay.” And obviously, with all of the publicity right now about the potential downsides of AI, we want to exercise caution. We want to be careful about regulatory issues, confidentiality, proprietary information, bias, transparency, fairness; all of that is super important to us.

[For our evaluation criteria], I really think about three things. One, is it driving higher revenue? Two, is it reducing cost? And three, is it improving the customer experience? I generally believe investing in those three things leads to competitive advantage. And if it’s not doing any of those three things, maybe there’s another reason to do it. But you kind of wonder, what’s the “so what” about it?

Are you primarily developing these AI projects in house, or do you work with partners and external vendors?

Yes is the answer. We start with the use case, and we start with the outcome, and we figure out what’s the best and fastest way to get that outcome. We start from a belief at WEX that differentiation will be the data, not the source models themselves. We’re probably not going to build a better ChatGPT model than OpenAI. [But we] believe that the data will be one of the key dividers between the haves and have-nots. And how well we do at harnessing that data and applying it will help us build sustainable competitive advantage.

So there are places where we build our own models. But in general, I would say we’re really more focused on the business and customer outcome. And we’ve certainly partnered with external vendors. There’s a pilot we’re doing right now on legal contract review. And there’s someone who has a ton of expertise in it. And we could get to pilot in a couple of weeks by using an external vendor.

With some of the more established projects that you talked about — we could start with credit risk — how did the AI come to be used? And how did that grow from an idea to something you’re using in practice?

We’ve been using AI in credit for several years. It’s evolved and gotten smarter and more sophisticated over time, and will continue to do so as the tools, the data, and the application change. …This actually predated me, but it started as an experiment as well. There was kind of a core way of doing credit and making credit decisions, and then a team created a pilot that was a challenger model, and that ran in parallel. And then they said, “How are we doing?”

[We recognize] that a lot of AI isn’t going to outperform at the very beginning, so you’ve got to learn and see how that performs, and credit also takes a little while to see how the results mature over different credit vintages. Most people aren’t going to default on their credit card in the first month that they get it. So we took time to explore, experiment, and harden the model. …That idea of test, launch, learn, iterate is the model by which we do our core product innovation, but also AI.
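The champion/challenger setup she alludes to, running the experimental model in parallel with the incumbent, acting only on the incumbent's decisions, and comparing results once credit vintages mature, could look something like the following sketch. The model objects, scores, and logging shape are assumptions for illustration, not WEX's system.

```python
def shadow_score(applications, champion, challenger):
    """Run a challenger credit model in shadow mode: only the champion's
    score drives real decisions; the challenger's scores are logged so the
    two can be compared once outcomes are observable."""
    return [
        {
            "app_id": app["id"],
            "champion_score": champion.score(app),      # acted on today
            "challenger_score": challenger.score(app),  # recorded, never acted on
        }
        for app in applications
    ]


def default_rate_among_flagged(log, defaulted, score_key, cutoff=0.5):
    """Months later, once defaults in a credit vintage are observable:
    a better model concentrates the eventual defaulters among the
    accounts it scored as high risk."""
    flagged = [row["app_id"] for row in log if row[score_key] > cutoff]
    return sum(defaulted.get(a, False) for a in flagged) / max(1, len(flagged))
```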

In the fraud solution, was that a similar approach?

I think all of these are. They evolve based on what we see happening in the market.

We have really good proactive leading indicators so that we can monitor when we see deviations in customer behavior. …We talked about this in our earnings last year. We saw an increase in application fraud, and fraud at the pump. So we very quickly put models out that were able to identify potential fraudulent applications, and, down to the level of the pump, we could figure out which pump had a skimmer on it. And then we could call that [station] and be like, “Hey, we think you have a skimmer on Pump 7.” The most important part of it is [that] we’re able to be proactive and predictive…
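She doesn't describe the mechanics, but the pump-level idea, watching for an individual pump whose share of suspect transactions deviates sharply from the baseline, can be sketched roughly as follows. The column names and thresholds are assumptions for illustration only.

```python
import pandas as pd


def flag_suspect_pumps(txns: pd.DataFrame, min_txns: int = 50, z_threshold: float = 3.0) -> pd.DataFrame:
    """Illustrative sketch: flag individual pumps whose rate of
    fraud-suspect transactions deviates sharply from the fleet-wide baseline.
    Expects columns: station_id, pump_id, is_suspect (0/1)."""
    per_pump = (
        txns.groupby(["station_id", "pump_id"])["is_suspect"]
        .agg(rate="mean", count="size")
        .reset_index()
    )
    per_pump = per_pump[per_pump["count"] >= min_txns]          # ignore thin data
    baseline, spread = per_pump["rate"].mean(), per_pump["rate"].std()
    per_pump["z"] = (per_pump["rate"] - baseline) / spread      # how unusual is this pump?
    return per_pump[per_pump["z"] > z_threshold]                # candidates for a skimmer call
```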

Your approach to iteration, which you’ve mentioned several times — can you say more about that?

A big part of our methodology at WEX is customer-driven innovation, which simply is starting by deeply understanding the customer problem. I think Einstein said, “If I had 100 hours, I’d spend 99 figuring out the problem.”

The second step is going bold in the solution. Oftentimes, your first idea is not your best idea. We have a template that says that you put the problem in the middle, and you have to come up with seven different ways to solve the problem. That’s just the methodology to get the creativity going. How would you gamify it? How would you use AI? How could you partner with another team to do it?

Step one is, understand the problem. Step two, go bold. Step three, understand what your riskiest assumptions are, and how can you test them. And then that leads to an MVP, and continued testing and iterating.


Earlier, you mentioned the new AI committee. What’s their role going to be?

We have two different committees. We have a goal of making AI accessible across the organization. So there are really three different things that we’re doing in service to that. The first is education. …We partnered with the Roux Institute to create the WEX Data School, to help train individuals in data and AI.

The second thing is this AI Center of Excellence. …You’re making the time and space to think about how you could use AI. And if you’ve got time carved out each month in which you’re talking with a cohort of people about what’s possible and what you’re doing, there’s a little bit of pressure to keep up with the Joneses.

And then the third piece is the AI Governance Committee, which is obviously much more about governance than the education or the exploration. It’s playing an important role in providing constructive feedback to junior data scientists through code reviews and targeted coaching, helping us create controls over time, and thinking about what our position is on the data, the different AI models, and ways of engaging.

Are you actively recruiting for data scientists and engineers, and what does it look like to build out those teams?

Yes, we absolutely are. We’re going to try and figure out how to double or triple our data team holistically, somewhere in that ballpark.

You mentioned LLMs and ChatGPT. But are there other AI technologies you’re actively testing?

…I’d say generative AI is probably the one that has the most fanfare. What we are primarily using right now, what we would say we’re operationalizing, are [machine learning and deep learning] models, natural language processing, and efficient use of data. We’re really starting to explore decision science and causal AI, with generative AI being the next category that we’re experimenting with.

The thing that weighs on my mind is making sure that we’re responsible with it, and we’re exercising caution along the way. So it’s this push and pull: so much power, and with power comes a responsibility to make sure that we are being good stewards.


Do you already have examples you can share with how you’re using generative AI?

A couple of the areas that we’re experimenting with [are] in sales and operations. …How do you automate some of the more routine processes in a way that empowers customers and empowers our employees to be more effective, and drive more results? With operations, we’ve already used AI to augment our claims processing… We used to have people reviewing all of them, regardless of size, or regardless of the clarity of the decisions… [The AI] enables us to process claims significantly faster.
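As a rough illustration of that kind of claims triage (not WEX's actual logic; the amount threshold, confidence cutoffs, and `model` interface are stand-ins), small and unambiguous claims get auto-processed while everything else still goes to a person:

```python
def triage_claim(claim: dict, model) -> str:
    """Illustrative routing: auto-approve small claims the model is
    confident about; send everything else to a human reviewer.
    `model.predict_proba` and all thresholds are hypothetical."""
    approve_confidence = model.predict_proba(claim)  # probability the claim is valid
    if claim["amount"] <= 500 and approve_confidence >= 0.95:
        return "auto-approve"
    if approve_confidence <= 0.05:
        return "auto-deny-with-review"  # still spot-checked by a person
    return "human-review"
```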

And as you start to expand, you can imagine AI helping us understand what customers’ needs are, so we can proactively connect them with the right answer. It could be a person, it could be the right data source — but having more information about why they’re reaching out to us. Because let’s be honest, no one likes calling into a call center. …We’re really exploring how we can use personalization and AI to create awesome customer experiences.

It’s a super fun time. To me, this has been the fastest pace of change that I’ve seen since at least the Internet came out. I would say AI is bigger than cellphones. It is such an exciting time to be innovating and creating value for customers.

Key insights…
• “We don’t want year-long experiments,” says WEX Chief Digital Officer Karen Stroup. Find ways to break a problem down into smaller hypotheses, and find ways to quickly test and validate — or invalidate — them.
 
• Take a three-step approach: Understand the problem. Go bold in considering multiple ways to solve it. Identify your riskiest assumptions, and look for ways to test them.
 
• Especially in regulated industries like financial services, consider creating an AI governance committee to evaluate issues related to bias, transparency, privacy, and compliance.
 
• Hackathons can give people space to think about AI’s potential, along with business use cases.