
The Pursuit of AI ROI at Dayforce, an HR Software Provider

By Nicole Lewis, Contributing Writer | June 19, 2025

David Lloyd, Chief AI Officer at Dayforce, is on the hunt for ways to translate data about human capital into actionable, AI-driven solutions. 

That can be a challenge, says Lloyd — especially when you consider that “employee data is the most sensitive data outside your health care data you could ever imagine.”

Headquartered in Minneapolis, Dayforce sells human capital management software used by nearly 7,000 organizations to manage payroll, benefits, talent, workforce planning, and more. Dayforce, with $1.76 billion in revenue, was known as Ceridian until 2024.

David Lloyd, CAIO, Dayforce

Data related to recruitment, talent management, payroll, benefits and compensation, and employee performance presents a wealth of opportunities for Lloyd and his team. We spoke with him recently about both challenges and opportunities of deploying AI across the company — and among companies around the globe using Dayforce’s AI-driven solutions. Lloyd serves as both Chief AI Officer and Chief Data Officer.

• • •

Why did Dayforce create a Chief AI Officer role?

AI is going to become a predominant technology force across organizations. My experience in AI goes back over 20 years… I was the Chief Data Officer for all of Dayforce as a starting point four and a half years ago. The growth of the CAIO office is really a recognition that when you start bringing in what would have been prediction before generative AI — which is machine learning — you start bringing in the importance of data, deep analytics, and things of that nature… they converge around artificial intelligence.

AI cannot exist without high-quality data… and those other components need the whole package, which for me is what a Chief AI Officer is. Establishing this position is probably an acknowledgment from organizations that are sophisticated in their use of data, analytics, and predictions.

My role is a bit different… I focus both on our products — teams that report to me build our AI products for our customers, nearly 7,000 of them — and I work closely with our Chief Digital Officer, Carrie Rasmussen. We vet all the AI requests that come through her organization through the same governance and AI process… We look at things such as data privacy, use on a geographic basis, and how data is consumed. We make sure that ethically we are using customer data appropriately — just as we do on the product side — and then we build the enabling products on top of it.

Who do you report to at Dayforce?

I report to Joe Korngiebel, our Chief Strategy, Product, and Technology Officer, who leads approximately 2,400 people in our strategy, product, and technology organization. Joe reports directly to David Ossip, our chairman and CEO.

What’s a standout AI use case at Dayforce?

One of the most prolific examples is the use of our Dayforce AI assistant. We use the same software our customers use to drive our payroll and employee engagement. This enables our HR organization to support approximately 9,610 employees worldwide, because employees are interacting with our Dayforce AI assistant to get answers to questions like: Why did my paycheck change from one week to the next? What is our dental benefits plan? And it knows that you live in Europe versus the U.S. or Canada.

It could be something as simple as: What are the wellness days available to me in the company?… Something where it spawns one of our agents to say, ‘Hey, I’d like to take the last Friday off in June and I want to use it as a volunteer day.’ Automatically, our AI assistant understands the intent and engages an [AI] agent to handle that whole business process through to approval.

Those are just some examples from that one approach. Another example — we just finished our engagement survey. We had thousands and thousands of comments, which are just unstructured text… We wrapped it up about a week ago and had our results out in a day. Why? Because of our capability to take that data, bring it together in themes and intents, organize it, analyze the sentiment, call out key concepts… then our HR group uses that data.

One of the taglines we use frequently is: Do the work you were meant to do. And that does not mean sitting down trying to go through 7,000 comments — figuring out which ones are about our values, which ones are about our flexible work programs. Let the systems we’ve built through AI capability — and generative AI capability — solve that data problem, saving probably 100 hours of work, and let the HR team, which is lean, focus on the truly cerebral work: pulling nuance out of that information and using it to create the actual plans to drive further employee engagement.

…What has the ROI been on your use of Microsoft Excel over the last five years? I don’t think anybody knows.

How do you measure AI ROI?

First of all, I think it’s a great question, because I don’t think most organizations start off being deliberate about ROI. A great example: I could ask most companies, what has the ROI been on your use of Microsoft Excel over the last five years? I don’t think anybody knows. I think we know that if we had started that process early, we could have established some key performance indicators to measure it.

One of the things we look at for our organization is efficiency and productivity, specifically around concepts like deflection. If I would normally ask a question of HR, but I can get an answer from the AI assistant and never make that call, that’s a deflection. There’s also the ability to look beyond deflection at a very complex problem: 70 percent of it gets resolved by talking to our AI assistant… then it gets stuck… and when it reaches HR, they are dealing with 30 percent of the problem — probably the more cerebral part — but they are able to focus on that, so there’s great impact from that standpoint.
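Lloyd doesn’t spell out a formula, but a deflection metric is simple to compute once queries are tagged by how they were resolved. A minimal sketch in Python, with hypothetical field names, not Dayforce’s actual measurement:

```python
# Minimal sketch of a "deflection rate" metric — illustrative only, not
# Dayforce's actual formula. Assumes a log of HR queries tagged with
# whether the AI assistant fully resolved them.
from dataclasses import dataclass

@dataclass
class HRQuery:
    question: str
    resolved_by_assistant: bool  # True if the query never reached a human

def deflection_rate(queries: list[HRQuery]) -> float:
    """Share of queries the assistant resolved without reaching HR."""
    if not queries:
        return 0.0
    return sum(q.resolved_by_assistant for q in queries) / len(queries)

# Example: 7 of 10 queries never reach HR -> 70% deflection, leaving HR
# with the remaining, more complex 30 percent Lloyd describes.
log = [HRQuery("Why did my paycheck change?", True)] * 7 + \
      [HRQuery("Complex leave dispute", False)] * 3
print(f"Deflection rate: {deflection_rate(log):.0%}")  # Deflection rate: 70%
```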

When we look at something like engagement surveys, that’s very easy to measure because we know the amount of time it takes our HR organization to process. Our last major engagement survey was almost 10,000 comments. We know it took over 100 hours to bring those comments together, sort them, cluster them into logical groups, then assess each of those… so we are very deliberate about measuring those things.

Even something as simple as a time-away-from-work request — you go in, get the form, complete the form, submit the form, wait for approval — usually takes a couple of minutes, maybe two or three. That doesn’t include approval time. We can do it in 30 seconds — literally: here’s what I want to take off, I want it as a vacation day — and the next answer from our assistant is: yes, I’ve already sent that off for approval, and you’re done.

Now what’s really good about this — and you can measure it — is engagement from the employees, or employee satisfaction. It goes back to when you are phoning your credit card company or your bank — how much do you like to sit on hold? No one does. So think about that with your HR team. I’m sending an email, or a message in Slack or Teams, or I’m picking up the phone. That latency, that time, really impacts the employee experience. The fact that we can deliver one right answer, or help them through a more complex problem, actually helps increase employee satisfaction because you are not spending time on hold. You are one and done.

This is key for our [customers in retail, healthcare, and other industries that have employees] who are not spending a lot of time at a computer. They are on the floor, helping patients, servicing customers. They can pick up the mobile [app] and say, “Hey, I want to take next Wednesday off for a vacation day” — and be done. That’s great. Everybody wins.

Is it harder to measure ROI when it’s based on time savings?

No, actually I wouldn’t specifically agree with that. You can understand the cost of a particular activity, which helps inform ROI if that activity is reallocated. For example, let’s say an HR team member is paid $40 per hour. If a repetitive task, such as classifying and reviewing engagement survey comments, takes two weeks to complete — or 80 hours of work — then the cost of that activity is $3,200. Reallocating that activity frees up $3,200 of that team member’s time to be used for another task, potentially one with larger ROI.
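The arithmetic Lloyd describes can be captured directly; a tiny sketch using the interview’s example figures, with a hypothetical helper name:

```python
# Cost-of-activity arithmetic from the example above (illustrative only).
def reallocated_value(hourly_rate: float, hours: float) -> float:
    """Dollar value of time freed when a repetitive task is automated."""
    return hourly_rate * hours

# An HR team member at $40/hour spending 80 hours (two weeks) classifying
# survey comments represents $3,200 of time that can be reallocated.
print(f"${reallocated_value(40, 80):,.0f}")  # $3,200
```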

Has Dayforce saved significant money with generative AI?

On an annualized basis, hundreds of thousands of dollars. I think you need to frame this in two different ways. There’s the potential for savings — time for example — and what that time represents — true dollars — or the ability to focus on additional activities that drive revenue or further the business in ways you may not have had time for. 

How do you prioritize where to deploy AI?

Sometimes I feel like it’s all the barbarians around the gate with their swords — because everybody wants AI tools. We take a business case approach. The first thing I would recommend to anybody is: you should have a portfolio of different projects, some simpler and some more complex.

We have a single intake method for any idea across the company, and any of our 9,610 employees can submit ideas. That’s one method of intake — whether it’s buying the latest Adobe software for the PR department, or looking at Gong for our sales team, or our engineers looking at GitHub Pro. We take all of those in a single intake.

…Let’s not complicate it. Let’s use AI where it is appropriate.

The first thing we do is make sure that AI is actually necessary to [address] that business problem. Why? Because most of the time — it is not. So let’s not complicate it. Let’s use AI where it is appropriate.

Do we have the data to make that AI capability come to life — or can we synthesize it or buy it? Then comes the key question it’s really important to answer: what are the regulatory or privacy issues? We treat those as the highest bar for our employees and our customers. If it passes those questions — which we usually clear in a day or two — then we can get on to the business justification.

From there, our departments each focus on their own particular areas. Our Chief Digital Officer, Carrie, looks across at how we use large language models across the organization. We don’t use a lot of commercial ones — we use open-source models so we can host and control them, because we want to be careful with customer data.

She can look horizontally, whereas each business unit leader is accountable for assessing what types of applications they’re seeing — Salesforce, Gong, Lenovo… these different ones you can buy that help drive their business goals. So that’s the two intersection points of how we look at it. From there, it’s the business case that drives it.

The case studies that you spoke about earlier, are they using traditional AI or generative AI?

A lot of the case studies in sales and marketing and engineering are arguably a combination of the two, but they are primarily generative AI.

When generative AI was introduced, a lot of companies restricted or even banned their employees from using generative AI applications because they might innocently enter sensitive corporate information into their queries. How did you deal with that?

We dealt with it differently than most. Employee data is the most sensitive data outside your health care data you could ever imagine… so we have to have a level of care to the privacy and security that you would expect from a compliance leader.

We deployed models internally ourselves. Where a lot of people might use OpenAI’s models, or Anthropic’s models, or other foundational generative large language models, we actually use models like Mistral’s and now Llama… we run them on our own cloud infrastructure, but they’re not connected to those companies. We can protect that data in a far different way.
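Lloyd doesn’t describe the serving stack, but as a rough illustration of what self-hosting an open-weight model can look like — here using Hugging Face Transformers and a Mistral checkpoint chosen for the example, not confirmed as Dayforce’s setup:

```python
# Rough illustration of self-hosting an open-weight model with Hugging Face
# Transformers. The model choice and prompt are assumptions for the example,
# not Dayforce's actual stack.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.3"  # any open-weight checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Prompts and outputs stay on infrastructure you control; nothing is sent to a
# third-party API, and the base model is not trained on the data it sees here.
prompt = "Summarize our dental benefits policy for a new employee."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```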

For our customers, when we deploy these models, they are never trained on their data and their data never stays in those models. Yet we’ve built sophisticated solutions that allow customers to benefit from generative capability — without the risks of training on their data.

Corporately, we’re very careful about the models we use. Some of the models are external — in marketing, for example, if they’re writing new content or things of that nature — and there we need employees with strong AI literacy. They understand they’re putting content into a system that we are told is private. It’s not our customer data — it may be used to write copy or things like that.

In other cases, we run models on our infrastructure, where the only people who can see the results of what you asked, and what it applies to, would be our company — and that’s it. We are very careful about it. We cannot solve all the problems, but we can use commercial software and what we build ourselves to manage risk accordingly.

How do you view AI platforms like OpenAI, Claude, Copilot, and Vertex?

First of all, we’ve actually built quite a reputation for using small large language models. They return results faster and are purpose-built for what we need. We use Mistral — a French company that makes open-source models — and we use their smaller models. What’s great about that is much less power consumption and compute consumption, so far more friendly to the environment… but again, purpose-built for what we need.

In general terms — you mentioned Anthropic, Microsoft, Google, Meta, OpenAI — it seems every day one model is passing another, passing another. I think very highly of the work Anthropic is doing. They’ve taken a very deliberate approach to privacy and building ethically. I think they’ve got a very strong offering. That’s not to say the others don’t — Cohere is another one that comes to mind.

Our tendency right now is toward open-source models because we can control them, and we have the sophistication to do it.

Our tendency right now is toward open-source models because we can control them, and we have the sophistication to do it. I think you’re going to see other organizations potentially move that way as well. Meta’s Llama is a very strong, capable model — and it’s now open source. You can use it commercially. So I think there’s a lot of opportunity there.

At the same time, there’s going to be this leapfrogging constantly — suddenly you hear about DeepSeek. One is always passing another. That’s the reality as this continues to evolve — not just over the next couple of years, but on the technology path forward.

I also think you’re going to see substantial changes. Large language models are the thing now… but two or three years from now, we may be looking at something very different in how it solves problems. We’re on an evolutionary path with foundational models. I don’t think picking any one of them is a terrible thing — just make sure you do your due diligence about how they use your data, how they store your data… be really, really picky about that.

How can AI help with siloed and unstructured HR data?

I think we have been fortunate to be in a blessed position that way. Our Dayforce products… just to give you a point to think about… are a single data model — both highly structured data and some unstructured data as well — but it’s in one place. The trust, the efficacy, the quality is very, very high… because we don’t have to pull data in from many different places, maybe transform it, and then bring it together. It already starts its life that way.

That’s how Dayforce was built. Probably one of the most powerful things about our system is that single source of truth for data — from a quality point of view — that you can trust. So that makes it much easier for us when we lay our tools on top of it, because you already trust it.

To your point — the bigger challenge for HR systems out there is that many organizations have 12 different systems: one for payroll, one for workforce management, one for recruiting, one for talent… so you’ve created a problem right there. Now you have 12 data islands — or maybe deserts. Then you have to bring those islands together.

That whole process — move the data, transform the data into a format that works with the other pieces, and create the model — introduces latency… the likelihood that you are missing or have incomplete data… and the potential that the data can’t be used effectively together because of that.

I think that’s the problem HR is in. If there is bad data or data inconsistencies, AI will fail — full stop. That hasn’t changed in 50 years — whether it’s AI or not.

The best thing an HR organization can do is pick those projects and really ensure that the data you’re going to place that AI capability on top of is complete, trustworthy, and high quality. If you don’t do that, then your organization will never have trust in the decision-making or the use of that data — through AI or any other mechanism. That’s job number one.

Many companies have shut down their DEI initiatives. Without DEI data, how do you prevent bias in AI?

Wow, that is a massive question to unpack. Let me figure out where I’m going to start on that.

What I would say is, when I look at Dayforce as a software platform, the data we would ingest or bring into the system — through the performance review process, through the compensation process or things of that nature — we will always be collecting that data. You’ll see where I’m going in a minute. That includes concepts around gender identification, ethnicity questions, and things of that nature. That data is available to be collected by the platform constantly.

Why I say that is DE&I is about the willingness of the organization — the people — to use the data that is in the system, or surfaced out of the systems, in the decision-making process. So the bias will be more in the human’s hands than in the AI system’s hands, because we are still bringing that data in and making it available in a way that allows for that assessment.

That being said, there are very specific ways of analyzing [DEI] data from an audit point of view. In our talent product, when someone submits a résumé, our product should not be adding or bringing bias into the process — or should be limiting it as much as possible.

There are different techniques we apply… general things: remove age, remove ethnicity, remove name, remove school names — because you may have gone to a women’s college, and I may not have. So it does things like that to remove that type of data — but it does not remove it completely.

Once that résumé comes in, the data is still there. But in positioning that set of potential candidates, those considerations are removed — so they aren’t reflected in the way the human in the loop reviews them. That’s where the bias usually starts — because the data came from humans.
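As an illustration of the kind of field-level masking Lloyd describes — not Dayforce’s implementation, and with hypothetical field names — the reviewer-facing view can simply omit bias-prone attributes while the underlying record stays intact:

```python
# Illustrative masking of bias-prone fields before a human review
# (hypothetical field names; not Dayforce's implementation).
SENSITIVE_FIELDS = {"name", "age", "ethnicity", "school_names"}

def redact_for_review(candidate: dict) -> dict:
    """Copy of the candidate record with bias-prone fields removed.
    The full record is retained elsewhere; only the reviewer's view changes."""
    return {k: v for k, v in candidate.items() if k not in SENSITIVE_FIELDS}

candidate = {
    "name": "Jordan Smith",
    "age": 42,
    "school_names": ["Wellesley College"],
    "skills": ["payroll compliance", "SQL"],
    "years_experience": 12,
}
print(redact_for_review(candidate))
# {'skills': ['payroll compliance', 'SQL'], 'years_experience': 12}
```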

Now, there are ways to test for this. We do this in two stages. First, our end-to-end platform — which benefits both our corporate side and our customers — has been through audits of all our AI governance, privacy, and security processes.

We actually look for things in the way our models behave. Let’s say we have a model that’s predicting whether a group is high turnover — we watch how that changes over time. It’s called drift. We constantly look at our AI infrastructure and models for drift. Even in generative large language models, we purposely set aside training data that’s consistently used — whether a new model is trained or we’re checking how an existing model responds — to see if it’s responding the same way now as it did two months ago.
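Drift checks of the kind Lloyd describes amount to scoring the same held-out records over time and alerting when predictions move. A minimal sketch, with an assumed metric and threshold rather than Dayforce’s monitoring pipeline:

```python
# Minimal drift check against a fixed holdout set (illustrative; the metric
# and threshold are assumptions, not Dayforce's monitoring pipeline).
def prediction_drift(baseline: list[float], current: list[float]) -> float:
    """Mean absolute change in predictions on the same held-out records."""
    return sum(abs(b - c) for b, c in zip(baseline, current)) / len(baseline)

# Turnover-risk scores for the same holdout group, two months apart.
baseline_scores = [0.12, 0.80, 0.45, 0.30]
current_scores  = [0.20, 0.50, 0.60, 0.40]

DRIFT_THRESHOLD = 0.10  # hypothetical alerting threshold
drift = prediction_drift(baseline_scores, current_scores)
if drift > DRIFT_THRESHOLD:
    print(f"Drift alert: mean change {drift:.2f} exceeds {DRIFT_THRESHOLD}")
else:
    print(f"No drift detected (mean change {drift:.2f})")
```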

We look for changes in predictions over time. There’s also an important concept called adverse impact. We use a third-party audit firm to look at our talent product and the decisions it has supported — to see if it’s introducing adverse impact.
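One standard way auditors quantify adverse impact is the four-fifths rule: compare each group’s selection rate against the highest-rate group and flag ratios below 0.80. A minimal sketch of that heuristic — not necessarily the method Dayforce’s audit firm uses:

```python
# Four-fifths-rule check for adverse impact (a common auditing heuristic;
# not necessarily the method used by Dayforce's third-party auditor).
def impact_ratio(selected_a: int, total_a: int,
                 selected_b: int, total_b: int) -> float:
    """Group A's selection rate divided by group B's (B = highest-rate group)."""
    return (selected_a / total_a) / (selected_b / total_b)

# Example: group A advanced 18 of 60 candidates, group B 30 of 75.
ratio = impact_ratio(18, 60, 30, 75)
print(f"Impact ratio: {ratio:.2f}")  # 0.75 -> below 0.80, flag for review
```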

We put ourselves through that process constantly. That’s how you build ethical AI that is principled and secure.

What AI partnerships do you have — with startups or universities?

We are working with the Living Wage Institute right now, looking at what the living wage is across the U.S., because we have so many employees on our platform — all with the permission of our customers; we are very clear about that. We also partner with research groups, typically nonprofits and universities, combining their data with our de-identified, aggregated data to look at trends.

The one we are working on now is looking at retirement savings across the United States and whether it’s adequate. We do a lot of what I call thought leadership work with universities and research areas, so we can share insights with our customers. That sometimes helps us improve our products — like building a new dashboard that focuses on living wage data by geography.

We also have very close partnerships. We work with Microsoft. We work a lot with the startup community. But we’re careful about that. We look for established partnerships. In AI, there is a litany of startups, but many are not necessarily doing all the things they should with data. So we’re fairly limited, especially because it all has to go through our governance process.

We don’t actively work with a lot of small startups using our data. We will work with a startup if they want to partner around a specific service — but even that goes under intense scrutiny. It’s usually more mature startups — Series B or further along — that we partner with differently because they have a different maturity path.

We also work closely with our consulting partners — those that use our platform or help customers implement our technology.

What’s the role of your AI Innovation Lab?

 Our lab is there for a very specific reason. Most AI ideas could fail. But I think most people have the attitude that everything works. The job of a lab is not to productize something — although we consult and support our corporate ideation. The job of the lab, specifically for our product side, is to bring ideas into the lab for two to three weeks with a group interested in exploring whether it’s possible.

This is to answer the question: Can we? Should we — that’s a different question.

The lab has individuals with expertise in all areas of software development, including AI and data science. This group partners with two or three people who come in to test whether the idea is valid and has legs — can it actually be done and solved? If it can’t, we put it back into our AI process to make sure we’re doing all the right things ethically — and ask the tough questions.

The lab is there to determine, concretely, whether something succeeds or fails. That’s a big difference from people who say, “Everything works in AI.” No, it doesn’t.

We use the lab to litmus-test ideas — so the business can decide whether to buy software or implement an idea. 

We use the lab to litmus-test ideas — so the business can decide whether to buy software or implement an idea. When large language models came out, we worked with them — using OpenAI, for example — just for delivering support information. It wasn’t using customer data in a compromising way. We worked closely to prove what could be done.

Some of the leading work we’re doing now involves using our AI assistant to answer any question about customer data — almost without a user interface. Just: I need to know where everybody is supposed to be on the floor right now… and the system responds. A manager can say, “It looks like Allison hasn’t shown up. Can we call in someone to replace her?”

That kind of work — with agents and capabilities like that — is what we use the AI Innovation Lab for.
