
DraftKings: Generative AI is Moving Faster Than Anyone Could’ve Expected

By Steven Melendez | July 12, 2023

DraftKings, the Boston-based sports betting company, has long used AI technology in taking bets and targeting advertising to consumers. Now, the company is exploring ways to use generative AI technology for routine automation and code reviews. Paul Liberman, DraftKings’ co-founder and President of Global Product and Technology, spoke to us about how the company rolls out new AI models, its approach to generative AI, and how working in the sports world impacts data engineering recruitment.

Could you walk us through how you use AI technology at DraftKings?

Paul Liberman, President of Global Product and Technology, DraftKings

We have been using AI technology for quite some time at DraftKings…

We’ve been using it for pricing our odds feeds when you go and make a bet. We have machine learning models that take in a whole bunch of data and simulate it — when you’re doing same game parlays, we’re using machine learning to understand the correlations between the data. And that has been something that we’ve been running on AI and machine learning models for a long time. We’ve done a lot of personalization — everything from the widgets and the quick links on the homepage that understand customers’ behavior and betting, to even deposit amounts and understanding what people want to deposit. Really just simplifying the customer experience using AI and machine learning models. But there’s a lot more that we can do from there.

We’ve actually had some things in our DFS [daily fantasy sports] product with machine learning since back in 2013 or 2014, organizing and ranking the contests that people would want to select much faster. There are hundreds of contests, and we’re making it a lot faster for people to find them. We’ve also been using it a lot on the marketing side, making sure that we’re retargeting our marketing appropriately to the right segment of the audience that’s going to respond. As we’ve done more work in personalization, machine learning, and AI, we’ve seen our cost of customer acquisition come down quite a bit.

Recently, with a lot of this generative AI — ChatGPT, etc. — there’s been so much more work that we’ve done on our end. [We’re] having AI do things like identify problems with our code before an engineer would. As a supplement for code reviews, we’ll leverage some AI products to see if we can identify areas or hotspots that might not be working as intended. And even generating or automating some simple procedures that previously would have taken a developer a day or two to write. We can leverage AI to make that process substantially faster. And improving things like customer service, whether that’s identifying questions, putting the right Q&A out there for customers, or even just handling them faster.

Whenever we’re deploying new machine learning models into production, we’re always testing in the background against other machine learning models that exist today.

With some of the technology that you mentioned on the bet-making side, it sounds like there’s a lot of money at stake. How do you ensure that the AI is accurate and won’t lead to any errors there?

One of the things that we’ve built out internally is a testing platform. Whenever we’re deploying new machine learning models into production, we’re always testing in the background against other machine learning models that exist today. We use platforms and technologies like Databricks that help us with model management, and sometimes those models operate silently in the background, so we see how the model would have performed in that same type of environment as our original model. Sometimes we’re able to give new models to a small percentage of our users, and we see how those models compare to other models.

First, you start out with the development and you test it and you back test it. Back testing means that we use historical data to see how the model would have performed, had it been around at that particular point in time. Then, after back testing, we do split testing, so a percentage of the users get it, and we see how it performs against the current model. And then we ramp that up over time so we get confidence on how it works. And a lot of that technology has been built internally over time to make it easy for our data scientists and software engineers to manage multiple models, test them, iterate on them, and do so really, really quickly.
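To make that rollout pattern concrete (back test against history, then split test on a small slice of traffic, then ramp), here is a minimal sketch in Python. DraftKings’ internal platform is not public, so every name below (OddsModel, backtest, split_test) and the choice of Brier score as the evaluation metric are hypothetical stand-ins:

```python
"""Sketch of a back-test -> split-test model rollout. All names and
metrics are illustrative; DraftKings' real tooling is not public."""
import random
import zlib

class OddsModel:
    """Toy pricing model: predicts a home-win probability."""
    def __init__(self, name: str, home_bias: float):
        self.name = name
        self.home_bias = home_bias

    def predict(self, event: dict) -> float:
        return min(1.0, max(0.0, 0.5 + self.home_bias + event["rating_gap"]))

def backtest(model: OddsModel, history: list[dict]) -> float:
    """Score against historical outcomes (Brier score: lower is better)."""
    return sum((model.predict(e) - e["home_won"]) ** 2 for e in history) / len(history)

def split_test(current: OddsModel, candidate: OddsModel,
               user_id: str, rollout_pct: int = 5) -> OddsModel:
    """Route a small, stable percentage of users to the candidate model.
    A stable hash means a given user always sees the same model."""
    bucket = zlib.crc32(user_id.encode()) % 100
    return candidate if bucket < rollout_pct else current

# Usage: back test first, then ramp the candidate to 5% of users.
history = [{"rating_gap": random.uniform(-0.3, 0.3),
            "home_won": random.choice([0, 1])} for _ in range(1000)]
current, candidate = OddsModel("v1", 0.02), OddsModel("v2", 0.05)
print("v1 Brier:", backtest(current, history))
print("v2 Brier:", backtest(candidate, history))
print("user u123 gets:", split_test(current, candidate, "u123").name)
```

In production, the "silent" mode Liberman mentions would log both models’ predictions side by side on live traffic before any user-facing ramp begins.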

Can you talk a little more about how AI is being used in marketing?

We’ve been doing it for ad targeting for a long time—everything from targeting ads on Facebook and other different channels to direct mail targeting, using machine learning models. Recently, we’ve started testing more copy generation with AI — even image manipulation and image generation. That’s something that we’re really leaning into right now. That’s become a lot more commonplace, and we’re testing that right now.

As you’re building these models, is it a challenge to staff your team with experts in data science and AI, or is that something else that you’ve streamlined?

I would be lying if I didn’t say it was hard to find great people, although we have managed to build a great team. And what we’ve noticed, and what we’ve learned over time, is that when you get a core of really great people, it’s actually easier to find other great people [for] that team.

One of the things that sets DraftKings apart is the sports side. And sports modeling, for people who are fans and love sports, is a really, really interesting problem set. Predicting and understanding sports, understanding the correlations, modeling it. It’s more than just writing another data science model. In many cases, this is a passion of the data science engineers who work on those problems. So when we’re able to tap into that passion for sports, the passion for understanding sports at that level, we’re able to [recruit] a lot easier…

The one other thing I would say is that at DraftKings, given the processes we’ve created, I feel like our data science engineers can have an impact fast. We’ve simplified the process of going from an idea and a model to production quite substantially. It’s actually been something that I personally have been working toward: making sure that we can get more data science engineers delivering code to production faster than in other places. So in some companies, you might build a model, and it might never see the light of day, or it might take years. But we’ve tried to really simplify that process so we can iterate much faster than other people, and make it more engaging for our data scientists so they can actually see their stuff in the wild.

….Our team is taking a step back to say, ‘How do we improve our operational efficiency? How do we deliver value to customers and shareholders faster?’

With generative AI, are you testing that in the same ways? You mentioned generating code and generating content?

We’re using generative AI right now more internally. Outside of marketing, we haven’t used generative AI externally. Our core use cases are things like developer efficiency, which is really important, and making developer testing a lot faster. We’re seeing some value there.

And we’re looking at other ways to leverage generative AI. One of the things that we’re working on is copywriting for marketing creative, and image generation for marketing creative. That is going on right now. But really across the board, our team is taking a step back to say, “How do we improve our operational efficiency? How do we deliver value to customers and shareholders faster?” There are a lot of companies out there, a lot of products, and this space is heating up quite quickly. And I’m pretty happy that we’ve gotten some good early traction, at least in those two particular areas.

As far as code, how do you make sure that the code the AI is generating does what it’s supposed to do?

Most of the work that we’re doing right now is actually more code reviews. The AI is not necessarily writing all sorts of production code for us. We’re looking at it more for, “Hey, here’s some code that we’ve written; can we review whether there might be issues that weren’t caught by our current internal processes?”
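As an illustration of what an AI-assisted review pass can look like, here is a minimal sketch. DraftKings has not disclosed which products it uses; the OpenAI Python SDK, the model name, and the diff filename below are stand-ins for whatever backend a team actually runs:

```python
"""Sketch of an AI-assisted code review pass, used as a supplement to
human review. Tooling choices here are illustrative, not DraftKings'."""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_diff(diff_text: str) -> str:
    """Ask a model to flag likely bugs and hotspots in a diff."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; any capable model works
        messages=[
            {"role": "system",
             "content": "You are a code reviewer. Flag likely bugs, race "
                        "conditions, and unhandled edge cases. Be terse."},
            {"role": "user", "content": diff_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("change.diff") as f:  # hypothetical diff to review
        print(review_diff(f.read()))
```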

The other place that we’ve used it more is for developing scripts that are not necessarily going into our production systems, but more for automating processes. In one recent case, we had to run a whole bunch of compression across different audio files, images, and other assets. It was really a one-time task. So instead of having an engineer maybe take one or two days to write a script to do it, we were able to leverage AI, test it, and iterate on that.
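A one-off batch-compression job like that is the kind of script such tools generate well. This sketch handles only the image side via Pillow (audio would need a separate library); the directory paths and quality setting are illustrative:

```python
"""Sketch of a one-time batch image-compression script, the sort of
throwaway task described above. Paths and settings are illustrative."""
from pathlib import Path
from PIL import Image

SRC, DST = Path("media/raw"), Path("media/compressed")
DST.mkdir(parents=True, exist_ok=True)

for path in SRC.rglob("*"):
    if path.suffix.lower() in {".jpg", ".jpeg", ".png"}:
        out = DST / path.with_suffix(".jpg").name
        # Re-encode as JPEG at reduced quality; convert() drops alpha.
        Image.open(path).convert("RGB").save(out, "JPEG",
                                             quality=70, optimize=True)
        print(f"{path} -> {out}")
```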

To write the scripts themselves?

To write the scripts that may be one-time-use scripts to do something. Another example that we’ve actually used a lot is linking two different software packages together. So how do we link, for example, Workday with Snowflake? How do we get those working together? AI is definitely very helpful in writing some of those scripts and procedures.
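For the Workday-to-Snowflake case, the glue script typically pulls from one system’s API and bulk-inserts into the other. The sketch below uses the official snowflake-connector-python package on the Snowflake side; the Workday report URL, credentials, table, and field names are hypothetical, since real endpoints are tenant-specific:

```python
"""Sketch of a glue script linking two SaaS systems, per the
Workday-to-Snowflake example. All endpoints and names are hypothetical."""
import requests
import snowflake.connector

# Hypothetical Workday custom-report endpoint returning JSON.
WORKDAY_URL = ("https://example.workday.com/ccx/service/"
               "customreport2/acme/headcount?format=json")

rows = requests.get(WORKDAY_URL, auth=("svc_user", "****")).json()["Report_Entry"]

conn = snowflake.connector.connect(
    account="acme-xy12345", user="LOADER", password="****",
    warehouse="LOAD_WH", database="HR", schema="RAW",
)
with conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS HEADCOUNT (dept STRING, n NUMBER)")
    cur.executemany(
        "INSERT INTO HEADCOUNT (dept, n) VALUES (%(dept)s, %(n)s)",
        [{"dept": r["department"], "n": int(r["count"])} for r in rows],
    )
conn.close()
```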

Do you have policies on who can use and test generative AI when?

We definitely have policies on what tools we use, when we use them, and how we use them. So it’s not necessarily about which individual person.

It’s not that these things are completely out there in the wild with no supervision. I would say that for the most part, they’re still operating in many cases in supervised environments.

As you mentioned earlier, you’re in a highly regulated industry. Does that affect how you roll out models in general? Do some of those need regulatory review, and what does that process look like?

A lot of our stuff is approved. Some of the stuff that we’ve talked about on AI, like the scripts to connect Workday to Snowflake, doesn’t necessarily need to go through a regulatory process.

But a lot of our sports models do go through regulatory reviews as necessary, by the different states. But generally speaking, and this is kind of what I was mentioning earlier, a lot of what we’re doing is iterating and making these things better over time, so we’re not necessarily going and saying, “Hey, go start from scratch, go build this model” every single day.

It is, “How do we make our models a little bit smarter, a little bit faster?” To give one example: for the Super Bowl, DraftKings had the highest uptime of any operator for live bets. And what uptime means is that during the Super Bowl, for 94 percent of the game, you could place a live bet in real time. Whereas with others, if there was a penalty or a flag thrown on a play, or a touchdown, they may take down their betting, so you can’t bet. And a lot of that type of uptime comes from us iterating on our live models, and understanding the probabilities better, and being able to forecast what’s going to happen at the next point in time faster. And those are iterative things that we do with our modeling and with our machine learning and AI teams to make things faster. Those go through the regulatory processes, and it depends on the states we operate in.

You mentioned there was an appeal to a lot of people to working on sports. Do you recruit targeting sports fans in particular, or how do you leverage that to find talent?

We start off with the data science, AI, and analytics. We go to universities, we go to events around these, more on the technical side, but the type of person that might talk to us naturally has more of an interest if they’re a sports fan. So I think we start off with the technology and then look for people who are interested in this space, or who like the challenges that we offer as an organization, more so than starting with sports and then looking for AI.

One example I have is Worcester Polytechnic Institute, which is also my alma mater. We work with WPI and fund a lot of their research projects across math, engineering, and computer science that focus on machine learning and AI. They have great students, and they have great professors.

[Generative AI] will fundamentally change the way we operate over the next few years.

I know we’ve covered a lot of ground, but is there anything else you wanted to point out about how you’re using AI and machine learning that we haven’t gotten to yet?

A point that resonates with me is that AI is only as good as the people behind it.

I actually do think that [generative AI] will fundamentally change the way we operate over the next few years. And it’s happening faster than I think anyone could have expected.

And I think we take a business-first approach: fast speed to market, and empowering our data science engineers to get stuff into production, test it quickly, and iterate on it. And I think we do a good job at that.

Key insights…
• The AI you develop “is only as good as the people behind it.”

• Leveraging AI can create capabilities that competitors don’t have, like speed and responsiveness with a digital product.

• AI and ML capabilities don’t appear overnight; DraftKings has built its capabilities over a decade.

• AI can create efficiencies internally, in creating scripts to solve one-time problems or connecting two systems to share data.

• Creating a solid testing regimen for new AI software is crucial, especially in regulated industries.