Karim Lakhani is one of the world’s foremost authorities on tapping into the expertise of the crowd — whether that means employees, or all the smart people outside your walls. Lakhani is a professor at Harvard Business School and principal investigator at the Crowd Innovation Lab. He has also partnered with NASA, TopCoder, and Harvard Medical School to conduct field experiments on the design of crowd innovation programs. Lakhani was co-editor of the 2016 book “Revolutionizing Innovation: Users, Communities, and Open Innovation.”
On a recent Innovation Leader Live call, he discussed some of the cultural reasons people resist seeking solutions from the crowd; ways that crowdsourcing competitions can fail; whether to run competitions on your own or work with an existing crowdsourcing platform; and appropriate incentives.
Lakhani also discussed the importance of having an implementation plan to spell out how you’ll actually deploy good ideas and solutions that emerge from crowdsourcing initiatives. “Assume the solution arrives,” he said. “How will you test the solution, how will you implement the solution, and then what return will this give to you and your business, or in your operations?”
A lightly-edited transcript of the call is below. You can also click play to hear the complete audio, or the “down arrow” to download an MP3 file for later listening.
There are two categorical ways to think about crowds. One is to organize the external world — external innovators — in a community. One of the most successful examples of that we’ve seen has been in open source software.
Here, people self‑select and choose to work on projects that they like, and contribute their code. When I first studied open source almost two decades ago, nobody thought that this was going to have any staying power or any kind of an impact in the software industry, which was at the time dominated by Microsoft, and by Sun, and by Oracle.
Today, of course, 20 years later, we now have Facebook, and Apple, and Google, both consuming lots of open source, but also contributing back to open source. The entire cloud computing infrastructure… has been helped by open source.
The other model [for crowdsourcing] is a prize‑based contest model, where a company or an organization has a problem that needs to be solved and is willing to pay money to people from outside the company to go at it. If you are successfully able to solve the problem, the company will pay you cash in return for the IP of the solution that you have generated.
I started looking at contests in crowdsourcing types of settings in 2002, when there was a burst of activity with companies like InnoCentive and TopCoder being established for science, engineering, and computer science problems.
Our lab has now shown, in a variety of settings — Harvard Medical School, the Broad Institute for Genomics here in Cambridge, Pfizer, Scripps Research Institute, with NASA — [that the crowd is] cheaper, faster, better than the folks inside those companies.
Even when we provide these unquestionable results, there’s still a tremendous degree of resistance. I would put the resistance on [four] dimensions.
Everybody thinks that their problems are special to them and that nobody else in the world would be able to solve them. Everybody’s their own special snowflake. They go, “We’re so specialized that there’s no way anybody else could help us.”
The second is [that] there are always concerns about intellectual property and about secrecy. “If I reveal this stuff, my competitors will find out what I’m working on, and that’s going to be dangerous, or that somehow I’m going to lose my IP rights on this as well.”
In both those cases, I think they’re actually categorically wrong. Most competitors already know what other people are working on. If I was to ask anybody who’s on this call, “Tell me about your competitor. Tell me what they’re working on. Who’s working on those problems?” Most will be able to tell you those things.
[But] even if you learn that competitor X or Y is working on something, the amount of time it takes for you to replicate it…is actually non‑trivial.
This worry about competitors finding out is legitimate, but…I don’t think it stands up to scrutiny. On the IP side, what I say is, “Look, both contests and communities are based on contracts.” You establish contracts…as a sponsor of either contests or a community.
Because people volunteer their efforts, then the IP rights are based on what you feel the most comfortable about, and what you think is going to both encourage people to participate, and also allow you to capture value.
The third element is really a sense of identity. If you are a scientist, or an engineer, or a technologist in a company, you are going to be highly skeptical…”What, some kid in Estonia is going to solve my problem? How can that be?” The NASA guy would say, “Hey, I’m the rocket scientist. There aren’t too many people that can have access to microgravity. There’s no way somebody will understand these problems or will be able to solve them.”
The last thing is that the model of going to the crowd and waiting for a solution is inconsistent with the way we’ve run our own organizations.
Inside of our organization, there’s a manager who defines a problem, who says, “OK, Jenny, Jim, and Joanne are going to work on these engineering challenges. I’m going to have a schedule. I spent a lot of time and effort thinking about hiring these people. I pay them. They are going to check in with me. Hopefully, stuff will get developed.”
That top‑down process is turned upside‑down [when] we say, “I’m going to just put out a challenge, an incentive for the program, and I’m not going to care who gets to work on them. I’m not going to even limit who gets to work on these things. Somehow I’m going to get a working solution.”
The dominant paradigm is a top‑down one. A bottom‑up paradigm is actually very jarring. When you try to convince people that this is a legitimate way to organize, that causes resistance as well.
A good example of community failure is if all you care about is the benefits to you as a company, and not to the people that are participating.
Let me give you one anecdotal example of it. Facebook, when it was starting up, was literally able to get language translations done for its entire website through a community effort. They said, “If you know a language, go on and translate these pages for us,” and people did it.
And as they were scaling, they had enough contributions that they could cross‑verify the translations through machine learning and say, here’s the best translation for all the different features that they had on their website.
LinkedIn saw this and thought, “This is a great idea.” But what they said is, “You have to apply to become a language translator. We’re going to screen you.” …You are now making [it] a manager‑employee relationship. That’s not what this is. This is voluntary effort.
With just this one small tweak, there was so much outrage. This was about seven or eight years ago. People were outraged that LinkedIn was trying to get labor for free but wanted a manager‑employee relationship with these folks. It just collapsed. When you’re in a community setting, the story really has to be one of value‑sharing.
You’re going to generate a bunch of value, and the participants need to be able to share in that value as well. For example, in open source communities, a big sign of failure we often see is when the company decides to have a private process by which they will accept code. They’ll take away the transparency, and they’ll develop code through a private process.
That lack of transparency destroys value for the community members, because then they just don’t know how their efforts are going to be returned in the working product. That can lead to failure. Again, the failure in communities is that we try to manage communities like we manage our employees.
The issue is that in the community setting, we have volunteers who are working on these challenges for intrinsic motivations, because they love to do these things or they’re having lots of fun doing these things; for extrinsic motivations, because they want to use a product or demonstrate to the world that they’re good at making this product; or for social reasons, because they feel like the mission is there. Those motivations can be impacted significantly if you start to manage them like employees, even when you’re not paying them.
On the contest side, one big failure we see is that we don’t provide enough incentives for the challenge at hand. When I say cheaper, faster, better, we objectively are cheaper — almost a third or a quarter of what it might cost to be done internally, in terms of costs to create the software code that we develop through a contest. But you can’t say, “Find me a cure for cancer and it’s worth $10,000.”
You have to respect the fact that if you are [working on] big problems, you are then going to have to provide ample incentives for people to benefit from them. In some cases, you as a company will extract the vast majority of the benefits because you have some specialized assets, some specific assets, for which only that code, that solution, is going to be useful.
But you still need to be cognizant of the fact that people need to be rewarded both for the effort that they’re exerting if they win, but also as a sign of respect. …Your effort, if it’s successful, is worth something.
[If you’re] a company starting off in the prize setting, I would [recommend that you work] with these preexisting platforms like Kaggle for data science, InnoCentive for scientific problems, TopCoder for software problems, Tongal for marketing challenges.
They already have hundreds of thousands of people, or millions of people, on their platforms already that can absorb these challenges. If you are going to go on your own, and there are companies that do that, then make sure that you have the right marketing strategy to attract people to the problem.
For most companies, you’re better off using these preexisting platforms because what do the platforms do? The platforms provide the crowd. They provide a mechanism to transfer money to somebody in Estonia and transfer IP back to you, and also expertise on designing the problem as well. That is part and parcel of what the platforms provide.
Even in our work [at the Crowd Innovation Lab at Harvard and with NASA], we never decided to build our own crowd. We’ve used InnoCentive, Kaggle, and TopCoder to help us reach the folks that they already accumulated so we can get our problems solved.
We’ve been working with NASA for the past seven years now with their human exploration program. We have stuff now flying in the space station that we’ve developed through a contest platform — a food intake app for astronauts, how to best position the solar panels on the space station, how to better run their robotic arms, and so forth.
When we first started working with companies, and with NASA as well, what we noticed was that everybody wanted to pilot the project. They would say, “OK, I’m not sure this is going to work, so I’ll just give a little funding and we’ll see if it works.” Then it would work. Then they’re like, “Well, we’re not ready yet to take this new approach and bring it into our organization.”
After a year’s worth of banging our heads against this, we said to our NASA colleagues and our NASA partners, “We’re not going to help you run any challenges unless there is actually an implementation plan. Unless you know that once a working solution shows up — what will you need to do to actually implement this?”
The challenge is, “Look, I don’t want to lose face in front of all of these amazing people that are working on your project, and then they’re going to ask, ‘Hey, what’s going on?’ And we’re like, ‘Oh, sorry. They’re sitting on the cutting room floor.’”
…You need to force the sponsoring organization, the team to really say, “Is this even worth it for me? I may have the funds [to run the challenge], but if it’s not worth it for me to implement, then why am I even doing this?”
That’s a nice filter. Assume the solution arrives. How will you test the solution, how will you implement the solution, and then what return will this give to you and your business, or in your operations?
The other part is that it gets the value story straight, but it also highlights what form of solution needs to come so that, in fact, it can be dropped in with minimal changes.
There are two reasons why the crowd process works so well. One is, we simply get more shots on goal. For any problem we can get 30, 40, 50, 60 solutions. If you think about it as a normal distribution of solutions, lots of people working independently let us find the tail — those really high‑value, extreme‑value solutions — versus any one company working on a problem, [which] will just get the average value of the solution. We get more shots on goal, and we can achieve a high‑value solution this way.
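This “more shots on goal” point is an extreme‑value argument: the best of many independent attempts beats the average attempt. The sketch below is illustrative only and not from the call — modeling each solution’s quality as a standard normal draw is my assumption, and `best_of` is a hypothetical helper.

```python
import random
import statistics

random.seed(42)

def best_of(n_solvers, trials=2000):
    """Average quality of the best solution when n_solvers each make one
    independent attempt; each attempt's quality is a N(0, 1) draw."""
    return statistics.mean(
        max(random.gauss(0, 1) for _ in range(n_solvers))
        for _ in range(trials)
    )

# A single internal team lands near the average solution (quality near 0);
# a crowd of 50 independent solvers samples the upper tail of the
# distribution, so its best entry is typically more than two standard
# deviations above average.
one_team = best_of(1)
crowd_of_50 = best_of(50)
```

The gap between `one_team` and `crowd_of_50` grows (slowly) as the crowd gets larger, which is why contests that attract dozens of independent solvers can beat a single well-staffed internal effort.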
We also get diversity. We get many different types of solutions. Sometimes what happens is that the companies go, “We never thought about a solution this way. This is actually a really good idea, but we don’t have any way to implement this solution within our structure, so we now need to make a bunch of changes to our processes to be able to adopt that.”
The best practice is that [the IP is] owned by the submitter until the prize money transfers. If you give the prize money, then the IP transfers over to you.
You could say, “I get exclusive use [of the IP],” or you could say, “You can use it for these kinds of applications, [as] the submitter,” or you could say, “You can use it for whatever you want, but I get use of this solution as well.”
I’ve just done a bunch of projects for internal innovation in some organizations. Let me put on my professor hat for one second and tell you [that] what happens in an internal contest is you’re asking people to work on a project for which they’re not being paid. You’re basically asking them to create an internal public good. All the economic theory says that public goods creation needs subsidies.
I have my job. I’m working 9:00 to 5:00. I’m working 9:00 to 9:00. All of a sudden the VP of Innovation comes to me and says, “Hey, I’ve got a great idea. Work on this internal challenge.” I’m like, “What? Maybe I’ll have motivation to do that, but the company needs to offer something as well.” We’ve run field experiments inside of companies where we’ve seen that providing even an iPad goes a long way to drive participation.
That’s because you want to signal the importance and the seriousness of this, but it’s also a way to subsidize the effort that people are going to be making, and to subsidize the public good creation.
I would recommend prizes — varied amounts of cash prizes — it doesn’t have to be exorbitant, but it still has to be significant. But [you] also want “priceless prizes” as well. “Hey, if you win, you get to present your ideas to the executive committee.” “If you win, you get to have a dinner with our chairman.” “If you win, you can take 20 percent of your time and work on this project, and we’ll give you funding for that project.” There are many different ways to incentivize effort for internal things as well.
I work with our Harvard Medical School scientists to tell them that there is going to be somebody smarter than them outside the Harvard universe. They’re like, “Come on, Karim. Don’t waste our time.”
What we say is, “Hey, let’s just run the experiment. Let’s just take a reasonable problem that you guys are working on, and let’s put it out there. If it doesn’t work, then great, you were right. You were perfect. If it does work, then we can go back and reevaluate our priors, and see if this is actually going to be useful at all.”
I would recommend two things. One is to have a pilot strategy. Create the pilots, and pitch this as a complement — it complements their work, it’s not a substitute for their work.
For any given problem, if you go out to an appropriate platform, you get to see the entire universe of solutions that exist. All of a sudden, for any one problem, you are now up to speed on all the different approaches that could work.
If you get a working solution, great, awesome. If you don’t get a solution, then all of a sudden you realize that you now know all the different pathways that are not going to work. Either you come up with a better pathway, or you need to redefine the problem. That, in itself, is so useful when we’re innovating.