In October 1994, Palo Alto-based Stanford Federal Credit Union became the first business in the US to offer online banking to its customers. Twenty-five years later, these digital transactions can be completed anywhere — from the desktop at work to a smartphone app on the train.
While digital banking has become widespread, Apurva Shah says these transactions still require trust between banks and their users. Poorly designed interfaces or vague wording, however, can break that bond with customers.
He points to his time as Head of Creative Technologies at Capital One. According to Shah, the company’s online payment method initially set the default option to pay off card minimums. “It turned out people…just typically paid off the minimum debt, not realizing that would have cost implications for them,” he says. When the company shifted the language to read, “Pay off your balance,” people started making bigger payments toward what they owed.
“Capital One [as much as] they make profit from the credit, they’re very aligned with the success of their customers,” Shah says. “And when you make a choice like that in your interface, you’re signaling very strongly to the customer that ‘we are aligned with your best interest,’ and that builds trust.”
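The design decision Shah describes — which amount a payment form pre-selects — can be sketched in a few lines. This is an illustrative toy, not Capital One's actual system; the function and field names are hypothetical.

```python
# Toy sketch of the default-amount choice: pre-filling a payment form with
# the full balance rather than the minimum due. Names are hypothetical.

def default_payment(balance: float, minimum_due: float,
                    align_with_customer: bool = True) -> float:
    # Defaulting to the full balance signals alignment with the customer's
    # interest; defaulting to the minimum maximizes interest revenue but,
    # per Shah, erodes trust.
    return balance if align_with_customer else minimum_due

print(default_payment(1200.00, 35.00))  # 1200.0
```

The point of the sketch is that the "default" is itself a policy choice, and the trust signal lives in one branch of the conditional.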
Today, Shah is an adjunct professor at California College of the Arts, where he teaches a course called “Designing for Trust.” The class introduces budding designers to emerging technologies and teaches them best practices for applying trust in an increasingly digital world. Shah is also the CTO of Duality, a technology startup based in the Bay Area and Pittsburgh.
During a conversation with InnoLead, Shah highlighted three pillars of trust: alignment between customers and the company; transparency in the relationship; and user control. He also discussed the best ways innovators can integrate trust into their emerging technology projects.
How ‘Trust Points’ Work
Every action your organization takes — implicit or explicit — think of it as having trust points. And those points are either positive or negative. Now, unfortunately, it’s not a fair game. So when you do the right thing, you earn a small amount of trust points. But if you do the wrong thing…you will accrue a lot of negative points. And you may even wipe out a lot of the credit you built. Trust takes a long time to build, but can be broken in a matter of seconds. …
When missteps happen, how are you dealing with those missteps? I think the best thing you can do is not to spin it, not to try to hide it, but try to be very upfront and open about what has happened. … This is where the transparency piece comes in. It’s not just, “Hey, we are taking responsibility for what went wrong.” That’s definitely required. But then, what are you actually doing as innovators? How are you going to make it better?
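The asymmetric accounting Shah describes can be written out as a toy model. The specific point values below are illustrative assumptions, not figures from Shah — the only property that matters is that a misstep costs far more than a single good act earns, and transparency softens the hit.

```python
# Toy model of Shah's "trust points": good actions earn a small credit,
# missteps cost much more, and being upfront about a misstep reduces the
# damage. All numeric values are illustrative assumptions.

class TrustLedger:
    def __init__(self):
        self.points = 0

    def right_thing(self):
        # Doing the right thing earns a small amount of trust.
        self.points += 1

    def misstep(self, transparent: bool):
        # A misstep wipes out far more trust than one good act earns;
        # owning it openly softens the penalty.
        self.points -= 5 if transparent else 10


ledger = TrustLedger()
for _ in range(10):               # ten good interactions...
    ledger.right_thing()
ledger.misstep(transparent=False) # ...and one hidden misstep
print(ledger.points)              # 0 -- the accumulated credit is gone
```

Run with `transparent=True` instead, and half the credit survives — which is the practical argument for Shah's "be very upfront and open" advice.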
Spread Emerging Technology Skills
Innovation leaders tend to be very optimistic about technology. … However…in some ways, that also blinds us to some of the negative aspects…that can arise from deploying new technology. … One way to deal with that is to be super-conservative, and not engage actively with new technology or innovation. But that’s not the right solution either. Because, in fact…we’re making these improvements to make our customers’ lives better and our company more successful.
The solution to some extent is to be open…to voices in the conversation that may be more critical of how new technology might shape the experience. If the only voices you have in the room are technologists and engineers…you’re basically getting a monoculture in the innovation process. [You need to arm] designers [and] other leaders in the organization with [technology skills]. Even if it’s not the same deep level of new technology or emerging technologies, giving them exposure to it…will allow them to feel more confident to bring their voice to the conversation…
Why Trust Matters
Having a trust-centric approach to design can actually reduce the downsides of some of these emerging technologies. The easiest way to understand that is if you think about…trust in relationships. When we trust the other person…[and] they behave in an unexpected way…we tend to forgive the other person. … We are delegating more and more [responsibility] to machine learning systems or artificial agents — including how you should be treated, or health care decisions. We really need to start thinking about that digital trust also in the framework of interpersonal trust.
Companies have had their stock prices fall by 10 percent in one day when a data leak happens. … What could they have done? As we roll out these technologies [and] it blows up, how are we as a society going to learn to trust these products? And especially from the standpoint of a brand or a product leader, how are you going to position your brand so that it earns that trust credit? Are you thinking about that as you design your product?
Bringing In Customers
If you ask customers [questions] without prototypes, or ask them conceptual questions, like, “What do you think about machine learning?” — you’re not going to get a lot of useful information.
But if you ask the customer, “Hey, how would you feel if I took your credit card history and gave you recommendations about how you might be able to save another $10 a month?” They will be very upfront with you. … [In innovation,] we tend not to want to have a conversation with the customer. We just feel like we already know what is the right thing to do.
[We think that] if we ask the customer something, that means you’re adding friction to the experience. No, actually, you’re providing control to the customer. You’re making sure they understand that you are empathizing with them. The funny thing is…people’s favorite topic to talk about is themselves. Why are we not asking them what they feel, or what they want, or what they intend to do? … [You need a] balance between implicit assumptions or actions, and more explicit conversation.
Use Design Thinking Tools
Personas, journey maps, and service design — these are the standard tools in the design thinking toolbox. One of the things students did [at CCA] was actually layer in the level of automation. So which parts of that customer journey and customer touchpoints benefited from having more automation, versus having less automation? At certain points in that interaction, the customer cares more about control. [Sometimes,] they actually care about saving a little bit of time. Now, if you keep shoving automation consistently everywhere, you’re not creating a better experience. You’re creating an experience that is now thinner, in terms of trust.
When you take those tools and you use them with more of a trust mindset, you suddenly start seeing patterns… [and] you can start making key decisions about your product…
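The layering exercise Shah mentions can be sketched as a small data structure: each touchpoint on the journey map annotated with an automation level and whether the customer wants control there. The touchpoints, levels, and threshold below are hypothetical, not from the CCA coursework.

```python
# Hedged sketch of layering automation onto a journey map, in the spirit
# of the CCA exercise Shah describes. All touchpoints and levels are
# hypothetical; automation level runs 0 (fully manual) to 3 (fully automated).

journey = [
    # (touchpoint, automation level, does the customer want control here?)
    ("browse products",    3, False),  # automation saves time; low stakes
    ("set payment amount", 1, True),   # customer cares about control
    ("dispute a charge",   0, True),   # high stakes: keep a human in the loop
    ("get a receipt",      3, False),
]

# Flag touchpoints where heavy automation collides with a desire for
# control -- the places where "shoving automation everywhere" thins trust.
risky = [step for step, level, wants_control in journey
         if level >= 2 and wants_control]
print(risky)  # [] for this map; any hit marks a trust-eroding choice
```

Seeing the automation column next to the control column is what surfaces the patterns Shah refers to — the mismatches jump out as rows, not intuitions.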
Nudging the User vs. Manipulating Them
One of the most powerful things you can do is nudge the user into better behaviors. But what is the difference between nudging them into better behaviors, and potentially manipulating them for your own ends? [It’s a] very thin, ethical line. … I’m not saying that you will always work for the user’s interest. You will sometimes do things for your interest as well. In that case, you have to be more upfront about what your intentions are.
Communicating with the user is very important. … Imagine you [start dating someone]. When they first meet you, they look at your credit card statement, they look at your health record, and they look at certain documents about you. They don’t talk to you at all. And they go like, “Okay, great. I know what you need. … From now on, I just start making decisions on your behalf. And I know it’ll be great. Trust me, it’s gonna be awesome for you.”
Think about a Venn diagram where one circle is…the explicit intentions, and the other circle is the implicit actions. At the intersection of your implicit actions and your explicit intentions is the area where your behaviors are aligned with what your intentions are.
As an example, you might say, “Hey, I’m actually really interested in saving for a house.” But your actions are that you’re basically blowing your money on dinners every night. And so your implicit actions are different than explicitly what you’re saying about yourself. … [If a user’s actions and intentions are aligned], there is an opportunity to communicate and tell the user, “Hey, you’re doing this right… You actually said you want to save, and we noticed that in the last three months, you’ve cut down how often you eat out.” … Now, where you’ve got implicit actions that are misaligned with your explicit intentions, it’s an opportunity to improve [and] go back to the user, have a conversation, [and say], “We are seeing these behaviors that we think are misaligned with where you want to go. Would you like us to nudge you or help you or intervene in this in any way?” [Or] the user has certain intentions, but we are not seeing any implicit actions. And that’s an opportunity for potentially improving the experience.
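The three cases Shah walks through — aligned, misaligned, and intent without action — map naturally onto a small decision function. The category names and responses below are illustrative labels, not a real product API.

```python
# Sketch of Shah's intentions-vs-actions framing: compare observed behavior
# against the user's stated intention and pick a response. The labels and
# return values are illustrative assumptions.

def trust_response(action_vs_intent: str) -> str:
    """action_vs_intent is one of: "aligned", "contrary", "none"."""
    if action_vs_intent == "aligned":
        # Actions match stated intent: reinforce ("you said you want to
        # save, and you've cut down on eating out").
        return "reinforce"
    if action_vs_intent == "contrary":
        # Actions contradict stated intent: open a conversation and ask
        # whether the user wants a nudge or an intervention.
        return "offer-nudge"
    # Stated intent but no observable action: rethink the experience.
    return "improve-experience"

print(trust_response("contrary"))  # offer-nudge
```

The key design property — echoing Shah's nudge-vs-manipulation distinction — is that the "contrary" branch asks for consent rather than acting silently on the user's behalf.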