Greg Kihlström

Using qualitative customer data well

The following was transcribed from a recent interview on The Agile Brand with Greg Kihlström podcast. 



Listen to the Episode

Prefer to listen? Click play below to listen to the episode.

Today we’re going to talk about the value of qualitative customer data, and how to use it well. To help me discuss this topic, I’d like to welcome Daniel Erickson, Founder & CEO at Viable, an AI analytics tool that enables businesses to instantly access and act on valuable insights from customer feedback.

[Greg Kihlstrom] There is lots of talk about AI these days in a number of areas, so looking forward to talking about this aspect with you. So we're going to talk about qualitative customer data and how to use these insights most effectively. So let's start with a little background on how qualitative and quantitative data work, both together and separately. How would you characterize the relationship between quantitative and qualitative customer feedback?

[Daniel Erickson, Viable] Definitely. So, first, let's do some definitions here. Quantitative data are things like app usage metrics, actual ratings on the app store, any sort of sales pipeline numbers. Basically, if it's a hard number, that's the quantitative stuff. On the qualitative side, it's unstructured feedback. It's things like open-ended survey responses and app store reviews and social media mentions and help desk tickets and sales call transcripts. So these are all different ways of getting feedback. Generally speaking, quantitative feedback will tell you what your users are doing, but it can't tell you why they're doing it. That's where qualitative feedback comes in: it helps you understand the “why” behind the actions that your customers are taking, the experiences that they're having, and their ratings for those things. So, to characterize these things separately, we've got quantitative data that tells you the “what” and you've got qualitative data that tells you the “why.” And when you marry those two things together, you can get the full picture of what your customers think about your product or service.
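
To make the distinction concrete, here is a minimal sketch of a feedback record that carries both halves; the field names are illustrative, not any particular tool's schema:

```python
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    source: str           # where it came from, e.g. "nps_survey"
    score: float | None   # the quantitative "what": a rating, an NPS value
    comment: str | None   # the qualitative "why": the customer's own words

records = [
    FeedbackRecord("nps_survey", 9, "Great onboarding, but checkout was buggy."),
    FeedbackRecord("app_store", 2, "Crashes every time I open the cart."),
]

for r in records:
    # The score tells you what happened; the comment tells you why.
    print(f"[{r.source}] score={r.score} why={r.comment!r}")
```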

You mentioned some of the sources of quantitative. What are some of the sources of qualitative feedback, knowing that, again, we'll talk a little bit more about this in a minute, but knowing that these can be aggregated and used by your tool. But what are some of the sources of qualitative feedback that you can pull from?

Yeah, absolutely. The most common source of qualitative feedback might be surprising, but it is help desk tickets. It's your support queue, so your Zendesk, your Intercom, your Front, all of these large companies that help you on the help desk side. We plug into all of those. On top of that, there are things like call transcripts from Gong or Chorus. There are app store ratings, so in the iOS App Store and the Android Play Store. And there are surveys on top of that: NPS, CSAT, any sort of PMF (product-market fit) surveys you're doing, or any sort of targeted surveys as well. Then in-app feedback, reviews on Amazon and Walmart. Basically, all of these are places where your customers are talking to you or about you. And it really helps to aggregate all of those things into one place so that you can get a bird's-eye view of the whole thing.
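
As an illustration of that aggregation step, here is a minimal sketch that normalizes a few of the sources Daniel lists into one shape; the source names and field paths are hypothetical, not Viable's actual integrations:

```python
# Normalize feedback from several channels into one shape so it can be
# analyzed together. Source names and field paths are illustrative.
def normalize(source: str, raw: dict) -> dict:
    extract = {
        "zendesk": lambda r: r["ticket"]["description"],
        "app_store": lambda r: r["review_text"],
        "nps_survey": lambda r: r["open_ended_response"],
    }
    return {"source": source, "text": extract[source](raw)}

feedback = [
    normalize("zendesk", {"ticket": {"description": "Can't reset my password."}}),
    normalize("app_store", {"review_text": "Love the new dashboard!"}),
    normalize("nps_survey", {"open_ended_response": "Support was slow this week."}),
]

for item in feedback:
    print(item["source"], "->", item["text"])
```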

You mentioned quite a few different types of customer feedback. But what are they overlooking in terms of utilizing this effectively, not just seeing what their rating is but really utilizing it to make changes or make improvements or whatnot?

Most companies will look at things like what their average rating is on the app stores or what their current NPS score is, but all of those platforms also allow customers to write in reviews and explanations for why they gave the rating that they did. Right now, lots of companies use those top-line graphs, and they're like, “OK, yeah, cool, our NPS is growing,” or “Our iOS app is dropping in ratings.” But it can be really tough for teams to understand why those things are happening. So it's pretty easy to understand what your current customer sentiment looks like overall, but when you're digging in to understand why your NPS score is dropping, or why your app store rating is dropping, it's pretty tough to do that without reading through every response that you got through those qualitative data channels.

I think that's one of my criticisms of surveying, or over-surveying in general: often they're lagging indicators, and the qualitative stuff is really what's required sometimes to do the forensics, you know, to figure out what's going on. So you're saying even sometimes there's a good NPS rating, let's say, but someone puts something critical in it. Can you use all of that information, as long as it's in one of those open text fields or whatever, and get a better picture of what may be going right or wrong?

Yeah, exactly. So you might have a really great NPS score overall, and somebody may even give you, like, a 9 on NPS, but within the comment itself, they'll say things like “I really loved the onboarding that I had with John, but my experience at checkout was a little buggy,” right? They might have given you a 9 overall, so it would reflect very positively in the NPS score itself, but there was some extra context in that comment that got missed. If you're not reading through and reviewing every single one of these, sorting them into multiple different buckets of pain points and features and different personas, then you're not getting the full benefit of collecting all of this data. And it's really, really tough to do if it's just humans reading through it. Many, many companies are spending dozens of hours a week just reading this stuff. And it's maybe not the best use of that time if computers can start doing it for us.
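
To show how one comment can land in several buckets, here is a minimal sketch; simple keyword matching stands in for the LLM-based analysis an actual tool would use, and the bucket names are illustrative:

```python
# One comment, several buckets: keyword matching stands in here for the
# LLM-based topic extraction an actual tool would use.
BUCKETS = {
    "compliment": ("loved", "great", "amazing"),
    "complaint": ("buggy", "slow", "broken"),
}

def bucket_comment(comment: str) -> list[str]:
    text = comment.lower()
    return [name for name, words in BUCKETS.items()
            if any(w in text for w in words)]

comment = "I really loved the onboarding with John, but checkout was a little buggy."
# A 9/10 NPS score alone would hide the complaint entirely.
print(bucket_comment(comment))  # ['compliment', 'complaint']
```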

Along those lines, whether it's a human or a machine, how does a brand determine whether they should accept or reject customer feedback as something to pay attention to? Is it a threshold? You know, what is it that really helps a brand determine that they should be paying attention to it?

Yeah, so first off I would say that you should probably be paying attention to all of the customer feedback that you're getting in. That said, there is some feedback that's a lot more valuable than others. And what that boils down to is actually the persona of the person that is submitting that feedback. So it's all that extra metadata around the person that you should care about, so things like, “Is this from my free tier or is it from my enterprise tier? Is this in a market that we really care about or is it in one of the markets that we don't really care about?” So it's really defining what that sort of ideal customer profile looks like for you and then really, really paying attention to the feedback that you get from people who match that profile.
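
A minimal sketch of that persona weighting might look like the following; the tiers, markets, and multipliers are all hypothetical:

```python
# Weight feedback by how closely the submitter matches the ideal customer
# profile. Tiers, markets, and multipliers are hypothetical.
ICP = {"tier": "enterprise", "markets": {"US", "UK"}}

def priority(item: dict) -> float:
    weight = 1.0
    if item["tier"] == ICP["tier"]:
        weight *= 3.0   # feedback from the target tier counts more
    if item["market"] in ICP["markets"]:
        weight *= 2.0   # feedback from core markets counts more
    return weight

inbox = [
    {"text": "We need SSO support.", "tier": "enterprise", "market": "US"},
    {"text": "Please add dark mode.", "tier": "free", "market": "BR"},
]

for item in sorted(inbox, key=priority, reverse=True):
    print(f"{priority(item):.1f}x {item['text']}")
```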

You've touched on this already, but I want to talk a little more specifically about the AI component and how that ties into customer feedback. So your platform, Viable, as you mentioned at the top of the show, utilizes AI, helps analyze customer feedback. Can you talk, a little bit, about how this works? Because you mentioned several different types of sources that it's coming from. How is this effective and why do you think that AI is the right fit for this type of work?

Absolutely. So, yeah, Viable aggregates data from all of those sources that I just listed. We've got native integrations with all of those different platforms. And what it does is it actually pulls in the data, in real time, as conversations are happening in these platforms, and, in real time, enriches those conversations. So say you've got a chat that's happening in Intercom. As soon as that chat ends, it gets sent over to Viable. Viable then takes a look at the transcript for that chat and identifies all the different topics that the customer was talking about. So it will pick out any bugs they reported, any complaints they have, any feature requests they might have asked for, any compliments they may have given you, the service, or the product itself, and identify all of those different topics within the data set there. Now, it's doing this for every single conversation that your team is having with your customers, and actually every survey response that's coming in, every review that you receive. All of those things are being enriched in real time. Now that we've got all that enriched data in one place, we can then do a periodic report on that entire data set. So our customers basically grab all their data from all those different platforms, pull it in, and then our system goes through and, on a weekly basis, writes up an AI-generated report for our customers to easily find the top complaints, the top compliments, the top requests, and the top questions that their customers have about their service. We do this using GPT-4 and other large language models to really deeply identify the context of the data coming in. We've actually done a head-to-head comparison between us and a manual process, and we generally will come up with about three to five different topics for these conversations, whereas most humans would sort them into just one or two. So we get a lot more granular in our insights because of that. We're now producing what I would call superhuman reports on these qualitative data sets.
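
As a rough illustration of the enrichment step described above, here is a minimal sketch that asks a large language model to pull topics out of a conversation. It uses the OpenAI chat completions API since Daniel mentions GPT-4, but the prompt and output handling are assumptions, not Viable's implementation:

```python
# Tag an incoming conversation with the topics it mentions. The prompt and
# output handling are assumptions, not Viable's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_topics(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": (
                "List every bug report, complaint, feature request, and "
                "compliment in this customer conversation, one per line."
            )},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

print(extract_topics("The export button 404s, but I love the new reports page."))
```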

So I would consider myself a relative optimist, at least when it comes to technology and AI, and my feeling is that AI can really augment human teams’ ability to do things. It sounds like that's what this is about: it's not just that you set it and forget it and AI starts running all of this stuff on its own, but AI taking vast amounts of information that humans couldn't possibly go through, or if they do, to your point, I don't know if it's bias or if it's just that humans have a hard time parsing mountains of data, but, you know, AI is able to come up with better recommendations. I mean, do you see this as AI-human augmentation? Is that, kind of, the philosophy behind this?

Yes, absolutely. I firmly believe that humans alone can perform at maybe, you know, 80 percent. AI alone could perform at maybe 80 percent. But AI plus humans can perform at, like, 150 percent. And so our whole thesis here is that every team within a company should be able to access the customer feedback that helps them achieve their goal as well as they can. You know, the marketing team might use it to identify what benefits and compliments that people have for the product so that they can craft better messaging. The product team might use it to identify all of the gaps in the product so that they can add more features to satisfy the market there. Sales teams might use it to go figure out where all of the objections are and try to, sort of, fix all of that up. The customer experience teams might use it to identify all the frequently asked questions that are coming up so that they can better build macros and responses to their customer support requests. So, overall, the giant amounts of qualitative data that these companies are already collecting can be applied to almost every team within the company. And in doing so, you're, sort of, marrying this sort of AI that really deeply understands your customer feedback with your company's objectives and goals.

Last topic I wanted to talk about is just from the customer perspective, and I think all of us, no matter what we do in our day jobs, we're also consumers, at least by night. And we're inundated with surveys and requests for feedback. 

Brief tangent here, but it's timely because I just flew back home last night from a conference. My flight was delayed a couple hours. I had to deplane, you know, and wait in the airport for two hours – not the airline’s fault. The airline will remain nameless. But this morning I get a survey, completely generic survey from them, saying, “We hope your flight was pleasant,” or whatever. Well, you know, yeah, they did a fine job and did the best they could. It was nobody's fault. Somebody in the flight crew got sick or something like that. But don't send me a survey asking about my flight being pleasant or not, when you have the information to know that there was an issue. 

Long way of getting back to the point, which is, you know, we’re over-surveyed. There's lots of reasons for that. It’s probably the topic of a whole other podcast, let alone maybe just even an episode. But what should brands be keeping in mind in order to get the best feedback from their customer? Because in order to get good feedback, you need to ask the right questions, and at the right time.

Yeah, absolutely. I could probably talk forever on survey design and triggers and all of that. But, high level, here's a few things. So, one, I think most surveys have way too many questions in them. And the reason we do this is because it makes it actually easier to process the data after the fact, because, most of the time, what people do is they ask a bunch of multiple choice questions. You know, it's like, “Which job title do you fit under,” or, “If you had to choose one of these features, which one would be the one that you want us to build?” They organize these surveys this way because it makes it much easier to actually analyze the results. But you're optimizing for your own internal processes there, not for your actual customer's perspective. 

So my advice is actually to greatly limit the number of questions you're asking your customers. You don't want to go ask them 50 questions. You want to ask them the three most salient questions and allow them to type answers out in their own words. Historically, it's been tough to do that because you have to go through and read everything. But we've got AI now that can help you do that. So that's the first bit, is ask more open-ended questions and fewer multiple-choice questions. So things like, how can we improve the product for you, or what is the main benefit you receive from this product, or what kind of user would get the most benefit from using this product? All of those things actually are much better questions than, you know, “Stack rank these features” or “What's your biggest complaint? Choose one of the above.”

So that's the first step there. And then the second is context. We're to the point now where we're tracking basically everything that our customers are doing in our tools. So we should be able to understand what kinds of experiences our customers are having and tailor the trigger of these surveys to certain events. So, for example, if I just rated my delivery for some delivery app at a 1 star, you know that I just had a bad experience. So for that one, I would actually honestly suggest just calling up the customer and directly talking to them. But for, say, like a 2 star or a 3 star, maybe let’s send a “How can we improve” survey. If they do a 5 star rating on it, maybe you send a “What do you love about it,” or “Who else do you think would love this kind of thing?” So you can tailor those surveys to really just specifically hit the exact type of experience that you're hoping to get feedback about.
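
The routing Daniel describes could be sketched like this; the 1-, 2-to-3-, and 5-star branches follow his example, while the handling of other ratings is an assumption:

```python
# Route the follow-up based on the star rating the customer just gave.
# The 1-, 2-to-3-, and 5-star branches follow Daniel's example.
def follow_up(stars: int) -> str:
    if stars == 1:
        return "escalate: have a human call the customer directly"
    if stars in (2, 3):
        return "send survey: 'How can we improve?'"
    if stars == 5:
        return "send survey: 'What do you love about it? Who else would?'"
    return "no follow-up"  # 4-star handling is an assumption

for rating in (1, 2, 3, 4, 5):
    print(rating, "->", follow_up(rating))
```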

And then, lastly, and this is, kind of, more future-looking for us. But I do believe that we're about to move into a world that has much smarter surveys. So instead of just asking the big question of “How can we improve this product for you,” you might ask that question at first, and then the survey could be smart enough to start digging into specifics to help, you know, pull out the very specific things that you need to know about your customers.
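
Here is a speculative sketch of such a smarter survey, where the first open-ended answer seeds an LLM-generated follow-up question; it illustrates the direction Daniel hints at, not any shipping product:

```python
# Generate a follow-up question from the customer's first answer.
# Speculative: illustrates the direction, not a shipping feature.
from openai import OpenAI

client = OpenAI()

def next_question(first_answer: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": (
            "A customer answered the survey question 'How can we improve "
            f"this product for you?' with: {first_answer!r}. Write one short "
            "follow-up question that digs into the specifics."
        )}],
    )
    return response.choices[0].message.content

print(next_question("The reporting is too slow."))
```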

About the Guest

Daniel is the Founder and CEO of Viable, an AI analytics tool that enables businesses to instantly access and act on valuable insights from customer feedback, saving them hundreds of hours spent analyzing feedback.

They launched in 2020, and have already raised $8.9M from high-profile investors such as Craft Ventures, whose notable exits include Tesla, Bird, and Airbnb, and Javelin Venture Partners, whose portfolio includes Mythical Games and SmartAsset—both of which have recently joined the unicorn club.

As you know, many businesses collect data from a variety of sources, including help desk tickets, surveys, product reviews, internal customer notes, call transcripts, etc., and store it in various databases and formats. As a result, much of it goes unused.

Viable changes this by allowing businesses to automatically aggregate, structure, and analyze all of their data, so they can understand what their customers are telling them and use this to better serve them.

Daniel, an engineer by trade, co-founded Viable with his identical twin brother Jeff, who is a designer. They took a different path than most, skipping college entirely and starting a consulting firm in Portland straight out of high school to help early-stage companies build their very first products, create MVPs, get their first users, and/or get their first investment.

Prior to founding Viable, Daniel was VP Engineering at Eaze. Before that he spent time as CTO at Getable and also had a front-row seat to Yammer's rapid growth during a 3-year stint as a Senior Engineer at the company.

About the Host, Greg Kihlström

Greg Kihlstrom is a best-selling author, speaker, and entrepreneur, and host of The Agile Brand podcast. He has worked with some of the world’s leading organizations on customer experience, employee experience, and digital transformation initiatives, both before and after selling his award-winning digital experience agency, Carousel30, in 2017. Currently, he is Principal and Chief Strategist at GK5A. He has worked with some of the world’s top brands, including AOL, Choice Hotels, Coca-Cola, Dell, FedEx, GEICO, Marriott, MTV, Starbucks, Toyota and VMware. He currently serves on the University of Richmond’s Customer Experience Advisory Board, was the founding Chair of the American Advertising Federation’s National Innovation Committee, and served on the Virginia Tech Pamplin College of Business Marketing Mentorship Advisory Board. Greg is Lean Six Sigma Black Belt certified, and holds a certification in Business Agility from ICP-BAF.