S6 | 529: Better software testing for improved CX with Stephen Feloney, Perforce
About the Episode
Today we’re going to talk about the impact of software testing on the customer experience, and how rising customer expectations mean that brands need to up their testing game using more agile methods and AI-based solutions.
To help me discuss this topic, I’d like to welcome Stephen Feloney, VP of Products - Continuous Testing at Perforce.
About Stephen Feloney
Stephen Feloney is the Vice President of Products at Perforce. Prior to this role, for the last 11 years, Stephen has been in Product Management, focused on enterprise software, at various companies spanning from the very large, like HP, to startups. Before product management, Stephen spent 12+ years as a software engineer. Stephen holds a B.S. in Computer Engineering from Santa Clara University.
Resources
Perforce website: https://www.perforce.com
Listen to The Agile Brand without the ads. Learn more here: https://bit.ly/3ymf7hd
Headed to MAICON 24 - the premier marketing and AI conference? Use our discount code AGILE150 for $150 off your registration. Register here: http://tinyurl.com/5jpwhycv
Don't miss a thing: get the latest episodes, sign up for our newsletter and more: https://www.theagilebrand.show
Check out The Agile Brand Guide website with articles, insights, and Martechipedia, the wiki for marketing technology: https://www.agilebrandguide.com
The Agile Brand podcast is brought to you by TEKsystems. Learn more here: https://www.teksystems.com/versionnextnow
The Agile Brand is produced by Missing Link—a Latina-owned, strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging and informative content. https://www.missinglink.company
Transcript
Please note: this was AI-generated and it was only lightly edited; there may be some errors.
Greg Kihlstrom:
Today we're going to talk about the impact of software testing on the customer experience, and how rising customer expectations mean that brands need to up their testing game using more agile methods and AI-based solutions. To help me discuss this topic, I'd like to welcome Stephen Feloney, VP of Products, Continuous Testing at Perforce. Stephen, welcome to the show.
Stephen Feloney: Thank you. How are you doing?
Greg Kihlstrom: Good, good. Yeah. Looking forward to talking about this with you. Why don't we get started with you giving a little background on yourself as well as what you're currently doing at Perforce?
Stephen Feloney: Sure. As quick background, I started as a computer engineer, so I was a developer for 10 years. I turned over to the dark side of product management when I was at a company called Mercury Interactive. Mercury Interactive is where I learned all about testing, and I've been in the testing space now for about 23 years. As for what I'm doing currently at Perforce: I was acquired into Perforce through Broadcom. Broadcom sold off a product called BlazeMeter, a SaaS-based performance and functional testing platform, and when Broadcom sold that to Perforce, I went with it. I've been the VP of the testing products at Perforce ever since.
Greg Kihlstrom: Great, great. So yeah, let's dive in here and let's start by talking about the impact of software testing on the customer experience. And we're gonna do this by looking at an article that you wrote. We'll link to it in the show notes as well for those listening. But an article that you wrote that had some 2024 predictions for software testing and its impact on brands and their customers. So I wanna start with the impact of AI. In the article, you mentioned that there are both positives and negatives in leveraging AI and software testing. And so let's start with the positive. You know, what are some of those positives that AI brings to the table?
Stephen Feloney: AI, for the most part, I see a lot of positive coming with AI. AI is going to make test generation easier. It'll be able to generate tests from requirements, from how users are doing things in production. It'll be able to easily point out differences and changes in tests. You'll be able to validate to make sure the tests are running properly with AI. So many benefits with AI. When it comes to data, big problem with testing is getting the right data. AI will help generate, synchronize, keep all that together as well. So there's a lot of benefits that I see that AI has coming. You'll notice I say coming, not necessarily here yet.
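[Editor's note: as a hedged sketch of the "right data" problem Stephen mentions, teams often script synthetic test data even without AI. The record fields below are invented for illustration and don't come from any real schema or Perforce product.]

```python
import random
import string

def make_synthetic_user(seed=None):
    """Generate one synthetic user record for testing.

    The field names here are illustrative, not from any real system.
    """
    rng = random.Random(seed)
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "username": name,
        "email": f"{name}@example.com",
        "age": rng.randint(18, 90),
    }

# A fixed seed per record makes the data reproducible across test runs,
# which keeps tests deterministic while still looking realistic.
users = [make_synthetic_user(seed=i) for i in range(100)]
```

AI-assisted tooling aims to do this kind of generation (and keep it synchronized with the schema) automatically, rather than by hand-written scripts like this one.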
Greg Kihlstrom: Yeah, yeah, got it. And so, along those lines, are you seeing reluctance to adopt AI? And when you do, what are some misconceptions that many might have about the negative impacts of AI on software testing?
Stephen Feloney: Are they reluctant to use AI? I would say not necessarily AI in and of itself. Most companies are really excited about AI. Where we find the reluctance is generative AI. There is a reluctance in larger companies to use publicly accessible AI tools that are trained on all the data they collect. So that's where we see that reluctance.
Greg Kihlstrom: Got it, got it. And then, you know, beyond misconceptions, you know, we talked about the positives and I certainly agree with you. I think there are a lot of positives. Any negatives or just, you know, warnings, you know, when trying to leverage AI and software testing?
Stephen Feloney: So some of the negatives right now: you'll see quite a few different testing tools that claim they're using AI, that have tried to use generative AI, and they're saying, we'll generate your tests for you automatically. The challenge there is that the tests being created are not exactly right. They are missing things. It then takes a long time for a tester or the developer to figure out what went wrong in the development of that test. So there are some challenges right now in trying to go full bore and getting your tests developed. I would say that's kind of a negative right now, because people think, well, it's AI, I should be able to do everything right, right now. And that's not necessarily there. Another negative we see possibly coming is a reduction in the need for as many testers and developers as you have right now. Going forward, it looks like AI may be able to do a lot more than it's doing today, and there's a possibility of a change in the workforce because of it.
Greg Kihlstrom: Yeah. Yeah, to the first point, it may be hard to make a blanket statement, but would you say it's the lack of quality results? Is it the maturity of the training models? Is it the maturity of the software? Is it misapplication? Maybe it's some or all of the above, but what do you think it's attributed to? As you noted, it's improving and getting better, but why are we not quite there yet?
Stephen Feloney: Well, I think it's a few things. One, I do think it's a matter of the AIs needing more training.
Greg Kihlstrom: Yeah.
Stephen Feloney: And the more training they get, the better they're going to be. I also think the AI itself just needs to be built better. And as we move forward in time, the AIs themselves will get smarter. It's not just the training, it's how the AI takes that training and implements it.
Greg Kihlstrom: Yeah, yeah, makes sense. So another prediction of yours is that interest in a unified platform or a single solution for testers is going to grow. What does this mean, and what should brands be paying attention to here as far as the benefits to them?
Stephen Feloney: Yeah. So as I said, I've been in testing for a very long time, and there's an ebb and flow when it comes to what customers are looking for. For, let's say, the last 10, 12 years, it's been best-of-breed tools. Once there was a shift to agile, companies changed and said, okay, the individual teams, the individual business units, they can decide what tools they want to use that will benefit them. We don't want to slow them down. So there have been a lot of tools created, and if you look at any large enterprise, it probably has 20, 30 different testing tools in use. So the challenge with that becomes: how do I know who's testing what? How well has it been tested? How does management get any sort of results? How do you have any sort of control over any of it? And so what people have been doing is value stream management, where you integrate all these different tools together and try to display and highlight the results and the analytics, trying to give a thumbs up on whether it's good for a release or not. The challenge with all that is every time any one of these pieces changes, they put out a release or a new update, the integrations break. So it's a never-ending challenge to keep it all up to date. What we have seen now is a switch, where mostly larger enterprises are looking to consolidate and have central management of these tools. These generally fall to, let's say, the DevOps teams, and part of those DevOps teams become the tools management team. Instead of constantly fighting the integration battle, getting the data in the right formats and all that, they're now looking for companies such as Perforce to help them and come with a solution, in our case, a testing solution that will answer all of their testing needs, so they don't have to worry about those integrations.
Greg Kihlstrom: Yeah. Yeah. I mean, it also sounds like, maybe as an analogy: I work a lot on the marketing operations side of things, and governance is certainly an issue there, so we use that term a lot. It sounds like there are some governance issues, and consistency benefits, in using a unified platform. Would you say that's correct?
Stephen Feloney: Oh yeah, there's definitely a governance, a management benefit that comes from a unified platform. The other challenge that we and enterprises started seeing is the movability of their employees. Which tools do they know? If they don't know the specific tools of another business unit, it's harder for them to move there and be productive as quickly. So having a unified set that is used across the different business units and the different challenges they're facing makes it easier to move people around as well.
Greg Kihlstrom: So the last prediction I wanted to talk about from your article is based around the increased adoption of testing early and testing often, with an increase of what we call the shift-left mentality. And so first, just for those that are less familiar with that term, can you define what we mean by shift left?
Stephen Feloney: Sure. If we go back historically, let's go back, say, 20 years. You had your development team, and when they were done developing, they would throw it over the wall, or throw it to the right, to a testing team. That testing team would be in some sort of center of excellence, and generally that would be a functional testing team. Then they would throw it over to the right, and that would go to a performance testing team. And then, if you were lucky, you had a security testing team that they would throw it over to the right to. And eventually you keep throwing it over to the right, and it gets out into production. So by shifting left, what that means is getting the development groups or the development teams, which are now agile teams, so you can have testers in these teams, to test earlier in the cycle. Don't wait till everything is coded and send it to the right to a testing organization. Get the agile team to do the testing earlier. Does that make sense?
Greg Kihlstrom: Yeah, yeah, it does. Definitely. That helps a lot. So based on that, it seems like doing more testing earlier in the process is ideal. But to those that are new to this concept, doing more testing earlier might just sound like more testing time, more resource time, and so on. Certainly I'm versed in the agile space and know that we're probably heading off bigger issues later on that are more costly and time-consuming, and so on and so forth. So, not to ask too leading a question, but how would you characterize this early-and-often approach as far as overall efficiency, as well as the effectiveness of the product?
Stephen Feloney: Well, I mean, you said it. The earlier you test, the faster you find the problem, and the faster you find the problem, the easier it is to fix. When you wait and test later, you're probably no longer just testing a component or a service, you're now trying to do end-to-end testing. And when you're trying to do end-to-end testing, one, you'll find the bugs later on in the cycle, but two, you now have to debug the entire application, the entire system, to find what is causing the problem, which slows the entire thing down. Now, you're not just testing later, which is a problem in and of itself; the debugging process to figure out what is causing the problem is a lot harder. So when you shift left and you test components, you isolate those components or those services. When you are testing, you know you're just testing that, and then you get that right. And then you have all these other components that you're testing individually, in isolation, and you make sure that they are right. And when I say right, I mean they functionally work, they perform the way you want them to, they're secure. Once you get that testing done, then when you go to the end-to-end testing, the end-to-end testing is no longer an event. It just happens, and generally you're not going to find a whole lot. Whereas in the past, when you were just doing that full end-to-end testing, it's all hands on deck. Everyone's got to be there. Everyone's got to help out. Where do they think the problems are? It's a huge challenge. It's a huge cost. So the faster you find the problem, the faster you can fix it, and the cheaper it is to fix. And that means you're getting the application, the service, the software out faster to the consumers you want to use it.
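[Editor's note: the component isolation Stephen describes can be sketched as a unit test against a stubbed dependency. The `checkout` function and payment client below are invented for illustration; they are not from Perforce's tooling or any real codebase.]

```python
from unittest.mock import Mock

def checkout(cart_total, payment_client):
    """Hypothetical component under test: charge the total, return an order status."""
    if cart_total <= 0:
        raise ValueError("cart total must be positive")
    result = payment_client.charge(cart_total)
    return {"status": "paid" if result["ok"] else "failed",
            "amount": cart_total}

# Shift-left in miniature: the payment service is stubbed out, so this
# component can be tested in the sprint, with no end-to-end environment.
stub = Mock()
stub.charge.return_value = {"ok": True}
order = checkout(25.00, stub)
assert order["status"] == "paid"
stub.charge.assert_called_once_with(25.00)
```

Because the component is tested in isolation, a failing assertion here points directly at `checkout`, rather than at some unknown layer of a full end-to-end system.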
Greg Kihlstrom: Yeah. Yeah. I mean, to me, it sounds very similar to the agile versus waterfall discussion, for those familiar with that.
Stephen Feloney: When I say the shift left is happening for testing, a lot of people may say, well, that's been going on for 10, 15 years. There has been a shift left with functional testing, but it hasn't even happened correctly. The shift left we have seen is for some functional testing, generally just positive functional testing, and very limited use case positive functional testing. So not doing a whole lot of negative testing, which needs to be done, not doing any performance testing, and clearly not doing any security testing. So when I say that the shift left is happening, I'm saying the entirety of testing, accessibility, performance, all of that, is being shifted left and will be shifted left. And you will see more of that this year.
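[Editor's note: Stephen's distinction between positive and negative functional testing might look like this in practice. The `parse_age` function is a made-up example, not from any product discussed in the episode.]

```python
def parse_age(text):
    """Parse a user-supplied age string; raise ValueError on bad input."""
    value = int(text)  # int() itself raises ValueError for non-numeric input
    if not (0 <= value <= 130):
        raise ValueError(f"age out of range: {value}")
    return value

# Positive test: valid input produces the expected value.
assert parse_age("42") == 42

# Negative tests: invalid input must fail loudly, not slip through.
for bad in ["-1", "200", "forty-two", ""]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # expected: each invalid input is rejected
    else:
        raise AssertionError(f"accepted invalid input: {bad!r}")
```

The positive case alone would pass even if the range check were missing; the negative cases are what prove the component rejects bad input, which is the gap Stephen says most shift-left efforts leave open.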
Greg Kihlstrom: Got it. Got it. Okay. Yeah. And that's a great clarification, because that definitely helps. And so along those lines, I think I know the answer to the first part of this question, but I'll make it a little harder by adding an additional component to it. There are a lot of marketers, product marketers, and others listening to this, not necessarily testers and engineers. When they're thinking about the cost implications here, I assume there are cost savings, but I assume there's cost reallocation as well. Could you maybe talk a little to both?
Stephen Feloney: Yeah. So clearly there's a cost reallocation. In these centers of excellence, they had specific tools that would help them. So there are, I'll call them legacy tools, and companies are either spending subscription money on those or they're spending support money on those. As teams shift left, those legacy tools, which were really built around these centers of excellence, don't generally fit a shift-left model. In the shift left, you don't want things installed on prem. You want easy access. You want to be able to work through IDEs the way the developers want to work. So there is definitely going to be a shift in allocation on those types of testing tools, for sure. And then there's also reallocation of the types of testers you have. Instead of having completely specialized testers, where you have one tester doing functional, another doing performance, a different tester doing security, you might still have testers, but those testers become groups of people that will test everything. They're no longer specialized. So there's a shift in allocation that way as well. And you'll have developers doing more testing, so again, there's a shift in cost allocation there. But then you also mentioned the cost savings. The cost savings, as I mentioned in the previous answer, is that you're actually getting your software out faster. It sounds like you're not, because you think you're doing all this testing in the same sprint as the development, and how do you get all this stuff done in the sprint? But as companies and organizations get better at that, the quality becomes better.
Greg Kihlstrom: Yeah.
Stephen Feloney: And you're able to release faster and you're not releasing big releases. You're releasing little releases. And so you're getting incremental value out to your customer base faster.
Greg Kihlstrom: Yeah. Yeah, yeah, definitely. And that kind of brings us to the next question. So faster is certainly a benefit here, but what are some of the benefits for the end customer in this process? They're not privy to the inner workings and everything, but they are seeing the end results.
Stephen Feloney: Yeah, I mean, the true benefit, the whole reason why anyone tests, we don't test for testing's sake. We're testing to make sure that there's going to be a quality application, a quality service, a quality piece of software being released. And the whole point of doing this is generally either to acquire customers, acquire money, or both. Being able to have higher quality releases go out the door faster means you're hopefully going to gain those customers, gain that revenue, gain that reputation. The other benefit, I should say, is that many companies, most companies, are trying to release faster. You're seeing this with mobile applications. Mobile applications are being updated all the time; you always see the applications on your phone being updated. So if company A is not releasing as fast as company B, guess what, company B is going to win out. By releasing faster, you're keeping up with the competition. And having high quality, or higher quality, means the reputation of your software is going to be better, because it is so much easier now to switch, whether it's a game you're switching, or a bank, or a food service. It's so much easier to switch. So you have to have that high quality and you have to get out fast. And the only way to do that is by shifting left, getting the testing done in these agile teams, and, to bring this full circle, with the new AI capabilities coming out in these testing tools, which will make that shift left so much easier.
Greg Kihlstrom: Yeah, that's great. Well, Stephen, thanks so much for joining today. One last question before we wrap up here. You've given a lot of great advice and insights already, and again, we'll link to the blog post that we pulled some of these insights from as well. But what's one next best action that you'd recommend for those listening that want to improve their software testing? What's something they could start on today?
Stephen Feloney: I would say the biggest thing to do is look at your entire process. Hopefully there's a process; even if you think there's a lack of process, there's a process. Look at it and pinpoint what you think is the biggest challenge. Is the biggest challenge getting the functional testing done faster? Or is it getting performance testing done faster? Or is your biggest challenge getting the environment available? Or is it getting the right data? Focus on one challenge and try to improve that. Don't try to improve the whole thing at once; that's not going to work. And don't say, all right, I'm just going to shift all my performance testing at once. Do it incrementally, so you can show incremental value. Just like when you're releasing software faster you're giving incremental value to the customer, deliver incremental value internally for the testing. Over time you'll have the improvements, but find the one that you think is the biggest challenge, whichever will give the biggest bang early, focus on that, and do that first.
Greg Kihlstrom: Wonderful. Love it. Well, again, I'd like to thank Stephen Feloney, VP of Products, Continuous Testing at Perforce, for joining the show. You can learn more about Stephen and Perforce by following the links in the show notes. Thanks again for listening to The Agile Brand, brought to you by TEKsystems. If you enjoyed the show, please take a minute to subscribe and leave us a rating so that others can find the show more easily. You can access more episodes of the show at www.GregKihlstrom.com. That's G-R-E-G-K-I-H-L-S-T-R-O-M.com. While you're there, check out my series of best-selling Agile Brand Guides covering a wide variety of marketing technology topics, or you can search for Greg Kihlstrom on Amazon. The Agile Brand is produced by Missing Link, a Latina-owned, strategy-driven, creatively fueled production co-op. From ideation to creation, they craft human connections through intelligent, engaging, and informative content. Until next time, stay agile.