
Season 1, Episode 4:

An Intro to Performance Valuation (Pillar 2 of 3)



Episode Summary

Performance Valuation is all about uncovering the user outcomes that create the most value for your business, and ensuring that you're delivering those outcomes more effectively.

Here we provide an introduction to measuring how well your system is currently producing outcomes, and to confirming that you're focusing on delivering the most valuable outcomes to begin with.

📚  References

🔤  Transcript

Samuel (00:00:14):
Hi, I'm Samuel from UserOnboard.

Yohann (00:00:16):
And I'm Yohann, also from UserOnboard.

Samuel (00:00:20):
Today, our topic is pillar two of the three pillars of healthy growth, that pillar being called Performance Valuation. To my mind, Performance Valuation is really about two main premises. One of those is that we should be able to tell if we're getting better at producing desired outcomes.

Samuel (00:00:46):
In the last episode, pillar one with Path Design, we were talking about how we can look at our offerings not so much as a static product that users have an experience with, but instead map the process they go through of reaching their goal, and look at things along those lines. If you're going to try to better arrange things to help people reach their desired outcomes, you want a way of telling whether you're getting better at producing those outcomes or not. Otherwise, you're just flying blind.

Samuel (00:01:28):
That's one premise. So far, so good, Yohann?

Yohann (00:01:32):
So far, so good.

Samuel (00:01:33):
All right. And the second premise is that we should be able to tell whether we're investing in user outcomes that actually matter, in the sense that they're actually valued by the users and have a pretty strong correlation with being a healthy customer and producing revenue for the business. We should be able to pick out the winners from the "maybe doesn't really move the needle so much" outcomes, in terms of which desired outcomes we help produce to begin with.

Yohann (00:02:11):
Right, there's one small nuance that I want to jump in with at this point. At the moment, if you're thinking about your product as a product and not as a piece of support within a process, then you're probably thinking of your product as supporting a number of different outcomes.

Yohann (00:02:30):
One of the things we're talking about in Path Design is choosing one of those outcomes and really owning it so that you support the entire path to that particular outcome. With Performance Valuation, what we'll be doing is leaning further into that. So rather than thinking of your product as a single "thing" that supports six different outcomes because you have three different personas, we're going to lean into thinking about which one of those outcomes creates the most value for you for your business and how you can double down on what's working.

Samuel (00:03:16):
I think that's a really good point. I would even say that if you find your top two or three or four user outcomes that demonstrably move the needle for both your users and your business, it doesn't hurt to at least take a pass at the low-hanging fruit of how you can position your product and offering to be more helpful in that regard. One thing I really like about the Value Paths process is that it doesn't say your product is built on a bad paradigm and you need to do away with everything and overhaul it completely. Instead, what we can do is keep your product in place and become better at detecting when people are seeking particular outcomes that you know move the needle for your business.

Samuel (00:04:08):
When that happens, you can create a segmented path for them that takes them not through your product like a tour in your interface, but that helps them complete each stage of the process of arriving at this bigger desired outcome that they're pursuing. It's something where you can almost consider your current product to be like the fallback option if you don't know which outcome you can be actively guiding people toward, then they get the generic static one size fits all version of the product. But if you know that there's one particular outcome or two particular outcomes that really move the needle for your business, then it would certainly make sense to be able to find out when people are seeking those and invest in whatever early gains can be made in improving those processes when your product comes into play, especially.

Yohann (00:05:08):
Just to clarify, in the five minutes that we've been talking, you're probably wondering what outcomes we're talking about, because we've talked about user outcomes and we've talked about business outcomes. To us, they are two sides of the same coin. They're not even separate. If you can find out which user outcomes create or directly cause your business outcomes, then improving one is going to improve the other. Those are the kinds of outcomes that we're seeking to discover and improve the performance of delivering.

A Personal Story About Measuring Impact of Design Changes

Samuel (00:05:42):
I actually thought you were going to jump in with another clarification, but I guess we should make this one too. When we talk about business outcomes, we are generally talking about measuring in terms of revenue. Every business needs revenue to function, and currency is a pretty easy way of calculating things in relatable terms. Instead of focusing on abstract numbers, we're really looking at elements of cash flow, revenue, lifetime value, things along those lines. But anyway, we're getting a little ahead of ourselves. If I may, Yohann, I would like to start off premise one, which is getting better at actually knowing that you're producing the desired outcomes, with a personal story.

Yohann (00:06:32):
Yes, please.

Samuel (00:06:35):
Okay. My first official user experience designer job was at a creative agency that focused on creating digital experiences for big consumer brands. So if they were hosting an event, we would make some sort of interactive, screensaver-looking thing that would follow people along as they walked by it or something.

Yohann (00:07:04):
Can't have events without those.

Samuel (00:07:06):
Yeah, very important stuff. I came in as the first user experience hire, very fired up about implementing user experience as I was learning it and really trying to do it at a high level. It was something where I would very often find... I would really think out the whole user flow and we would use wireframes to try to keep everybody on the same page about what was being built and so on, and so forth.

Samuel (00:07:40):
Following UX best practices, traditional UX best practices. All of this thought and consideration would go up against... the CEO, who would be like, "My wife likes purple. So let's go with purple." And we would just be like, "What?" It was kind of like Lucy with the football from Charlie Brown: every time, I would come in with this really thoughtful thing and it would arbitrarily just get picked apart in ways that really didn't serve the end user. I was like, "I need to make a better case for this and I need a better way of articulating my recommendations." Ultimately, what I really need is a way to tell if I'm getting better as a user experience designer. I need to have feedback that tells me, "Hey, we came in and we said these square corners have got to go, you need rounded rectangles," or whatever the recommendation is, and that we saw some improvement to the experience that users have. Like, the fundamental premise.

Samuel (00:08:57):
I was like, we should really start tracking this, seeing what really is important to pay attention to, and confirming that it's actually getting better once we've launched this UX project, rather than just walking away to go solve the next client's problems. I have generally found this throughout the industry, whether at a creative agency for interactive experiences or digital experiences (excuse me, that was the term, digital experiences) or elsewhere. Maybe that's an extreme example, but I went into the world of SaaS intentionally because I wanted to have that sort of feedback loop.

Samuel (00:09:37):
I was like, clearly, the people of the subscription software industry will care about measuring whether users are actually performing or being affected in a positive way by the changes that we make, because it's just baked in there. That's just part of the nature of the medium. I remember around this time, going to a movie and walking out of the movie. In the lobby, as I'm walking by the concession stand and the person vacuuming up the popcorn that spilled or whatever, there's a cardboard cutout poster that props itself up with its own little cardboard crutch kind of thing.

Samuel (00:10:33):
If I recall correctly, it's advertising the movie The Lorax. And I remember walking by it and being like, whoever the designer of this cardboard cutout thing was, they will never know if that design decision that they went to bat for actually worked out or not. If there was some discussion like, "Maybe we should have the logo bigger. Maybe we should use the photo where the Lorax is smiling instead of the one where he's frowning." All of these decisions that people are pouring their expertise and hearts into trying to get to the bottom of, you're never going to find out if you're just making cardboard cutouts that sit in movie theater lobbies, because you're never going to have a way of attributing how many people went to see The Lorax after having walked by your cardboard cutout poster. Am I going totally off the rails here, Yohann, or is this tracking so far?

Yohann (00:11:28):
Tracking so far.

Samuel (00:11:29):
All right.

Yohann (00:11:30):
There's a distinction. I want to "yes, and" here.

Samuel (00:11:33):
All right. Jump on in.

Yohann (00:11:36):
There's a distinction to make between... Actually, I'm talking about a definition here. How do you define what improving someone's experience is? What does it mean to improve someone's experience? Because someone walking past the screensavers at a conference and smiling at the screen because they're pleasantly designed: is that what improving an experience is? In Value Paths, we think about it a little differently, right? Improving someone's experience is helping them get to the place they want to get in a better way, not just creating something pleasant.

Samuel (00:12:21):
Yeah. I think one big distinction that's useful there is when we're talking about UX, we're talking about user experiences. The experiences that users have when they encounter your offering. I think a lot of times people think of UX as the user experience, like the quality of using the product, the digital product that we're creating as its own innate sort of thing. That is not the lens that we use, I would say. Agreed?

Yohann (00:12:55):
When you say it's innate sort of thing, you mean the experience is something that the product has and anyone who's coming to the product experiences the experience?

Samuel (00:13:09):
Well, I guess... I wouldn't claim that but I think that that's kind of... The thinking that would be represented in something like a user journey map or something like that.

Yohann (00:13:19):
Right, right. I was just trying to clarify that. Go on.

Samuel (00:13:22):
Well, or you could say that like a pen has a particular user experience to it. But you're not saying that you're paying attention to every single individual encounter that somebody has with that pen and finding out whether things worked out or not and what they were trying to use it for. It just doesn't make sense in the material world.

Samuel (00:13:43):
But that's the distinction I'm trying to draw with The Lorax poster. You're never going to know how well The Lorax poster worked in getting more people to go to the movies to see that movie. But with the internet, it's baked in. All of the user behavior is just sitting there in your user table. Your interface knows particular things about each user, because it's pulling data from somewhere. For user experience designers, especially in the SaaS world, to not really have a lot of working vocabulary in that department, to me seems like a gross oversight.

Samuel (00:14:26):
I imagine people might be saying, "Well, we do measure the user experience through NPS," or, "we do sentiment analysis," or, "we started seeing friendlier customer support tickets coming in, so we figured that was a good sign." None of these are actual feedback loops where you're saying, "We think people are desiring X outcome. We think that making these changes to our offering will result in a better/more effective path to getting to X outcome, and we intend to see an uplift in the success rate of that." That doesn't seem like too much to ask, to me.

Samuel (00:15:09):
But I am shocked, honestly, at the ambivalence that our industry has toward that, especially because it's just sitting there and you're just counting receipts. The information is literally just sitting in your databases. And there is not really any sort of working vocabulary or framework in place to help people work through what's currently, I guess, an artistic, intuitive sort of process. To me, we can just do a lot better there. That's a lot of what Value Paths is about.

Yohann (00:15:45):
Can I summarize what I think the shift in mindset is here?

Samuel (00:15:49):
Yohann, it would truly be a relief if you could.

Yohann (00:15:54):
Okay, so I think you're not tracking the performance of an asset based on how it is engaged with. You are tracking the performance of an asset based on how effectively it moves people along the timeline that you've mapped out with Path Design.

Samuel (00:16:18):
Yeah. To me, that does summarize the whole concept very neatly. If you're thinking in terms of timelines, a lot of what we talked about in the Path Design episode and what we will talk about here is about better acquainting yourself with, and fitting into, the user's timeline. Because if you're creating a product, it always unfolds in time by being used by somebody. It has its own opinion about what order things happen in, sort of like the training wheels versus the balance bike we were talking about previously.

Moving from the business's timeline to the user's timeline

Samuel (00:17:00):
Those are all considerations that we would classify as being on the user's timeline. But if we think in contrast to the business's timeline, it's amazing how many operational practices are predicated on that instead. If you're thinking about your company's timeline, it started maybe as a scrappy startup in a garage with a couple of friends and then you got funding and went to Y Combinator and started hiring like crazy and hit product market fit and your net MRR went on a hockey stick rise and now you're thinking about an IPO, whatever, the whole narrative of your company, you're somewhere in your company's timeline when you're thinking about those kind of things or when you're thinking about what your feature roadmap should be.

Samuel (00:17:50):
Or even when you're thinking about when different projects launch. Those are all things where you're thinking, how does this line up in time with what our competitors are doing? Does this keep me relevant in the market? Marketing promotions themselves usually center on the company's timeline rather than the user's timeline. A lot of what we're advocating for here is, in Path Design, let's give a crap about the user's timeline and try to design in a way that accommodates it. Here in pillar two, what we're talking about is, let's also care enough to just bother counting the number of people who actually succeed at doing this thing, especially if a significant amount of the user data that we need to do that is already just abundantly sitting there, completely overlooked. Conceptually, I would say that's my summary of your summary, which took significantly longer than your summary, for the record. But that's where I stand. That's my stance.

Yohann (00:18:56):
I think we're squared away introduction-wise and it's time to dig into the meat of this.

Premise 1: Getting better at producing desired outcomes

Samuel (00:19:01):
All right, so digging into premise number one, which is: are we getting better at producing desired outcomes or not? For me, it's all about having that feedback loop in place. What you can't have with The Lorax cardboard cutout, you can have, and generally already have by default, in the form of user behavioral data. Your system has to keep track of what users are doing. Generally, there's some sort of record of what they did and when so that your product can function, and in a similar way, we know that we can go and access that information. Really, the only challenge there is the bureaucratic red tape, rather than the information itself being scarce, for example. Agreed so far, Yohann?

Yohann (00:19:55):
Performance Valuation is... You're not going to find a new metric here or a new anything really. We're just taking stuff that you already have and already know and providing a different system to it. It's a different way of doing things rather than a new thing to do.

Samuel (00:20:18):
I agree. For us, I think the thing that is new, maybe, is just a shift in focus or a shift in paradigm, where instead of taking a repeated approach to improving the product, we're starting with the performance of the system in producing desired outcomes. When we iterate, we iterate to bring the success rate of that particular process up. We're not thinking of how we can iterate toward making our product higher and higher quality; we're thinking, if we're an invoice-sending company, how can we iterate toward having more of our users successfully get paid once? That doesn't seem like too much to ask from a reporting standpoint. I think it's a crucially important indicator of how well your value proposition is being delivered on a fundamental basis. That would be an easy example of the kind of feedback loop that we would be talking about paying attention to.

Samuel (00:21:34):
On a practical level, in order to be able to do that, you need to be measuring user behavior in cohorts. You can't just kind of say, "Well, we looked into it and if you send five emails within six days after signing up, then you're more likely to convert. So let's just try to mash everybody through that process," or something along those lines. That doesn't tell you why somebody is doing what they're doing. So that's not necessarily an outcome that we would recommend designing for. But you can still pay attention to whether you're getting better at getting someone to that sort of outcome or not, if you measure in cohorts. If you think in terms-

The key benefit of tracking performance in weekly cohorts

Yohann (00:22:22):
When you say cohorts, what exactly do you mean?

Samuel (00:22:25):
So what cohorts means to me is visibility, where if you think over the last year's worth of user experiences that have happened in your product or how many different people have signed up for your product over the span of the last year, there are probably different peaks and valleys where sometimes you might have been running a promotion or something might have happened, where you got a viral video on TikTok and it led to some sort of big uptick in engagement and then maybe there are other times where it's ebbed a little bit, certainly if you have a seasonal business, where it's summer versus winter, different levels of engagement, things like that happen.

Samuel (00:23:10):
We know that there's variability in what has transpired over the last year of your business. If you aren't measuring in cohorts, you can't see where things got better and where things got worse. You can do it at a really high level from a marketing standpoint and say, "Well, we got more signups." You can look at it from a customer level and say, "Well, we got more customers." But tracking that doesn't give you the attribution of knowing that you are turning those signups more effectively into customers who not only become customers but also stay customers. There isn't a lot of visibility into what happens in between.

Yohann (00:23:56):
One of the questions we usually ask when we do have the visibility and we can see a peak is: what caused this peak, or what caused this valley? That line of questioning is really healthy because it helps us pinpoint certain changes that caused a particular group of users (and this is why it's useful to think in cohorts) to convert better at that particular time. You go from "what is my conversion rate?" to "what is my conversion rate for this group of people, and why?"

Samuel (00:24:35):
Yeah. When you think of conversion rate, you usually (I would assume most people would) think of it in terms of sign up to customer conversion rate. But what we're saying is that you can pay attention to sign up to anything conversion rate. If you want to improve the conversion rate of people getting paid on their first invoice, that's probably a really healthy thing to invest your UX energy toward, because the incentives are so highly aligned with your own user's motivation. You know that that's the motivation that's causing them to be customers. Therefore, the better you can get at that, the better you can get at creating an entire system that scales really, really well, because it's inherently rich with user motivation.

Yohann (00:25:26):
I love the way you put that just now. Each step that you lay out in Path Design is a conversion. It's an insight. It's a light bulb that's going off in my head right now. Users moving from one step to the other is them converting on to the next step.

Premise 2: Identifying which outcomes are creating the most value

Samuel (00:25:52):
To hard recap here: On premise one, we want to pay attention to whether we're getting better at helping users achieve meaningful outcomes that help drive our business or not. In premise two, we're talking about how do we tell the user outcomes that are meaningful to users and help drive our business from user outcomes that we're just inventing on our own and just kind of sound good or that just can't demonstrably move the needle or are maybe based off of engagement sort of things. Like we just want to get them back 20 days after they sign up, even though that's completely devoid of a value proposition of any sort for the user.

Samuel (00:26:39):
Generally speaking, we're trying to separate the good outcomes from the not good outcomes, so that when we invest our time in Path Design and measure the performance of that path with the Performance Valuation, we know that we're investing our time in the right outcomes to begin with.

Yohann (00:26:56):
What I'm hearing you say is that the user's timeline is more predictive of outcomes than the business's timeline. This includes business outcomes too, because a user is never going, "Now, I'm a retained user. I've successfully moved from activation into the retention phase of the business." A user is never saying that. The more attention you can pay to the user's timeline, the better you can have a sense of business outcomes.

Samuel (00:27:30):
Yeah, exactly. Even in the sense of becoming a customer, nobody thinks, "Did you hear what happened to me today? I became a Target customer." You'd be like, "I bought something at Target." You don't think of your relationship with Target as being like, "My primary identity is a customer of yours." They think of themselves as the primary actor, and Target is providing resources to help them do what they're trying to do, from their perspective. Right?

Yohann (00:28:01):
Right, right.

Samuel (00:28:03):
The idea of producing customers is kind of gross. Nobody's consenting to become a customer. They're not coming to your product so that they can go through a customer journey; you're trying to steer people into doing things that might be required en route to the things that they actually want to do. But if you instead invest that time in guiding them to the things that they actually want to do, then you can be selective and strategic in where you insert your payment gateway along that timeline, and think about the fundamental way that your value proposition relates to the way that you're making money.

Samuel (00:28:45):
If you have a subscription revenue service, you're dependent on recurring revenue. If you're not solving a recurring problem, it's probably not going to be a really good alignment. For example, most people are getting onto a dating app so that they can find somebody to be in a relationship with, or whatever, no judgments. But many people are seeking a relationship, at which point they would no longer be users of the dating app. If you create a dating app with a larger upfront payment in mind, rather than hoping you can just string people along and engagement-hack and create habit loops or whatever, you're going to be more fundamentally aligned with what the user is trying to do. Makes sense so far?

Yohann (00:29:32):
Right. Makes sense.

Using the user's timeline to create a performance feedback loop

Yohann (00:29:34):
Paying attention to the user's timeline, how does that tie back to the feedback loops that we want to pay attention to?

Samuel (00:29:42):
It's that if we are measuring things in weekly cohorts, then we have visibility on a weekly basis of how successfully our different systems are producing different outcomes. In the same way that we can tell whether or not people who sign up for our imaginary invoicing company actually go on to get paid or not, we can use cohorts to see if and when that percentage of success is going up or to see if that number happens to be going down, which would be a negative indicator. If your product stayed exactly the same, maybe that would be an indication that the signups that marketing is driving are not converting as well or different things along those lines.

Samuel (00:30:28):
It makes sense to just pay basic attention from one week to the next: are people who are signing up generally engaging in the core realization of value that we offer? If you take that same concept and measure every week's batch of users as a cohort, you can not only measure the success rate of that particular user outcome happening, but also see, of the people who sign up, how many of those people become customers. That's another success rate that we can improve. That's what we generally call conversion rate.
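As a rough sketch of the weekly-cohort bookkeeping being described here (the records, field names, and numbers below are hypothetical, not from the episode), grouping signups into ISO-week cohorts and computing both rates per cohort might look like:

```python
from collections import defaultdict
from datetime import date

# Hypothetical user records: signup date, whether the user hit the core
# outcome (e.g. "got first invoice paid"), and whether they became a customer.
users = [
    {"signup": date(2021, 3, 1),  "outcome": True,  "customer": True},
    {"signup": date(2021, 3, 2),  "outcome": False, "customer": False},
    {"signup": date(2021, 3, 3),  "outcome": True,  "customer": False},
    {"signup": date(2021, 3, 9),  "outcome": True,  "customer": True},
    {"signup": date(2021, 3, 10), "outcome": False, "customer": False},
]

def weekly_cohort_rates(users):
    """Group signups into ISO-week cohorts and report, per cohort, the
    outcome success rate and the signup-to-customer conversion rate."""
    cohorts = defaultdict(list)
    for u in users:
        year, week, _ = u["signup"].isocalendar()
        cohorts[(year, week)].append(u)
    report = {}
    for key in sorted(cohorts):
        group = cohorts[key]
        n = len(group)
        report[key] = {
            "signups": n,
            # booleans sum as 0/1, so these are success fractions
            "outcome_rate": sum(u["outcome"] for u in group) / n,
            "conversion_rate": sum(u["customer"] for u in group) / n,
        }
    return report

for cohort, stats in weekly_cohort_rates(users).items():
    print(cohort, stats)
```

Watching those two rates move from one weekly cohort to the next is the feedback loop the episode is describing: if the product changed and the outcome rate rose, you have evidence the change helped.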

Samuel (00:31:06):
But as you pointed out, that's not how the user sees it. They are pursuing their particular value. And so what we want to do is design to help them pursue their value, and confirm that getting your invoice paid, which is the example that I keep using, is truly moving the needle in terms of whether somebody becomes a customer or not. It's just a theory that we have. We want to be able to see not only whether we can get more people paid on their invoices overall week over week, but also whether that then leads to more of those people going on to become customers week over week.

Yohann (00:31:47):
Tracking user value in this way moves the growth focus from getting more users into the system to making the system perform better.

Samuel (00:31:59):
That is 100% true. Instead of thinking how do we double our customers by doubling our marketing budget, for example, you could also just make the system very slightly less poorly performing if you're operating at a 2% or 3% conversion rate. Or even if you're operating at, like, a 25% conversion rate, why not get it up to 40% if you can? The process of doing that is taking the Path Design principles that we talked about in the previous pillar and identifying the smaller compounding changes that need to take place en route to the bigger outcome being achieved.

Samuel (00:32:45):
What we can do, and again, this is just a statement on how the information is literally just sitting there for most companies, what we can do is see, of the people who signed up in a particular week, how many of those people converted into becoming customers. We can see that some percentage did it, probably a minority, and some percentage, probably a majority, did not. That's called churn.

Samuel (00:33:15):
And so we identify the steps that take place between people signing up and converting, things that we know they have to do. Like, if you just took the most straightforward path: signing up, creating the account, answering the dumb sales survey, clicking next 20 times on the tooltip tour, going into the billing flow, uploading your credit card, the transaction being attempted, the transaction being approved. All of these are steps, and you can see where people are churning along the way. So if you're going from 100% down to 20% at the outcome state, whether it's the user's outcome, the business's outcome, whatever, then you know you're churning out 80% of the people along the way.

Samuel (00:34:10):
If you look at the individual steps along the way, you can see, all right, how many people went from signup to finishing creating their account? How many of those people went on to confirm their email address by going to the distraction nightmare that is their inbox? Okay, and then how many people survived the next step? You can see where you're churning people from one step to the next to the next. Think of it like a conversion funnel, but not just for converting people into customers: for converting them into getting where they're trying to get to. To tie this all back together, when we're asking how do we tell which outcomes actually produce revenue and which ones don't, we can track the revenue production from one month to the next, or one year to the next if you want, per weekly cohort.
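The step-by-step churn described above can be sketched as a simple funnel computation. The step names and counts here are invented for illustration, not taken from any real product:

```python
# Hypothetical funnel: how many users from one signup cohort survived
# each step on the way to the outcome.
funnel = [
    ("signed_up",       1000),
    ("created_account",  820),
    ("confirmed_email",  560),
    ("sent_invoice",     310),
    ("invoice_paid",     200),
]

def step_conversion(funnel):
    """For each adjacent pair of steps, report what fraction of users
    made it through, so the leakiest step stands out."""
    rates = []
    for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
        rates.append((f"{prev_name} -> {name}", n / prev_n))
    return rates

for step, rate in step_conversion(funnel):
    print(f"{step}: {rate:.0%}")

# Overall, 200 of 1000 signups reach the outcome (20%), so 80% churn
# out somewhere along the way; the per-step rates show exactly where.
```

In this made-up data, the email-confirmation step loses the most people, which is the kind of signal that tells you where to invest Path Design effort first.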

Samuel (00:35:02):
And you can then say, of the people in this cohort, who got an invoice paid, what was their customership like? Of the people who didn't get an invoice paid, maybe even just during the trial or whatever, what did their customership look like? And if there's no real difference between the two, then getting an invoice paid, despite the fact that it sounds pretty straightforward is not demonstrably moving the needle in terms of user revenue.

Samuel (00:35:32):
So if you're seeing that there really isn't a difference in the revenue production of the users who did X versus the users who didn't do X, then that probably means that X didn't really move the needle. Instead of our theory being that getting an invoice paid really moves the needle, maybe the data doesn't support that, and maybe we reframe our theory and look instead at, let's see, the people who signed up and used our new template builder to build the invoice versus doing it the old way, whatever that is. Somebody who picked a template, let's say: did that make a big difference in terms of customer performance?

Samuel (00:36:13):
If people went down that path, did that lead to more business outcomes taking place for our company or not? What's really interesting about this is that you can see the difference between the performance of the people who did a particular thing that they care about and the performance of the undifferentiated mass of people who signed up and didn't do that particular thing. Once you're able to start drawing a, I don't want to say causal relationship, but a strongly correlated relationship between a user achieving a particular outcome and vastly improved performance for the members of the cohort who did that thing, then you really have an idea that this is an outcome we should see if we can flex the levers on and get to take place more, because we might continue to see a strong correlation between this and, from the company's perspective, having a healthy customer relationship.
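One minimal way to run the comparison being described, with entirely hypothetical figures, is to split a cohort by whether each member achieved the outcome and compare average revenue. As the episode is careful to say, this shows correlation, not causation:

```python
# Hypothetical cohort members: did they achieve outcome X (say, getting
# an invoice paid), and how much revenue did their customership produce?
cohort = [
    {"did_outcome": True,  "revenue": 290.0},
    {"did_outcome": True,  "revenue": 310.0},
    {"did_outcome": False, "revenue": 40.0},
    {"did_outcome": False, "revenue": 0.0},
    {"did_outcome": False, "revenue": 60.0},
]

def revenue_split(cohort):
    """Average revenue for members who achieved the outcome vs those who
    didn't. A large gap suggests (but does not prove) that the outcome
    is one worth leaning into."""
    def avg(xs):
        return sum(xs) / len(xs) if xs else 0.0
    did = [u["revenue"] for u in cohort if u["did_outcome"]]
    didnt = [u["revenue"] for u in cohort if not u["did_outcome"]]
    return {"did_outcome": avg(did), "no_outcome": avg(didnt)}

print(revenue_split(cohort))
```

If the two averages came out roughly equal, that would be the signal in the transcript that outcome X isn't demonstrably moving the needle and the theory should be reframed around a different outcome.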

Yohann (00:37:17):
This whole bit that we just went through sounds a lot like identifying Common Conversion Activities, Lincoln Murphy's thing. It sounds a lot like: you find what correlates with revenue and you double down on that. But that's not what we're trying to say, right? We're trying to say that if you find a product behavior that correlates with revenue, Value Paths is trying to say, put that product behavior in the user's timeline, and then you've got a tighter causal relationship between what users care about and business value.

Yohann (00:38:01):
We're moving from thinking about engagement as an endpoint to thinking about outcomes as an endpoint. That's the key difference: from thinking about something on a business timeline to thinking about something on a user's timeline.

Samuel (00:38:18):
Yeah, that's exactly the difference. When you're looking at it from a business timeline standpoint, you're thinking about how you can make a higher and higher quality product that you can more reliably sell access to, in the form of subscription revenue or, if you're not SaaS, however you make money from users. If you make a more compelling product, then more users will want to engage with it, and thus you will have more revenue coming in.

Samuel (00:38:47):
What we're saying is the quality of the product is not as important as we would like to believe. The quality of the outcome that the product facilitates, that's really hitting the nail on the head here. That's what we should be paying attention to as designers, and also what we should be paying attention to from a measurement standpoint: both whether we're getting better at producing that outcome or not, and how producing that outcome helps drive our business model in terms of producing revenue as well.

Yohann (00:39:21):
Right, right. And you want to lean into the outcomes that correlate with business value.

Improving performance by amplifying what's working

Yohann (00:39:31):
Once you've identified a particular outcome, how do you improve the performance of delivering that outcome more reliably?

Samuel (00:39:43):
As we were talking about before, by looking at the overall success rate from one cohort to the next. Let's just say, for example, we're looking at the success rate of some early activity in the user timeline, and from one week to the next to the next, it goes from a 21% success rate to a 20% success rate to a 22% success rate. You could say it's probably holding pretty steady at a 21% average; if you extrapolate that out to the other weeks, let's say that's the case.

Samuel (00:40:14):
Even though you might see your acquisition, the number of total signups, go wildly up and down from one cohort to the next, or maybe it's pretty predictable, who knows. Regardless of whether it's going wildly up and down or not, you can see pretty clearly what percentage of that cohort's total acquisition is succeeding, and track that from one cohort to the next as well. If you're seeing it go from 22% to 20% to 21%, and then you make a change and for future weeks it goes 30-32-33-31%, then there's a pretty good likelihood that the change you made had that sort of effect, especially if you're doing this in a thoughtful and surgical kind of manner.
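As a sketch of what tracking that looks like, here is a minimal Python version. The weekly numbers are made up, chosen only to mirror the hypothetical figures in this example:

```python
# Weekly cohort success rate: of the users who signed up in a given
# week, what fraction reached the target outcome? Computing it
# per cohort makes a real change stand out even when raw signup
# volume swings. All figures are hypothetical.

cohorts = {
    "W1": (520, 109),  # (signups, users who reached the outcome)
    "W2": (610, 122),
    "W3": (495, 109),
    # change shipped here
    "W4": (540, 162),
    "W5": (700, 224),
}

for week, (signups, succeeded) in cohorts.items():
    print(f"{week}: {succeeded / signups:.0%} success rate")
```

Printed out, the earlier weeks hover around 20-22% and the later weeks jump to around 30%, the kind of before-and-after pattern that suggests the change, not random signup volume, moved the rate.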

Samuel (00:41:05):
Now, I would not recommend expecting any single change to have an effect that large; I'm just using this to paint an illustration of what we're talking about. Ultimately, what we want to be seeing is higher value per signup. Of the signups we're getting through our acquisition efforts, we want to be converting those people into revenue producers more reliably and for longer, so that if we take the full amount of money that a given week's cohort has made for our business over its full lifespan, or that we project it to make, and divide it by the number of users who entered our system during that week, we can see very clearly how effective we were at turning raw users into dollars.

Samuel (00:41:57):
It's an apples-to-apples kind of a measurement. What we ultimately really want to be doing is seeing which marketing channels lead to better revenue production. What sort of early user activities and outcomes lead to better revenue production? And how can we align our whole system around almost thinking of it as bringing up your signup stock price. You want the value of somebody who walks in your door to be going up over time, not down, especially as you ramp up your acquisition efforts, because otherwise, you're just smashing the gas pedal down on a really leaky bucket and you're just going to be churning through tons and tons of users instead of building a sustainable ecosystem of ongoing usage and therefore ongoing revenue production.
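That "signup stock price" idea reduces to a simple per-cohort calculation. The weeks and dollar figures below are invented purely for illustration:

```python
# "Value per signup" for weekly cohorts: total (actual or projected)
# lifetime revenue produced by a cohort, divided by the number of
# signups that entered that week. Hypothetical numbers throughout.

def value_per_signup(cohort_revenue, signups):
    """Apples-to-apples measure of how well raw signups are being
    turned into dollars, regardless of how big the cohort was."""
    return cohort_revenue / signups if signups else 0.0

weekly_cohorts = {
    "2023-W01": {"signups": 400, "revenue": 3200.0},
    "2023-W02": {"signups": 650, "revenue": 4550.0},  # more signups, lower value each
    "2023-W03": {"signups": 380, "revenue": 3420.0},
}

for week, c in weekly_cohorts.items():
    print(week, value_per_signup(c["revenue"], c["signups"]))
```

You want this number trending up over time, especially as acquisition ramps up; a rising signup count with a falling value per signup is the leaky bucket scenario.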

Yohann (00:42:53):
This brings us back to how this is just a reframing of things that you're already doing, rather than having to track a new metric from scratch. You make three simple changes to what you're already doing: track in weekly cohorts, track to an outcome rather than an engagement, and track all of the steps between starting point and resulting point. You make these three simple changes and suddenly you've got a whole lot more visibility and pattern-drawing ability that you can rely on.

Using Step Drop-Off to improve conversion

Samuel (00:43:28):
It's really interesting, because when we work as analysts and look at a particular company's data over the past year, for example, we can see the acquisition totals go up and down from one cohort to the next. Relatively proportionally, you also see the number of results, of whatever outcome you're measuring, go up and down accordingly.

Samuel (00:43:58):
If you look at the conversion rate from one week to the next to the next, it does not generally go up and down. It generally holds pretty flat, which doesn't really make sense, because you're like, "Man, we're reaching a new audience. We've launched all these new features. We've redesigned our logo; now it seems a lot cooler." All of these different changes that you might be making are not actually affecting your bottom-line performance. If you can take an X-ray to all the steps that take place between signing up and producing revenue, and doing the things that users care about as well, then that gives you a lever for handling the growth and scaling of your company, by managing the unit economics to a degree that people currently could but just seem to choose not to.

Yohann (00:44:57):
The question of what is the industry benchmark for conversion doesn't really make sense to us, because the people that you're losing at a particular step, even if that step is conversion, the people that you're losing there are an opportunity for you to do better with your support. Rather than thinking, the best we can do with our support is 50%-

Samuel (00:45:27):
You don't mean customer support, you mean like process support?

Yohann (00:45:31):
Process support, yes.

Samuel (00:45:32):
Yeah. Okay.

Yohann (00:45:34):
If you're thinking the best our support can result in is 50% of users making it to this particular outcome, then clearly you're not paying attention to the steps between where users begin and that outcome.

Samuel (00:45:49):
Yeah. Or as I always say, if your conversion rate is 40% and that's right in line with whatever "industry benchmark", and I am doing heavy air quotes when I say that, because who even knows where that study came from? But let's say, theoretically, your conversion rate happened to be exactly in line with industry benchmarks, but you could raise it from 40% to 60%. Why wouldn't you? You wouldn't be like, "No, no. We're at industry average. No, thanks, we don't want more money." You don't know how effective any given lever is going to be until you try to wiggle it and see how much give there is, how much range there is, in being able to improve the step churn: the number of people you're losing at any given stage of an outcome pursuit.

Samuel (00:46:43):
If you pay attention to the number of people you lose from one step to the next, you'll find that it is remarkably consistent. Again, kind of like what we were talking about before: maybe you redesigned your logo and added a new feature, shipped dark mode, which was so important to so many companies, all these different things that are very public announcements of how much better the product is now. If you go back and look at the business performance on a signup-to-LTV ratio, not a lot changes from one week to another.

Samuel (00:47:23):
At first, I thought that was so odd, because it seems like so many things do change. But then I started thinking, well, what else would you expect? You're taking a maybe slightly changing market or a slightly changing feature set, but you're ultimately just taking relatively undifferentiated users and mashing them through the exact same intro screens. When you have them take the six-question survey that makes them say how big their company is and what their role is and so on and so forth, just so a salesperson doesn't have to do a little bit of work in a macro, then you're losing 30%. Well, 30 would be extreme; let's say you're losing 3%. Even that is unforgivable. You're taking 3% of the people who you could otherwise be serving off the table every week, whenever they encounter that step.

Samuel (00:48:18):
If you do that from one step to the next to the next, that's how you wind up with a 20% trial-to-paid conversion rate, or a 2% trial-to-paid conversion rate. Getting an understanding of the remarkable consistency of drop-off from one step to another gives you a handle in being able to see: wow, if we can just get more people through the email confirmation step, I wonder if they would convert better. Or maybe, if we can delay that step entirely so that it doesn't even have to happen before outcome X, that's even better. Then you're taking all the churn that happens at that step and letting those people cascade further down through the timeline and through the flow, and having them be more likely to arrive at the outcomes that they desire and to produce the outcomes that you desire, in the form of revenue.
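A minimal sketch of that step-by-step X-ray, with a hypothetical funnel whose step names and counts are invented, shows how per-step losses compound into the end-to-end conversion rate:

```python
# Step drop-off: given counts of users reaching each step of a
# funnel, show per-step retention and how losses at every step
# compound into a low end-to-end conversion rate.
# Step names and counts are hypothetical.

funnel = [
    ("signed_up",        1000),
    ("confirmed_email",   700),
    ("completed_survey",  630),
    ("sent_invoice",      350),
    ("invoice_paid",      200),
]

def step_retention(funnel):
    """Fraction of users surviving each step transition."""
    return [
        (nxt[0], nxt[1] / cur[1])
        for cur, nxt in zip(funnel, funnel[1:])
    ]

for step, rate in step_retention(funnel):
    print(f"{step}: {rate:.0%} retained")

overall = funnel[-1][1] / funnel[0][1]
print(f"end-to-end: {overall:.0%}")  # 20%, the product of every step's retention
```

Laying the steps out this way makes the levers visible: the email confirmation step alone sheds 30% of this hypothetical cohort, so moving or fixing that one step would cascade more people through everything after it.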

Using in-product behaviors to discover Motivational Outcomes

Yohann (00:49:13):
I feel like a lot of companies approach growth, where you find that correlation between product behavior and revenue and then stop there and then double down on making that product behavior happen more. This is where the engagement hacks come in, because you use whatever engagement hacks you can to make that product behavior happen more. But we are not talking about outcomes in this way. We're not talking about certain product behaviors being outcomes in themselves. We're trying to figure out how we can use this correlation as a starting point. Can we use this correlation to shine a light on what's actually happening in the user's timeline that causes them to produce more revenue?

Samuel (00:50:05):
Yeah, I completely agree. The difference that you're identifying here is that anything that happens within the app, any in-app behavior, does not explain the out-of-app outcome that the user is engaging with the app to achieve. That's like the one sacred law of Value Paths: every time somebody uses something, it's because they're trying to change something about their situation external to the product or offering itself. Getting an invoice paid, for example, is something that takes place within the software to a particular degree, but it's also something where you can clearly see how it would be a motivational pursuit for someone, and it most likely ties in very neatly with the basic value proposition of the product.

Samuel (00:51:12):
But where we see danger is when we see something like we just need to get people to send three documents or even worse, even if it's not a particular usage milestone, but instead is just an engagement milestone, we want to bring people back more on day three. That's not how you want to nurture the relationship. That's just, if you do some data science and torture the data enough, it will say, "Well, people who come back on day three are more likely to convert because obviously people who were more likely to engage with the product are more likely to convert."

Samuel (00:51:50):
It doesn't give you the motivational good stuff to be able to design around and to form a timeline around. If people sign up for your invoice sending app and at the very beginning you say, "Hey, we're offering this free bonus to our customers. We have a custom path that is fully supported in helping you send three documents. Would you like to do that?" Nobody's like, "Sure!" They're like, "I'm not here to send three documents. What does that even mean? I'm here to get paid." Or whatever it is that they're there to do. Very much to your point, instead of looking within the app's behavior for proxies to mindlessly trick people into attaining, you want to use those, I really like the way that you put it, use those as a starting point to say: okay, we looked into it, and there either is or is not a big difference in revenue production between people who get an invoice paid before their trial ends and people who don't.

Samuel (00:53:06):
Let's say, for example, we look into it and, contrary to the previous example, getting an invoice paid does correlate strongly with ongoing revenue production. That gives us a starting point to start understanding why that's the case for people. The why might be kind of straightforward here, where the value proposition is: you're going to help me get my invoices paid. Here we are, so thank you for doing what it says on the tin. But there are other things where it's like: we find that people who come back on day 20 after signing up are more likely to convert. That's really open-ended. If you're a detective, you're asking, "What's the motive? What's the reason that somebody would be doing this?" Very unclear. That often gets used as a design direction instead, which we see happen very often, where a chief revenue officer might hand a growth team, or even a growth department, a quarterly strategy of bringing up day-seven retention. And: go.

Samuel (00:54:26):
That's the general context that they're working under, and they have to rack their brains and try to think of how to bring day-seven retention up. Obviously, you can do things like send them three emails on that day, and it's probably more likely that they're going to come back, maybe just to unsubscribe or something. You can always game these kinds of things. But if you're looking at it instead and asking, "Why do the people who continue to show up on day seven actually convert on a better basis, other than the fact that they're just continuing to engage with the app in general?", you can use that as a springboard for further user research, interviewing, and surveying, and you can understand the common themes of what external, life-centric motivations are driving people to say not, "I became a customer of Target today," but, "I swung by Target because I needed to get my friend a new inhaler," or whatever.

Yohann (00:55:25):
Right. In the previous episode, we talked about operating with a theory of change. Some theories of change are better than others. A bad theory of change would be those who return to the product on day 20 go on to become revenue producers.

Samuel (00:55:47):
It doesn't tell you anything about why.

Yohann (00:55:50):
Exactly. Exactly. It's no coincidence that the why is always an out-of-app why, because the app exists to serve something outside of itself.

Samuel (00:56:01):
Yes. Or should.

Yohann (00:56:02):
A good theory of change-

Samuel (00:56:03):
Or should.

Yohann (00:56:03):
Or should.

Samuel (00:56:05):
Let's be honest here. Not nearly as often as it ought to be, but yes, it should exist to serve that sort of a thing.

Yohann (00:56:15):
Right. A good theory of change would be something that uncovers the relationship between the in-app stuff and the out of app stuff.

Samuel (00:56:29):
Correct. That is what I would draw as the difference between what we might consider traditional growth, if there is such a thing, or the Lincoln Murphy Conversion Rate Optimization school of thought, or even Facebook-type, big corporate viral growth practices. All of those are company-timeline-centric, or are shallow insofar as they only explore loose proxies of behavior that take place within the product. Whereas what we're talking about is: let's first of all just have our shit together in this department and have a theory of what people are coming for, and a way to measure whether they're getting it or not. And then, with more nuance, be able to see whether the people who get those outcomes go on to actually produce more value for our company or not, and see what the potential of relationships predicated on particular outcomes, or the pursuit thereof, looks like. Not all of those are created equal.

Samuel (00:57:43):
For us, we want to be looking at the internal user behavior, especially because, going back to a recurring theme in this episode, it is just sitting there waiting for somebody to come along and care about it. We want to use it as a way to build some working theories around where we should be investing our user research energy, so that we can be really targeted on the things that we have a pretty solid body of evidence supporting as mattering to our business.

Samuel (00:58:17):
If users are doing it, then it matters to the users; we've just got to figure out why, and the why is always outside of the product. That comes with a lot of complexity unto itself. But the general way that you attach these bigger whys to the same sort of revenue correlation we were talking about with the other outcomes is to get better at understanding which of these big, meta, outside-the-app outcomes people are seeking when they're using your product, and to see how often the people who say, "Yeah, I'm here to get my invoices paid," actually got their invoices paid. Or maybe somebody is signing up for the invoice sending app because the person who previously managed invoices had an encrypted folder full of Word documents, and there's no way to tell who's been invoiced or what, because they rage-quit one day.

Samuel (00:59:23):
The next person is coming in, they're inheriting this huge mess, and they want to sign up for our invoicing software so that they can better organize it. Is getting invoices paid their most urgent concern? Probably not. But that is a timeline, a process that's unfolding in their life, with a clear target outcome of just being on top of your invoices and having a system that is not total chaos. There are ways to detect that, either with some contextual information, like if they clicked into the signup flow from a blog post about what to do when somebody leaves a company with a burning pile of invoices behind, then maybe it's pretty likely they're thinking along the same lines, if that's the last thing that drove them into your signup flow.

Samuel (01:00:18):
Or maybe you could just have a couple of surgically placed questions that help people categorize themselves, not by the kind of people they are. A lot of times when we see segmentation, it's segmenting by: are you a marketer, a designer, a developer? Anybody could be any of those things at any given time. Instead of asking what kind of person we should be segmenting your experience around, asking what kind of goals we should really be segmenting your experience around is, to me, just a no-brainer.

Yohann (01:00:55):
Luckily, we have given as much thought to these kinds of big user goals as we have to Path Design and Performance Valuation. And that's exactly what we want to dig into in the next episode.

Samuel (01:01:10):
Pillar three, Super-outcomes: the outcomes that take place outside of your offering that drive people to your offering, or, as I like to say, through your offering, because they're coming in on one end and hoping that your offering will help them arrive in a different situation on the other end. It's like a magic tunnel to where you want to get to. The more we can recognize that we're creating timelines of progress toward outcomes that we can empirically demonstrate are meaningful to both the users and our business, the more intimately we are acquainted with the actual way that our business model perpetuates itself by generating revenue.

Yohann (01:02:01):
Right. If you remember, way back when we were talking about how all three pillars interrelate: getting clearer on the super-outcome changes the path, and getting clearer on the path changes how you evaluate and assess its performance. All three pillars are very tied together. Super-outcomes just make Path Design better; Performance Valuation just sets you up to uncover better super-outcomes. The deeper you go into one particular pillar, the better you can execute the other two.

Samuel (01:02:38):
Absolutely. One example I use for Super-outcomes, since we've been talking about this hypothetical invoice app so much, is what might be driving somebody to create their account for the invoice sending app. Let's say you do research just to understand the major categorical reasons that are driving people to become users and try out our offering. Maybe you come across a theme of people saying, "Well, I usually just send my invoices in Word documents. But I just got this big possible new project with a big company, and I don't want to look like a rinky-dink thing sending Word documents out of my Gmail. So I want to have a professional-looking invoice and see if I can win that job."

Samuel (01:03:27):
That's a totally different situation. There's so much rich detail in that user timeline that it provides you with obvious, no-brainer improvements you can make to the way your offering integrates with that timeline. If you can then also demonstrate that bringing people to the point where they're like, "Yeah, I feel really good about sending this out," or ideally to the point where they actually close the job and get paid for it, then you're understanding a type of life situation that your product is being called into, one that you know is a big driver of what produces revenue for your business. The tighter you can situate your offering into the meta, real-life thing that they're trying to do, "real life" in quotes because I feel kind of weird about that, but the external-to-the-app thing, that's ultimately what you want to get better at, and that is what we will be covering in depth in our next episode.

Yohann (01:04:38):
Which I'm so excited about.

Samuel (01:04:40):
Same. Do we have anything else to cover here or have we beaten this horse enough?

Yohann (01:04:51):
I think for now we have. But if you have questions about anything we've covered here, we would love to hear them. You tell us if there are things that we've missed talking about in this episode.

Samuel (01:05:04):
We are just an email away at podcasts@useronboard.com. We would absolutely love to hear any of your questions. If they're critical, if you want to give us a little praise, I wouldn't say no. But we especially like the critical ones. To us, feedback is a gift. So if you would be generous enough to let us know what you think, good, bad, indifferent or otherwise, we would be grateful to hear your thoughts.

Yohann (01:05:32):
Absolutely. Keep fighting the good fight and we will see you soon.

Samuel (01:05:36):
Hey, that's my line.

Yohann (01:05:40):
I stole it, Samuel.