Engineering Enterprise SaaS Pricing

I’m an advocate of data-driven decision-making, so at TribeHR we spend a lot of time digging into our numbers, both from a business-metrics perspective and from a product-analytics perspective. When the question of product editions came up, we took a data-driven approach to the problem.

Why Make the Switch to Per-Seat Pricing in Enterprise SaaS

Note: if you’re already convinced you need to make the move to per-seat pricing, feel free to skip this part.

Many enterprise SaaS apps take a similar approach to what we did when we first started: pricing tiers that scale both features and the number of users who have access to the app. The problem with this approach is that it turns your customers’ purchase into either a purely consumption-based decision or an awkward consumption/feature hybrid, which leads to frustration.

In the first case, a consumption decision ultimately asks a customer to assign value to a quantity of a resource and to determine how much they want to consume. With beverage sizes, for example, it’s clear that the best “deal” is the largest, but I might balance my decision based on how thirsty I am.

As consumption increases, the price per ounce of beverage becomes increasingly attractive. Many SaaS apps mimic this by scaling the number of users available to a package, adding additional users in tranches at larger and larger discounts. Unfortunately, this approach overlooks a crucial factor: unlike a beverage, the number of users a customer has is a fixed requirement, not a preference. A company has a specific number of users/employees – going for a smaller or larger size doesn’t apply.

At the same time, SaaS vendors often complicate the decision by adding extra features to entice upgrades. These added features produce product editions where the consumption divisions feel like a poor fit, while the features themselves distract from the decision, ultimately leading to dissatisfaction.

This is the original TribeHR pricing – yes, we were doing it wrong. We scaled users, disk space, job postings and features. Confusion FTW!

In David Skok’s blog post on multi-axis pricing (a must-read if you want to maximize revenues), he notes that “[multi-axis pricing] allows you to capture more of the revenue that your customers are willing to pay, without putting off smaller customers that are not able to pay high prices.” This statement captures the key element of the concept: when defining the fence between pricing tiers, every customer should believe they have made the single right decision for themselves.

To apply this to our product as an example: letting the product scale on a per-user model ensures the company is always billed for exactly what it consumes, so there is no wastage. Furthermore, by scaling product benefits per edition, you let your customers make a feature decision that is uncompromised by the number of users they have.

You can see here that our new per-user pricing is offered in three editions – yes, we want people to buy the largest.
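To make the seat-scaling mechanics concrete, here’s a minimal sketch of per-seat, per-edition billing in Python; the prices and the edition names are hypothetical placeholders, not our actual rates:

```python
# A minimal sketch of per-seat, per-edition billing.
# Prices are hypothetical placeholders, not TribeHR's actual rates.
EDITION_PRICES = {"Edition 1": 2.00, "Edition 2": 4.00, "Edition 3": 6.00}  # per user, per month

def monthly_bill(edition: str, active_users: int) -> float:
    """Seats scale linearly, so the bill always matches actual headcount."""
    return EDITION_PRICES[edition] * active_users

# A 45-employee company on the middle edition:
print(monthly_bill("Edition 2", 45))  # -> 180.0
```

Because the seat count is fixed by headcount, the only real decision left to the customer is which edition’s benefits they value.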

Benefit-Based Fences vs. Feature-Based Fences

When defining fences, it’s very easy to fall into the trap of thinking about how much you feel your product is worth. The problem with this approach is that you aren’t your customers, so what’s included in your editions ends up looking something like this:


I’ve actually seen entrepreneurs chart out their feature sets like this. Precise, but not savvy.

This happens when the product owner or developer gets locked into the trap of ascribing value to the amount of work they are performing, when they should be thinking about how much value the customer attributes to the editions. The most important point here is that your customers will perceive drastically different values than you will. It’s an immutable truth: you will be surprised by how your customers attribute value to the different components of your product.

When defining your fences, your goal shouldn’t be to distribute the cost of development among editions. Your goal should be to distribute customer value across the editions.

To do this, you need to segment your customers (and market) into value tiers. As a concrete example, when we defined the TribeHR pricing editions, after speaking to our customers we identified the following value sets and matched them to our market positioning statements.

Edition 1: Save the HR Dept Time by helping them move to electronic record keeping, so that they can spend their time working on strategic issues rather than simply pushing paper. Positioning Statement: Focus on What Matters

Edition 2: Save the Company Time and Money by automating employee tasks that were previously manual, helping employees be more productive and feel that the company values their time and effort. Positioning Statements: Focus on What Matters and Build Better Teams

Edition 3: Help the Company Flourish and Succeed by helping employees be more productive and feel significantly more engaged. Drive cultural change and growth, then leverage that culture to become a great company. Positioning Statements: Build Culture of Success, Build Better Teams, and Focus on What Matters

Once we were able to define our value sets, we could then move on to identifying which of our current customers fit into each segment.

Separate Customers Based on Data

The goal of separating our customers into segments was to give us enough information to learn how the different segments might use our product. When we first started the process, our instinct was to go down the list of our customers manually and segment them based on our knowledge of the individuals involved – we quickly realized this approach was both error-prone and completely unscalable. To overcome this challenge, we embarked on an extremely data-intensive journey.

First, we asked the question “How could we ideally figure out someone’s segment?” and in response identified 13 leading indicators unique to our customer base. For example, if a customer had avoided adding their whole workforce into the system, they likely saw TribeHR primarily as a time-saver for the HR department and so were likely an Edition 1 customer. Similarly, if their employees contacted us directly or through social media channels, they likely had a culture of recognition and would be an Edition 3 customer.

We reduced these 13 indicators down to simple tests and measurements; each would contribute to the categorization of the company. A small selection:

  • Ask for employee training: Edition 2 or Edition 3
  • < 100% employee coverage: Edition 1
  • Converted automatically: Edition 2
  • Inquired about discounts: Edition 1
  • High usage of social features by employees: Edition 3

To analyze our customer activity, we compiled data from our helpdesk (Zendesk), our CRM system (Salesforce), our in-app databases, and in some cases email history for specific questions. We then evaluated each customer programmatically and, based on the results, assigned them an edition/segment.

Note: we could have taken several approaches, including simple fuzzy logic, weighted factors for each test, or a rules-based engine. In the end, we opted for an extremely simple method of tracking “segment points” for each customer: after checking each test, we added the appropriate segment point. This very linear approach might be sub-optimal, but it was very easy.
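To illustrate, here’s a minimal sketch of that segment-points approach; the indicator tests and customer fields are hypothetical stand-ins for our actual 13 indicators:

```python
# A sketch of "segment points": each test that fires awards a point to
# the edition(s) it suggests; the highest-scoring edition wins.
# Tests and field names are illustrative, not our production rules.
from collections import Counter

TESTS = [
    (lambda c: c["asked_for_training"],          ["Edition 2", "Edition 3"]),
    (lambda c: c["employee_coverage"] < 1.0,     ["Edition 1"]),
    (lambda c: c["converted_automatically"],     ["Edition 2"]),
    (lambda c: c["inquired_about_discounts"],    ["Edition 1"]),
    (lambda c: c["social_events_per_user"] > 5,  ["Edition 3"]),
]

def classify(customer: dict) -> str:
    """Tally segment points across all tests and return the top segment."""
    points = Counter()
    for test, editions in TESTS:
        if test(customer):
            points.update(editions)
    # Arbitrary default if no test fires
    return points.most_common(1)[0][0] if points else "Edition 1"

customer = {
    "asked_for_training": False,
    "employee_coverage": 0.6,           # only 60% of workforce added
    "converted_automatically": False,
    "inquired_about_discounts": True,
    "social_events_per_user": 0.2,
}
print(classify(customer))  # -> "Edition 1"
```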

At the end of this process, we had a neatly segmented customer base – an extremely useful data set.

Mining our Activity Logs

To then gauge which features and elements to include in our various editions, we turned to our activity logs and calculated the frequency of customer actions. In a practical example, we would end up with observations such as:

On average, Administrators in Segment 1 post X new jobs per month, while Administrators in Segment 2 post Y jobs per month.

At the same time, we categorized all the actions we tracked into subject-area buckets so that we could take the analysis a bit further. For example, we considered “Submitted a Vacation Request” and “Called in Sick” as independent actions, but both in the category of “Time Off Management”. This resulted in data sets that looked like this:

You can see from the above list that we had a very high level of usage of Performance Management events and fairly low usage of “Customization” events, but that one event – editing a Custom Field Record – was unusually high. By measuring both individual features and feature categories, we could gain useful insights about the high-level value individuals realized, as well as identify key features that could significantly impact the decision-making process.
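As a rough sketch of how that bucketing might be computed (the event names and category map below are illustrative, not our actual taxonomy):

```python
# Roll individual actions up into subject-area categories and compute
# average monthly frequencies per segment.
from collections import defaultdict

CATEGORIES = {
    "submitted_vacation_request":   "Time Off Management",
    "called_in_sick":               "Time Off Management",
    "edited_custom_field_record":   "Customization",
    "completed_performance_review": "Performance Management",
}

def monthly_frequencies(events, months):
    """events: (segment, action) pairs pulled from the activity logs."""
    by_action = defaultdict(int)
    by_category = defaultdict(int)
    for segment, action in events:
        by_action[(segment, action)] += 1
        by_category[(segment, CATEGORIES[action])] += 1
    return (
        {key: count / months for key, count in by_action.items()},
        {key: count / months for key, count in by_category.items()},
    )

events = [
    ("Segment 1", "submitted_vacation_request"),
    ("Segment 3", "edited_custom_field_record"),
    ("Segment 3", "completed_performance_review"),
]
actions, categories = monthly_frequencies(events, months=1)
print(categories[("Segment 3", "Customization")])  # -> 1.0
```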

After performing this analysis for all three editions and segments, we could pull it all together into an interesting comparison. When visualizing the data as a whole (see below for a sample with sanitized numbers), it was easy to pull out common levels of engagement.

Looking at the above data, we could draw conclusions like “Edition 3 customers spend more time customizing the app” and “Edition 1 and 2 customers maintain higher levels of engagement”. Based on these conclusions, we began categorizing our features – and we picked up some interesting insights.

By way of example, let me share one of our insights: in our original guess at editions, we had expected to include “Vacation Management” features in all three editions. Looking at our data, however, we determined that although all three segments used our vacation features, our Segment 1 customers weren’t making any use of our custom time off types (in TribeHR you can add custom time off types like “Bereavement” or “Personal”). Based on this insight, we modified our vacation management features and now make custom types available only in our two higher packages.
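In implementation terms, this kind of change amounts to gating features by edition. A minimal sketch, with an illustrative feature matrix rather than our actual configuration:

```python
# Gate features by edition: custom time-off types are available only
# in the two higher packages. The matrix below is illustrative.
FEATURES_BY_EDITION = {
    "Edition 1": {"vacation_management"},
    "Edition 2": {"vacation_management", "custom_time_off_types"},
    "Edition 3": {"vacation_management", "custom_time_off_types"},
}

def has_feature(edition: str, feature: str) -> bool:
    """Check whether a feature is included in a given edition."""
    return feature in FEATURES_BY_EDITION.get(edition, set())

print(has_feature("Edition 1", "custom_time_off_types"))  # -> False
print(has_feature("Edition 3", "custom_time_off_types"))  # -> True
```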

As a final cautionary note, one of the challenges we had early in the analysis was correcting the data for confounding influences. For example, companies that attracted more job candidates would naturally have more comments in the system about those candidates, which artificially inflated the number of comment events tracked. To balance our conclusions, we modified some of our tests and frequency calculations to measure event frequency as a ratio to other events. In the above example, we changed our algorithms to measure “comments per applicant per month” rather than just “total comments per month”. Similarly, we adjusted other measures to be “events per user per month” rather than just “total events per month”.
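A small sketch of that normalization step (the function and field names are hypothetical):

```python
# Measure events as ratios rather than raw totals so that busy
# accounts don't artificially dominate the analysis.
def comments_per_applicant_per_month(total_comments: int, applicants: int, months: int) -> float:
    """Normalize comment volume by applicant count and time."""
    if applicants == 0 or months == 0:
        return 0.0
    return total_comments / applicants / months

def events_per_user_per_month(total_events: int, users: int, months: int) -> float:
    """Normalize any event count by headcount and time."""
    if users == 0 or months == 0:
        return 0.0
    return total_events / users / months

# Two companies with very different raw totals can have the same rate:
print(comments_per_applicant_per_month(300, 100, 3))  # -> 1.0
print(comments_per_applicant_per_month(12, 4, 3))     # -> 1.0
```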

After evaluating and calibrating all our measures, it became a simple task to distribute our feature set across editions.

Taking it Back to the Customer

Ultimately, this exercise is designed to build revenues by maximizing customer satisfaction with the product they choose. To make sure this happens, you’ll need to confirm that you’ve segmented your customers properly, and then you’ll need to go through a price-definition exercise for each edition. Although I don’t cover the pricing question in this post (others have covered it much better than I), I can describe our validation work.

To validate our segmentation, we drew a random sample of customers and sent them a short survey email that followed this pattern:

Hi John, I hope you are doing well!

We are surveying a few people so that we may better understand customer needs. I would very much appreciate it if you could take a moment to identify which of the following statements you identify most with. If you have comments or questions, as always, please do not hesitate to pass them along!

1. Value statement from Edition 1
2. Value statement from Edition 2
3. Value statement from Edition 3

Thanks in advance for taking the time,

Donna

With this simple email, we were able to generate a 64% response rate within 24 hours.

When comparing the self-identified segmentation to our automated segmentation, we were initially surprised that we had only a 62% accuracy rate. Looking at the accounts where the results didn’t match, we found that the incorrectly segmented customers had each been classified primarily on the basis of two specific indicators. Once we removed those indicators and re-ran the segmentation, the accuracy rate rose to 89%.
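The validation math itself is straightforward; a quick sketch with illustrative data:

```python
# Compare automated segmentation against self-identified survey answers
# and compute the match (accuracy) rate. Account names are made up.
def accuracy(automated: dict, surveyed: dict) -> float:
    """Fraction of surveyed customers whose automated segment matched."""
    matches = sum(1 for cid, seg in surveyed.items() if automated.get(cid) == seg)
    return matches / len(surveyed)

automated = {"acme": "Edition 1", "globex": "Edition 3", "initech": "Edition 2"}
surveyed  = {"acme": "Edition 1", "globex": "Edition 2", "initech": "Edition 2"}
print(accuracy(automated, surveyed))  # -> 0.67 (two of three matched)
```

Removing the two noisy indicators and re-running this same comparison is what produced the jump from 62% to 89%.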

Although one could short-circuit the segmentation process by going straight to the survey, the benefit of segmenting first and verifying afterwards is that we now have a reliable set of indicators we can use to identify prospects for up-sells (i.e. an account that shows Edition 3 indicators but is only subscribed to Edition 2) or to identify accounts at risk of downgrading or churning.
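A sketch of how those indicators can flag such accounts (the edition ranking is the obvious ordering; the function name is mine):

```python
# Compare the edition an account behaves like (per the indicators)
# against the edition it actually subscribes to.
EDITION_RANK = {"Edition 1": 1, "Edition 2": 2, "Edition 3": 3}

def account_signal(subscribed: str, behaves_like: str) -> str:
    """Classify an account as an up-sell prospect, a churn risk, or well matched."""
    diff = EDITION_RANK[behaves_like] - EDITION_RANK[subscribed]
    if diff > 0:
        return "up-sell prospect"
    if diff < 0:
        return "downgrade/churn risk"
    return "well matched"

print(account_signal("Edition 2", "Edition 3"))  # -> "up-sell prospect"
```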

Since rolling out the new pricing and feature set, we’ve been very happy with the results. Although I can’t share the actual numbers, the metrics we watch to gauge success are:

  • Click-Through-Rate on our pricing page
  • Upgrades & Downgrades between pricing packages
  • Breadth of Feature Usage
  • Requests for Features to be moved between packages

Pulling it All Together

Looking back, the entire process took a little over six weeks to complete and prompted very few customer concerns. I attribute this success, in large part, to the care we took in defining the new packages.

Finally, for those who just want the summary, here is the PowerPoint version of engineering your price fencing in an enterprise SaaS product:

  1. Identify the value segments in your market.
  2. Separate existing customers into those value segments.
  3. Measure feature usage within each segment to define feature sets.
  4. Validate segmentation and establish pricing.
  5. Sell lots of software.



