Solution Validation - A Primer
Recently I’ve been thinking about my time over the past 20 years or so as a leader in technical alliances, partner solutions architecture and partner engineering work. There are certain activities that come up again and again, but when they do I find that we don’t often have a common vocabulary or conceptual framework for talking about these activities. Since we don’t have a common framework, it’s difficult for various groups in the company to rationally assess whether we’re spending good money on the go-to-market collateral and technical project work, or if we’re simply wasting it and producing documents and technical results that never get looked at.
So I thought I’d write here about the topic of Solution Validation, in the hopes of providing us all a common way of talking about the kinds of work that partner engineers, partner solutions architects, and technical marketing staff do all day long.
(note: if you’ve worked for me in the last 15 years, you probably have seen this; you may safely skip to the end of this posting haha)
Scope of Discussion
For the sake of this posting, I’m going to focus on product-based solutions, meaning what happens when you are creating a technical solution that combines several technologies, usually your company’s tech with other layers in the tech stack from your partners (e.g., database, middleware, application logic, cloud management, etc.). Out of scope for this discussion: services-based solutions, like the offerings you would see from a global systems integrator like IBM Consulting, TCS, Accenture, Infosys, etc. (although I claim that some of the principles I’ll talk about here also apply for services-based solution validation).
Solution Validation Is A Customer Promise
Why bother doing the work to validate a solution that you’ve built? You do the work because if you’re building a solution from multiple companies’ technologies, you’re trying to give assurances to your joint customers that it’s okay for them to buy all this stuff and deploy it to solve their business problem.
There are different ways to validate a solution; validations have different “depths,” and a simple way to think about it is this: going from shallow to deep, your solution validation is telling the customer:
(shallow) “This solution runs”
(deeper) “This solution runs well”
(deepest) “This solution runs well for me”
Let me explain each of these in a bit more detail.
Shallow Validation: The Solution “Runs”
This is the most basic, shallowest level of validation. At this level, you’re basically telling the customer that when they buy this set of products and run them, the companies whose products make up the solution will take the customer’s support call. It’s that simple. We’re not saying the solution is particularly cost-effective or high-performance or easy to stand up and run, but if you do use it, we’ll take your call and support you. That’s not a great message, but it removes some basic risk for the customer; if they trust your support organization, it’s a decent message to send.
Types Of Proof
So what kinds of evidence serve as validation at this level? Primarily, it’s product certification. You know, where you go to a company’s website to find out if version X.Y of the application is certified to run on version Z of the other company’s operating system.
When I worked at Sun Microsystems, we partnered with database vendors and storage vendors and spent a lot of time producing these multi-variable product certification matrices, saying things like “This disk drive is certified to work on that server product line with Solaris 9 and above.” Every time the OS, hardware, or storage vendor came out with a new version of their product, we’d run a new set of certification suite tests.
How To Save Money Here: all of that testing can get expensive and time-consuming, but if you keep in mind that the only purpose of this kind of validation is to assure the customer you’ll take their support call for the set of product versions in the solution, you can do this more cheaply. For example, you can decide not to test every single combination if you know that a family of related products is functionally the same. You’re taking a small chance that something actually is functionally different, but usually it isn’t. You could go further than that and simply decide not to test at all. If you’re that confident in your products and your partners, you can always just decide to take the customer’s support call without doing all that up-front testing.
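To make the “don’t test every combination” idea concrete, here’s a minimal sketch (in Python, just for illustration) of reducing a certification matrix by treating a functionally-equivalent product family as a single test target. The product names and family groupings are invented for the example; a real program would drive this from the partners’ actual support matrices.

```python
from itertools import product

# Hypothetical example: reduce a certification matrix by testing one
# representative SKU per functionally-equivalent product family.
# All product names and groupings below are invented for illustration.

storage_families = {
    "sas-array-gen3": ["ST-3100", "ST-3200", "ST-3400"],
    "nvme-array-gen1": ["NV-1100", "NV-1200"],
}
os_versions = ["Solaris 9", "Solaris 10"]
app_versions = ["App 4.1", "App 4.2"]

# Full matrix: every SKU x OS x application combination.
all_skus = [sku for skus in storage_families.values() for sku in skus]
full_matrix = list(product(app_versions, os_versions, all_skus))

# Reduced matrix: one representative SKU per family.
representatives = [skus[0] for skus in storage_families.values()]
reduced_matrix = list(product(app_versions, os_versions, representatives))

print(f"Full matrix: {len(full_matrix)} certification runs")      # 20
print(f"Reduced matrix: {len(reduced_matrix)} certification runs")  # 8
```

The promise to the customer doesn’t change (“we’ll take your support call for any product in a certified family”); you’ve just cut down the number of test runs you’re paying for.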
Medium-depth Validation: The Solution Runs Well
Now we’re starting to get into more nuanced, valuable promises to your customers. At this level what you’re saying about your solution is that you know how to deploy the solution economically on the underlying infrastructure in that solution. You’re probably proving this by showing you know how to tune the various components of the solution so that the customer gets good performance and doesn’t overpay for running the solution on the infrastructure. You’re telling your customer “We’ve actually stood up the solution ourselves and we know enough about the components that we know how to tune it to perform well. And we can show you how to do that yourself.”
Types of Proof
At this deeper level of validation, there are a few kinds of activities and collateral you can use. Among these, some are easier to do, some are harder. I would say that in general, the harder the project, the greater the ultimate value for the customer. And you don’t always have to go for the most high-value option to serve your purpose.
Installation / configuration guides: pretty self-explanatory, the idea is to be as clear as possible about the steps to stand up the parts of the solution together. This reduces Time-To-Value for the customer.
Reference Architectures: I think of a good ref arch as a combination of an architectural overview + an installation guide + a tuning guide + a few “case studies”, showing how to modify the tunables for “t-shirt sizing” (i.e., running the ref arch in small / medium / large configurations). A ref arch that only shows the tuning parameters for one size is not providing much trust; when you show how the tunables change depending on the size of the configuration, that’s when the customer can see that you really did try this solution.
Sizing Guides / Sizing Studies: A sizing guide answers the customer question “How much hardware and software must I buy to support <X> concurrent users and <Y> total users for my workload?” Simple as that. A good sizing study also helps the vendors building the solution figure out configuration pricing: how many units of each product the customer needs to buy (how many servers, machine instances, seats, how much storage capacity, etc.). I’ve put a rough sketch of this kind of back-of-envelope sizing math after this list.
Competitive Benchmarks: These are increasingly uncommon in enterprise software, in my view, and they are complicated and difficult to build and execute well. I’ve seen benchmarks that have taken literally years to complete (I’ll write about the value of benchmarks in a different post). There are very few application benchmarks; most of the benchmarks I see are lower-level or generic benchmarks (like the SPEC benchmarks, which are meant to test standardized compute platforms, not specific business applications). One way to think of a good benchmark is as a superset of a reference architecture: it clearly describes the SUT (System Under Test) architecturally, explains how to configure and tune it, and includes reams of data showing performance results and analysis. A good benchmark is hard to do, and well worth trusting.
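As promised above, here’s a rough sketch of the kind of arithmetic a sizing guide ultimately boils down to, expressed as a t-shirt-sized capacity estimate. Every number in it (per-instance throughput, request rates, utilization headroom) is a placeholder I invented for the example; producing trustworthy values for those numbers is exactly what the sizing study is for.

```python
import math

# Back-of-envelope sizing sketch: roughly how many application-server
# instances are needed for a target concurrent-user load?
# Every constant here is a made-up placeholder; a real sizing study measures them.

TSHIRT_SIZES = {
    # size: (concurrent_users, requests_per_user_per_second)
    "small":  (500,    0.5),
    "medium": (5_000,  0.5),
    "large":  (50_000, 0.5),
}

REQS_PER_SEC_PER_INSTANCE = 400  # assumed measured per-instance throughput
TARGET_UTILIZATION = 0.7         # plan to run instances at ~70% of capacity

def instances_needed(concurrent_users: int, reqs_per_user: float) -> int:
    """Estimate the app-server instance count for a given concurrent load."""
    peak_rps = concurrent_users * reqs_per_user
    return math.ceil(peak_rps / (REQS_PER_SEC_PER_INSTANCE * TARGET_UTILIZATION))

for size, (users, rate) in TSHIRT_SIZES.items():
    print(f"{size:>6}: {users:>6} concurrent users -> "
          f"{instances_needed(users, rate)} instances")
```

The interesting output isn’t the instance count itself; it’s that the inputs and the math visibly change as you move from small to medium to large, which is the same reason a one-size reference architecture doesn’t build much trust.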
Deep Validation: The Solution Runs Well For Me
When you think about it, the best validation a customer can get from the vendors is knowing that the vendors have actually seen this solution in action with a customer very much like you. “Much like you” could mean same industry, same customer size, same geography, etc., depending on which factors are most relevant. Put yourself in the customer’s shoes here: it’s one thing to see a generic set of performance tests (reference architecture, benchmark, etc.); it’s another to know that your key competitor is already successfully deploying the solution you’ve been looking at.
Types of Proof
In my view, the kinds of proof that work best here aren’t so deeply technical; they rely more on the use cases you get from your services organization, your field salesforce, and the partners’ sales staffs. Some examples:
A customer reference (collateral describing the customer that uses the solution, plus a high-level description of the solution, plus the customer’s willingness to talk to the vendors’ prospects about their experience working with the solution and the vendors)
First Customer Win documented (a case study with information to help the salesforce identify suitable customers for the solution, and talking points to arm the salesforce to use in conversations with these prospective customers).
Joint collateral like a Solution Brief: shows the joint value proposition of the integrated solution for this type of customer. In other words, the vendors get together and lay out the overview of the solution, describing why this combination of vendors together offers unique value for a particular type of customer.
Levels Of Effort
These validation activities vary in the amount of time and effort they take. Typically, the certification validation work is the technically least complicated. As I mentioned before, the vendors can even choose to bypass certification tests entirely if they’re confident that the support organizations can handle incoming requests. I wouldn’t recommend that if this is the first time you’re creating a particular solution, but for mature product solutions, you can start to make assumptions about how safe a new revision of a solution is and whether you need a full round of certification testing. In my experience, certifications can take from days to weeks, plus the coordination time involved in getting together multiple vendors’ products.
At the deeper “Runs Well” validation level, things get more complex. An install / config guide is still pretty straightforward; it just takes time to fully document the steps your solution architects take in their work. We’re talking a few weeks here, maybe less. A reference architecture adds in a set of sample data for testing and different tuning parameters. A good reference architecture can take as little as 2-3 weeks if you’re simply updating previous work with not much changed; it can take a few months if you’re starting from scratch.
Sizing studies and benchmarks are the most involved here, because you’re usually building a sample data set and testing suite, and you’re trying to scale up the data set. That’s usually trickier than the engineers think when they first create their test suite. I’ve seen a good sizing study take as little as 3 months. An application benchmark: more like 3 to 12 months, including multiple revisions to get the sample data right and a good handle on the tunables.
At the deepest “Runs Well For Me” validation level, I don’t have good time estimates for the customer references, but building a Solution Brief is not usually a heavyweight effort; most of your time is taken up coordinating amongst the vendors involved in the solution, plus a few days of focused writing work crafting the joint value proposition and maybe an architecture diagram of the solution.
Wrapping Up
In my experience, thinking about what proof we need to provide to a customer is what best drives solution validation activities; having a framework like what I’ve attempted to lay out here helps you negotiate with your partners about how much effort to spend, and what tools your salesforce needs to open conversations with customers and close deals with them.
I’d love to hear from you if you have a different way of thinking about solution validation. What customer questions are your validation efforts answering? Do you have additional kinds of solution validation activities that your organization does?