The Coronavirus pandemic forced a lot of knowledge workers into a virtual working environment. At this point, organizations seem to have wholly embraced digital collaboration technologies, enabling a relatively smooth transition to a fully remote work environment. Yet we are still figuring out how to approach some situations in a remote environment, and group decision-making is one of them.
While newly remote teams might initially find it challenging to generate alignment around high-stakes decisions, the nature of remote work does present an opportunity to take a more intentional approach to this decision-making. Leveraging the right tools and frameworks to make these decisions can drive increased transparency, alignment and, ultimately, lead to better decisions.
In this brief guide, we’ll take a look at when and how distributed teams should make collaborative decisions.
When should we make decisions collaboratively?
First, let’s take a look at the basic situations in which you should involve the larger group in the decision-making process:
When the stakes are high
This one is probably obvious: when the stakes are high, you, as a decision-maker, need to be as confident as you possibly can be. Involving others can help you gain more confidence in the chosen course of action and make the decision more defensible.
Big strategic decisions, for example, should involve a number of individuals with relevant expertise to provide a rounded perspective and reduce the risk of making a wrong choice. Using purpose-built tools, like Rationalize, can help in facilitating these types of high-impact decisions.
When the uncertainty is high
You, the decision-maker, might not have all the pieces of information, and thus the problem might seem hazy. In these situations, when uncertainty is high, involving a group of individuals with relevant expertise can help reduce it. Identifying the specific areas of uncertainty and involving individuals with knowledge of that subject matter can reduce the uncertainty even further.
For example, if you are a design engineer deciding between different prototypes of a product, you might have a good understanding of the technical pros and cons of each design but not much insight into how customers would view the product. In this situation it might be a good idea to bring in an end user, or at least a marketing person, to represent that point of view when making the decision.
When others have a stake in the decision
When the outcome of your decision directly affects others, it might be a good idea to solicit their perspectives. Sometimes, of course, this might be the exact reason not to involve them: their objective judgement can be swayed by their own self-interest. In general, however, it is beneficial to at least understand their perspective on the problem. Whether you factor that perspective into your final calculus is up to you.
How to make collaborative decisions
Now let’s take a look at what you can do to get the most out of your group decision.
Involve a diverse group of people with relevant expertise
First of all, make sure you are soliciting expertise from individuals who understand the subject yet bring different perspectives. A wide range of perspectives can be quite useful in eliminating blind spots in your decision-making.
For example, when trying to decide which product strategy to pursue, an organization might want to bring in people with expertise in marketing and business development to provide a customer perspective, people from engineering to provide insight into technical feasibility, and operational experts to understand potential issues when scaling. The important part here is that everyone involved understands the basic subject matter, yet approaches it from their unique perspective.
For a better insight into how you can leverage diverse expertise when making strategic decisions like entering a new market, take a look at our case study.
Leverage an objective decision-making framework like weighted decision matrix or MCDA
Having a structured decision-making process is crucial for distributed teams. While “hashing it out” in a conference room might work (imperfectly) for on-site teams, when it comes to distributed teams you need some organizing framework to approach a decision.
Using an MCDA framework (also known as a weighted decision matrix or grid analysis) can help ensure you are solving the problem in a systematic and proven way. If you need more background on these frameworks, feel free to check out our guide on evaluating ideas using grid analysis.
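To make this concrete, here is a minimal sketch of a weighted decision matrix in Python. The options, criteria, weights, and scores below are purely hypothetical illustrations, not recommendations:

```python
# Minimal weighted decision matrix (MCDA / grid analysis) sketch.
# Options, criteria, weights, and scores are hypothetical.
options = ["Option A", "Option B", "Option C"]
criteria = {  # criterion -> weight (weights sum to 1.0)
    "Revenue potential": 0.5,
    "Technical feasibility": 0.3,
    "Time to market": 0.2,
}
scores = {  # scores[option][criterion] on a 1-10 scale
    "Option A": {"Revenue potential": 8, "Technical feasibility": 5, "Time to market": 6},
    "Option B": {"Revenue potential": 6, "Technical feasibility": 9, "Time to market": 7},
    "Option C": {"Revenue potential": 7, "Technical feasibility": 7, "Time to market": 4},
}

def weighted_score(option):
    """Weighted sum of the option's scores across all criteria."""
    return sum(weight * scores[option][crit] for crit, weight in criteria.items())

ranked = sorted(options, key=weighted_score, reverse=True)
for option in ranked:
    print(f"{option}: {weighted_score(option):.2f}")
```

Each option’s total is simply the weighted sum of its criterion scores, so the option that best balances the weighted criteria rises to the top.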
Use purpose-built tools
While spreadsheets and Slack might work for simple decisions, when you are looking to solicit input from many stakeholders and really analyze the responses, leveraging grid analysis tools like Rationalize.io can save a lot of time while also providing built-in analytical features to really understand how people are thinking.
In summary, you should involve others when the stakes and uncertainty are high and when the outcomes of the decision affect others. Once you decide to involve a larger group, ensure you are soliciting the perspectives of a diverse group of people with relevant expertise. Use proven approaches like MCDA (also known as the weighted decision matrix approach) and leverage purpose-built tools like Rationalize to set up your decision, solicit perspectives, and analyze the outcomes.
At the beginning of 2021, Zoom Video Communications (Zoom) had a market capitalization of $116B. Zoom was the result of the inability of Cisco Systems (market cap $189B) to recognize a huge new opportunity. Eric Yuan, the founder of Zoom, was one of the first 20 employees of WebEx, which was acquired by Cisco Systems in 2007. Yuan became Cisco’s vice president of engineering, and in 2011 he pitched a new smartphone-friendly video conferencing system to Cisco management. The idea was rejected. In frustration, Yuan left Cisco to establish his own company, Zoom Video Communications. Cisco’s WebEx is now struggling to catch up.
Everyone who has tried to get support (and resources) for a new concept within their large company undoubtedly has their own story of an idea that their company declined to pursue and that eventually became a great business success…for someone else.
SORTING THE WHEAT FROM THE CHAFF WHEN UNCERTAINTY IS HIGH
Making decisions about a future concept that does not yet exist is fraught with uncertainty. In the face of uncertainty people’s opinions and biases come through. Of the different types of uncertainty facing a new concept, organizational uncertainty is by far the most pernicious and the most underappreciated. Companies can deal with technological and market uncertainty, even ecosystem uncertainty, using innovation processes, methods and tools developed over the decades. But navigating the internal webs of support and resistance within a large company is what almost always makes or breaks the success of a good new concept.
A good decision support system helps address organizational uncertainty. It consists of processes, methods and tools that support making decisions inside complex organizations. It increases the quality of decision making by reducing the overall error rate – both chasing losers (false positives) and passing on winners (false negatives). In addition, a good decision system provides support for making decisions in the face of deep uncertainty, when there are many different perspectives, opinions and biases that come to the fore.
It is no longer enough for a decision system to just indicate if an idea is good or not – a go or no-go. It is increasingly necessary for a decision system to help an idea navigate the complex intra-company dynamics that can make or break its coming to fruition.
COLLABORATIVE CONCEPT EVALUATION TO ENHANCE DECISION MAKING UNDER UNCERTAINTY
In the course of working with hundreds of companies on multiple innovation initiatives, we have seen firsthand how difficult it is to get a transformational new product, service or business model accepted within a company.
The reason for this is understandable. Many companies have been burned by ‘great ideas’ that haven’t lived up to their promise. It is much easier to say no (and not realize the idea was great until someone else does it) than it is to say yes, spend a lot of time, effort, and money, only to have the new concept fail. The reaction is so common it is even enshrined as one of the many cognitive biases afflicting decision makers: status quo bias. This bias surfaces when we are faced with ideas for new offerings, business models or strategies that the company is not used to. This is especially true for ideas that are farther from the core competency of the company but have the potential to be disruptive.
Soliciting group input often goes under the name “the wisdom of crowds”, and it is a proven way to evaluate alternatives. In his book of the same name, James Surowiecki cites four conditions required to achieve ‘wisdom’: (1) independent opinions, (2) diverse opinions, (3) decentralized input, and (4) a way to aggregate the results. While his focus is on the merit of the ideas, if you approach the fourth point, aggregating the results, in the right way, you can hold a mirror up to the crowd and, for a business, use that to reduce organizational uncertainty.
USING COLLABORATIVE CONCEPT EVALUATION
Not all input aggregators and crowd voting systems focus on both the idea and the crowd. A collaborative concept evaluation tool like Rationalize is one of the exceptions. It provides the base “wisdom of the crowd” on the alternatives, but also allows one to gauge the “Mind of the Company” in all its complexity and diversity. With a window into the Mind of the Company, you can see where opinions within the organization differ and have a means to figure out why. Having an idea the company can say “yes” to is more important than having an attractive idea the organization will tacitly reject sooner or later.
A collaborative concept assessment system requires four key components.
A set of concepts to be evaluated. These can be any type of concept – a strategy, a design, an idea for a new offering, etc. – but the set of concepts should be plausible alternatives to each other. In other words, evaluate like against like.
A set of criteria upon which the concepts should be evaluated. These criteria can be anything that can be ranked on an ordinal scale (e.g., 1 to 10) based on the evaluative opinion of an individual. In the above example, one criterion may be “Potential for significant revenue within 3 years” with 1 being “no potential” and 10 being “it’s certain to happen”.
A group of individuals to do the evaluation. These are people, usually within the organization but not necessarily limited to that, who have the requisite knowledge and expertise in their respective areas to render a reasonable evaluation. They do not all need, and indeed should not all have, the same perspectives, mindsets or even types of expertise and knowledge. The larger and more diverse the group the better.
A means of categorizing individuals doing the evaluation. Classifying every individual according to specific criteria, for example seniority, geography, and function, is key to figuring out the patterns of thought within the company. Does executive leadership see things differently than individual contributors? Does Europe have a different perspective than the US? Does marketing know things that R&D does not, or vice versa?
A tool like Rationalize has all these components, as well as many other features that provide even more sophisticated analysis, to collect the respective evaluations of a selected ‘crowd’.
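As a rough illustration of how categorization enables this kind of analysis, the sketch below groups individual scores by a categorization attribute and averages them. The evaluators, categories, and scores are entirely hypothetical:

```python
# Sketch: surfacing patterns of thought by segmenting evaluators.
# Evaluators, categories, and scores are hypothetical.
from statistics import mean

# Each entry: (evaluator attributes, that evaluator's 1-10 score for one concept)
evaluations = [
    ({"seniority": "executive", "region": "Europe"}, 8),
    ({"seniority": "executive", "region": "US"}, 7),
    ({"seniority": "individual contributor", "region": "Europe"}, 4),
    ({"seniority": "individual contributor", "region": "US"}, 5),
    ({"seniority": "individual contributor", "region": "US"}, 3),
]

def mean_by(attribute):
    """Average the scores within each value of a categorization attribute."""
    groups = {}
    for attrs, score in evaluations:
        groups.setdefault(attrs[attribute], []).append(score)
    return {group: mean(scores) for group, scores in groups.items()}

print(mean_by("seniority"))  # do executives see it differently than ICs?
print(mean_by("region"))     # does Europe differ from the US?
```

The same handful of scores, sliced along different attributes, can tell very different stories, which is exactly why the categorization component matters.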
DECISION MAKING IS KNOWLEDGE DISCOVERY
The insights that collaborative concept assessment can provide for decision-making are far-reaching. By specifying any number of categorization criteria, setting relative weights on specific criteria, or assigning different weights to individuals’ assessments, a wealth of information can be gleaned, down to the individual level.
A collaborative decision support tool such as Rationalize is more than just a way to ‘pick the winner’; it is a tool for discovering the “Mind of the Company”. It is a way to discover how people think about alternatives. These insights are not always easy to get at. It is a means for fostering more in-depth and nuanced discussions than would otherwise happen. It is a tool that should be a part of a decision-making process, especially for decisions where uncertainty is high. The benefits are:
Overcome biases and combat groupthink by bringing in diverse perspectives. Use collaborative assessment tools, such as Rationalize, to tap into the ‘wisdom-of-crowds’.
Understand how the organization sees the world. Categorize individuals and groups to see patterns that are not obvious.
Identify critical follow-up engagements to make sure everyone is heard.
Identify pockets of support and resistance and potential paths to gain alignment.
Understanding the ‘Mind-of-the-company’ is one of the most important and effective things leadership can do when deciding to pursue something that is inherently uncertain.
THE MIND-OF-THE-COMPANY IN PRACTICE
A company is considering entering a new market with a new product and business model. The new product and business model are quite different from anything the company has previously done, and they have the potential to cannibalize at least some of the company’s existing business. Senior leadership needs to decide how to proceed. Note that even getting to this point would have involved many prior decisions about which new product idea to focus on and details about the jobs-to-be-done, value propositions and technologies behind the product concept and its business model. But assuming that this path has been successfully navigated, the current decision the company must make is among the following four options.
Enter the new market by developing a new product and business model
Enter into a joint venture with another organization that has the relevant expertise to develop a product and enter the market
Acquire a smaller organization with an established presence in the target market
Decide not to develop a new product or enter the market
This decision would inevitably involve weeks or months of detailed analysis and preparation of documents and slide decks, involving multiple reviews and iterations, leading up to a presentation to executive management who would then carefully consider these options and render a decision.
However, while being quite enthusiastic about option #1, the new product and business model development path, the leaders understand that the deck they were presented with is a highly processed document that necessarily hides a lot of detail and differences of opinion and perspective. They decide to take an additional step to understand how the rest of the company feels about these options. Perhaps different groups within the firm see the proposed options differently and there are pockets of support and resistance within the company that could affect how well one option, or another, could succeed or fail.
They reach out to a group of 35 people within the company to get their assessment. In a matter of days, they get back the following.
There is a stark difference between the decision the executive leadership is leaning towards and how the rest of the company sees things. While the leaders clearly see new product development as the best market entry strategy, that sentiment is clearly not shared by a large majority of others within the organization. Overall, joint venture and acquisition are the preferred entry strategies for many.
If this were the only information that the results of the assessment provided, then it would not be much use. There is no clear winner. Joint venture and acquisition are relatively close to each other. In addition, the overall results hide some important insights which can be gained if the group is segmented into several categories.
Because a tool like Rationalize allows for secure, individual, independent, cloud-based interactions, it can gather much more information about the individual assessors. In this case, each person who was solicited for their assessment was classified according to the following categories.
Research & Engineering
Business Development & Sales
Marketing
Operations
With the group segmented in these ways, more interesting insights into how the organization feels about the alternatives are available.
Looking at the geographic breakdown, not all regions are aligned. The European office seems to favor a joint venture while North America is in favor of acquisition. Furthermore, each of the offices has a strong preference for one over another.
When looking at the results by department, there is not much difference between the four options. It is notable, however, that the total number of votes for the three alternatives that are not the leadership’s choice vastly outnumbers the votes for internal product development.
Looking at results by seniority indicates that leadership is out of alignment with the rest of the company. While the executives believe new product development is the best path forward, it is the strategy least liked by mid-management and individual contributors.
This information helps leadership navigate the internal dynamics that will, if left unaddressed, present challenges moving forward with any of the options. Understanding which exact individuals or groups might put up roadblocks to, or promote, a given strategy is crucial in generating alignment.
In the case of the four options presented, leadership realizes that they need to have conversations with several people who have strong preferences for one strategy over another to understand their thinking. They schedule follow-up conversations to understand what may not have been considered in the original decision. Some of the questions explored were:
Are there potential roadblocks that might be put up by individuals with strong preferences for one strategy over another?
Do the individuals with strong preferences going against consensus know something that others do not? How can leadership address their concerns before making the decision?
Is there a target organization in the European market with the right expertise in the product development where a joint venture makes a lot of sense?
Is there an organization which the North American office wants to acquire?
What major problems does the North American office see with new product development?
The conversations immediately start providing insights. The leadership team gains a deeper understanding of other perspectives, and its own perspective gradually starts evolving. This helps the leaders understand why their initially preferred path could backfire and why an alternative path could be better. After productive and insightful discussions, leadership identifies a previously overlooked, attractive company in the North American market and pivots to pursue an acquisition strategy.
Those interested in learning more about decision making under uncertainty can find further insights in the following resources.
The Wisdom of Crowds by James Surowiecki – One of the original works and still one of the most popular pieces on when and how independent, distributed input can improve a decision.
In these unpredictable and fast-changing times, individuals and organizations have to make decisions under an unprecedented level of uncertainty. How will the aftermath of the pandemic and the shift to remote work affect Amazon’s decision to build a second headquarters in DC? If you are a commercial real estate company, what do you do with your assets when companies are going remote? How does the rise of autonomous vehicles affect the investment decisions of a large oil and gas company? Success in situations like these hinges on a multitude of factors, all of which increase the level of uncertainty for the decision-maker.
Fortunately, there are concrete approaches and tools a decision-maker can use to reduce uncertainty and gain a clear perspective. Using a structured approach to identify potential solutions and multiple relevant desired outcomes can help ensure the best potential decisions are being considered. Furthermore, using a structured and collaborative approach to understand how well each alternative satisfies the desired outcomes can help select the best strategy to follow.
Below, we are going to explore, in concrete terms, how we can reduce uncertainty in decision-making when the stakes are high.
Step 1: Thoroughly explore the space of solutions and alternatives
While this first step may seem obvious, many people fail to consider the full spectrum of options available to them and settle for only the most obvious alternatives. Taking the time to explore and clearly articulate all the available options is crucial for thoroughly understanding the problem before proceeding to explore the desired outcomes. Some tools you can use to explore the alternatives space:
Consider non-starter options
Considering non-starters can be useful when uncertainty is high. As you explore and break down the problem further, you may find some parts of these non-starters quite useful and may decide to incorporate them into your eventual final choice. Furthermore, if you involve other decision-makers, they may have an entirely different but valuable perspective, so introducing these non-starters can pay off.
Consider hybrid options
People often think about solutions as either one or another but fail to consider permutations that incorporate parts of two or more options.
For example, many companies face “build or buy” decisions when it comes to software. The build option is generally more expensive but allows the company to get an application suited to its exact needs. Buying an application, on the other hand, is generally cheaper but forces the organization to change some of its processes to conform to the way the software workflows are set up. Neither of these options might seem appealing, and thus the decision might seem uncertain. In this scenario, the company might consider approaching one of the software vendors with a proposition to build out the use case on top of the existing platform. This could drive down development costs for the asking organization while providing an application conforming to its exact specs.
Consider doing nothing
Some decisions may appear uncertain when none of the options are appealing. People, however, tend to get stuck in the mindset that they have to do “something”. Considering pros and cons of not doing anything, in this case, can be quite informative.
When considering potential ways to comply with rules and regulations, many companies fail to consider the costs of non-compliance, or at least of not complying with the regulations immediately. Analyzing the costs of compliance in year one versus year two may yield a clearer path forward.
Step 2: Clearly define multiple desired outcomes
Rarely does a decision have just a single success criterion, but that is how a lot of people think about decisions. So before starting to evaluate the alternatives on a single outcome, take some time to think through “What matters in this decision?” Within Rationalize we call these decision criteria, and they are a crucial part of the evaluation setup process. Try some of these techniques to identify multiple outcomes:
Go one level deeper
For every desired outcome you identify, ask “what does it really mean?” and try to go a level deeper. For example, if you are buying a house, one of the outcomes might be “living in a good neighborhood”. This outcome might seem ambiguous and thus introduce some uncertainty into your decision. Asking what that really means could yield a number of outcomes that are more quantifiable and interesting:
Proximity to parks
Identifying more of these desired outcomes or criteria allows us to get to things that are much more concrete and thus drives uncertainty down.
Identify minor outcomes and criteria
Identifying minor criteria might reduce uncertainty when two or more alternatives seem quite close to each other. When you are faced with options that are similar and seemingly of identical value, try to look for outcomes that are less valuable to you but still might be the difference between the options.
Prioritize your desired outcomes
Not everything is equally important (at least not always), so understanding the relative hierarchy of priorities is crucial. Prioritizing your desired outcomes can be done in a variety of ways, some of the most common are:
Pairwise comparison
In pairwise comparison you trade off outcomes one against another. The more outcomes you have, the more pairs you have to rank (n outcomes produce n(n−1)/2 pairs), so this method is only advisable when you have 10 or fewer outcomes to consider.
Ranking
This is a pretty straightforward method where you rank each outcome relative to the others. The result is the same as in pairwise comparison, but the process generally takes less time.
Fixed-point allocation
This is the method we use within Rationalize. The idea is that a fixed budget of points gets distributed among the outcomes. In our case, we use 100 points distributed across the outcomes or criteria.
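As a quick sketch of the fixed-point method, the allocated points simply normalize into weights for the scoring step. The outcomes and allocations below are hypothetical:

```python
# Sketch: fixed-point allocation of priorities (hypothetical outcomes and points).
allocations = {  # outcome -> points out of a fixed budget of 100
    "Proximity to parks": 20,
    "School quality": 50,
    "Commute time": 30,
}
assert sum(allocations.values()) == 100  # the fixed budget forces trade-offs

# Normalize the allocations into weights for later scoring
weights = {outcome: points / 100 for outcome, points in allocations.items()}
print(weights)
```

The fixed budget is the point of the method: giving one outcome more points necessarily takes points away from another, which forces explicit prioritization.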
Step 3: Systematically score your alternatives
Now that you have your alternatives and outcomes identified, you can move on to understanding how well each alternative satisfies the criteria or outcomes you have identified. Using a basic decision matrix approach works really well here; you already have all the ingredients to do the ranking.
You can use a spreadsheet to run through this type of evaluation, but we recommend a tool like Rationalize.io because it allows you to leverage the wisdom of the crowd to further reduce uncertainty, which is the next step.
If you need more insight into how to do a decision matrix evaluation, check out our article on that here.
Step 4: Solicit other perspectives
Up to this point, we have taken a rational approach to understanding the problem and breaking down the solution space. You might have a decent perspective already. However, your perspective is inherently limited by your own experiences and biases. Involving other individuals in the decision-making process is a great antidote to the built-in bias that your individual perspective brings.
As a rule, people are happy to contribute their perspective – if you are asking for it, then you probably value their opinion. However, there are a number of things you can do to make it easier for these individuals to provide their perspective while also making it more valuable for yourself. Here’s what you can do:
Make it easy for people to contribute
If you are not paying people to provide their perspective, then making it super easy for them to contribute is crucial. Using tools like Rationalize.io is the best way to make it as seamless as possible for these people to contribute their opinion. In fact, enabling collaboration on complex decisions is the main reason we decided to build this platform. So take advantage of these purpose-built tools to make it easy for your responders and yourself to make the best decision.
Find points of disagreement
Understanding where people disagree can help identify where the real uncertainty lies. A simple way to find the points of disagreement is to calculate the standard deviation across the responses for each alternative. Rationalize.io comes with this type of functionality built in, making it very easy to understand where the real uncertainty is.
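A minimal sketch of this idea, with hypothetical alternatives and scores:

```python
# Sketch: locating disagreement via the standard deviation of responses.
# Alternatives and scores are hypothetical.
from statistics import stdev

# responses[alternative] = scores (1-10) from different respondents
responses = {
    "Build in-house": [8, 7, 8, 7],      # broad agreement, low spread
    "Buy off the shelf": [2, 9, 3, 10],  # sharply divided, high spread
}

disagreement = {alt: stdev(scores) for alt, scores in responses.items()}
most_contested = max(disagreement, key=disagreement.get)
print(most_contested)  # the alternative worth a deeper conversation
```

Note that two alternatives can have the same average score while one is broadly agreed upon and the other sharply contested; the spread, not the average, is what points you to the conversations worth having.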
Talk to the outliers
Once you have a feel for how people and groups think, you can start seeing where they agree and disagree. These points of disagreement are where you should dig deeper and explore why people disagree. Conversations with people who buck the consensus can be particularly interesting. Do these individuals know something others don’t? In organizations, it is often important to ensure that the key decision-makers are aligned, so if you have a key stakeholder who is not aligned with the group, it is often crucial to understand what needs to happen to bring them along.
Figure out how groups of people think
If you were able to get a sizable group of people to contribute to your decision, you might want to consider segmenting this population to understand how similar people think. This technique is particularly useful when making cross-departmental decisions in large organizations. Figuring out how the perspective of marketing is different from that of engineering can provide some very good insights.
Summing it up
Making decisions with a high level of uncertainty is often a daunting task, especially when the stakes are high. Fortunately, there are some concrete tools a decision-maker can leverage in order to reduce uncertainty and gain the confidence to stand behind the decision. The four basic steps are:
1. Thoroughly explore the space of solutions and alternatives
2. Clearly define multiple desired outcomes
3. Systematically score your alternatives
4. Solicit other perspectives
There has been a lot written about how to “best” evaluate ideas in an innovation process. Practically every company working within or adjacent to the space of innovation has created some kind of framework they use to evaluate ideas. While it is not up to us to determine which particular framework works best, we can outline some of these approaches. Furthermore, we want to provide our readers with tools to quickly and easily use these frameworks for their own idea evaluation; after all, that is why we created Rationalize.
The frameworks outlined are broken down into two types: simple and complex. The simple frameworks are better for early-stage idea vetting. When the stakes are low, it doesn’t make a lot of sense to spend a considerable amount of time trying to gauge the merits of ideas that have not been clearly defined. But the more time that has been devoted to each individual idea, the more attention should be paid to vetting it. In addition, the more we know about each individual idea, the deeper we can go in assessing its merits.
Finally, there will likely be more posts along these lines; there are a lot of evaluation frameworks out there, and we aim to provide a complete library of them. Let me know if there are frameworks you’d like to see featured.
With that, let’s dive into the actual scorecards.
Simple Evaluation Frameworks
These approaches to idea evaluation are slightly more complicated than a general thumbs up/thumbs down method. Still, they provide a basic amount of rigor when it comes to evaluation. Use these frameworks at an early stage of idea development, when you do not have a clearly defined concept.
Another simple framework for prioritization. Without diving into the details, organizations can sort out their priorities based on:
Time: How long will it take to execute a project (a change, a test, or full scale roll-out) until its completion? This includes staff hours/days to execute and the number of calendar days until the project’s impact would be recognized. A score of 5 would be given to a project that takes the minimal amount of time to execute and to realize the impact.
Impact: The amount of revenue potential (or reduced costs) from the execution of your project. Will the project impact all of your customers or only certain segments? Will it increase conversion rates by 1 percent or by 20 percent? A score of 5 is for projects that have the greatest lift or cost reduction potential.
Resources: The associated costs (people, tools, space, etc.) needed to execute a project. Keep in mind: No matter how good a project is, it will not succeed if you do not have resources to execute an initiative. A score of 5 is given when resources needed are few and are available for the project.
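A minimal sketch of how such a Time/Impact/Resources scorecard could be totaled. The project names and scores are hypothetical, and the simple unweighted sum is just one reasonable way to combine the three dimensions:

```python
# Sketch of the Time / Impact / Resources scorecard described above.
# Projects and 1-5 scores are hypothetical.
projects = {
    "Checkout redesign": {"time": 4, "impact": 5, "resources": 3},
    "New loyalty program": {"time": 2, "impact": 4, "resources": 2},
}

def total(card):
    """Simple unweighted sum of the three dimensions (higher is better)."""
    return card["time"] + card["impact"] + card["resources"]

# Rank the candidate projects from highest to lowest total score
for name, card in sorted(projects.items(), key=lambda kv: total(kv[1]), reverse=True):
    print(name, total(card))
```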
The Timmons framework focuses its attention on the fit among three elements it deems critical to evaluating a startup opportunity:
The merits of the opportunity itself.
Because the framework focuses on the interaction between the three factors, simply scorecarding each factor is not as effective as thinking through their interactions. Nevertheless, this framework can be useful for evaluating ideas in a pinch.
Complex Evaluation Frameworks
These are a bit more involved than the frameworks outlined above. The more time and attention you devote to your ideas, the more dimensions you are able to assess. These frameworks add levels of abstraction, rolling sets of criteria up into an aggregate score. Use them when evaluating more mature, well-formed opportunities.
The Should/Could tool is used to compare a set of opportunities on their inherent potential and the company’s capabilities to deliver on that potential. It allows a team to independently assess a set of opportunities along two dimensions to create the Should/Could canvas shown below.
The 2 main dimensions are defined as follows:
The Should Dimension is about “Should anyone” pursue the opportunity (not “Should we“). It is an assessment of the future potential of the opportunity in the world, regardless of who creates it. Think of the “should” axis as what will drive the S-curve of adoption.
The Could Dimension is about “Could we” pursue the opportunity successfully. It’s an assessment of the company’s current and future capabilities against what the opportunity will require – how well we could create such an opportunity (alone or with partners) and the likelihood that we could be successful. Think of the “could” axis as how the company can participate in the evolution of the S-curve.
Each of the dimensions, in turn, has its own set of sub-criteria, allowing a more thorough assessment of the Should and Could dimensions.
The framework is particularly useful for assessment of a more mature set of opportunities as the criteria used to evaluate these ideas are rather in-depth. Thus a deeper understanding of market size, trends and forces as well as organizational capabilities is required.
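Since each dimension rolls its sub-criteria up into an aggregate score, the mechanics can be sketched as follows. The sub-criteria names and scores here are hypothetical, and averaging is just one plausible way to aggregate:

```python
from statistics import mean

def dimension_score(sub_scores: dict) -> float:
    """Roll a dimension's sub-criteria up into one aggregate score.

    Uses a plain average; a real assessment might weight sub-criteria
    differently.
    """
    return mean(sub_scores.values())

# Hypothetical sub-criteria for one opportunity (scored 1-10)
should = dimension_score({"market size": 8, "trend strength": 6, "adoption drivers": 7})
could = dimension_score({"existing capabilities": 5, "partner ecosystem": 7, "time to market": 6})
# The (should, could) pair places the opportunity on the Should/Could canvas.
```

Plotting each opportunity's pair on the canvas makes it easy to spot ideas the world wants but the company cannot yet deliver, and vice versa.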
The framework was designed “specifically to help address the needs of the substantial and growing number of new and serial entrepreneurs, informal investors, students of entrepreneurship, as well as formal investors who favor an additional opportunity-screening model to recommend or use.” The authors seem to have tried to popularize the framework by using mnemonics for every set of factors and sub-criteria.
The framework breaks out five main factors which are:
Each one of the factors includes a number of sub-criteria, which can be used to analyze the ideas more thoroughly. The Product/Service factor, for example, includes the following criteria: Superiority, Uniqueness, Protection, Ethicality, Readiness, and Business model (SUPERB). The rest of the factors have their own sets of sub-criteria.
I hope these examples have been useful in providing some context for idea evaluation. Check out our library of idea evaluation frameworks in the Rationalize template library and, once again, drop me a note if there is a framework you’d like to see featured. Now feel free to try these out and make your own evaluation at Rationalize.io
Grid Analysis (also known as Decision Matrix, Pugh Decision Matrix, Weighted Scorecard, and others) is a framework for evaluating ideas and making decisions that uses a set of weighted criteria to rank the ideas. Each idea is evaluated against each criterion and assigned a score based on how well it satisfies that specific criterion. Each criterion, in turn, is weighted according to its importance to the decision-maker. The end result is a ranked list of ideas. The detailed process is described below.
What are the Benefits of Using Grid Analysis to Evaluate Ideas?
The main benefit of using Grid Analysis for decision-making and idea evaluation is that it makes the process more objective. By using a concrete set of criteria with distinct weights, the decision-maker can inject a certain amount of rationality into the process and, subsequently, more easily justify the decision to others if needed.
Where is Grid Analysis Used?
Grid Analysis can, in theory, be used for any decision that requires selecting among a number of concrete alternatives. In practice, it is widely used in design engineering, capital allocation, procurement, product management, and other areas. These are some of the questions that can be answered with the Grid Analysis approach:
Design Engineering: Which one of the potential product designs best balances the needs of the users with the costs associated with each design?
Capital Allocation Decisions: Based on the priorities of the company, which projects should receive funding?
Procurement: Which suppliers best satisfy our list of requirements and the regulations we have to abide by?
Product Management: Based on the feedback from our users and the priorities of the company, which product features should be prioritized for development?
How to Use Grid Analysis? Step-by-step Guide
Define the alternatives you are going to be evaluating. These can be the ideas you are considering, or life decisions being pondered (a more concrete example is below).
Define the criteria. The criteria answer the question “What is important to you or your user when choosing between the alternatives?”. The criteria can be anything from product specifications valued by the user to your personal preferences when choosing which house to buy.
Assign weights to the criteria. This step answers the question “How important is each criterion to the decision?”. Pick a scale (1-10, for example) and assign a weight to each criterion based on how important it is. This step is optional – if every criterion is of equal importance to the decision, it can be skipped.
Rank each idea on each criterion. This is where the “grid” or “matrix” concept comes in. Arrange your criteria and ideas in a grid, with the criteria listed in the top row of the table and the ideas listed in the first column. Then score each of the concepts on each of the criteria. Visually it looks like this:
Multiply the idea scores by the criteria weights. Now simply take the scores from the previous step and multiply them by the weight of each criterion.
Sum up the idea scores. Simply sum up the rows to see which idea has the highest score – that is your winner.
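The steps above can be sketched as a short Python function; the criteria, weights, and ideas shown are hypothetical placeholders:

```python
def grid_analysis(ideas: dict, weights: dict) -> list:
    """Rank ideas by their weighted criterion scores, best first.

    ideas:   {idea name: {criterion: score}}
    weights: {criterion: weight}
    """
    totals = {
        name: sum(scores[criterion] * weight
                  for criterion, weight in weights.items())
        for name, scores in ideas.items()
    }
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

# Hypothetical example: two feature ideas scored on two criteria
weights = {"user value": 5, "ease of build": 2}
ideas = {
    "Dark mode": {"user value": 7, "ease of build": 6},
    "CSV export": {"user value": 9, "ease of build": 3},
}
ranking = grid_analysis(ideas, weights)  # CSV export (51) beats Dark mode (47)
```

Note how the weights do the heavy lifting: with "user value" weighted more than twice as heavily as "ease of build", the harder-to-build but more valuable idea wins.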
Example: Using Grid Analysis to Identify the Best Product Design
You are an industrial engineer asked to design an environmentally-friendly container for a new soda drink. You have come up with a number of designs and are trying to figure out which one the company should go with. Grid Analysis is one of the best tools for making that decision.
Identify your alternatives
In this case, the alternatives you are considering are your bottle designs. Let’s say these are the designs you have identified:
Identify the Criteria
The next step is to answer “What matters?” From the brief above we know that the bottle needs to be environmentally-friendly, so that’s certainly one of the criteria. A couple of other criteria could relate to customer appeal and cost to manufacture. So your criteria are:
Cost-effectiveness (be careful with cost/price-related criteria – the scoring is counter-intuitive: designs with a high cost should score low on this criterion, which is not a natural instinct for most people)
Weight the Criteria
Now we answer “How much do our criteria matter?” Higher scores here indicate higher importance.
Environmental-friendliness: SCORE 10 (this is the one requirement mentioned in the description)
Customer Appeal: SCORE 7 (you are going to sell this to customers, so this is likely a pretty important criterion, though perhaps not as important as environmental-friendliness)
Cost-effectiveness: SCORE 4 (while price is important, it was never mentioned, so we can leave it at 4)
Score the Designs on your Criteria
Let’s now build our grid to see how our designs rank on the criteria. These scores might be subject to debate – this is why Rationalize provides a tool for COLLABORATIVE concept evaluation, which averages the scores of multiple respondents. For now, let’s go with the following scores:
Multiply Design Scores by Criteria Weights
Now we just multiply these out:
Add up the Design Rows
Looks like the Aluminium Bottle has narrowly beaten out the Glass Bottle here. Notice that if we increased the weight of Customer Appeal, for example, the Glass Bottle would have been the winner.
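The score tables in this example are shown as images, so the design scores below are hypothetical stand-ins; only the weights (10/7/4) and the two designs named above come from the text. With those inputs, the final computation – and the sensitivity to the Customer Appeal weight – looks like this:

```python
weights = {"environment": 10, "appeal": 7, "cost": 4}

# Hypothetical per-criterion scores chosen to illustrate the narrow win described above
designs = {
    "Glass Bottle":     {"environment": 8, "appeal": 9, "cost": 3},
    "Aluminium Bottle": {"environment": 9, "appeal": 7, "cost": 6},
}

def total(scores, weights):
    # Weighted sum across all criteria
    return sum(scores[c] * weights[c] for c in weights)

glass = total(designs["Glass Bottle"], weights)          # 8*10 + 9*7 + 3*4 = 155
aluminium = total(designs["Aluminium Bottle"], weights)  # 9*10 + 7*7 + 6*4 = 163

# Raising the Customer Appeal weight flips the winner:
weights["appeal"] = 12
glass_flipped = total(designs["Glass Bottle"], weights)          # 200
aluminium_flipped = total(designs["Aluminium Bottle"], weights)  # 198
```

With these stand-in scores, the Aluminium Bottle wins 163 to 155 under the original weights, but the Glass Bottle pulls ahead once Customer Appeal is weighted at 12 – exactly the sensitivity noted above.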