Developing ratings for prospective networking partners

Once you've used your selection criteria to draft a preliminary list of prospective networking partners, learn how to evaluate their qualifications.

By James D. McCabe

After you've discussed selection criteria for prospective networking channel partners with colleagues and refined those criteria, the fifth step in choosing networking partners for your project is establishing ratings for each prospect.

Armed with our criteria and the results of our data-gathering exercise, we now develop ratings for each candidate. As in developing the first set of weights, this should be a group effort that includes your best technical and executive decision makers. The size of such an evaluation group is important—you don't want a group so large that nothing gets accomplished, nor one so small that decisions made by the group will be protested by others in your organization.

Example 10.2.
My experience is that a group size somewhere between six and twelve is optimal for making these types of decisions. Oddly enough, this works from small to very large designs.

One important requirement of the evaluation group is that there be no vendors or service providers present. This may seem obvious, but it is surprising how often they are able to get involved. Clearly, in order to develop fair and balanced decisions it is best to leave them out. You should have already gotten all relevant information from vendors and service providers during the data gathering process, before you develop and apply ratings.

To continue our exercise, we have our evaluation group together along with our evaluation chart. We then need to agree on a scale for our ratings. One common range is the same range previously used, 0–1, where 0 means that the candidate is least relevant for that criterion and 1 means that it is most relevant. Other common ranges are 1–5 or 1–10, where a rating of 1 means that the candidate is the worst or least relevant for that criterion, and 5 (or 10) means that the candidate is the best or most relevant for that criterion. Alternatively, you could use these ranges to rank the candidates, giving the best candidate (according to that criterion) a 1, the next best a 2, and so on.

Expanding the scale, or reversing the numbers (so that 10 is the worst or least relevant), is entirely subjective. However, it is important that your evaluation group agrees on the scale. If the group has difficulty agreeing on a scale, that is a warning sign that the evaluations themselves will be contentious.

Let's say that for this example we choose a scale of 1–5, with 5 being the best and 1 the worst. We then take one of the evaluation criteria (e.g., costs) and discuss each candidate design against that criterion. This should be a democratic process, where each person gets a chance to express his or her opinion and vote on each candidate. The group leader or project manager would break ties if necessary.

For a discussion on costs we may have cost information (for each candidate design) from an independent assessment, from other organizations that are deploying a similar design, from our own past experience, or from the vendors and service providers themselves. You should be able to compile such cost information and use it to determine a relative rating for each design candidate. You would populate the evaluation chart with ratings for all of the candidates for the cost criterion and then do the same thing for the other criteria. Once you have given ratings to each of your candidates across all of the criteria, you can multiply the weight of each criterion by the rating given to each candidate. The result would look something like Figure 10.10.

In this example each candidate is rated from 1 (worst) to 5 (best) for each criterion. Those ratings are shown as the first number in each evaluation box. Then each rating is multiplied by that candidate's relative weight, resulting in a weighted rating. These weighted ratings are shown as the second number in each evaluation box. The ratings and weighted ratings for each candidate are added together, as reflected in the candidate totals at the bottom of the figure. Although totals for both weighted and unweighted ratings are shown (for illustration), only the weighted ratings would be totaled and applied to the evaluation. Having both weighted and unweighted ratings in an actual evaluation would be confusing.
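The weighted-rating arithmetic described above can be sketched in a few lines of Python. The criterion names, weights, and ratings below are illustrative placeholders, not the actual values from Figure 10.10; they are chosen so that, as in the figure, the unweighted winner differs from the weighted winner:

```python
# Hypothetical criterion weights (relative importance) and 1-5 ratings.
# These values are illustrative, not taken from Figure 10.10.
weights = {"costs": 0.5, "performance": 0.3, "support": 0.2}

candidates = {
    "Candidate 1": {"costs": 3, "performance": 5, "support": 5},
    "Candidate 2": {"costs": 5, "performance": 4, "support": 3},
}

for name, ratings in candidates.items():
    # Unweighted total: simple sum of the 1-5 ratings.
    unweighted = sum(ratings.values())
    # Weighted total: each rating multiplied by its criterion's weight.
    weighted = sum(weights[c] * r for c, r in ratings.items())
    print(f"{name}: unweighted={unweighted}, weighted={weighted:.2f}")
```

Here Candidate 1 wins on the unweighted totals (13 vs. 12), but Candidate 2 wins on the weighted totals (4.30 vs. 4.00) because it rates highest on the most heavily weighted criterion, costs.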

FIGURE 10.10 A set of ratings for candidates

Notice from this figure that, if we follow the unweighted ratings (the first numbers), Candidate 1 has the highest score. However, using the weighted ratings, Candidate 2 receives the highest score. This is because Candidate 2 has the highest ratings for those criteria that have the highest weights. Thus, applying weights allows you to focus the evaluation on those areas of importance to your project.

Finally, it is helpful to have a way to determine when the overall (summary) ratings are so close that you should declare a tie, and what to do when that occurs. Best practice is to declare a tie when candidates are within a few percentage points of each other.
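The tie rule above can be implemented as a simple percentage check. The 3% default threshold here is an assumption for illustration; use whatever margin your evaluation group agrees on:

```python
def is_tie(score_a: float, score_b: float, threshold_pct: float = 3.0) -> bool:
    """Declare a tie when two summary scores are within threshold_pct
    percent of the higher score. The 3% default is illustrative."""
    high, low = max(score_a, score_b), min(score_a, score_b)
    return (high - low) / high * 100 <= threshold_pct

# 4.30 vs. 4.20 differ by about 2.3%, so this would be declared a tie;
# 4.30 vs. 4.00 differ by about 7%, so it would not.
```

When a tie is declared, the group can fall back on the tie-breaking mechanism already agreed on, such as a deciding vote by the group leader or project manager.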

When ratings development and application to the candidates are done well, it helps to take the politics out of the evaluation process.

Having rated our candidates, it is now time to refine the set of candidates, with the objective of selecting the optimal vendor, equipment, or service provider for our project.

Evaluating vendors and service providers for networking projects

  How to choose vendors, tools and service providers
  Seeding the evaluation process
  Having conversations about prospective networking partners
  Gathering data on prospective networking partners
  Refining your criteria for prospective networking partners
  Developing ratings for prospective networking partners
  Modifying the list of prospective networking partners
  Determining the order of evaluations for networking partners

Reproduced from Chapter ten of the book Network Analysis, Architecture, and Design by James D. McCabe. Copyright 2007, Morgan Kaufmann Publishers, an imprint of Elsevier Science. Reproduced by permission of Elsevier, 30 Corporate Drive, Burlington, MA. Written permission from Elsevier is required for all other uses.
