By James D. McCabe
Once you've seeded the evaluation process, discussed top prospects for your networking projects with colleagues, and gathered data on those prospects, the fourth step is to refine your criteria so that you make the best choices.
Now that the data gathering exercise has given you additional information about your criteria, you can use it to refine them. Often, in gathering and developing data, we learn that new criteria should be added; that some existing criteria are less appropriate than first thought and should perhaps be removed; or that some criteria should be modified in light of the new information. The result is a refined and better set of evaluation criteria.
To apply these evaluation criteria to your list of candidates, you need some way to compare and contrast the candidates. This is commonly done with a system of ratings, which show how the candidates compare to one another. Ratings are applied using criteria (which you already have) and weights that express the relative importance of each criterion. In this section we are concerned with developing weights for our criteria. Although this is one of the more subjective parts of the process, it is necessary in order to make selections from your list.
In our example we had the following initial criteria: costs, technology, performance, and risks. From our data gathering we learned that costs should be separated into initial costs and recurring costs, and that other criteria should be added: standards compliance, available services, operations, and scalability. If our seed set of two candidates expanded into five design candidates, our evaluation chart would look something like Figure 10.8. Note that we have a column for relative weights, and a row to total the ratings for each candidate, both of which we have yet to complete.
In this figure we have nine criteria. Each criterion is given a weight based on its importance to the evaluation. This is determined through group discussion, which may include voting on suggested weights. The range of weights that you apply is not as important as maintaining consistency throughout your evaluation. One common range for weights across the set of criteria is 0–1, where 0 means that a criterion has no importance to your evaluation, 1 means that it has the highest importance, and any value in between indicates that criterion's degree of importance. (Note that you could decide that all criteria are equal, in which case you need not assign weights, or you could give each criterion a weight of 1.)
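To sketch how such weights eventually combine with candidate ratings, the snippet below computes a weighted total for one candidate. The weights and ratings here are hypothetical illustrations, not the values from Figures 10.8 or 10.9:

```python
# Hypothetical 0-1 weights for the nine criteria (illustrative only).
weights = {
    "initial costs": 0.8,
    "recurring costs": 1.0,
    "performance": 0.8,
    "risks": 0.9,
    "technology": 0.5,
    "operations": 0.5,
    "standards compliance": 0.3,
    "available services": 0.3,
    "scalability": 0.3,
}

# Hypothetical ratings for one design candidate, on an assumed 1-10 scale.
candidate_a = {
    "initial costs": 7, "recurring costs": 5, "performance": 8,
    "risks": 6, "technology": 9, "operations": 7,
    "standards compliance": 8, "available services": 6, "scalability": 7,
}

def weighted_total(ratings, weights):
    """Sum each rating multiplied by its criterion's weight."""
    return sum(weights[criterion] * rating for criterion, rating in ratings.items())

print(weighted_total(candidate_a, weights))
```

Computing one such total per candidate fills in the totals row of the evaluation chart, making the candidates directly comparable.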
For our example we would take the data gathered so far, along with our refined sets of criteria and candidates, and conduct a discussion regarding how to weight each criterion. Using a range of 0–1, Figure 10.9 shows how the weights might be applied.
FIGURE 10.8 A refined set of evaluation criteria
FIGURE 10.9 A set of evaluation criteria with relative weights added
In this figure recurring costs are weighted highest, followed by risks, initial costs, and performance. Operations and technology are weighted in the middle of the scale, while standards compliance, available services, and scalability are weighted the least. By looking at this comparison chart, you can see the importance the evaluation team places on each criterion.
It is useful to write down how you arrived at each weight and keep this as part of your documentation. If you are ever asked why certain criteria were weighted higher than others, you will be happy to have it documented. I have found that memory does not serve well here: There have been times when I was certain I would remember the reasons for a particular weight (it seemed obvious at the time), only to forget when asked later.
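One lightweight way to keep that documentation is to record the rationale alongside each weight, so the two cannot drift apart. The criteria, weights, and rationales below are illustrative, not the book's:

```python
# Store each weight together with the reasoning behind it, so the
# justification survives long after the discussion is forgotten.
criteria = {
    "recurring costs": (1.0, "Largest long-term budget impact."),
    "risks": (0.9, "Project timeline leaves little room for surprises."),
    "scalability": (0.3, "Growth projections for this site are modest."),
}

for name, (weight, rationale) in criteria.items():
    print(f"{name}: {weight} -- {rationale}")
```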
There are additional ways to develop weights for criteria. As discussed during the analysis process, some characteristics that we can apply to criteria are urgency, importance, and relevance. Urgency is a measure of how time-critical the criterion is; importance is a measure of how significant the criterion is to this project; and relevance is a measure of how appropriate the criterion is to the project. The default characteristic is importance.
Each criterion can be evaluated in terms of urgency, importance, and relevance, and a weight assigned to the criterion that is based on all three characteristics. Then the candidates can be evaluated as in the previous example.
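One simple way to fold the three characteristics into a single weight is to score each on the same 0–1 scale and average them. The plain average is an assumption for illustration; the book does not prescribe a formula, and a team could equally use a weighted mean:

```python
def combined_weight(urgency, importance, relevance):
    """Average three 0-1 characteristic scores into one criterion weight.
    The equal-weight average is an assumption, not the book's method."""
    return (urgency + importance + relevance) / 3

# Example: a criterion that is very important but not at all urgent.
w = combined_weight(urgency=0.2, importance=0.9, relevance=0.7)
print(round(w, 2))
```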
In the next section these weights are combined with ratings that show how the candidates fare relative to one another for each criterion.
Reproduced from Chapter 10 of the book Network Analysis, Architecture, and Design by James D. McCabe. Copyright 2007, Morgan Kaufmann Publishers, an imprint of Elsevier Science. Reproduced by permission of Elsevier, 30 Corporate Drive, Burlington, MA. Written permission from Elsevier is required for all other uses.