A common mistake in competitive procurements is a mechanical fixation on the adjectival ratings, color ratings, or numerical “scores” a procuring agency assigns a proposal under a particular evaluation factor. In best value tradeoff acquisitions, one often hears (from both contractors and government personnel) that, if two proposals have the same adjectival ratings, then the only rational choice is for the government to award the contract to the lower-priced offeror. This common view is wrong and can lead to a sustained bid protest.
Guides to Intelligent Decision-Making—Not Substitutes for It
As the GAO frequently repeats, ratings are merely guides to intelligent decision-making. See, e.g., Centerra Grp., LLC, B-414800, B-414800.2, Sept. 21, 2017, 2017 CPD ¶ 307 at 4 (“Evaluation ratings and the number of strengths and weaknesses assessed are merely a guide to, and not a substitute for, intelligent decision making in the procurement process.”). In a best value tradeoff analysis, an agency must do more than merely compare the adjectives, colors, or points a proposal is assigned. Rather, source selection authorities (SSAs) are supposed to qualitatively compare the underlying merits of competing proposals to determine which (if any) proposal is superior under a given evaluation factor, and whether qualitatively superior proposals are worth any associated price premiums.
Two principles flow from this fundamental rule.
First, the mere fact that two proposals received the same adjectival, color, or numerical rating for a given factor does not necessarily mean they are equal under that factor. One can think of this as a “bluer shade of blue” rule. My “outstanding” rating may be more outstanding than your “outstanding” rating if my proposal had more underlying merit than yours, even though both received the same adjectival label. Or, if our proposals were both rated “good,” perhaps mine barely made the cut and is teetering on the edge of merely “acceptable,” whereas yours may be just an inch away from “outstanding.” The same is true for strengths assigned to a proposal: some strengths are “stronger” than others. That is why merely tallying the total number of strengths and weaknesses assigned is usually not a meaningful exercise. A proper best value analysis considers the merits and flaws underlying the ratings assigned.
This means that, when two offerors’ proposals carry identical adjectival ratings, agencies may (and sometimes should) award to the higher-priced offeror if the underlying merits of that offeror’s proposal are sufficiently superior to those of its lower-priced, identically rated competitor.
We can see this in the GAO’s decision in CharDonnay Dialysis, LLC, B-420910, B-420910.2, Oct. 27, 2022, 2022 CPD ¶ 273. There, the protester and the awardee received identical technical and past performance ratings, but the awardee’s price was slightly lower than the protester’s. Unsurprisingly, the lower-priced, equally rated proposal won. The protester raised a number of objections to the evaluation, all of which failed. But the protester also argued that the two proposals, based upon the different underlying strengths assigned, were not technically equal, notwithstanding the identical adjectival ratings. The agency’s best value analysis simply assumed that, because the two technical proposals received identical adjectival ratings, they must be equal. The GAO agreed with the protester that this assumption is flawed. The GAO found the agency failed to compare the underlying merits of the competing proposals and “meaningfully look behind the adjectival ratings . . . before finding the proposals to be technically equal.” On this basis, the GAO sustained the protest and recommended the agency redo its best value analysis.
Second, in the course of a protest, the GAO must determine whether a demonstrated procurement error is prejudicial. If a protester already has the highest possible adjectival rating under a particular evaluation factor, one might be tempted to say it is irrelevant whether that offeror should have received extra strengths under that factor, as it already holds the maximum possible rating. But that is not necessarily so. If the awardee has only a slight competitive edge over the protester, it is possible that one extra strength might shift the competitive balance, even under an evaluation factor where the protester is already blue or outstanding. That extra strength might make the protester a sufficiently “bluer shade of blue” and alter the award outcome.
This principle contributed to a sustained protest in Tech Marine Business, Inc., B-420872 et al., Oct. 14, 2022, 2022 CPD ¶ 260. In that case, both the protester and the awardee received the highest possible adjectival rating of outstanding for the technical factor. The protester argued it deserved a number of additional strengths under the technical factor. The agency rebutted all these “missing strength” arguments, except one, which the GAO found it failed to address in any substantive fashion. Although both offerors already had the highest possible adjectival rating under the technical factor, the GAO sustained the protest because even the one additional strength under an already outstanding evaluation factor might have tipped the scales in what was otherwise a very close competition: “[A]ny change in Tech Marine’s technical evaluation could have widened the gap between the two proposals sufficiently that the SSA no longer considered them technically equivalent, potentially resulting in a different best-value decision.” This was the “bluer shade of blue” principle once again in action.
The main takeaway is for federal procurement officials, rather than offerors. When a solicitation provides for a best value tradeoff source selection methodology, the agency’s analysis should “look behind” the assigned ratings and compare the underlying merits of the competing proposals. It is easier simply to compare colors and the number of strengths, but the GAO’s decisional law demonstrates that usually is not a rational method of source selection. If the agency determines two proposals are technically equivalent for a particular evaluation factor, the agency should document why that is so for some reason other than “same ratings” or “equal number of strengths.”