Methods & Tools

Peer-to-peer IT benchmarks and their best use cases

by Xenia Dold

IT benchmarks can be categorised as peer-to-peer, individual or vendor. Here, we explain the differences between these performance comparisons and provide some useful tips on how to make the best choice.

 

Three worlds, three effects: The term 'IT benchmark' covers a range of tools, from quick market checks and peer group comparisons to customised management tools for CIOs. Peer-to-peer, vendor and classic individual IT benchmarking projects differ significantly in terms of data collection, standardisation, effort and accuracy. Choosing the right tool determines whether IT managers receive only a snapshot of the situation or precise answers to their strategic IT questions.

The risk: CIOs who rely on easily accessible market studies or quick tips from peers ('golf course metrics') often end up with an eclectic mix of irrelevant information, overlooking the real cost reducers or performance boosters. To assess IT costs, service quality and KPIs reliably, you need to understand the three types of benchmark: peer-based consortium benchmarks, bespoke individual analyses, and market-oriented vendor studies that evaluate suppliers.

 

An overview of IT benchmarking

1. How peer-to-peer benchmarking works

A peer-to-peer benchmark (also known as a consortium benchmark) brings together several companies, usually from the same industry or size category, to compare their IT key figures and processes within a common framework – accompanied by a neutral moderator, uniform definitions, questionnaires, workshops and best practice exchanges. The added value comes from direct dialogue between ‘peers’, but this comes at the cost of a high level of coordination. In addition, the exercise quickly ends up comparing apples and oranges if performance content, volumes, qualities and complexities are not consistently standardised.

2. Individual IT benchmarks – tailor-made for CIOs

Individual IT benchmarking focuses on precision rather than averages: the benchmark consultant puts together a curated peer group of around eight similar organisations for the company. The comparison is based strictly on specific questions, such as those relating to IT services, cost structures or governance, rather than standardised questionnaires. Performance content, qualities, complexities and volumes are systematically recorded and methodically normalised for the company to ensure a clean and meaningful comparison from heterogeneous raw data. The result is a reliable, actionable basis for decision-making with precise target values, ranging from concrete optimisation potential to prioritised roadmap elements.

3. What characterises a vendor benchmark?

Vendor benchmarks from analyst firms such as Gartner (‘Magic Quadrant’), PAC (‘RADAR’) or Forrester (‘Wave’) classify IT suppliers and services uniformly in two-dimensional spaces, based on surveys, expert interviews and public and confidential data. They are ideal for IT managers who need a quick overview of the market, for example of a provider or technology landscape. They do not take specific company details into account; they are fast, cost-effective and deliberately generic.

Which benchmark provides the most benefit for what?

The following table shows which type of IT benchmark is suitable for which requirements:

 

| Target group / need | Recommended format | Why it fits |
| --- | --- | --- |
| Companies needing a quick, high-level market view | Vendor benchmark | Broad comparison base, fast orientation on overall cost and performance levels |
| Firms with strong industry specifics and a desire for peer comparison | Peer-to-peer benchmark | Comparison within a mid-sized, industry-related peer group; solid basis for relative positioning |
| CIOs with concrete steering questions on IT services, cost centres or governance topics | Individual benchmark | Small, carefully selected peer group; considers scope, quality, complexity and volume for true apples-to-apples comparisons |
| Organisations with heterogeneous service portfolios needing normalised, actionable recommendations | Individual benchmark | Normalisation of remaining differences leads to implementable business cases and roadmaps |
| Companies with limited budget seeking only rough reference values | Vendor benchmark | Cost-efficient and quickly available, provides sufficient input for initial assessments |
| Multiple companies within one industry wanting to develop common KPIs | Peer-to-peer benchmark | Promotes standardisation and a benchmarking community; suitable for KPI development and regular comparisons |

 

In summary: Vendor benchmarks are suitable for quick, cost-effective orientation; peer-to-peer benchmarks are useful when industry context and common KPIs are the main focus; individual IT benchmarks are the right choice when precision, standardised comparisons in detail and directly usable recommendations for action are important.

However, consortium benchmarks are subject to a significant limitation: open book comparisons between competitors are not easily possible under the German Act Against Restraints of Competition (GWB). This law is intended to prevent illegal price fixing, which is why the detailed disclosure of data — which may be subject to non-disclosure agreements (NDAs) — is a sensitive issue. Competitors cannot simply sit down together and carry out open-book comparisons, as this could quickly be interpreted as price fixing.

In other countries and legal systems, the exchange of strategically sensitive information (such as current prices, discounts, or cost structures) is also potentially considered anti-competitive. In contrast, traditional, individual IT benchmarks have the advantage of enabling precise market comparisons using completely anonymised peer values, while avoiding legal risks relating to competition law or NDAs.

Data collection and standardisation in benchmarking

Data collection and standardisation are the invisible levers that determine the success of an IT benchmarking project. Even small differences in definitions, service content or volumes are enough to turn a meaningful comparison into an apples-and-oranges exercise. Vendor, consortium and individual benchmarks place different emphases here: from a highly standardised market view, to jointly defined peer KPIs, to a fully bespoke individual analysis. In an individual benchmark, performance profiles with their essential dimensions of content, quantity, quality and complexity are placed in relation to reference performances via surcharges and discounts. How well this standardisation works depends significantly on the experience of the consultants.
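The surcharge-and-discount logic described above can be sketched roughly as follows. This is a minimal illustration, not a real benchmarking methodology: the adjustment categories and percentages are invented, and in practice they would be derived by the consultants from the standardised performance profiles.

```python
def normalise_unit_cost(raw_cost: float, adjustments: dict) -> float:
    """Normalise a peer's raw unit cost toward the reference performance
    profile by multiplying in surcharges (> 0) and discounts (< 0),
    expressed as percentage deviations per dimension."""
    factor = 1.0
    for _dimension, pct in adjustments.items():
        factor *= 1.0 + pct
    return raw_cost * factor

# Hypothetical example: a peer reports 80 EUR/month per workplace, but
# delivers lower service quality (surcharge) at higher volume (discount).
peer_cost = normalise_unit_cost(80.0, {
    "lower service quality (no 24/7 support)": 0.10,   # +10 % surcharge
    "higher volume (scale advantage)": -0.05,          # -5 % discount
})
```

Only after such normalisation do the peers' unit costs refer to a comparable performance profile; comparing the raw figures directly would reintroduce the apples-and-oranges problem.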

One example is the comparison of ‘costs per kilowatt hour’. At first glance, the figures look directly comparable, as the unit ‘price per kWh’ is standardised. But here, too, the small print matters: Are the costs of the meter included? How is the contract term regulated? Are there price adjustment clauses, discounts or bonus payments? A variety of price-influencing factors must therefore be taken into account to arrive at a meaningful and fair comparison, because most offers on the market are not comparable on a 1:1 basis. Anyone who understands this mechanism will quickly see why superficially similar benchmark results can have completely different implications for specific IT decisions.
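The kWh example can be made concrete with a small, purely illustrative calculation. All figures, fees and contract terms below are invented; the point is only that two offers with the same headline price per kWh can differ once the small print is folded into an effective price.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    """One electricity offer, including the 'small print' that moves the true price."""
    base_price_per_kwh: float   # headline price in EUR/kWh
    annual_meter_fee: float     # EUR per year for the meter
    signup_bonus: float         # one-off bonus, spread over the contract term
    discount_rate: float        # e.g. 0.05 for a 5 % discount on consumption costs
    contract_years: int

def effective_price_per_kwh(offer: Offer, annual_kwh: float) -> float:
    """Normalise an offer to an effective EUR/kWh over the contract term."""
    consumption = offer.base_price_per_kwh * annual_kwh * (1 - offer.discount_rate)
    per_year = consumption + offer.annual_meter_fee - offer.signup_bonus / offer.contract_years
    return per_year / annual_kwh

# Identical headline price of 0.30 EUR/kWh, different small print:
offer_a = Offer(0.30, 120.0, 0.0, 0.0, 1)
offer_b = Offer(0.30, 60.0, 100.0, 0.05, 2)

print(effective_price_per_kwh(offer_a, 3000))
print(effective_price_per_kwh(offer_b, 3000))
```

At 3,000 kWh per year, the second offer turns out noticeably cheaper despite the identical headline price, which is exactly the effect the standardisation step has to capture.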

If you have any questions about standardisation or IT benchmarking, please feel free to contact me.

Xenia Dold


Having previously specialised in analysing user behaviour in e-commerce, Xenia Dold now focuses on real customers. Her main objective is to support them effectively with their IT challenges and develop customised solutions.