Analytical Job Evaluation Methods

Analytical job evaluation breaks an entire job down into a series of more manageable factors, such as responsibility, decision making, or the knowledge and skills required. Jobs analysed in this way become much easier to compare, whether through point systems that attach numerical values to each factor or through grade/role profiles organized under broader factor headings.

The advantages of an analytical approach are, first, that evaluators must consider each characteristic of the job separately before reaching a conclusion about its relative worth and, second, that they are given defined yardsticks or standards that help increase the objectivity and consistency of their judgements. An analytical approach can also provide a defence against an equal pay claim in the United Kingdom or the United States. Point-factor rating, analytical matching, and factor comparison are three prominent analytical approaches.

Point-factor rating

Point-factor schemes are one of the most widespread forms of analytical job evaluation. According to the e-reward 2007 job evaluation survey, they are used by 70% of respondents who have a job evaluation scheme. The method works by breaking jobs down into a number of factors.

Factors are the elements of a job, such as responsibility, knowledge, and skill, that indicate the degree of responsibility, knowledge, and ability the work demands. Each factor is assumed to contribute to the overall value of the job and to be present in all the jobs to be evaluated, although to varying degrees.

According to the same survey, respondents' schemes had between three and fourteen factors, with a median of five.

Each factor typically has five or six levels. Definitions of these levels are produced to help determine the degree to which the factor applies to the job being evaluated. Each factor is allocated a maximum points score that reflects its importance (this is known as explicit weighting). If some factors have more levels than others, those factors are implicitly weighted, because a wider range of scores is available for them.

The numerical factor scale is constructed by dividing the maximum score for a factor among its levels. The progression of points may be arithmetic, for example 50, 100, 150, 200, or geometric, for example 40, 90, 150, 220. In the latter case there is more scope to recognize senior jobs with higher scores.

The complete scheme, consisting of the factor and level definitions together with the scoring system (the points available for each factor and how they are distributed across its levels), is known as the 'factor plan'.

Jobs are 'scored' (ie given points) under each factor heading on the basis of the level at which the factor is present in the job. This is determined by comparing the job's characteristics with the level definitions to establish which definition provides the best fit. The factor scores are then added together to produce a total score that indicates the job's standing against all the factors and can be used to place jobs in a rank order.
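To make the arithmetic concrete, the sketch below scores a job against a small, hypothetical factor plan. The factor names, the geometric point progressions, and the level judgements are illustrative assumptions rather than part of any published scheme.

```python
# Illustrative point-factor scoring sketch. The factor plan below is
# hypothetical: three factors, each with five levels, a geometric point
# progression, and explicit weighting expressed through different maxima.
FACTOR_PLAN = {
    "knowledge and skills": [40, 90, 150, 220, 300],
    "responsibility":       [40, 90, 150, 220, 300],
    "decision making":      [20, 45, 75, 110, 150],  # lower maximum = lower weight
}

def score_job(levels_by_factor: dict[str, int]) -> int:
    """Sum the points attached to the level (1-5) judged to fit best under each factor."""
    total = 0
    for factor, scale in FACTOR_PLAN.items():
        level = levels_by_factor[factor]  # level judged to be the best fit
        total += scale[level - 1]         # points attached to that level
    return total

# Example: level 3 knowledge, level 4 responsibility, level 2 decision making
# scores 150 + 220 + 45 = 415 points.
print(score_job({"knowledge and skills": 3,
                 "responsibility": 4,
                 "decision making": 2}))
```

Total scores produced in this way are only as reliable as the level judgements fed into them, which is where the element of judgement discussed below comes in.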

Evaluators, often a panel of management and staff representatives, compare information about the job (typically a job description or role profile) with the level definitions to decide which level fits best. There are limits, however, to how precisely levels can be defined, and the available information about a job does not always point clearly to one level. Deciding on the best fit therefore requires judgement, which is why point-factor evaluation, in common with other forms of valuing jobs, cannot be entirely objective.

The role of the facilitator on a job evaluation panel is key to reaching agreement without too many compromises. As evaluators gain experience, they also become better at interpreting the factor plan and the information about the job.

They build up guidelines, based on past decisions and precedents, that expand and clarify the meaning of the level definitions and improve understanding of how information about a job should be interpreted when making a judgement.

A point-factor scheme may be operated manually as a 'paper' system, or computer software can be used to help automate the evaluation process.

Analytical job matching

Analytical job matching, like point-factor job evaluation, is based on the analysis of a number of defined criteria. Analytical matching may be divided into two categories: one matches role profile to grade/level profile; the other matches role profile to benchmark role profile.

Role-to-grade analytical matching

Profiles of the roles to be evaluated, analysed and described in terms of a number of job evaluation factors, are compared with grade, band, or level profiles that have been analysed and described in terms of the same factors. The role profiles are then 'matched' with the range of grade or level profiles to establish the best fit and thus grade the role.

Role-to-role analytical matching

The role profiles of jobs to be evaluated are matched against benchmark role profiles that have been defined using the same job evaluation factors.

A benchmark role is one that is representative of the range of levels and types of work in an organization and serves as a reference point or standard against which other roles are compared. If the role being evaluated matches a benchmark role that has already been graded, it is allocated to the same grade.

Generic role profiles may be used for any group of roles with comparable responsibilities, such as team leaders or personal assistants. Role-to-role matching can be combined with role-to-grade matching.
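A minimal sketch of how role-to-grade matching might be represented, assuming both role and grade profiles are expressed as a level per factor; the grade profiles, factor names, and 'closest overall fit' rule below are illustrative assumptions, since in practice the match is a judgement against written profile descriptions.

```python
# Hypothetical matching sketch: role and grade profiles are both expressed as
# a level (1-5) per factor, and the role is allocated to the grade whose
# profile is the closest overall fit.
GRADE_PROFILES = {
    "Grade 1": {"knowledge and skills": 1, "responsibility": 1, "decision making": 1},
    "Grade 2": {"knowledge and skills": 2, "responsibility": 2, "decision making": 2},
    "Grade 3": {"knowledge and skills": 4, "responsibility": 3, "decision making": 3},
}

def best_fit_grade(role_profile: dict[str, int]) -> str:
    """Return the grade whose factor levels differ least from the role's levels."""
    def distance(grade_profile: dict[str, int]) -> int:
        return sum(abs(role_profile[f] - grade_profile[f]) for f in grade_profile)
    return min(GRADE_PROFILES, key=lambda grade: distance(GRADE_PROFILES[grade]))

# A role assessed at levels 3, 3 and 3 is closest to the Grade 3 profile.
print(best_fit_grade({"knowledge and skills": 3,
                      "responsibility": 3,
                      "decision making": 3}))  # -> "Grade 3"
```

Role-to-role matching works the same way, with graded benchmark role profiles taking the place of grade profiles.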

Use of analytical matching

Analytical matching can be used to grade jobs or place them in levels after a thorough analysis of a substantial sample of benchmark roles: representative roles that can serve as a valid basis for comparison. This may happen in large organizations where it is felt that putting every job through the full point-factor evaluation procedure is unnecessary, especially for 'generic' roles.

Where analytical matching follows a large-scale job evaluation exercise, such as that carried out in the NHS, it is likely to be based on the same factors as the point-factor scheme, which can then be used to deal with difficult cases or appeals.

The number of factors may be reduced in some matching procedures; for example, the HERA scheme for higher education institutions groups related factors together, cutting the total number from seven to four.

Analytical matching is not always underpinned by a point-factor evaluation scheme; where it is not, this saves the time and effort involved in designing and implementing such a scheme.

Factor comparison

The original factor comparison method compared jobs factor by factor against scales of money values, providing a direct indication of the rate for each job. It was developed in the United States but is not used in the UK.

The Hay Group’s proprietary job evaluation method, the Hay Guide Chart Profile, is a factor comparison scheme. The only other form of factor comparison in use today is graduated factor comparison, which compares jobs on various factors using a graded scale.

The scale may have only three value levels - for example, lower, equal, and higher - and no factor scores are used. This is a method frequently used by employment tribunals’ independent experts to advise on an equal pay claim. Their obligation is simply to compare one job with one or two others, not to evaluate internal relativities across the full spectrum of occupations in order to create a rank order.
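As a small illustration under the same assumptions as the earlier sketches, the comparison below grades one job against another factor by factor on a three-value scale (lower, equal, higher) without attaching any factor scores; the numeric levels are stand-ins for the expert's judgement of relative demand.

```python
# Illustrative graduated factor comparison: job A is compared with job B on
# each factor using only "lower", "equal" or "higher"; no points are used.
FACTORS = ["knowledge and skills", "responsibility", "decision making"]

def compare(job_a: dict[str, int], job_b: dict[str, int]) -> dict[str, str]:
    """For each factor, state whether job A's demands are lower than, equal to,
    or higher than job B's."""
    result = {}
    for factor in FACTORS:
        if job_a[factor] < job_b[factor]:
            result[factor] = "lower"
        elif job_a[factor] > job_b[factor]:
            result[factor] = "higher"
        else:
            result[factor] = "equal"
    return result

print(compare({"knowledge and skills": 3, "responsibility": 4, "decision making": 2},
              {"knowledge and skills": 3, "responsibility": 3, "decision making": 3}))
# -> {'knowledge and skills': 'equal', 'responsibility': 'higher', 'decision making': 'lower'}
```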

Tailor-made, ready-made and hybrid schemes

The schemes described above may be 'tailor-made' or 'home-grown', in the sense that they are created entirely by or for an organization, a group of organizations, or a sector such as further education institutions.

The 2007 e-reward survey found that only 20% of schemes were tailor-made. Many management consultants offer their own 'ready-made' or proprietary schemes. Consultants' schemes are more often than not analytical (point-factor, factor comparison or matching) and may be associated with a market rate database.

These schemes usually provide for the use of computer aids, which up to 60% of respondents reported using.

A hybrid scheme is a consultant's ready-made scheme that has been modified to fit the organization; this was the case for 20% of e-reward respondents. Most commonly the modification is made to the factor plan or, in Hay schemes, to the Guide Chart.