Track Record: 2024 Errors

This page first posted 17 October 2024

The headline prediction for the July 2024 election was correct in the main outline. The polls showed a substantial Labour lead over the Conservatives, indicating a Labour parliamentary landslide. Our regression-based prediction also showed big gains for the Liberal Democrats and smaller gains for Reform UK and the Greens.

The actual size of Labour's majority was less than predicted, and the Conservatives won correspondingly more seats.

In numerical terms, the prediction and the outcome for GB seats were:

Party     2019 Votes  2019 Seats  Pred Votes  Pred Seats  Actual Votes  Actual Seats  Vote Error  Seat Error
CON       44.7%       365         21.8%       78          24.4%         121           −2.6%       −43
LAB       33.0%       203         38.8%       453         34.7%         412           +4.1%       +41
LIB       11.8%       11          11.0%       67          12.5%         72            −1.5%       −5
Reform    2.1%        0           16.4%       7           14.7%         5             +1.7%       +2
Green     2.8%        1           6.3%        3           6.9%          4             −0.6%       −1
SNP       4.0%        48          3.1%        19          2.6%          9             +0.5%       +10
Plaid     0.5%        4           0.6%        3           0.7%          4             −0.1%       −1
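The error columns follow the convention of predicted minus actual. A minimal Python sketch of that calculation, using the CON and LAB rows from the table above:

```python
# Error convention used in the table: error = predicted - actual.
# Figures are the CON and LAB rows from the table above.
rows = {
    # party: (pred_votes_pc, pred_seats, actual_votes_pc, actual_seats)
    "CON": (21.8, 78, 24.4, 121),
    "LAB": (38.8, 453, 34.7, 412),
}

for party, (pred_v, pred_s, act_v, act_s) in rows.items():
    vote_error = round(pred_v - act_v, 1)  # CON: -2.6, LAB: +4.1
    seat_error = pred_s - act_s            # CON: -43,  LAB: +41
    print(f"{party}: vote error {vote_error:+}%, seat error {seat_error:+}")
```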

The final Electoral Calculus prediction is made up of two components: the poll of opinion polls and the seat predictor. The poll-of-polls was correct in outline, but overstated the Labour lead. The seat predictor performed relatively well in showing which seats were more Labour than others.

The poll-of-polls showed a Labour lead over the Conservatives of 17pc, compared with an actual lead of 10pc. The difference between those, an error of 7pc, is high by historical standards. For comparison, the error was 6pc in 2015, and 9pc in 1992, when the polls mistakenly predicted that Neil Kinnock would beat John Major. For more details of the possible causes of this error, see Polling Errors in 2024.

Because of the poll error, the seat predictions were not perfect. Labour were over-predicted by about 40 seats, and the Conservatives under-predicted by a similar amount. This is less than ideal, as we normally aim for a seat error of within 20 seats. For the smaller parties, the predictions were much better with the Liberal Democrats, Reform UK and the Green party all predicted fairly well.

However, if we adjust for the error in the poll-of-polls, the underlying pattern of the seat predictions looks better. If we feed the actual vote shares into our seat calculator, the results are:

Party     Adjusted Pred Seats  Actual Seats  Seat Error
CON       123                  121           +2
LAB       411                  412           −1
LIB       67                   72            −5
Reform    8                    5             +3
Green     3                    4             −1
SNP       15                   9             +6
Plaid     3                    4             −1
Other     2                    5             −3

You can see this for yourself by running the seat predictor with the correct national and Scottish vote shares: seat calculator.

The seat errors are now very small indeed. This shows that the MRP regression methods are fairly accurate at showing whether one seat is more Conservative than another. It was the absolute level of Conservative support, overestimated by the national polls, which introduced the bulk of the error.

Of course, there is more to prediction accuracy than the headline seat count, since every seat is predicted individually, which provides 650 opportunities to be right or wrong. Looking at the final Electoral Calculus seat-by-seat prediction, the winner was predicted correctly in 546 seats and incorrectly in 86. That is a success rate of 86pc, lower than the 93pc achieved in 2019. The success rate rises to 90pc for the MRP prediction adjusted to match the actual vote shares.
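The quoted success rate is simply correct calls divided by total calls, as a quick check confirms:

```python
# Seat-by-seat success rate: correct predictions / seats called.
correct, wrong = 546, 86
total = correct + wrong              # 632 seats in the comparison
rate = 100 * correct / total
print(f"{rate:.0f}pc success rate")  # -> "86pc success rate"
```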

We will now look at these and other issues in more detail. The particular topics studied are:

  1. Northern Ireland
  2. Model errors
  3. Seat by seat errors

1. Northern Ireland

Electoral Calculus was able to make predictions for Northern Ireland, in collaboration with our polling partners at LucidTalk.

The table below shows the predictions which were made and compares them with the actual outcome.

Party     2019 Votes  2019 Seats  Pred Votes  Pred Seats  Actual Votes  Actual Seats  Vote Error  Seat Error
DUP       30.6%       8           21%         7           22.1%         5             −1.1%       +2
SF        22.8%       7           23%         7           27.0%         7             −4.0%       0
SDLP      14.9%       2           14%         2           11.1%         2             +2.9%       0
UUP       11.7%       0           13%         1           12.2%         1             +0.8%       0
Alliance  16.8%       1           18%         1           15.0%         1             +3.0%       0
Other     3.2%        0           11%         0           12.6%         2             −1.6%       −2

On the whole, the vote share predictions were fair rather than good. All parties were predicted to within 4pc of their actual vote share, although the largest errors were outside the margin of error (about 2pc) for a poll of 3,800 people. Our LucidTalk poll correctly predicted that the DUP would lose significant support compared with 2019, and that smaller parties (such as Traditional Unionist Voice) would gain. In the end, Sinn Fein did rather better than predicted, at the expense of the SDLP and Alliance.

In terms of seats, three seats were mis-predicted and fifteen seats were correctly predicted. Overall, the prediction was fair.

The three mis-predicted seats are described and explained below.

In Antrim North, we predicted a safe DUP victory, but the Traditional Unionist Voice (TUV) won the seat with a small majority of 1pc. This was a surprise result in a seat which has been a DUP stronghold since Ian Paisley won it in 1970.

In Down North, we predicted a very narrow Alliance victory over the independent unionist Alex Easton, with the UUP in a distant third place. In the event, Alex Easton won with a decent majority, finishing 17pc ahead of the Alliance, and the UUP was indeed further behind.

In Lagan Valley, we predicted a narrow DUP victory over the Alliance. In reality, the UUP took more votes from the DUP, and the Alliance won the seat with a majority of 6pc.

1.1 Northern Ireland methodology

This was the second election for which Electoral Calculus made predictions for Northern Ireland.

Our analysis was based on a combination of polling from LucidTalk, UNS analysis, and other relevant data, such as betting market prices.

Seat predictions were based on a combination of these inputs and regularised so that province-wide vote totals matched those of the LucidTalk polls.
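One simple way to impose that province-wide constraint is multiplicative scaling. The sketch below is purely illustrative — the function name, the scaling rule, and the numbers are assumptions, not the actual Electoral Calculus or LucidTalk method:

```python
def regularise(seat_shares, poll_shares):
    """Scale seat-level vote shares so each party's province-wide average
    matches the poll's province-wide share (illustrative method only)."""
    n = len(seat_shares)
    avg = {p: sum(shares[p] for shares in seat_shares.values()) / n
           for p in poll_shares}
    return {seat: {p: shares[p] * poll_shares[p] / avg[p]
                   for p in poll_shares}
            for seat, shares in seat_shares.items()}

# Two imaginary seats, two parties: averages of 25 and 35 are
# rescaled so both parties average 30 province-wide.
seats = {"Seat A": {"X": 20.0, "Y": 40.0},
         "Seat B": {"X": 30.0, "Y": 30.0}}
adjusted = regularise(seats, {"X": 30.0, "Y": 30.0})
```

After scaling, the relative ordering of seats within each party is preserved while the totals match the poll.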

In terms of the division of labour between LucidTalk and Electoral Calculus, LucidTalk was responsible for the province-wide vote shares; Electoral Calculus was responsible for individual seat predictions.

2. Model errors

We can separate out the effects of polling error and model error. This is done by running the model using the actual national vote shares from the election and seeing how accurate the result is. This removes polling error, because we are using the actual vote shares, and so the error that remains is model error.

This can be done both for the new regression-based model, which was the one used in the campaign, and also our older UNS-style strong transition model. This lets us compare between the two models to see if there is any noticeable difference.

The regression-based model used a regression-driven "baseline" prediction, which was then modified by a small UNS overlay to adjust for the difference between the regression's national vote shares and the target national vote shares. The regression baseline was based on two waves of campaign polling, each around 6,000 respondents in size.
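The overlay step can be sketched as a uniform additive swing applied on top of the regression baseline. This is a hypothetical illustration — the function name, seat, and vote shares are invented, not the production model:

```python
def apply_uns_overlay(seat_shares, regression_national, target_national):
    """Shift every seat's share for each party by the gap between the
    target national share and the regression's own national share."""
    swing = {p: target_national[p] - regression_national[p]
             for p in target_national}
    return {seat: {p: shares[p] + swing[p] for p in swing}
            for seat, shares in seat_shares.items()}

# Example: the regression implies LAB on 40.0 nationally but the target
# is 38.8, so every seat's LAB share is shifted down by 1.2 points.
seats = {"Anytown": {"LAB": 45.0, "CON": 30.0}}
adjusted = apply_uns_overlay(seats,
                             {"LAB": 40.0, "CON": 21.0},   # regression national
                             {"LAB": 38.8, "CON": 21.8})   # target national
```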

The table below shows the actual election result in terms of both vote share and seats won; the UNS-style prediction using the actual vote share as the model input; and the regression-based prediction. The predictions are given in terms of seats and the error (defined as predicted seats minus actual seats) is also shown.

Party     Actual Votes  Actual Seats  UNS Prediction  UNS Error  Regression Prediction  Regression Error
CON       24.4%         121           232             +111       123                    +2
LAB       34.7%         412           339             −73        411                    −1
LIB       12.5%         72            40              −32        67                     −5
Reform    14.7%         5             0               −5         8                      +3
Green     6.9%          4             1               −3         3                      −1
SNP       2.6%          9             15              +6         15                     +6
Plaid     0.7%          4             3               −1         3                      −1
Other     3.5%          5             2               −3         2                      −3

The number of incorrectly predicted seats in total was 142 for the UNS model and 65 for the regression model.

The regression model performed significantly better than the UNS model. The regression model predicted the total seats for each GB-wide party to within a narrow tolerance of just five seats. But the UNS model was over 100 seats too high for the Conservatives. (If the UNS model had been run with the predicted vote shares rather than the actual vote shares it would have predicted the Conservatives to win 158 seats, which is 37 seats too high.) The regression-based model was also more successful at predicting individual seats, having a seat error rate of 10pc compared with 22pc for the UNS-style model.
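Those seat error rates follow directly from the mis-predicted seat counts over the 632 GB seats compared:

```python
# Seat error rate = mis-predicted seats / total GB seats compared.
total_seats = 632
for model, wrong in [("UNS", 142), ("Regression", 65)]:
    print(f"{model}: {100 * wrong / total_seats:.0f}pc of seats mis-predicted")
# -> UNS: 22pc, Regression: 10pc
```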

Overall the regression-based model was a significant improvement on the older UNS approach and was a more accurate predictor.

3. Seat by Seat errors

Even if we use the exact national vote shares, we do not predict every seat correctly. Although the total for each major party is correctly predicted to within five seats, there are still 65 seats which were mis-predicted. This compares poorly with previous years. There were 34 seats wrong in 2019 (after correcting for polling error), 50 in 2017, 36 in 2015, and 63 in 2010.

Around half of these seats were fairly close seats where we predicted one party would win narrowly over the other (with a majority less than 10%) and in the event the other party won narrowly over the first. These 32 seats were:

There were also four three-way marginals where the winner was not the party we predicted:

Additionally there were six seats in Scotland where we over-predicted the SNP would win. These were:

Another feature of the 2024 election was the growth of sectarian voting driven by religious affiliation. This mostly hurt Labour, both in seats where Muslim candidates won unexpected victories, and also in Hindu-heavy seats where there was support for the Hindu-led Conservative party.

There were a further seventeen seats with a variety of causes:

Cause                             Num Seats  Seats
CON did better than predicted     5          Farnham and Bordon; Godalming and Ash; Melton and Syston; Aberdeenshire West and Kincardine; Berwickshire, Roxburgh and Selkirk
LAB did better than predicted     4          Dunstable and Leighton Buzzard; Finchley and Golders Green; Harlow; Hartlepool
Reform did worse than predicted   3          Barnsley South; Cannock Chase; Skipton and Ripon
LIB did better than predicted     3          Chichester; Devon South; Hampshire North East
Reform did better than predicted  1          Basildon and East Thurrock
Green did better than predicted   1          Herefordshire North

Summary and Conclusions

The main points shown by this analysis are that the overall performance of the prediction was fair, and that the underlying model performed well.