The headline prediction for the July 2024 election was correct in the main outline. The polls showed a substantial Labour lead over the Conservatives, indicating a Labour parliamentary landslide. Our regression-based prediction also showed big gains for the Liberal Democrats and smaller gains for Reform UK and the Greens.
The actual size of Labour's majority was less than predicted, and the Conservatives won correspondingly more seats.
In numerical terms, the prediction and the outcome for GB seats were (errors are predicted minus actual):

Party | 2019 Votes | 2019 Seats | Pred Votes | Pred Seats | Actual Votes | Actual Seats | Vote Error | Seat Error
---|---|---|---|---|---|---|---|---
CON | 44.7% | 365 | 21.8% | 78 | 24.4% | 121 | −2.6% | −43
LAB | 33.0% | 203 | 38.8% | 453 | 34.7% | 412 | +4.1% | +41
LIB | 11.8% | 11 | 11.0% | 67 | 12.5% | 72 | −1.5% | −5
Reform | 2.1% | 0 | 16.4% | 7 | 14.7% | 5 | +1.7% | +2
Green | 2.8% | 1 | 6.3% | 3 | 6.9% | 4 | −0.6% | −1
SNP | 4.0% | 48 | 3.1% | 19 | 2.6% | 9 | +0.5% | +10
Plaid | 0.5% | 4 | 0.6% | 3 | 0.7% | 4 | −0.1% | −1
The final Electoral Calculus prediction is made up of two components: the poll of opinion polls and the seat predictor. The poll-of-polls was correct in outline, but overstated the Labour lead. The seat predictor performed relatively well in showing which seats were more Labour than others.
The poll-of-polls showed a Labour lead over the Conservatives of 17pc, compared with an actual lead of 10pc. The difference between those, an error of 7pc, is high by historical standards. For comparison, the error was 6pc in 2015, and 9pc in 1992 when the polls mistakenly predicted that Neil Kinnock would beat John Major. For more details of the possible causes of this error, see Polling Errors in 2024.
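As a quick check of that arithmetic, the lead error is simply the predicted Labour lead minus the actual one, using the GB vote shares from the table above:

```python
# Lead error: predicted Labour lead minus actual Labour lead, using the GB
# vote shares from the table above (figures in percentage points).
pred_con, pred_lab = 21.8, 38.8        # final poll-of-polls shares
actual_con, actual_lab = 24.4, 34.7    # actual GB vote shares

pred_lead = pred_lab - pred_con        # 17.0
actual_lead = actual_lab - actual_con  # 10.3 (about 10pc)
lead_error = pred_lead - actual_lead   # about 7pc

print(f"Predicted lead {pred_lead:.1f}pc, actual lead {actual_lead:.1f}pc, "
      f"lead error {lead_error:.1f}pc")
```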
Because of the poll error, the seat predictions were not perfect. Labour were over-predicted by about 40 seats, and the Conservatives under-predicted by a similar amount. This is less than ideal, as we normally aim for a seat error of within 20 seats. For the smaller parties, the predictions were much better with the Liberal Democrats, Reform UK and the Green party all predicted fairly well.
However, if we adjust for the error in the poll-of-polls, the underlying pattern of the seat predictions looks better. If we feed the actual vote shares into our seat calculator, the results are:
Party | Adjusted Pred Seats | Actual Seats | Seat Error |
---|---|---|---|
CON | 123 | 121 | 2 |
LAB | 411 | 412 | −1 |
LIB | 67 | 72 | −5 |
Reform | 8 | 5 | 3 |
Green | 3 | 4 | −1 |
SNP | 15 | 9 | 6 |
Plaid | 3 | 4 | −1 |
Other | 2 | 5 | −3 |
You can see this for yourself by running the seat predictor with the correct national and Scottish vote shares: seat calculator.
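Equivalently, the adjusted seat errors in the table are just the adjusted predictions minus the actual results. A minimal Python sketch using the figures above:

```python
# Adjusted seat errors: seats predicted with the actual vote shares as input,
# minus the seats actually won (figures from the table above).
adjusted_pred = {"CON": 123, "LAB": 411, "LIB": 67, "Reform": 8,
                 "Green": 3, "SNP": 15, "Plaid": 3, "Other": 2}
actual_seats = {"CON": 121, "LAB": 412, "LIB": 72, "Reform": 5,
                "Green": 4, "SNP": 9, "Plaid": 4, "Other": 5}

seat_errors = {party: adjusted_pred[party] - actual_seats[party]
               for party in actual_seats}
print(seat_errors)  # {'CON': 2, 'LAB': -1, 'LIB': -5, 'Reform': 3, ...}
```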
The seat errors are now very small indeed. This shows that the MRP regression methods are fairly accurate at showing whether one seat is more Conservative than another. It was the absolute level of Conservative support, which the general polls understated, that introduced the bulk of the errors.
Of course, there is more to prediction accuracy than the headline seat count, since every seat is predicted individually, which provides 650 opportunities to be right or wrong. Looking at the final Electoral Calculus seat-by-seat prediction for Great Britain, the winner was predicted correctly in 546 of the 632 seats and incorrectly in 86. That is a success rate of 86pc, which is not as high as in 2019, when the success rate was 93pc. For the MRP prediction adjusted to match the actual vote shares, the success rate rises to 90pc.
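The success rate is plain arithmetic: seats called correctly divided by the 632 seats in Great Britain. A minimal sketch (the per-seat winner mappings are placeholders, not real data):

```python
# Seat-by-seat success rate: correctly called winners as a share of all GB seats.
def success_rate(predicted_winners: dict, actual_winners: dict) -> float:
    """Both arguments map seat name -> winning party (placeholder data structure)."""
    correct = sum(1 for seat, winner in actual_winners.items()
                  if predicted_winners.get(seat) == winner)
    return 100.0 * correct / len(actual_winners)

# Headline figure quoted above: 546 of 632 GB seats called correctly.
print(f"{100.0 * 546 / 632:.0f}pc")  # 86pc
```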
We will now look at these and other issues in more detail. The particular topics studied are:

- the prediction for Northern Ireland;
- polling error versus model error, comparing the regression-based model with the older UNS-style approach;
- the seats which were predicted incorrectly.
Electoral Calculus was able to make predictions for Northern Ireland, in collaboration with our polling partners at LucidTalk.
The table below shows the predictions which were made and compares them with the actual outcome.
Party | 2019 Votes | 2019 Seats | Pred Votes | Pred Seats | Actual Votes | Actual Seats | Vote Error | Seat Error
---|---|---|---|---|---|---|---|---
DUP | 30.6% | 8 | 21% | 7 | 22.1% | 5 | −1.1% | +2
SF | 22.8% | 7 | 23% | 7 | 27.0% | 7 | −4.0% | 0
SDLP | 14.9% | 2 | 14% | 2 | 11.1% | 2 | +2.9% | 0
UUP | 11.7% | 0 | 13% | 1 | 12.2% | 1 | +0.8% | 0
Alliance | 16.8% | 1 | 18% | 1 | 15.0% | 1 | +3.0% | 0
Other | 3.2% | 0 | 11% | 0 | 12.6% | 2 | −1.6% | −2
On the whole, the vote share predictions were fair rather than good. All parties were predicted to within 4pc of their actual vote share, but the largest errors were outside the margin of error (around 2pc) for a poll of 3,800 people. Our LucidTalk poll correctly predicted that the DUP would lose significant support compared with 2019, and that smaller parties (such as Traditional Unionist Voice) would gain. In the end, Sinn Fein did a bit better than predicted, at the expense of the SDLP and Alliance.
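For context, the quoted margin of error is roughly what the standard sampling formula gives for a poll of that size, before any design-effect adjustment; a rough sketch of the calculation:

```python
import math

# Approximate 95pc margin of error (in percentage points) for a party share,
# using the standard sqrt(p*(1-p)/n) formula for a simple random sample.
def margin_of_error(share_pc: float, sample_size: int, z: float = 1.96) -> float:
    p = share_pc / 100.0
    return 100.0 * z * math.sqrt(p * (1.0 - p) / sample_size)

# For 3,800 respondents: about 1.4pc for a party on 25pc and about 1.6pc at 50pc,
# so roughly 2pc once real-world design effects are allowed for.
print(round(margin_of_error(25, 3800), 1))  # 1.4
print(round(margin_of_error(50, 3800), 1))  # 1.6
```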
In terms of seats, three seats were mis-predicted and fifteen seats were correctly predicted. Overall, the prediction was fair.
The three mis-predicted seats are described and explained below.
In Antrim North, we predicted a safe DUP victory, but the Traditional Unionist Voice (TUV) won the seat with a small majority of 1pc. This was a surprise result in a seat which has been a DUP stronghold since Ian Paisley won it in 1970.
In Down North, we predicted a very narrow Alliance victory over the independent unionist Alex Easton, with the UUP in a distant third place. In the event, Alex Easton won comfortably, 17pc ahead of the Alliance, and the UUP was indeed further behind.
In Lagan Valley, we predicted a narrow DUP victory over the Alliance. In reality, the UUP took more votes from the DUP, and the Alliance won the seat with a majority of 6pc.
This was the second election for which Electoral Calculus made predictions for Northern Ireland.
Our analysis was based on a combination of polling from LucidTalk, UNS analysis, and other relevant data, such as betting market prices.
Seat predictions were based on a combination of these inputs and regularised so that province-wide vote totals matched those of the LucidTalk polls.
In terms of the division of labour between LucidTalk and Electoral Calculus, LucidTalk was responsible for the province-wide vote shares; Electoral Calculus was responsible for individual seat predictions.
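As an illustration of the regularisation step, here is a minimal sketch which assumes a simple proportional rescaling of seat-level shares towards the target province-wide totals; the actual Electoral Calculus procedure may differ in detail:

```python
# Sketch of regularising seat-level vote shares so that their province-wide
# average matches the target (LucidTalk) shares. Assumes equal-sized seats and a
# simple proportional rescaling; the real procedure may differ in detail.
def regularise(seat_shares: dict, target_shares: dict) -> dict:
    """seat_shares: {seat: {party: share_pc}}; target_shares: {party: share_pc}."""
    n_seats = len(seat_shares)
    # Province-wide share implied by the raw seat-level predictions.
    implied = {party: sum(shares.get(party, 0.0)
                          for shares in seat_shares.values()) / n_seats
               for party in target_shares}
    scale = {party: target_shares[party] / implied[party]
             for party in target_shares if implied[party] > 0}
    out = {}
    for seat, shares in seat_shares.items():
        scaled = {party: share * scale.get(party, 1.0)
                  for party, share in shares.items()}
        total = sum(scaled.values())
        # Renormalise so the shares within each seat still sum to 100.
        out[seat] = {party: 100.0 * s / total for party, s in scaled.items()}
    return out
```

A single pass of this kind gets the implied province-wide totals close to the targets; repeating the same step converges on them.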
We can separate out the effects of polling error and model error. This is done by running the model using the actual national vote shares from the election and seeing how accurate the result is. This removes polling error, because we are using the actual vote shares, so the error that remains is model error.
This can be done both for the new regression-based model, which was the one used in the campaign, and also our older UNS-style strong transition model. This lets us compare between the two models to see if there is any noticeable difference.
The regression-based model used a regression-driven "baseline" prediction, which was then modified by a small UNS overlay to adjust for the difference between the regression's national vote shares and the target national vote shares. The regression baseline was based on two waves of campaign polling, each of around 6,000 respondents.
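As a rough sketch of that overlay step (assuming the overlay is a simple additive uniform swing, with illustrative names rather than the actual Electoral Calculus code):

```python
# Sketch of a small UNS overlay on a regression baseline: every seat's share for
# a party is shifted by the gap between the target national share and the
# national share implied by the regression. Names are illustrative only.
def apply_uns_overlay(baseline_seat_shares: dict,
                      regression_national: dict,
                      target_national: dict) -> dict:
    """baseline_seat_shares: {seat: {party: share_pc}} from the regression model."""
    swing = {party: target_national[party] - regression_national[party]
             for party in target_national}
    adjusted = {}
    for seat, shares in baseline_seat_shares.items():
        adjusted[seat] = {party: max(0.0, share + swing.get(party, 0.0))
                          for party, share in shares.items()}
    return adjusted
```

For example, if the regression implied a national Labour share 1pc above the poll-of-polls target, every seat's Labour share would be nudged down by one point before the seats are re-counted.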
The table below shows the actual election result in terms of both vote share and seats won; the UNS-style prediction using the actual vote share as the model input; and the regression-based prediction. The predictions are given in terms of seats and the error (defined as predicted seats minus actual seats) is also shown.
Party | Actual Votes | Actual Seats | UNS Pred Seats | UNS Error | Regression Pred Seats | Regression Error
---|---|---|---|---|---|---
CON | 24.4% | 121 | 232 | +111 | 123 | +2
LAB | 34.7% | 412 | 339 | −73 | 411 | −1
LIB | 12.5% | 72 | 40 | −32 | 67 | −5
Reform | 14.7% | 5 | 0 | −5 | 8 | +3
Green | 6.9% | 4 | 1 | −3 | 3 | −1
SNP | 2.6% | 9 | 15 | +6 | 15 | +6
Plaid | 0.7% | 4 | 3 | −1 | 3 | −1
Other | 3.5% | 5 | 2 | −3 | 2 | −3
The number of incorrectly predicted seats in total was 142 for the UNS model and 65 for the regression model.
The regression model performed significantly better than the UNS model. The regression model predicted the total seats for each GB-wide party to within a narrow tolerance of just five seats. But the UNS model was over 100 seats too high for the Conservatives. (If the UNS model had been run with the predicted vote shares rather than the actual vote shares it would have predicted the Conservatives to win 158 seats, which is 37 seats too high.) The regression-based model was also more successful at predicting individual seats, having a seat error rate of 10pc compared with 22pc for the UNS-style model.
Overall the regression-based model was a significant improvement on the older UNS approach and was a more accurate predictor.
Even if we use the exact national vote shares, we do not predict every seat correctly. Although the total for each major party is correctly predicted to within five seats, there are still 65 seats which were mis-predicted. This compares poorly with most previous years: there were 34 seats wrong in 2019 (after correcting for polling error), 50 in 2017, 36 in 2015, and 63 in 2010.
Around half of these seats were fairly close seats, where we predicted one party would win narrowly over another (with a majority of less than 10pc) and in the event the other party won narrowly instead. These 32 seats were:
There were also four three-way marginals where the winner was not the party we predicted:
Additionally, there were six seats in Scotland where we wrongly predicted that the SNP would win. These were:
Another feature of the 2024 election was the growth of sectarian voting driven by religious affiliation. This mostly hurt Labour, both in seats where Muslim candidates won unexpected victories, and in Hindu-heavy seats where there was support for the Hindu-led Conservative party.
There were a further seventeen seats with a variety of causes:
Cause | Num Seats | Seats |
---|---|---|
CON did better than predicted | 5 | Farnham and Bordon; Godalming and Ash; Melton and Syston; Aberdeenshire West and Kincardine; Berwickshire, Roxburgh and Selkirk |
LAB did better than predicted | 4 | Dunstable and Leighton Buzzard; Finchley and Golders Green; Harlow; Hartlepool |
Reform did worse than predicted | 3 | Barnsley South; Cannock Chase; Skipton and Ripon |
LIB did better than predicted | 3 | Chichester; Devon South; Hampshire North East |
Reform did better than predicted | 1 | Basildon and East Thurrock
Green did better than predicted | 1 | Herefordshire North