In addition to developing accurate nowcasts during the challenge, we conducted a thorough post-mortem analysis of our forecasting methodology. This analysis allowed us to evaluate the effectiveness of each model we used and to identify areas for improvement in our approach. By comparing each model's forecasts against the actual values of the target variables, we could determine which models were best suited to each challenge and adjust our modeling accordingly. This post-mortem gave us valuable insight into the strengths and weaknesses of our approach, allowing us to continually refine and improve our modeling techniques.
You can select a specific country of interest and analyze the performance of the different models since the beginning of the challenge.
viewof country = Inputs.select(Object.values(country_map), {
  label: "Select a country:",
  unique: true
})
Past predictions
This interactive graph displays the historical forecasts generated by all of our models, as well as the actual observed values for the selected country.
Plot.plot({
  grid: true,
  y: {label: "↑ ELECTRICITY", transform: (d) => d / 1e6},
  marks: [
    Plot.line(historical, {
      tip: true,
      x: "date",
      y: "values",
      stroke: "black",
      strokeWidth: 2,
      title: (d) =>
        `${d.date.toLocaleString("en-GB", {month: "long", year: "numeric"})}\n${d.values / 1e6} million`
    }),
    Plot.line(predictions, {
      tip: true,
      x: "date",
      y: "values",
      stroke: "model",
      title: (d) =>
        `${d.model}\n${d.date.toLocaleString("en-GB", {month: "long", year: "numeric"})}: ${d.values / 1e6} million`
    })
  ],
  color: {legend: true}
})
Square relative error per month
This graph illustrates the square relative error of each of the models used in the challenge. The square relative error measures the accuracy of a nowcast by relating the size of the error to the level of the first official release being predicted, so that errors remain comparable across countries of very different magnitudes.
\[
SRE = \left(\frac{Y - R}{R}\right)^2
\] where \(R\) is the first official release and \(Y\) the nowcasted value.
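As an illustration, the cell below sketches how this quantity could be computed from the series plotted above. It assumes that `historical` holds the first official releases and `predictions` the nowcasts, both with `date` and `values` fields as in the chart; the `sre` name is introduced here for convenience and is not part of the official evaluation code.

sre = {
  // Look up the first official release by month (assuming `historical`
  // carries the officially released values shown in the chart above).
  const releaseByMonth = new Map(historical.map((d) => [+d.date, d.values]));
  // Square relative error for every nowcast with a matching release.
  return predictions
    .filter((d) => releaseByMonth.has(+d.date))
    .map((d) => {
      const release = releaseByMonth.get(+d.date);
      return {
        model: d.model,
        date: d.date,
        sre: ((d.values - release) / release) ** 2
      };
    });
}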
This interactive graph displays the mean square relative error of each of the models used in the challenge, ranked from the least accurate to the most accurate. The mean square relative error averages the square relative error over all the forecasts made by a given model.
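Building on the `sre` records from the sketch above, the following cell shows one way such a ranking could be produced; it is a minimal illustration rather than the official scoring code of the challenge.

msre = {
  // Group the square relative errors by model.
  const byModel = new Map();
  for (const d of sre) {
    if (!byModel.has(d.model)) byModel.set(d.model, []);
    byModel.get(d.model).push(d.sre);
  }
  const mean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;
  // Average per model and rank from least accurate (largest mean SRE)
  // to most accurate (smallest mean SRE).
  return Array.from(byModel, ([model, errors]) => ({model, msre: mean(errors)}))
    .sort((a, b) => b.msre - a.msre);
}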
These errors can be weighted, as was done in the official evaluation of the challenge. The weights reflect how difficult it is to predict the point estimate of the target variable for the corresponding country.
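As a sketch of how such a weighting could enter the aggregation, the function below computes a weighted mean of per-country mean square relative errors; the `errorsByCountry` and `weights` arguments are hypothetical placeholders, and the official evaluation may combine the weights differently.

weightedScore = (errorsByCountry, weights) => {
  // Weighted mean of per-country mean square relative errors,
  // e.g. weightedScore({FR: 0.02, DE: 0.05}, {FR: 1, DE: 2}).
  let num = 0;
  let den = 0;
  for (const [country, error] of Object.entries(errorsByCountry)) {
    const w = weights[country] ?? 1; // default weight of 1 if unspecified
    num += w * error;
    den += w;
  }
  return num / den;
}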