We wanted to alert you to a finding from our recent quality assurance check. While conducting a routine audit of the scoring functionality, we found an inconsistency in how scores were being calculated: the automated scoring and leaderboard routine was inadvertently omitting the first day's forecast from question and overall Brier scores.* The impact on overall Brier scores was generally minimal (a change of 0.01 or less to the overall mean of mean daily Briers). As a result, you will see periodic changes to the leaderboards as we re-run the current scores. This should continue over the next couple of days, and we will update you when the process is complete.
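For those curious how the "overall mean of mean daily Briers" works, here is a minimal illustrative sketch (the question names and score values are hypothetical, not taken from the site): each question's daily Brier scores are averaged into a per-question mean, and the overall score is the mean of those per-question means. Dropping the first day from every question, as the bug did, shifts this overall mean slightly.

```python
# Hypothetical sketch of "overall mean of mean daily Briers".
# Question names and daily Brier values below are made up for illustration.
daily_briers = {
    "Q1": [0.10, 0.08, 0.05],
    "Q2": [0.30, 0.25],
}

def overall_score(scores_by_question):
    """Mean across questions of each question's mean daily Brier."""
    question_means = [sum(days) / len(days) for days in scores_by_question.values()]
    return sum(question_means) / len(question_means)

# Correct scoring includes every day; the bug effectively dropped day 1:
correct = overall_score(daily_briers)
buggy = overall_score({q: days[1:] for q, days in daily_briers.items()})
```

In this toy example the two overall scores differ only modestly, which is consistent with the small changes described above, though the exact shift depends on each forecaster's day-one accuracy.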

We have also recalculated the scores for the Milestone 1 and Spring Forecaster Awards. The Spring Forecaster Award winners remain the same, though their scores have shifted slightly. However, there are a couple of additions to be made for Milestone 1: the new calculations show that both seb and Digital Delphi should have been recognized at that time. We will be correcting this immediately, so please watch your accounts for our email.

Moving forward: We will continue to audit the scoring system regularly to ensure the highest level of accuracy. The updated scoring will be applied to Milestone 2 and all future awards. And we will continue to keep you up to date with news as soon as we have it.

In the meantime, let us know if you have questions or concerns. We are a community, and we appreciate your contributions and time.



*For example, if a question was released on 01/01, final forecasts submitted before 2:01pm ET on 01/02 should have been treated as the first scored "day." Instead, the system was treating final forecasts submitted before 2:01pm ET on 01/03 as the first scoreable forecasts, omitting the day ending at 2:01pm ET on 01/02 from scoring.