George R. Kasica
Reflections and strategies to improve forecasts going forward

The first two weeks in the National Forecasting Contest were extremely interesting for me, and also somewhat frustrating initially. The contest was interesting because it was the first time I had ever compared my forecasting skill and accuracy with others in a truly objective manner and in an identical setting: we all had to forecast for the same cities on the same days, and we were then graded objectively on our accuracy.

For the two weeks that I forecast for San Antonio, Texas, my final ranking on day 8 was 99th out of approximately 1,360 total forecasters, or roughly the top 8% of the group. This was far better than I had expected or hoped to do for the first city in the contest, given that I am competing with other students and staff whose education levels range from undergraduate to PhD students and instructors. The following two-week period of forecasting for International Falls, Minnesota, saw my final ranking on day 8 end up at 230th, or about the top 17% of the group. Although that was a lower standing than for the first city, it was still about where I had expected to be at the time.

Overall, based on my performance over the first four weeks of the program, I would have to conclude that I have in fact learned a substantial amount in the last four semesters here and can, in reality, forecast the weather. The latter conclusion rests on the fact that the National Weather Service forecasters for the two cities finished at 332nd for San Antonio, below my standing, and at 223rd for International Falls, just slightly above mine.

One thing I have noticed over the last four weeks about how I prepare a forecast, and the attitude I bring to it, is that I tend to get "tunnel vision".
By that I mean I focus on just one or two items of data rather than taking a good look at the entire "big picture" of the weather before making a forecast. I also tend to rely heavily on computer model output, such as MOS, as the basis for my forecasts, often to my detriment. That reliance is likely due to my extensive background in the data processing field, over 20 years as an information systems professional, and trusting this type of output is an easy trap for me to fall into.

For example, on day 4 in San Antonio, Texas, had I taken a closer look at the numerical model output graphics rather than focusing almost exclusively on the Model Output Statistics (MOS), I might well have picked up on the possibility of light winds and lower humidity and been able to account for them in my forecast. A similar thing occurred at International Falls, Minnesota, on day 8, where I again focused almost exclusively on the MOS output and ignored a very obvious sign that skies were already clear: the real-time observations from the location. Had I taken just a few minutes to "look out the window," as it were, I could have adjusted my forecast to account for the clear skies and the rapid cooldown to come.

Clearly, to continue to improve as a forecaster I need to change both how I prepare for a forecast and what I base it on, because with some relatively minor adjustments to my process I think I can make major improvements in my forecast accuracy. The items I would like to modify, and how I would modify them, are listed below: