George R. Kasica
Reflections and strategies to improve forecasts going forward

The first two forecast periods of the National Forecasting contest were extremely interesting for me, and, initially, somewhat frustrating.

The contest was interesting because it was the first time I had ever compared my forecasting skill and accuracy in a truly objective manner with others in an identical setting: we all had to forecast for the same cities on the same days, and we were then graded objectively on our forecast accuracy.

For the two weeks that I forecast for San Antonio, Texas, my final ranking on day 8 was 99th out of approximately 1,360 total forecasters, or roughly the top 8% of the group. This was far better than I had expected or hoped to do for the first city in the contest, given that I am competing with students and staff whose education levels range from undergraduates to Ph.D. students and instructors.

The following two-week period of forecasting for International Falls, Minnesota saw my final ranking on day 8 come in at 230th, or roughly the top 17% of the group. Although this was a lower standing than for the first city, it was still about where I had expected to be at the time.

Overall, based on my performance over the first two forecast periods, I have to conclude that I have in fact learned a substantial amount in the last four semesters here and can indeed forecast the weather. The latter conclusion rests on the fact that the National Weather Service forecasters for the two cities finished at 332 for San Antonio, Texas, below my standing, and at 223 for International Falls, Minnesota, just slightly above mine.

Some things I have noticed over the last four weeks, both in how I prepare a forecast and in how I approach it, are that I tend to get "tunnel vision": I focus on just one or two items of data rather than taking a good look at the entire "big picture" of the weather before making a forecast. I also tend to rely very heavily on computer model output such as MOS as the basis for my forecasts, often to my detriment. That reliance is likely due to my extensive background in the data processing field, more than 20 years as an information systems professional, which makes trusting this type of output an easy trap for me to fall into.

An example of this occurred on day 4 in San Antonio, Texas: had I taken a closer look at the numerical model output graphics instead of focusing almost exclusively on the Model Output Statistics (MOS), I might well have picked up on the possibility of light winds and lower humidity and accounted for them in my forecast.

A similar thing occurred at International Falls, Minnesota on day eight, where I again focused almost exclusively on the MOS output and ignored a very obvious sign that skies were already clear, namely the real-time observations from the location. Had I taken just a few minutes to basically "look out the window," as it were, I would have been able to adjust my forecast for the clear skies and the rapid cool-down to come.

Clearly, to continue improving as a forecaster I need to change both how I prepare my forecasts and what I base them on; with some relatively minor adjustments to my process, I think I can make major improvements in my forecast accuracy. The items I would like to modify, and how I would modify them, are listed below:

  • "Look out the window before making a forecast" - by this I mean take a look at the current conditions for the location and the area around it and see what is happening right now in order to avoid mistakes like what I made at International Falls on day eight. This change could best be monitored or measured by my consciously looking at the current conditions before I submit my final daily forecast for each remaining location.
     
  • "Look at a more comprehensive set of data" - in this I'm referring to avoiding my tendency for "tunnel vision" and considering only one source of forecast data - in my case usually MOS. Rather I need to not only look at MOS but other items of data as well such as the numerical guidance output graphics (NGM, WRF and AVN model panels), cloud forecasts, satellite imagery and also recent surface analysis and forecast products. I could monitor this problem of not looking at enough data again by making a conscious effort to look at the most current model guidance output off of the Pennsylvania State University e-Wall before I finalize each forecast.
     
  • "Don't always try hit home runs" - in this respect I'm referring to my tendency to want to always disagree with the computer models and try to out forecast them. Though this my seem like a contradiction to what I had stated above, it really applies to my knowing when to trust or disagree with the computerized forecast models. There are many situations where they perform relatively well and come quite close to the actual forecast that occurs. Conversely, there are situations where they perform poorly. I need to realize when each of these situations occur and as a result when to disagree with the models on the forecast. I think that this particular item will be the hardest to accomplish as it seems that only experience in forecasting, which I will gain a small amount of in the next six weeks, will enable me to know when to pick a fight as it were with the computer. This last item will be the hardest to monitor as normally there are small adjustments to even the "consensus" or average of the model forecasts, but rarely are there large departures from them. I think the best way to account for this may be to make a written forecast discussion as it were for each time I deviate in a large degree from MOS and therefore have to have good, solid scientific support for why I want to deviate from the model forecast.
