George R. Kasica
Reflections and strategies to improve forecasts going forward

The final three cities in the National Forecasting contest continued to be both interesting and frustrating to me.

The contest remained interesting because of the ongoing competition with other forecasters around the country and in my own class. I enjoyed the fact that my forecasts were objectively compared with those of many other forecasters, and that I did relatively well compared with other students, some of whom had more formal education than I do.

For the third city in the competition I forecast for Tucson, Arizona, and my final ranking on day 8 was 131 out of approximately 1360 total forecasters, or roughly the top 10% of participants. This was significantly higher than my ranking in the prior weeks and similar to how I had ranked for the first city in the contest, San Antonio, Texas.

Over the following two-week period of forecasting for Atlantic City, New Jersey, my final day 8 ranking improved slightly to 127 out of 1360 forecasters, again roughly the top 10% of participants. Once more, I was quite pleased and surprised by how well I was doing compared with the other forecasters in the competition. At that point, if you had asked me whether I could really forecast the weather, I would have given an unqualified yes.

The final city in the competition, Rapid City, South Dakota, brought home in dramatic fashion that no matter how much we think we know about forecasting, there are still many unknowns, at least in my own case. My day 8 ranking finished at a disappointing 304 out of 1360, a drop of almost 200 places that left me barely inside the top 25% of the group. At that point, had you asked me whether I could really forecast the weather, I would have said maybe. Looking at that two-week period more closely, however, my ranking was actually improving from where I began: early in the period I was ranked as low as 449 on day 2, and I made steady progress up the standings from that point.

Overall, based on my performance over the last three cities and six weeks of the program, I have to conclude that I have in fact learned a great deal about the structure and functioning of the atmosphere and about how to interpret the various forecast guidance and tools to make a generally successful forecast the majority of the time. That conclusion rests on the fact that over the life of the competition I had a cumulative rank of 89 out of the 1360 forecasters, or in the top 7%, in a field that included many forecasters with far more formal education and experience than I have. This is clearly better than what you would expect from climatology alone or even from the Pennsylvania State University consensus forecast, which came in at 871 and 206 respectively. For further comparison, the National Weather Service ranked 251st over the life of the contest, and the lowest cumulative rank in my spring 2007 class was 131, so in every case the class did far better than these other benchmarks.

Over these last six weeks I have noticed a tendency in my forecasting to get "stuck in a rut" at times. By that I mean I fall into a pattern of looking at the same set of models or data items each day and sometimes forget to look at or consider other items that may not be useful on a day-to-day basis but matter for a particular forecast. I saw this behavior in the first four weeks of the competition as well, but this time I was more aware of it and tried to actively force myself to consider other data items. With my background in computers and data processing, relying purely on computer model output is an easy trap for me to fall into and one I will have to keep working to avoid, since there are many cases, such as Atlantic City, New Jersey, where it led me to make large errors in the forecast.

To continue to improve as a forecaster, I will need to make some changes in how I prepare my forecasts and what I base them on, because in several cases I can see that I would easily have produced a far more accurate forecast had I already made these changes. The items I would like to modify, and how I would modify them, are listed below:

  • "Look at an appropriate set of data" - in this I'm referring to looking at a mixture of data that is appropriate for the forecasting issue or problem at the time, not just looking at the same set of items every day regardless of what the atmosphere may be doing. A good example of not looking at the right set of data could be seen in the incorrect forecast for precipitation at Atlantic City, New Jersey where if I had looked at the forecast winds at 850mb I would have quickly seen that they were not expected to be from a direction that would supply all the expected moisture for the amount of rain I was forecasting.

  • "Don't always try hit home runs" - I realize this is a repeat from earlier recommendations, but I still see myself doing this on a regular basis. Tending to disagree with the model output far more often than I probably should and trying to outguess the forecasts rather than using them as a guide to what is likely to occur. I think that this tendency to try to out-forecast the models will only go away as I gain more experience with knowing when they are really in a position to be beat and not pretty much on the mark for the forecast. As I'm finding out there are many occasions where the models perform well and come quite close to the actual forecast that occurs. On the other hand, there are also many situations where they are known to perform very poorly. I need to understand when each of these situations is occurring and as a result when it is time to disagree with the models on the forecast results for the day. I still feel that that this particular item will be the most difficult to accomplish as it seems that only time and experience in forecasting, will enable me to know when to pick a fight as it were with the computer models and when to accept that they may "know" more than I do. One way I have been trying to see when this happens is by keeping a written record or diary of my internal discussion as it were of each forecast I make and why I make it, similar to what the area forecast discussions look like from the National Weather Service to justify why I'm making the forecast that I am making. By doing this I hope to be able to go back after the fact and analyze either why the forecast was correct or what went wrong and how to try to avoid it in the future.
