|George R. Kasica
METEO 410 Portfolio #1: Forecast Assessment and Review
In this section I will discuss my overall performance in forecasting for San Antonio, Texas, over the four days from January 23 through January 26, 2007, and what the experience taught me about my strengths and weaknesses as a forecaster. I will also lay out a plan for addressing those weaknesses and for minimizing errors in future forecasts for the Weather Challenge Contest.
Overall, for the four forecast days my forecasts compared to the actual results as follows:
How would I rate my performance over the period? In terms of accurately predicting the maximum and minimum temperatures and the maximum wind speeds for each day, I felt that I did very well: on most days my temperature errors were within 3-4 degrees and my wind-speed errors within just a couple of miles per hour. In terms of predicting precipitation, I could use significant improvement, as on two of the four days I had large errors in that portion of the forecast.
My strengths as a forecaster are that I am very comfortable working with the various computer model outputs (MOS and FOUS/FOUE), having used them in prior classes and in my own local forecasting attempts over the years. I also have a strong interest in the computer forecasting models and how they process data to produce the output we use, and I would like to learn even more about the peculiarities of the various models (GFS, NGM, NAM/ETA) under different weather patterns and in different locations around the country.
My weakness as a forecaster is that I need to remember not to try to outguess the models by giving the latest bit of data more significance than the prior collection of data. An example is the 18Z GFS model output: at the beginning of the week I tended to assume that its results would somehow be "better" or more accurate than the consensus of the three earlier 12Z runs (NGM, NAM/ETA, GFS). In fact it is no more accurate, and its output should not be given any more weight in the consensus forecast than that of the other three models; it should be treated as just another data item added to the group consensus. This week that was apparent in the second day's forecast, where I badly missed the amount of precipitation received.
A goal I am going to work toward in the first few cities of the forecasting contest is to remember to use the models as a group of data leading to a solution. This first week I tried to pick one "best" model each day and gave it heavier weight in my forecast without adequately considering the other aspects of the atmosphere and how they might affect the outcome; those errors might not have occurred had I weighed all the models equally.
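The equal-weighting idea above can be sketched numerically. The model names below match the runs discussed in the text, but the temperature values and the weighting scheme are hypothetical illustrations, not actual guidance from that week:

```python
# Sketch of treating each model run as one equal vote in the consensus,
# versus over-weighting the latest run (here, the 18Z GFS).
# All values are made-up examples for illustration only.

def equal_weight_consensus(guidance):
    """Average all model values with equal weight."""
    return sum(guidance.values()) / len(guidance)

def weighted_consensus(guidance, weights):
    """Weighted average; over-weighting one run pulls the result toward it."""
    total = sum(weights.get(m, 1) * v for m, v in guidance.items())
    return total / sum(weights.get(m, 1) for m in guidance)

# Hypothetical high-temperature guidance (degrees F) for one forecast day
highs = {"12Z NGM": 61.0, "12Z NAM": 63.0, "12Z GFS": 62.0, "18Z GFS": 68.0}

print(equal_weight_consensus(highs))              # 63.5
print(weighted_consensus(highs, {"18Z GFS": 3}))  # 65.0, skewed toward the 18Z run
```

The contrast shows the pitfall described above: tripling the weight of the latest run drags the forecast 1.5 degrees toward a single outlying solution, when the run deserves no more trust than any other member of the group.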