Hey guys, a recent thread (https://elitetrack.com/forums/viewthread/11419) got me thinking about how we evaluate our programs. Anyone who has been coaching for some time and has also read a lot of research can see that there is a huge disconnect between what happens in the laboratory and what happens in real life. How do we know that what works for an untrained individual will work for a trained track athlete? Unfortunately, we have to test it ourselves.
My approach in general has been to get ideas from research, books, etc., but these ideas can only really be tested in an actual training program. More important than that, one has to know how to evaluate one's own results, and this is where we need to know improvement rates. Ryan Banta posted a blog a couple of years ago that I think is a great reference point (https://elitetrack.com/blogs-details-6028/). A high school season lasts roughly three months, and it could be said that most kids come in untrained. Therefore, the progression rates in that blog could be treated as a standard for the first three months of training.
What happens after three months? Improvement rates tend to follow an exponential decay model, easiest to see on a year-to-year basis: the amount of improvement tends to decrease 30-50% per year. If you are so inclined, you could write a differential equation to convert that yearly decay to a month-to-month rate as well. I have written on this topic before (https://elitetrack.com/forums/viewthread/10483/#94914) and so has Carl (https://elitetrack.com/blogs-details-6517/).
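To make that concrete, here's a minimal sketch in Python (the function name and structure are just for illustration; the only real input is the 30-50% yearly figure above). The idea is that twelve months of compounded monthly decay must equal one year of yearly decay:

    # Convert a yearly decay in improvement to an equivalent constant
    # monthly decay, assuming the improvement rate decays exponentially.
    def monthly_factor(yearly_decay):
        yearly_factor = 1.0 - yearly_decay    # e.g. 0.70 if improvement drops 30% per year
        return yearly_factor ** (1.0 / 12.0)  # 12 months of compounding = 1 year of decay

    for decay in (0.30, 0.50):
        f = monthly_factor(decay)
        print(f"{decay:.0%} per year is about {1.0 - f:.1%} per month")

That works out to roughly a 2.9% to 5.6% drop per month for the 30% and 50% yearly cases.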
I have also given some examples before of athletes I worked with in the hurdles. I have found that when I get a new hurdler, they improve about 2 seconds in the 100/110 hurdles the first year. The second year typically brings a smaller improvement of roughly 1-1.2 seconds. Following that model, high-level hurdling comes during the 3rd and 4th years of hurdling, very talented athletes being the exception.
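Running the same compounding forward reproduces those numbers. Here is a rough projection, a sketch only, assuming a 2.0-second first-year gain and a 40-50% yearly decay (the range that matches a 1-1.2 second improvement in year two):

    # Project year-by-year improvement in the 100/110 hurdles, assuming
    # the first-year gain (~2.0 s) shrinks by 40-50% each subsequent year.
    first_year_gain = 2.0
    for decay in (0.40, 0.50):
        gain, total = first_year_gain, 0.0
        for year in range(1, 5):
            total += gain
            print(f"decay {decay:.0%}, year {year}: +{gain:.2f} s (cumulative {total:.2f} s)")
            gain *= 1.0 - decay

Under those assumptions the cumulative gain settles around 3.75-4.35 seconds by year 4, with years 3 and 4 adding the last half-second or so, which is exactly when the high-level marks tend to show up.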
So, to come full circle to my original point, we need to make sure that when we change our programs, we do it with an eye on how the changes affect improvement rates. In my experience, that is the most consistent factor in determining our coaching success.
If anyone has their own data, either by season or by year, please feel free to share it. Collegiate data is obviously lacking here as well.