So, theoretically, an athlete with a longer ground contact time who generates the same average force throughout that contact will deliver a larger impulse to the track, and should therefore propel himself faster.
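Writing that out explicitly (with $\bar F$ for average force, $t_c$ for contact time, and $m$ for body mass — my notation, just to pin the claim down, and ignoring the braking portion of contact):

$$J = \bar F \, t_c, \qquad \Delta v = \frac{J}{m}$$

So at fixed $\bar F$, a longer $t_c$ means a larger impulse $J$ and a larger per-step velocity gain $\Delta v$. That's the theory, anyway.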
Practically, however, this model fails, and I'm having a hard time figuring out why: what causes a theoretical model I'm fairly sure is accurate to break down when it's actually applied? I suspect it's something rooted in the mechanics of running, such as stride recovery (perhaps applying force for longer means the stride takes longer to recover?), or something along those lines.
I think the reasoning behind it is this…
When the leg is kept straight during the support phase of sprinting, the body can only produce force for so long. To produce force for longer, the leg would have to bend, but that lowers hip height and therefore shortens the lever, and it also makes it harder to complete the leg cycle and prepare for the next footstrike.
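Here's a back-of-the-envelope check on that frequency cost; the numbers and function below are illustrative guesses on my part, not measured data:

```python
# Toy model: top speed = step length * step frequency,
# where step frequency is set by contact time + aerial time.
# All numbers are illustrative assumptions, not measurements.

def top_speed(step_length_m, contact_s, aerial_s):
    """Speed as step length times step rate (1 / time per step)."""
    step_frequency_hz = 1.0 / (contact_s + aerial_s)
    return step_length_m * step_frequency_hz

# Elite-ish sprinter: ~0.09 s contact, ~0.12 s aerial, ~2.2 m steps.
print(top_speed(2.2, 0.09, 0.12))  # ~10.5 m/s

# Stretch contact to 0.14 s to "push longer" at the same step length:
print(top_speed(2.2, 0.14, 0.12))  # ~8.5 m/s -- the frequency cost dominates
```

If the longer push lengthened the step enough to compensate, this would wash out — but the straight-leg geometry above is exactly what caps step length, at least as I read it.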
Also, as in the bike analogy: past a certain point, your hand can't keep up with the wheel, so by trying to apply force for a longer period of time you would only end up applying a braking force to the wheel and then having to re-accelerate it. The same applies to sprinting.
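A toy sketch of that bike-wheel point (the hand-speed limit and all the numbers are assumptions, purely for illustration):

```python
# Toy bike-wheel model: your hand can only sweep so fast. While the rim
# is slower than your hand, friction accelerates the wheel; once the rim
# passes your hand speed, keeping contact drags (brakes) it instead.
# All values are illustrative assumptions.

HAND_SPEED = 8.0   # m/s, max speed your hand can sweep
ACCEL = 40.0       # m/s^2, |rim acceleration| while the hand touches the rim
DT = 0.01          # s, simulation time step

rim_speed = 7.0    # m/s, rim already moving from earlier pushes
for step in range(30):
    if rim_speed < HAND_SPEED:
        rim_speed += ACCEL * DT   # hand drives the rim forward
    else:
        rim_speed -= ACCEL * DT   # hand now brakes the rim
    print(f"t={step * DT:.2f}s  rim={rim_speed:.2f} m/s")
```

The rim speed just pins at the hand-speed limit: prolonging contact beyond that point adds no speed, it only brakes and re-accelerates the wheel.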