You’ve just run a race. Maybe you’re happy (you probably are — you have a runner’s high!), or maybe you’re not (you got passed by your arch-rival in the final stretch). But can you objectively decide whether you did well or not? I think that’s the question Adam was asking.
The geek way to answer: you build a statistical model that calculates how fast you would be expected to run the race, and you compare that to how fast you actually ran it. The components of the model are straightforward: a component for the race and a component for you. To estimate the component for the race, you ideally have many runners who ran this race and who have also run other races. These components are necessarily comparative: was this race faster or slower than the other races? The more people in common across races, the better you can figure that out. The component for you is similar: the more races you have run, the better the model knows what to expect of you.
So it would go something like this:
- expectation_for_this_race = component_for_this_race + component_for_you
- performance_for_this_race = actual_time - expectation_for_this_race
If the statistical model is good, then performance_for_this_race should center around zero. Since high times are bad and low times are good, positive numbers for performance_for_this_race would mean you ran slowly whereas negative numbers for performance_for_this_race would mean you ran fast.
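The model above can be sketched as a two-way fixed-effects fit. This is a minimal illustration with made-up runners, races, and times (all names and numbers are invented, not real results): each finish time is modeled as runner component + race component, fit by least squares, and the residual is the performance number.

```python
import numpy as np

# Hypothetical results: (runner, race, finish time in minutes).
# In a real series you'd load these from published race results.
results = [
    ("ann", "hill_5k", 24.0), ("ann", "lake_10k", 51.0),
    ("bob", "hill_5k", 21.5), ("bob", "lake_10k", 46.0),
    ("cam", "lake_10k", 49.5), ("cam", "gorge_5k", 23.0),
    ("ann", "gorge_5k", 25.5), ("bob", "gorge_5k", 22.5),
]

runners = sorted({r for r, _, _ in results})
races = sorted({c for _, c, _ in results})

# Design matrix: one indicator column per runner and per race,
# so expectation = component_for_you + component_for_this_race.
X = np.zeros((len(results), len(runners) + len(races)))
y = np.zeros(len(results))
for i, (runner, race, t) in enumerate(results):
    X[i, runners.index(runner)] = 1.0
    X[i, len(runners) + races.index(race)] = 1.0
    y[i] = t

# Least-squares fit; lstsq tolerates the rank deficiency
# (runner and race effects are only identified up to a shift,
# but the expectations and residuals are unaffected).
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# performance = actual_time - expectation; negative means
# faster than expected, positive means slower.
performance = y - X @ coef
for (runner, race, _), p in zip(results, performance):
    print(f"{runner:4s} {race:8s} {p:+.2f}")
```

The overlap between races is what makes this work: because ann and bob both ran all three races, the model can separate "hill_5k is a slow course" from "ann is a slow runner."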
Once you have run in many races, you can add components that estimate whether you are better on trails versus roads, or better in short versus long races.
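One way to sketch such a component (hypothetical names and surface metadata, continuing the design-matrix idea): add a column that is 1 where a given runner ran a trail race, so its fitted coefficient becomes that runner's trail-versus-road adjustment.

```python
import numpy as np

# Hypothetical rows of (runner, race) and assumed surface metadata.
rows = [("ann", "hill_5k"), ("ann", "lake_10k"), ("bob", "hill_5k")]
surface = {"hill_5k": "trail", "lake_10k": "road"}

# Indicator column: 1.0 where "ann" ran a trail race, else 0.0.
# Appended to the design matrix, its coefficient estimates how much
# faster or slower ann is on trails relative to her overall level.
ann_trail = np.array(
    [1.0 if r == "ann" and surface[race] == "trail" else 0.0
     for r, race in rows]
)
print(ann_trail)  # -> [1. 0. 0.]
```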
I made a model like this for the FLRC challenge last year but have been too busy this year. The PGXC series is particularly good for this kind of thing because lots of people run in common across a number of races.
Fun with statistics!