Attackpoint - performance and training tools for orienteering athletes

Discussion: The Altered Boogie Pepper pre-announcement

in: Orienteering; General

Apr 6, 2006 12:50 PM # 
Swampfox:
This is to announce that the newly calculated Altered Boogie Pepper standings will be released today at 1200 RMT, or some other equally suitable, undesignated time. Boogie on, boogie off.
Apr 6, 2006 1:55 PM # 
vmeyer:
In the meantime, check out the post-Pig rolling rankings. :)
Apr 6, 2006 2:35 PM # 
Swampfox:
The Altered Boogie Pepper has been updated with results from the Flying Pig. Certain adjustments had to be made to account for the presence of the abandoned pig house, though luckily by the end of the races all piggies had been placed with good homes.

There was a whole lot of shaking going on within the lists with the rolling off of last year's Team Trials, and because of an especially strong M21 field at this year's Pig.

If you dare to test the taste and zest of full strength pepper, you can: http://www.geocities.com/Colosseum/Stadium/7418/al...

It's a delightful jalapeño pepper world!
Apr 6, 2006 3:27 PM # 
Sergey:
Wow, Valerie, what do you use to calculate the rankings? No way I can be that close to Mike Waddington. In my opinion I should be at least another couple of points back.

Thank you for the work and dedication.

Sergey Velichko
Apr 6, 2006 3:51 PM # 
Joe:
I like Valerie's rankings. They put me just ahead of an ailing SF.
Apr 6, 2006 4:05 PM # 
vmeyer:
Sergey, you are boosted by the fact that the rankings are normalized to 100 for the top 3 USOF members, and then on down the list. This gives you a consistent number to compare yourself to the other USOF members as the non-USOF people roll on and off the list.
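A rough sketch of what that normalization step looks like in practice (the names and raw scores below are made up, and the real rankings program may differ in detail):

# Hypothetical illustration of "normalize the top 3 USOF members to 100".
raw_scores = {
    "Runner A": 92.4,   # USOF member
    "Runner B": 90.1,   # USOF member
    "Runner C": 88.7,   # USOF member
    "Runner D": 95.0,   # non-USOF, not part of the normalization basis
}
usof_members = {"Runner A", "Runner B", "Runner C"}

# The average of the top 3 USOF members defines the 100-point level.
top3 = sorted((raw_scores[n] for n in usof_members), reverse=True)[:3]
basis = sum(top3) / 3

normalized = {n: 100 * s / basis for n, s in raw_scores.items()}
for n, s in sorted(normalized.items(), key=lambda kv: -kv[1]):
    print(f"{n}: {s:.1f}")   # non-USOF runners can land above 100 without shifting the basis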
Apr 6, 2006 5:03 PM # 
feet:
What does the comment about 'data errors requiring a recalculation of the 2005 official rankings' mean?
Apr 6, 2006 5:16 PM # 
j-man:
I also like these rankings. They put me .1 ahead of a non-ailing Joe ;)
Apr 6, 2006 5:29 PM # 
Sergey:
Valerie,

I understand the normalization, but I should be at least a couple of points back. Something is wrong with the calculations you are doing. You should probably post a description of the exact procedure that you are using so knowledgeable people can scrutinize it.
Apr 6, 2006 5:36 PM # 
igoup:
I would also find it interesting to see how individual races score -- not the gnarliness factor, but the ranking points achieved per race. I'd like that for the Altered Boogie Pepper too but I'm afraid to ask lest I get moved to the Altered Booger rankings.
Apr 6, 2006 5:37 PM # 
j-man:
Wow Sergey, you might want to reconsider that request.
Apr 6, 2006 5:41 PM # 
j-man:
Anyway, this shouldn't be a surprise, but these rankings are calculated the same way as last year's, and the year before that, etc...

Knowledgeable people can contact Wyatt, or ask Kenny, because the AP rankings use the same underlying methodology. As does DVOA.

Here's a cheat sheet:

http://us.geocities.com/nikdangerdvm/03methods.htm...
Apr 6, 2006 6:14 PM # 
vmeyer:
Tom, when we are fully up and running, you should be able to see individual scores - look for that later this year.

Sergey, sorry, I misunderstood your point about thinking you were too close to Mike W in points...
Apr 6, 2006 8:35 PM # 
Sergey:
Valerie,

The reason I am concerned is that the Attackpoint per-meet rankings give totally different values. I haven't run my own calculations, but it is somewhat suspicious.

For example, consider my best result from this past weekend, Flying Pig X - Classic. If you subtract 4 or so points from my ranking to scale Mike Waddington to 100 points, I would still be at a 95-point average. I know my second day at the USA Champs in August was above that, but the rest should be well below.

Just some reasoning behind my statement :)
Apr 6, 2006 8:38 PM # 
BorisGr:
Wow, I have no races left in the Boogie Pepper. Sad....
Apr 6, 2006 8:43 PM # 
j-man:
Sergey, you of course appreciate that the self-selection biases inherent in AP rankings, the different populations used to cross-validate, and the fact that the winner of an event may be significantly +/- 100 points make such back-of-the-envelope calculations spurious? And would make direct, person-to-person comparison of scores between AP and USOF rankings difficult?

Or, at least I think this is what a "knowledgeable" person might begin to say about this. (Luckily, I am hardly knowledgeable in this area--but I can make conjectures.)
Apr 6, 2006 8:51 PM # 
Tundra/Desert:
Sergey, your FPX Classic run vs. Hammer matches the outcome predicted by your post-Pig ranking almost dead on (within a minute).

The reason AP rankings tend to be weighed down is just what Clem suggests—self-selection.
Apr 6, 2006 10:06 PM # 
Sergey:
My other FP-X races were way below Hammer's performance. For some reason I trust the AP rankings more, as they better reflect who is who, excluding sprint races. I just wonder what Valerie and the team used to make the calculations. Whatever Nik posted as the 2005 rankings made sense. Now everyone is way too close to the top. I see it as a result of a change in the calculation methodology, as opposed to everyone starting to run faster. Looking at the training and results posted on AP, that should not be the case. Did Valerie and the team use tested software? Maybe they switched to Excel? Too many questions. No way I can be within 5 points or so of Hammer! JMHO
Apr 7, 2006 3:01 AM # 
Sandy:
Several comments.

We (Valerie, Kent Shaw and I) are currently using the exact code, i.e. the same exact file, that has been used for the last few years to do rankings. Nik sent us the program and that's what we are using. Eventually, when our site is fully functional, we will migrate to our own code, but all results that are currently posted use the same rankings program - not just the same methodology, the same program.

The methodology we are using is what is currently posted in the USOF rules: 2005 USOF Rules
Scroll down through the document to section 50, the section on rankings. This is what has been used for the last few years.

The changes from the early 2005 rankings that Nik posted to what are now the official rankings came from 3 places:
-a few people had credit days that were overlooked
-data from one event was reported as, for example, 43:15:00 instead of 43:15, so several fast results (anything under an hour) were being recorded as very slow ones!
-mp, dnf, ovt and dsq were not handled consistently for all events. The current rules state that all competitors not finishing a course, for a reason other than spw, should receive a time of course limit plus 20 minutes. (You may disagree with this rule, but that's what the current rules say.)
To reiterate, we did not change the code or methodology, we simply cleaned up some of the data input files.

This last point, however, explains some of the difference between AP rankings and USOF rankings - I don't recall seeing anyone post their splits for a dnf, with a time of 3:20:00. :)
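For what it's worth, here is a rough sketch of the kind of cleanup involved - illustrative only, with an assumed 3-hour course limit; it is not the actual rankings program:

# Illustrative cleanup sketch, not the real code. Assumes a 3-hour course limit.
def parse_minutes(raw, course_limit_min=180):
    """Convert a reported time to minutes, guarding against the 43:15:00 vs 43:15 mixup."""
    parts = [int(p) for p in raw.split(":")]
    if len(parts) == 2:                        # "43:15" -> 43 min 15 s
        return parts[0] + parts[1] / 60
    minutes = parts[0] * 60 + parts[1] + parts[2] / 60
    if minutes > 3 * course_limit_min:         # a 43-hour result is impossible,
        return parts[0] + parts[1] / 60        # so read "43:15:00" as mm:ss:00
    return minutes

def result_minutes(status, raw, course_limit_min=180):
    """mp/dnf/ovt/dsq (anything other than spw) get course limit plus 20 minutes."""
    if status in {"mp", "dnf", "ovt", "dsq"}:
        return course_limit_min + 20
    return parse_minutes(raw, course_limit_min)

print(result_minutes("ok", "43:15:00"))   # 43.25 minutes, not 2595
print(result_minutes("dnf", "1:05:30"))   # 200 minutes (course limit + 20)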

Finally, the rolled rankings labelled April 2, 2006 are actually April 4, 2006 - i.e. last year's Pig has rolled off. We'll correct that date shortly.
Apr 7, 2006 5:31 PM # 
Sergey:
Sandy, Valerie, and Kent,

Thank you for the information. And thank you for the time you invest into the new system!

Sergey

PS. I think the statistical discrepancy that I see is due to the unique combination of my, Leif's, and Andy's results, as they are tied to both Magnus and Mike W. Probably Magnus and Mike didn't have many common starts this past year. The only connection point is via my results. That may explain it.
Apr 10, 2006 11:20 PM # 
feet:
Sandy said: This last point, however, explains some of the difference between AP rankings and USOF rankings - I don't recall seeing anyone post their splits for a dnf, with a time of 3:20:00. :)

Check out Brendan's and Mikkel's splits for the Mega, and note how they inflate the top results. People do sometimes enter odd things on Attackpoint too. Here is a link to the splits.
Apr 10, 2006 11:41 PM # 
Tundra/Desert:
I always do this.
Apr 10, 2006 11:50 PM # 
Sandy:
Never noticed before - I stand corrected!
Apr 11, 2006 3:48 PM # 
Sergey:
I agree that the AP rankings are skewed, as people may selectively enter their race results, and the top Canadians are not entering at all. Also, a lot of non-sanctioned sprint races in the results have an effect.

Looking at the results of some championships, I wonder if the rankings are somewhat deflated. Sometimes people get much higher ranking points at lower-status A meets. Maybe all USA, Canadian, and NA Championships need to be adjusted 5% up, as is done with World Champs results for WRE rankings? Just a thought.
Apr 11, 2006 4:42 PM # 
Tundra/Desert:
It's now +10% for WOC Finals. But why another random number? How about (3.14159×2.718282)%? USOF Rankings have a meaning, unlike WRE points... USOF's are an indication of one's (some kind of) average speed. It's not like everyone's speed automatically becomes 5% or 10% higher at a Championship event. To this extent, I'd say I wouldn't mess with the extra coefficient.
Apr 11, 2006 4:46 PM # 
j-man:
Doesn't the Boogie Pepper do this sort of weighting?
Apr 11, 2006 4:58 PM # 
feet:
I think the point would be to weight the speeds when calculating the average, not to multiply the speeds by a factor and then average. The second is nonsense, the first makes Championships more important in the ranking of those that run them. You could then require all ranked orienteers to run at least one championship day.
Apr 11, 2006 5:29 PM # 
Sergey:
I bet that by selectively going to specific low-profile A meets and obliterating the competition I could be at the top of the table. It is statistics, and it depends heavily on the selected population groups and their juxtaposition.

For example, my daughter, who won the middle and classic USA Champs last year (excluding the long and night champs, which she didn't attend) and performed very well in all other starts, is ranked second right now in the F-10 rolling rankings. I believe she never ran against Isabel Bryant, who is ranked first.

Consider my position in the current rolling rankings. I am not a 99-point runner; I am 95 at best right now. My, Leif's, and Andy's positions in the table are due to our results relative to Magnus and my results relative to Mike. There is too little statistical confidence due to the small populations.

I do believe that assigning higher weights to the Championships, which ultimately attract a better cross-representation of athletes, would more closely show "who is who" in NA orienteering. At least, it is worth considering.
Apr 11, 2006 6:07 PM # 
barb:
The statistics would stabilize a lot if we got more people to meets. And this would be particularly effective in the sparse juniors' categories.
Apr 11, 2006 6:13 PM # 
Swampfox:
To Barb's point, another thing that would help greatly (and which wouldn't require more orienteers) would be to have fewer categories. There's no reason for M&F35. Categories above that could be consolidated into 10-year age groups, with good effect, instead of the current 5-year increments.

Boogie on, you fiery chili peppers.
Apr 11, 2006 6:16 PM # 
Tundra/Desert:
Thank you.
Apr 11, 2006 6:17 PM # 
eddie:
Agreed. Can we get someone to volunteer to lead such a bid to the AGM/Board?
Apr 11, 2006 6:38 PM # 
barb:
Getting more people to orienteer is harder to do than making fewer categories, but it's WAY better for the sport.

I think it's silly to reduce the number of categories. They're not harming anyone, and they're motivating for some people. Are we in danger of encouraging too much hubris? You can do your statistics any way you like, anyway. The point is to get out in the woods and orienteer.
Apr 11, 2006 7:03 PM # 
eddie:
Would it be equally silly to increase the number of categories/championships? If so, why?

I think the main reason for decreasing the number of categories is exactly the one presented in this thread...it would make the (any) ranking system more meaningful in the presence of small numbers or otherwise.
Apr 11, 2006 7:12 PM # 
eddie:
As I recall there was a long discussion in another thread regarding physiologically-defined categories vs. demographically-defined categories. I'd be for the former, whether that's good or bad for my motivation. At least it's physically meaningful.
Apr 11, 2006 7:14 PM # 
j-man:
Interesting. This reminds me of the plight of the American automotive concerns. To the extent that the markets have responded favorably to anything Ford and GM have done recently, they have responded to the companies' decisions to shed excess capacity to be more in line with demand. It would likely have been better to increase demand to sop up the capacity, but the thought was that you had to do something, and costs were more controllable than demand. Now, I'm not sure that excess categories impose a direct cost on orienteering, but they do seem, somehow, to lead to a loss in efficiency. Therefore, to the extent that they are something we can control directly and immediately, maybe we should -- until the point when we have enough participants (cars) to justify their re-emergence. On the other hand, arguably, closing down categories affects the people in those categories, but unlike in the car industry, they will still have jobs (er, placings?) or, more correctly, not be chilling in the job bank.
Apr 11, 2006 7:20 PM # 
randy:
While I agree with reducing the number of categories, I do not see how it would make the rankings more robust.

Presumably, M40 and M45 would become M40-M49 (still ranked on red using the same algorithm as now, with the same scores (but fewer patches)), and presumably M35 would move to blue. But since most top M35s run blue anyway, I'm not sure what that would change. Obviously, those now over 40 running blue wouldn't change anything either. On the women's side, other than the fact that I'm not sure whether the top F35s run red, the same logic holds. Sorry for being dense :-)


Can we get someone to volunteer to lead such a bid to the AGM/Board?


Bring it on. I'm always asking for APers to send me proposals when the AP crowd is asserting USOF isn't doing this or that. Board representatives are just that, representatives.

FWIW, I've in the past tried to get M35 moved to blue thru the board process, without success.
Apr 11, 2006 7:22 PM # 
eddie:
I think the main scoring robustness gain is increasing N per bin, although like you say it wouldn't change the M21 balance at this point. It would however eliminate the practically empty and not very meaningful scores of the M35 class.
Apr 11, 2006 7:59 PM # 
randy:
I think the main scoring robustness gain is increasing N per bin

My understanding is that the present bin on red, at least, is all the red runners, regardless of age. Then, they slice it by class after the algorithm runs. So, even if you sign up for M35 at a meet, you are listed as M40 at the end of the season if you are over 40. Therefore, your score is the same whether you are M35, M40, M45, or M40-49. Moving 35 to blue would actually decrease the red bin and lead to less robust red rankings (assuming robustness is directly proportional to N).

Someone who understands the algorithm better may want to correct or affirm my guess on this ...
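In pseudo-code, the structure I'm picturing is roughly the following - made-up data, and the scoring function is only a stand-in for the real iterative gnarliness calculation:

from collections import defaultdict

results = [  # (name, age class, course, time in minutes) - made-up data
    ("Alice", "F-21+", "Red", 62.0),
    ("Bob",   "M40",   "Red", 58.5),
    ("Carol", "F40",   "Red", 66.3),
]

def score_course(course_results):
    """Placeholder scoring; the real algorithm iterates on course gnarliness."""
    best = min(t for _, _, _, t in course_results)
    return {name: round(100 * best / t, 1) for name, _, _, t in course_results}

by_course = defaultdict(list)
for rec in results:
    by_course[rec[2]].append(rec)

scores = {}
for course, recs in by_course.items():
    scores.update(score_course(recs))   # one bin per course, age class ignored

by_class = defaultdict(list)            # class only matters when listing at the end
for name, cls, _, _ in results:
    by_class[cls].append((name, scores[name]))
print(dict(by_class))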

Apr 11, 2006 8:09 PM # 
Sandy:
Randy is correct. All runners on a given course are in the same "bin" when the rankings are determined. The only caveat is when there's a RedX and a RedY (and ditto for green); since these are distinct courses, it's as if they were two separate events. So eliminating age classes does not change the ranking process at all.
Apr 11, 2006 8:13 PM # 
Tundra/Desert:
Randy is correct. There is no gain to be had as far as ranking robustness is concerned, from moving to 10-year age-category increments. There indeed are very few juniors, but on advanced courses, they get ranked against a much larger pool of old folks running the same-color course, so nothing is to be gained from rearranging junior-level advanced categories, either, as far as rankings are concerned.

Where a gain could occur is from moving more junior categories onto advanced courses, but this is such a political hot potato that I would never want to bring it forward to the USOF Board as a proposal.

My "Thank you" to Swampfox referred to the one area in which gains could be had, namely moving the 35–39s onto Blue (M) / Red (W). The increase in the Blue pool would most likely bring more statistical stability than would be lost from a corresponding decrease in the already significantly larger Red pool.
Apr 11, 2006 8:25 PM # 
eddie:
What happens when there is a RedX and RedY again? So for a given ranking race I can see that everyone who ran the RedX course will be grouped together and receive a ranking for that event. But what happens when you combine several events - some of which had split red courses, and some of which didn't - so that sometimes the M45's were compared to the F21's and sometimes they weren't?
Apr 11, 2006 8:29 PM # 
Sergey:
We just need fewer A meets to get a good "cross" population within the bin. I would personally limit it to national and regional championship events, and present the national champs in 2 groups of events, one in spring/summer and another in the fall - something like sprint+classic and relay+middle+long. Combine the interscholastic and intercollegiate champs with one of the regional champs. And get rid of the night champs. This should also save lots of resources. Unfortunately, this scheme would need government support, which is nonexistent in the USA. A meets in the USA are major sources of income for club maps and development.
Apr 11, 2006 8:41 PM # 
Tundra/Desert:
Split courses are just separate events.
Apr 11, 2006 8:48 PM # 
eddie:
But in one event I raced against the F21's and in another event I only raced against the old farts. I'll be much slower than the F21s, but competitive against the old farts. Therefore it would be to my ranking advantage to skip races where I had to run against F21's, wouldn't it? There has to be a post-slicing age class score normalization somewhere for this to work, right? Are you re-normalized within your class for *each event* and then these scores added together?
Apr 11, 2006 8:57 PM # 
eddie:
incidentally, by "old fart" in that example I mean any male over 35, which includes me. :)
Apr 11, 2006 9:18 PM # 
ebuckley:
Actually, you're a lot safer in not ducking the competition. It's very rare for an 80-point runner to have a 100-point run. It's quite common for a 40-point runner to have a 60-point run. If you go to a small race with 6 ranked runners on your course, you could get seriously screwed if a couple of them turn in unusually good performances. In a large and quality field, such results don't affect the course norm nearly as much (if at all).

I did a fair bit of empirical analysis on this a few years ago and ended up concluding that the rankings stabilize a lot faster (and arguably better) if there is a single ranking scale for ALL classes. That is, you get one ranking value, regardless of whether it's on Blue, Red, Green, or Brown (including Orange, Yellow, and White is less valid). This would naturally lower the values on RGB significantly, but it would still preserve the order. The stability comes from the fact that right now, someone who runs on multiple courses is treated as two separate people so there is some information loss in the process. The only caveat is that you really don't want an M-21+ getting ranked on brown. That's easily solved by only ranking runs by people who are running their championship course or above (e.g., PG gets ranking points on Blue, but if Mr. Bone runs Green, it counts as a DNS).

Anyway, I formally made that suggestion when the board solicited input on course and ranking structure a few years back, but it obviously didn't get much support.
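For concreteness, the eligibility rule I have in mind is as simple as this sketch (the class-to-course mapping and course ordering below are assumed for illustration, not official):

# Only rank runs on the runner's championship course or above.
CHAMPIONSHIP_COURSE = {"M-21+": "Blue", "M45+": "Red", "M65+": "Green"}  # assumed mapping
COURSE_LEVEL = {"Blue": 4, "Red": 3, "Green": 2, "Brown": 1}

def counts_for_ranking(age_class, course_run):
    required = CHAMPIONSHIP_COURSE[age_class]
    return COURSE_LEVEL[course_run] >= COURSE_LEVEL[required]

print(counts_for_ranking("M-21+", "Blue"))    # True  -> ranked on the single scale
print(counts_for_ranking("M-21+", "Green"))   # False -> treated like a DNS
print(counts_for_ranking("M65+", "Red"))      # True  -> running up still counts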
Apr 11, 2006 10:00 PM # 
eddie:
The 40-60 and 80-100 thing only holds when comparing people in the same age/sex bin. In other words, "all other things being equal." But when you are alternately mixing and not mixing physiological provinces it most definitely is an advantage to juke your races. Otherwise there would be no call for age/sex classes at all.
Apr 11, 2006 11:05 PM # 
randy:
If you go to a small race with 6 ranked runners on your course, you could get seriously screwed if a couple of them turn in unusually good performances.

But the probability is also higher that they all tank than at a race with a large field. So it seems the range of possible outcomes is wider at a small meet (and technical standards may also be lower than at a high-profile championship, further increasing the randomness). Given that you can drop every other race, it seems going for the races with the largest potential variation is the best way to game the system. (I've personally never been motivated to duck or attend a race due to ranking considerations, except for sprints on occasion, but I think it is interesting to think about.)

As for sprints, I'm not sure it makes sense to put them on an equal footing with longer races. I think the weighting should be scaled by winning time :-) More seriously, I think it would be nice to have a ranking in each discipline, or at least rethink the weightings, though I don't understand the math well enough to make an intelligent suggestion.
Apr 12, 2006 4:22 PM # 
Sergey:
Well, speaking about statistics, the size of the population, and probabilities, do you believe that Eddie is a 107-point runner and Mihai a 104-point runner? :)
Colorado 5-day Day 1-Blue 2005-08-04

Or that Sergei Zhyk is a 120-point runner? :)
WCOC Ansonia-Red 2005-09-17
Apr 12, 2006 4:28 PM # 
Sergey:
Randy rightfully pointed out the way to get to the top of the table :) The current system somewhat penalizes you for competing at big events with a wide competitive field and rewards you for performing well against a small field at non-championship events. By definition, championship events attract a stronger field.
Apr 12, 2006 4:32 PM # 
randy:
do you believe that Eddie is 107 points

Yep. It's all the training tips, punk, and heavy metal music I feed him when we carpool ...
Apr 12, 2006 4:35 PM # 
Sergey:
:) :) :)
Apr 12, 2006 4:37 PM # 
j-man:
Eddie (sorry to pick on him) isn't a 107-point runner (apparently, AP says he is ~95). But he can have 107-point performances. And this general distinction, I think, is not entirely meaningless in the context of this ranking discussion, and isn't meant to be pedantic.
Apr 12, 2006 4:46 PM # 
eddie:
I'm really looking forward to the N.C. road trip!!!! ROCK!

Many of you may not know of mild-mannered Randy's dark side.
Apr 12, 2006 4:54 PM # 
Swampfox:
Not to pick on Eddie either, but to pick instead on your (j-man's) statement, there's no way Eddie is going to have a 107-point run anytime soon--at least not on the Altered Boogie Pepper. I'm talking about a raw score, not normalized so that the top-ranked runner is 100, and without the 5% boost in Team Trials or US Classic Champs scores. Eddie's best score over the past year is 93.94, and I would be willing to bet a large sum of money at even odds that he won't have a raw score of 107 or greater anytime during the remainder of the 2006 season. It would take, ummm, quite a race. The Altered Boogie Pepper ranking scores are in fact better at predicting performance (for runners who have run a meaningful number of scoring races and who are no longer rapidly improving) than some might guess.

As an example, to score 107, Eddie would have needed to run about 52 minutes for the classic day at The Pig against the Wadd's winning time of 65.6. Seem possible?
Apr 12, 2006 5:10 PM # 
Tundra/Desert:
So what was Eddie's raw score for the mentioned Buena Vista day? It couldn't have been particularly low.
Apr 12, 2006 5:16 PM # 
Swampfox:
93.94 is the answer. 107 points would equate to a run by Eddie of about 49 minutes versus a winning time of 57 1/2.

By the way, going back to Randy's (correct) point about how to get big scores, there is--unfortunately--another way to get big scores, which we shall affectionately refer to as the "Fallen Leaf Method".

Apr 12, 2006 5:26 PM # 
eddie:
That 57.5 time at Buena Vista was posted by Swampfox. AP scoring depends on who actually posts their splits. It's easy to sour the juiciness with selective posting.
Apr 12, 2006 6:23 PM # 
Tundra/Desert:
But if you are talking about a raw, un-normalized ABP score, then everyone's score would be below 100, so a comparison with an AP/USOF raw score is not entirely appropriate, because the latter raw scores are adjusted for the final normalization. And the AP normalization can be a few points higher than the ABP number: in the ABP, only the top person is at 100 and the top 3 average to less than 100, while in AP/USOF the top 3 average to 100. Those two effects can each be worth ~1–3 points. So, I would speculate that the actual gap between Eddie's Buena Vista scores for the two systems is closer to ~10 points instead of 107 − 94 = 14.

Those latter ~10 points are, in turn, due to a combination of two factors. One is the mentioned tendency to underreport data; this is not present if everyone is reported, i.e. in the official USOF calculation. But, a bigger effect is that the ABP only accounts for a small number of data points in calculating the course's difficulty value. At Buena Vista, many medium-ranked runners (like myself, Mikkel, etc.) had large problems, of a kind not unlike those typically present in FLL-effect environments (need I mention that house-sized rock at #15?). This large number of slow times pushed the course difficulty, i.e. the GV, up. But, the ABP does not know about the plight of medium-ranked people. It only looks at the top-ranked ones. Those, including Eddie and SF, made it through with few problems. So, Buena Vista is just not a typical day for which to compare the ABP vs. AP/USOF results. With a smaller probability, it could have been a day on which the top-ranked had problems and most others did not, like, I would guess, FPX Classic or NEOC "Short", and for those days ABP would inflate points and AP/USOF should not.
Apr 12, 2006 6:32 PM # 
Swampfox:
Vlad, I'm in general agreement with your analysis. Further, were there some kind of subjective analysis of race "quality" with regard to whether to accept a race for ranking or not, the Buena Vista day would have been a prime candidate for getting the hook.
Apr 12, 2006 6:37 PM # 
jjcote:
A few points:
1) Attackpoint rankings suffer from the huge flaw that they're based only on a selected portion of the results, selected by the people who post their times. Given that people may have motivations about when to post or not post, the data may well be biased, as compared with using the complete set of results.
2) If there are no weirdnesses with the course or the circumstances, under the USOF ranking algorithm there's nothing to be gained or lost by running against stronger or weaker fields. You can trounce the weaker field more soundly, but the points you get for doing so are based on the paltry rankings of your victims. And when you run against a strong field, it's harder to do well, but the rankings of the other competitors boost you up. The way it works out, you can in theory remove any random set of runners from the results, and the effect on the ranking calculation should be insignificant.
3) It is possible to rank everybody in one big bin, and the reason this isn't done is that there are few people who run multiple courses at different events, and thus the courses are "poorly connected". In addition, there may be a bias in terms of which course they run on a given day, e.g. they run a shorter course when they are tired, out of shape, or whatever. Is this an adequate reason to rank the courses separately? Aren't there other poorly connected populations (on the same course, due to geography)? Perhaps. It's a matter worthy of valid debate.
4) White, Yellow, and Orange courses have typically had poorly connected populations on them, and thus all rankings on those courses are dubious, in my opinion.
5) "Weighting" a championship event more heavily is something that has often been suggested, but that is often misunderstood. The way people often propose implementing it is basically as a simple bonus for running at a championship event. That doesn't make sense unless you somehow think that everyone is performing better than normal because they're so motivated for the championship. And that's not weighting, anyway. Weighting would be as if you just pretended that they ran the championship race twice, so that the normal score on the course would have double the influence on their score, be it good or bad.
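With made-up numbers, the difference between the two looks like this:

# Toy scores only, to contrast a flat bonus with true weighting.
regular = [88.0, 92.0]   # scores from ordinary A-meet days
champ   = 96.0           # score from the championship day

# "Simple bonus": inflate the championship score itself, then average.
bonus_avg = (sum(regular) + champ * 1.05) / 3      # 93.6

# Weighting: pretend the championship day was run twice.
weighted_avg = (sum(regular) + champ * 2) / 4      # 93.0

print(round(bonus_avg, 1), round(weighted_avg, 1))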
Apr 12, 2006 7:05 PM # 
urthbuoy:
For the Canucks I propose the following formula for determining our standing:

(100-(winning time-your time)) / time to drink 1 pint of beer (seconds).

bonus 1.1 multiplier if you drink the beer at the start vs. finish.
Apr 12, 2006 7:19 PM # 
eddie:
Now yer talkin'! The QOC Beer Chase. I'll never forget once having to kick as hard as I could against Pamela on the last leg of the beer chase because I knew I didn't stand a chance of downing that last beer faster than her. Figured I needed at least a 20s head start. I think the Beer Chase is included in my AP ranking, as well as for a few other folks. I wonder if it's in the ABP list?
Apr 12, 2006 11:23 PM # 
Hammer:
The best formula for getting a good ranking this year (and any other year) is: 2x7x52 (hours training)
Apr 13, 2006 3:47 PM # 
Sergey:
Right on Hammer! :)
Apr 13, 2006 3:56 PM # 
eddie:
Sticking on Hammer's ass in a race doesn't hurt either, does it Sergey?
Apr 13, 2006 5:06 PM # 
pi:
ouch...
Apr 13, 2006 5:49 PM # 
ebuckley:
While it may be possible to extract something useful, I think this thread has been pedantic for quite some time.

However, to JJ's points: if we can leave the realm of "thought experiments" and actually look at the data, it's pretty easy to confirm that, anomalies like Eddie's 107 notwithstanding, the way to get a high ranking is to run against, and beat, quality competition. (To do this, Hammer's advice is certainly sound.) I've performed this analysis on several years of ranking data and will leave it as an exercise to the reader for those who wish to confirm.

As for cross-pollination, my empirical analysis of the same data sets (2000, 2001, and 2003, if you really care) showed that there was, in fact, plenty of stability across courses. The rankings converged in fewer iterations, and the end results seemed at least as good for most and considerably better in cases where someone ran only a few meets and split courses.
Apr 13, 2006 6:05 PM # 
j-man:
Well, I don't think Eddie's observations about the Beer Chase are pedantic. Anyway, DVOA does overall club rankings, across all courses, using the system. Thus, we compare people who run blue against those who run white. Somehow, it seems to be serviceable, although some of us had reservations about going this route when we were deliberating. Theoretically, there seem to be two semi-discrete populations (the WYO vs. the Br+) between which I'd expect limited cross-pollination. But, I suppose you can get away with fewer bridge people than you might think.
Apr 13, 2006 9:02 PM # 
ebuckley:
You could always have some advanced runners blast through a yellow course every now and then just to set the course norm. The results wouldn't count for them, but it would help keep scores from drifting.
Apr 13, 2006 9:47 PM # 
jjcote:
I can see it for Brown-Green-Red-Blue. But I think there's something distinctly different about WYO, or at least something different about the people who run them. I think it's questionable to mix people who have complete vs. incomplete skill sets. Basically, this leads back to the notion that I don't really think that it makes sense to rank WYO courses at all, but I realize that there are people who want rankings for those courses. Looking at it another way, I think that if an advanced runner does a White and does a Yellow, the relative times they would turn in would differ from the relative times of a kid who had been doing White, and was now old enough to be moving up to Yellow for the first time. WYO are all trivial navigationally for advanced runners, and that would push Orange down, in a sense. But for the people who typically run beginner courses, there's a major navigational difference. Putting WYO into the same bin as everything else does no harm, perhaps, but there's also nothing meaningful to be gained from it.
Apr 14, 2006 4:10 PM # 
Sergey:
Not to pick on you, Eddie, but your number is 703 :)

Sometimes you remind me of a little chicken who drank too much beer and, flapping its undeveloped wings, tries to fly :)

I hope you will not take it too personally. It is just a joke (in a Russian sense, of course :). ROFLMAOWTIME. The point is that, unfortunately, only a couple or three athletes in NA are training properly enough to be considered candidates for the international elite. The rest are just plain amateurs, you and me included, with all our pathetic training processes. I just hope that we may one day build a SYSTEM that would support prospective juniors interested in exploring their own potential.
V ('o') V
Apr 14, 2006 4:38 PM # 
Tundra/Desert:
Moved my message here.
Apr 14, 2006 7:25 PM # 
Sergey:
Vlad, it warrants a separate discussion. Concerned and willing people could put in lots of good suggestions and ideas, and later labor and goodwill into an actual implementation. We have already had some discussions about juniors in the past. Mostly they were destructive :) Now the time has come to be more constructive.

Unfortunately, so far I see only a handful of USOF programs that really work: the map fund, insurance, and maybe the team-organized events - thank you a lot for that, Vlad!
Apr 18, 2006 8:08 PM # 
Sergey:
Coming back to the ranking system and its anomalies, I think it now handles downside outliers quite well by using the rule that only every second race counts after the 4 required. However, given the nature of the sometimes loosely correlated data sets (each race is a separate data set), it may be beneficial to throw away the one best result for each competitor to mitigate the effect of upward outliers. That would probably make the results better represent the averaging that we are all looking for.

Just a thought.
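A rough sketch of what I mean - this is just my reading of the counting rule (the official wording is in the rules), with hypothetical scores:

def races_counted(n_races):
    """4 races required; beyond that, every second additional race counts."""
    return 4 + (n_races - 4) // 2 if n_races >= 4 else 0

def season_score(scores, drop_best=False):
    k = races_counted(len(scores))
    if k == 0:
        return None
    ranked = sorted(scores, reverse=True)
    if drop_best and len(ranked) > k:
        ranked = ranked[1:]                   # throw away the single best run
    return sum(ranked[:k]) / k

scores = [99.0, 94.0, 93.0, 91.0, 88.0, 86.0]
print(season_score(scores))                   # 93.0 (best 5 of 6 count)
print(season_score(scores, drop_best=True))   # 90.4 (the 99-point outlier is discarded)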

This discussion thread is closed.