The numbers game:
Apparently, Tom and I have independently started quantitative analyses of DVOA events. My survey looks at the previous five years, 2004-2008 (through Brandywine).
My objective was first to look at the mean and median level of involvement of people as course setters, to see whether there is a way to substantiate my thesis that we really aren't doing much to develop new course setters. (I think some sort of panel study, or more loosely, identifying the first occurrence of a person as a meet director and then tracing their history over time, would help. OK, that's what I will do next. For now, just summary statistics.)
One modeling assumption is that, during this period, feedback was non-existent or severely constrained. One proxy for feedback would be to cross-reference posts on the eboard with meets. Unfortunately, I don't have access to historical eboard posts. Also, there is no way to assess feedback that occurred elsewhere. (I think Randy was turned away from course setting after Lehigh due to alleged negative feedback, but I am not sure what the feedback was, how it was delivered, or whether he has in fact stopped setting courses.) And, of course, since there is no formal feedback mechanism, I can't do anything more rigorous. Anyway, based on the ferocity with which criticism is stamped out on the eboard, and anecdotal observations of interactions at events, I do think the "see no evil, hear no evil" mantra really guides behavior.
The goal would be to examine the counterfactual: what things would look like if there were feedback. Many club leaders and volunteers have said outright, and others seem to believe tacitly, that feedback would destroy the volunteer culture. If I were to extrapolate from the Randy experience, they may be right. But Randy is not ordinary, I think, and other data suggest that what little feedback exists is largely fawning and anodyne in nature.
Anyway, the dataset includes all meets listed in the DVOA results: ranked meets as well as minor meets like score-Os, night-Os, and some beginner meets. There are also things like the Stumble, dual events, and Mid-Atlantic Champs. It is easy enough to filter those out; for now they may as well stay in, because they will only make involvement seem more broad-based than it is in reality. Also, I was liberal in giving credit to a director and co-director, and to a designer and co-designer, whenever there was an indication of multiple people involved. For simplicity, and because common sense suggests a minimal marginal contribution when more than two people share one of these tasks, I capped the credit at two people per role. Still, I think the data is biased towards more involvement rather than less. The data includes A meets. Typically, there would be a different course setter each day of a two-day A meet, but a single meet director. However, I gave credits for each day, so an A meet director would show up twice.
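The crediting rules above (count everyone listed in a role, but cap credit at two people per role per event, and count each day of a multi-day meet as a separate event) can be sketched as follows. The record layout, field names, and person ids here are illustrative assumptions, not the actual DVOA data.

```python
from collections import Counter

# Hypothetical event records: one record per event-day, listing the people
# credited in each role. Ids like "p1" are placeholders, not real DVOA data.
events = [
    {"directors": ["p1"], "designers": ["p2", "p3"]},
    {"directors": ["p1", "p4"], "designers": ["p2"]},
    {"directors": ["p5"], "designers": ["p5", "p6", "p7"]},  # three names listed
]

def tally(events, role, cap=2):
    """Count credits per person for a role, crediting at most `cap`
    people per event (mirroring the cap-at-two rule described above)."""
    counts = Counter()
    for ev in events:
        for person in ev[role][:cap]:  # drop names beyond the cap
            counts[person] += 1
    return counts

designer_credits = tally(events, "designers")
```

With these toy records, "p2" earns two designer credits, and "p7" (the third name on one event) is dropped by the cap, so only four unique designers are counted.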
Over the past 5 years, there were 217 discrete events. We have had a total of 96 unique people involved in putting them on, broken down as follows (you can read the "Director" totals as the number of meets per year).
(2005 really did have that many events. We had a lot of beginner events, and even a Trail-O of all things.)
In this first pass, my objective is to look at the distribution of volunteers. It will surprise no one that the distribution is heavily skewed. Since I am primarily interested in course designers, from here on we will focus on that subset of the data.
There were 73 unique people over the past five years who designed or co-designed courses. One thing that jumps out right away (besides the skewness) is that there is a cohort who only ever acted as co-designers, never setting a meet on their own.
For now, I will just present the data graphically; later we'll try to look at things at a granular and intertemporal level.
In all cases, the individuals are coded with a unique numeric identifier.