Spike recently had a genius solution to the A-Meet sanctioning and meet-quality issue we have been discussing. I talked with him about it this weekend with the idea of possibly setting up an AP Meet Award(s) so that the AP community can reward the meet directors and course setters that they felt did the best job during the past year. I also talked with Kenny about setting up something like this on AP and he seemed to like the idea.
I think that if we award the best organizers and really put some PR behind it (i.e. ONA interviews) all the other event organizers (both great and not as great) will be able to read about why we thought this one particular event really stood out and what they can do to raise the quality of their own future events. It should help to encourage event organizers to not only raise their meet quality, but to consider what the AP crowd wants in an A-Meet.
We could probably get an easy poll set up to start rating all the A-Meets from this past year so that we can get started with a 2006 award already. And possibly in the future Kenny can integrate it right in with the events listing so that people can rate each event as soon as they come home from it, and we can see average ratings for clubs on new events if they have had prior events in the listing that were rated.
As for criteria I would imagine we would want to rate individual things about each event rather than simply picking an event that we thought was the best so as to avoid making it simply a popularity contest. The criteria that I would like to see being rated would be overall meet quality, course setting and map printing, but there must be others as well. The rating scale should be 1-10 with a "no opinion" option as well. People should only rate events that they attended, and we should award the event with the highest average rating. We could even do special awards for highest course setting rating and things like that but we don't want to make it too complicated. What do you guys think?
Yes do it! At the very least, it will get people who have been nagging me to do it off my back :)
I think you need feedback on overall event quality, map quality (which would include printing), and course quality. Three scores averaged together or something. While the prize/scoring is a fine idea, I think it is important for comment-style feedback on the three areas.
It has also been suggested that the original sanctioning bid and sanctioning committee comments be part of any such system. I don't know if people care about that or not (obviously the people suggesting it did). To me, it seems like work for which there will be little return, but I will provide this stuff if there is an easy way for me to, and people want it.
The one downside seems that there may be an east coast bias. But that's everywhere, I guess. Nothing much to do about it.
Hopefully if it is a pure rating scale each event should be rated fairly regardless of where it is being held.
I propose a mix of subjective and objective measures.
I have trouble with the 10 point scale. I'd much rather see the subjective measures very coarsely measured, perhaps with only 4 levels for each rating item:
Meets the Standard
Better than the Standard
For objective criteria, not in any particular order, I'd propose:
1. Online course setters' notes, directions, maps, start times, others(?): X days before the first event day---perhaps Monday for a Friday start? 4 days?
2. Winning time for a 100% ranked runner---based on the average correction of the top three ranked runners in the most elite category for the course: M21 for blue, obviously, F21 for red. I'm not sure about going beyond this, but it might get complicated. (If the first finisher is a 97% runner, then calculate the time of a 100% runner. Do this for the first three ranked runners, then average those three 100% times. I'm sure there might be a better way.)
3. Prompt online results. The standard should be by Monday or Tuesday, with same-day results being better than the standard.
4. I'd like to see something related to the number of mispunches or DNFs, but it would have to be something that relates to meet quality and not something like weather or sprained ankles. How about the number or percentage of competitors with exactly one mispunch?
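The winning-time correction in item 2 above can be sketched in a few lines. This is a hypothetical illustration only: the assumption that a runner ranked at R% takes time proportional to 100/R (so their projected 100% time is time × R/100) is my reading of the proposal, and the function name and numbers are invented for the example.

```python
# Hypothetical sketch of the proposed 100%-runner correction.
# Assumption: a runner ranked at R% is R/100 as fast as a 100% runner,
# so their projected 100% time is actual_time * (R / 100).
# The projections of the top three ranked finishers are averaged.

def projected_winning_time(finishers):
    """finishers: [(time_seconds, ranking_percent), ...] for the
    top three ranked runners on the elite course."""
    projections = [t * (r / 100.0) for t, r in finishers]
    return sum(projections) / len(projections)

# Three runners ranked 97%, 95%, and 94% finishing in 80, 82, 84 min:
top3 = [(80 * 60, 97.0), (82 * 60, 95.0), (84 * 60, 94.0)]
projected = projected_winning_time(top3)  # roughly 78.2 minutes
```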
My point was that some of the criteria need to be fairly obvious things that every meet should do, and meeting them means checking a block, "Yeah, we did that." I propose that this be most of the score, so that any A-Meet can meet it.
And then there should be some smaller part of the criteria that is used to differentiate among the good events for the purpose of picking a winner.
The objective stuff is a pretty good idea. It would be great to encourage organizers to post course details as far in advance as possible to help people choose a course (not everybody always just runs their age category). And prompt results are always great. How about a bonus for uploading splits to AP? :)
I am not sure that we need to get too technical with course winning times. I think that simply having people rate how appropriate and high-quality they thought the course was should be sufficient. There aren't enough runners to really average out the number of MPs or DNFs to make that a fair determinant either. Is the difference between 1 and 2 really because one course was twice as bad, or just a random, uncontrollable occurrence? I think it is too dependent on who shows up at the meet, which is something no course setter can control.
Objective ratings should be a big part of it though because every event should be at least meeting those basic guidelines, and if event directors could see a review of their event they would try to improve those things the next time.
Ok, I think most of this stuff would need to be done right around the time of the event so we can't exactly do it fairly for all events from this year. We can probably start doing it for next year.
What criteria could we fairly use to rate events from the past year? Would people be comfortable going back and rating each individual event they went to over the past year?
Although the winning-time criterion I proposed is objective, and is likely to provide a number suitable for discriminating between events (every event will have a different number), it might be unfair given weather and quick-changing vegetation. What if the correction and averaging were done to give a projected winning time to the nearest second, but the acceptance band were broad? For example, "Meets the standard" could mean the 100% time is within plus or minus 10% of the desired winning time.
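That broad-band check is trivial to state precisely. A minimal sketch, assuming the ±10% tolerance from the paragraph above (the function name is made up for the example):

```python
# Sketch of the suggested plus-or-minus 10% band: the projected 100%
# winning time "meets the standard" if it lands within the tolerance
# of the desired winning time for that course.

def meets_winning_time_standard(projected_s, target_s, tolerance=0.10):
    return abs(projected_s - target_s) <= tolerance * target_s

meets_winning_time_standard(78 * 60, 75 * 60)  # 78 min vs 75 min target: True
meets_winning_time_standard(85 * 60, 75 * 60)  # 85 min vs 75 min target: False
```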
Good criteria should be mostly box-checking: "Yes, we met that." But you also need some discriminators for picking winners (and losers), and these need to be robust enough (or fair enough) that having bad weather isn't catastrophic to the event's ranking. Does anyone have any data for the same course on different days? How much can the winning time change based on weather?
To me, "quality" is mostly subjective (of course, if you follow the thinking of Robert M. Pirsig, he suggests quality is the intersection of the subjective and objective, but we'll leave that discussion for another time :-)).
In any case, the things that matter (to me), course quality, map quality, and map printing, seem subjective. I'd rather the buses be late and the map be good than the other way around, though of course meets with both should score higher. So, I'd argue for a low weight for these objective "checklist" things, unless they provide a direct metric for one of the three things above.
One thing I'd like to see rated is first aid stuff. I'd never gotten injured at an A-Meet until this year, where I sprained my knee.
At one A-Meet, there was someone with a first aid kit and a cooler full of ice packs at the finish line both days. Within a minute of finishing, I was offered an icepack for my knee, I wasn't even asked. That was great.
A few months later, at another A-Meet, I re-injured my knee. When I hobbled to the finish line, I asked about a first aid station or ice or something, and the finish people didn't really know, and I ended up having to head over to the nearest store to buy a 10# bag of ice. That was not so great.
I'm still a newbie to the sport, so details about mapping, courses, DNFs, and winning times are things that I don't think I could rate very well (yet). But, I think I can adequately rate overall organization, things such as first aid, parking, opportunities for social interaction, etc. To me, those items are just as influential when I reflect back and think, "Oh, that was a good meet."
Along similar lines, whether the meet includes an informal non-competitive orienteering opportunity for those not into racing is important to me. Some meets include what are called "recreational courses", typically of white or yellow level. Others include a training area with a handful of controls available to sample the terrain/mapping standard.
Since beginning to attend A meets with young kids I have found this to be a required item. If the meet doesn't include it my wife will say "then we're not going".
The point is the kids are still too young to compete, yet love to go on a map hike as soon as one of us is back from our competitive run. It can't be a pre-assigned start time on an official course due to the difficulty in planning that in advance. When it isn't specifically advertised in the meet announcement and I inquire to the meet director, some people are great and figure out a way to make it happen. Others have told us to sign the kids up on the competitive course and to pay the full competition entry fee.
Once the kids are ready to start competing I will have no objection to that. In the meantime it's a lot of money for a non-competitive map hike, and it runs the risk of missing the start window entirely if the earlier runner can't finish, rehydrate, get dry clothes, collect the kids, and return to the start in time to still be allowed to start. And that's hard to calculate when we often don't know the distances to the start and finish, or even the length of the start window, in advance of registration.
Everyone knows this sport is aging rapidly. Rec courses and training maps are not to my knowledge a required item at A meets, but including them does help attract families with young children. I have also seen rec courses attract newcomers when an A meet is held in an area where there are few local meet opportunities.
While I agree that recreational courses are nice, I certainly wouldn't want that to become a criterion for a good A-meet. The presence of such courses is advertised, so as j-man likes to say, "the market can decide that issue."
As some have already insinuated, having the ratings broken down into categories is much more helpful than an overall rating. I couldn't care less whether there's a nice dinner on Saturday. If the map is poorly printed, I'm disappointed. If a control is misplaced, the event is ruined.
My point is that for some of us it already is a criterion for a good A-meet. For others it's completely orthogonal. If the plan is to ask folks to place votes, this is something I will have to consider when voting.
The "market decides" theory works best when there are sufficient buyers and sellers in the market. When there are rarely if ever two competing A meets on a weekend, there is no "market" and we're only left with the choice of go or stay home.
Hmm, given those choices then I have to disagree with whoever said one can only vote on meets they've actually attended. There should also be a means to register "Why I chose not to go to meet X" if it was something specific to the meet rather than, say, a kid's soccer game or whatever. Otherwise that info doesn't get back to organizers.
fossil wrote: "Rec courses and training maps are not to my knowledge a required item at A meets, but including them does help attract families with young children. I have also seen rec courses attract newcomers when an A meet is held in an area where there are few local meet opportunities."
True enough. We opened the competitive W & Y courses to rec at Batona 500, and had what I would consider A LOT of walk-ups (approx. 60 maps; 95 participants), especially on Saturday. The weather and a public venue contributed a lot, but I can attest to the fact that several groups, who knew nothing about O on Saturday morning, had a great time, and will be back at one of our local events.
Well, the market may not be purely competitive, but it certainly is driving decisions. For example, SLOC has been put out of the A-meet business due to low attendance. I honestly don't know why, as we've received uniformly positive feedback, but if people don't come, then we don't have much choice but to close shop. Maybe it was the lack of rec courses, but I rather doubt that.
We offer rec courses at our annual B meet as well as A meets (which we hold quite infrequently). It provides a way for spouses and other family members to participate -- parents of our many Scout and JROTC competitors, for example -- as well as hikers, geocachers or others who read about it online. (It's a rather remote location, so we get few true walk-ups.)
In terms of return (e.g., new members, repeaters) vs. effort, I'm not sure it's "worth it," but it's really good for community relations. So we consider it a worthwhile endeavor.
Eric, my guess is that the main reason for the low participation at SLOC A-Meets is because St. Louis is so far away from the majority of the Orienteering population.
AP meet reviews should help smaller and more isolated clubs gain recognition for holding quality meets. We should make a point of recognizing all of the clubs who hold high quality meets so that they can be rewarded.
Excellent point, John.
Keep up the good work SLOC!
JDW, I'm glad to hear opening up W/Y for rec walkups was a success at Batona. I was [one of] the one[s] asking for this. Unfortunately other conflicts arose and we were unable to attend.
Eric, we flew out to SLOC-land the year they held the US 2-day Champs. Loved the terrain and the maps and would definitely consider doing it again. I vaguely recall something bad happened one day and a course was thrown out, though I don't think it was due to any fault of the organizers. I think that was back in the days you were still here in Ithaca though.
I vaguely recall something bad happened one day and a course was thrown out, though I don't think it was due to any fault of the organizers.
1991, misplaced control, Day 2 Red and Blue thrown out.
Unfortunately other conflicts arose and we were unable to attend.
Yeah, and you were on my bingo card! (only one I couldn't get)
Back on topic:
Personally, I think that the ratings of meets for this purpose should be entirely subjective. If Sanctioning wanted to make a report to USOF about which meets complied with the rules, then fine, an objective list would be suitable for that purpose. But for an Attackpoint award, I don't think we'd want to recognize adequacy, but rather excellence. The things that make a meet really awesome aren't the fact that it dotted all the I's and crossed all of the T's. It's that the terrain was cool, and the courses were really well thought-out, and they had everything set up so that it was a blast to hang around, etc. And if the winning times didn't meet the cookie-cutter numbers, that's still fine. But I've also been to meets where everything was done exactly by the book, but it was still just kinda boring*. I'd rather see the former get an Attackpoint award than the latter. I'm also not sure whether maybe some other things besides A-meets should be included (e.g. Sprint Finals Weekend).
* Hey, it's still orienteering, so it's never actually boring. It's a rare and special thing when a meet is so bad that I actually have negative feelings about it.
I still support the objective ratings, as in my original conception this would serve as the "stick." I'm not sure that the AP Award, which I also think is a good idea, would be sufficient to encourage across the board improvement in meet quality which is my real priority.
If I read j-man correctly, he's suggesting that every meet be objectively rated. I think that's a good idea, sort of like a report card for meets. And then, amongst those who meet a certain baseline, award an AP Award based on the objective + subjective ratings.
That sounds like a good idea. Who'd be in charge of rating meets? The sanctioning committee?
Randy is going to be annoyed with me (because I've offered too many flip suggestions without contributions of time or treasure to implement them, but...)
I imagined that, for each of the objective criteria considered by sanctioning, meets would be rated ex ante and ex post (before they happen and after) to compare what they intended to do with how they followed through. Ideally, this would be posted on a public forum. I really hoped it could be on the USOF website as they should take responsibility for meet quality. But there are likely implementation and/or political reasons why that would be unpalatable, so if these "scoresheets" had to reside somewhere else, so be it.
I know I would put a lot of stock in these ratings. While I went to a lot of A meets last year, I didn't get to all of them. In the future, I know I will be more inclined to invest my travel $s to travel to meets that follow through with what they say they will and who adhere to the USOF standards.
And, I will also be inclined to attend meets put on by the AP meet award club, but there won't (and shouldn't) be many of those in a year, so I think you need both.
I would love to separate the terrain from the meet organization for the awards. People should make their decisions about which meets to attend based on both terrain and expected event quality, but it's somewhat unfair not to give an award to an excellently-organized meet in fairly crappy terrain simply because that's all the club has.
That is, awards for doing well with what you have, not for having good terrain, as far as possible.
I agree. Although perhaps the objective ratings could address terrain suitability, not desirability. Although, I don't know if sanctioning cares about terrain suitability or not. For the first iteration of this I'd be in favor of keeping it simple and only considering things that sanctioning considers.
The Awards could consider all sorts of intangibles and warm and fuzzy things.
Objective ratings regarding sanctioning requirements and other things should be a part of every AP meet review. That way it will get the same exposure that the rest of the AP review is going to get. I really think that we have an opportunity here to help the sanctioning committee maintain their integrity and encourage organizers to follow through on their responsibilities in holding a sanctioned A-Meet. Events that don't meet the basic sanctioning requirements (and I am not talking about cookie-cutter winning times) should be penalized for it.
Are the sanctioning rules published online somewhere? I can think of certain objective things that we should check, but I also think we should try to help out the sanctioning committee by checking all the things that they require in a sanctioned meet.
My list of objective checks would include (each would only get a check mark, not a rating):
* Basic meet information published as soon as the meet is sanctioned.
* Course length and climb information published before entry deadline.
* Detailed meet information (course setter's notes, directions, start times) published at least 7 days before start of event.
* No courses thrown out due to organizer error.
* Results posted at the event as people are finishing. Each competitor should not have to wait more than an hour to see their result posted.
* Results posted online within 3 days of each day of competition (lose 1 out of 10 points for each day late).
* Splits uploaded to AP? Or would that be unfair to clubs who don't use epunching yet? Still not a bad idea to reward those that do and encourage others to do the same.
Subjective ratings should include:
* Quality of courses. Each day rated separately. Ideally both individual course setters and course consultants could maintain an average score that they could publish on future courses that they design. A separate course-setter award could be given for an individual day of competition.
* Map print quality.
* Map quality (as in the mapper did a good job).
* Overall meet quality. This should include a comment box so that people can use this rating to reflect certain personal standards that they might have for a meet such as child-care, enough water on courses, availability of rec courses, etc. That way if there is enough demand for something that is lacking from an event the organizer will hear about it and future organizers will be able to see what they need to get the top ratings from the majority of people.
I think the rating scale should be 1 to 7 for all the subjective ratings except for the overall rating which should be 1-10 to allow for people to reward meets where the organizers went above and beyond to make the event spectacular in areas that aren't necessarily rated. The objective checks should be 10 points with each check getting all or nothing (or slowly losing points in the case of late results).
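As a sketch, the all-or-nothing objective checks plus the decaying late-results points could be tallied like this. The function name and signature are invented for the illustration; the point values just restate the scheme above:

```python
# Illustrative tally of the objective checks: each pass/fail check is
# worth 10 points all-or-nothing, and the online-results check loses
# one of its 10 points per day past the 3-day deadline.

def objective_score(checks_passed, days_late=0):
    """checks_passed: count of all-or-nothing checks met.
    days_late: days past the 3-day online-results deadline."""
    results_points = max(0, 10 - days_late)  # can't go below zero
    return checks_passed * 10 + results_points

objective_score(5, days_late=0)  # five checks plus on-time results: 60
objective_score(5, days_late=4)  # results four days late: 56
```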
The subjective ratings should be defined as follows:
0-2 Substandard - Does not meet any of your expectations and seriously detracts from your overall enjoyment of the event.
3-5 Marginally Acceptable - Does not meet your expectations, but does not seriously detract from your overall enjoyment of the event.
6-7 Acceptable - Meets your expectations.
8-10 Outstanding - Only available to rate overall meet quality. Reflects organizers' efforts to go above and beyond the standard in areas that aren't necessarily rated, resulting in a special heightened enjoyment of the meet. This would encourage organizers to do little things like having an announcer, putting the finish in a good place to hang out, having spectator controls, or any other little extras that people might like in an event. As the comments come in explaining the ratings, future event organizers will be able to see what they can do to make their events excel in the eyes of the public.
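Those bands map mechanically onto the scale. A minimal sketch, with labels and break points taken straight from the definitions above (the function name is mine):

```python
def rating_band(score):
    """Map a subjective rating (0-10) to its band as defined above."""
    if score <= 2:
        return "Substandard"
    if score <= 5:
        return "Marginally Acceptable"
    if score <= 7:
        return "Acceptable"
    return "Outstanding"  # 8-10: available for overall meet quality only
```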
First the summary:
My gold standard of rating systems would
* be hierarchical and user-friendly (like logging training in Attackpoint), so that people could provide feedback on just a few key points or go into great detail, as their motivation, expertise and attention span might dictate
* include both objective and subjective measures, covering both adequacy in meeting standards and excellence in providing an especially great experience
* include a way for people to rate certain aspects of the event, regardless of whether they attended, because interesting information can be learned from those who stayed away because the event didn't meet their needs, those who would have loved to go but had a conflict, and those who sat at home and enjoyed (or were frustrated by) the spectator experience
Now, some comments:
cedarcreek: some of the criteria need to be fairly obvious things that every meet should do, and meeting them means checking a block
Yes, this way, the rating will have a mechanism to cite problem areas with meet quality. I think this is important, because there are some problems that recur over and over, and the rating would bring these to light in a way that would (I hope and expect) encourage clubs to clean up their act.
Cristina: And then, amongst those who meet a certain baseline, award an AP Award based on the objective + subjective ratings.
I agree with this carrot (excellence) and stick (adequacy) approach, because the best meets are both adequate (in meeting baseline criteria for A meet standard) and excellent (enthralling courses, exceptional social atmosphere, etc.). This will serve both the risk-averse masses, who don't want to waste their money and time attending poor events, and the connoisseur, who delights in meets that exceed mere compliance with standards.
jjcote: If Sanctioning wanted to make a report to USOF about which meets complied with the rules, then fine, an objective list would be suitable for that purpose. But for an Attackpoint award, I don't think we'd want to recognize adequacy, but rather excellence.
I agree in principle that USOF should conduct meet-standards compliance audits, but let's face it: it is more likely to happen, and the resulting data will be better, if this is included in AP rankings. (It appears we already have the enthusiastic support of the Sanctioning Committee Chair.) And frankly, I would consider a purely subjective rating of people's faves (tastes) an indulgence and not very interesting or useful.
feet: I would love to separate the terrain from the meet organization for the awards.
Yes, there have already been extensive discussions on Attackpoint about people's favorite orienteering venues, so it would be pointless to rehash them by making this an overriding factor in the AP rating.
jfredrickson's list of criteria is a good start, but I would separate logistical issues from (or rename) what he calls "overall course quality".
Since you picked up this ball, John, why don't you go ahead and create a design for a "survey form" with some combination of check boxes and radio buttons, send it to Kenny, and put it in action for this year. And if people have suggestion to improve it, they can mention them for a revised version for next year. And we can have the month of December or whatever to vote for this year. (I don't know who's supposed to be making the call on whether meets met the objective criteria.)
Woops, thanks for pointing that out Eric. It was supposed to read "overall meet quality". I edited my post to fix that.
I am not sure that it will be fair to events that occurred earlier in the year to have people rate them now. Perhaps we should start this thing in 2007 and give Kenny more time to integrate it into AP. Then people can rate the event as soon as they get home from it.
I like idea of including a spectator rating. It should probably be separate from the participant rating though.
I actually like the idea of including terrain quality in the ratings. Terrain selection is an important skill that affects the enjoyability of the event a lot. I've noticed that events on new maps are often less well attended than events on old maps, because people know what to expect for an old map, and don't have confidence that new terrain will be good. Although some clubs seem surrounded by endless good terrain, and others by orienteering "deserts", I think that any club can make a big difference by good terrain selection (and by good terrain use). I can't think of any North American club that doesn't have some good terrain, and in most cases some excellent terrain. My 2c.
good discussion... there's plenty of time to integrate something for 2006 events, I'm just giving this thread some more time to continue generating ideas.
I think the idea of meet report cards is good, but it needs to happen immediately after each meet. For the purposes of an award, can't it be as simple as voting for your favorite meet? Kind of like MVP balloting in pro sports. My pick for #1 would be the North Americans put on by GHO with Hammer at the helm.
My honorable mention picks would be Barebones by Adrian Zissos, the US Team Fundraiser in Florida by Vlad, and the BC Champs in Whistler by Magnus. Barebones and the Florida meet show that you don't need to have a huge production to have fun. Less work means more meets.
That last paragraph of Jeff's with all the attribution makes me wonder, not terribly seriously, whether we ought to have Best Meet and Best Meet Director awards, a la Oscars, the latter being useful for recognising excellent meet organisation in less excellent terrain.
Meet of the year: undoubtedly Ballinafad Creek in ACT, controlled by Shep http://www.attackpoint.org/discussionthread.jsp/me...
for sheer entertainment value
I don't think there's a problem with rating meets now, after they're over. In fact, it might make sense to always do the yearly voting at the end of the year (that is what they do with things like the Academy Awards).
But you can't just vote for your favorite. That would bias too strongly towards large meets. A terrific meet that had only a few attendees shouldn't suffer from getting only a few votes, if those votes are uniformly of praise.
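The size-bias point is easy to illustrate: under raw vote counts the big meet wins on turnout alone, while under average ratings the small, uniformly praised meet comes out ahead. All the numbers below are invented for the example:

```python
# Invented example: a 3-person meet rated uniformly high vs. a
# 7-person meet with mixed ratings. Raw counts favor the big meet;
# averages favor the better-liked small one.

small_meet = [9, 10, 9]
big_meet = [7, 8, 6, 7, 9, 5, 8]

def average_rating(ratings):
    return sum(ratings) / len(ratings)

len(big_meet) > len(small_meet)                        # more "votes"
average_rating(small_meet) > average_rating(big_meet)  # but lower average
```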
The people who vote on the Academy Awards spend hours reviewing movies and considering each one closely. Somehow I can't imagine AP members putting in that kind of time or consideration. Course quality and map quality can be reviewed at any time, but can you really remember whether an event was a 6 or a 7 in the overall category nine months after it happened? Results are bound to be skewed toward more recent events.
But it would be fun to get something started already for 2006. It would certainly wake people up to the fact that it will be ongoing in 2007 and we may see event organizers considering it already next Spring.
Results are bound to be skewed toward more recent events.
Toward? Or perhaps against. Maybe the memory of little problems fades with time.
On the other hand, after all of the events are over, you can rate them relative to one another in your mind. But you can't do that if some of them haven't happened yet.
Wouldn't it be better if events were rated on their own, rather than relative to one another? I wouldn't trust my memory to know exactly how much better one event was than another. I sometimes find myself changing my old movie ratings on Netflix when comparing them to a movie I have just recently seen. Then when I go back and watch one of the older ones I relive the experience that influenced me to give it the original rating.
I can only assume that a rating resulting from the original experience of an event is going to be more accurate than a rating resulting from a mental comparison of two separate events with a large time difference between the events.
John has a good point, in that movies can be reviewed all at once at the end of the year. I'm for reviewing meets as I attend them. Then, at the end of the year, if there's to be a big, overall, "what was your favorite?" vote I can look back over what I thought at the time. Okay, I probably wouldn't spend more than a few minutes doing this, but I would spend even less time rating an event 8 months after it happened.
But we are rating them relative to one another if we're picking one that's best.
Nevertheless, I have no objection to anything you've suggested, I'm just tossing out ideas. Pick whatever you think makes sense, and go with it. If that means that we can't do an award for 2006, then so be it.
JJ: Yeah, and you were on my bingo card! (only one I couldn't get)
Ah, I read that story in the other thread and wondered if it might have been me. But in my defense, as soon as Mary posted the note asking everyone who was going to be sure their name was in the AP list, I went in and pulled mine off. So she must have captured names several days in advance of the meet.
Also, you may recall you gave me the same treatment as sit2much at the CNYO meet earlier this fall. (Hollering my AP name out at the start line.) :-)
I think Eric Bone has summed up most of my thoughts, but I do think that the difference between great meets and OK ones is the non-orienteering items. Map quality and course quality should be rewarded, but food quality (the dinner) and a place to socialize are the icing on the cake, and are the things that get many of the "masses" to O meets. Many of them are not on Attackpoint of course, but if you want to have a well attended great meet, you have to provide a good time for everyone.
It is going to be difficult to get a reliable end-of-year ranking by rating directly after meets. Memories fade quickly. In addition, attendees with many years of experience have a much larger perspective than someone who has attended meets for a year, or been to the same A meet for 10 years. Not sure how to avoid that, but perhaps meets should be rated as they happen, then those results posted at the end of the year and everything reviewed again.
I know for me, the events that I remember are the ones I ran especially well or especially poorly. The rest just sort of fade away and I don't remember much about them. So rating as they happen makes more sense to me. I also think you want to rate in specific categories. I can think of events where I had a great course in terrain I didn't like, or a so-so course in terrain I loved, a great course on a so-so map, a boring course on an excellent map, etc. And if forced to just give an overall vote it wouldn't be clear what aspect was getting the most weight. The same with the technical vs. social aspect of the events. And similarly, if the ratings were going to be helpful to me I'd want to know why an event or club was getting high marks.
I'd like to see an opportunity to rate things like whether the courses were designed as advertised (encompassing things like whether it was really a middle-distance or long course design, whether the winning time was appropriate, etc.) or whether the map quality was what you expected (if promised offset, did you get it; if told inkjet, was it acceptable; etc.). Same for some of the non-technical things. These could get rated on a 1-5 scale or something like that. An option like "isn't important to me" could also be provided.
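A minimal sketch of how per-category averages might be computed under that scheme, treating "isn't important to me" as a skipped response (represented here as None). The category names and sample responses are invented for illustration; nothing here reflects an actual AttackPoint implementation.

```python
def category_average(ratings):
    """Average the 1-5 ratings, ignoring None ('isn't important to me')."""
    scores = [r for r in ratings if r is not None]
    return sum(scores) / len(scores) if scores else None

# Hypothetical feedback for one event, keyed by rating category.
feedback = {
    "course design as advertised": [5, 4, None, 3],
    "map quality as promised": [4, 4, 5, None],
}

averages = {cat: category_average(rs) for cat, rs in feedback.items()}
```

Skipped responses are excluded from both the numerator and the denominator, so an opt-out never drags a category's average down the way a default score of zero would.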
Do people feel all or part of it should be anonymous or not?
For my part, I generally believe in putting names with the comments/ratings.
No need for anonymity; if you really slam an event, just create a special AttackPoint account called "visitingAussie" or such. Just kidding.
I think names with comments are a good idea, though I wonder if it would mean that people would censor themselves more for events where they know the organizers than for events where they don't. Maybe show a graph of the APer's distance from the event versus the event rating.
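The distance-versus-rating idea could be checked numerically as well as graphically, e.g. with a simple Pearson correlation between a rater's distance from the event and the rating they gave. This is only a sketch with invented data points; a strongly negative coefficient would suggest locals rate their home events more generously.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: each pair is one rater.
distances = [5, 40, 120, 300, 600]   # miles from the event
ratings   = [5, 5, 4, 3, 3]          # that rater's overall 1-5 rating

r = pearson(distances, ratings)
```

With these made-up numbers r comes out strongly negative, which is the pattern the self-censoring worry predicts.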
Comparing ratings by age, experience and course would be interesting, which is one argument in favour of attaching names, or at least vitals, to comments.
It seems like a bunch of different concepts are being discussed simultaneously here ...
1) It seems that USOF should be doing a post-evaluation as part of its sanctioning process. Each meet could get a report card based on how well it satisfied the sanctioning rules. This should be done by USOF (not AP) and needs to be objective.
2) It seems like AP might lend itself to an evaluation that is more like a "customer feedback" approach: a mix of objective ratings and some subjective issues. I envision a web form on AP, where those who have been to the event can tick a rating for a bunch of different categories and also give general comments. This feedback would come in as soon as the meet was done.
3) Finally, the concept of a year-end "best event". This is definitely subjective. Events in the top of the rankings in #2 would be obvious candidates for the "best of year" award. I think a good approach would be to i) solicit nominations (with explanation as to why meet qualifies as best of year), ii) a discussion period where others can provide input on how good/bad each meet was, iii) a voting period where all AP users can vote on best event (whether they were there or not). In the end the voting should be based on evidence (i.e., the nomination and discussion), not just "my favorite event this year was ...".
Excellent summary Brian.
I would just add that I would like #1 to be easily publicly accessible so that it could serve as a resource to the general community. This can complement #2 and #3 but I think they should be distinct schemes with a different priority.
Brian makes some good points, but I wonder, with limited manpower, if it's better to keep it simple for now and mush all three into one semi-subjective AP rating. Anyway, I'm not sure that 1 (the basics) and 2 (the customer experience) aren't valuable parts of 3 (the loveliest). (And of course I'd be surprised if AP didn't amply supply a profusion of event-quality discussion threads... occasionally meandering into "best animal encounter", "best use of red wine", "best O suit duct taping", and "best printer alchemy".)
If Brian's #1 is objective, then anyone with the time, resources, and interest could do it; it does not have to be done by USOF sanctioning. Everything needed to do it is public except the sanctioning apps, which I said I would make public if people thought that was valuable.
I'm just tapped, I can't do it, sorry.
And I can put links to it/them on the USOF website, if someone comes up with a survey/poll/whatever.
"How much can the winning time change based on weather?"
Depends on the terrain and how much that weather can change. Hot weather at times of year when it is normally cool will make people slower. Rain in rocky terrain or terrain with deadfall can make conditions much slower. Snow, fog, etc. will also slow people down.
The min/km at West Point in 2005 were much higher on the first day when visibility was limited in places due to thick fog.
Times at this year's NAOC were about 5-8% faster than we expected in the elite classes, as the woods lost a lot of leaves in the few days leading up to the race and the wonderful weather dried up most of the forest except the big marshes. If we had had the weather that occurred on the Wed. before or the Wed. after the weekend (rain and snow, respectively), the times might have been (much?) slower than the recommended winning times... even more so in the "supermasters" categories.
Times were about 10% slower than expected for the long at US Team Trials this year due to unusually warm weather several months prior to the event. It got all the undergrowth started early, and by meet time it had slowed the forest considerably. Fortunately, the courses as designed were a tad short, so the change had the happy consequence of moving winning times right to where they should have been.
At a B-meet SLOC put on in March of 2000, we got over a foot of snow the day before the event. Winning times were much slower than expected (about 20%).
An alternative approach to consider is to create an AP "Best Practices" document that could be used by organizers before events and would thus (hopefully) have a more immediate impact on event quality. It would be a way of reinforcing which elements of an event matter most to the AP orienteers.
btw: I consider that the number one criterion of a successful event is that the organizers enjoyed themselves and are prepared to do loads more volunteer work to organize future events.
"btw: I consider that the number one criterion of a successful event is that the organizers enjoyed themselves and are prepared to do loads more volunteer work to organize future events."
I'll second that. Are you running a risk of not only rewarding, but also discouraging some organisers?
I'll third...a good point. Some of the most fun events are those most "barebones" anyway. So, would an award recognizing the best or most fun event (by whatever criteria) be a motivation and reward and encouragement, or annoyance and discouragement, to organizers?
I skimmed the thread, so I'm not sure if something like this was proposed, but I once mentioned that A-events should have a sanctioning fee of, say, $500 more than they do today, with the concept that the money will be returned if the event is good. A well-run event would get all $500 back. An event that gets a really low 'score' (e.g. if a course gets thrown out due to organizer fault), or whatever, gets a big 'fine' taken out of its $500... Screw up enough and USOF keeps the $500 in the Event Quality Improvement Fund....
It might be more saleable to frame that as a carrot and not a stick. Increase the sanctioning fee, then _separately_ propose meet awards, financial, as an unprecedented reward for those that hold good meets.
It seems the increased sanctioning fee would be passed onto the participants (as that can be budgeted ahead of time), yet the refund wouldn't. So clubs would be pocketing a difference for holding a good meet. Moreover, some clubs would be put off from applying for a meet at all with the larger fee (some clubs want to hold A meets now, but are put off from applying because of perceived "bureaucracy").
It seems like a better approach is to simply give clubs a bonus for having a good meet. That keeps the pocket-the-difference incentive without the above problems. USOF is flush with cash (in part due to dues increases), and using some of that cash to incent meet quality seems like a good benefit for USOF members.
Of course, judging the meet and deciding the bonus is a different kettle of fish altogether.
I like Randy's suggestion of giving clubs a bonus for putting on a good meet, though I guess the sticking point would be how to judge that.
There is no Event Quality Improvement Fund. :-) If there was, there'd be a program in effect to make sure meet directors/course setters are up to par on the Rules which have been developed over the years (and changed as necessary).
Event quality should be tops if meet directors read and follow the Rules of Competition instead of making things up as they go without regard to how successful clubs have held meets in the past. And no one should meet direct/course set for an A meet who has not been a participant at many A meets, to see what makes a meet excellent!
Here's just one example that's not always followed: "Rule 10.1 The [A-meet] Invitation shall be published at least three months before the event" (nowadays, either in ONA or online, on the club's site; preferably both as not everyone is online!). Most of the time that does happen, but not always...