> pull a team together to do this stuff, and I stand in awe of the
> organizational display Trip and his team put on at Great Meadow.
> Personal aside: I had way too much fun at this event. I go there, I
> do nothing but schmooze, and everyone who meets me is interested in the
> hobby, the kids, our mission and our organization. By far and away,
> it's turned into absolutely the best perk of being NAR president. (grin)
Kudos, sir. TARC is far and away an unqualified success for NAR and model
rocketry generally.
Jerry
I'm sure the weather, the motor rule change, and the fact that 38/102
teams were veterans vs. the all-rookie field in 2003 were all
contributing factors.
38 sounds good, but only 12 of those 102 qualifying teams managed to
score below 40 in the finals. I'd guess the ability to take a
Mulligan in qualifying really skews the comparative results. OTOH,
half the finalists scored within 145, and 46 within 10% of the desired
altitude. It looks like maybe 40% had problems, e.g. low performance
from non-vertical flight, cluster ignition failure, a broken egg,
instability, etc.
The spinning rocket issue is one that interests me. Were more models
spun this year simply because more clusters were flown? Were more
models spun because the veteran teams remembered flying in bad weather
in TARC 2003? Where did they get the idea to spin models? Was it
from individual research and experience, or on-line open discussion
lists, or was it taught by the mentors and coaches? They certainly
did not learn it from running RockSim. More generally, I'd like to read
articles written by student participants about the technical,
operational, and tactical lessons learned.
There was also a quiet discussion on RMR of non-vertical flights
due to a cluster motor failing to ignite. I thought about
doing an R&D project to determine the optimal fin size and fin cant
angle for a TARC-like model with one cluster motor ignition failure.
However, this is a project best left to the TARCians and NARAM B
Division competitors.
The team winning the Honeywell control award was DQ'd. Ouch! I
wonder what they did to earn the award?
While I was looking that up, I noticed that Clayton High from MO had
three teams, all with closely grouped low scores. I'm wondering if
they functioned independently, or more like a motor sports team
fielding three cars in the same race, sharing knowledge. Did they all
use the same batch of underperforming motors? Did the third team to
fly not think that maybe they should remove some ballast or something?
I'm sure that there are at least 100 good stories and lessons learned
coming out of TARC 2004. I hope we get to read about some of them.
Alan
The award write-up allows DQs to win (best laid plans....)
They eliminated a lot of variability by using a nice tower
launcher, and had a fairly sophisticated weight adjustment system
based on a simulator-derived table that took into account
temperature, humidity, etc.
I think the DQ was because they didn't get a motor lit...
--tc
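(For concreteness, here's a minimal sketch of how a simulator-derived
ballast table like the one tc describes might be keyed on field
conditions. Every number below is invented for illustration; a real
team would fill the table from RockSim or similar simulator runs.)

# Hypothetical sketch of a simulator-derived ballast lookup, along the
# lines of the weight-adjustment system described above. All keys and
# values are made up.

# (temperature_F, humidity_pct) -> ballast in grams
BALLAST_TABLE = {
    (50, 30): 12.0, (50, 60): 11.0, (50, 90): 10.0,
    (70, 30): 8.0,  (70, 60): 7.0,  (70, 90): 6.0,
    (90, 30): 4.0,  (90, 60): 3.0,  (90, 90): 2.0,
}

TEMPS = sorted({t for t, _ in BALLAST_TABLE})
HUMIDITIES = sorted({h for _, h in BALLAST_TABLE})

def nearest(grid, value):
    """Snap a measured value to the closest grid point in the table."""
    return min(grid, key=lambda g: abs(g - value))

def ballast_for(temp_f, humidity_pct):
    """Look up ballast (grams) for the current field conditions."""
    key = (nearest(TEMPS, temp_f), nearest(HUMIDITIES, humidity_pct))
    return BALLAST_TABLE[key]

print(ballast_for(72, 55))  # -> 7.0 with these made-up numbers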
Like yeah... Wasn't the team a group of freshmen? :-)
I see what you're probing at (if I got it right): TARC is more
in line with the NCAA basketball tourney, where every year is fairly
wide open.
I've got this database of some NAR/FAI contest flights and did some
comparisons...
1/2ASRA 27% DQ Rate
BELA 28%
CELD 28%
BHD 28%
BSD 28%
1/4ASRD 28%
1/4ASD 32%
FSD 33%
1/4ARG 33%
1/2AFW 33%
DBG 33%
The DQ percentage of this year's TARC was what, about 29%? Looking at
the cumulative distribution of scores, the shape of the curve looks
very similar to FAI B-PD and maybe FAI A-PD where they were going for
maxes. Good stuff...
Regards,
Andy
I don't follow NCAA basketball, but are you suggesting that high
performing teams make it to the NCAA tourney, but few of them display
that same high performance under the pressure of an NCAA tourney, and
that TARCians just crack under the pressure of the finals competition?
I'm not probing at anything. I'm just musing over the posted results
and Mark's comments. I'm sure people who were actually there can
provide a better assessment. Bunny suggested that the competitors
were MUCH better this year because the average qualifying score went
from 99 to 38 in an easier contest. I'm suggesting that the real
improvement in contestant skill may not be as great as Mark and his
chosen metric suggest.
Alan
March Madness is an interesting time of the year. It's a single
elimination tournament and in the end, everybody but one school will
suffer a loss.
It would be too obvious to say that some teams handle the distractions
of being on the road better than others. For all I know, some may
prefer the distractions (?!?)
Based on just two years of watching, I'm happy to claim that the DQ
rate from our area teams is about half the national average, yet I'm
puzzled that, for two years in a row, they have consistently flown to
lower altitudes than expected. This year, their models were flying
straight as a laser, so I'm wondering if it's either the local weather
or motor variability. Oh well... I understand everybody had a good time.
Ahhh... Cut him some slack. After all, he's hobnobbin' with the big
boys now and may have lost touch for the moment... :-)
When I inspect the cumulative performance distribution (CPD) curves
from the two years, I see two distributions within each year: those
in the lead pack, and the other pack of those who were probably like
me in college. Last year, the curves indicate 30% of the teams were
in the hunt, and this year 56%. For comparison, NAR contests
typically have only a fraction of contestants in the lead pack. As I
mentioned in the prior post, the shape of the TARC CPDs resembles that
of the FAI B-PD events. Whatever, I guess the gist is whether the
event coordinators want a tightly contested event or one where the
pack gets spread out.
Just yapping,
Andy
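(A sketch of the "fraction in the hunt" reading Andy describes, with
an invented score list and an assumed cutoff; where a real analysis
would draw the line between the two packs is a judgment call.)

scores = sorted([12, 15, 18, 22, 25, 31, 40, 55, 80, 120, 145, 210])

CUTOFF = 30  # points above the leader still counted as "in the hunt" (assumed)
best = scores[0]
in_hunt = [s for s in scores if s - best <= CUTOFF]
print(f"{len(in_hunt)}/{len(scores)} teams in the hunt "
      f"({len(in_hunt) / len(scores):.0%})")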
I don't think Mark has lost touch. He is doing a great job, along
with Trip, the volunteers and contestants. I'd rather keep him
engaged than let him slack off...
I agree that that is a better metric. However, EVERY team in the
finals is qualified and should be in the competitive pack. The
altitude is lower, and the weather was much nicer, which certainly
helped, but I don't know how much. The fact that 38 teams returned
from the first year also allows for more statistical analysis. Did
the veteran teams do significantly better in their second year, or did
they randomly rise and fall in relative demonstrated skill? The
allowable motors also changed, in a way that should be expected to
show better results.
Agreed. While interesting, I don't see the relevance of the
comparison with FAI B-PD, or even NAR contests.
I'd guess they'd like to see every team make a qualified flight within
15% or so of the desired altitude. But the fact is randomness
dominates the determination of the top money winners. Requiring teams
to, say, make three flights and average the altitudes would help
reduce the randomness, helping the highest skilled teams to actually
finish in the prize money. Likewise, making the event more difficult
would help ensure that the highest skilled teams actually finish
higher in the rankings than lower skilled teams. Of course, the other
idea is to not pay out the prize money based solely on rank, but on
actual performance. Say, $100 - $1/foot of error. Of course you would
still have to make it worthwhile to participate. And yes, like
T-ball, they are all winners.
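(A toy illustration of the two ideas above: averaging several flights
to damp the randomness, and paying on performance at $100 minus $1 per
foot of error rather than on rank. The error distribution here is
made up purely for illustration.)

import random

random.seed(1)

def flight_error(sigma_ft):
    """Altitude error (ft) for one flight; smaller sigma = more skilled team."""
    return random.gauss(0, sigma_ft)

def payout(error_ft):
    """The proposed formula: $100 minus $1 per foot of error, floored at $0."""
    return max(0.0, 100.0 - abs(error_ft))

single = flight_error(40)
averaged = sum(flight_error(40) for _ in range(3)) / 3
print(f"single flight: {single:+.0f} ft off, pays ${payout(single):.0f}")
print(f"3-flight average: {averaged:+.0f} ft off, pays ${payout(averaged):.0f}")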
Exactly. We should talk this up more, but I'd like to hold off until
more reports are written and published.
Alan