
Posted
9 minutes ago, BashiChuni said:

I’ll take that feedback with a HUGE GRAIN OF SALT

Said feedback probably reviewed by MG Wills, the sales pitch guy for this disaster. Love the guy, he's one of the few GOs willing to engage with peons like us, but he's a company man in this race to the bottom. I'll go out on a limb and say China doesn't shortchange its training for expediency. Standing by for spears.

Posted (edited)
7 hours ago, dream big said:

Said feedback probably reviewed by MG Wills, the sales pitch guy for this disaster. Love the guy, he's one of the few GOs willing to engage with peons like us, but he's a company man in this race to the bottom. I'll go out on a limb and say China doesn't shortchange its training for expediency. Standing by for spears.

Feedback was first-person, direct to me, a T-6 IP who trained them, so I could make changes for subsequent classes.

Still tiny sample sizes, but so far they are certainly no worse than what we put through before.

Edited by yzl337
Posted

I did hear a rumor that this batch of 2.5 will be the last going to fighters (since the length of training is the same as normal UPT, it doesn't really buy much), but they may continue this type of program for crew-bound aircraft. My question there is: how do they know who's going where, outside of the ARC guys?

I say leverage the technology and teaching-technique improvements, but do not cut the overall syllabus/length of training short. Just update how you train them (and no, I don't mean cut a shitload of flying for sims).

Posted
14 hours ago, LookieRookie said:

Vance or Randolph syllabus?

 

Randolph, from what I know, is AMF, aka ITD/sim only.

It is the step towards having no T-1s.

Fighter/bomber-bound will still get T-38, and eventually T-7, time.

KEND

Posted
2 hours ago, brabus said:

I did hear a rumor that this batch of 2.5 will be the last going to fighters (since the length of training is the same as normal UPT, it doesn't really buy much), but they may continue this type of program for crew-bound aircraft. My question there is: how do they know who's going where, outside of the ARC guys?

I say leverage the technology and teaching-technique improvements, but do not cut the overall syllabus/length of training short. Just update how you train them (and no, I don't mean cut a shitload of flying for sims).

At least for fighters, the syllabus is longer: 96 hours in the T-6 versus the ~60 they got in the legacy syllabus, plus a full T-38 syllabus.

Posted
On 7/22/2021 at 8:32 AM, Shakermaker said:

Most recent iteration of the syllabus has 42.8 hrs programmed in the jet.

Not at Randolph with the current studs.

Posted
On 7/24/2021 at 2:18 AM, yzl337 said:

 

At least for fighters, the syllabus is longer: 96 hours in the T-6 versus the ~60 they got in the legacy syllabus, plus a full T-38 syllabus.

The "old" SUPT syllabus (as in, prior to UPT Next) had ~80-90 T-6 hours regardless of track, IIRC. So it sounds like no real change there. Or am I misremembering?

Posted
The "old" SUPT syllabus (as in, prior to UPT Next) had ~80-90 T-6 hours regardless of track, IIRC. So it sounds like no real change there. Or am I misremembering?

True story. Mid-80s in the T-6 was normal 10 years ago.
Posted (edited)

Interesting read on a scientific approach to determining success at UPT using machine learning methods. 

BLUF: They were able to predict with 94% accuracy a UPT candidate's success in UPT (using 2010-2018 data). The following were the factors deemed most significant to success in SUPT.

Predicting success in United States Air Force pilot training using machine learning techniques

 

[Fig. 7 from the article: factors most significant to SUPT success]

Edited by Av8
Added full article link
Posted



Interesting read on a scientific approach to determining success at UPT using machine learning methods.
BLUF: They were able to predict with 94% accuracy a UPT candidate's success in UPT (using 2010-2018 data). The following were the factors deemed most significant to success in SUPT.
https://www.sciencedirect.com/science/article/pii/S0038012121001130?dgcid=coauthor

[Fig. 7 from the article: factors most significant to SUPT success]


Can't pull down the full article, but I'd bet "predict" is not the right word in the way we think of it, since the prediction pool is based on people already selected to attend UPT, and those predictive characteristics may not be correct if you were looking to decide who to select for UPT in the first place.

In other words, the machine learning can only tell you whether the UPT selection criteria work, but not whether they are the best selection criteria, since the variables were limited before the analysis.
Posted (edited)

 

Quote

Via the utilization of the best fitting model, our results show that current USAF pre-application testing is insufficient for the prediction of SUPT success. Additional factors are shown to provide more information in this regard. Most notably, an applicant's academic major, commissioning source, and number of AFOQT retests are the top three most important factors. These results underscore the importance of the whole-person selection concept for SUPT candidate selection.

This was a quote that summed up the conclusions obtained. Basically, by relying on AFOQT/PCSM/etc., the USAF/AFR/ANG currently has far fewer than 94% of selectees making it through SUPT. If we can identify beforehand those who will be successful via these techniques, then we can eliminate the money and time wasted sending someone through training who washes out.

@jazzdude The data included those selected for UPT from 2010-2018, and the authors used a machine learning model to attempt to determine who would succeed and who would not. The model was 94% accurate at this. There is some other interesting reading on whether or not the SUPT process itself is working correctly, which is supplemental to the main point of the model: to determine the most efficient ways for the Air Force to choose candidates to complete SUPT.

@brabus Degree type was in fact the most significant variable. PCSM etc. were not the least (there were many more variables tested), but the PCSM and AFOQT scores were less accurate predictors of who would/would not complete SUPT. 

Graph Below: This is a prospective candidate's "score" based on the model presented in the article and whether or not they completed SUPT.

[Fig. 8 from the article: candidate model scores vs. SUPT completion]

Edited by Av8
Added Graph
Posted
1 hour ago, brabus said:

Am I taking crazy pills, or does this say degree type is the most impactful variable and PCSM, etc. is the least?

Degree type vs. major is further down... so people with a master's likely do better than people with a bachelor's.

Posted
3 hours ago, Av8 said:

 

This was a quote that summed up the conclusions obtained. Basically, by relying on AFOQT/PCSM/etc., the USAF/AFR/ANG currently has far fewer than 94% of selectees making it through SUPT. If we can identify beforehand those who will be successful via these techniques, then we can eliminate the money and time wasted sending someone through training who washes out.

@jazzdude The data included those selected for UPT from 2010-2018, and the authors used a machine learning model to attempt to determine who would succeed and who would not. The model was 94% accurate at this. There is some other interesting reading on whether or not the SUPT process itself is working correctly, which is supplemental to the main point of the model: to determine the most efficient ways for the Air Force to choose candidates to complete SUPT.

@brabus Degree type was in fact the most significant variable. PCSM etc. were not the least (there were many more variables tested), but the PCSM and AFOQT scores were less accurate predictors of who would/would not complete SUPT. 

Graph Below: This is a prospective candidate's "score" based on the model presented in the article and whether or not they completed SUPT.

[Fig. 8 from the article: candidate model scores vs. SUPT completion]

Yeah but I think his point was survivorship bias. You are only deciding the best criteria for selecting students based on the criteria currently used to select students.

Posted
Yeah but I think his point was survivorship bias. You are only deciding the best criteria for selecting students based on the criteria currently used to select students.


Exactly.

One of the dangers of a backwards-looking prediction is that the data is biased because selection criteria were already applied. Because there are selection criteria, other potentially causal factors or better predictor variables may have been excluded. So the best this methodology can do is say the selection criteria are adequate, but not enough information exists to say they are the best selection criteria.

It's like saying back in the 80s that women can't be pilots because all the successful pilots in the past were men. Technically true and backed by data, but the data was biased because of the selection criteria used in the past.

The other issue is this study used graduation from UPT as the success criteria, which may or may not be the real measure of success we want. FTU graduation might be a better success criteria (as it also evaluates if the right assignment to airframe was given)
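That range-restriction effect is easy to see with a toy simulation. None of the numbers below come from the study; this is just a sketch of the statistical point, that a variable strongly related to graduation in the full applicant pool looks much weaker once you only look at people who made it past a selection cut:

```python
import math
import random

random.seed(1)

# Toy numbers only -- nothing here comes from the actual study. Each
# applicant gets an "aptitude" draw; graduation probability rises with it.
def simulate(n=20000):
    pop = []
    for _ in range(n):
        apt = random.gauss(0, 1)
        p_grad = 1 / (1 + math.exp(-(1.5 * apt + 1.0)))
        grad = 1 if random.random() < p_grad else 0
        pop.append((apt, grad))
    return pop

# Point-biserial correlation between aptitude and graduation (0/1).
def corr(pairs):
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(g for _, g in pairs) / n
    cov = sum((a - mx) * (g - my) for a, g in pairs) / n
    vx = sum((a - mx) ** 2 for a, _ in pairs) / n
    vy = sum((g - my) ** 2 for _, g in pairs) / n
    return cov / math.sqrt(vx * vy)

pop = simulate()
# "Selection board": only applicants above an aptitude cut are admitted,
# so the dataset available for analysis is range-restricted.
selected = [(a, g) for a, g in pop if a > 0.5]

print(round(corr(pop), 2))       # strongly predictive in the full pool
print(round(corr(selected), 2))  # noticeably weaker among selectees
```

The same mechanism means a variable the board already screens hard on can show up as "unimportant" in a model trained only on selectees.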

Posted (edited)

You are correct that the model uses previous data, but that is not survivorship bias. Survivorship bias is when you throw out the unsuccessful candidates and only use those who were successful; this model actually emphasizes the unsuccessful events to draw insight from them. The fact about data showing women not completing training in the 80s is actually acknowledged within the article, specifically with regard to race and gender parameters, but the results still remain valid.

If we could have 94% of the studs passing UPT, compared to where we are now, that's hundreds of extra pilots through the pipeline each year that we aren't currently getting. These added efficiencies come with essentially no change, except for the selection process.

Edited by Av8
Survivorship bias
Posted

I also just fixed the link in the original post so the full text should be viewable now 👍 sorry about that

Posted

I recently washed out of UPT. Of course I don't have access to big-picture data, but of the 28 of us who started, 5 failed, 1 SIEd and 1 rolled back and then went to another base for reasons unrelated to flying. I know several studs from the class before ours also washed, though I don't have exact numbers. I have a lot of prior (civ and mil) flying experience, I worked extremely hard, and I still failed. Does that disprove assertions that UPT has degraded or become easy? Certainly not. Still, of the 11 in my own flight for most of the way through (1 rolled back but eventually got wings), the majority went to at least one 89 ride or ground eval. My classmates, to my knowledge, were all dedicated and took the program seriously, and I'm happy that most of them graduated.

Posted
8 hours ago, Av8 said:

You are correct that the model uses previous data, but that is not survivorship bias. Survivorship bias is when you throw out the unsuccessful candidates and only use those who were successful; this model actually emphasizes the unsuccessful events to draw insight from them. The fact about data showing women not completing training in the 80s is actually acknowledged within the article, specifically with regard to race and gender parameters, but the results still remain valid.

If we could have 94% of the studs passing UPT, compared to where we are now, that's hundreds of extra pilots through the pipeline each year that we aren't currently getting. These added efficiencies come with essentially no change, except for the selection process.

It's not about throwing out data, it's about not knowing the data even existed. The survivors are the ones selected for UPT. The tons of unknown data are all the ones who never went to UPT, i.e., the majority of the population. Granted, it's hard to find out how someone who never went to UPT would do against those who did.

Posted
6 hours ago, Splash95 said:

I recently washed out of UPT. Of course I don't have access to big-picture data, but of the 28 of us who started, 5 failed, 1 SIEd and 1 rolled back and then went to another base for reasons unrelated to flying. I know several studs from the class before ours also washed, though I don't have exact numbers. I have a lot of prior (civ and mil) flying experience, I worked extremely hard, and I still failed. Does that disprove assertions that UPT has degraded or become easy? Certainly not. Still, of the 11 in my own flight for most of the way through (1 rolled back but eventually got wings), the majority went to at least one 89 ride or ground eval. My classmates, to my knowledge, were all dedicated and took the program seriously, and I'm happy that most of them graduated.

Sorry to hear that, but good on you for not going 100% into despair mode.  

Success at UPT should probably be measured by graduation numbers meeting goals and by producing acceptably skilled new pilots. The AF is fairly good at tracking the first metric, but the second is measured much more questionably.

I know I took the full syllabus plus a few rides, back when it was like 90 hrs in T-6s. New iterations don't give students that much time to figure it out. Is that necessarily bad? Not if you can meet production and quality goals... although I question the quality side. We are either cutting the slower learners or graduating pilots with lesser knowledge/skills, imo.

Posted
15 hours ago, MCO said:

Yeah but I think his point was survivorship bias. You are only deciding the best criteria for selecting students based on the criteria currently used to select students.

Survivorship bias doesn't have anything to do with the criteria being used in an evaluation - it has to do with the "subset" of data points included in the analysis. See the small section about "missing bullet holes" in the wiki: https://en.wikipedia.org/wiki/Survivorship_bias. It's an interesting and counter-intuitive discussion about how our intuition works and how easily our "reasoning" can be led astray by invisible and incorrect assumptions.

In that situation, the mistake the military made was to only look at bombers that returned from combat - not bombers that didn't make it back (i.e. the ones that were shot down). That led them to draw wildly wrong conclusions about where to armor up the bomber fleet. By way of analogy, this study includes UPT graduates (bombers that "make it back") and UPT washouts (bombers that "don't make it back") - it doesn't include intel school washouts and/or AFIT graduates because that isn't going to tell you anything about graduating from UPT. It didn't make sense to include data where P-38s were or weren't getting shot up because it was a study focused on bombers.

It's not survivorship bias; you're advocating for using more dimensions of data, which is fine.

13 hours ago, jazzdude said:

Exactly.

One of the dangers of a backwards-looking prediction is that the data is biased because selection criteria were already applied. Because there are selection criteria, other potentially causal factors or better predictor variables may have been excluded. So the best this methodology can do is say the selection criteria are adequate, but not enough information exists to say they are the best selection criteria.

It's like saying back in the 80s that women can't be pilots because all the successful pilots in the past were men. Technically true and backed by data, but the data was biased because of the selection criteria used in the past.

The other issue is this study used graduation from UPT as the success criteria, which may or may not be the real measure of success we want. FTU graduation might be a better success criteria (as it also evaluates if the right assignment to airframe was given)

A few things. First, any prediction that is going to be made will, by definition, be "backwards looking," since there's no such thing as future data. And while there may well be better predictor variables out there, the difficulty will be to capture them in a consistent and reliable way across a large population distributed across multiple communities and multiple time spans - not an easy challenge. Maybe if we could somehow capture those students who used to "bullseye womp rats back on Tatooine" we could enhance our process... it's challenging to get to that level of fidelity, though.

Already, the fact that > 85% of UPT candidates make it through provides a high level of confidence that UPT selection criteria are pretty good - squeezing out the last few percent becomes increasingly hard in any endeavor. Any average high school varsity basketball player is in the top 1% of all basketball players on earth. Though we all know there is an enormous difference between that kid and Michael Jordan...

And finally, this is not like saying women can't be pilots. No scientific researcher looking at that data and at how people were selected for pilot training back in the 80s would ever draw that conclusion. I get your point about the insight gained being limited by the data, but then so is everything else, because we don't have perfect measurement for anything. In any case, the data used in this study included women.

10 hours ago, Av8 said:

You are correct about the fact that the model uses previous data, but that is not survivorship bias. Survivorship bias is when you throw out the unsuccessful candidates and only use those who were successful, but this model actually emphasizes the unsuccessful events to draw insight from them.

Correct. Though I would say the model "includes" the unsuccessful events in order to learn from them. Not emphasizes.

2 hours ago, MCO said:

It's not about throwing out data, it's about not knowing the data even existed. The survivors are the ones selected for UPT. The tons of unknown data are all the ones who never went to UPT, i.e., the majority of the population. Granted, it's hard to find out how someone who never went to UPT would do against those who did.

So is your suggestion to include people not selected for UPT and then measure how they do in UPT? Or is it to just lump random people into the study who didn't go? I'd pay to see the first executed. If you're suggesting the second, then I think all that study will conclude is that being selected for UPT is the most important data point in determining who graduates from UPT - not exactly ground-breaking research.

The point is that a study like this is not the same as a vaccine trial. You are already selecting from a group that self-selected and there is nothing you can do as the researcher to affect the outcome you want to examine (UPT graduation) from a group of people that doesn't want to be military pilots.
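Putting rough numbers on that base-rate point: with >85% of students already graduating, a "model" that predicts everyone graduates is right about 85% of the time, so the article's 94% accuracy has to be judged against that floor, not against a coin flip. The 85% and 94% figures below are the ones quoted in this thread; everything else is illustrative.

```python
# Majority-class baseline: always predict "graduates."
base_rate = 0.85        # rough share of UPT students who make it through
model_accuracy = 0.94   # accuracy reported in the article

# The trivial rule is right whenever the student graduates.
baseline_accuracy = max(base_rate, 1 - base_rate)
lift = model_accuracy - baseline_accuracy  # what the model actually adds

print(baseline_accuracy)   # 0.85
print(round(lift, 2))      # 0.09
```

So the model's real contribution is the ~9 points above the do-nothing baseline, which is exactly the "squeezing out the last few percent" problem.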

Posted
43 minutes ago, ViperMan said:

 

Already, the fact that > 85% of UPT candidates make it through provides a high level of confidence that UPT selection criteria are pretty good - squeezing out the last few percent becomes increasingly hard in any endeavor.

That’s the key takeaway. How much more work is it to get that extra few %? It seems like all the new UPT programs are to capture that extra bit, and is the juice worth the squeeze?

Posted
9 hours ago, Splash95 said:

I recently washed out of UPT. Of course I don't have access to big-picture data, but of the 28 of us who started, 5 failed, 1 SIEd and 1 rolled back and then went to another base for reasons unrelated to flying. I know several studs from the class before ours also washed, though I don't have exact numbers. I have a lot of prior (civ and mil) flying experience, I worked extremely hard, and I still failed. Does that disprove assertions that UPT has degraded or become easy? Certainly not. Still, of the 11 in my own flight for most of the way through (1 rolled back but eventually got wings), the majority went to at least one 89 ride or ground eval. My classmates, to my knowledge, were all dedicated and took the program seriously, and I'm happy that most of them graduated.

Splash: Good on you for giving it a shot. Keep your chin up--your UPT performance doesn't define you as a person.

From what I've seen in the MAF, the recent UPT grads are just as good or as bad as the older ones. I've flown with copilots from Altus that blew me away with strong GK/procedures/flying, and I've flown with copilots that were, well, copilots. What's more frustrating to me than the UPT syllabus is that kids graduate UPT, sit for a while, go to Altus, PCS to base X, sit some more while waiting for SERE and water survival, then finally touch a plane again after 3, maybe 4 months. That's a long time to sit; young pilots' hand-flying skills are very perishable.

I have yet to be shocked by a newly minted pilot's (in)abilities after UPT. Does anyone out in the operational squadrons have similar experiences, or the opposite?

Posted

[Image: UPT graduation rates by year, 2011-2018]

Graduation rates for 2011-2018, with an overall rough average of 84%... imagine if our average graduation rate were 94% without degrading the curriculum. 10% seems like a small change, but that equates to hundreds of extra pilots each year through the pipeline. The quality of the program is another question, but this model is agnostic to program quality. It is a tool to create a new vetting process to improve who we select to go to UPT in the first place. There are shortfalls in terms of its dependence on the 2010-2018 UPT process, but it is definitely a worthwhile discussion to have, and an alternate way of solving the UPT backlog crisis.

 
