Problem with Periodization Part II: A Function of Human Error

Last time I talked about Periodization, it probably felt like I bashed it pretty hard. Well, John Kiely’s “article” (it’s pretty long) is compelling and I recommend you read it. A link to the download off of ResearchGate is here. Some of the most thought-provoking parts of this article are actually not about training athletes at all, but rather about the amazing ability of humans to make mistakes, really big ones.

The pervasiveness of human planning inadequacies has been extensively documented. High-profile examples include the construction of Denver International Airport (16 months late, costs 300% greater than forecast), the development of the Eurofighter jet (5 years late, $25 billion above predicted cost), the Scottish Parliament building (3 years late, with projected cost of £35million escalating to £414 million) and, most famously, the Sydney Opera House (a scaled-down version completed 10 years late, with estimated costs of $7 million eventually amounting to $102 million). What is startling about such examples is that these projects were not entered into lightly and were not planned by inexperienced amateurs short on resources, yet all still floundered spectacularly.
Academic paper (PDF): Planning for physical performance: the individual perspective. Available from: https://www.researchgate.net/publication/285190238_Planning_for_physical_performance_the_individual_perspective [accessed Apr 2, 2017].

As Kiely states in this part of the article, these aren’t projects that were taken on by people short on experience or resources. Their limitations, however, were brought on by the fact that they are human. Humans have innate psychological weaknesses that make planning next to impossible when dynamic components are involved. Of course, there are some instances where humans have an incredible ability to predict the outcomes of interventions, namely in mechanistic systems with stable, one-dimensional components. In a mechanistic system, each component serves a single purpose; if one goes bad, the system becomes chaotic and the results are either unpredictable or non-existent. Replace or repair that part, and function is restored. Therefore, when it comes to mechanistic systems, linear and deterministic logic is appropriate for designing interventions that will restore or improve performance.

The preference to view the human body and its components (or subsystems) as a mechanistic system is not only archaic, but ridiculous. By nature, complex biological systems are chaotic and unstable. To steal a quote from Kiely’s article: “Complex adaptive system is a term used to describe systems composed of multiple subsystems which interact in non-linear, non-periodic, and non-proportional ways”. Unfortunately, it’s incredibly common for humans to think of things in ways that are comfortable and familiar to them. The idea of a complex biological system with seemingly infinite moving and interacting parts is an uncomfortable thought for most people. We prefer the idea of a mechanistic system in which a given intervention produces a predictable result.

That said, when we view the adaptive responses of a biological system from a mechanistic perspective, it becomes effectively impossible to predict the outcomes. As Kiely states brilliantly: if the conceptual model or paradigm for understanding the system is flawed, then all subsequent decision making will be compromised. In our case, decisions about how we organize training programs are flawed if we assume we understand how the complex adaptive mechanisms work. The truth is, we have very little information about how adaptive mechanisms work. We “know” how very specific physiological mechanisms behave in very controlled environments (i.e., in vitro), but we cannot accurately predict their responses as components of their actual subsystems, or their resultant contribution to the biological system as a whole.

Considering that these chaotic properties are actually what keep complex biological systems in balance, they are a necessary evil. Because the system exists in delicate balance, a single, seemingly innocuous event may be all it takes to push the system off its axis. And because the subsystems influence one another, that seemingly innocuous event will have a ripple effect (for better or worse) that can be amplified by competing compensatory mechanisms as it propagates through neighboring systems. In that case, we go from one system being pushed off axis to large-scale disruption and dysfunction; the usual result is overuse symptoms like sickness, soft-tissue injury, wacky hormonal responses, or just plain old over-tiredness.

Even our efforts to predict the weather are flawed. Something we “understand” as well as meteorology still has so many influencing factors (which only appear stable) that our forecast for the next seven days changes multiple times over the course of a week. The reason is that we simply don’t have the technology or monitoring ability to view the subsystems on a fine enough scale. Without the means to accurately assess the current states of the subsystems, or their inter-systemic dynamics, they are, by definition, unpredictable.

To make matters worse, there are people we rely on to give us information about the behavior of unpredictable systems. These “experts” are meteorologists, stockbrokers, market analysts, performance coaches, etc., and they all have the duty of earning the trust of the people they deliver that information to. Paradoxically, a confident and assertive facade is required to cultivate that trust, while we tend to be wary of the cautious, questioning cognitive styles that are required to make good decisions. If you flipped on the news and the weather guy said, “well, I think based on factors A, B, and C, we could conceivably have rain this afternoon,” I bet you’d change the channel, and he’d get fired.

The dogmatic nature of “sticking to the plan” has therefore become heralded as the gold standard for being a good leader and communicating the course of action. Unfortunately, psychological research has demonstrated that it is this very behavior which predisposes us (as humans) to be absolutely miserable at being correct.

Two of my favorite quotes from Kiely’s review are found below:

In the most extensive examination of human predictive ability to date, Philip Tetlock of the University of California at Berkeley collated the precisely specified predictions of a large cohort of experts. This 20-year study involved 284 professionals, all of whom made their livelihood through the prediction and analysis of political and economic trends. All experts were given regular lists of questions and asked to forecast future outcomes. All had access to extensive information, had extensive experience, had high levels of relevant education and were considered leaders in their respective fields. Yet, when the results of the many thousands of predictions were collated, it became blatantly obvious that their ability to predict was universally poor. No single expert came remotely close to being consistently right. In fact, only in certain cases were expert predictions better than what researchers termed ‘dart-throwing chimps’ – in other words, randomly generated guesses (Tetlock 2005).

Academic paper (PDF): Planning for physical performance: the individual perspective. Available from: https://www.researchgate.net/publication/285190238_Planning_for_physical_performance_the_individual_perspective [accessed Apr 2, 2017].

——————————————————————————————————————————–

An extensive literature has provided illustrations demonstrating that, as a species, we universally struggle with planning and prediction tasks (Fig. 10.1). Paradoxically, in both the seminal work of Meehl and the extensive study of Tetlock, experts with the lowest rates of forecasting accuracy were simultaneously the very ones with the greatest confidence in their predictive abilities. It would appear that this misplaced overconfidence made experts increasingly vulnerable to decision-making error. In contrast, their less dogmatic peers, who resisted the temptation to employ the cognitive shortcut of predicting the future exclusively on the basis of past observations and who refined their perspectives against the emerging evidence, consistently outperformed their more confident peers. The key factors driving such behaviours are innate, instinctive ego-protective mechanisms, which habitually rationalize our successes as resulting from our superior intuition but failure as being consequent to events outside our control (for more on this, see Tetlock 2005).
Academic paper (PDF): Planning for physical performance: the individual perspective. Available from: https://www.researchgate.net/publication/285190238_Planning_for_physical_performance_the_individual_perspective [accessed Apr 2, 2017].

So based on this, not only are the overconfident experts wrong more often, they are also the most convinced that positive outcomes are their own doing and that negative outcomes are due to factors they could not control or foresee. Ah, maybe that’s the problem. It’s the factors we cannot PREDICT that make us suck at making decisions and plans about the future. Complex systems are bursting at the seams with factors that are not one-hundred-percent predictable. The best way to design a plan, therefore, is to view both the successes and the failures of the previous plan as your own doing. Only when you accept responsibility for the flaws (not just the successes) are you prepared to make a more educated suggestion about how to move forward logically. In other words, projections must be checked against outcomes, and in the world of human performance this means employing monitoring strategies. Not testing, monitoring. Testing means looking at results and determining what to do next time. Monitoring means looking at current evidence and deciding what to do while there’s still time. I’m not saying you’ll be completely accurate in correcting the course, but you’ll at least be using information about the current state of affairs to guide your next decision.
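To make the testing-versus-monitoring distinction concrete, here is a minimal sketch of what an in-week monitoring check could look like, assuming session-RPE load (RPE × minutes) as the input. The 7-day/28-day comparison and the 1.3 flag are illustrative assumptions on my part, not thresholds from Kiely or any validated guideline.

```python
# A minimal sketch of "monitoring, not testing": check the plan while there is
# still time to change it. Session load here is session-RPE style
# (RPE x minutes); the 7-day vs. 28-day comparison and the 1.3 flag are
# illustrative assumptions, not validated cutoffs.

def session_load(rpe, minutes):
    """Session-RPE load: perceived exertion (0-10) times duration in minutes."""
    return rpe * minutes

def acute_chronic_ratio(daily_loads):
    """Average load over the last 7 days relative to the last 28 days."""
    acute = sum(daily_loads[-7:]) / 7.0
    chronic = sum(daily_loads[-28:]) / 28.0
    return acute / chronic if chronic else 0.0

def check_todays_plan(daily_loads, threshold=1.3):
    """Decide *before* the session whether recent loading looks like a spike."""
    ratio = acute_chronic_ratio(daily_loads)
    if ratio > threshold:
        return "ratio %.2f - consider trimming today's volume or intensity" % ratio
    return "ratio %.2f - proceed with the planned session" % ratio

# Example: three easy weeks followed by a sudden jump in daily load.
history = [300] * 21 + [600] * 7
print(check_todays_plan(history))  # flags the spike while adjustments are still possible
```

The specific numbers aren’t the point; the point is that the decision uses the current state of affairs rather than waiting for an end-of-block test.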

(Paraphrased from Kiely:) This adds importance to the design of a sensitive and responsive training process: sensitive to the threats and opportunities that arise as the program runs its course, and responsive in the sense that it lets us modify the plan to dodge risks or capitalize on opportunities.

Of course, you’re not always going to be working with people who are open to the idea of “protecting” their athletes. Those folks want them full-blast, full-time. Occasionally, however, you’ll have an opportunity where coaches are willing to employ things like session RPE or power outputs that can aid in guiding the training process rather than merely reflecting on it. Another thing I’m becoming interested in exploring is RPE intensity prescription. For example, “I want this next set to be a 7/10” as opposed to “here’s 70%, hit it for 5”. Anyone who trains themselves or athletes knows that 70% doesn’t always feel like 70%. This could potentially be a more sensitive daily intensity prescription and therefore help you dodge overuse issues, or at least mitigate them. As I’ve said, I’m looking forward to exploring the advantages and limitations of this method.
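As a rough illustration of the difference between the two prescriptions, here is a sketch comparing a fixed percentage with an RPE target. The RPE-to-%1RM values are rounded approximations of a generic reps-in-reserve style chart and are placeholders I’m assuming for the example, not numbers from Kiely or a validated table.

```python
# Sketch: fixed-percentage prescription vs. an RPE-based target.
# The chart values below are rough approximations for illustration only.

RPE_CHART = {
    # (reps, target RPE): approximate fraction of 1RM to start from
    (5, 7): 0.76,
    (5, 8): 0.79,
    (5, 9): 0.82,
}

def fixed_percentage_load(one_rm, percent):
    """'Here's 70%, hit it for 5' - the load is fixed regardless of how today feels."""
    return one_rm * percent

def rpe_starting_load(one_rm, reps, target_rpe):
    """'I want this next set to be a 7/10' - the chart only supplies a starting
    guess; the lifter adjusts up or down based on how the warm-ups actually feel."""
    return one_rm * RPE_CHART[(reps, target_rpe)]

# Example with a hypothetical 200 kg squatter:
print(fixed_percentage_load(200, 0.70))  # 140.0 kg, the same on a great day and a terrible one
print(rpe_starting_load(200, 5, 7))      # 152.0 kg as a starting point, then autoregulated in-session
```

The appeal, to me, is exactly what the paragraph above describes: the prescription is anchored to how the athlete is responding today rather than to a number calculated from a test weeks ago.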

Final Thoughts

  1. Chess masters actively seek to falsify their potential next-move decisions. In contrast, novice players primarily seek evidence that confirms the perceived worth of their next move. Masters try to pick holes in their own theories; novices seek evidence that positively supports their initial opinions (Cowley & Byrne 2004).
  2. Our brains are made for fitness, not for truth (Pinker 1997). Evolutionary requirements have taught us to seek out simple, straightforward answers. This habitual preference for simplistic rule-based decision making is a predisposition which is especially exposed when we attempt to plan in unpredictable, complex environments.
  3. Proliferation of stress mismanagement syndromes throughout physical performance domains is suggestive of cultural planning inefficiencies… An inevitable byproduct of negotiating fine margins of error is that thresholds will be exceeded (Kiely 2011).
  4. Research aimed at explaining the effects of training programs on multiple subjects of comparable age and experience repeatedly produces variable results. The same program can produce large, small, and negligible changes in different people, and it can produce different-sized effects in the same person over multiple exposures. If the system responded in a predictable manner, there would be less inter-subject and intra-subject variability in outcomes.

References

  1. Cowley, M., Byrne, R., 2004. Hypothesis testing in chess masters’ problem solving. In: Fifth International Conference on Thinking, Leuven, Belgium, 22–24 July 2004.
  2. Kiely, J., 2011. Planning for physical performance: the individual perspective. University of Central Lancashire.
  3. Meehl, P.E., 1954. Clinical versus statistical prediction. University of Minnesota Press, Minneapolis, MN.
  4. Pinker, S., 1997. How the mind works. WW Norton, New York.
  5. Tetlock, P., 2005. Expert political judgment: how good is it? How can we know? Princeton University Press, Princeton, NJ.