Open access peer-reviewed chapter

Knowledge Redundancy Cycles in Complex Mission-Critical Systems

Written By

Darrell Mann

Submitted: 28 February 2019 Reviewed: 14 October 2019 Published: 04 March 2020

DOI: 10.5772/intechopen.90138

From the Edited Volume

Harnessing Knowledge, Innovation and Competence in Engineering of Mission Critical Systems

Edited by Ali G. Hessami


Abstract

Based on a 20-year research programme spanning over 10 million case studies, 98% of all innovation attempts end in failure. The main aim of the research has been to decode the underpinning, first-principle-driven ‘DNA’ of the 2% of successful attempts. Sitting right at the centre of this DNA is a triad of fundamentals: the need to embrace the dynamics of complex adaptive systems, the need to actively seek out and eliminate compromises and contradictions, and the need for industry domains to periodically unlearn knowledge that has become redundant. The chapter discusses all three of these pillars. Particular attention is paid to the knowledge redundancy topic, where it is demonstrated that the life-cycle of knowledge follows distinct, repeating patterns of evolution at meta-, macro- and micro-hierarchical levels. The research further demonstrates how organizations can use these patterns to objectively identify redundancy ‘pulse-rates’ and thus objectively manage both the acquisition of required new knowledge and the disposal of knowledge that is no longer fit for purpose. The research also shows that a key aspect of this ‘unlearning’ activity demands that organizational leaders acknowledge and accommodate the very human emotions that accompany change initiatives in which the things that define a person’s competence become a hazard to the future success of the enterprise.

Keywords

  • complex systems
  • S-curves
  • innovation
  • redundant knowledge
  • embedded knowledge
  • emotional design

1. Introduction

Across the evolutionary history of mankind, technology has generally evolved through trial and error. Luck, happenstance and the occasional random ‘Eureka’ moment appear to have been the dominant mechanisms of progress, with advances more often than not appearing against the prevailing ‘common sense’ of the day [1]. Johnson [2] does not take the story much further with his seven principles of progress, the majority of which also feature a strong bias in the direction of randomness—serendipity, error, exaptation, ‘the adjacent possible’ and ‘slow hunches’ being five of the seven—leaving just ‘liquid networks’ and ‘platforms’ as the two that offer even a glimmer of hope that scientific progress might have any underpinning, repeatable ‘theory’ upon which future engineers and scientists might seek to design the future. If Wolpert and Johnson—and the myriad other authors sharing the same basic views—are to be believed, the future progress of mankind is little more than a game of roulette, except with somewhat worse odds of success. If this is in any way true, things do not seem to bode well in a world of globalization, digitalization and exponentially increasing interdependencies. If we cannot fathom the mechanics of new knowledge creation, what hope is there that engineers, scientists and business leaders can know when existing knowledge has become redundant?

The question is not a trivial one, especially when viewed through the mission-critical lens provided by the recent pair of crashes of Boeing’s new 737 Max aircraft: the first, Lion Air flight 610 on 29 October 2018, and the second, Ethiopian Airlines flight 302 on 10 March 2019. The first-generation 737 entered service in 1968, and the various evolutions since have collectively made it the best-selling commercial aircraft of all time. This in an industry that sets the global standard when it comes to safety.

In 1946, meanwhile, as is the case with the large majority of discoveries reported by Wolpert, Johnson and their ilk, an apparently random research question sparked another major discovery, albeit one that, to date, has still not been recognized by most. The recipient of the question was Soviet engineer Genrich Altshuller, who was sent to the patent office to determine the differences, if any, between ‘good’ and ‘bad’ inventions [3]. The research—still ongoing today, and now able to take advantage of computerized search techniques that permit several thousand new case studies to be analyzed per week (not coincidentally, the same rate at which new patents and patent applications are published)—has, through largely empirical means, revealed much of what might be described as the ‘DNA’ of the 2% of innovation attempts that end up achieving success. At the time of writing, over 10 million case studies have been examined and built into a series of first-principles-focused databases that together reveal that:

  1. ‘good’ solutions deliver the customer-desired functions better than previous incumbent solutions, where ‘better’ means an ever-increasing ratio of benefits to negatives (cost, waiting time, harmful side-effects, etc.).

  2. ‘good’ solutions do not make the trade-offs and compromises associated with ‘optimization’-based design strategies taught in schools and colleges, but rather progress by eliminating said compromises and contradictions.

  3. ‘good’ solutions follow clear trajectories of successive contradiction emergences and eliminations.

Sitting right at the heart of the Soviet-originated discoveries is the so-called S-curve of system evolution. S-curves are visible enough that many authors have been able to spot their basic characteristics. Indeed, most engineers and scientists will claim some kind of passing awareness of the dynamics associated with S-curves, albeit few can be observed making meaningful use of the knowledge when it comes to the curves’ relevance to the generation of new knowledge, or the redundancy of old.

The vertical axis of any S-curve picture may be plotted to show any of the attributes of a system that one might wish to improve. At a management level, the axis might plot customer adoption, profit or ROI. At more granular engineering levels, it might plot performance parameters like speed, range or payload, or, closer to manufacturing operations, quality, waste-reduction or cost-reduction. Oftentimes, all of these attributes can be integrated together so that the curve plots ‘value’. The horizontal axis is usually plotted as time or, in more enlightened environments, improvement effort expended (Figure 1).

Figure 1.

S-curves and the dynamics of discontinuous change.

The shallow-gradient start of the S-curve is usually associated with the inevitable struggle that occurs when a new technology appears. Eventually, assuming a critical mass of ‘early-adopter’ customers is willing to pay enough for the ‘poor’ initial manifestations of the technology, this early revenue will pay for the technology’s continuing development. At some point there will be some form of internally-controlled, production-related Eureka moment—a new manufacturing technology, for example, or a new pricing model—that will allow the curve to follow a much steeper upward trajectory. This ‘stride’ portion of the curve is the joyous stage of an enterprise when life is easy: easy sales, easy improvements and easy knowledge creation and sharing. But then, sooner rather than later, comes the law-of-diminishing-returns top part of the curve, the ‘stuck’ portion. This is where contradictions begin to emerge: whatever it is that the enterprise is trying to improve, ‘something’ increasingly comes to prevent the achievement of those improvements. Then, finally, some smart individual or team solves the contradiction and in so doing permits the jump to a new S-curve. And, assuming the ‘right’ new solution is selected, the struggle-stride-stuck dynamics of the S-curve will repeat again.
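The struggle-stride-stuck dynamic lends itself to a simple quantitative illustration. The minimal sketch below (in Python) assumes a logistic form for the S-curve and segments it by local gradient; the functional form, parameter values and thresholds are all illustrative assumptions rather than anything taken from the research described here.

```python
import numpy as np

def s_curve(effort, ceiling=1.0, rate=1.0, midpoint=5.0):
    """'Value' delivered as a function of cumulative improvement effort,
    modelled (as an assumption) by a logistic curve."""
    return ceiling / (1.0 + np.exp(-rate * (effort - midpoint)))

effort = np.linspace(0.0, 10.0, 101)
value = s_curve(effort)
gradient = np.gradient(value, effort)  # local return on improvement effort

# Steep middle section = 'stride'; shallow start = 'struggle';
# shallow top = 'stuck' (the law of diminishing returns).
phases = ["stride" if g >= 0.15 else ("struggle" if e < 5.0 else "stuck")
          for e, g in zip(effort, gradient)]

print(phases[5], phases[50], phases[95])  # -> struggle stride stuck
```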

The process by which systems advance up their S-curve and the process through which the discovery permitting the discontinuous shift from one S-curve to the next is made can be seen to be polar opposites. The majority of the tools, processes and management strategies (Lean, Six Sigma, Agile, etc.) found in modern enterprises have evolved to assist in the process of optimizing systems and thus allowing them to climb their S-curve. As the top of the curve is approached, however, a fundamental law of diminishing returns kicks in: as these tools, methods and strategies are adopted by ever greater numbers of people tasked with ‘continuously improving’ the system, their efforts produce fewer and smaller benefits. If the tools being used enable optimization, they, by definition, prohibit the real job from taking place. Optimization is the nice word for making trade-offs. Innovation—i.e. the successful jump to the next S-curve—is all about not making those trade-offs and solving the contradictions instead. Alas, the engineers, scientists and leaders who even recognize the existence of contradiction-solving tools are still very much in the minority. Consequently, the majority of organizations find themselves stuck at the top of their current S-curve deploying tools that are no longer relevant. In many ways, all that the currently fashionable ‘Agile’ movement has shown is that deploying the wrong tools faster does not equate to faster progress; it merely means improvement teams spin their same trade-off wheels faster and go around in ever-tightening circles that deliver no tangible progress.

A big part of the task of climbing the current S-curve is about eliminating the complexity of a situation. For the last 100 years, enterprise leaders have been taught that the greatest efficiencies are obtained by understanding how the system works and by ‘standardizing’ as much of it as possible. Standard work makes it much easier to train workers. Making every worker focus on an ever narrower area of specialization, however, also creates enormous knowledge silos. Silos make for high efficiency, but become a serious hindrance when the need to innovate arises.

Innovating—the job of leaving the current S-curve and finding the next one—means embracing the inherent complexities of the world. In the world of standardization there are clear rules and protocols, everything is controllable, mistakes are bad and failure is worse. If there is a problem, root-cause analysts will find it. In the world of innovation there are no clear rules—the job is in fact to break the current rules in order to find better ones—seemingly nothing is controllable, there is no such thing as a root cause anymore, and mistakes and failure are among the primary mechanisms of progress. Failure, in the complex and often chaotic world of innovation, is learning. And in a complex world, the teams that learn the fastest are the ones most likely to prevail.

In the S-curve climbing world of optimization, chaos is to be avoided at all costs, whereas those tasked with working through the fog of uncertainty inherent to the innovation process know that chaos often plays a pivotal role. Some authors, most notably Hurst [4], would say that chaos was an essential component.

The Cynefin Framework [5] offers a pioneering means of displaying the different kinds of operating regime found in the world of business. Originating from work in the knowledge management domain, Cynefin’s starting premise is that unless managers understand which of these regimes they are currently within—obvious, complicated, complex or chaotic—it is highly likely that they will find themselves using the wrong sorts of methods at the wrong time and for the wrong reasons. More recently, the Complexity Landscape Model [6] has extended the Cynefin framework to incorporate a second dimension that maps not just the system being managed, but also the surrounding environment.

Figure 2 takes the ideas of Hurst regarding the importance of chaos in the innovation process and plots a typical S-Curve-to-S-Curve discontinuous change process onto the Complexity Landscape Model. ‘Creative destruction’ and ‘ethical anarchy’ are expressions used by Hurst to describe the thinking necessary to successfully navigate the chaotic ‘no man’s land’ between the prevailing and emerging new S-curves. They are also the engine behind the un-learning that needs to take place during the transition process. In many ways this ‘unlearning of the old’ is as critical to success as the discovery of the new.

Figure 2.

Complexity landscape model and discontinuous change.

If the descent into chaos provides the necessary ‘permission’ to unlearn the current knowledge, the surrounding complex domain demands a shift in the way innovators see the world. The links between cause and effect become highly tenuous and consequently there are no root-causes in the complex domain. Repeatability largely disappears. And all of the optimization-related knowledge acquired during the rise from ‘stride’ to ‘stuck’ very clearly becomes redundant. The only knowledge, in fact, that does not become redundant is that relating to the first principles from which the system behavior emerges.

The discontinuous shift from one system S-curve to another may thus be seen to have potentially profound implications for the management and flow of knowledge. With this in mind, an important knowledge management metric for any enterprise relates to the frequency with which such discontinuities occur. These discontinuity ‘pulse-rates’ are typically neither understood nor well managed in most enterprises. Rather, it has been the 10-million-case-study research started in the former Soviet Union in 1946 that has done most to reveal this kind of pulse-rate information, and specifically how it varies considerably from one industry domain to another. In high-capital-investment, slow-changing industries like mining or oil and gas, for example, the discontinuity pulse period is typically over 30 years. Contrast this with the digital world, especially the ‘innovation cauldron’ that is China, where significant changes are likely to occur every few months, thanks in no small part to a working culture in which intellectual property is not respected, meaning that in order to stay ahead of the game an enterprise needs to be innovating on an almost continuous basis [7]. Old knowledge in the digital world can become redundant in a matter of months, meaning that workers in the field need to devote a significant proportion of their time to learning new ways of achieving the functions customers wish to have delivered. Contrast this with a typical mining engineer who, if they are unfortunate enough to join the profession at the wrong time, might never see a discontinuous change during their entire working career.

One other aspect of the Soviet-instigated research that also features heavily in the knowledge redundancy story involves another revealed pattern associated with the S-curve. That pattern is reproduced in Figure 3.

Figure 3.

S-curves and complexity.

This graph shows how, despite attempts to reduce system complexity, the actual complexity of a system follows an increasing-then-decreasing characteristic. The first—increasing complexity—portion of this curve is all about the inherent need to add elements to a system in order to improve performance and functionality. A classic example of this pattern in action can be seen in the evolution of the mobile phone, which has progressively added cameras, e-cash, an alarm clock, a torch, a music player and an app-store full of other functional capabilities over the course of the past decade. Sooner or later, however—and probably sooner as far as our phones are concerned—customers begin to complain that the phone is over-serving their needs, or that they have to recharge it continually, or that durability is impaired. Something tells the providers of the handsets that the system has reached a maximum viable level of complexity. Once this point has been reached, the motivation of the designers tasked with continuing the evolution story shifts towards simplification. The need during this second phase of the curve is to maintain the required functionality of the system but to achieve it with fewer components and lower cost. Complexity, in other words, heads in a downward trajectory, in part through cunning engineering design, and in part by embedding the complexity so that it is no longer visible to the customer.

The increasing-decreasing complexity curve describes an important aspect of the knowledge story. In effect, the shaded area under the curve describes the ‘excess knowledge’ generated during the S-curve evolution journey. The complexity of systems over the course of multiple S-curve jumps tends to head in an upward direction, but within each individual S-curve there is an ‘over-shoot’ that happens because designers are not smart enough (or there is not enough incentive to become smarter) during the early evolution of the system to make functional improvements without adding new elements to the system. If designers were smarter, or at least understood the complexity curve pattern, it ought to be possible to avoid much of this complexity over-shoot and the consequent surplus or redundant knowledge attributable to it.
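To make the ‘excess knowledge’ idea concrete, the short sketch below integrates the area between an assumed rising-then-falling complexity profile and the level the system eventually settles at. The Gaussian-bump profile and all of the numbers are invented purely for illustration; only the shape of the argument comes from the text.

```python
import numpy as np

# Illustrative complexity profile over one S-curve: component count
# rises while function is added, then falls as designers simplify
# and embed complexity out of the customer's sight.
t = np.linspace(0.0, 10.0, 201)
complexity = 20.0 + 60.0 * np.exp(-((t - 4.0) ** 2) / 4.0)  # rise, then fall
settled = complexity[-1]  # level after the simplification phase

# 'Excess knowledge' = area between the actual trajectory and the
# settled level (a simple rectangle-rule integration).
dt = t[1] - t[0]
overshoot = float(np.sum(np.maximum(complexity - settled, 0.0)) * dt)
print(f"complexity over-shoot (area units): {overshoot:.1f}")
```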

A more subtle characteristic of the transition from one S-curve to the next is that, traditionally, when further improvement of the current system becomes difficult, engineers and scientists make many attempts to try and find the new S-curve. Thomas Edison famously tried over a thousand different materials before he found one suitable for the filament in his lightbulb. Almost as famously in more recent times, James Dyson built over 5000 prototypes of his cyclone vacuum cleaner before he found something worthy of release onto the market. In many ways this trial-and-error experimentation—Figure 4—is what sits behind the 98% failure rate of innovation attempts.

Figure 4.

Finding the new S-curve.

This work, too, has considerable relevance to the knowledge creation story. Finding several thousand solutions that do not work represents a considerable amount of redundant knowledge being generated. True, a failed experiment provides a modicum of new knowledge (‘don’t bother doing this again’), but when the modus operandi is trial and error, a significant amount of ‘non-knowledge’ is inherently generated. Traditionally, the generation of new knowledge has been somewhat inefficient, perhaps because of an implicit assumption by engineers and scientists that this is simply the way the world works.

Initially, of course, it very likely was the way the world had to work. But today, having benefited from millions of trial and error S-curve shifting experiments, the Soviet-sparked innovation-DNA research has revealed a number of patterns that now show how engineers can do something far more efficient than trial and error. Ninety-eight percent of innovation attempts end in failure. If the 98% failed attempts (‘noise’) are removed from the analysis and the remaining 2% (‘signal’) are analyzed, a clear roadmap of successful design strategy begins to emerge. The next section examines two important elements of this roadmap. Just before heading into that story, it is worth summarizing the knowledge flow story in relation to the S-curve pattern described in this section:

  1. When systems make jumps from one S-curve to the next, much of the knowledge associated with the old system is likely to become redundant.

  2. The pace of knowledge redundancy—the ‘half-life’ of knowledge—is determined by the rate at which discontinuous S-curve shifts occur (a simple decay model is sketched after this list).

  3. During the search for the ‘right’ new S-curve start-point, significant amounts of new knowledge are generated; much of this knowledge will never result in meaningful progress and is thus ‘noise’.

  4. Once the right new S-curve solution emerges from this noise, designers tend to overshoot the complexity of the solutions they design, and thus find themselves generating yet more knowledge that also becomes redundant.
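A toy encoding of point 2: if the domain’s discontinuity period is treated as the half-life of its knowledge, the fraction of today’s knowledge still relevant after a given time can be estimated. The exponential form and the domain figures below are illustrative assumptions, not outputs of the research.

```python
# Treat the S-curve discontinuity period P as the half-life of the
# domain's knowledge and estimate what survives after t years.
def fraction_still_relevant(t_years: float, half_life_years: float) -> float:
    return 0.5 ** (t_years / half_life_years)

print(f"mining  (half-life ~30y), after 10y: "
      f"{fraction_still_relevant(10, 30):.0%}")   # ~79% still relevant
print(f"digital (half-life ~1y),  after 10y: "
      f"{fraction_still_relevant(10, 1):.2%}")    # ~0.10% still relevant
```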

Historically, this process of random knowledge creation and redundancy was deemed to be ‘the way the world works’. It was inefficient but there was no alternative. Innovation in today’s world increasingly cannot afford the 98% ‘waste’ and, like all other aspects of life, needs to begin climbing its own S-curve. The world has experienced the ‘struggle’ portion of the innovation capability S-curve, now it needs to hit its stride … and, thanks to a random cluster of Soviet engineers working on a fortuitously posed question in 1946, perhaps it can …


2. Patterns of system evolution

The trick to identifying and removing the 98% noise present in the innovation world involves, first, recognizing the need to examine the world in terms of function. In the words of the cliché, customers want a hole, not a drill. The implication is that rather than looking at the evolution of drills, it is more sensible to examine how holes get made. Figure 5 examines an evolution journey relating to the function ‘protection’:

Figure 5.

Early evolution of the ‘protection’ function.

Shields were one of the first solutions by which humans sought to protect themselves against an aggressive threat from others. As time progressed, the design of shields improved, but no matter how light they became, or how much their production costs were reduced, shields suffer from a pair of fundamental problems. First, they have to be held, and this severely impairs the parallel attacking function. Second, the bigger the shield becomes, the more of a person it is able to protect, but this extra size results in extra weight and less maneuverability. There are, in other words, two contradictions—one relating to attack-versus-defense and the other to area-versus-weight. The emergence of these two contradictions effectively saw the shield stuck at the top of its evolutionary S-curve.

Necessity being the mother of invention, eventually the inventors of the day devised the first armor solutions. Now the wearer had two arms free and, although the solutions were still heavy, at least the weight was better distributed and allowed a certain freedom of movement. The new S-curve had arrived. Importantly, like almost every innovation when it first appears, the new system was inferior in multiple ways to the much-optimized previous solution. Armor manufacturing technology barely existed and so, if nothing else, the armor solution was much more expensive to produce than the shield; it was also difficult to get on and off, and so on. The new armor S-curve thus sat lower than the top of the shield S-curve for the majority of prospective ‘customers’. At the same time, crucially, a person fortunate enough to be wearing armor was much more likely to be the victor in a fight against someone carrying only a shield. Because of this advantage, more ‘customers’ demanded suits of armor, and that interest caused the design of armor to be progressively improved. Armor producers struggled for a while, but eventually hit their stride, and armor became the dominant protection solution, better than the best shield.

The suit of armor, of course, never gets to be the perfect protection solution. Armor technology is also subject to the ‘stuck’ law of diminishing returns found at the top of the S-curve. In this case, the limiting contradiction was lack of mobility: if the wearer was attacked from the side, it would take several seconds to turn and face the opponent. Necessity again being the mother of invention, eventually some smart designer conceived the idea of chain-mail, and a new protection S-curve began.

Repeat this story for tens of thousands of innovation step-changes and a very clear pattern of evolution emerges. The most popular version of the pattern is reproduced in Figure 6. It is usually related to the evolution of movement, or ‘dynamization’, within systems:

Figure 6.

Dynamization trend pattern.

At the first stage of evolution, like the shield, designers typically create artifacts that are ‘immobile’. Then, for various reasons, they find it beneficial to add joints to enable greater freedom of movement. Sometimes this might mean adding a single joint (think of the clamshell or flip-up mobile phones that followed in the wake of the original mono-block handsets), and sometimes—as in the case of the armor—multiple joints. Eventually, the jointed solutions evolve to become ‘completely flexible’. From there, a fluidic stage is likely to follow, which ultimately finds itself replaced by solutions making use of ‘fields’. It is usually not possible to pin down specifically what kind of field this will be in a given domain—it could be an electrical field, magnetic, digital, or anywhere along the electromagnetic spectrum—but it can be said with certainty that field solutions will eventually supplant the earlier mechanical or fluidic solutions. It is more effective, ultimately, to move electrons rather than atoms. Think laser-drills, digital user-interfaces on smartphones, ‘drive-by-wire’ vehicle control systems, electric vehicles, maglev trains, etc.
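Viewed as a data structure, the trend is simply an ordered sequence of stages from which the predicted next step can be read off. The minimal sketch below encodes this; the stage names follow the text above, while the helper function is an illustrative convenience rather than part of the published trend toolkit.

```python
# The Dynamization trend as an ordered list of stages.
DYNAMIZATION_STAGES = [
    "immobile",
    "single joint",
    "multiple joints",
    "completely flexible",
    "fluid",
    "field",
]

def next_stage(current: str) -> str:
    """Return the stage the trend predicts will follow `current`."""
    index = DYNAMIZATION_STAGES.index(current)
    if index == len(DYNAMIZATION_STAGES) - 1:
        return "field (end of currently known trend)"
    return DYNAMIZATION_STAGES[index + 1]

print(next_stage("completely flexible"))  # -> fluid (cf. the Kevlar example below)
```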

The Dynamization trend pattern in effect offers designers a roadmap of future solutions. Thus, staying with the protection function for a moment or two longer, if the designer of Kevlar bullet-proof vests were looking to explore likely future technologies, the Dynamization trend first asks them to place the current Kevlar solution on the trend (if we are being generous to the Kevlar scientists, we might call it ‘flexible’). Once the current position has been found, the roadmap then tells us the places where others in a similar position have successfully evolved their own solutions. The future of armor, according to this trend, is next some form of ‘liquid’, and ultimately some kind of field. To date, a field-based bullet stopper does not appear to be possible, but it does indeed look as though a fluidic solution—specifically a non-Newtonian fluid—offers a highly effective and commercially viable resolution of the main contradictions present in today’s ‘flexible’ Kevlar vests [8].

From a knowledge redundancy perspective, the Dynamization trend pattern holds a number of important clues. Mechanical solutions will sooner or later be superseded by fluidic ones; fluidic ones will be superseded by field-based solutions. Mechanical engineers and mechanical engineering knowledge are useful while physical artifacts are being produced, but ultimately that knowledge will become redundant as the ‘field’-based solutions begin to emerge. The ability to design scalpels, for example, is an important design skill only so long as clinicians decide that lumps of sharpened metal or composite are a better way to conduct invasive surgery than a laser.

To date, 37 other patterns have been identified relevant to the evolution of technical systems [9], 31 patterns have been found pertaining to the evolution of business systems [10], and 26 patterns found relevant to the evolution of IT systems [11]. Figure 7 illustrates one of the other technical evolution patterns, one with particular relevance to the exploitation of knowledge in the design of mission critical systems:

Figure 7.

Resilient design trend pattern.

Mission critical means achieving high levels of reliability, availability and system resilience. Such parameters are typically measured in terms of the number of ‘nines’ a given design is able to deliver. A ‘two-nines’ solution, for example, will be available 99% of the time; a three-nines solution will be available 99.9% of the time, and so on. State-of-the-art automotive design will aim to achieve four- or five-nines levels of performance. Commercial aircraft will typically be at ten- or eleven-nines levels of performance. The Resilient Design trend pattern reproduced in Figure 7 illustrates the various step-changes in design methodology (i.e. as with the Dynamization pattern, each stage may be viewed as a distinctly separate S-curve from the one that precedes it). The first stage of the pattern is the ‘trial and error’ methodology that essentially prevails within the innovation world today. The average 98% failure rate of the innovation world—i.e. somewhat below two-nines—is typical of what is achievable when designers essentially make guesses about how to design the systems they are responsible for.
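The ‘nines’ convention translates directly into availability figures and expected annual downtime. The short sketch below works through the arithmetic; the convention itself is standard reliability practice rather than anything specific to this chapter.

```python
# Availability as a function of the number of 'nines', plus the
# implied downtime per year: n nines means available 1 - 10^-n
# of the time (two-nines = 99%, three-nines = 99.9%, ...).
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960

def availability(nines: int) -> float:
    return 1.0 - 10.0 ** (-nines)

for n in (2, 3, 5, 7, 9):
    a = availability(n)
    downtime_minutes = (1.0 - a) * MINUTES_PER_YEAR
    print(f"{n}-nines: {a:.9f} available, "
          f"~{downtime_minutes:.4g} min downtime/year")
# e.g. two-nines -> ~5260 minutes (3.7 days) of downtime per year;
#      five-nines -> ~5.3 minutes per year.
```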

The next stage beyond ‘trial and error’ sees the adoption of some form of steady-state prediction model. Typically, this means defining an ‘optimum’ condition for the system—a production-line, for example, or an internal combustion engine ‘cruise’ state—and modeling the desired interactions between the various components of that system. Such a model permits each of the variables in the system to be ‘optimized’ to deliver the most efficient performance, and then, from an operational perspective, the aim becomes to operate the system at that optimal condition for as much of the time as possible.

The next evolution stage then sees system designers coming to recognize that while ‘steady-state’ might be a target, there will be inevitable ‘transient’ conditions where the system may well be expected to operate far from its optimal state. A cold engine, for example, is not the same as one that has reached its cruise operating temperature; a production-line at a shift handover is likewise non-optimal. Building a transient model of the system permits ‘optimal’ performance to be extended to a broader and broader spectrum of transient conditions.

Next up comes the ‘slow degradation’ design capability stage. This is the stage that recognizes that systems—particularly physical ones—wear out over time. A new engine is not the same as one that has been driven 100,000 km. A slow-degradation design capability thus enables designs that are able to operate optimally (and safely) as aging takes place.

A slow-degradation design capability, depending somewhat on the level of complexity of the system under consideration, will enable up to seven-nines levels of reliability. Going beyond this level demands another step-change in capability: one that in effect creates a ‘whole-system’ analysis capability in which elements that one might think are not connected to one another in actual fact are. In theory, the behavior of the jet engine mounted on the wing of an aircraft should have no impact on the design of the nose landing gear, but in a cross-coupling-level design capability, potential interactions between the two are incorporated into the design process.

The next S-curve jump in design capability builds on the need to mitigate for the dangerous user. In this way of thinking, no matter how big a mistake a user makes, the system ought to be able to survive.

Finally—so far—comes the ‘AntiFragile’ evolution stage [12]. This is a level of system design capability that not only sees the system able to survive abusive treatment, but also to become stronger as a result of that abuse: like the Hydra of Greek mythology or, in less metaphorical terms, the growing number of software systems that ‘learn’ from attempted attacks such that those attacks will be automatically dealt with in the future.

Where the Dynamization evolution pattern saw the knowledge acquired at each new stage render previous-stage knowledge largely redundant, with the Resilient Design pattern the knowledge acquired from each previous stage tends to become embedded within the next. It might thus become essentially invisible to the user, but it still has a role to play in the successful design of future systems. An example of this form of embedded knowledge can be seen in the design of aircraft control systems. When the pilot of an early-generation aircraft was coming in to land, they would be expected to manually set the flap angles on the aircraft’s control surfaces. It is very unlikely that a modern-day commercial pilot would have anything beyond a passing awareness of their flap angles, the aircraft’s control systems having now effectively evolved to the point whereby, under normal landing conditions, the pilots do little more than programme the desired runway location and let the aircraft take over all the functions necessary to land safely.


3. Emotions and intangibles

Considered from purely rational perspectives, the knowledge acquisition/disposal story appears straightforward: where a system is on its S-curve journey informs us when there is a need for new knowledge, and conversely, when there is a need to dispose of redundant knowledge or embed still useful knowledge. Of course, when the discussion centres around human beings, ‘purely rational’ is rarely a possibility.

An engineer or scientist who has devoted the best part of their career to designing mechanical systems is highly unlikely to welcome the advent of a disruptive ‘field’-based successor technology. That same engineer is not likely to be much more welcoming of a new ‘idiot-proofing’ design methodology that forces them to double or triple the amount of time they have to devote to conceiving unlikely failure scenarios.

Take the case of physician Barry Marshall, an Australian internist who resorted to drinking an infectious broth of Helicobacter pylori in order to demonstrate that stomach ulcers were caused by a bacterial infection that could be treated with antibiotics. The prevailing medical advice of the time was that ulcers were caused by stress and could only be cured by removal of said stress or, rather more drastically, removal of the stomach [13]. For an industry supposedly built on the premise of clinical evidence, long after Marshall had demonstrated that ulcers were caused by bacteria, the large majority of ulcer specialists were still looking in the direction of stress and gastrectomy for their patients. Humans, it seems, do not like being wrong [14].

It is often said in academic circles that redundant old knowledge only truly disappears with the death of its originator. If this is the case, it brings a whole new dimension to the S-curve pulse rate story.

In industry terms, the equivalent of this phenomenon comes from the average tenure within a job role. Over the course of the last 50 years, tenure pulse rates have risen considerably. Go back three generations from today, and most workers would spend their whole career in the same basic job. Today, many will expect to be shifting every 2–3 years. Faster still in the management domain.

This rising pulse rate is likely a good thing from the perspective of removing redundant knowledge, but it sees the emergence of another, perhaps more pernicious problem: the unintended removal of knowledge that continues to be relevant.

If the average tenure period of personnel is greater than the knowledge pulse rate, then there is a possibility that acquired knowledge will be preserved. If, on the other hand, average tenure is less than the knowledge pulse-rate then valuable knowledge will inevitably be lost. This is especially true of the tacit knowledge that is almost impossible to meaningfully record. Unless that tacit knowledge is transferred person-to-person while ‘on the job’ the likelihood is that it does not get transferred at all.
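A toy encoding of this tenure-versus-pulse-rate argument: when people move on faster than the domain’s knowledge pulses, tacit knowledge has no on-the-job path to the next generation. The domain figures below are illustrative assumptions drawn loosely from the text, not measured data.

```python
def tacit_knowledge_at_risk(avg_tenure_years: float,
                            pulse_period_years: float) -> bool:
    """True when average tenure is shorter than the interval between
    knowledge discontinuities (i.e. tacit knowledge may be lost)."""
    return avg_tenure_years < pulse_period_years

domains = {
    "mining / oil & gas": (2.5, 30.0),  # (avg tenure, pulse period) in years
    "digital (China)":    (2.5, 0.5),
}
for name, (tenure, pulse) in domains.items():
    print(f"{name}: tacit knowledge at risk = "
          f"{tacit_knowledge_at_risk(tenure, pulse)}")
# mining: True (people leave before the next discontinuity);
# digital: False (knowledge pulses faster than people move on).
```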


4. Mission critical systems

When thinking about ‘mission critical’ systems, the benchmark for safety and resilience is set largely by the aerospace sector. Safety is everything, the factor that unites the whole industry. When planes fall out of the sky it is not good news for anyone. Therefore, the moment an incident occurs, it is investigated rigorously and objectively and the findings spread across the industry to ensure that a repeat will never happen. This is the way to build the world’s safest industry.

But then, of course, the innate human desire for ‘more’ and the inevitability of the S-curve dynamics sooner or later push systems towards dangerous cliff edges. The full story of the two Boeing 737 Max accidents is not yet known, but it is nevertheless possible to see that something significant has shifted.

The aerospace industry in general, and Boeing in particular have a long and successful track record of evolving their products in order to offer customers better performance, economy and reliability, and so, over the years, there have been several versions of the 737.

In order to ensure safety, the industry takes very complicated systems (‘600,000 components flying in close proximity’) and makes them ‘simple’ for operators by imposing strict constraints on what pilots are and are not allowed to do. In terms of our Complexity Landscape Model (CLM), for the early evolution from the 100 Series to the 200 Series 737, the aim was to constrain the operating complexity such that the aircraft sits in the simple-simple domain, with sufficient variability to operate above the Ashby Line [15] (Figure 8):

Figure 8.

Complexity landscape model: Boeing 737, 100–200 series.

One of the 100- to 200-Series evolutions arrived with the advent of much more fuel-efficient high-bypass-ratio turbofan engines. This new generation of engines offered the potential to save a substantial amount of fuel, but at the expense of a bigger overall size than the pencil-like low-bypass-ratio engines they replaced (Figure 9). These bigger-diameter engines created a complicated problem for the 737 design team: how to fit them in the space under the wing without having to re-design the wing or the undercarriage. Here was a classic engineering contradiction. The answer, now widely familiar as an illustration of another of the contradiction-solving strategies discovered by the Soviet research, Asymmetry, was to design the ‘squashed’ engine nacelle.

Figure 9.

Shift from low to high bypass ratio engines on Boeing 737.

As shown in Figure 9, the need for the new, larger-diameter engines created a complicated problem. When the designers successfully solved the contradiction associated with this problem, they made use of complicated design tools and methods. And then, once the problem had been solved and validated through a series of qualification trials, the productionised solution was effectively no different from the operator perspective.

The latest, Max, evolutions of the 737, in theory at least, created a similar CLM development-programme trajectory: a desire to improve performance triggered a series of complicated engineering challenges—even bigger-diameter, heavier engines, a stretched and strengthened fuselage and new dual-feather winglets.

Yet again, the desire for increased fuel efficiency saw the creation of bigger, heavier engines, and yet again there was a desire to not make big changes to the undercarriage or wing design. This time the solution involved moving the engines forward and upward slightly, and an increase in the height of the nose landing-gear. One of the consequences of these moves was to alter the balance of the aircraft. Another complicated problem, but one that the engineers were able to solve using changes to the control software of the aircraft.

So far so good. Simple, resilient, well understood system, has complicated changes imposed on it, which get solved, and validated … and, hey presto, the new aircraft design returns back to ‘simple’ from the operator perspective.

Except. Not quite. This time around, the business imperative was much greater than in the past. Airbus were winning lots of orders thanks to their new, fuel-efficient A320neo, and Boeing were forced to offer airlines a more competitive 737. Costs are always important, but now they became critical to securing future business. One constraint put on the engineers was to ensure that the flyability of the Max was as near as possible the same as for the ‘classic’ 737s. This would mean that pilots could be re-trained very easily. Again, a complicated problem that the engineers seemed to have found a fix for. Another cost constraint then started to appear: on-time delivery of the new aircraft. As is the way in the airline industry these days, if aircraft are delivered late, airlines benefit from substantial compensation fees.

This time pressure now hits the programme managers. And specifically the cost-schedule-quality iron-triangle. Which two did the Boeing senior managers want? On budget, on time, or to the right quality?

No-one can as yet know for sure how the programme managers and their managers chose to tackle this iron-triangle problem. But what we can say for sure is that the problem is no longer a purely technical one. Crucially, the moment we bring humans—most project managers count as humans—into the equation, a previously complicated problem has now become complex …

The problem context (environment), having transitioned into the complex domain, now demands a system capable of dealing with that complexity. The fact that two 737 Max aircraft fell out of the sky and killed 346 people tells us that the system did not possess that requisite level of capability. Nothing had changed about the project management iron-triangle—i.e. this knowledge was still relevant—and so, almost inevitably, given the average tenure of the project management community, important (tacit) knowledge had been lost from the management system. Including, it thus far seems, the ‘first principles’ knowledge essential to the management of any complex system.

In the same way that it is very possible to push a technical system across a boundary (from Simple-to-Complicated, for example, or Complicated-to-Complex), it is also very possible that the business and social systems surrounding that technical system can also see similar boundaries being crossed. The premise for building the Complexity Landscape Model was to help organizations to know where and when such boundaries do get crossed. And the reason that premise arose in the first place was the observation that almost none of the world’s enterprises or those tasked with leading them had the first clue that such boundaries existed, never mind that they might be being crossed (Figure 10).

Figure 10.

Complexity landscape model: Boeing 737, 300–Max series.


5. Conclusions

In order to establish the validity or otherwise of the prevailing knowledge existing within a given domain, it is incumbent upon those responsible for the effective functioning of systems to understand:

  • What the domain S-curve pulse-rate is, and whereabouts on the S-curve cycle they are;

  • Where in the CLM the system is operating and, if complexity is involved, what the ‘first principles’ are from which the overall system behavior emerges;

  • Whether the intangible and emotion-related human factors regarding knowledge are consistent with the retention of requisite levels of knowledge.

Experience working with large numbers of organizations over the course of the last 20 years has revealed that very few are able to answer these questions. For the most part this is due to widespread ignorance regarding the Soviet-sparked research to reveal the ‘DNA’ of innovation.

This DNA reveals:

  • The future of successful system designs is highly predictable, at least in terms of the directions in which systems will evolve.

  • If domain (S-curve) pulse-rates can be determined, the ability to foresee knowledge redundancy cycles is increased to the point of meaningful science.

  • A multitude of Trend Patterns (like the Dynamization and Resilient Design patterns used as illustrations in Section 2) assist knowledge managers to determine what the ‘next’ knowledge will be.

  • Domains tend to ‘overshoot’ on the knowledge generated during the advance through the S-curve. Much of this overshoot comes as the result of working on optimization tasks. When S-curve discontinuities occur, all of that optimization knowledge becomes redundant.

  • During some S-curve shifts (e.g. advances on the Dynamization Trend) significant proportions of the previously relevant knowledge become redundant.

  • During other shifts (e.g. the Resilient Design Trend) some of the knowledge from the previous S-curve becomes embedded within the new.

  • When the prevailing environment is or transitions to being complex (as when human beings become involved directly in a system), the only knowledge that is meaningful is that which pertains to the first principles upon which system behavior emerges.

  • This ‘first principle’ knowledge falls into two main categories: one relating to function, and one related to strategies for resolving contradictions.

The ongoing Boeing 737 Max story demonstrates that the technical aspects of the two lost-aircraft problems were complicated rather than complex, and that the appropriate (contradiction-solving) knowledge was in all probability brought to bear to provide appropriately resilient technical solutions. In the case of the surrounding ‘business’ issues, however—challenges that were inherently complex—the requisite management knowledge was either lost or missing. The required first-principle-based knowledge (e.g. the project management iron-triangle) had not pulsed since the initial launch of the 737 in 1968, but between 1968 and the present day several generations of managers had followed one another and, in so doing, it appears likely that a critical mass of tacit project management knowledge was lost.

The advent of the Internet is frequently described as the engine that has caused the generation of new knowledge to expand exponentially. If the data scientists are to be believed, knowledge now doubles in under a year. What the Innovation DNA research has revealed, however, is that the large majority of this apparent new knowledge is merely noise (Figure 11).

Figure 11.

Knowledge redundancy in mission critical systems.

First-principles knowledge is remarkably stable. The laws of physics are essentially just that: laws. As mankind’s understanding of these laws evolves, the ‘first principles’ will evolve too, but their half-life is, generally speaking, measurable in decades or centuries. More subtly, the Soviet innovation-DNA research has also revealed the relative stability of knowledge pertaining to the emergence and resolution of contradictions. Innovation—the successful transition from one S-curve to another—is in effect driven by this contradiction story. Innovation, to all intents and purposes, is contradiction solving. Knowledge pertaining to how contradictions are solved thus becomes one of the critical factors in the knowledge management story. If organizations are not managing the contradictions in their business, they are staking their future on a 98% likelihood of failure.

References

  1. Wolpert L. The Unnatural Nature of Science. London: Faber and Faber; 1992
  2. Johnson S. Where Good Ideas Come from: The Natural History of Innovation. London: Allen Lane; 2010
  3. Altshuller GS. Creativity As an Exact Science: The Theory of the Solution of Inventive Problems (English translation). London: Gordon and Breach; 1984
  4. Hurst DK. Crisis and Renewal: Meeting the Challenge of Organizational Change. Boston: Harvard Business School Press; 1995
  5. Snowden DJ, Boone ME. A Leader’s Framework for Decision Making. Harvard Business Review; 2007
  6. Mann DL. If All You Have Is a Hammer: TRIZ and Complexity. Paper presented at ETRIA TRIZ Future Conference, Marrakech; 2019
  7. Lee KF. AI Superpowers: China, Silicon Valley and the New World Order. Boston: Houghton Mifflin Harcourt; 2019
  8. Teel R. Army Explores Futuristic Uniform for SOCOM. US Army Research, Development and Engineering Command Public Affairs Bulletin. 28; 2013
  9. Mann DL. Hands-on Systematic Innovation. 2nd ed. UK: IFR Press; 2009
  10. Mann DL. Hands-on Systematic Innovation for Business and Management. UK: IFR Press; 2007
  11. Mann DL. Systematic (Software) Innovation. UK: IFR Press; 2008
  12. Taleb NN. Antifragile: Things That Gain from Disorder. London: Penguin Random House; 2012
  13. Weintraub P. The Doctor Who Drank Infectious Broth, Gave Himself an Ulcer, and Solved a Medical Mystery. Waukesha, WI: Discover Magazine; 2010
  14. Schulz K. Being Wrong: Adventures in the Margin of Error. New York: Ecco Press; 2010
  15. Ashby WR. An Introduction to Cybernetics. London: Chapman and Hall; 1957. Available at: http://pespmc1.vub.ac.be/books/IntroCyb.pdf
