Open access peer-reviewed chapter

Introduction to Complex Systems, Sustainability and Innovation

Written By

Ciza Thomas, Rendhir R. Prasad and Minu Mathew

Submitted: 17 October 2016 Reviewed: 02 November 2016 Published: 14 December 2016

DOI: 10.5772/66453

From the Edited Volume

Complex Systems, Sustainability and Innovation

Edited by Ciza Thomas

Abstract

Technological innovations have always proved the impossible possible. Humans have obliterated barriers and set records with astounding regularity. However, issues of complexity and sustainability are springing up in this context, issues we have long ignored. Today, in every walk of life, we encounter complex systems, whether it is the Internet, communication systems, electrical power grids, or the financial markets. Because of their unpredictable behavior, any creative change in a complex system poses a threat of systemic risk. This is because an innovation always introduces something new, a change, possibly to solve an existing problem, and its effect is nonlinear. The failure to predict the future states of a system that this nonlinearity entails makes the system unsustainable. Any development must therefore be sustainable, meeting the needs of people today without destroying the ability of future generations to meet their own. This chapter, which studies systems that are complex due to the intricateness of their connectivity, gives insights into how such systems emerge and into the nonlinear cause-and-effect patterns they follow, effectively paving the way for sustainable innovation.

Keywords

  • chaos
  • complex systems
  • complex networks
  • complexity theory
  • information systems security
  • sustainable networks

1. Introduction

Today, in every walk of life, we encounter complex networks: the Internet, a global system of interconnected computer networks; communication systems, collections of individual communications networks, transmission systems, relay stations, tributary stations, and data terminal equipment usually capable of interconnection and interoperation to form an integrated whole; electrical power grids, interconnected networks for delivering electricity from suppliers to consumers; financial markets, collections of traders, firms, banks, and financial exchanges; food webs, systems of interlocking and interdependent food chains; metabolic networks, the complete sets of metabolic and physical processes that determine the physiological and biochemical properties of a cell; and social networks, networks of social interactions and personal relationships. The interacting units and intricate connections make these systems complex. Though these complex systems perform diverse functions, studies show that there are similarities in their structure.

Complexity has been defined in various ways in different fields. Moses [1] defines a complex system thus: “A system is complex when it is composed of many parts that interconnect in intricate ways.” According to this definition, the number and types of parts of a system decide its complexity. The system becomes complex when these parts are connected in a nonregular way. Information about the nature of the interconnections is usually insufficient, and this makes modeling difficult.

Rechtin and Maier [2] define a complex system as a set of different elements so connected or related as to perform a unique function not performable by the elements alone. A complex system acts as a whole that is greater than its parts: complete knowledge of the parts is not sufficient to predict the working of the system as a whole, and the parts behave differently in isolation than they do when working together. The traditional scientific method of studying a system is to dissect it into its parts, model them into a formal system, study them individually, and then combine them to obtain information about the whole. This works on the assumption that the parts combined together correctly give complete information about the system as a whole and that the behavior of the system can be predicted completely. In the case of complex systems, however, it is not possible to obtain complete information about the system just by combining parts, because information is generated not in the parts per se but in the intricate connections between them. The component parts of a complex system are themselves more or less complex entities linked with one another and organized within an ordered structure of mutual interaction. The web of linkages need not be visible; instead, it may consist of cause-and-effect relations that originate by way of communication. Hence, studying the behavior of complex systems is, in effect, the study of the system as a network of individual parts and of the emergence of complex networks from the interconnections between those parts.

Strogatz [3] describes the complications in studying complex systems. Complex systems show structural complexity because the nature of connection is intricate, with connections changing dynamically over time, as in the World Wide Web, where pages and links are created and deleted all the time, making the evolution of the network very dynamic and hence complex. The links are not homogeneous; they show considerable diversity in terms of weights, direction, and sign. The nodes themselves can be nonlinear dynamic systems, since their state varies in time, and they can be of diverse types, as in biochemical networks. Moreover, the nodes affect each other, and this interdependency results in networked risks: a small disturbance caused by one component can result in a catastrophic failure of the system. The global financial meltdown of 2008 can be cited as an example of this type of cascading failure. In today's highly interconnected world, the coupling of different kinds of systems makes the effects of failure uncontrollable on a global scale. Naturally occurring complex systems tend to evolve automatically, and those networks are quite resistant to small initial variations causing a tremendous effect on the entire system.

Helbing [4] argues that the globally interdependent systems produced by intricate networking are prone to cascading global-level failures even in the absence of external shocks. He discusses various anecdotes of failures, calls for a new approach, Global Systems Science, for the study of these phenomena, and gives a detailed description of the drivers and examples of systemic instabilities.

Networks have interconnected distributed, heterogeneous systems and offer a wide variety of applications that were never dreamt of. However, issues of complexity and sustainability are springing up in this context, issues we have long ignored. Any creative change in a complex system poses a threat of systemic risk. The exploitation of these technologies has of late brought us to the verge of a complexity crisis. Because of their unpredictable behavior, these networks are vulnerable to cascades of overload failures, which can in turn cause the entire network, or a substantial part of it, to collapse. It is the artificial, man-made complex systems like the Internet that get affected by cascading failures capable of disabling the network almost entirely. Even though humans have obliterated barriers and set records with astounding regularity, the irony is that an innovation always introduces something new, a change, possibly to solve an existing problem, and its effect is nonlinear. The failure to predict the future states of the system that this nonlinearity entails makes any innovation vulnerable; there is thus a high possibility that an innovation will make a system unsustainable. Many examples, such as the invention of plastics and the usage of fossil fuels, can be cited, especially from global climate change studies. A disturbing question arises in this context: is there a contradiction between innovation and sustainability? Innovation is, however, the only way to progress, and hence, in the case of complex systems, innovation has to happen against the backdrop of sustainability. Any development must be sustainable, meeting the needs of people today without destroying the ability of future generations to meet their own. The study of complex systems, their stability, and the interdependence of their components is therefore inevitable today and is discussed in detail in this chapter. The remainder of this chapter studies systems that are complex due to the intricateness of their connectivity and provides insights into their ways of emergence and the nonlinear cause-and-effect patterns that complex systems follow, effectively paving the way for sustainable innovation.

This chapter is organized as follows. Section 2 gives an introduction to reductionism and discusses the insufficiency of the reductive approach in addressing the behavior of chaotic systems. Emergent behavior and order in chaotic systems are addressed in Section 3. Section 4 discusses the relation between chaotic systems and the structure of complex systems. Section 5 looks at the Internet as a typical example of a complex system, with information system security as the main focus; self-organized criticality, interdependence, and their effects are also discussed in that section. Section 6 concludes the chapter.

2. Reductive approach

Nature and all its phenomena are innately mathematical: that was the confidence held for a long time. All of nature's intricate and beautiful phenomena were simplified into mathematical abstractions and discussed extensively. This notion got a firmer foothold when Sir Isaac Newton discovered a common law governing both an apple falling from a tree and the splendid motion of the planets. Newtonian theory came with all the intelligibility of an acceptable explanation. There was no notion of a complex system, as all observable phenomena were represented by mathematical equations, more precisely, differential equations. Newtonian physics took center stage and paved the way for reductionism. Reductionism is the practice of simplifying or minimizing a complex issue or condition; it holds that every complex phenomenon can be explained by analyzing the most basic and simple components present during the phenomenon.

Reductionism is the first footstep of modern science. The reductionist philosophy holds that a complex system is nothing but the sum of its parts and that a complex phenomenon can be explained completely in terms of the interactions between more fundamental phenomena. To put it simply, analysis is done by breaking the larger system down into pieces or individual parts and determining the connections between the parts. This procedure rests on the mathematical concepts of linearity and additivity: if we add the small individual parts back together, the complexity increases in a linear manner, giving back the same original complex system.

Furthermore, reductionism infers that there exists a point-to-point connection from the starting state to the mature state of a complex system: if we know the starting state, we have 100 percent predictability of the mature state. If we know the rule that governs a system's transition from state 1 to state 2, we can apply the same rule and calculate the parameter changes from state 2 to state 3. This is the concept of extrapolation in mathematics. It removes the need to go stage by stage to find the mature state of a system: by applying the same rule for as many iterations as needed, we can find the mature state just by knowing the starting state of the system, and vice versa [5].
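
As a minimal sketch of this extrapolation idea (an illustrative example, not from the chapter; the rule and constants are hypothetical), consider the linear rule x_{n+1} = a·x_n + b. Because the rule is linear, the state after n steps has a closed form, so the mature state can be computed directly from the starting state without stepping through the intermediate states:

```python
# Linear rule: x_{n+1} = a * x_n + b.
# For a linear system, step-by-step iteration and direct extrapolation agree.

def iterate(x0, a, b, n):
    """Go stage by stage, applying the rule n times."""
    x = x0
    for _ in range(n):
        x = a * x + b
    return x

def extrapolate(x0, a, b, n):
    """Jump straight to state n using the closed form
    x_n = a**n * x0 + b * (a**n - 1) / (a - 1)   (for a != 1)."""
    return a**n * x0 + b * (a**n - 1) / (a - 1)

x0, a, b, n = 1.0, 0.9, 0.5, 50
print(iterate(x0, a, b, n))      # the two agree
print(extrapolate(x0, a, b, n))  # (up to floating-point rounding)
```

For the chaotic systems discussed below, no such closed form exists, and the step-by-step route is the only one available.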

With all these rules of reductionism, and the definition of procedures in mathematical terms, taken seriously, the Universe becomes a clockwork machine. This clockwork Universe, made famous by Laplace and associated with his demon, enjoyed an established set of rules. According to Laplace, it is possible both to calculate the entire history of the Universe and to predict its entire future, given sufficient information about its current state together with all of its fundamental and unchanging laws. However, scientists have always scrambled to find mathematical models for natural phenomena such as the weather, the tides, clouds, and the human body.

The last century saw physicists investigating the dynamics of atoms and everything smaller, as if matter and all its wonderful properties could be explained by protons and neutrons and, going deeper, by quarks and gluons [6]. Subsequent research in the medical sciences devoted time to understanding and developing tools for comprehending the workings of genes. The reductionist mindset has pervaded the field of molecular biology for half a century: since biological systems are composed solely of atoms and molecules, it is assumed that it should be possible to explain them using the physicochemical properties of their individual components, down to the atomic level, unless there is the influence of “alien” or “spiritual” forces.

For science, measuring the properties of minute particles was obviously the next step. However, physicists encountered variability, or noise as it is known in the reductive world, in those measurements of fundamental particles. Variability was treated as the discrepancy from the actual, true measure. This noise represented system error, something to be avoided in order to get the true result, and the way to reduce it was to become more and more reductive. Reductionists argued that the closer we look at a system, the closer the obtained result will be to the actual one.

The human body is a scale-free system. Consider the tree-like bifurcating structure of the nervous system, the circulatory system, or the pulmonary system. If genes specified every bifurcation in a human body, there would not be enough genome to code for this entire scale-free structure and all the activities that each cell is capable of. The neurons, arteries, and dendrites have a bifurcating structure that cannot be coded for part by part, with each individual part given a specific function; there are too few genes for a point-to-point connection to exist between the genes and the various things a human is capable of. Not all the information is coded in a single cell; it is coded in the interactions between different cells. Biological problems cannot be solved in a reductive manner [5]. We know the qualities of oxygen atoms and the qualities of carbon atoms; by reductive philosophy, we should be able to deduce the qualities of carbon dioxide. Actually, we cannot: we are familiar with the qualities of carbon dioxide only because we have observed them empirically. This does not accord with the additive philosophy. The Brownian motion of molecules cannot be explained by reductionism either. The reductive philosophy does not account for a role of chance; with randomness in a system, we cannot have absolute knowledge of its mature state from its initial state. In the reductive world, anything without an established absolute certainty becomes invalid.

3. Chaos in nature

Today, advances in both physics and biology have exploded the myths laid out by the reductionist approach. How inanimate and animate things work, and how they are made, cannot be explained simply by breaking them down into their components. Scientific developments clearly state that the specificity of a complex biological activity does not arise from the specificity of the individual molecules involved. Experiments have shown time and again that reality does not obey classical deterministic rules. General relativity is a good conceptual tool that describes many phenomena very accurately on fairly small scales, but, being a deterministic rule, it is often argued to be a limitation on the way to a unified theory. Contrary to Laplace's claim, the mathematics begins to fall apart, as indicated by the infinities and time anomalies (what they call variabilities) that pop up, when it is (mis)applied to “solving” the state of the entire Universe [7]. When some very fundamental questions about life and the Universe come to the fore, general relativity remains unable to answer them. For as long as science has inquired about the laws of nature, it has suffered ignorance about the sudden changes and disorder in the atmosphere, the turbulence of the wind and the sea, the fluctuations in wildlife and human populations, and the variations of signals from the brain and heart. The simplest and most interesting things could not be captured in a simple reductive way. The absence of a satisfactory answer led to the emergence of the concept of chaos.

“As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.”

– Albert Einstein

3.1. Chaos and complexity

In the scientific world, we see chaos everywhere. A rising column of smoke breaks into wild swirls. A flag snaps back and forth in the wind. A dripping faucet goes from a steady pattern to a random one. Similar is the fluttering of leaves in the wind, the trajectory of a double-rod pendulum, or the ball in a pinball machine. Chaos appears in the behavior of the weather, of an airplane in flight, of road traffic, of oil flowing in underground pipes, in population changes, and in the swarm formation of bees. Chaotic systems are not reductive. If a cloud does not rain, we cannot examine the cloud's component parts and come up with a solution. If we divide a complex system into its pieces and add them up again, the result can be another system entirely, as the individual parts may add up differently. Knowing the initial state of a complex system does not give 100 percent predictability of the final state. The concepts of extrapolation, linearity, and additivity do not apply to a complex system; the only way to know its mature state is to go step by step. There are no patterns that the system follows over and over again. Chaos breaks across the lines that separate scientific disciplines. That realization has begun to change the way neurologists look at brain activity, the way business executives make decisions about insurance, the way astronomers look at the solar system, the way political theorists talk about the stresses leading to armed conflict, and the way economists look at organizational structure. The last 20 years of science saw an increase in specialization in the hunt for order in all of nature; dramatically, that hunt for specialization and order has been reversed by chaos. As Gleick [7] puts it in his book Chaos: Making a New Science, “Where chaos begins, classical science stops.” Chaos is a science of process rather than state, of becoming rather than being. Being a science of the global nature of systems, it has brought together thinkers from fields that had been widely separated.

Traditionally, when physicists saw complex results, they looked for complex causes and tried to model the systems with complex functions and interactions. When they encountered a seemingly random relationship between a system's input and its output, they simulated it by introducing artificial noise or error. They were ignorant of the fact that variability is intrinsic to the system and that quite simple mathematical equations can model systems every bit as violent as a thunderstorm. Senge [8] defines complex systems based on their cause-and-effect behavior: “A system presents dynamic complexity when cause and effect are subtle, over time.” The cause-and-effect pattern is not static over time; it changes along with the emergence of the system. A local effect may give no clue about the global effect, so interventions produce complex, unpredictable effects. Tiny differences in input can quickly escalate into overwhelming differences in output. This phenomenon of chaotic systems being highly sensitive to initial conditions is termed the “butterfly effect”: the notion that a butterfly flapping its wings in Peking could cause a thunderstorm in New York a month later [7].
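
The logistic map is a standard textbook illustration of this sensitivity (a sketch; the starting values are arbitrary): a one-line deterministic rule whose outputs, from two inputs differing by one part in a million, soon bear no resemblance to each other.

```python
# Logistic map x_{n+1} = r * x * (1 - x): chaotic for r = 4.
r = 4.0

def orbit(x0, n):
    """Iterate the map n times, keeping the whole trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = orbit(0.200000, 50)
b = orbit(0.200001, 50)   # input differs by one part in a million
for n in (0, 10, 25, 50):
    print(n, abs(a[n] - b[n]))
# The gap grows roughly exponentially until it is as large as the orbits
# themselves: tiny input differences escalate into overwhelming output ones.
```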

3.2. The butterfly effect

The butterfly effect was first observed by Edward Lorenz, a research meteorologist at the Massachusetts Institute of Technology. In 1960, Lorenz created a weather model on his computer [9]. It consisted of an extensive array of complex formulas, but the computer had neither the speed nor the memory to manage a realistic simulation of the earth's atmosphere and oceans. Yet the program behaved quite like the real weather: it never seemed to repeat a sequence. One day Lorenz had let the program run on certain parameters to generate a certain weather pattern, and he wanted to take a better look at the outcome. Instead of starting the whole run over, he started midway through, typing in the numbers from the earlier run as the initial conditions. The new run should have exactly duplicated the old, since he had inputted the exact results from the preceding run and the program had not changed. Yet the weather diverged so rapidly from the previous run that all resemblance to it was soon gone. It was not a malfunction of the system but an infinitesimal difference in the input, scaled exponentially into an overwhelming difference in the output. In the computer's memory, six decimal places were stored: 0.506127. On the printout, to save space, just three appeared: 0.506. Lorenz had entered the shorter, rounded-off numbers, and that puny little inaccuracy amplified until it swung the entire system out of whack. Lorenz's weather toy implemented the classical program: it used a purely deterministic system of equations, and the small errors proved catastrophic. Given a particular starting point, the weather would unfold exactly the same way each time; given a slightly different starting point, it would unfold in a slightly different way, and that difference continues to grow in a scale-free manner until the resulting weather pattern bears no similarity to the original. Now the question arises: what if we gave the Lorenz weather machine inputs accurate to six decimal places? Remember, Heisenberg's uncertainty principle prohibits perfectly accurate measurement, and an infinitesimal change in the input values can produce a totally different outcome. The initial situation of a complex system therefore cannot be accurately determined, and the evolution of a complex system cannot be accurately predicted. This shattered the classical doctrine that, given an approximate knowledge of a system's initial conditions and an understanding of natural law, one can calculate the approximate behavior of the system [7]. As chaos theory progressed, the research community saw journals that compared the strange dynamics of a ball bouncing on a table side by side with articles on quantum physics. The simplest systems now seemed unpredictable.
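
A rough re-creation of the rounding experiment can be made with the three-variable Lorenz system from [9] (a sketch, not Lorenz's original twelve-variable weather model; the step size and the unperturbed coordinates are arbitrary choices): integrate the same deterministic equations twice, once from 0.506127 and once from the rounded 0.506, and watch the trajectories part company.

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(s):
    """Right-hand side of the Lorenz-63 equations."""
    x, y, z = s
    return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])

def integrate(s0, t_end, dt=0.01):
    """Fixed-step 4th-order Runge-Kutta; purely deterministic."""
    s = np.array(s0, dtype=float)
    out = [s.copy()]
    for _ in range(int(t_end / dt)):
        k1 = lorenz(s)
        k2 = lorenz(s + 0.5 * dt * k1)
        k3 = lorenz(s + 0.5 * dt * k2)
        k4 = lorenz(s + dt * k3)
        s = s + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(s.copy())
    return np.array(out)

full = integrate([1.0, 1.0, 0.506127], 30.0)
rounded = integrate([1.0, 1.0, 0.506], 30.0)   # same rule, rounded input
gap = np.linalg.norm(full - rounded, axis=1)
for step in (0, 500, 1500, 3000):               # steps of 0.01 time units
    print(step * 0.01, gap[step])
```

By the end of the run, the two trajectories are about as far apart as two unrelated points on the attractor: all resemblance is gone, exactly as on Lorenz's printout.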

3.3. Patterns in chaos

The second law of thermodynamics can be stated as follows: “In an isolated system, entropy never decreases.” It is just another way of saying that all physical systems tend to move toward their most probable states, which shows up as increased entropy. Viewed in that context, entropy can be thought of as a measure of disorder or randomness. The Universe is governed by entropy, constantly changing and growing toward greater and greater disorder. Yet chaos is not simply disorder: chaos explores the transitions between order and disorder, which often occur in surprising ways. An order arises from the ever-growing disorder of the Universe, chaos and order together. Let us take a close look at nature in Figure 1.

Figure 1.

Pattern of river networks, lightning bolts, trees, snowflakes, metal crystals, and pinecones.

River networks, lightning bolts, and trees have a self-similar pattern. Snowflakes, metal crystals, and pinecones have similar structural patterns. In all these structures, the same pattern is repeated; they are said to have fractal-like features. A fractal, by definition, is a curve or geometrical figure, each part of which has the same statistical character as the whole. Fractal-like patterns are found throughout nature: in the formation of ice crystals and of veins in the human body, in leaves, in water drops, in air bubbles, and so on. It has often been said that nature is an excellent designer, and fractals can be thought of as the design principle nature follows when putting things together. Fractals are the best existing mathematical descriptions of many natural forms, such as coastlines, mountains, or parts of living organisms. They are infinitely complex patterns that are self-similar across different scales and infinitely detailed.
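
The scaling behind this definition can be checked numerically. The sketch below (an illustrative example, not from the chapter) box-counts the middle-thirds Cantor set: the number of boxes needed to cover it grows as a power law of the shrinking box size, and the exponent of that power law is the fractal dimension.

```python
import numpy as np

def cantor_points(depth):
    """Left endpoints of the middle-thirds Cantor set after `depth` removals."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3
            nxt += [(a, a + third), (b - third, b)]   # keep outer thirds
        intervals = nxt
    return np.array([a for a, _ in intervals])

def box_count(points, eps):
    """Number of boxes of size eps needed to cover the points."""
    return len(set(np.floor(points / eps).astype(int)))

pts = cantor_points(12)
for k in range(2, 8):
    eps = 3.0 ** -k
    n = box_count(pts, eps)
    print(f"eps=3^-{k}  boxes={n}  slope={np.log(n) / np.log(1 / eps):.3f}")
# The slope settles near log 2 / log 3 ~ 0.631, the Cantor set's fractal
# dimension: a non-integer, unlike anything in Euclid's geometry.
```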

Most physical systems in nature, and many human artifacts, are not the regular geometric shapes of the standard geometry derived from Euclid. Fractal geometry offers almost unlimited ways of describing and measuring these natural phenomena. Scientists discovered that the basic architecture of a chromosome is tree-like: every chromosome consists of many “mini-chromosomes” and can therefore be treated as a fractal. This self-similarity has also been found in DNA sequences. Even the rhythm of our heart is a fractal pattern as a function of time. Many image compression schemes use fractal algorithms to compress computer graphics files to less than a quarter of their original size. Recent studies show that the Universe and our brain cells follow very similar patterns, as shown in Figure 2.

Figure 2.

(a) Pattern of the brain cell. (b) Pattern of the Universe.

Figure 2(a) is an image only micrometers wide, showing three neuron cells and their connections in the brain of a mouse, whereas Figure 2(b) is a computer-simulated image billions of light-years across, showing how the Universe grew and evolved. Together, they suggest the surprisingly similar patterns found in vastly different natural phenomena.

Fractals are related to chaos because they are complex systems with definite properties. Chaotic systems and fractals are both iterated functions: each iteration takes the state of the previous iteration as input to produce the next state. Although fractals show beautiful patterns, we cannot predict the value at a specific iteration other than by calculating all the preceding iterations. So, in a sense, fractals are just beautiful chaos.
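
The escape-time computation behind pictures of the Mandelbrot set illustrates this point (a minimal sketch; the sample points are arbitrary): the only way to learn the fate of iteration n is to compute every iteration before it.

```python
def escape_time(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0; return the step at which |z| first
    exceeds 2, or max_iter if it stays bounded.  There is no shortcut: the
    value at iteration n requires computing all the earlier iterations."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# Two nearby points, one inside the set and one just outside, end differently.
print(escape_time(0.25))   # stays bounded: prints the cap, 100
print(escape_time(0.26))   # escapes after roughly 30 iterations
```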

The process of making a fractal is a type of self-ordering process. These self-organizing processes are very local in nature but lead to highly organized structures on very large scales; an order arises from the ever-growing disorder of the Universe. We cannot see the order in chaos if we look from moment to moment, nor if we examine individual behaviors. However, if we stand back far enough, if we wait over time, scale, or distance, we observe the order that is in chaos: a pattern emerges from the otherwise disordered system.

3.3.1. The strange attractor

In a deterministic linear system, variability is not significant; there is a definite answer, a mathematical proof and solution for its characteristics and problems. A perfectly periodic system has a regular attractor. The damped pendulum has two invariant points, one at minimum height and the other at maximum height; owing to the dissipation of energy, the point of minimum height behaves as a regular attractor, and the system tends to evolve toward it. If the input force to a structured linear system is increased, the system undergoes a transition to chaos: the periodically repeating pattern previously present breaks apart and gives rise to a pattern that never repeats, and the system loses its predictability. A system in chaos is totally unpredictable from moment to moment. By converting to a multidimensional state space on high-speed computers, one can track many variables at once and plot the movement of a system in chaos. Observed from moment to moment, the system is in total disorder, and prediction of the next state is impossible. Over a period of time, however, we come to realize that the system conforms to a boundary: there is an inherent shape that it never violates, and it never moves out of that defined boundary. Plots based on nonlinear equations may appear random or chaotic, but over many iterations they evolve to be simple and tend to produce patterns, called strange attractors, that keep to a boundary. These strange attractors need not always be symmetrical.

The behavior of a strange attractor can be seen in the first chaotic system discovered by Edward Lorenz, the Lorenzian (or chaotic) waterwheel. A chaotic waterwheel is just like a normal waterwheel except that its buckets leak. The entire waterwheel is mounted under a water spout. Water pours in from the top at a steady rate and gives the system energy, while water leaks out of each bucket at a steady rate and removes energy from the system. At low speeds, water trickles into the top bucket and immediately trickles out through the hole in the bottom. With an increase in the inflow of water, the waterwheel begins to revolve as the buckets fill up faster than they can empty. The heavier buckets containing more water let the water out as they descend and, once empty, ascend on the other side to be refilled. The system is in a steady state, the wheel spinning at a fairly constant rate. On increasing the input flow of water further, increasing the system's entropy, the system transitions into chaotic behavior. Rather than spinning in one direction at a constant speed, the wheel speeds up, slows down, changes direction, and oscillates back and forth between combinations of behaviors in an unpredictable manner. Yet the entire system stays within certain bounds. Because the system never exactly repeats itself, the trajectory of a particular bucket never intersects itself; instead, it loops around and around forever in a strange pattern, seemingly attracted toward a point in space. The system never reaches this ideal attraction point but is constantly pulled toward it. An order emerges in the chaotic system, as shown in Figure 3. Hence, simply put, a chaotic system is a deterministic yet unpredictable system.
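
The same bounded-but-never-repeating behavior can be observed numerically. Reusing the integrate() helper from the Lorenz sketch in Section 3.2 (again an illustrative sketch, with arbitrary run lengths), the trajectory stays inside a fixed box of state space yet never returns exactly to an earlier point:

```python
import numpy as np

# Assumes integrate() from the earlier Lorenz sketch is in scope.
traj = integrate([1.0, 1.0, 1.0], 100.0)
traj = traj[1000:]                      # discard the initial transient

print("bounds per axis (min / max):")
print(traj.min(axis=0), traj.max(axis=0))   # a finite box: the attractor

# No exact recurrence: skipping the first few hundred steps so nearby points
# do not count, the trajectory never comes back to its starting point.
d = np.linalg.norm(traj[200:] - traj[0], axis=1)
print("closest return to the initial point:", d.min())   # strictly positive
```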

Figure 3.

Chaotic patterns followed by the center of mass of the bucket in Lorenzian waterwheel with x, y, and z being the coordinates of the center of mass in (a) x-y plane, (b) x-z plane and (c) y-z plane [10].

4. Complex systems

Complex systems exhibit chaotic behavior that is characterized by extreme sensitivity to initial conditions, fractal geometry, and self-organized criticality [11]. The concept of sensitive dependence on initial conditions has been popularized in the “butterfly effect”: minute differences in the initial conditions for such a system result in extremely different outcomes.

Fractal geometry refers to self-similarity, which implies that an object looks the same at any scale. Mathematically, self-similarity requires a power law distribution of representative objects. In other words, as the size of an object decreases, there is a corresponding power law increase in the number of objects of that size [12].

Self-organized criticality (SOC) is the concept that a dynamic system will move toward a critical, an emergent, or a metastable state by natural processes [13]. A metastable system is in a delicate state of equilibrium, where even a small change in conditions may precipitate a major change in the system. The chaotic nature of complex systems and their sensitivity to initial conditions make them prone to sit in a metastable state rather than in an equilibrium state.

The knowledge of the emergent properties of chaotic systems, and of the similarities across various natural systems, opens up new possibilities. First, it boosts the effort to explore the secrets of nature, in the sense that these conspicuous similarities encourage thinking about common mechanisms connecting all these systems. Second, it gives us a tool for studying chaotic systems. Chaotic systems are highly complex owing to the randomness and nonlinearity in them; they are usually defined as deterministic systems whose trajectories diverge exponentially over time, involving few parameters and the dynamics of their values. Complex systems are essentially chaotic systems, and hence all these properties are relevant to complex systems too. However, as complex systems comprise many interacting parts, they are ever-growing dynamic systems that constantly interact with their environment. The scientific study of chaos is thus the root of the modern study of complex systems [14]. The big question is how we can study complex systems based on the insights obtained from the study of chaotic systems.

Complex systems manifest themselves as complex networks, because their complexity stems from the intricacies of their interconnections. Hence, the study of complex systems is the statistical study of network matrices, that is, of graphs representing complex networks. Networks can be represented as regular or nonregular graphs. Regular graphs have regular topologies, and algorithms from graph theory are used to analyze their properties. However, many complex systems, for example, the World Wide Web or social networks, do not follow a regular graph structure; there is randomness associated with the existence of a connection between two nodes. The random graph proposed by Erdös and Rényi [15] tries to capture this randomness. Strogatz [3] observes that complex networks exhibit the small-world effect: when a regular lattice is rewired with some random connections, short paths are produced between any pair of nodes, and the network remains far more highly clustered than a random graph. Many complex systems, such as social networks, show this property prominently in the way people are connected with each other.
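
The small-world effect is easy to reproduce (a sketch using the networkx library; the network sizes are arbitrary): a little random rewiring of a ring lattice collapses the average path length while leaving the clustering almost intact.

```python
import networkx as nx

n, k = 1000, 10                       # nodes; neighbors per node in the ring
for p in (0.0, 0.01, 0.1, 1.0):       # rewiring probability
    G = nx.watts_strogatz_graph(n, k, p, seed=42)
    print(f"p={p:<5} clustering={nx.average_clustering(G):.3f} "
          f"avg path length={nx.average_shortest_path_length(G):.2f}")
# Already at p ~ 0.01 the path length drops close to that of a random graph,
# while the clustering stays near the regular lattice's: the small world.
```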

The quest for the underlying laws in the emergence of complex systems is a very important one. Knowing that a small change during emergence plays a big role in shaping the final form, the question arises whether any simple law resulted in the systems that exist now. This enquiry starts with the analysis of complex networks and looks for similarities among them. Barabási and Albert [16] observed that the degree distribution among nodes is far from uniform: some nodes are much more highly connected than others. If pk denotes the fraction of nodes that have k links, where k is the degree and pk is the degree distribution, the random graph model predicts a bell-shaped Poisson distribution for pk. In real networks, however, pk is highly skewed and decays much more slowly than a Poisson distribution. For instance, the distribution decays as a power law, pk ~ k^(−γ), for the Internet backbone, metabolic reaction networks, the telephone call graph, and the World Wide Web. These networks are called “scale-free” networks by analogy with fractals, phase transitions, and other situations where power laws arise and no single characteristic scale can be defined. Though the power law distribution is not universal, it is quite common among real networks, and this result is highly significant where the laws of emergence of these networks are concerned. Turcotte [11] has shown that it arises from two simple rules operating during emergence: growth of the network and preferential attachment. If the network expands continuously by the addition of new vertices, and the new vertices attach preferentially to sites that are already well connected, a power law degree distribution results. In complex systems, improbable events are orders of magnitude more likely than events following a Gaussian distribution would suggest. Complex systems exhibit non-Gaussian outputs, since estimates and predictions of a system's future behavior, particularly Gaussian estimates formed from observations collected over short time periods, give an incorrect picture of large-scale fluctuations [17].
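
Growth with preferential attachment can be simulated directly (a sketch using the Barabási-Albert generator in networkx; the size and the sampled degrees are arbitrary), and the resulting degree distribution shows the slow power-law decay described above:

```python
import collections
import networkx as nx

# Growth + preferential attachment: each new node brings m = 3 edges.
G = nx.barabasi_albert_graph(n=100_000, m=3, seed=1)

counts = collections.Counter(d for _, d in G.degree())
total = G.number_of_nodes()
for k in (3, 10, 30, 100, 300):
    print(f"k={k:<4} p_k={counts.get(k, 0) / total:.2e}")
# p_k falls roughly as k^-3: orders of magnitude more slowly than the
# bell-shaped Poisson decay of an Erdös-Rényi random graph of equal size,
# so a few heavily connected hubs coexist with a mass of poorly connected nodes.
```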

5. Internet as a complex system

Since the Internet became widely available in the early nineties, it has enabled new services, transformed access to information, improved economic efficiency, and facilitated greater collaboration among individuals, businesses, and government. The Internet and its internetworking infrastructures underpin every aspect of our lives. More than 50% of the world's population now uses the Internet, a proportion that grows every year. The Internet provides the supporting platform for an emerging digital economy, which depends on broadband networks and services such as social media and cloud computing.

The computers and networks that form information systems are an important and natural application domain for complex systems. These systems are dynamic in nature, with continuously changing patterns of behavior: programs are modified, removed, or added; users are removed or added; and configurations change over time. Information system security problems are quite challenging owing to the growing complexity and interconnected nature of these systems. The multidimensionality of the problem adds further complexity, since problems at different layers of the system must be addressed: physical layer security problems, network layer routing problems, application layer privacy, authentication and encryption problems, information security management problems, and so on.

The Internet is identified as a critical enabler for sustainable development, since it helps unlock human capabilities and enables the communication and integration of a large information society. Forecasting the development of innovative new technologies in this field is very difficult; the development of the Internet itself was not anticipated by anyone, and most forecasts turn out to be too optimistic about short-term adoption and too conservative about long-term societal consequences. Sustainability necessitates the reliability and resilience of the Internet if governments and businesses are to use it to deliver services and achieve economic prosperity. Participation in the digital economy, including cloud computing, requires uninterrupted and safe access to high-speed networks.

5.1. Information systems security

The Internet evolves at a rapid rate in terms of the number and variety of its individual components and its possible applications, which were never expected at all. The main reason for the instability of the Internet is that much of the engineering of innovative applications for ease of use is not matched by engineering for ease of security. A large number of people all over the world, even those with little technical knowledge or skill, now use the power of application software and new hardware to work with greater effectiveness and efficiency. However, many of these services and products are difficult to configure and operate securely. This mismatch between the knowledge needed to operate a system and that needed to keep it secure increases the number of system exploits. Additionally, the Internet has evolved so rapidly that service providers and application developers concentrate on reducing the time needed to bring a new innovation to market; security is therefore given the lowest priority in a competitive market for new products. The result is a high likelihood of exploits and attacks on the Internet involving the direct participation of many components, including people and systems (software and hardware, including networks). Hence, there is a fundamental relationship between networking components and network security problems.

Additionally, the complexity of the Internet adds risk to system usage, as it allows attackers to find novel ways to gain improper access to information on computer systems or to intrude into network systems. Security analysis in such a complex context has to be investigated systematically rather than depending only on heuristics. This paradigm shift leads to employing complexity theory to model the interactions of networking components in security problems. In these decision-making security models, the decision makers play the role of either the attacker or the defender. They have conflicting goals: the attacker attempts to breach the security of the system so as to interrupt the services offered or damage the information available on systems, whereas the defender takes preventive or corrective measures to enhance the system's performance.

There is uncertainty in predicting the behavior, interaction, and intentions of both malicious attackers and defenders, so it is necessary to understand the interaction between them. There is also increasing interaction and collaboration between interconnected organizations, and with it come security interdependencies among them: a vulnerability in one organization may result in compromises in others and in cascading failures affecting many nodes. Equilibrium analysis of the entire system provides insights into decisions such as security investment and patch management. Complexity theory aids in analyzing and investigating this decision process through proper prediction of the behavior of networking components.
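
A toy attacker-defender game hints at what such equilibrium analysis looks like (a deliberately simplified, hypothetical sketch: the payoff matrix is invented, and fictitious play stands in for more sophisticated equilibrium computations):

```python
import numpy as np

# Hypothetical payoff to the attacker (zero-sum): rows = assets the attacker
# may target, columns = assets the defender may patch.  An attack on a
# patched asset yields nothing; the other values are illustrative.
A = np.array([[0.0, 0.8, 0.8],
              [0.6, 0.0, 0.6],
              [0.9, 0.9, 0.0]])

def fictitious_play(A, rounds=20_000):
    """Each player repeatedly best-responds to the opponent's empirical mix;
    in zero-sum games the empirical frequencies approach an equilibrium."""
    n_rows, n_cols = A.shape
    row_counts = np.zeros(n_rows)
    col_counts = np.zeros(n_cols)
    for _ in range(rounds):
        r = np.argmax(A @ (col_counts + 1))   # attacker's best response
        c = np.argmin((row_counts + 1) @ A)   # defender's best response
        row_counts[r] += 1
        col_counts[c] += 1
    return row_counts / rounds, col_counts / rounds

attack_mix, defend_mix = fictitious_play(A)
print("attacker mix:", attack_mix.round(3))   # how often to hit each asset
print("defender mix:", defend_mix.round(3))   # how often to patch each asset
```

The equilibrium mixes suggest, for example, how a defender should randomize scarce patching effort; real models add costs, incomplete information, and network structure on top of this skeleton.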

It is therefore necessary to give top priority to analyzing, modeling, and better understanding interconnected large-scale complex systems, given the increasing vulnerability of these network infrastructures. The Leontief-based input-output infrastructure model [18] can be used to represent the ever-increasing interdependence, intraconnectedness, and interconnectedness among networking infrastructures. The entire system is made still more complex by an architecture involving decision-making at multiple levels. The Leontief-based model can be used to manage the vulnerabilities of these complex network systems, and the threats to these infrastructures, in a more cost-effective manner. This helps ensure the continued performance of these complex infrastructures and the protection of the information stored in the systems or communicated over the network.
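
A minimal numerical sketch of the Leontief-style reasoning (the interdependency matrix and the shock below are invented for illustration) shows how inoperability in one infrastructure propagates to the others at equilibrium:

```python
import numpy as np

# Hypothetical interdependency matrix: entry (i, j) is the fraction of
# infrastructure i's inoperability induced per unit of j's inoperability.
A = np.array([[0.0, 0.3, 0.1],    # power grid
              [0.4, 0.0, 0.2],    # telecom
              [0.2, 0.3, 0.0]])   # finance

c = np.array([0.2, 0.0, 0.0])     # external shock: power grid 20% inoperable

# Leontief-style equilibrium: x = A x + c,  hence  x = (I - A)^(-1) c.
x = np.linalg.solve(np.eye(3) - A, c)
print(dict(zip(["power", "telecom", "finance"], x.round(3))))
# The shock to one sector propagates through the dependencies, leaving every
# sector partially inoperable at equilibrium, not just the one that was hit.
```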

Sustainable development has been a focus of international public policy since the Earth Summit in 1992. It recognizes that economic growth, social inclusion, and environmental sustainability are all essential to achieving human development that meets the needs of the present without compromising the ability of future generations to meet their own needs. An innovation in any field is sustainable if it can meet the needs of people without adversely affecting the ability of future generations to meet theirs; sustainable innovation thus has a close linkage with value choices. Any naturally occurring complex system has a tendency to evolve automatically, and those networks are quite resistant to small initial variations causing a tremendous effect on the entire system. It is only artificial, man-made complex networks like the Internet that get affected by cascading failures capable of disabling the network almost entirely.

5.2. Self-organized criticality

As many systems vital to human existence follow chaotic patterns, the quest for measures of the stability and predictability of a complex system is a very active research field. The stability of the Internet is affected by both internal and external threats. Internal threats stem from the growing instability inherent in the system; self-organized criticality, introduced by Bak et al. [13], addresses this issue. SOC is the property of dynamic systems of organizing their microscopic behavior to be spatially (and/or temporally) scale independent. It is typically observed in slowly driven nonequilibrium systems with extended degrees of freedom and a high level of nonlinearity. Phenomena of strikingly different backgrounds have been claimed to exhibit SOC behavior: sand piles, earthquakes, forest fires, rivers, mountains, cities, literary texts, electric breakdown, the motion of magnetic flux lines in superconductors, water droplets on surfaces, the dynamics of magnetic domains, growing surfaces, human brains, etc. [19]. SOC models are useful for analyzing the systemic instabilities inherently present in a system. Growth, i.e., the addition of new components to the system, is a ubiquitous feature of complex systems, and SOC models study whether this growth reaches a critical state at which the further addition of a component leads to failure of the system. This scenario bears directly on the sustainability of the Internet, whose unprecedented growth raises sustainability concerns. Studies using SOC models analyze the growth dynamics of these systems and the effects of human interventions in them: for example, the effect of introducing a new species into an ecosystem [20], external threats to the system, and attacks from outside targeting a random member or a special member of the system.
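
The classic Bak-Tang-Wiesenfeld sandpile [13] makes the idea concrete. In the sketch below (the grid size and drive length are arbitrary choices), the slowly driven system organizes itself into a critical state where one added grain can trigger avalanches of every size:

```python
import collections
import random

L = 30
grid = [[0] * L for _ in range(L)]

def add_grain(i, j):
    """Drop one grain at (i, j) and relax; return the avalanche size
    (the number of topplings)."""
    grid[i][j] += 1
    topples = 0
    unstable = [(i, j)]
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:            # may already have been relaxed
            continue
        grid[x][y] -= 4               # topple: send one grain to each neighbor
        topples += 1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            u, v = x + dx, y + dy
            if 0 <= u < L and 0 <= v < L:   # grains off the edge are lost
                grid[u][v] += 1
                unstable.append((u, v))
    return topples

random.seed(0)
sizes = collections.Counter()
for _ in range(50_000):                    # slow driving: one grain at a time
    sizes[add_grain(random.randrange(L), random.randrange(L))] += 1

for s in (1, 10, 100, 1000):
    print(f"avalanches of size {s}: {sizes[s]}")
# After a transient, avalanche sizes span many orders of magnitude with a
# power-law-like tail: the hallmark of self-organized criticality.
```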

The peculiar structure of the Internet influences the effectiveness of these attacks. In many complex systems, such as social networks or the Internet, very few nodes are rich in connections while most nodes are poor, which gives rise to hubs in these kinds of networks. These hubs are important, and their security is integral to the stability of the whole system. Albert et al. [21] studied the attack tolerance of scale-free networks and showed that they are more tolerant of random attacks. Analyzing the change in diameter (the longest geodesic) under both random and targeted attacks, their study shows that a scale-free network is less resilient against targeted attack. The nature of a possible attack, and of the resulting failure, is thus highly important in systems like the Internet. A similar result is shown by Xiao et al. [22], who examine the robustness of communication networks under intentional attacks from a complex network point of view. They found that incomplete information may significantly degrade the efficiency of an intentional attack, especially if a big hub is missed. After analyzing local information–based distributed attacks such as the greedy sequential attack, the coordinated attack, and the lower-bounded parallel attack, they show that distributed attacks can be highly effective, sometimes almost as efficient as an accurate global information–based attack. This observation shows that attack is possible even when complete information about a hub is not available.
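
The contrast between random and targeted attacks can be reproduced in a few lines (a sketch in the spirit of [21], using networkx on a synthetic scale-free network; it tracks the giant component size rather than the diameter):

```python
import random
import networkx as nx

def giant_component(G):
    """Size of the largest connected component."""
    return max(len(c) for c in nx.connected_components(G)) if len(G) else 0

def attack(G, fraction, targeted):
    """Remove a fraction of nodes, either highest-degree-first or at random."""
    G = G.copy()
    nodes = (sorted(G.nodes, key=G.degree, reverse=True) if targeted
             else random.sample(list(G.nodes), len(G)))
    for v in nodes[: int(fraction * len(nodes))]:
        G.remove_node(v)
    return giant_component(G)

random.seed(0)
G = nx.barabasi_albert_graph(10_000, 2, seed=0)   # synthetic scale-free network
for f in (0.01, 0.05, 0.20):
    print(f"remove {f:.0%}: random -> {attack(G, f, False)}, "
          f"targeted -> {attack(G, f, True)}")
# Removing even a few percent of the highest-degree hubs shatters the giant
# component, while the same number of random removals barely dents it.
```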

The question of how effectively information can spread in a complex network, or conversely how fatal a malicious node and the information it spreads can be, is very significant. In this regard, Kitsak et al. [23] investigated which nodes are the most influential spreaders in a complex network. The most efficient spreaders were identified by k-shell decomposition analysis to be those located within the core of the network. When multiple spreaders are considered simultaneously, the distance between them becomes the crucial parameter that determines the extent of the spreading. Basaras et al. [24] explore the possibility of blocking virus propagation in complex networks. They found that removing edges is more practical than removing nodes for preventing the spread of a malware infection, discuss how to identify the edges to be removed, and propose an algorithm, the critical edge detector (CED), that detects critical connections based on their diffusion capabilities.
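
The k-shell index used by Kitsak et al. [23] is available directly in networkx as the core number (a sketch on a synthetic network; the original study used empirical networks and epidemic simulations):

```python
import networkx as nx

G = nx.barabasi_albert_graph(5000, 3, seed=7)
core = nx.core_number(G)                 # k-shell index of every node

# Kitsak et al. [23]: nodes in the innermost k-shells (highest core number)
# tend to be the most efficient spreaders.  Degree alone can mislead, since
# a high-degree node sitting on the periphery spreads poorly.
k_max = max(core.values())
inner = [v for v, k in core.items() if k == k_max]
print(f"innermost shell k={k_max} holds {len(inner)} candidate spreaders")
```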

Due to the nonlinear nature of complex networks, a failure starting at some point can cause unpredictable damage to the whole system, and an attack can cascade through the nodes. Lai et al. [25] review two problems in the security of complex networks: cascades of overload failures on nodes and range-based attacks on links. They argue that many scale-free networks are more sensitive to attacks on short-range than on long-range links, because short-range links tend to connect highly connected nodes, while long-range links tend to connect nodes with very few links; another finding is that the small-world phenomenon in these scale-free networks is caused by the short-range links. The removal of a node carrying a tremendously large load is likely to significantly affect the loads at other nodes and hence risks triggering a sequence of overload failures. Thus, in networks with some degree of randomness, the distribution of loads and the distribution of links are highly correlated.
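
A highly simplified sketch in the spirit of these overload-cascade models [25] (the parameters and network are invented; load is approximated by betweenness centrality) shows how removing one heavily loaded node can drag down many others:

```python
import networkx as nx

def cascade(G, alpha=0.2):
    """Load = betweenness centrality; capacity = (1 + alpha) * initial load.
    Remove the most loaded node, then iteratively fail overloaded nodes."""
    G = G.copy()
    load = nx.betweenness_centrality(G)
    capacity = {v: (1 + alpha) * load[v] for v in G}
    G.remove_node(max(load, key=load.get))     # attack the top load carrier
    failed = 1
    while True:
        load = nx.betweenness_centrality(G)    # loads redistribute
        over = [v for v in G if load[v] > capacity[v]]
        if not over:
            return failed
        G.remove_nodes_from(over)              # overloaded nodes fail in turn
        failed += len(over)

G = nx.barabasi_albert_graph(300, 2, seed=3)
print("nodes lost in the cascade:", cascade(G))
```

With a small tolerance alpha, a single removal can cost far more than one node; raising alpha (more spare capacity) shrinks the cascade, which is the trade-off these models quantify.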

Buldyrev et al. [26] developed a framework for understanding the robustness of interacting networks subject to cascading failures. They presented analytical solutions for the critical fraction of nodes that, on removal, leads to a cascading failure and to complete fragmentation of two interdependent networks. They also found that a broader degree distribution with the same average degree implies more low-degree nodes; the advantage of a broad distribution for a single network becomes a disadvantage for interdependent networks, because the low-degree nodes are more easily disconnected.

5.3. Interdependence and its effects

Complexity is multiplied when one complex network depends intricately on other complex networks. The nonlinearity of effect discussed so far makes most such systems vulnerable to even the smallest perturbation, and in a highly networked world, interdependence causes many uncertainties: the effects of the climate system on the financial system, of social networks on political systems, of the power grid on the Internet, and so on. Buldyrev et al. [26] discuss the problem of interdependence failure. Vespignani [27] analyzes the number of nodes that must fail before a network is totally fragmented; using percolation theory, it is shown that this number is much smaller for interdependent networks than for isolated networks. Chen et al. [28] investigate the emergence of extreme events in interdependent networks and find that when the number of network layers and/or the overlap among the layers is increased, extreme events can emerge in a cascading manner on a global scale. They carried out a thorough analysis of the emergence of global-scale extreme events based on the concept of effective betweenness and, by increasing the capacity of very few hubs, devised a cost-effective control scheme to suppress the cascading of extreme events and protect the entire multilayer infrastructure against global-scale breakdown.

6. Conclusion

The possibility of studying systems that are complex gives insights into their ways of emergence and the nonlinear cause-and-effect patterns they follow. Most natural systems have a complex structure. The interdependence of seemingly disparate systems is becoming ever more evident, with a tiny change in one parameter producing disastrous effects in many interconnected systems. A system becomes unsustainable when it causes effects that were never expected of it, precisely because of the inability to predict the effect of one system on others.

Sustainability is crucial in organizational and technological innovations that yield both bottom-line and top-line returns. Sustainability, in one sense, means being environmentally friendly through reduced consumption of resources or, in simple terms, reducing the inputs used by a system and thereby lowering the cost of production or innovation; in addition, the process can generate extra revenues from better products or enable the creation of new businesses. In fact, it is smart to treat sustainability as innovation's new frontier. Complex systems must therefore be taken as a whole for long-term analysis, and complexity theory, which studies the emergence of systems with a “whole is greater than its parts” approach, is a very suitable tool for analyzing these effects.

The research challenges in the field of complex systems concern creating models and finding metrics that can capture the true nature of real-world complex systems. Representing complex systems as networks provides a powerful tool for deciphering the secrets of their complexity. This inquiry is yet another epoch in comprehending the difficult riddles nature poses before humans, the most intelligent life form in the Universe. Perhaps victory here, through a proper understanding of such systems, will give us the ability to innovate and sustain.

Finally, poised between chaos and order, sustainability and innovation, chaos theory promises to clarify the simple and yet mysteriously complex nature of this Universe and all its beings and phenomena. Its ramifications are the information made available, the strong claims made, and the more lucid answers sought, driven by the sustainability of nature and the innovative, problem-solving capacity of humans. Driven by the need for better problem solving, we continue to structure knowledge, recognize new problems, and create new knowledge, with innovation at the center to sustain the flow.

References

  1. Moses, J. (2000). Complexity and flexibility. Quoted in Ideas on Complexity in Systems – Twenty Views, by J. M. Sussman. February. http://web.mit.edu/esd
  2. Rechtin, E., & Maier, M. W. (2010). The Art of Systems Architecting. CRC Press, UK. ISBN 9781420058529.
  3. Strogatz, S. H. (2001). Exploring complex networks. Nature, 410(6825), 268–276.
  4. Helbing, D. (2013). Globally networked risks and how to respond. Nature, 497(7447), 51–59.
  5. Stanford University. (2011). "Chaos and Reductionism". YouTube, 01 Feb. 2011. Accessed 30 Sept. 2016.
  6. Bhat, R., & Salingaros, N. (2013). Reductionism Undermines Both Science and Culture. New English Review Press.
  7. Gleick, J. (1987). Chaos: Making a New Science. Viking Press, New York.
  8. Senge, P. M. (2006). The Fifth Discipline: The Art and Practice of the Learning Organization (2nd ed.). Broadway Business. ISBN 0-385-51725-4.
  9. Lorenz, E. N. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20(2), 130–141.
  10. Wang, X., & Zhang, H. (2013). Bivariate module-phase synchronization of a fractional-order Lorenz system in different dimensions. Journal of Computational Nonlinear Dynamics, 8(3), 031017. Paper No: CND-12-1202. doi:10.1115/1.4023438
  11. Turcotte, D. L. (2006). Modeling geocomplexity: "A new kind of science". Geological Society of America Special Papers, 413, 39–50.
  12. Mandelbrot, B. B. (1967). How long is the coast of Britain? Science, 156(3775), 636–638.
  13. Bak, P., Tang, C., & Wiesenfeld, K. (1987). Self-organized criticality: An explanation of the 1/f noise. Physical Review Letters, 59(4), 381.
  14. Bar-Yam, Y. (1997). Dynamics of Complex Systems. Addison-Wesley, Reading, MA.
  15. Erdös, P., & Rényi, A. (1959). On random graphs, I. Publicationes Mathematicae Debrecen, 6, 290–297.
  16. Barabási, A. L., & Albert, R. (1999). Emergence of scaling in random networks. Science, 286(5439), 509–512.
  17. Herbert, B. E. (2006). Student understanding of complex earth systems. Geological Society of America Special Papers, 413, 95–104.
  18. Leontief, W. W. (1986). Input-Output Economics. Oxford University Press, Oxford, England.
  19. Jensen, H. J. (1998). Self-Organized Criticality: Emergent Complex Behavior in Physical and Biological Systems. Cambridge Lecture Notes in Physics, 10. Cambridge University Press.
  20. Wang, Y., Fan, H., Lin, W., Lai, Y. C., & Wang, X. (2016). Growth, collapse, and self-organized criticality in complex networks. Scientific Reports, 6, 24445, pp. 1–12. doi:10.1038/srep24445
  21. Albert, R., Jeong, H., & Barabási, A. L. (2000). Error and attack tolerance of complex networks. Nature, 406(6794), 378–382.
  22. Xiao, S., Xiao, G., & Cheng, T. H. (2008). Tolerance of intentional attacks in complex communication networks. IEEE Communications Magazine, 46(1), 146–152.
  23. Kitsak, M., Gallos, L. K., Havlin, S., Liljeros, F., Muchnik, L., Stanley, H. E., & Makse, H. A. (2010). Identification of influential spreaders in complex networks. Nature Physics, 6(11), 888–893.
  24. Basaras, P., Katsaros, D., & Tassiulas, L. (2015). Dynamically blocking contagions in complex networks by cutting vital connections. In: 2015 IEEE International Conference on Communications (ICC) (pp. 1170–1175). IEEE, London.
  25. Lai, Y. C., Motter, A. E., & Nishikawa, T. (2004). Attacks and cascades in complex networks. In: Complex Networks (pp. 299–310). Springer, Berlin Heidelberg.
  26. Buldyrev, S. V., Parshani, R., Paul, G., Stanley, H. E., & Havlin, S. (2010). Catastrophic cascade of failures in interdependent networks. Nature, 464(7291), 1025–1028.
  27. Vespignani, A. (2010). Complex networks: The fragility of interdependency. Nature, 464(7291), 984–985.
  28. Chen, Y. Z., Huang, Z. G., Zhang, H. F., Eisenberg, D., Seager, T. P., & Lai, Y. C. (2015). Extreme events in multilayer, interdependent complex networks and control. Scientific Reports, 5, 17277, pp. 1–13. doi:10.1038/srep17277
