
Time Factor in Operation Research Tasks for Smart Manufacturing

By Leonid A. Mylnikov

Submitted: May 8th, 2017. Reviewed: December 12th, 2017. Published: December 20th, 2017.

DOI: 10.5772/intechopen.73085


Abstract

The shift to the Industry 4.0 and IIoT concepts helps collect a vast amount of objective data about the processes that take place in a production system. It thereby creates the background for putting theoretical results into practice and supports the trend towards synchronizing production system processes with external (market) processes. To achieve this target, we use methods that formalize management tasks in the form of predictive models, and we consider cases of the computational solution of management models and decision making in production system tasks that are set with regard to the time factor and solved by approximate methods. We also examine the probabilistic nature of the obtained decisions and address the cases when, in the computational solution of tasks, we need to take restrictions into account and select the time step in order to obtain the decision as a function of time in table form. The problems we investigate help to state and solve production system management tasks with the help of forecast data for the group of indices involved in decision making, which enhances the validity and quality of management decisions.

Keywords

  • production system
  • smart manufacturing
  • Industry 4.0
  • management
  • operation research
  • scheduling

1. Introduction

Presently, the information support of production system management is mainly focused on the control and management of production systems (SCADA), the support of sales and the production process (ERP, MRP, Just in Time), organizing production for known customers (CSRP), and product life cycle management (CALS). However, tactical and strategic management receives information support only at the stage of data preparation for decision making; at the stage when potential solutions are to be identified based on the data, information support is lacking. The integration of automation and control systems and the trend towards Industry 4.0 and IIoT lead to an exponential growth of collected data. Hence, on the one hand, it can be expected that the use of big data might help obtain fundamentally new solutions due to its sheer volume; on the other hand, the issues of decision support automation become more acute as the automation of production systems keeps expanding, and hereafter we can assume the concept of a virtual factory.

The use of the Industry 4.0 concepts outlines possibilities for the automation of production system management that take into account the interaction of subsystems and the synchronization of their interaction with external factors. In the age of cutting-edge innovation products, we cannot speak about the stability of production processes, since the life cycle of such products is short, the number of modifications and parts is high, and power intensity and resource consumption are much higher. This proves the necessity of collecting reliable information with the help of the IIoT. The availability of such data helps build predictive models and use preventive control actions, as a production system is an inertial management object that is not able to adjust ongoing processes instantaneously. Besides, changing the processes requires additional time, financial resources, labor competence, and organizational resources.

The implementation of the Industry 4.0 and Industrial Internet of Things concepts (the latter deals with collecting information about each production unit and provides operational management of production processes in a PS) [1] opens new possibilities for developing industrial engineering methods [2].

For many decades production systems were examined only on the basis of general data, so engineers had limited data for developing decision-making methods and took advantage of expert evaluations, i.e. the methods of utility theory (treating customer preferences as the maximization of expected utility), probability models (see the works of O. Morgenstern), the axiomatic theory of L. Savage that enables measuring utility and subjective probability simultaneously, the decision tree approach that partitions tasks into subtasks (see the works of H. Raiffa), multiple-criteria utility theory (developed in the works of R. Keeney), prospect theory methods, the ELECTRE methods (worked out by the French school on MCDA headed by B. Roy), the analytic hierarchy process proposed by Saaty [3], heuristic methods (for instance, the method of weighted sums of evaluation ratings, compensation methods, etc.), the models of bounded rationality by A. Rubinstein, and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) [4].

The appearance of a large amount of statistical data encouraged the development of methods of mathematical formalization used to solve tasks of managing materials, parts, operations, and the choice of suppliers [5] with the consideration of stochastic factors, as well as probability approaches to measuring risks that take into account the different nature of the examined events (joint, correlated, mutually exclusive, and interdependent) and are used to solve planning tasks with consideration of the dynamics of the examined processes.

The consideration of random factors and the use of probability approaches help measure risks with the help of models. There are planning risks (the risks related to decision making based on models [6], which depend on the current state of the market (changes in prices, sales volumes, etc.)) and production risks (the risks related to equipment failures, failures in the delivery of necessary materials or parts, etc.).

The use of probability models is based on risk metrics [7], Bayes' theorem [8], or the Monte Carlo method [9].

2. Methodological aspects of management task setting in production systems

A production system is regarded as a management object placed in a state space. The coordinates of this n-dimensional space are represented by the management parameters that are considered significant for achieving the targets, and their values describe the current state and its remoteness from the selected targets.

If we denote the target (goal) indexes by the vector P_p and the current state by the vector P_a, we obtain a mathematically measurable metric ‖P_p − P_a‖ that shows how far the current position deviates from the goal position and is deemed a sign of progress in project implementation (at the end of implementation, P_p = P_a). However, knowing the metric ‖P_p − P_a‖ is not enough for management; we also need to know the vector of parameters Y that strongly affect the state of the project. It consists of the values that describe the project, the production system, and the environment in which the project is implemented, as well as the dynamics of change and the prognostic values of all these parameters. It should be noted that the achievement of the goal values P_p = P_a does not always mean the achievement of the vector values Y expected for this state.

In management tasks, values and parameters can be classified into four groups [10]: values and parameters that describe the current state P_a^i; values and parameters that describe the action (external factors and control actions), Y = (A, Θ), where A is the set of control actions and Θ is the set of environment values; values and parameters that describe the goal state P_p^i; and values and parameters that describe the output of system operation when shifting from the state P_a^i into P_p^i, namely R, and the time T^0.

Therefore, management has to use an automaton in which the consecutive state is defined by experts based on the current state, the state that was planned to be achieved on the previous stage, and the time by which it has to be done: (P_p^0, P_a^0, T^0), (P_p^1, P_a^1, T^1), …, (P_p^n, P_a^n, T^n). In order for a new state to come, the action A_i has to be defined. We can determine such an action with the help of the model of the production system that implements the innovation projects, φ_j = U(S), where U is the vector of management parameters, S is the set of project resource needs, and j is the project number.

This approach helps work out hierarchically coordinated managerial decisions by taking into consideration interacting, system-interrelated external and internal factors. The management process is then considered as a holistic non-deterministic process.

In general, the model can be presented in a form of a tuple:

ψ = (Y, P_p, P_a, T, R, φ),   (1)

where φ = (φ_1, φ_2, …, φ_n) is the vector of projects, Y is the action vector on each step, R is the outcome vector on each step, P_a is the vector of system states, P_p is the vector of system goal states, and T is the vector of decision points.
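Purely as an illustration, the tuple (1) and the distance metric ‖P_p − P_a‖ can be written down as a small data structure; the names below (ManagementModel, distance_to_goal) are hypothetical and are introduced only to make the notation concrete.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence
import math

@dataclass
class ManagementModel:
    """Hypothetical container for the tuple psi = (Y, Pp, Pa, T, R, phi)."""
    Y: List[Sequence[float]]    # action vector on each step
    Pp: List[Sequence[float]]   # goal states
    Pa: List[Sequence[float]]   # current (actual) states
    T: List[float]              # decision points
    R: List[Sequence[float]]    # outcomes on each step
    phi: List[Callable]         # project models phi_j = U(S)

def distance_to_goal(pp: Sequence[float], pa: Sequence[float]) -> float:
    """Metric ||Pp - Pa|| showing how far the current state is from the goal."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(pp, pa)))

# The project is considered implemented when the metric vanishes (Pp = Pa).
```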

The use of the model (1) is described by a non-deterministic algorithm [11] (see Figure 1).

Figure 1.

The algorithm to manage a production system that implements the projects φ.

As a result, the management task becomes more transparent. However, it opens new subtasks: to determine the decision points, to define the set of indexes and their values for each stage of project implementation, and to build a model of the production system implementing the projects (φ) in order to define the vector of control actions Y.

At the same time, the more formalized the description of the tuple components in (1) (i.e. the less ambiguity), the higher the quality of management (in accordance with system properties).

Decision points can be defined if we know the set of controlled parameters [12] and have additional information that characterizes the production system being managed (equipment maintenance periods, internal technological cycles, etc.) [13].

The setting of management tasks taking into account the time factor T_i leads to formalizing the models Y_i = M_i(A, Θ, t), t ∈ [T_{i−1}, T_i]. The structure of the model sets formal interrelations between its parameters, and on each step the type of the model depends on the managerial task that we consider (whether we forecast the properties and behavior of the investigated project, or, when dealing with object management, we select the best actions by testing them on the model, investigate the object, and look for ways to improve the management object).

The model itself can use both non-causal (component-oriented) and causal (block-oriented) modeling, and the model components can impose requirements on the development tool (for example, the possibility to (1) work with big data volumes set by time series, (2) use methods applicable to incomplete data, (3) solve tasks set in the form of mathematical programming, (4) employ methods for working with probabilistic models, etc.).

The specialization of the models Y_i raises the problem of choosing approaches and ways of formalization from the set of already known approaches, methods, and models [14], which are to be collected into a composition (with compatible input and output domains).

For the implementation of each project in the considered production system, the model takes the following general form: R(φ(A, Θ)) = M_i(A, Θ, t), t ∈ [T_{i−1}, T_i], where M_i is composed of the components or blocks m_j^i(U, P, Π), P is the vector of external parameters that affect the system, and Π is the vector of system parameters.

Despite the apparent simplicity of the approach, underlying it is the necessity to work out managerial decisions taking into account different levels (institutional, managerial, technical), management types (finance management, production management, goods management, launch management, sales management, R&D management, institutional management), and the subsystems of the production system. All of this generates a whole group of managerial tasks that have to be solved together for each time period T_i; the interrelation of the tasks is demonstrated in Figure 2.

Figure 2.

The interrelation of management levels and management tasks to be solved by using parameters and indicators for developing decision support models.

Working with the model structure means that we need to consider several subtasks related to forecasting the parameters of the considered project [15] and to formalizing an optimization task in the form of mathematical programming [16].

Examples of the tasks considered at decision points include tasks of production and client analytics that take into account the time factor, such as demand forecasting and sales planning, volume planning, stock and procurement planning (including working life), and equipment selection taking into account maintenance costs; these can also be the tasks of optimizing stock operation and minimizing the volume of working assets, and of obtaining optimal machine utilization and workforce loading.

In this case, each of the tasks can be described by a separate criterion; the use of a reflexive approach enables their joint solution as a set of optimization tasks that have common parameters and use forecast-based data.

3. Solving management tasks with the help of predictive models

Let us now consider a general task of formalizing management processes for project implementation in a PS. This task can be handled as the task of defining decision points and cyclically solving prognostic models, represented by optimization formalizations based on forecast data, with refinement on each step of data processing so that consecutive iterations use new data and the results of previous calculations.

In order for the tasks to be formalized as tasks of optimal control, we have to introduce a set of indices, variables, and parameters of management [9], for instance:

  • Indices: i is the supplier index; j is the index of the production system/stock (PS); m is the index of a part or a material demand; n is the index of an end item; k is the index of a production operation; g is the index of a machine or instrument; p is the index of an operation; t is the time; Δt is the time step.

  • Variables: x_{ijm} is the number of parts m received from supplier i for PS j; y_{jn} is the number of items n produced in PS j; r_n is the number of returned items n sent for utilization (disposal); o_m is the number of reused parts or materials m; d_m is the number of items or materials m sent for utilization; ref_{jm} is the number of reused items or materials m in PS j; bd_n is a binary variable equal to 1 if the item n can be reused repeatedly and 0 otherwise.

  • Prices and costs: sell_n is the market price of item n; cost_{jn} is the production cost of item n in PS j; price_{im} is the price of part m received from supplier i; ship_{m/n,ij} is the delivery cost of the part/item m/n from station i to station j; inv_j is the storage cost in PS j; setdis_n is the cost of preparing to get the parts out of item n; disa_m is the cost of preparing to extract part m for reuse; disp_m is the utilization (disposal) cost of part m; refcost_{jm} is the preparation/recovery cost of part m for reuse in PS j; costeq_{pgj} is the cost of operation p on equipment g in PS j.

  • Demand and technology: dem_{jn} is the demand for item n (when the index j is present, it refers to consumer j); req_{mn} is the number of parts m required for the production of item n; timeeq_{pgj} is the time of performing operation p on equipment g in PS j; part_{mpgj} is the demand for parts/materials m needed to perform operation p on equipment g in PS j; eq_{pgj} is the demand for equipment g needed to perform operation p in PS j.

  • Supply limits: supmax_{im} is the maximum size of a batch of parts m that can be delivered from supplier i; supmin_{im} is the minimal size of a batch of parts m that can be delivered from supplier i; supmaxpart_{jm} is the maximum number of parts and components m that can be delivered for production in PS j; supmaxeq_{jp} is the maximum number of equipment units for operation p in PS j; reuse_m is the maximum percentage of parts m that can be reused.

The approach described above helps state a set of optimization tasks that can be considered either jointly or separately. Examples of feasible task formalizations are:

  • Profit maximization (production planning for demand): ∑_j (sell_n(t) − cost_{jn}(t))·y_{jn}(t) → max, ∀n;

  • Production cost minimization: ∑_p (costeq_{pgj}(t) + ∑_m part_{mpgj}(t)·price_{im}(t) + ∑_m part_{mpgj}(t)·ship_{mij}(t)) → min, ∀g, j;

  • The minimization of the costs of goods' storage: cost_{jn}·y_{jn}(t) + inv_j(t)·y1_{jn}(t) + ship_{nij}(t)·y2_{jn}(t) → min, where y_{jn}(t) = y1_{jn}(t) + y2_{jn}(t), y2_{jn}(t) ≤ dem_{jn}(t), y1_{jn} is the number of items stored in stock, and y2_{jn} is the number of items sent to the consumer;

  • The selection of suppliers taking into account that certain components can be reused: ∑_j ∑_n (sell_n − cost_{jn})·y_{jn} − ∑_i ∑_j ∑_m (price_{im} + ship_{ij} + inv_j)·x_{ijm} − ∑_n setdis_n·bd_n − ∑_p costeq_{pgj} − ∑_m (disa_m·o_m + disp_m·d_m) − ∑_j ∑_m refcost_{jm}·ref_{jm} → max.

The tasks can be subject to different restrictions (a small computational sketch combining a criterion with several of these restrictions is given after the list):

  • Production capacity restriction: ∑_g eq_{pgj}(t) ≤ supmaxeq_{jp}(t), ∀j, p, t;

  • The restriction related to the delivery options of components and materials: ∑_g part_{mpgj}(t) ≤ supmaxpart_{jm}(t), ∀j, p, m, t;

  • Non-negativity restriction on the volumes of goods, orders, etc.: y_{jn}(t), x_{ijm}(t), r_n(t), o_m(t), d_m(t), ref_{jm}(t) ≥ 0, ∀j, n, i, m, t;

  • Demand volume restriction: ∑_j y_{jn}(t) ≤ dem_n(t), ∀n, t;

  • The description of the technological process: ∑_n req_{mn}·y_{jn}(t) = ∑_i x_{ijm}(t) + ref_{jm}(t), ∀j, m, t; ∑_j ref_{jm}(t) + d_m(t) = o_m(t), ∀m, t; o_m(t) = ∑_n req_{mn}(t)·r_n(t), ∀m, t;

  • The restriction on the volume of orders: ∑_j x_{ijm}(t) ≤ supmax_{im}(t)·s_i(t), ∀i, m, t; ∑_j x_{ijm}(t) ≥ supmin_{im}(t)·s_i(t), ∀i, m, t;

  • The restriction on the volume of reused parts: ∑_j ref_{jm}(t) ≤ reuse_m(t)·o_m(t), ∀m, t; d_m(t) ≥ (1 − reuse_m(t))·o_m(t), ∀m, t;

  • etc.
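The following is a minimal numerical sketch, with invented figures, of how the profit-maximization criterion can be combined with the demand and capacity restrictions for one item and two production systems; the use of scipy's linprog is just one possible choice of solver and is not prescribed by the chapter.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data for one item n and two production systems j = 0, 1.
sell_n = np.array([120.0, 120.0])   # market price of item n (same for both PS)
cost_jn = np.array([70.0, 85.0])    # production cost of item n in PS j
dem_n = 800.0                       # demand restriction: sum_j y_jn <= dem_n
cap_j = np.array([500.0, 600.0])    # capacity restriction per PS

# Profit maximization: sum_j (sell_n - cost_jn) * y_jn -> max.
# linprog minimizes, so the objective coefficients are negated.
c = -(sell_n - cost_jn)

# Inequality restrictions in the form A_ub @ y <= b_ub.
A_ub = np.vstack([np.ones(2),       # demand:   y_0 + y_1 <= dem_n
                  np.eye(2)])       # capacity: y_j <= cap_j
b_ub = np.concatenate([[dem_n], cap_j])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print("production volumes y_jn:", res.x, "profit:", -res.fun)
```

In the full setting, the same formulation is re-solved for every time step Δt with forecast-based coefficients, which is what makes the result a function of time in table form.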

The obtained tasks in their general form belong to a class of multi-parameter tasks with non-linear restrictions. In such tasks, a part of the parameters is set by functions of time. The outcome of the solution of such tasks will therefore also be a function of time (in table form when solved numerically). Since analytical methods for such tasks are currently lacking, we build the solution on the multiple cyclic determination of numerical solutions of a multi-parameter optimization task with the time step Δt ≤ min_{i=1,…,n}(T_{i+1} − T_i), which determines the accuracy of the description of the required function (see Figure 3).

Figure 3.

The scheme that clarifies the principle of defining calculation points (special states) by implementing projects in PS.
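A schematic sketch of the cyclic recalculation that Figure 3 illustrates; solve_step and forecast are hypothetical placeholders for the optimization task of the previous section and for the forecasting models, respectively.

```python
# A schematic re-solving loop, assuming a hypothetical solve_step() that wraps
# the optimization task for fixed parameter values (e.g. the linprog call above)
# and a forecast() function that returns forecast-based coefficients for time t.

def solve_over_horizon(t_start, t_end, dt, forecast, solve_step):
    """Build the solution as a function of time in table form."""
    table = []                     # rows: (t, solution vector, criterion value)
    t = t_start
    while t <= t_end:
        params = forecast(t)       # forecast values of prices, demand, ...
        x, criterion = solve_step(params)
        table.append((t, x, criterion))
        t += dt                    # dt <= min_i (T_{i+1} - T_i)
    return table
```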

One of the first approaches to developing solution methods was based on gradient calculations (the gradient search method with step splitting, the steepest descent method, the conjugate direction method, the Fletcher-Reeves method, the Davidon-Fletcher-Powell method). These require the goal function to be twice differentiable and convex. Newton's method and its modification, the Newton-Raphson method, are widespread; these methods also require the goal function to be twice differentiable and convex and, besides, are sensitive to the selection of the initial value. Moreover, in the obtained optimization tasks, cases can appear that are related to multi-extremality, non-convex restrictions, a multiply connected area of feasible solutions, etc., and these methods cannot handle them appropriately. Modern methods can in general be split into three groups [17]: cluster methods, constraint propagation methods, and metaheuristic methods. When choosing the solution method, it is important to consider that the most significant feature of combinatorial optimization methods is their completeness. A complete method guarantees finding a solution of the task if one exists. However, the application of these methods can bring difficulties when the dimension of the search space is large, and we might not have a sufficient amount of time for the search in this case (for instance, due to time restrictions on decision making). If we use heuristic methods in task solutions, and heuristic elements complement combinatorial methods, it becomes more complicated to prove that the applied method is complete. The methods of heuristic search are, in general, incomplete.

In practice, hybrid algorithms are often used. Besides, the outcome of any algorithm can be improved by building a joint solver. Due to the lack of specialized solution methods for the obtained formalization, we assume that an evolutionary approach, the method of stochastic search, can be used. The drawback of evolutionary approaches is that in some cases the results and the optimization time depend on the selection of the initial approximation. This drawback can be eliminated by using a solution worked out by experts as the initial approximation. That is why we suggest using the method of stochastic search, taking into consideration expert knowledge and fuzzy preferences, as a universal solution. However, in this case we need to bear in mind that for some tasks we can obtain formalizations for which solution methods already exist. Hence, the decision about which method to apply should depend on the targets, i.e. how accurate the solution is expected to be and whether there are time restrictions on the solution search (the methods of stochastic search can be limited in the time required for the solution search, which is crucial in integrated systems and IIoT that operate in real time).

In the heuristic methods of random search we can distinguish two big groups: the methods of random search with learning and evolutionary programming [18]. In practical use the methods differ in convergence speed and in the number of iterations required to find a feasible solution (several methods, for example genetic methods, guarantee finding an extremum, but not necessarily the optimal one). The complexity of the selection task is that the efficiency of certain methods of stochastic search (in particular, the genetic algorithm) is determined by their parameters. As an example, let us examine the application of the method of random search with prohibitions (Pareto simulated annealing) [19]; along with that, we take into account the set values obtained by forecasting during the modification of the task for work with restrictions. Before performing the numerical calculation we need to determine the area of feasible solutions. The algorithm consists of five steps and an additional sixth step; the latter allows solving tasks with restrictions set by functions and forecast values with the set accuracy and with a criterion that can also use values obtained by forecasting. A compact computational sketch of the search steps is given after the list.

Let us now consider the search for the parameter values x_i, i = 1, …, n, as points in the spaces B_i. Let Λ denote the set of all points x_i that comply with the task restrictions: Λ = {x_i ∈ B_i, i = 1, …, n} (the points included in the area of feasible values), where n is the number of components in the vector of unknown quantities and the sets B_i are finite. Consequently, the algorithm has the following sequence of steps:

  1. Set N, the requested number of points from the set Λ (N is a parameter of the algorithm). Depending on the particular task, the value of N can vary.

  2. Find N points for each parameter x_i ∈ Λ, scattered in the spaces B_i randomly or with the use of expert knowledge, and use these points as the initial approximation.

  3. To find the solutions x_i ∈ Λ_D^i (Λ_D^i is the set of feasible points), apply one of the heuristic methods of stochastic search. For this purpose, a point x_i ∈ Λ is taken as a base point, and from it new points belonging to Λ are built whose criterion values are better than those of the base point. As soon as even one such point is found, it is used as the base for finding further points, and the search continues. All the points x_i ∈ Λ found in this way make up the set Λ_D^i.

  4. All points x_i ∈ Λ_D^i are examined for optimality and then used to form the optimal set of solutions Λ_P. The required sets are easily recovered from the labels of the criteria in the corresponding spaces.

  5. The selection of a single variant X̂, where X̂ = (x_1, x_2, …, x_n), from the Pareto set is left to an expert who has additional information that has not been formalized and is not taken into account in the model.

  6. For an operational reaction to altering external factors, several iterations of the task solution should be performed (modeling the deviations of the forecast values within the confidence interval), and this should be done cyclically with the time step Δt.
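Below is a compact sketch of steps 2-4 under simplifying assumptions: box-shaped spaces B_i, criteria to be minimized, and feasibility given by a user-supplied predicate. The function names and parameters are illustrative and not taken from the chapter; the final choice from the returned Pareto set is left to the expert, as in step 5.

```python
import random

def random_search_pareto(bounds, criteria, feasible, n_points=200, n_local=50, step=0.1):
    """Steps 2-4: random initial points, local stochastic improvement, Pareto filtering."""
    # Step 2: scatter N points in the boxes B_i (or seed them from expert knowledge).
    points = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_points)]
    candidates = []
    for x in points:
        if not feasible(x):
            continue
        best, best_f = x, [c(x) for c in criteria]
        # Step 3: build new points around the base point and keep the improvements.
        for _ in range(n_local):
            y = [min(max(v + random.gauss(0, step * (hi - lo)), lo), hi)
                 for v, (lo, hi) in zip(best, bounds)]
            if feasible(y):
                f = [c(y) for c in criteria]
                if all(a <= b for a, b in zip(f, best_f)) and f != best_f:
                    best, best_f = y, f
        candidates.append((best, best_f))
    # Step 4: keep only the non-dominated (Pareto-optimal) candidates.
    pareto = [(x, f) for x, f in candidates
              if not any(all(g[i] <= f[i] for i in range(len(f))) and g != f
                         for _, g in candidates)]
    return pareto
```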

As a result, we receive a time-varying span (corridor) of potential solutions for each time period. At the same time, since several of the parameters are set by forecasts whose accuracy depends on the planning horizon, the obtained values can fluctuate both upwards and downwards. Such behavior brings additional organizational expenses for the PS; however, it can be managed (the altered values can be adjusted smoothly) by changing the dimension within the obtained corridors and the time step Δt (as a rule, such deviation is described by a stochastic variable that obeys the normal distribution law).

As a result of the solution we can determine the ranges and values of the parameters, which can be presented to the decision-maker in a convenient form (for instance, as a Gantt chart, which is so widespread in management) [20].

4. The generation of the area of feasible solutions when solving optimal control tasks for projects and production systems

When management tasks are implemented as dynamic tasks whose solution is a function of time, it should be noted that the restrictions can also change in time. This happens because the characteristics of the production system can alter in time, and the changes can affect the schedule of supplies, the volume of resources allocated for the implementation of a certain project, etc. The restrictions can be written as follows:

m_1(t) ≤ M ≤ m_2(t),  m_1(t) ≤ M,  M ≤ m_2(t),  M ∈ m_3(t),

where M is the parameter or expression on which restrictions are imposed, m_1(t) and m_2(t) are the restrictions set by functions of time, and m_3(t) is the area of feasible values, which can also alter in time.

The use of several criteria and a large number of restrictions often leads to the situation that we obtain an empty area of feasible values or the solution shows some deviations. In any case, in PS management tasks the final decision is taken by the expert. That is why the restrictions can be presented by functions F(m_1(t), m_2(t), t) that can be treated as additional criteria and used when performing the operation of criteria convolution.

In the case of discrete set values, or if the restrictions are set as an area of feasible values, the function F(m_1(t), m_2(t), t) or F(m_3(t), t) becomes piecewise-defined. The values that belong to the feasible interval are assigned the maximum value in a maximization task, and the others the minimum value (and vice versa in a minimization task). In general, to cover all types of restrictions in one record, the function can be written as F(m_1(t), m_2(t), m_3(t), t). Membership in the area of feasible values can then be validated by calculating the value:

∑_{i=1}^{n} F_i(m_1(t), m_2(t), m_3(t), t)   (2)

where n is the number of restrictions.

If this value is equal to the sum of the minimal or maximal values, ∑_{i=1}^{n} min/max F_i(m_1(t), m_2(t), m_3(t), t), depending on the type of the considered task (minimization or maximization), then the point belongs to the area of feasible values. In practice, the restrictions need not be considered rigid, and a feasible deviation of the values can be specified.

This approach helps add the restrictions to the criterion function as additive components, which allows us to get rid of explicit restrictions and apply solution methods that do not handle restrictions. Since restrictions can be violated in this case, the obtained functions are ranked with the help of weight coefficients K_i. As a result, we receive the final setting of the extremum task in the following form:

J + ∑_{i=1}^{n} K_i·F_i(m_1(t), m_2(t), m_3(t), t) → opt,   (3)

where J is the criterion function.
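A short sketch of the penalty construction (2)-(3): each restriction is turned into a piecewise-defined term F_i that takes its best value inside the feasible interval and its worst value outside it, and the weighted terms are added to the criterion J. The weights K_i and the reward/penalty magnitudes are illustrative assumptions.

```python
def restriction_term(M, m1, m2, maximize=True, reward=0.0, penalty=1e6):
    """Piecewise-defined F_i: best possible value inside [m1, m2], worst outside."""
    inside = m1 <= M <= m2
    if maximize:
        return reward if inside else -penalty
    return reward if inside else penalty

def penalized_criterion(J, M_values, limits, weights, maximize=True):
    """J + sum_i K_i * F_i(...), cf. Eq. (3); limits is a list of (m1, m2) pairs."""
    total = J
    for M, (m1, m2), K in zip(M_values, limits, weights):
        total += K * restriction_term(M, m1, m2, maximize)
    return total
```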

5. The problems of obtaining solutions as functions of time

When solving optimal control tasks that take into account the time factor with some discrete time step Δt, the solution will be a function given in table form. In this case the system interacts with the external environment, and the found solution may become unachievable due to changes in external or internal factors. According to Bayes' theorem [21], the probability of a successful transfer to another state (to a new solution) depends on the previous state (the state in which we are placed now). Hence, for selecting the path of project development it is useful to consider not just one solution but a set of Pareto-optimal solutions. The task solution will then be a set of development paths, which can technically be shown as a tree for each of the required parameter values (see Figure 4) and can be considered as a Bayesian network.

Figure 4.

The tree of management task solutions taking into account the time factor: X is the vector of variable values received in the solution of an optimization task, mΔt is the planning horizon, m is the number of task solutions, n is the number of solutions found on each step.

The selection of a single solution will be based on the choice of a path and on the potential of its implementation. The potential (probability) of each solution is defined by the chain rule [21]:

P(X_0, X_1, …, X_m) = ∏_{j=1}^{m} P(X_j | X_{j−1}, …, X_1).   (4)

Therefore, with a planning horizon of mΔt and n solutions on each step, we obtain ∏_{j=1}^{m} n = n^m probabilities for the leaf nodes of the built tree, which should satisfy the conditions ∑_{i=1}^{n} P(X_i^1) = 1, ∑_{i=1}^{n} ∑_{j=1}^{n} P(X_{ij}^2) = 1, etc., for each solution step.

If we assume that all X are unique, then the implementation potential of each solution will be equal. However, in practice solutions can repeat. This is connected with the fact that we use a method of random search to solve the task; moreover, modeling the deviations requires a multiple solution of the considered task. In this case, the probability of a transfer from the state X_0 into the state X_m is determined by the sum of the probabilities of the repeated values, and this value determines the probability of a transfer from one decision point to another.
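A sketch of how the chain rule (4) can be applied to the solution tree, assuming for brevity that the per-step probabilities are independent of the preceding state; paths that end in a repeated solution are aggregated by summing their probabilities, as described above. The example data are invented.

```python
from collections import defaultdict
from itertools import product

def path_probabilities(step_solutions):
    """
    step_solutions: list over steps; each step is a list of (solution, probability)
    pairs whose probabilities sum to 1 (cf. the conditions on each step).
    Returns the probability of reaching each terminal solution, aggregating paths
    that end in a repeated value, per the chain rule (4).
    """
    totals = defaultdict(float)
    for path in product(*step_solutions):
        prob = 1.0
        for _, p in path:            # chain rule: product of step probabilities
            prob *= p
        terminal = path[-1][0]       # repeated terminal values are summed
        totals[terminal] += prob
    return dict(totals)

# Hypothetical example: two steps, two solutions per step.
steps = [[("A", 0.6), ("B", 0.4)], [("A", 0.7), ("C", 0.3)]]
print(path_probabilities(steps))     # {'A': 0.7, 'C': 0.3} for these numbers
```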

This probability is not a random value, since multiple calculations are performed: the parameters obtained from forecast data can exhibit a random walk described by probability density functions, and it is these functions that are used to generate new forecast values in the multiple calculations:

μ(x_1) = (1 / (σ_1·√(2π))) · e^(−(x − x_1)² / (2σ_1²))   (5)

where σ_1 is the standard deviation and x_1 is the value obtained by forecasting. Upon a transfer to the subsequent value the function alters:

μ(x_2) = (1 / ((σ_1 + σ_2)·√(2π))) · e^(−(x − x_2)² / (2(σ_1² + σ_2²)))   (6)

where σ_2² is the added Gaussian perturbation of constant dispersion, calculated by the formula [22]:

σ_2² = D(x) = M(x²) = ∑_{j=1}^{m} x_{1j}²·μ(x_{1j})   (7)

where D(x) is the dispersion (variance), M(x²) is the mathematical expectation of x², and x_{1j} are the possible values of x_1 (belonging to the interval ±σ in order to perform the validation of adequacy).
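A Monte-Carlo sketch of formulas (5)-(7): forecast values are perturbed by Gaussian noise whose variance grows by σ_2² at every step, which produces the widening corridor of solutions discussed earlier. Function and parameter names are illustrative.

```python
import numpy as np

def simulate_forecast_walk(x1, sigma1, sigma2, horizon, n_runs=1000, seed=0):
    """
    Monte-Carlo sketch of (5)-(6): at each step the forecast value is perturbed
    by Gaussian noise whose variance grows by sigma2**2 (the constant-dispersion
    perturbation of Eq. (7)), producing a widening corridor of values.
    """
    rng = np.random.default_rng(seed)
    runs = np.empty((n_runs, horizon))
    for k in range(n_runs):
        var = sigma1 ** 2
        x = x1
        for t in range(horizon):
            x = rng.normal(x, np.sqrt(var))   # density (5)/(6) around current value
            var += sigma2 ** 2                # dispersion accumulates, cf. (6)-(7)
            runs[k, t] = x
    return runs.mean(axis=0), runs.std(axis=0)   # centre and width of the corridor
```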

As a result, it is possible to define the probabilities of obtaining solutions and select the most probable ones.

The use of probability density functions for modeling deviations helps measure the achievement probabilities of a series of consecutive states s_1, s_2, …, s_n. Let the probability p_i^0 indicate that we are placed in the state s_i and that the state fully complies with the expected state (determined on the basis of the previous stages), let p_ij denote the probability of the transfer from the state s_i into the state s_j, and let p_i^1 denote the probability that the state s_i will be achieved. Then:

(p_1^1, p_2^1, …, p_n^1) = (p_1^0, p_2^0, …, p_n^0) · [ p_11 p_12 … p_1n ; p_21 p_22 … p_2n ; … ; p_n1 p_n2 … p_nn ]   (8)

and the management task reduces to the selection of a desired state from the set of possible states and the determination of a path (the sequence of intermediate states) to achieve it. Therefore, it is possible to define the probabilities of obtaining decisions, which are then taken into account when selecting the most probable ones with the method of dynamic programming (the Bellman method) (see Figure 5).

Figure 5.

Decision tree for PS path selection task or project implementation.

Each state is characterized by a risk metric (a value calculated on the basis of the probabilities p_ij and depending on the path taken to reach the examined state) and by the dynamics of the change in the criterion value upon the transfer from one special state into another (see Figure 5).
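A sketch of Eq. (8) combined with the dynamic-programming selection of the most probable path; the Viterbi-style recursion below is one standard way to implement the Bellman-type selection mentioned in the text, and the example matrix is invented.

```python
import numpy as np

def most_probable_path(p0, P, n_steps):
    """
    p0: initial state probabilities; P: transition matrix p_ij (rows sum to 1).
    Propagates probabilities as in Eq. (8) and recovers the most probable
    sequence of states by dynamic programming.
    """
    best = np.array(p0, dtype=float)        # best path probability ending in each state
    back = []                               # back-pointers for path recovery
    for _ in range(n_steps):
        scores = best[:, None] * P          # scores[i, j]: extend the best path at i by i -> j
        back.append(scores.argmax(axis=0))
        best = scores.max(axis=0)
    state = int(best.argmax())              # most probable terminal state
    path = [state]
    for pointers in reversed(back):
        state = int(pointers[state])
        path.append(state)
    return list(reversed(path)), float(best.max())

# Hypothetical example with three states.
p0 = [1.0, 0.0, 0.0]
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])
print(most_probable_path(p0, P, n_steps=3))
```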

When the solutions are obtained as functions of time on each step of the calculations, the time step Δt becomes an important algorithm parameter. On the one hand, we can choose the time between the decision points T_{i+1} − T_i as the step; on the other hand, with such an approach the sensitivity of the system to altering external factors decreases (it becomes inertial). That is why the selection of the time step is a trade-off between the sensitivity and the stability of the system. At the same time, the time step can be a varying quantity (Δt = f(t)), but it should lie in the range τ ≤ Δt ≤ T_{i+1} − T_i, where τ is the minimal time required for changing production capacity, resetting the technological cycle, etc. (a system characteristic), and T_{i+1} − T_i is the time to the next decision point. There can be any number of solutions between decision points.

A new calculation is triggered when the values of a forecast parameter leave the boundary of the confidence interval ±σ. On the other hand, work related to changing production capacity, production and procurement scheduling, etc., brings additional expenses for the enterprise (in general, we encounter situations when production capacity has to be increased first and decreased afterwards, which in some cases can be balanced, particularly by stocks). Therefore, this task should be considered as a separate management task using the algorithm shown in Figure 6.

Figure 6.

The algorithm for defining the step Δt for the time moment t̃, where J is the criterion value and k is the amount of work expenses for changing the production cycle taking into account economic criteria.
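The two checks at the heart of the algorithm in Figure 6 can be sketched as follows; the ±σ trigger and the comparison of the criterion gain with the rearrangement cost k are taken from the text, while the function names and the assumption of comparable monetary units are illustrative.

```python
def should_recalculate(observed, forecast, sigma):
    """A new calculation is triggered when the observed value leaves the +/-sigma corridor."""
    return abs(observed - forecast) > sigma

def accept_new_plan(J_new, J_old, k_change):
    """
    Sketch of the economic check behind Figure 6: the rearranged plan is accepted
    only if the improvement of the criterion J outweighs the rearrangement cost k
    (both are assumed to be expressed in comparable monetary units).
    """
    return (J_new - J_old) > k_change
```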

The solution for the examined range T_{i+1} − T_i obtained, for instance, by the joint consideration of the volume planning and procurement management tasks will be a production plan, the value of the given criterion (with a potential deviation range of decisions), the values of the risk metrics, and the volumes of changes in required parts and components taking into account possible deviations from the target production volumes (Figure 7).

Figure 7.

The solution results of the volume planning and procurement management tasks based on the collected data about a production system for discrete production: (a) an example of the production output volume for one of the products obtained with different forecasting methods, (b) the values of the risk metrics (solid line) and the progressive risk metrics (dotted line) connected with the use of planning data, (c) the adjusted criterion value obtained with the best forecast results and the corridor of possible deviations modeled with the normal distribution, together with its correlation with the criterion value based on retrospective data, (d) the need for one type of parts taking into account possible deviations in the production plan.

6. Conclusion

The present chapter describes approaches that, thanks to the use of the Industry 4.0 concepts, enable the formalization of the processes connected with the reasoning and preparation of managerial decisions based on real statistical data and taking into consideration the interaction of subsystems in a production system. Therefore, together with the use of predictive models, IIoT helps not only enhance the level of automation and reduce a certain part of personnel-related production expenses but also consider such factors as the increasing power intensity and resource consumption of production, the inertness of integration and management processes in production systems, and the situations connected with repair actions, equipment failures, procurement failures, changes in demand and prices, etc.

We have investigated how to use and apply, under existing conditions, the approaches that search for feasible and optimal solutions in the tasks of efficient management and planning (taking into account the time factor). The changes that affect the setting and solution of these tasks can be explained by the shift to automated and automatic enterprises and by the shift from mass production to single-part production. The current situation therefore requires the operational rearrangement of ongoing production processes, driven by the increasing mobility of the global economy, i.e. the variability of the external environment in which production systems operate.

The approach described in the chapter is relevant as it tackles management tasks given as optimization tasks; besides, it helps deal with the NP-completeness of the obtained tasks.

The obtained results are sensitive to the quality of forecasts and contain no time lags; moreover, we can observe changes in production volume that create additional capacity requirements for the production system (related to changes in the production schedule).

That is why the shift to the Industry 4.0 concepts gives not only evident immediate advantages but also outlines new areas of study, i.e. the solution of tasks that take into consideration the inertness of the production system, the expenses that arise due to changes in production volume, and the risk metrics that appear upon interaction with external systems (for example, delayed delivery, the delivery of faulty parts, return of goods, etc.).

The development of the mathematical formalization of these areas of study can lead to additional effects in the future and underlie the appearance of next-generation industrial concepts.

Acknowledgments

The author thanks the government of Perm Krai for the support of the project for “Development of software and economic and mathematical models for supporting innovation project management processes in production systems”, which is being implemented in accordance with decree No. 166-п of 06.04.2011.

The reported study was partially supported by the Government of Perm Krai, research project No. C-26/058.
