Open access peer-reviewed chapter

Graph-Based Decision Making in Industry

Written By

Izabela Kutschenreiter-Praszkiewicz

Submitted: 31 July 2017 Reviewed: 02 November 2017 Published: 20 December 2017

DOI: 10.5772/intechopen.72145

From the Edited Volume

Graph Theory - Advanced Algorithms and Applications

Edited by Beril Sirmacek


Abstract

Decision-making in industry can be focused on different types of problems. Classification and prediction problems can be solved with the use of a decision tree, which is a graph-based method of machine learning. In the presented approach, an attribute-value system and quality function deployment (QFD) were used for decision problem analysis and training dataset preparation. A decision tree was applied for generating decision rules.

Keywords

  • decision tree
  • decision-making
  • machine learning
  • quality function deployment (QFD)
  • inquiry planning

1. Introduction: decision-making in industry

The decision-making process in industry is focused on finding answers to the following questions: what should be done, how it should be done, when, and by whom? The decision-making process often uses heuristic and expert knowledge, which respects the relations between different variables; e.g. the problem of inquiry planning requires analysis of factors such as response time to customer inquiries, response preparation costs and the risk related to the manufacturing process [1, 2]. There are different methods which aid industrial data analysis, among which quality function deployment (QFD) turns out to be useful in the analysis of data related to customer inquiries.

Each decision type requires data and process analysis. Decision problems can be divided into categories distinguished from different points of view. In industry, we encounter structured decisions; unstructured decisions, in which each decision-maker can use different data and processes to reach a conclusion; and semi-structured decisions, in which decision scenarios have some structured and unstructured components. Decision problems are caused by a change related to distinctive features (attributes).

Decision-making in industry can be focused on different time periods: strategic decisions concern a period of a few years, tactical decisions relate to a period of a few months, whereas operational decisions regard a few days [3].

The decision-making process starts from decision problem analysis [4]. The steps in a decision-making model include [5]:

  • definition of the problem,

  • establishment or enumeration of all the criteria (constraints),

  • consideration or collection of all the alternatives,

  • identification of the best alternative,

  • development and implementation of an action plan,

  • evaluation and monitoring of the solution and feedback examination, if necessary.

A problem should be precisely identified and described. Manufacturing products in an industrial plant requires the combined and coordinated efforts of people, machinery and equipment [6], which together create a manufacturing system. This manufacturing system needs suitable values of decision variables, which characterise the product and all stages of the manufacturing process.

The overall manufacturing system decision problems include, among others [2, 6]:

  • inquiry planning,

  • the problem of resource requirements,

  • the problem of resource layout,

  • the problem of material flow,

  • the problem of buffer capacity.

Decision-making in manufacturing systems can be characterised as follows [6, 7]:

  • Manufacturing systems should be able to produce products according to customer requirements.

  • Manufacturing systems consist of many interacting components.

  • A manufacturing system changes over time.

  • Manufacturing systems are influenced by internal and external variables.

  • A manufacturing system is complicated, and it is difficult to create a comprehensive model of it. The relations between the variables which describe it usually cannot be expressed analytically.

  • Data characterising a manufacturing process may be difficult to measure.

  • Decisions in a manufacturing process can be focused on achieving different goals, which are sometimes in conflict.

The methods useful in supporting decision-making in manufacturing system include:

  • Mathematical programming (e.g. linear programming), useful for decision problems for which it is possible to formulate goals and constraints as equations.

  • Queueing theory, which studies the behaviour of queueing systems through the formulation of analytical models [6]. Queue disciplines include FIFO (first in, first out), LIFO (last in, first out), SIRO (service in random order), PRI (priority ordering) and GD (any other specialised ordering).

  • Artificial intelligence, including, among others, decision tree and rule-based systems, neural networks and genetic algorithms.

  • Simulation, which in industrial applications includes the following steps: formulating the problem, collecting data and defining a model, modelling the system's randomness statistically, ensuring validity, constructing and verifying a computer model, making pilot runs and validity checks, designing experiments, performing runs, analysing output data, and documenting and implementing the results [6].

In the presented approach, a decision problem should be described with the use of an attribute-value system, which is one of the well-known models of knowledge representation. Under the object-attribute-value (O-A-V) scheme, an object is associated with various attributes, and each attribute is assigned appropriate values [8]. The attribute-value system uses object-attribute-value statements to characterise a decision problem; e.g. a decision problem such as machine tool selection can be characterised by different objects, such as machine, material, process, etc. Each object can be characterised by different attributes (variables); e.g. a machine can be characterised by attributes such as type, technical condition, work parameters, etc. Attributes can take categorical, numerical or linguistic values.
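As an illustration of the O-A-V scheme, the sketch below holds such statements in a small Python structure. The object and attribute names follow the machine tool selection example in the text, while the concrete values are assumed for illustration only.

```python
# A minimal object-attribute-value (O-A-V) sketch for a machine tool
# selection problem. Objects map to attributes, attributes to values;
# the concrete values below are illustrative assumptions.
problem = {
    "machine": {
        "type": "milling",               # categorical value
        "technical condition": "good",   # linguistic value
        "spindle speed [rpm]": 12000,    # numerical value
    },
    "material": {
        "kind": "aluminium alloy",
        "hardness HB": 95,
    },
    "process": {
        "batch size": 250,
        "tolerance class": "IT7",
    },
}

# Each statement can also be listed as an explicit (object, attribute, value) triple.
triples = [
    (obj, attr, value)
    for obj, attrs in problem.items()
    for attr, value in attrs.items()
]

for obj, attr, value in triples:
    print(f"({obj}, {attr}, {value})")
```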

Decisions can be supported by different types of systems, such as (Table 1) [9, 10]:

  • transaction processing systems (TPS), which focus on keeping data records,

  • decision support systems (DSS), which support decision-making using simulation and data processing applicable for different variants,

  • expert systems (ES), which support experts in their decisions using heuristic knowledge.

| Decision type | Operations level | Tactical level | Strategic level | Support system |
| Structured | Resource planning; delivery registration | Economic analysis | Finance of investments; warehouse localisation | TPS, DSS |
| Semi-structured | Technical production preparation | Credit assessment; scheduling | Product development planning; quality control | DSS, ES |
| Unstructured | Software purchase | Recruitment of managers | Technology development | ES |

Table 1.

Decision problem types.

Among methods useful in structured decision-making, decision trees are widely discussed.

A graph-based decision-making process can comprise the following stages (a short code sketch follows the list):

  • Formulating the problem.

  • Determination of a set of attributes which characterise the decision problem (for this purpose a QFD matrix can be used); e.g. a machine failure diagnosis problem can be characterised by attributes such as noise level, vibration, type of failure, etc.

  • For each attribute a set of possible values is defined, e.g. the attribute noise level can be characterised by interval numerical values such as <85 dB and >85 dB.

  • Finding examples from the past with known solutions of the problem and creating a training set, e.g. noise level >85 dB, vibration = high, solution = repair Section A.

  • Creating a decision tree.

  • Solving the new problem with the use of a decision tree.
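As a hedged, end-to-end sketch of these stages, the snippet below encodes a few machine failure diagnosis cases (using the noise level and vibration attributes mentioned above), induces a decision tree with scikit-learn (an assumed tool, not one named by the chapter) and solves a new problem with it. The training cases and the recommended actions other than "repair section A" are invented for illustration.

```python
# Illustrative sketch of the stages listed above for machine failure
# diagnosis. The training cases are invented; a real training set would
# come from historical maintenance records.
from sklearn.tree import DecisionTreeClassifier, export_text

# Attributes and their encoded values:
#   noise_level: 0 = "<=85 dB", 1 = ">85 dB"
#   vibration:   0 = "low",     1 = "high"
X = [
    [1, 1],  # noise >85 dB, vibration high
    [1, 0],
    [0, 1],
    [0, 0],
    [1, 1],
]
y = [
    "repair section A",
    "inspect bearings",
    "inspect bearings",
    "no action",
    "repair section A",
]

tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X, y)

# Solve a new problem: noise >85 dB, vibration high.
print(tree.predict([[1, 1]])[0])
print(export_text(tree, feature_names=["noise_level", "vibration"]))
```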

A decision-making process supported by machine learning methods uses knowledge (experience) which comes from different sources (different experts). In traditional decision-making, an expert develops knowledge based on his or her own experience.

Human decision-making can be described as follows:

  • Expert no. 1—own experience—decision for a new case based on individual experience

  • Expert no. 2—own experience—decision for a new case based on individual experience

  • etc.

Graph-based machine learning decision-making can be described as follows:

  • Expert no. 1—own experience

  • Expert no. 2—own experience

  • etc.

  • Training set—common experience (set of all known cases)

  • Decision tree induction

  • Decision for a new case supported by a decision tree

Graph-based decision-making can be compared with neural networks. In both cases, the knowledge saved in the training set combines experience from different sources, but decision tree induction presents the acquired knowledge in a clear, explicit form, whereas a neural network is able to predict the sought value without any explanation.


2. Decision tree

A decision tree is a graph which can be used as a model of a categorical variable. A decision tree aims at predicting a categorical (numerical or linguistic) output variable from a set of numerical or linguistic input variables [11]. Decision trees are useful in solving classification and prediction problems [12, 13]. The structure of a decision tree involves a root node, internal nodes, leaf nodes and edges which join nodes, also called branches (Figure 1) [14].

Figure 1.

Decision tree.

A root node is an initial decision node which includes the main attribute in the decision process. Internal nodes include other attributes which are input variables in the decision process, whereas leaf nodes represent output variables including possible decisions in the decision process. Edges represent categorical values assigned to the attribute. The tree always starts from the root node and grows down by splitting the data at each level into new nodes [14].
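One possible in-memory representation of this structure is sketched below: internal nodes carry an attribute and value-labelled edges to child nodes, and leaf nodes carry a decision. The Node class and the small example tree (based on the machine diagnosis attributes from Section 1) are illustrative assumptions, not a structure prescribed by the chapter.

```python
# A simple representation of the decision tree structure described above:
# internal nodes hold an attribute and value-labelled edges to children,
# leaf nodes hold a decision. Field names are illustrative.
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class Node:
    attribute: Optional[str] = None           # set for the root and internal nodes
    decision: Optional[str] = None            # set for leaf nodes
    children: Dict[str, "Node"] = field(default_factory=dict)  # value -> child

    def is_leaf(self) -> bool:
        return self.decision is not None


# An illustrative tree for the machine diagnosis example from Section 1.
tree = Node(
    attribute="vibration",
    children={
        "high": Node(decision="repair section A"),
        "low": Node(
            attribute="noise level",
            children={
                ">85 dB": Node(decision="inspect machine"),
                "<=85 dB": Node(decision="no action"),
            },
        ),
    },
)

print(tree.attribute, "->", list(tree.children))
```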

Decision trees are one of the machine learning methods. Constructing a decision tree requires a set of decision problem-solving examples which create a training set.

Machine learning from examples and its generalisation ability were discussed, e.g. by Shiue and Guh (Figure 2) [15]. In machine learning methods, one of the most important tasks is to create a training set of examples. For that purpose, it is necessary to define attributes and their values which are important variables in a given decision problem.

Figure 2.

Machine learning from examples.

Data in the training set can come from a real manufacturing system or from simulation experiments. Examples of training data are presented in Table 2.

| Case no. | Attribute 1 (possible values: 0, 1) | Attribute 2 (possible values: 0, 1) | Attribute 3 (possible values: 0, 1) | Attribute 4 (possible values: 0, 1) | Decision (possible values: 1, 2, 3) |
| 1 | 0 | 1 | 0 | 1 | 1 |
| 2 | 1 | 1 | 0 | 0 | 2 |
| 3 | 1 | 1 | 1 | 1 | 2 |
| 4 | 1 | 0 | 1 | 0 | 1 |
| 5 | 1 | 0 | 1 | 0 | 1 |
| 6 | 1 | 0 | 0 | 0 | 2 |
| 7 | 0 | 0 | 1 | 1 | 3 |
| 8 | 0 | 0 | 0 | 1 | 3 |
| 9 | 1 | 1 | 0 | 1 | 1 |

Table 2.

A training set.

In machine learning methods, attributes come from the decision problem characteristics. QFD can support attribute selection in industrial decision-making. In real applications, attribute values can be numerical as well as linguistic (a numerical value can be an integer or a real number, given as a single value or as an interval). The relevance of attributes in the decision process can be evaluated with the use of, e.g., Shannon entropy in the ID3 algorithm, which is not discussed in detail in this chapter.
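Since Shannon entropy is only mentioned here, the sketch below shows how attribute relevance could be ranked by information gain (as in ID3) on the Table 2 data; it is an illustrative calculation under that assumption, not a procedure prescribed by the chapter.

```python
# Ranking the Table 2 attributes by information gain (ID3-style).
from collections import Counter
from math import log2

# Table 2 rows: (attribute 1, attribute 2, attribute 3, attribute 4, decision)
training_set = [
    (0, 1, 0, 1, 1),
    (1, 1, 0, 0, 2),
    (1, 1, 1, 1, 2),
    (1, 0, 1, 0, 1),
    (1, 0, 1, 0, 1),
    (1, 0, 0, 0, 2),
    (0, 0, 1, 1, 3),
    (0, 0, 0, 1, 3),
    (1, 1, 0, 1, 1),
]


def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())


def information_gain(rows, attr_index):
    base = entropy([row[-1] for row in rows])
    remainder = 0.0
    for value in set(row[attr_index] for row in rows):
        subset = [row[-1] for row in rows if row[attr_index] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder


for i in range(4):
    print(f"Attribute {i + 1}: gain = {information_gain(training_set, i):.3f}")
```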

Constructing a decision tree classifier is usually divided into two steps: tree generation and tree pruning (as in the C4.5 and CART algorithms) [16, 17, 18, 19].

In the generation phase, the initial tree is built using the available training dataset until each leaf becomes homogeneous. In the pruning phase, the already-grown tree is reduced in order to improve the accuracy obtained on the testing dataset [20]. There are many methods for constructing a decision tree. The basic generation algorithm includes the following steps (a minimal sketch follows the list) [21]:

  1. Start with a single node representing all records in the dataset.

  2. Choose one attribute, and split the records according to their values on that attribute.

  3. Repeat the splitting on all new nodes, until a stop criterion is satisfied.
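The minimal sketch below follows these three steps: it starts with all records in one node, chooses the split attribute by information gain (repeated from the previous sketch so the snippet stays self-contained), and recurses until a node is homogeneous or no attributes remain. Pruning is deliberately omitted, so this is an illustrative generation sketch rather than a full C4.5/CART implementation. The usage example applies it to the Table 2 training set.

```python
# Basic decision tree generation: start with one node holding all records,
# split on a chosen attribute, repeat until a stop criterion is satisfied
# (homogeneous node or no attributes left). No pruning is performed.
from collections import Counter
from math import log2


def entropy(labels):
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in counts.values())


def gain(rows, attr):
    base = entropy([r["decision"] for r in rows])
    remainder = 0.0
    for value in set(r[attr] for r in rows):
        subset = [r["decision"] for r in rows if r[attr] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder


def build_tree(rows, attributes):
    labels = [r["decision"] for r in rows]
    if len(set(labels)) == 1:                  # stop: homogeneous leaf
        return labels[0]
    if not attributes:                         # stop: no attributes left, majority vote
        return Counter(labels).most_common(1)[0][0]
    best = max(attributes, key=lambda a: gain(rows, a))
    node = {best: {}}
    for value in set(r[best] for r in rows):
        subset = [r for r in rows if r[best] == value]
        remaining = [a for a in attributes if a != best]
        node[best][value] = build_tree(subset, remaining)
    return node


# Usage on the Table 2 training set (attributes a1..a4, decision 1/2/3).
data = [
    {"a1": 0, "a2": 1, "a3": 0, "a4": 1, "decision": 1},
    {"a1": 1, "a2": 1, "a3": 0, "a4": 0, "decision": 2},
    {"a1": 1, "a2": 1, "a3": 1, "a4": 1, "decision": 2},
    {"a1": 1, "a2": 0, "a3": 1, "a4": 0, "decision": 1},
    {"a1": 1, "a2": 0, "a3": 1, "a4": 0, "decision": 1},
    {"a1": 1, "a2": 0, "a3": 0, "a4": 0, "decision": 2},
    {"a1": 0, "a2": 0, "a3": 1, "a4": 1, "decision": 3},
    {"a1": 0, "a2": 0, "a3": 0, "a4": 1, "decision": 3},
    {"a1": 1, "a2": 1, "a3": 0, "a4": 1, "decision": 1},
]

print(build_tree(data, ["a1", "a2", "a3", "a4"]))
```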

In the first step, a single node representing all records in the dataset was created, and the root attribute was assigned (Figure 3).

Figure 3.

The first node in the decision tree induction process.

In the second step, decisions are not unique, so the decision tree should grow, and other attributes should be taken into consideration (Figure 4). In the presented example, the next step is necessary (Figure 5). The final decision tree is presented in Figure 6.

Figure 4.

Subsequent nodes in the decision tree induction process.

Figure 5.

Subsequent nodes in the decision tree induction process.

Figure 6.

Subsequent nodes in the decision tree induction process.

The decision tree algorithm is a predictive model with a hierarchical structure and is used in data mining [22]. This algorithm has several advantages [16, 23, 24, 25]:

  • The training set can include expert knowledge, as well as results of experiments and industrial data which come from manufacturing processes.

  • Decision trees can handle both linguistic and numerical input and output variables (attributes).

  • Results of the prediction process are easy to interpret, clear and close to human reasoning.

  • It can be combined with other algorithms.

  • It is possible to use decision trees even if datasets have missing values.

Decision tree construction requires data preprocessing.


3. Data preprocessing in the inquiry planning problem with the use of QFD

To construct a training set, it is necessary to collect data related to the decision process. Data preprocessing related to inquiry planning can be supported with the use of an attributed model of the product [26, 27].

In the attributed product model, the product functions can be characterised by a set of attributes:

F = {f_1, f_2, …, f_n}    (E1)

Based on a toothed gear example, the set of attributes includes: f1—reducer working arrangement; f2—kind of duty.

Each attribute takes a value from the set Fnw:

F_n^w = {f_{n1}^w, f_{n2}^w, …, f_{nl}^w}    (E2)

An example of a set of attribute values Fnw includes: f11w—parallel axes; f12w—perpendicular axes; f21w—light duty; f22w—medium duty; f23w—heavy duty.

The set of product types was denoted as P:

P = {p_1, p_2, …, p_m}    (E3)

An example of a set of products includes: p1—helical gear; p2—bevel-helical gear.

Each product type p_m includes products p_{m1}, p_{m2}, …, p_{mk}, described by attributes p_{mkz}:

P_{mk} = {p_{mk1}, p_{mk2}, …, p_{mkz}}    (E4)

An example of a set of products includes: p11—one-stage helical-geared reducer mounted on the feet; p12—one-stage helical-geared reducer hanged on the shaft.

An example of a set of product attributes includes: p111—weight; p112—dimensions.

Each attribute pmkz takes a value from the Pmkzw set:

P_{mkz}^w = {p_{mkz1}^w, p_{mkz2}^w, …, p_{mkzt}^w}    (E5)

Product pm consists of modules/elements belonging to the set M:

M = {m_1, m_2, …, m_k}    (E6)

Each element is described by attributes belonging to the set Mk:

M_k = {m_{k1}, m_{k2}, …, m_{kv}}    (E7)

Examples of attributes describing product parts are: mk1—weight; mk2—type of material.

Attribute values belong to the set Mkvw:

M_{kv}^w = {m_{kv1}^w, m_{kv2}^w, …, m_{kvg}^w}    (E8)

Mk* is a set of variants of element mk:

M_k^* = {m_{k1}, m_{k2}, …, m_{kl}}    (E9)

Mkvw is a set of attribute values:

M_{kv}^w = {m_{kv1}^w, m_{kv2}^w, …, m_{kvg}^w}    (E10)
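To make the sets F, P and M concrete in code, the sketch below holds the toothed gear example in plain dictionaries. The structure itself is an assumption; the function and product names follow the text, while the module names and material values are invented for illustration.

```python
# A plain-dictionary sketch of the attributed product model for the toothed
# gear example. Function and product names follow the text; module names and
# material values are invented for illustration.
product_functions = {                     # F, with value sets F_n^w
    "reducer working arrangement": ["parallel axes", "perpendicular axes"],
    "kind of duty": ["light duty", "medium duty", "heavy duty"],
}

product_types = {                         # P, products P_mk with attributes P_mkz
    "helical gear": {
        "one-stage helical-geared reducer mounted on the feet": {"weight": None, "dimensions": None},
        "one-stage helical-geared reducer hanged on the shaft": {"weight": None, "dimensions": None},
    },
    "bevel-helical gear": {},
}

modules = {                               # M, elements with attributes M_kv and values M_kv^w
    "gear wheel (invented example)": {"weight": None, "type of material": ["steel", "cast iron"]},
    "output shaft (invented example)": {"weight": None, "type of material": ["steel"]},
}

for function_name, values in product_functions.items():
    print(f"{function_name}: {values}")
```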

Based on the presented attributed product model, it is possible to analyse product and process attributes. A method helpful in complex product and process development is quality function deployment (QFD), also known as the house of quality. QFD supports meeting customer requirements in product and process design (Figure 7). QFD is a method of analysing data related to customer requirements and to product and process characteristics in industrial plants.

Figure 7.

QFD in decision problem solving.

The QFD matrix (Figure 8) [28] determines the relations between customer needs (denoted as ‘whats’) and design characteristics (denoted as ‘hows’). The top part of the matrix, called the ‘roof’, indicates how design characteristics interact. The right part of the matrix includes an assessment of alternative products. The characteristics of alternative products are presented at the bottom of the matrix. The correlation between the ‘whats’ and ‘hows’ is registered in the middle part of the matrix.

Figure 8.

A QFD matrix.

QFD consists of a series of matrices. The first one represents the relation between customer requirements and product characteristics.

The second one describes the relation between product characteristics and product parts.

The third QFD matrix gives information related to the production process.

The fourth one provides information related to production process parameters [28, 29, 30].

QFD is a customer-oriented method of industrial data analysis which is able to take into consideration numerical as well as linguistic variables. QFD supports relating customer requirements to industrial product and process characteristics (see the calculation sketch below).
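The middle part of the QFD matrix is often used numerically by weighting each relationship strength by the importance of the corresponding customer requirement and summing per design characteristic. The sketch below assumes the conventional 9/3/1 strength scale; the requirements, characteristics and weights are invented for illustration and are not taken from the chapter's example.

```python
# Core QFD calculation sketch: relationship strengths between customer
# requirements ("whats") and design characteristics ("hows"), weighted by
# requirement importance, give a priority score per characteristic.
# The 9/3/1 scale, the weights and the entries are illustrative assumptions.
customer_weights = {"low price": 5, "short delivery time": 3, "low weight": 4}

# relationships[what][how] = strength (9 strong, 3 medium, 1 weak, absent = 0)
relationships = {
    "low price":           {"standard modules": 9, "material grade": 3},
    "short delivery time": {"standard modules": 9},
    "low weight":          {"material grade": 9, "housing design": 3},
}

hows = {"standard modules", "material grade", "housing design"}

scores = {
    how: sum(
        customer_weights[what] * strengths.get(how, 0)
        for what, strengths in relationships.items()
    )
    for how in hows
}

# Characteristics with the highest scores are the design priorities.
for how, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{how}: {score}")
```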

An example of a QFD matrix was developed on the basis of the training dataset presented in Table 3 (Figure 9). The possible values of the chosen attributes characterising products from the customer’s point of view were specified in the left part of the matrix.

| No. | Height (low, high) | Colour (dark, red, white) | Complication (simple, complex) | Decision |
| 1 | Low | White | Simple | w2 |
| 2 | High | White | Complex | w1 |
| 3 | High | Red | Simple | w2 |
| 4 | Low | Dark | Simple | w1 |
| 5 | High | Dark | Simple | w1 |
| 6 | High | White | Simple | w2 |
| 7 | High | Dark | Complex | w1 |
| 8 | Low | White | Complex | w1 |
| 9 | Low | Dark | Simple | w1 |
| 10 | Low | Red | Simple | w2 |

Table 3.

Training dataset.

Figure 9.

An example of QFD matrix.

The first client’s decision is presented in Figure 10. The data from this matrix creates the first record in the training dataset presented in Table 3.

Figure 10.

The first client’s decision.

The second client’s decision is presented in Figure 11; these data create the second record in the training dataset presented in Table 3.

Figure 11.

The second client’s decision.

Decision tree induction starts from the ‘colour’ attribute, followed by the ‘complication’ attribute (Figure 12).

Figure 12.

A decision tree example.
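For comparison with the tree in Figure 12, the snippet below induces a tree from the Table 3 training dataset with scikit-learn (an assumed tool; the chapter does not prescribe one). Because the categorical attributes are one-hot encoded, every split is binary and the printed tree has a different shape from the multiway tree in Figure 12, although the root split still tests the colour attribute and all training cases are classified in the same way.

```python
# Inducing a tree from the Table 3 training dataset with scikit-learn.
# Categorical attributes are one-hot encoded, so the splits are binary.
from sklearn.tree import DecisionTreeClassifier, export_text

cases = [  # Table 3: (height, colour, complication, decision)
    ("low", "white", "simple", "w2"), ("high", "white", "complex", "w1"),
    ("high", "red", "simple", "w2"), ("low", "dark", "simple", "w1"),
    ("high", "dark", "simple", "w1"), ("high", "white", "simple", "w2"),
    ("high", "dark", "complex", "w1"), ("low", "white", "complex", "w1"),
    ("low", "dark", "simple", "w1"), ("low", "red", "simple", "w2"),
]

feature_names = ["height=low", "height=high", "colour=dark", "colour=red",
                 "colour=white", "complication=simple", "complication=complex"]


def encode(height, colour, complication):
    values = {f"height={height}", f"colour={colour}", f"complication={complication}"}
    return [int(name in values) for name in feature_names]


X = [encode(h, c, k) for h, c, k, _ in cases]
y = [decision for _, _, _, decision in cases]

clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
clf.fit(X, y)
print(export_text(clf, feature_names=feature_names))
```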


4. Knowledge extraction from a decision tree: production rules for knowledge representation

Decision tree induction is closely related to rule induction; each path from the root of a decision tree to one of its leaves can be transformed into a rule [16]. Rules are one of the most popular approaches to knowledge representation. Rules, sometimes called IF-THEN rules, can take various forms, e.g.:

Simple rules:

  • IF condition THEN action

  • IF premise THEN conclusion

Complex rules:

  • IF proposition p1 AND proposition p2 are true THEN proposition p3 is true

Some of the benefits of IF-THEN rules are that:

  • they are modular,

  • each rule defines a relatively small and independent piece of knowledge.

For example, paths from the decision tree presented in Figure 12 can be transformed into rules:

  • IF colour = dark THEN w1

  • IF colour = red THEN w2

  • IF colour = white AND complication = simple THEN w2

  • IF colour = white AND complication = complex THEN w1

The resulting set of rules can be transformed to improve its comprehensibility for a human user, and possibly its accuracy [31]; e.g. the rules presented above can be transformed into complex rules, such as:

  • IF colour = dark OR (colour = white AND complication = complex) THEN w1

  • IF colour = red OR (colour = white AND complication = simple) THEN w2
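The path-to-rule transformation can be sketched directly: below, the Figure 12 tree is written as a nested dictionary (an assumed encoding) and every root-to-leaf path is printed as an IF-THEN rule, reproducing the four simple rules listed above.

```python
# Transforming decision tree paths into IF-THEN rules. The tree from
# Figure 12 is encoded as {attribute: {value: subtree-or-decision}}.
tree = {
    "colour": {
        "dark": "w1",
        "red": "w2",
        "white": {"complication": {"simple": "w2", "complex": "w1"}},
    }
}


def extract_rules(node, conditions=()):
    """Walk every root-to-leaf path and yield one rule per leaf."""
    if not isinstance(node, dict):            # leaf: node is a decision
        premise = " AND ".join(f"{a} = {v}" for a, v in conditions)
        yield f"IF {premise} THEN {node}"
        return
    (attribute, branches), = node.items()
    for value, child in branches.items():
        yield from extract_rules(child, conditions + ((attribute, value),))


for rule in extract_rules(tree):
    print(rule)
```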


5. The prediction process

The rules produced in Section 4 can be used for prediction. For a further client, whose requirements are characterised in the left part of the matrix presented in Figure 13, the enterprise should offer product ‘w2’ in the inquiry planning process.

Figure 13.

The second client’s decision.

The decision-making based on the rules produced in Section 4 is presented in Table 4.

| No. | Height (low, high) | Colour (dark, red, white) | Complication (simple, complex) | Decision |
| 11 | Low | Red | Complex | w2 |

Table 4.

Predicted values.
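A short sketch of the prediction step: walking the Figure 12 tree (in the same nested-dictionary encoding assumed earlier) for the new case from Table 4 returns ‘w2’, in line with the rule IF colour = red THEN w2.

```python
# Predicting the decision for the new case in Table 4 by walking the
# Figure 12 tree, encoded as a nested dictionary as in the earlier sketch.
tree = {
    "colour": {
        "dark": "w1",
        "red": "w2",
        "white": {"complication": {"simple": "w2", "complex": "w1"}},
    }
}


def predict(node, case):
    """Follow the branch matching the case's attribute values until a leaf."""
    while isinstance(node, dict):
        (attribute, branches), = node.items()
        node = branches[case[attribute]]
    return node


new_case = {"height": "low", "colour": "red", "complication": "complex"}
print(predict(tree, new_case))  # w2, matching Table 4
```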


6. Conclusions

A decision tree is one of the graph-based methods of machine learning which can be used in decision-making in industry. Among the decision problems met in industry, one of the most important is inquiry planning, which starts from defining the product offered to a particular client. QFD can be applied as a method which facilitates data preprocessing in inquiry planning. Moreover, QFD aids the specification of attributes important from both the customer and engineering points of view.

In decision tree induction, a training set can use categorical and numerical values, as well as linguistic attributes. In training dataset development, the attribute-value system is a useful method of data analysis. The main steps in decision tree induction were presented. Optimal decision tree induction can be achieved with different algorithms, which were not discussed here.

A decision tree is called a ‘white box’ method because of its clarity and intelligibility for humans, which is important in the decision-making process in an industrial context.

The presented approach can be applied in e-commerce systems, which are currently under development in many branches of industry.

References

  1. Moravcik O, Misut M. Decision Support Systems in Manufacturing Systems Management. In: Tzafestas SG, editor. Computer-Assisted Management and Control of Manufacturing Systems. London: Springer; 1997
  2. Macioł A, Macioł P, Jędrusik S, Lelito J. The new hybrid rule-based tool to evaluate processes in manufacturing. The International Journal of Advanced Manufacturing Technology. 2015;79:1733-1745
  3. Mtsniemi T. Operational Decision Making in the Process Industry: Multidisciplinary Approach. VTT Technical Research Centre of Finland; 2008
  4. Kepner C, Tregoe B. The New Rational Manager: An Updated Edition for a New World. Updated ed. Princeton, NJ: Princeton Research Press; 1997
  5. Guo K. DECIDE: A decision-making model for more effective decision making by health care managers. The Health Care Manager. 2008;27(2):118-127
  6. Chryssolouris G. Manufacturing Systems. New York: Springer Science+Business Media; 1992
  7. Su C-T, Lin C-S. A case study on the application of fuzzy QFD in TRIZ for service quality improvement. Quality & Quantity. 2008;42:563-578
  8. Hong TY, Tsai DH. An integrated expert operation planning system with a feature-based design model. The International Journal of Advanced Manufacturing Technology. 1994;9:305-310
  9. Kwaśnicka H. Sztuczna inteligencja i systemy ekspertowe. Rozwój, perspektywy. Wrocław: Wydawnictwo Wyższej Szkoły Bankowości i Finansów; 2005
  10. Mulawka J. Systemy ekspertowe. Warszawa: WNT; 1996
  11. Voisine N, Boullé M, Hue C. A Bayes evaluation criterion for decision trees. In: Guillet F et al., editors. Advances in Knowledge Discovery and Management, SCI 292. Berlin, Heidelberg: Springer-Verlag; 2010. pp. 21-38
  12. Micheal N. Artificial Intelligence: A Guide to Intelligent Systems. Great Britain: Addison Wesley; 2002
  13. Zhou BH, Xi LF, Cao YS. A beam-search-based algorithm for the tool switching problem on a flexible machine. The International Journal of Advanced Manufacturing Technology. 2005;25(9-10):876-882
  14. Kuo Y, Lin K-P. Using neural network and decision tree for machine reliability prediction. The International Journal of Advanced Manufacturing Technology. 2010;50:1243-1251
  15. Shiue Y-R, Guh R-S. The optimization of attribute selection in decision tree-based production control systems. The International Journal of Advanced Manufacturing Technology. 2006;28:737-746
  16. Rokach L, Maimon O. Decision trees. In: Data Mining and Knowledge Discovery Handbook. US: Springer; 2005
  17. Li Y, Guo J-N. Research on pruning algorithm of decision tree. Henan Science. 2009;27(3):320-323
  18. Wang S-S, Sun J-Y, Li L-L. Fault decision tree model in the application of expert system. Computer Applications. 2005;25(s):293-294
  19. Gong Y, Li Y. Motor fault diagnosis based on decision tree-Bayesian network model. In: Jin D, Lin S, editors. Advances in ECWAC, AISC 148. Vol. 1. Berlin, Heidelberg: Springer-Verlag; 2012. pp. 165-170
  20. Gorunescu F. Data Mining: Concepts, Models and Techniques, ISRL 12. Berlin, Heidelberg: Springer-Verlag; 2011. pp. 159-183
  21. Magnani M, Montesi D. Uncertainty in decision tree classifiers. In: Deshpande A, Hunter A, editors. SUM 2010, LNAI. Vol. 6379. 2010. pp. 250-263
  22. Stravinskienė A, Gudas S, Dabrilaite A. Decision tree algorithms: Integration of domain knowledge for data mining. In: Abramowicz W, Domingue J, Węcel K, editors. BIS Workshops, LNBIP 127. 2012. pp. 13-24
  23. Larose DT. Data Mining Methods and Models. Hoboken: John Wiley & Sons, Inc.; 2006
  24. Phillips J, Buchanan BG. Ontology-guided knowledge discovery in databases. In: Proceedings of the 1st International Conference on Knowledge Capture. 2001. pp. 123-130
  25. Kohavi R, Quinlan JR. Decision-tree discovery. In: Klosgen W, Zytkow JM, editors. Handbook of Data Mining and Knowledge Discovery, ch. 16.1.3. Oxford University Press; 2002. pp. 267-276
  26. Kutschenreiter-Praszkiewicz I. Zastosowanie metod sztucznej inteligencji w planowaniu prac przygotowania produkcji elementów maszyn. Zakopane: Komputerowo Zintegrowane Zarządzanie; 2011
  27. Kutschenreiter-Praszkiewicz I. Integration of product design and manufacturing with the use of artificial intelligent methods. Journal of Machine Engineering. 2011;11(1-2):46-53
  28. Kutschenreiter-Praszkiewicz I. Systemy bazujące na wiedzy w technicznym przygotowaniu produkcji części maszyn. Bielsko-Biała: Wydawnictwo Naukowe Akademii Techniczno-Humanistycznej; 2012
  29. Iranmanesh H, Thomson V. Competitive advantage by adjusting design characteristics to satisfy cost targets. International Journal of Production Economics. 2008;115:64-71
  30. Raharjo H, Brombacher AC, Xie M. Dealing with subjectivity in early product design phase: A systematic approach to exploit Quality Function Deployment potentials. Computers & Industrial Engineering. 2008;55:253-278
  31. Quinlan JR. Simplifying decision trees. International Journal of Man-Machine Studies. 1987;27:221-234
