Summary of Literature Review
Knowledge has become the main value driver for modern organizations and has been described as a critical competitive asset. An important factor in the development and application of knowledge-based systems is the knowledge representation technique used. A successful knowledge representation technique provides a means for expressing knowledge as well as facilitating the inference process in both humans and machines. The limitations of symbolic knowledge representation have led to the study of more effective models for knowledge representation.
Malhotra defines the challenge of the information-sharing culture of future knowledge management systems as the integration of decision making and actions across inter-enterprise boundaries. This means that a decision-making process is subject to different constraints; therefore, a method to validate a Decision Support System (DSS) is highly desirable. In the third generation of knowledge management, knowledge representations act as boundary objects around which knowledge processes can be organized. Knowledge is viewed from a constructionist and pragmatic perspective: good knowledge is something that allows flexible and effective thinking and the construction of knowledge-based artifacts.
For any decision, there are many choices the decision maker can select from. The process of selection takes place at a decision point, and the selected decision is a choice. For example, if someone wants to pay for something and the payment mode is either cash or credit card, the payment mode is the decision point; cash and credit card are choices. We can therefore conclude that choices and decision points represent the knowledge objects in a DSS. Choices, decision points, and the constraint dependency rules between them are collectively named variability. Task variability is defined as the number of exceptions encountered in the characteristics of the work, and its importance for system satisfaction has been tested empirically. Although there are many existing approaches for representing knowledge in DSS, the design and implementation of a sound and useful method that considers variability in DSS is much desired.
The term variability generally refers to the ability to change; to be precise, this kind of variability does not occur by chance but is brought about on purpose. In other words, variability is a way to represent choices. Pohl et al. suggest the following three questions to define variability.
What varies? : Identifying precisely the variable item or property of the real world. This question leads us to the definition of the term variability subject (a variable item of the real world or a variable property of such an item).
Why does it vary? : There are different reasons for an item or property to vary: different stakeholders’ needs, different countries’ laws, technical reasons, etc. Moreover, in the case of interdependent items, the reason for an item to vary can be the variation of another item.
How does it vary? : This question deals with the different shapes a variability subject can take. To identify the different shapes of a variability subject, we define the term variability object (a particular instance of a variability subject).
Example of variability
The variability subject “car” identifies a property of real-world items. Examples of variability objects for this variability subject are Toyota, Nissan, and Proton.
The problem of representing variability in a DSS requires a complex representation scheme to capture the static and dynamic phenomena of the choices that can be encountered during the decision process. We believe that the key feature of such a knowledge representation (for variability in a DSS) is its capability of precisely representing diverse types of choices and the associations between them. This involves: i) qualitative or quantitative description of choices and their classification, ii) representation of causal relationships between choices, and iii) the possibility of computerizing the representation.
The main aim of representing variability in DSS is to create a decision repository that contains decision points, their related choices, and the constraint dependency relations between decision points and choices, choices and choices, or decision points and decision points.
Nowadays, the Feature Model (FM) and the Orthogonal Variability Model (OVM) are the best-known techniques for representing variability. Although FM and OVM are successful techniques for modeling variability, some challenges still need to be considered, such as logical inconsistency, dead features, propagation and delete-cascade, and explanation and corrective recommendation. Inconsistency detection is regarded as a challenging operation for validating variability. The source of logical inconsistency has been traced to skill-based or rule-based errors, which include errors made in touch-typing, in copying values from one list to another, or in other activities that do not require a high level of cognitive effort. One of the main drawbacks of fusing several different and partial views is logical inconsistency. A dead feature is a frequent error in feature-model-based variability; in the context of DSS we call it a dead choice. Variability modeling methods must consider constraint dependency rules to assure the correctness of decisions. The propagation and delete-cascade operation is proposed to support automatic selection of choices in the decision-making process; it is a critical operation in a semi-automatic environment.
This paper defines a rule-based approach for representing and validating knowledge in DSS. In addition to representing variability to capture knowledge in DSS, intelligent rules are defined to validate the proposed knowledge representation. The proposed method performs two kinds of validation. The first validates the decision repository, in which logical inconsistencies and dead choices are detected. The second validates the decision-making process by providing automated constraint dependency checking, explanation and corrective recommendation, and propagation and delete-cascade.
This paper is structured as follows: the literature is surveyed in Section 2. Knowledge representation in DSS using variability is demonstrated in Section 3. Knowledge validation is illustrated in Section 4, and the implementation is discussed in Section 5. Section 6 contains the conclusion and future work.
2. Related work
The aim of knowledge representation is to facilitate effective knowledge management, which requires expressive representation and efficient reasoning in humans. Related work in this area is summarized as follows:
Haas investigated the feasibility of developing an overarching knowledge representation for Bureau of Labor Statistics information that captures its semantics, including concepts, terminology, actions, sources, and other metadata, in a uniformly applicable way; Haas suggested the ISO/IEC 11179 metadata standard as the knowledge representation technique. Molina reported the advantages of using a knowledge-modeling software tool to help developers build a DSS, describing the development of a DSS called SAIDA in which knowledge is represented as components designed with the Knowledge Structure Manager (KSM). KSM is a knowledge-modeling tool that includes and extends the task-method-domain paradigm followed by different knowledge engineering methodologies. KSM provides a library of reusable software components, called primitives of representation, that give the developer the freedom to select the most convenient representation for each case (rules, frames, constraints, belief networks, etc.).
Froelich and Wakulicz-Deja investigated the problems of representing knowledge for a DSS in the field of medical diagnosis and suggested a new model of associational cognitive maps (ACM); the ability to represent and reason with structures of causally dependent concepts is the theoretical contribution of ACM. Antal proposed Bayesian networks as a knowledge representation technique for capturing multiple points of view. The technique proposed in that work reflects multiple points of view and surpasses the plain Bayesian network by describing dependency constraint rules and an auto-explanation mechanism. Lu et al. developed a knowledge-based multi-objective DSS that considers both declarative and procedural knowledge: declarative knowledge is a description of facts, with information about real-world objects and their properties, while procedural knowledge encompasses problem-solving strategies and arithmetic and inferential knowledge. Lu et al. used text, tables, and diagrams to represent knowledge.
Brewster and O’Hara demonstrate the difficulty of representing skills, distributed knowledge, or diagrammatic knowledge using ontologies. Pomerol et al. used artificial intelligence decision trees to represent operational knowledge in DSS. Christiansson proposed the semantic web and temporal databases as knowledge representation techniques for a new generation of knowledge management systems. One of the most sophisticated knowledge-modeling methodologies is CommonKADS, which explains how to model a knowledge application through structured top-down analysis of the problem domain. The outcome of the modeling process according to CommonKADS consists of three layers, called the contextual model, the conceptual model, and the design model; however, CommonKADS does not provide a mechanism to define relations between objects or between layers. Padma and Balasubramanie used a traditional knowledge-based tool to define a DSS. Williams described the benefits of using ontologies and argumentation for DSS. Suh applied a Database Management System (DBMS) in a two-phased decision support system for resource allocation.
To the best of our knowledge, there is no specific method for handling variability as a knowledge representation technique in DSS. In addition to representing variability, our proposed method addresses the main challenges in variability representation: constraint dependency rules, explanation, propagation and delete-cascade, logical inconsistency detection, and dead decision detection. Table 1 summarizes the previous work on knowledge representation and validation in DSS. The columns are denoted as follows: KR for Knowledge Representation, CDR for Constraint Dependency Rules, Expl for Explanation, Pro and DC for Propagation and Delete-Cascade, LID for Logical Inconsistency Detection, and DDD for Dead Decision Detection.
| Technique | Ref. | KR | Reasoning | CDR | Expl | Pro and DC | LID | DDD | Gap |
|---|---|---|---|---|---|---|---|---|---|
| Traditional artificial intelligence knowledge representation techniques such as frames, decision trees, belief networks, etc. | | | | | | | | | |
| Associational cognitive maps (ACM) | 8 | Yes | Yes | No | No | No | No | No | 5/7 |
| Text, tables and diagrams | 13 | Yes | No | No | No | No | No | No | 6/7 |
| Temporal database and Semantic Web | 4 | Yes | Yes | Yes | Yes | No | No | No | 3/7 |
| Three layer modeling (KADS) | 24 | Yes | Yes | No | No | No | No | No | 5/7 |
Table 1 clarifies the need for a new method of representing and validating knowledge in DSS. Although the logical inconsistency and dead decision problems are well studied in variability representation methods, some works in the literature have reported these problems in DSS as well.
3. Knowledge representation in DSS using variability
In this section, we define variability in DSS and then describe how to represent variability in a DSS using First Order Logic (FOL).
3.1. Define variability in DSS
By variability in DSS, we mean the representation of choices (considering the dependency constraint rules between them). In the choice phase, the decision maker chooses a solution to the problem or opportunity. DSS help by reminding the decision maker what methods of choice are appropriate for the problem and by organizing and presenting the information. Hale states that "relationships between knowledge objects such as kind-of, and part-of become more important than the term itself". We can view choices as knowledge objects. The proposed method defines and deals with these types of relationships and with dependency constraints between choices such as require and exclude.
There is no standard variability representation for a DSS. The proposed method can enhance both readability and clarity in the representation of variability in DSS; consequently, it represents variability with a high degree of visualization (using a graph representation) while considering the constraints between choices. As mentioned in the definition of variability, there are two items: the variability subject and the variability object. We suggest treating a decision point as a variability subject and its choices as variability objects.
A reward system as an example:
Reward systems can range from simple ones to sophisticated ones with many alternatives. Rewarding is closely related to performance management. Both rewarding and performance measurement are difficult tasks due to the decision variability that takes place in the different activities of the human resources cycle, as can be seen in Figure 1.
3.2. Representing variability in DSS using first order logic
In this sub-section, the notation of the proposed method is explained, and its syntax and semantics (the most important factors for knowledge representation methods) are defined. The proposed method is composed of two layers. The upper layer is a graphical diagram whose function is to provide a visual picture. The lower layer is a representation of the upper layer in the form of predicates, and its aim is to provide a reasoning tool. One can view the upper layer as a user interface and the lower layer as source code. In the lower layer, decision points, choices, and constraint dependency rules are represented using predicates. The output of this process is a complete model of variability in DSS as a knowledge base; in other words, the process creates a decision repository based on the two layers. This decision repository contains all decisions (choices) grouped by decision points. The proposed method validates both the decision repository and the decision-making process.
3.2.1. Upper layer representation of the proposed method (FM-OVM)
In this layer, we combine the FM diagram with OVM notations. Figure 1 represents the upper layer of our proposed method. Optional and mandatory constraints are defined in Figure 1 by the original FM notations, and constraint dependency rules are described using OVM notations. OVM and FM can easily become very complex when validating even a medium-size system, i.e., one with several thousand decision points and choices. This motivates us to develop an intelligent method for representing and validating variability in DSS.
3.2.2. Lower layer of the proposed method
Decision points, choices, and constraint dependency rules are used to describe variability. The constraint dependency rules are: decision point requires or excludes decision point, choice requires or excludes choice, and choice requires or excludes decision point. In this sub-section, decision points, choices, and dependency constraint rules are described using predicates as the lower layer of the proposed method. The examples are based on Figure 1. Terms beginning with a capital letter represent variables and terms beginning with a lower-case letter represent constants. Table 2 shows the representation of decision points, choices, and constraint dependency rules as predicates.
A decision point is a point at which the decision maker selects one or more choices; in Figure 1, negative performance is an example of a decision point.
Syntax: type(Name1, decisionpoint).
Semantic: Identifies the type of Name1 as a decision point.
Syntax: choiceof (Name1, Name2).
Semantic: Identifies the choices of a specific decision point.
Syntax: max(Name, int).
Semantic: Identifies the maximum number of allowed selections at a decision point.
Syntax: min(Name, int).
Semantic: Identifies the minimum number of allowed selections at a decision point.
The common choice(s) in a decision point is/are not counted in the maximum-minimum numbers of selections.
Syntax: common(Name1, yes). common(Name2, no).
Semantic: Describes the commonality of a decision point.
A choice is a decision belonging to a specific decision point; for instance, in Figure 1, time on is a choice belonging to the decision point negative performance.
Syntax: type(Name1, choice).
Semantic: Identifies the type of Name1 as a choice.
Syntax: common(Name1, yes). common(Name2, no).
Semantic: Describes the commonality of a choice, e.g. common(first reminder, yes). If the choice is not common, the second slot in the predicate is no, as in common(time on, no).
Constraint dependency rules
The following six predicates are used to describe constraint dependency rules:
Syntax: requires_c_c(Name1, Name2).
Semantic: choice requires another choice.
Syntax: excludes_c_c (Name1, Name2).
Semantic: choice excludes choice.
Syntax: requires_c_dp(Name1, Name2).
Semantic: Choice requires decision point.
Syntax: excludes_c_dp(Name1, Name2).
Semantic: Choice excludes decision point.
Syntax: requires_dp_dp(Name1, Name2).
Semantic: Decision point requires another decision point.
Syntax: excludes_dp_dp(Name1, Name2).
Semantic: Decision point excludes another decision point. Name1 represents the first decision point and Name2 the second one.
type(negative performance, decisionpoint).
choiceof(negative performance, time on).
choiceof(negative performance, N% salary).
common(negative performance, no).
min(negative performance, 1).
max(negative performance, 2).
requires_dp_dp(negative performance, punishment).
excludes_dp_dp(negative performance, positive performance).
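The predicates above can be queried mechanically. The paper's implementation is in Prolog; the following Python sketch is illustrative only, with the facts taken verbatim from the listing above, and shows how the decision repository can be stored and queried as predicate tuples.

```python
# A sketch (not the paper's Prolog code) of the lower-layer facts for the
# "negative performance" decision point of Figure 1.
facts = [
    ("type", ("negative performance", "decisionpoint")),
    ("choiceof", ("negative performance", "time on")),
    ("choiceof", ("negative performance", "N% salary")),
    ("common", ("negative performance", "no")),
    ("min", ("negative performance", 1)),
    ("max", ("negative performance", 2)),
    ("requires_dp_dp", ("negative performance", "punishment")),
    ("excludes_dp_dp", ("negative performance", "positive performance")),
]

def query(pred, first):
    """Return all facts matching a predicate name and its first argument."""
    return [args for (name, args) in facts if name == pred and args[0] == first]

# The decision point has two choices and a 1..2 selection cardinality.
print(query("choiceof", "negative performance"))
print(query("min", "negative performance"), query("max", "negative performance"))
```

A Prolog engine would answer the same questions with goals such as choiceof(negative performance, X); the tuple form above is only a convenient stand-in.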
In addition to these predicates, we define two more predicates, select(Name) and notselect(Name), which record whether a decision has been selected or rejected during the decision-making process.
4. Variability validation in DSS
Although variability is proposed as a knowledge representation technique that provides a decision repository, validating this repository and the decision-making process is equally important.
In a decision-making process, a decision maker selects choice(s) from each decision point. The proposed method guides the decision maker by: 1) validating the constraint dependency rules, 2) automatically selecting decisions (propagation and delete-cascade), and 3) providing explanation and corrective recommendation. In addition, the proposed method validates the decision repository by detecting dead choices and logical inconsistencies. In this section, six operations are illustrated; these operations are implemented using Prolog.
4.1. Validating the decision making process
4.1.1. Constraint dependency satisfaction
To validate the decision-making process, the proposed method triggers rules based on constraint dependencies. Based on the constraint dependency rules, the selected choice is either accepted or rejected; if rejected, the reason is given and corrective actions are suggested. When the decision maker selects a new choice, other choice(s) may be selected or deselected automatically by the rules.
1. ∀ x, y: type(x, choice) ∧ type(y, choice) ∧ requires_c_c(x, y) ∧ select(x) ⟹ select(y)
2. ∀ x, y: type(x, choice) ∧ type(y, choice) ∧ excludes_c_c(x, y) ∧ select(x) ⟹ notselect(y)
3. ∀ x, y: type(x, choice) ∧ type(y, decisionpoint) ∧ requires_c_dp(x, y) ∧ select(x) ⟹ select(y)
4. ∀ x, y: type(x, choice) ∧ type(y, decisionpoint) ∧ excludes_c_dp(x, y) ∧ select(x) ⟹ notselect(y)
5. ∀ x, y: type(x, decisionpoint) ∧ type(y, decisionpoint) ∧ requires_dp_dp(x, y) ∧ select(x) ⟹ select(y)
6. ∀ x, y: type(x, decisionpoint) ∧ type(y, decisionpoint) ∧ excludes_dp_dp(x, y) ∧ select(x) ⟹ notselect(y)
7. ∀ x, y: type(x, choice) ∧ type(y, decisionpoint) ∧ select(x) ∧ choiceof(y, x) ⟹ select(y)
8. ∃ x ∀ y: type(x, choice) ∧ type(y, decisionpoint) ∧ select(y) ∧ choiceof(y, x) ⟹ select(x)
9. ∀ x, y: type(x, choice) ∧ type(y, decisionpoint) ∧ notselect(y) ∧ choiceof(y, x) ⟹ notselect(x)
10. ∀ x, y: type(x, choice) ∧ type(y, decisionpoint) ∧ common(x, yes) ∧ choiceof(y, x) ∧ select(y) ⟹ select(x)
11. ∀ y: type(y, decisionpoint) ∧ common(y, yes) ⟹ select(y)
12. ∀ x, y: type(x, choice) ∧ type(y, decisionpoint) ∧ choiceof(y, x) ∧ select(x) ⟹ sum(y, (x)) ≤ max(y, z)
13. ∀ x, y: type(x, choice) ∧ type(y, decisionpoint) ∧ choiceof(y, x) ∧ select(x) ⟹ sum(y, (x)) ≥ min(y, z)
Table 3 shows the main rules in our proposed method, which contains thirteen main rules to validate the decision-making process. Rules 1 through 6 validate the constraint dependency rules. Rules 7 through 9 deal with the relationships between decision points and their choice(s). Rules 10 and 11 satisfy the common property of decision points and choices. Rules 12 and 13 validate the maximum and minimum properties of decision points. Appendix A describes the proposed rules in detail.
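As a rough illustration of how rules 1 and 2 behave, the following Python sketch (not the paper's Prolog implementation) propagates select and notselect when a choice is selected and rejects a conflicting selection. The choice names cash and credit card come from the payment example of Section 1; receipt is an invented placeholder.

```python
# Illustrative requires/excludes edges (x, y): x requires y, x excludes y.
requires = {("cash", "receipt")}
excludes = {("cash", "credit card")}

def apply_selection(selected, notselected, x):
    """Try to select x; return updated sets or raise on a conflict."""
    if x in notselected:
        raise ValueError(f"{x} is excluded by a previous selection")
    selected = selected | {x}
    for (a, b) in requires:
        if a == x:
            selected |= {b}                      # rule 1: propagate requires
    for (a, b) in excludes:
        if a == x:
            notselected = notselected | {b}      # rule 2: propagate excludes
    if selected & notselected:
        raise ValueError("constraint dependency violation")
    return selected, notselected

sel, notsel = apply_selection(set(), set(), "cash")
print(sorted(sel), sorted(notsel))   # ['cash', 'receipt'] ['credit card']
```

A full implementation would also cover rules 3 through 13 (decision-point constraints, commonality, and cardinality); this fragment only shows the propagation pattern.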
4.1.2. Propagation and delete-cascade
This operation defines how some choices are selected automatically as a reaction to the previous selection of other choices.
Definition 1: Selection of a choice propagates to the choices it requires:
∀ x, n: type(x, choice) ∧ type(n, choice) ∧ select(x) ∧ requires_c_c(x, n) ⟹ select(n).
If the choice x is selected and x requires the choice n, then n is selected automatically (propagation). If the choice x is later deselected, every choice that was selected only as a consequence of x must be deselected as well (delete-cascade).
This operation validates the automated decision-making process during execution time. The following scenario describes the problem: for all choices that were selected automatically as a consequence of a choice x, deselecting x must trigger their deselection; otherwise the repository is left in an invalid state.
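Definition 1 and the delete-cascade scenario can be sketched as follows. This Python fragment is illustrative (the paper implements these operations in Prolog) and the choice names a, b, and c are placeholders.

```python
# Illustrative requires_c_c relation: a requires b, b requires c.
requires_c_c = {"a": {"b"}, "b": {"c"}}

def propagate(x):
    """Transitive closure of requires starting from x (propagation)."""
    result, stack = set(), [x]
    while stack:
        c = stack.pop()
        if c not in result:
            result.add(c)
            stack.extend(requires_c_c.get(c, ()))
    return result

def delete_cascade(selected, x):
    """Deselect x and every choice selected only as a consequence of x."""
    return selected - propagate(x)

selected = propagate("a")                      # selecting a pulls in b and c
print(sorted(delete_cascade(selected, "b")))   # removing b cascades to c: ['a']
```

The cascade here is conservative: it removes everything reachable from the deselected choice, which matches the scenario above under the assumption that those choices were selected only because of it.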
4.1.3. Explanation and corrective recommendation
This operation is defined (in this paper) for highlighting the sources of errors within the decision-making process. These errors occur due to violations of the constraint dependency rules.
The general pattern that represents a failure is a requested selection that violates an exclusion constraint. In the proposed method, there are two possibilities for the decision: a decision point or a choice, and three possibilities for the exclusion constraint: choice excludes choice, choice excludes decision point, or decision point excludes decision point. We assign the predicate notselect to the excluded decision.
The following rules describe these possibilities: if the selection of a choice or decision point y is requested while a previously selected choice or decision point x excludes y (via excludes_c_c, excludes_c_dp, or excludes_dp_dp), the selection is rejected, the exclusion constraint is reported as the reason, and deselecting x is suggested as the corrective action.
Two examples show how the proposed method guides the decision maker with explanation and corrective recommendation. Example 1 shows the interactive corrective recommendation mechanism; Example 2 shows how the proposed method validates the decision maker's future selections based on the current ones.
Suppose the decision maker has selected the choice high level and then asks to select the choice decrease level, which excludes high level. The proposed method guides the decision maker step by step (at each choice): if the decision maker's choice is invalid, the choice is immediately rejected and corrective actions are suggested, see Example 1. Moreover, as Example 2 shows, the consequences of the current selection are added to the knowledge base so that future selections are validated against them.
? select(decrease level).
You have to deselect high level.
? select(non promotion decision).
notselect(positive performance) added to knowledge base.
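The interaction of Example 1 can be simulated with a small sketch. The choice names decrease level and high level come from the example above; the rejection message paraphrases the paper's output, and the Python form is only illustrative of the Prolog implementation.

```python
# Illustrative exclusion constraint taken from Example 1.
excludes_c_c = {("decrease level", "high level")}

def select(selected, x):
    """Accept x, or explain the conflict and recommend a correction."""
    for (a, b) in excludes_c_c:
        if a == x and b in selected:
            return selected, f"rejected: {x} excludes {b}; you have to deselect {b}"
    return selected | {x}, f"{x} selected"

state, msg = select({"high level"}, "decrease level")
print(msg)   # the selection is rejected with a corrective recommendation
```

The key point is that the rejection message names both the violated constraint and the corrective action, rather than merely failing.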
4.2. Validating the decision repository
4.2.1. Logical inconsistency detection
Inconsistency arises from contradictions in the constraint dependency rules and is a very complicated problem. Inconsistency has different forms; it can occur between groups (for example, (A and B) require (D and C) while (D and C) exclude (A and B)), between a group and an individual (for example, (A and B) require D while D excludes (A and B)), or between individuals only (for example, A requires B, B requires C, and C excludes A), where A, B, C, and D can be choices or decision points.
In this paper, we suggest rules to detect logical inconsistency between individuals. The rules that can be used to detect logical inconsistency (between individuals) are categorized in three groups. Each group contains two rules.
Group 1: in this group, we discuss the constraint dependency relation between two decisions of the same type (decision point or choice).
The first decision requires the second one while the second excludes the first. The logical inconsistency between two decisions can also be indirect, e.g. A requires B, B requires C, and C excludes A; therefore, to reduce an indirect inconsistency to a direct one between two decisions, we define transfer rules that close the requires-relation transitively.
The following rules detect inconsistency in group 1: if a choice x requires a choice y while y excludes x, a logical inconsistency is reported; likewise, if a decision point x requires a decision point y while y excludes x, a logical inconsistency is reported.
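The transfer rules plus the group-1 check can be sketched as a transitive closure over the requires-relation followed by a direct-contradiction test. This Python fragment is an illustration (the paper's rules are in FOL/Prolog) using the A-B-C example above.

```python
def inconsistent(requires, excludes):
    """Return pairs (x, y) where x transitively requires y and y excludes x."""
    closure = set(requires)
    changed = True
    while changed:                      # naive transitive closure (transfer rules)
        changed = False
        for (a, b) in list(closure):
            for (c, d) in requires:
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return [(x, y) for (x, y) in closure if (y, x) in excludes]

# A requires B, B requires C, C excludes A  ->  indirect inconsistency
print(inconsistent({("A", "B"), ("B", "C")}, {("C", "A")}))  # [('A', 'C')]
```

The same function works whether A, B, and C denote choices or decision points, matching the group-1 restriction to decisions of the same type.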
Group 2: in this group, we discuss the constraint dependency relation between two decision points when, at the same time, there is a contradictory relation between one choice (belonging to the first decision point) and the second decision point. For example, if a common choice of the first decision point excludes the second decision point while the first decision point requires the second, a logical inconsistency is reported; similarly, if a choice of the first decision point requires the second decision point while the first decision point excludes the second, a logical inconsistency is reported.
Group 3: in this group, we discuss the constraint dependency relation between two decision points when, at the same time, there is a contradictory relation between their choices; for example, a common choice of the first decision point excludes a common choice of the second while the first decision point requires the second.
4.2.2. Dead decision detection
A dead decision is a decision that never appears in any legal decision process. The only reason that prevents a decision from being included in any decision process is that a common choice or common decision point excludes it. According to the proposed method, there are two possibilities for a decision (decision point or choice), two possibilities for the excluding decision (A) to be common, and three possibilities for the exclusion constraint.
Two possibilities for decision (A) to be common:
1. Common decision point.
2. Common choice belonging to a common decision point.
Three possibilities for the exclusion constraint:
3. Choice excludes choice.
4. Choice excludes decision point.
5. Decision point excludes decision point.
If we apply the two possibilities of the common decision to the three possibilities of the exclusion constraint, we obtain six possibilities for satisfying the general form of a dead decision: (1, 3), (1, 4), (1, 5), (2, 3), (2, 4), and (2, 5). The possibilities (1, 3), (1, 4), and (2, 5) are excluded because the type of decision A does not match the type of the excluding decision in the constraint.
Rule (i) represents case (1, 5), rule (ii) represents case (2, 3), and rule (iii) represents case (2, 4): a decision point is dead if a common decision point excludes it, a choice is dead if a common choice excludes it, and a decision point is dead if a common choice excludes it.
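Case (1, 5), a common decision point excluding another decision point, can be sketched as follows. The decision point names are invented for illustration; the paper's rules are expressed in FOL/Prolog.

```python
# Illustrative facts: "performance review" is a common (always-selected)
# decision point that excludes "ad-hoc bonus".
common_dps = {"performance review"}
excludes_dp_dp = {("performance review", "ad-hoc bonus")}

def dead_decisions():
    """Decision points excluded by a common decision point are dead."""
    return {y for (x, y) in excludes_dp_dp if x in common_dps}

print(sorted(dead_decisions()))   # ['ad-hoc bonus']
```

Cases (2, 3) and (2, 4) follow the same pattern with a common choice as the excluding decision.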
5. Scalability testing
Scalability is recognized as a key factor in measuring the applicability of techniques dealing with variability modeling. The execution time is the measurement key for scalability: a system is considered scalable for a specific problem if it can solve the problem in a reasonable time.
In this section, we discuss the experiments developed to test the scalability of the proposed method.
5.1. The experiment
In the following, we describe the method of our experiment:
Generate the decision repository: the repository is generated in terms of predicates (decision points and choices). We generated four sets containing 1,000, 5,000, 15,000, and 20,000 choices. Choices are named by numbers in sequential order; for example, in the first set (1,000 choices) the choices are 1, 2, 3, …, 1000, and in the last set (20,000 choices) the choices are 1, 2, 3, …, 20000. The number of decision points in each set is equal to the number of choices divided by five, which means each decision point has five choices.
Define the assumptions: we make three assumptions: i) each decision point and choice has a unique name, ii) each decision point is orthogonal, and iii) all decision points have the same number of choices.
Set the parameters: the main parameters are the number of choices and the number of decision points. The remaining eight parameters (common choice, common decision point, choice requires choice, choice excludes choice, decision point requires decision point, decision point excludes decision point, choice requires decision point, and choice excludes decision point) are defined as percentages, with three ratios: 10%, 25%, and 50%. The number of parameters related to choices (common choice, choice requires choice, choice excludes choice, choice requires decision point, and choice excludes decision point) is defined as a percentage of the number of choices; the number of parameters related to decision points (such as decision point requires decision point) is defined as a percentage of the number of decision points. Table 6 shows snapshots of an experiment's dataset, i.e., the decision repository in our experiments.
Calculate the output: for each set, we ran thirty experiments and report the average execution time. The experiments cover the range of 1,000-20,000 choices and the percentage ratios 10%, 25%, and 50%.
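The generation procedure above can be sketched as follows. Only the set sizes, the five-choices-per-decision-point rule, and the percentage ratios come from the text; the random pairing of requires-rules and the fixed seed are assumptions made for the sketch.

```python
import random

def generate_repository(n_choices, ratio, seed=0):
    """Generate a synthetic decision repository of the kind used in the experiments."""
    rng = random.Random(seed)
    choices = list(range(1, n_choices + 1))
    n_dps = n_choices // 5                         # five choices per decision point
    choiceof = {c: (c - 1) // 5 + 1 for c in choices}
    n_rules = int(n_choices * ratio)               # e.g. 10%, 25%, or 50% of choices
    requires = [tuple(rng.sample(choices, 2)) for _ in range(n_rules)]
    return choices, n_dps, choiceof, requires

choices, n_dps, choiceof, requires = generate_repository(1000, 0.10)
print(n_dps, len(requires))   # 200 decision points, 100 requires-rules
```

The other constraint predicates (excludes, common, choice-to-decision-point rules) would be generated the same way, each as its own percentage of the relevant population.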
In the following section, the experiments on dead decision detection, explanation, and logical inconsistency detection are discussed. The remaining two operations (constraint dependency satisfaction, and propagation and delete-cascade) operate in a semi-automatic decision environment, where some decisions are propagated automatically according to the decisions made; in such an environment, scalability is not a critical issue.
5.2. Empirical results
Dead decision detection: Figure 2 illustrates the average execution time. For 20,000 choices and 50% constraint dependency rules, the execution time is 3.423 minutes, which can be considered reasonable. The output of each experiment is a result file containing the dead decisions.
Explanation and corrective recommendation: this operation defines the source of error that may occur when a new choice is selected. To evaluate its scalability, we define an additional parameter, the predicate select(C), where C is a random choice; this predicate simulates the decision maker's selection. A number of select predicates (defined as a percentage of the number of choices) is added to the knowledge base for each experiment, and the choice C is chosen randomly (within the scope of the choices). Figure 3 illustrates the average execution time. The output of each experiment is a result file containing the selected choices and the directive messages.
Logical inconsistency detection: Figure 4 illustrates the average execution time to detect inconsistency in repositories ranging from 1,000 to 20,000 choices.
6. Conclusion and future work
Representing knowledge objects and the relations between them is one of the main issues in modern knowledge representation techniques. We suggest variability for representing knowledge objects in DSS. By introducing variability to represent knowledge in DSS, we obtain both a formalized knowledge representation in a decision repository and support for the decision-making process through validation operations. Decision selection processes are validated using the constraint dependency rules, propagation and delete-cascade, and explanation and corrective recommendation operations. The decision repository is validated by detecting logical inconsistencies and dead choices. As stated in the literature, by "developing and using a mathematical model in a DSS, a decision maker can overcome many knowledge-based errors"; for this reason, the proposed method is supported by FOL rules.
We plan to test and validate this work using real data and real life case studies from industry. In addition, new operations are needed to validate DSS.
7. Appendix A
Explanation of the rules in Table 3:
1. For all choices x and y: if x requires y and x is selected, then y must be selected.
2. For all choices x and y: if x excludes y and x is selected, then y must not be selected.
3. For all choice x and decision point y: if x requires y and x is selected, then y must be selected.
4. For all choice x and decision point y: if x excludes y and x is selected, then y must not be selected.
5. For all decision points x and y: if x requires y and x is selected, then y must be selected.
6. For all decision points x and y: if x excludes y and x is selected, then y must not be selected.
7. For all choice x and decision point y: if x belongs to y and x is selected, then y must be selected. This rule determines the selection of a decision point if one of its choices was selected.
8. For all decision point y: if y is selected, then at least one choice x belonging to y must be selected.
9. For all choice x and decision point y: if x belongs to y and y is not selected, then x must not be selected.
10. For all choice x and decision point y: if x is a common choice belonging to y and y is selected, then x must be selected.
11. For all decision point y: if y is common, then y must be selected.
12. For all choice x and decision point y: if x belongs to y and x is selected, then the number of selected choices of y must not be greater than the maximum number allowed to be selected from y.
13. For all choice x and decision point y: if x belongs to y and x is selected, then the number of selected choices of y must not be less than the minimum number allowed to be selected from y.
Rules 12 and 13 validate the number of choice selections considering the maximum and minimum conditions in the decision point definition (cardinality). The predicate sum(y, (x)) returns the number of selected choices belonging to decision point y.
Antal, P. (2008). "Integrative Analysis of Data, Literature, and Expert Knowledge by Bayesian Networks", PhD Thesis, Katholieke Universiteit Leuven.
Batory, D., Benavides, D., Ruiz-Cortés, A. (2006). "Automated Analyses of Feature Models: Challenges Ahead", Communications of the ACM, 49(12), 45-47.
Brewster, C., O'Hara, K. (2004). "Knowledge Representation with Ontologies: The Present and Future", IEEE Intelligent Systems, 19(1).
Christiansson, P. (2003). "Next Generation Knowledge Management Systems for the Construction Industry", Paper w78-2003-80, Proceedings 'Construction IT: Bridging the Distance', CIB Publication 284, Auckland, 80-87.
Celderman, M. (1997). "Task Difficulty, Task Variability, Satisfaction with Management Support Systems: Consequences and Solutions", Research Memorandum 1997-53, Vrije Universiteit Amsterdam.
Czarnecki, K., Hwan, C., Kim, P. (2005). "Cardinality-based Feature Modeling and Constraints: A Progress Report", International Workshop on Software Factories at OOPSLA'05, San Diego, California.
Densham, P. J. (1991). "Spatial Decision Support Systems", in Geographical Information Systems: Principles and Applications, Longman, London, 403-412.
Froelich, W., Wakulicz-Deja, A. (2008). "Associational Cognitive Maps for Medical Diagnosis Support", Intelligent Information Systems Conference, Zakopane, Poland, 387-396.
Grégoire, E. (2005). "Logical Traps in High-Level Knowledge and Information Fusion", Specialist Meeting on Information Fusion for Command Support (IST-055/RSM-001), The Hague, The Netherlands.
Haas, S. W. (1999). "Knowledge Representation, Concepts, and Terminology: Toward a Metadata Registry for the Bureau of Labor Statistics", Final Report to the United States Bureau of Labor Statistics, School of Information and Library Science, University of North Carolina at Chapel Hill.
Hale, P., Scanlan, J., Bru, C. (2003). "Design and Prototyping of Knowledge Management Software for Aerospace Manufacturing", 10th ISPE International Conference on Concurrent Engineering, Madeira Island, Portugal.
Kang, K., Hess, J., Novak, W., Peterson, S. (1990). "Feature Oriented Domain Analysis (FODA) Feasibility Study", Technical Report CMU/SEI-90-TR-21, Software Engineering Institute, Carnegie Mellon University.
Lu J. Quaddus M. A. Williams R. “. 2000Developing a Knowledge-Based Multi-Objective Decision Support System”, the 33rd Hawaii International Conference on System Sciences, Maui, Hawaii,
Malhotra Y. 2002Why knowledge management fails? Enablers and constraints of knowledge management in human enterprises, handbook of knowledge management, chapter 30, springer, 568 576
Mikulecky P. Olsevicov´a K. Ponce D. ”. 2007Knowledge-based approaches for river basin management”,
Molina M. “. 2005Building a decision support system with a knowledge modeling tool”,
Martinez, S.I. 2008A formalized Multiagent decision support in cooperative environments, Doctoral Thesis, Girona Catalonia, Spain,.
Padma, T., Balasubramanie, P., 2009Knowledge based decision support system to assist work-related risk analysis in musculoskeletal disorder,
IMIA Yearbook of Medical Informatics, Reinhold Haux, Casimir Kulikowski, Schattauer, Stuttgart, Germany, BMIR- Peleg M. Tu S. W. 2006 Decision Support. Knowledge Representation. Management in. Medicine 2006 1088
Pohl k. Böckle G. Linden F. J. 2005
Pomerol, J., Brezillon, P. Pasquier, L. 2001, “Operational Knowledge Representation for Practical Decision Making”, The 34th Hawaii International Conference on System Sciences,.
littlefield, Roth B. M. Mullen J. D. 2002 Decision Making. Its Logic. And practice. Lanham M. D. Rowman
Segura S., 2008”Automated Analysis of Feature Models using Atomic Sets”, the 12th international conference of software product line, Limerick Irland,.
Schreiber, G., Akkermans, H., Anjewierden, A., Dehoog, R. Shadbolt, N. Vandevelde, W. Wieling, B, 1999,
Suh C.K. 2007An Integrated Two-Phased Decision Support System for Resource Allocation”,
Tuomi I., 2002” The Future of Knowledge Management”, Lifelong Learning in Europe (LLinE), vol VII, issue 2, pp. 69 79
The Impact of DSS Use and Information Load on Errors and Decision Quality”, Working Papers on Information Systems, Indiana University, USA, Sprouts: Working Papers on Information Systems, 4(22). Williams M. Dennis A. R. Stam A. Aronson J. “. 4 22 2004
Williams M. H. 2008Integrating Ontologies and Argumentation for decision-making in breast cancer, PhD Thesis, University College London,
Wielemaker J. 2007: SWI-Prolog (Version 5.6.36) free software, Amsterdam, University of Amsterdam,