Open access

Designing Effective Forecasting Decision Support Systems: Aligning Task Complexity and Technology Support

Written By

Monica Adya and Edward J. Lusk

Submitted: March 8th, 2012 Published: October 17th, 2012

DOI: 10.5772/51255


1. Introduction and Motivation

Forecasting is critical to the successful execution of an organization’s operational and strategic functions, such as the delivery of a cost-effective and efficient supply chain [1]. The complex and dynamic organizational environment that defines much of today’s forecasting function is often supported by a range of technology solutions, or forecasting decision support systems (FDSS). Typically, FDSS integrate managerial judgment, quantitative methods, and databases to aid the forecaster in accessing, organizing, and analyzing forecasting-related data and judgments [1-2]. Forecasting task complexity can negatively impact forecast reliability, accuracy, and performance [3-4]. Specifically, it can influence two elements of forecaster behavior: deriving forecasts and judgmentally adjusting those forecasts [5]. In executing these functions, forecasters may utilize different heuristics for complex series than for simple ones in order to mitigate cognitive demands [6-7]. Because the selection and execution of these heuristics can be influenced by forecaster experience and knowledge, integrating time series complexity into forecasting support system (FSS) design can bring greater objectivity to forecast generation while simultaneously providing meaningful guidance to forecasters [1].

Advances in the design and use of FDSS, however, have been slow to come because of a range of problems that limit their usefulness in the forecasting domain. First, FDSS are expensive to create, operationalize, and calibrate and therefore require significant organizational investment. Second, and most significantly, forecasts generated by such expensive systems are often subjected to judgmental adjustments. Such adjustments may be driven by forecaster confidence, or lack thereof, in FDSS capabilities, as well as the forecaster’s sense of ownership once they make judgmental adjustments rather than simply accepting the outputs of a forecasting model. Third, forecaster confidence in an FDSS and its outcomes is influenced by numerous system abilities such as the strength of, and confidence in, explanations provided about forecast creation [8], information presentation [9], data about the system’s past success rate [10], support from analogical forecasting tasks [11], and the ability to decompose the forecasting problem [12], to mention a few. Finally, the functionality and processes underlying FDSS are sometimes difficult to align with the experiential thinking of forecasters [11]. If such support systems adaptively supported complex and simple tasks according to task demands, forecasters might be less tempted to make judgmental adjustments [11, 13-14].

The above discussions and supporting literature reaffirm that the level of agreement between a task and the functionalities of the supporting technologies, i.e. the task-technology fit (TTF), can determine individual performance on tasks [15-19]. TTF studies suggest that the extent to which a technology supports individuals in the performance of their portfolio of tasks can determine their degree of success, through both improved performance and better system utilization [15, 20]. In a sense, then, TTF provides important justification for discretionary use of FDSS for simple and complex tasks. Under conditions where FDSS perform well empirically, it is likely worth committing the time and resources to utilizing them. In contrast, where FDSS do not perform effectively, or perform no better than human judgment, such commitment of time and resources to parameterize the FDSS may not be warranted. It has been asserted that certain functionalities of a technology are better suited to specific types of processes or tasks [17]. To this end, improved alignment between FDSS and FDSS-supported tasks, essentially better forecasting task-technology fit (FTTF), can mitigate the factors driving forecasters to make ad hoc adjustments of questionable validity in order to rationalize their worth [19].

In this study, we specifically examine the issue of forecasting task complexity and commensurate FDSS support to provide a framework for FSS design and implementation using the TTF as an underlying motivator. In doing so, we achieve the following:

  1. Develop a characterization of complex and simple time series forecasting tasks. Herein, we rely on historical patterns and domain-based features of time series to develop discrete task profiles along a simple to complex continuum.

  2. Review evidence from the empirical literature regarding forecasting task complexity and its implications for FDSS design and suggest designs that would benefit the forecasting process.

  3. Develop an agenda for research and discuss practice-related issues with regard to balancing forecasting utility with efficiency given the costs of FDSS.


2. Literature Review

2.1. Task-Technology Fit

TTF theory defines tasks as actions carried out by individuals to process inputs into outputs [20] and task profile as aspects of these tasks that might require users to rely on information technology. Technologies are defined as any set of tools, such as FDSS, required for executing these tasks. The fit between tasks and technologies, then, refers to the “degree to which a technology assists an individual in performing his or her portfolio of tasks” [20]. In the usual case, TTF theory is implemented at two levels: (a) an organizational level that examines the presence of data, processes, and high-level system features (e.g. system reliability) to fulfill the broad needs of a decision domain, and (b) a context level that examines the presence of system features specific to a decision context e.g. system capabilities for time series forecasting or group decision making. Studies in both contexts develop TTF concepts from three perspectives: identification of (a) a task profile [15] i.e. tasks specific to the domain of study; (b) technology features or needs specific to a task profile; and (c) impact of TTF on individual performance. We discuss these findings in the next sections.

Most TTF studies characterize tasks based on organizational-level decision support needs such as information and data quality, access, procedures surrounding data access, and system reliability [19-21]. One stream characterized tasks in terms of non-routineness, defined as a lack of analyzable search behavior, and interdependence with other organizational units; a dummy variable was later added to capture managerial factors as a determinant of user evaluation of information system use [20]. Recently, an increasing number of studies have examined tasks more contextually, i.e. specific to the domain of study. Most commonly, these studies classified tasks according to their complexity [22-23]. For instance, [23] characterized tasks on the basis of complexity and proposed a task classification ranging from simple to fuzzy tasks. [15] extended this classification to further define task complexity in group decision making. These studies define task complexity as having four dimensions: outcome multiplicity, suggesting more than one desired outcome; solution scheme multiplicity, suggesting more than one possible approach to achieving the task goal; conflicting interdependence, which can occur when adopting one solution scheme conflicts with another; and outcome uncertainty, defined as the extent of uncertainty regarding a desired outcome from a solution scheme. Others found that task support for virtually-linked teams often translated into support for conflict management, motivation/confidence building, and affect management [17]. Other applications of TTF theory appear in mobile commerce for the insurance industry [24], consumer participation in e-commerce [25], and software maintenance and support [16], among others. In the forecasting domain, surprisingly, we found only one TTF study [19], which adapted organizational-level factors to the forecasting domain by examining needs related to forecasting procedures.

In the TTF framework, technological support has most often been characterized in terms of hardware, software, network capabilities, and features of the support system. Some, for instance, developed technology characteristics based upon input from an independent panel of IS experts [20-21]. These technology capabilities included the relationship between a DSS and its user, quality of support, timeliness of support, and system reliability, among others. Others relied on the same technology characteristics when considering the adoption of Personal Digital Assistants (PDAs) in the insurance industry [24]. In keeping with the underlying emphasis of TTF, i.e. fit between tasks and technologies, context-dependent studies have focused on specific capabilities for the domain of interest. For instance, one study examined the richness of communication media for resolving conflict within virtual teams and motivating positive teamwork [17]. Another proposed that technologies supporting group decision making must be capable of providing communications support and structuring group interaction processes, along with supporting the information processing needs of the group [15]. Finally, [19] extended the framework of [20] to the forecasting domain by focusing more significantly on system functionality and capabilities, specifically data, methods, and forecasting effectiveness.

TTF studies have most commonly examined two outcomes of alignment between task and technology: system utilization and task performance. One study found a suggestive relationship between TTF and system utilization, and a strong positive connection with performance mediated by utilization [20]. Elsewhere, TTF was found to strongly predict customer intention to purchase from an e-commerce site [25]. A strong relationship between performance on certain insurance tasks and the use of mobile devices has also been confirmed [24]. Finally, FSS characteristics, specifically the forecasting procedures included in the FSS, were found to be positively related to perceptions of TTF which, in turn, were positively related to forecasting performance [19]. In general, these results confirm a strong association between performance and the alignment between task needs and supporting technologies. This is a critical linkage for our study.

2.2. Adapting TTF to the Forecasting Domain

Time series extrapolation calls for quantitative methods to forecast the variable of interest under the assumption that behaviors from the past will continue in the future [2]. Time series forecasting has also been found to improve with the use of domain knowledge, such as for series decomposition [26]. In essence, successful time series extrapolation relies upon recognizing idiosyncratic aspects of the series as defined by patterns in the historical data, as well as domain knowledge likely to emerge through unknown future generating processes. Considering this, the implementation of time series task classifications using TTF theory is best achieved by following the context-specific approach discussed earlier, as this perspective emphasizes conditions that impact the contextual usefulness of FSS. In other words, if the time series that a forecaster must process serves as the input into the FDSS, task characterizations may best emerge from the features of the series itself. For purposes of this paper, we follow the recommendations of [15, 23] to classify decision tasks, specifically time series, along a simple to complex continuum.

2.2.1. Complexity in Time Series Forecasting

Complexity is inherent in the forecasting process [12]. While it can be argued that all one needs is a forecasting method and adequate data, a non-trivial view of the forecasting process suggested by [2] provides a more realistic perspective—that of decomposition. Each stage of the forecasting process entails coordinated action that requires use of judgment and analytical skills, inputs from multiple organizational units, as well as validation and integrity checks. When decomposed into its components, the forecasting process integrates domain knowledge, historical data, causal forces acting upon the domain, as well as physical characteristics of the process producing the measured realizations that are to be forecast [12].

Of interest in our study, however, is the characterization of complexity in the context of the task, i.e. the time series being forecast. The forecasting literature has provided some interpretations of complex time series. Most commonly, time series are defined as complex if the underlying processes that generate them are themselves complex [27]. Chaotic time series, as opposed to “noise-driven” series, wherein observations drawn at different points in time follow a non-linear relationship with past observations [28], have also been referred to as complex series. More recently, studies have characterized complexity in terms of time series composition. For instance, [26] describe complex time series as those where forecasters expect conflicting underlying causal forces, i.e. underlying forces that will push the series in different directions in the forecast period. In essence, such series can be represented as a composite of multiple series, where the challenge is to determine the overall effect or momentum of these multi-directional forces, whose net effect could even be static, i.e. no movement due to offsetting causal forces.

Most views presented above define complexity in terms of either specific patterns in historical data (e.g. variation or volatility) or underlying processes and influences (e.g. causal forces). This constrained view of time series complexity is surprising considering the taxonomy of time series features available in the existing literature. Features often captured in the empirical literature include stationarity or non-stationarity of series [29-31]. Stock market forecasting studies have often relied on capturing features like volatility persistence, leptokurtosis, and the technical predictability of stock-related series [32-33]. Other work classified time series in terms of three features: irrelevant early data (where the generating process has fundamentally and irrevocably changed such that early data creates a misleading impression of the future), outliers, and functional form. Although focused on assessing judgmental confidence intervals, some have characterized time series in terms of trend, level, seasonality, and noise [34]. These features explained 57% of the variation in the confidence intervals chosen by forecasters, suggesting that a finer breakdown of series characterizations may be worth consideration.

For purposes of this paper, we rely on a more extensive taxonomy of time series characterizations suggested by [35-37] to classify time series along the continuum of simple to complex tasks. Their classification is particularly relevant because it captures not merely a range of patterns in historical data but also underlying generating processes and judgmental expectations about the future based on domain knowledge. Initially, 18 such features were suggested [35]; these were later expanded to 28 by [36-37]. For purposes of this paper, we use a subset of these 28 features, organized into the four feature clusters discussed in the next section. These time series features are described at length in Table A in the Appendix and in [35].

2.2.2. Time Series Task Characterizations

As mentioned previously, some have classified tasks along a simple to fuzzy (complex) continuum such that system features could be developed in alignment with the task [15, 23], in essence, TTF. Table 1 defines the key task types and their characteristics proposed in [15].

Simple Tasks: Low uncertainty; low conflicting interdependence; clear solution.
Problem Tasks: Multiple solution schemes to a well-specified outcome; needs involve finding the optimal way of achieving the outcome.
Decision Tasks: Finding solutions that meet the needs of multiple conflicting outcomes; selecting the best option from several available.
Judgment Tasks: Conflicting and probabilistic task-related information; need to integrate diverse sources of information and predict future states.
Fuzzy Tasks: Multiple desired states and multiple ways of getting to them; unstructured problems that require effort to understand; high information load, uncertainty, and information diversity; minimal focus for the task executor.

Table 1.

Overview of Suggested Task Characterizations [15].

In Table 2, we offer a simplified adaptation of this taxonomy for the forecasting domain and classify series as simple, moderately complex, and complex. Time series features from [35], hereafter referred to as C&A, were used to develop the complexity taxonomy. The C&A feature set is particularly relevant because it captures not merely a range of visible patterns in historical data that can influence judgmental forecasting processes (e.g. outliers, trends, and level discontinuities), but also recognizes underlying generating processes and domain-based expectations about the future. These features, described in Table A in the Appendix, can broadly be categorized into four clusters: (a) uncertainty, defined by variation around the trend and directional inconsistencies between the long-term and recent trends; (b) instability, characterized by unusual time series patterns such as irrelevant early data, level discontinuities, outliers, and unusual observations; (c) domain knowledge, defined as the availability (or lack thereof) of useful domain knowledge and the underlying functional form of the series, i.e. multiplicative or additive; and (d) structure, the presence or lack of a significant trend, i.e. a perceptible signal. In the forecasting literature, these features represent the most comprehensive attempt to characterize series for use in an FSS, namely Rule-based Forecasting (RBF). RBF studies have extensively validated these features, first in C&A on 126 time series, then in [38] across 458 time series, and finally on the 3003 M3 competition series [36]. Considering this, we relied on a subset of these 28 features (see Table A in the Appendix and C&A) for the development and validation of our taxonomy. The four feature clusters discussed above were used for classification because they have the potential to destabilize a time series (C&A). Table 2 below provides a conceptual view of this classification.

Using features from C&A, time series tasks can be classified along a continuum, with simple and complex forecasting tasks at its two ends. Simple forecasting tasks exhibit low instability and uncertainty, demonstrate relatively clear structure in their underlying trend patterns, and do not rely on significant domain knowledge to generate useful forecasts. For example, demographic series such as the percentage of male births tend to regress toward a known mean [37], have slow but steady trends, and show variations that are rare, unusual, and easily accounted for, thereby making them easier to forecast. Additionally, the associated domain knowledge is clear and non-conflicting. Such tasks are expected to pose low cognitive load on the forecaster because confounding features and underlying processes are few and, consequently, evident.

Instability features (recent run not long; near a previous extreme; irrelevant early data; changing basic trend; suspicious pattern; outliers present; level discontinuities; unusual last observation)
  Simple: Few or no instability features present.
  Moderately complex: Some instability features present.
  Complex: Many instability features present.

Uncertainty (coefficient of variation > 0.2; difference between basic and recent trends)
  Simple: Low variation about the trend; recent and basic trends agree.
  Moderately complex: Medium to high variation about the trend; recent and basic trends may disagree.
  Complex: High variation about the trend; recent and basic trends disagree.

Structure (significant basic trend; clear direction of trend, up or down)
  Simple: Insignificant trend; no or low trend.
  Moderately complex: Significant trend; clear direction.
  Complex: Significant trend; lack of clarity in direction due to confounding features.

Domain knowledge (presence of domain causal forces; functional form, i.e. additive or multiplicative series)
  Simple: Simple, consistent causal forces; multiplicative series.
  Moderately complex: Multiple causal forces; multiplicative series.
  Complex: Unknown or inconsistent causal forces.

Table 2.

Time Series Task Classification Based on Series Features.
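Two of the quantitative checks in Table 2, variation about the trend and agreement between the basic and recent trends, can be sketched as a simple heuristic classifier. This is an illustrative sketch only, not part of the RBF feature-detection routines: the function names, the six-period recent window, and the use of least-squares slopes are our assumptions, while the 0.2 coefficient-of-variation threshold is taken from Table 2.

```python
import numpy as np

def trend_slope(values):
    """Ordinary least-squares slope of a series against time."""
    t = np.arange(len(values))
    return np.polyfit(t, values, 1)[0]

def classify_series(values, recent_window=6, cv_threshold=0.2):
    """Heuristic placement of a series on the simple-to-complex continuum
    using two Table 2 checks: variation about the trend (coefficient of
    variation of residuals) and basic-vs-recent trend agreement."""
    values = np.asarray(values, dtype=float)
    t = np.arange(len(values))
    # Variation about the fitted linear (basic) trend, scaled by mean level.
    fitted = np.polyval(np.polyfit(t, values, 1), t)
    cv = np.std(values - fitted) / abs(np.mean(values))
    # Compare the direction of the full-history trend with the recent trend.
    trends_agree = (np.sign(trend_slope(values))
                    == np.sign(trend_slope(values[-recent_window:])))
    if cv <= cv_threshold and trends_agree:
        return "simple"
    if cv > cv_threshold and not trends_agree:
        return "complex"
    return "moderately complex"
```

A production FSS would, of course, test all eight instability features and the domain knowledge cluster as well; the sketch shows only how automated feature detection can map a raw series onto the continuum.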

At the other end of the continuum, by contrast, complex forecasting tasks are characterized by greater instability and uncertainty, do not demonstrate a clear generating structure, and may require “systematic” integration of a complex set of domain knowledge features that send conflicting signals [26]. For instance, forecasting monetary exchange rates is made challenging by a low signal-to-noise ratio and the non-ergodic nature of the process, caused by numerous undetermined underlying drivers [37]. Such series pose greater cognitive demands on forecasters, who may find it difficult to isolate features such as trends and instabilities and to recognize underlying processes.

Moderately complex time series fall somewhere along the continuum (see Table 2). These tasks demonstrate some instability, variation about the trend may be higher than for simple series, and/or recent and basic trends may conflict. The structure of such series demonstrates a more complex interplay of domain knowledge than for simple series, thereby admitting multiple possible solution schemes depending upon the interpretation and application of domain knowledge. A decomposition of UK highway deaths illustrated such conflicting possibilities: decomposing the series yielded two conflicting elements of domain knowledge, growth in traffic volume and decline in the death rate [39]. Decomposing the time series into its components helped improve forecasts for the target series.
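The highway-deaths example can be sketched as a multiplicative decomposition: forecast each conflicting component separately and recombine. The component histories and the linear extrapolation step below are illustrative assumptions, not data or methods from [39].

```python
import numpy as np

def extrapolate(series, horizon=1):
    """One-step linear-trend extrapolation, standing in for whichever
    forecasting method is applied to each component."""
    t = np.arange(len(series))
    coeffs = np.polyfit(t, series, 1)
    return np.polyval(coeffs, len(series) - 1 + horizon)

# Illustrative (made-up) component histories: traffic volume grows while
# the death rate per unit of traffic declines -- two conflicting forces.
volume = [100, 104, 109, 113, 118]   # e.g. billions of vehicle-km
death_rate = [50, 46, 43, 40, 37]    # e.g. deaths per billion vehicle-km

# Forecast each component separately, then recombine multiplicatively
# to obtain a forecast for the target series (total deaths).
deaths_forecast = extrapolate(volume) * extrapolate(death_rate)
```

Forecasting the composite series directly would force a single trend onto two opposing forces; decomposing lets each component’s simpler, consistent causal force be extrapolated on its own terms.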

2.2.3. Judgmental Accuracy on Complex and Simple Forecasting Tasks

In earlier sections, we noted the dearth of forecasting studies on complexity and its implications for performance and outcomes. Consequently, we rely on general studies in other domains to highlight the implications of complexity when forecasting complex versus simple tasks. Most fundamentally, [23] defines simple tasks as those that are not complex. In general, more complex tasks require greater support [15] and richer information presentation [40-43]. Complex tasks increase cognitive overload and place greater information processing requirements on the user, thereby reducing performance [44-45]. Under such conditions, decision makers choose “satisficing” but suboptimal alternatives [46], thereby lowering decision accuracy. When task complexity does not match the abilities of the decision maker, motivation and, consequently, performance may decline [47]. Using a Lens model approach, [48] attributed poor judgment in complex task settings to limitations in participants’ ability to execute judgment strategies, as opposed to their knowledge of the task domain, essentially a lack of experiential acuity. This could be attributable to loss of perceived self-efficacy and efficiency in applying analytical strategies.

In the forecasting domain, studies have uncovered confounding effects in situations that manifest uncertainty and instability. For instance, [34] found that as trend, seasonality, and noise increased in a time series, forecasters indicated wider confidence intervals, and hence greater uncertainty, in their forecasts. Further, [49] found that while forecasters successfully identified instability in time series, their forecasts were less accurate than statistical forecasts when such instabilities were present. Considering this, even experienced forecasters may exhibit lowered performance in complex settings. These multidisciplinary findings suggest:

  • Practical Proposition 1: Judgmental forecasts of complex time series will be less accurate than judgmental forecasts of simple time series.

  • Practical Proposition 2: Judgmental forecasts of moderately complex time series will be less accurate than judgmental forecasts of simple time series but more accurate than those for complex time series.

FDSS, through effective design, can allay the cognitive and human information processing demands that task complexity can place on the decision maker, and thereby potentially increase system use and confidence. DSS range from simple decision aiding such as using visual, as opposed to text-based, presentations to complex intelligent systems that adaptively perceive and respond to the decision context. The alignment between task needs and technology support, however, needs reflection. If misaligned, decision maker performance can be compromised. For instance, [50] evaluated a DSS for treatment of severe head injury patients by comparing physician expert opinions with results generated by the DSS. The study concluded that the tool was not accurate enough to support complex decisions in high-stress environments. Similarly, [51] found that providing certain types of cognitive support for real-time dynamic decision making can degrade performance and designing systems for such tasks is challenging. Based on these studies, the following can be proposed:

  • Practical Proposition 3: FDSS-generated forecasts for complex time series will be more accurate than judgmental forecasts of complex time series.

  • Practical Proposition 4: FDSS-generated forecasts for complex time series will be more accurate than judgmental forecasts of moderately complex time series.

  • Practical Proposition 5: FDSS-generated forecasts for simple time series will be as accurate as judgmental forecasts of simple time series.

2.2.4. Judgmental Adjustment of FDSS-Generated Forecasts

While the existing forecasting literature has yielded several recommendations for forecast adjustment, this area, once again, suffers from a lack of empirical findings regarding the adjustment of forecasts for complex and simple tasks. Here too, we rely on multidisciplinary studies and findings from our own work [52] to support our propositions. The forecasting literature, for instance, has suggested that statistically generated forecasts should be adjusted based on relevant domain knowledge and contextual information that practitioners gain through their work environment. Others [53-54] demonstrated that familiarity with the specific factors being forecast was the most significant determinant of accuracy. Judgmental adjustments should also be applied to statistically generated forecasts under highly uncertain situations or when changes are expected in the forecasting environment, i.e. under conditions of instability. Both uncertainty and instability, according to our framework in Table 2, lend complexity to the forecasting environment.

Managerial involvement in the forecasting process, primarily in the form of judgmental adjustments, has been questioned in terms of its value-added benefits. For instance, [55] suggest that managerial adjustment of stable series may not be justified, as automatic statistical forecasts may be sufficiently accurate. In contrast, they recommend high levels of managerial involvement for data surrounded by high uncertainty and, in a sense, high complexity.

In our own empirical studies comparing FDSS and judgmental forecasting behaviors [52], we find that, when given FDSS-generated forecasts, forecaster adjustments to simple series harm forecast accuracy but improve the accuracy of complex series when compared to unadjusted FDSS forecasts. Furthermore, forecasters react to complex series by assuming forecast values to be too low and, in response, adjust forecasts more optimistically than necessary. In contrast, they view the forecasts for simpler series as too aggressive and accordingly overcompensate by suppressing them. Accordingly, we propose:

  • Practical Proposition 6: Forecasters will adjust forecasts of complex series optimistically, while forecasts of simple series will be suppressed.

  • Practical Proposition 7: Adjustments to FDSS-generated forecasts for simple series will harm forecast accuracy.

  • Practical Proposition 8: Adjustments to FDSS-generated forecasts for complex series, if executed correctly, can improve forecast accuracy.

As a caveat to the last proposition above, judgmental adjustments to complex forecasts may be best supported by FDSS in a way that the adjustments are structured [53] and validated automatically through improvements in forecast accuracy [35, 39]. In the following sections, we rely on the TTF framework and other DSS studies to propose ways in which FDSS could be best designed to adaptively support simple to complex tasks.


3. Implications for FSS Design and Research: Putting Theory into Practice

In conjunction with the decision maker, DSS have been shown to generate better decisions than humans alone by supplementing the decision maker’s abilities [56], aiding one or more of the intelligence, design, and choice phases of decision making [57], facilitating problem solving, assisting with unstructured or semi-structured problems [58-59], providing expert guidance [60], and managing knowledge. Our discussion above raises additional issues pertinent to FDSS design, with emphasis on overcoming the inefficiencies, such as bias, irrationality, sub-optimization, and over-simplification, that underlie judgmental adjustments. Since a growing body of research focuses on specific DSS features such as information presentation, model building and generation, and the integration of dynamic knowledge, in this section we view DSS design from the perspective of making directive and non-directive changes in forecaster behavior regarding the application of adjustments. Such behavioral changes can be brought about in two ways: (a) by guiding and correcting forecaster behavior during task structuring and execution, and (b) by encouraging evaluative analysis of decision processes through structured learning [61].

Our empirical research has raised two key observations related to forecaster behavior and implications for FSS design:

  1. Forecasters will make adjustments to forecasts even when provided highly accurate forecasts. However, the direction and magnitude of these adjustments may be defined by complexity of the forecasting tasks. Considering this, FSS should offer system features in congruence with adjustment behaviors.

  2. Design of FSS must necessarily factor in, and adapt to, forecasting task complexity.

Elaborating on these findings, we make several propositions for FSS design in the next few sections.

3.1. Design FSS that Adapt to Task Complexity

For years, DSS designers have proposed designing systems that adapt to decision makers [62-63] and align with their natural thinking. Adaptive DSS support judgment by adjusting to high level cognitive needs of decision makers, context of decision making, and task characteristics [64]. The FTTF framework proposed in this paper provides a task-based approach to such adaptive systems. As a time series is initially input into the FSS, automated feature detection routines can categorize time series along the simple to complex continuum. Task profiles gathered in this way could be used to customize levels of restrictiveness and decisional guidance for simple versus complex tasks.

Restrictiveness is the “degree to which, and the manner in which, a DSS limits its users’ decision making process to a subset of all possible processes” [65, p. 52]. For example, a DSS may restrict access to certain data sets or the ability to make judgmental inputs and adjustments to the system. Restrictiveness can be desirable when the intention is to limit harmful decision choices and interventions. However, the general IS literature has largely recommended limited use of restrictive features in DSS [1, 61, 65-66]. Excessive restrictiveness can result in user frustration and system disuse [65, 67]. It can also be difficult for the designer to determine a priori which decision processes will be useful in a particular situation [1]. However, when users are poorly trained [1], known to make bad decision choices, or when underlying conditions are stable, restrictive DSS features can be beneficial.

Decisional guidance, “the degree to which, and the manner in which, a DSS guides its users in constructing and executing the decision-making processes by assisting them in choosing and using its operators” [65, p. 57], can be informative or suggestive. Informative guidance provides factual and unbiased information, such as visual or text-based displays of data, thereby empowering the user to choose the best course of action. Suggestive guidance, on the other hand, recommends an ideal course of action, such as by comparing available methods and recommending the one deemed most suited to the task at hand. [1] provide an excellent and extensive review of decisional guidance features for FSS that we highly recommend. To complement their recommendations, in the next few paragraphs we provide additional design guidelines emergent from the theme of this study.

  1. A.1 Restrict Where Harmful Judgment can be Applied: When unrestricted, forecasters are free to apply adjustments at many levels of the forecasting process, such as to the data to be used or excluded, the models to be applied or ignored, and the decision outcomes themselves. Similarly, as we demonstrated in our Study 2 [52], inexperienced forecasters may attempt to overcome their limited knowledge of underlying decision processes by adjusting final outcomes [1]. FSS can restrict where such judgmental adjustments are permitted. Specifically, judgment is best utilized as an input to the forecasting process or within the context of a validated knowledge base, rather than as an adjustment to the final decision outcome [55].

  2. A.2 Restrict FSS Display Based on Task Complexity: Since complex tasks pose significant demands on human cognitive and information processing capabilities, FSS displays for such tasks can be restricted, whereas simple tasks can benefit from decisional guidance. Because simple tasks create lower cognitive strain, performance on them can potentially be improved by increasing user awareness of forecasting cues, such as by displaying the features underlying the time series, the generating processes, forecasts from alternative methods, and the forecasting knowledge underlying the final forecasts. For instance, [49] found that making the long-term trend of a time series available improved forecast accuracy, since it allowed forecasters to overlook distracting patterns and apply knowledge more consistently.

    Since decision makers have a tendency to trade off accuracy in favor of cost efficiency, informative and suggestive guidance could be displayed prominently so that the forecaster does not have to drill down to make such trade-off decisions [68]. However, the same information presented for complex tasks can result in greater information overload, cognitive strain, and over-reaction. Indeed, [69] confirm that in complex task settings, decision makers tended to ignore suggestive advice and focus on informative guidance. To reduce this cognitive load, several of the above-discussed features could be hidden and made available as layered drill-down options. Such adaptive support can reduce information overload and related information processing challenges in the context of complex tasks [66], and is replicable across different contexts and organizational settings.

  3. A.3 Provide and Adapt Task Decomposition According to Task Complexity: An individual decision maker’s working memory is limited; consequently, complex tasks broken into simple “chunks” can be executed more effectively than tasks not so simplified [12]. Cognitive overload may be avoided through effective and efficient design [44], ranging from better information presentation to providing greater structure to the learning environment [70], such as through decomposition strategies that simplify the subject domain. Decomposition has been found to improve performance over unaided and intuitive judgment [71-72] by breaking a complex, holistic task into a set of easier tasks that are more accurately executed than the holistic task itself [1]. Others [73] also found that DSS users were able to leverage more information when they used decomposition for forecasting tasks. While there are neurological explanations for why decomposition is effective [74-75], from a psychological perspective decomposition allows the decision maker to partition the problem domain into manageable chunks so that information processing for each chunk is minimal and relevant while cognitive overload is minimized [70, 76-77].

    Although decomposition can be a restrictive DSS feature when its use is forced upon the decision maker [1], most often a user may not recognize the benefits of decomposing a task or may not know how to proceed with decomposition. To this end, we suggest that decomposition be implemented in both restrictive and decisional guidance modes. Specifically, we draw on the framework of [12], which suggests that decomposition can be applied at three levels: decomposition via transformation, i.e. identifying characteristics of the forecasting task and domain; decomposition for simplification, i.e. understanding components of the forecasting process from problem formulation to forecast use [2]; and decomposition for method selection, i.e. applying forecasting knowledge and rules to select fitting methods. We propose that transformational decomposition should be a restrictive feature in FSS. This decomposition of a time series into its features can enhance forecaster ability to recognize meaningful patterns as opposed to random ones.

    In the same vein, simplification of the problem domain could follow a restrictive design by using the forecasting process presented in Figure 1 to structure FSS modules. In such a design, the flow of activities in Figure 1 could be used to prevent premature convergence on forecast methods and their use. In contrast, the evaluative component of this process lends itself to decisional guidance in numerous ways discussed later in this section. Decomposition by simplification can also be implemented by narrowing task demands for complex decisions. For instance, [49] recommend that forecasts not be required for multiple time periods, because forecasters tend to anchor long-term forecasts on short-term forecasts. Our data indirectly suggest that complex series generate higher errors, and such anchoring and adjustment can compound errors over the long term.

    Finally, decomposition for method selection could largely be implemented as decisional guidance. Users may be prompted with forecasts from multiple relevant methods (selected using rules applied to time series features) to consider alternative methods and processes. Suggestive guidance on how to proceed with method selection and combination could be useful for simple tasks.

    As decision situations become complex, guidance may need to be reduced to minimal levels, as such situations are already characterized by information overload. Adding suggestive guidance to this mix can lead to the FSS itself complicating the decision situation. Forecasters may become increasingly frustrated with interventions from such guidance and consequently engage in deleterious decision making behaviors. These suggestions are supported by [69], who found that for highly complex tasks, subjects provided with suggestive guidance performed more poorly than those provided with informational guidance or no decision support. Specifically, we suggest that for complex tasks, informational guidance be provided so that users can determine the best strategy on their own or ignore the additional information as desired.

Figure 1.

Components of the Forecasting Process as Presented in [2].
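To illustrate, decomposition for method selection could surface as suggestive guidance roughly as follows. The feature names, candidate methods, and scoring rules below are invented for illustration, in the spirit of rule-based forecasting; they are not Collopy and Armstrong's rule base.

```python
def suggest_methods(features):
    """Rank candidate extrapolation methods from detected series features.

    The rules and weights are hypothetical illustrations of suggestive
    guidance, not a validated rule base.
    """
    scores = {"random_walk": 1.0, "linear_regression": 1.0, "holts_smoothing": 1.0}
    if features.get("significant_basic_trend"):
        scores["linear_regression"] += 1.0     # trend methods look credible
        scores["holts_smoothing"] += 0.5
    if features.get("trend_conflict"):
        scores["random_walk"] += 1.5           # damp toward a no-change forecast
        scores["holts_smoothing"] -= 0.5
    if features.get("unusual_last_observation"):
        scores["random_walk"] -= 1.0           # last value is a weak anchor
    return sorted(scores, key=scores.get, reverse=True)

suggest_methods({"significant_basic_trend": True})
# -> ['linear_regression', 'holts_smoothing', 'random_walk']
```

An FSS could display the top-ranked method's forecast alongside the alternatives, leaving the final choice, and any combination of methods, to the forecaster.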

  1. A.4 Provide In-Task Feedback for Simple Tasks and Shift to Post-Task Feedback on Complex Tasks: Feedback is intended to promote learning and behavior modification, assuming organizational practices encourage such review. Broadly speaking, evaluative feedback can be offered to forecasters at two stages – during task execution and post task execution – the former being critical to effective forecasting and the latter beneficial for fostering reflection and learning [1]. Suggestive and informational feedback regarding the impact of current actions on other aspects of the forecasting environment may limit the extent to which a series of poor adjustments is executed. However, feedback during execution of complex tasks can frustrate the user. Forecasters facing complex tasks may not have the time or cognitive resources to reflect adequately upon the impact of their adjustments on the environment [78] and may consequently fail to consider control actions that can impact the forecasting environment. Indeed, corrective process-based feedback has been found to be transient and shallow [79-80] and to contribute inadequately to long-term behavior modification [81].

    To this end, FSS developers may primarily focus on post-execution feedback for complex tasks. Post-task feedback has been found to improve decision quality [82] and attainment of challenging goals [83], particularly when the feedback is informative [69]. Further, [1] suggest four forms of post-task feedback: outcome feedback, the result of outcomes from the forecasting task; performance feedback, an assessment of performance such as forecast accuracy; cognitive process feedback, the effectiveness of the forecasting process deployed; and task properties feedback, information about the task, e.g. the presence of conflicting underlying series. Considering that the intention of post-task feedback is to foster learning, providing informative guidance on the above aspects, complemented with the ability to drill down to suggestive components, may be most beneficial to forecasters.

    Simple tasks, in contrast, do not require the same level of feedback and support as complex tasks. Moreover, these tasks are cognitively less demanding. Consequently, in-task feedback may not be detrimental and may be designed to provide the user with guidance, such as by displaying features of the time series and discussing their impact on forecasts, contrasting the original series with a series cleansed of distracting features such as outliers and irrelevant early data, and providing forecasting guidance in the form of rules and relevant methods. As a case in point, the RBF rules that pertain to the specific set of features present in the task being executed could be displayed so that the user can recognize the knowledge that has gone into generating the forecast.
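The four post-task feedback forms described in A.4 could, for example, be packaged as follows. The report structure, field names, and the choice of MAPE as the performance measure are our illustrative assumptions, not a prescription from the cited work.

```python
def post_task_feedback(actuals, forecasts, task_features, process_log):
    """Assemble the four post-task feedback forms suggested by [1].

    The dictionary layout and the use of MAPE are illustrative choices.
    """
    ape = [abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)]
    mape = 100.0 * sum(ape) / len(ape)
    return {
        "outcome": list(zip(forecasts, actuals)),     # forecast vs. realized values
        "performance": {"MAPE_pct": round(mape, 1)},  # forecast accuracy
        "cognitive_process": process_log,             # e.g. adjustments made, methods tried
        "task_properties": task_features,             # e.g. detected series features
    }

report = post_task_feedback(
    actuals=[100.0, 110.0], forecasts=[90.0, 110.0],
    task_features={"trend_conflict": True},
    process_log={"n_adjustments": 3},
)
# report["performance"] -> {"MAPE_pct": 5.0}
```

An FSS would present the informative parts of such a report prominently and leave the suggestive components behind drill-down options, as argued above.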

  2. A.5 Restrict Data and Models According to Task Complexity: Restrictiveness may be relaxed for simpler tasks by increasing the range of available data and models. FSS can make some desirable processes easy to use while making other, less desirable alternatives more difficult [1]. Automating, and thereby simplifying, the application of desirable strategies can reduce the effort associated with executing them [84] and thereby reduce the need for damaging judgmental adjustments to the decision process [13].

  3. A.6 Restrict to Impose Standards and Best Practices: Finally, restrictions can be applied when certain organizational best practices and standards need to be enforced in the forecasting process. For instance, a critical issue in supply chain forecasting is the escalation of forecasting adjustments as a forecast moves down the supply chain, thereby contributing to the bullwhip effect [85]. Embedding restraints in the forecasting system that contain the magnitude and direction of adjustments may reduce the risks associated with overcompensating at each element of the supply chain. This is particularly true for complex data, where forecasters may overemphasize random patterns, and for simple series, where forecasters may want to overcompensate for seemingly aggressive forecasts. These restraints may take the form of boundaries or confidence intervals that adapt to the nature of the complexity being presented to the forecaster.
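Such an adaptive restraint might be sketched as follows. The band widths (a tighter ±5% for complex series, where adjustments tend to chase noise, and ±15% for simple series) are hypothetical values for illustration, not calibrated recommendations.

```python
def restrain_adjustment(statistical_forecast, judgmental_forecast, complexity):
    """Clamp a judgmental forecast to complexity-dependent bounds.

    Hypothetical policy: complex series allow only +/-5% deviation from
    the statistical forecast; simple series allow +/-15%.
    """
    band_pct = 5 if complexity == "complex" else 15
    lower = statistical_forecast * (100 - band_pct) / 100
    upper = statistical_forecast * (100 + band_pct) / 100
    return max(lower, min(judgmental_forecast, upper))

restrain_adjustment(100.0, 130.0, "complex")  # -> 105.0 (adjustment clamped)
restrain_adjustment(100.0, 110.0, "simple")   # -> 110.0 (within bounds)
```

In practice, the band would be derived from the forecast's prediction interval rather than fixed percentages, so that the restraint widens or narrows with the uncertainty of the series.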

3.2. Design FDSS to Increase Forecaster Confidence

Earlier, we discussed judgmental adjustments as a mechanism for forecasters to develop ownership of forecasts. If FSS can be designed with features that enhance forecaster confidence in their abilities, the compulsion to make judgmental adjustments may be mitigated. Most studies have focused on DSS use and satisfaction, suggesting user attitudes towards a DSS and satisfaction with it as indicators of DSS use [86-87]. However, our concern in this paper extends beyond use, since forecasters may use an FSS to generate forecasts and still make judgmental adjustments. Confidence in the system can be enhanced by making its abilities transparent to the forecaster, i.e. by fully disclosing the FDSS and its features [35]. Furthermore, a well-validated FDSS that has demonstrated stability across time and multiple data sets can potentially improve confidence [88]. This validation is particularly simple to implement in FDSS because of the well-defined and universally accepted success measure, forecast accuracy. Confidence in an FDSS may also be enhanced by highlighting the credibility of the knowledge underlying it. When transparent to forecasters, the use of expert knowledge, empirically validated findings, and methodical calibrations can potentially enhance forecaster confidence in system abilities and thereby mitigate the need for adjustments. Finally, user involvement in systems design and development has been shown to increase user satisfaction with, and commitment to, the system and its outcomes [89-91]. For instance, [92] found that forecasters involved in defining FSS features such as display and models indicated greater satisfaction with FSS forecasts, even though their overall accuracy was lower than that of forecasters whose involvement was constrained.

3.3. Implications for Practical Design Research

In the sections above, we have offered numerous suggestions regarding FSS design. While some of these have been researched and validated, most require further research attention, particularly in light of the simple-complex task classification that forms the foundation of our paper. To this end, we first suggest that our proposed task classification be tested on a broader time series base to determine (a) whether the application of this framework generalizes to a larger set of time series, and (b) whether the patterns of judgmental performance and adjustment we observed across the two studies [52] hold in a larger context. If our results hold across a broader base, the implications for FSS design, in terms of the recommendations addressed earlier, are numerous.

Beyond confirmation of the FTTF framework, there are numerous opportunities for examining FSS design issues. Most importantly, our proposition has been that FSS should be designed not only to support forecasters in task execution but also to promote effective behavior modification during and after execution. Because such learning and modification occur over long-term system utilization, features supporting feedback and learning in FSS should be considered early in the design process. This has implications for finding the ideal balance between restrictive and decisional guidance features and for identifying the decision making stage to which each is best applied. As [69] suggest, increased decisional guidance during problem formulation can have an adverse effect on judgmental task performance, whereas feedback provided at the right opportunity can improve performance. In response, much research is required to identify the aspects of forecaster behavior that are amenable to behavior modification and those that are not, the nature of desirable support, and the stage of the forecasting process at which these support features are best applied.


4. Summary and Conclusions

The practical implications of our chapter are numerous, from the eight practical propositions to the six FSS design aspects regarding adaptation to task complexity and forecaster confidence. We summarize these next. First, fitting technology support to task characteristics provides a useful mechanism for identifying gaps between system functionality and user needs. Understanding task characteristics and corresponding support needs will enable FDSS designers to create systems that better suit and adapt to user needs. Second, a methodical integration of task and support technologies can lead to greater user commitment, thereby reducing forecasters' tendency to make deleterious ad hoc adjustments. Task-technology fit can enable identification of functions for which human intervention can be problematic and thereby restrict or guide selection towards improved choices [65, 93]. For instance, systems that complement the limitations of human information processing (HIP) may improve decision maker performance [40] because they mitigate the cognitive overload that constrains human performance on complex tasks [94]. Finally, a well-designed and optimally utilized FSS has a strong positive impact on individual performance and system adoption [20]. From an organizational perspective, this can have measurable positive implications for return on investment [95-96].

From a forecasting perspective, this study has yielded several insights into forecaster behavior and implications for FDSS design. We find that little has been done in the forecasting literature by way of developing a formal taxonomy of forecasting tasks. The principal reason is that such a taxonomy essentially depends upon a codification of series complexity. We have endeavored to begin this classification work [52]; our framework provides an initial attempt to do so in the domain of time series forecasting. Researchers in other domains may find explorations of similar classifications beneficial in making recommendations for systems design in their own domains. Further, we find that forecaster behavior regarding the direction and magnitude of adjustments is impacted by the complexity of the forecasting task, underscoring the value of parsing simple from complex tasks. Finally, considering the above contributions, we recommend congruence between system features and task features. Our research is, in some aspects, exploratory in nature, and further work is required to solidify this research stream.


Feature | Description (as in C&A) | Operationalization
--- | --- | ---
Coefficient of Variation | Standard deviation divided by the mean for the trend-adjusted data. | Automatic identification - C&A
Regression T-Statistic | The t-statistic for linear regression. If the t-statistic is greater than abs(2), the series is classified as having a significant basic trend. | Automatic identification - C&A
Functional Form (FF) | Expected pattern of the trend of the series. Can be multiplicative or additive. | Judgmental identification - C&A*
Basic Trend (BT) | Direction of trend after fitting linear regression to past data. | Automatic identification - C&A
Recent Trend (RT) | The direction of trend that results from fitting Holt's to past data. | Automatic identification - C&A
Near a Previous Extreme (Ext.) | A last observation that is 90% more than the highest or 110% lower than the lowest observation. | Automatic identification - C&A
Outliers (Out.) | Isolated observation near a 2 std. deviation band of the linear regression. | Automatic identification - C&A
Recent Run Not Long (RR) | The last six period-to-period movements are not in the same direction. | Judgmental identification - C&A
Changing Basic Trend (CB) | Underlying trend that is changing over the long run. | Judgmental identification - C&A*
Irrelevant Early Data (Irr.) | Early portion of the series results from a substantially different underlying process. | Judgmental identification - C&A
Unusual Last Observation (ULO) | Last observation deviates substantially from previous data. | Judgmental identification - C&A*
Suspicious Pattern (Sus.) | Series that show a substantial change in recent pattern. | Judgmental identification - C&A
Level Discontinuities (LD) | Changes in the level of the series (steps). | Judgmental identification - C&A*
Causal Forces (CF) | The net directional effect of the principal factors acting on the series. Growth exerts an upward force; decay exerts a downward force; supporting forces push in the direction of the historical trend; opposing forces work against the trend; regressing forces work towards a mean. When uncertain, forces should be set to unknown. | Judgmental identification - C&A
Trend Conflict (TC) | If the recent trend conflicts with causal forces, e.g. the recent trend is growing while causal forces are decay, a trend conflict is flagged. | Judgmental assessment for this study
Trend Variation (TV) | Standard deviation divided by the mean for the trend-adjusted data. If the coefficient is >0.2, the series is flagged as being uncertain. | Automatic identification - C&A

Table A. Time series features and their operationalization as in Collopy and Armstrong (C&A) [35].


  1. 1. Fildes R. Goodwin P. Lawrence M. 2006 The Design Features of Forecasting Support Systems and Their Effectiveness Decision Support Systems 42 351 361
  2. 2. Armstrong J. S. 2001 Extrapolation of Time-series and Cross-sectional Data In: JS Armstrong (Ed.) Principles of Forecasting: A Handbook for Researchers and Practitioners Boston Kluwer Academic Press
  3. 3. Clemen R. T. 1989 Combining Forecasts: A Review and Annotated Bibliography International Journal of Forecasting 5 559 583
  4. 4. Remus W. 1987 A Study of the Impact of Graphical and Tabular Displays and Their Interactions with Environmental Complexity Management Science 33 1200 1205
  5. 5. Önkal D. Goodwin P. Thomson M. Gönül S. Pollock A. 2009 The Relative Influence of Advice from Human Experts and Statistical Methods on Forecast Adjustments Behavioral Decision Making 22 390 409
  6. 6. Bystrom K. Jarvelin K. 1995 Task Complexity Affects Information Seeking and Use. Information Processing & Management 31 191 215
  7. 7. Wood R. E. Atkins P. Tabernero C. 2000 Self-efficacy and Strategy on Complex Tasks Applied Psychology: An International Review 49 330 466
  8. 8. Gönül M. S. Önkal D. Lawrence M. 2006 The Effects of Structural Characteristics of Explanations on Use of a DSS Decision Support Systems 42 1481 1493
  9. 9. Lawrence M. O’Connor M. 1992 Exploring Judgmental Forecasting International Journal of Forecasting 8 15 26
  10. 10. Jiang J. J. Muhanna W. A. Pick R. A. 1996 The Impact of Model Performance History Information on Users’ Confidence in Decision Models: An Experimental Examination Computers in Human Behavior 12 193 207
  11. 11. Lee W. Y. Goodwin P. Fildes R. Nikolopoulos K. Lawrence M. 2007 Providing Support for the Use of Analogies in Demand Forecasting Tasks International Journal of Forecasting 23 377 390
  12. 12. Adya M. Lusk E. J. Belhadjali M. 2009 Decomposition as a Complex-skill Acquisition Strategy in Management Education: A Case Study in Business Forecasting Decision Sciences Journal of Innovative Education 7 9 36
  13. 13. Goodwin P. 2000 Improving the Voluntary Integration of Statistical Forecasts and Judgment International Journal of Forecasting 16 85 99
  14. 14. Lim J. S. O’Connor M. 1996 Judgmental Forecasting with Interactive Forecasting Support Systems Decision Support Systems 16 339 358
  15. 15. Zigurs I. Buckland B. K. 1998 The Theory of Task/Technology Fit and Group Support Systems Effectiveness MIS Quarterly 22 313 334
  16. 16. Dishaw M. T. Strong D. M. 1998 Supporting Software Maintenance with Software Engineering Tools: A Computed Task-technology Fit Analysis Journal of Systems and Software 44 107 120
  17. 17. Maruping L. M. Agarwal R. 2004 Managing Team Interpersonal Processes Through the Task-Technology Fit Perspective Journal of Applied Psychology 89 975 990
  18. 18. Mathieson K. Keil M. 1998 Beyond the Interface: Ease of Use and the Task/Technology Fit Information & Management 34 221 230
  19. 19. Smith C. Mentzer J. 2010 Forecasting Task-technology fit: The Influence of Individuals, Systems and Procedures on Forecast Performance International Journal of Forecasting 26 144 161
  20. 20. Goodhue D. L. Thompson R. L. 1995 Task-Technology Fit and Individual Performance. MIS Quarterly 19 213 236
  21. 21. Goodhue D. L. 1995 Understanding User Evaluations of Information Systems Management Science 41 1827 1844
  22. 22. Shaw M. E. 1954 Some Effects of Problem Complexity Upon Problem Solution Efficiency in Different Communication Nets Journal of Experimental Psychology 48 211 217
  23. 23. Campbell D. J. 1988 Task Complexity: A Review and Analysis Academy of Management Review 13 40 52
  24. 24. Lee C. C. Cheng H. K. Cheng H. H. 2005 An Empirical Study of Mobile Commerce in Insurance Industry: Task-technology Fit and Individual Differences Decision Support Systems 43 95 110
  25. 25. Klopping I. Mc Kinney E. 2004 Extending the Technology Acceptance Model and the Task-Technology Fit Model to Consumer e-commerce Information Technology Learning and Performance Journal 22 35 48
  26. 26. Armstrong J. S. Collopy F. Yokum J. T. 2005 Decomposition by Casual Forces: A Procedure for Forecasting Complex Time Series International Journal of Forecasting 21 25 36
  27. 27. Rossana R. J. Seater J. J. 1995 Temporal Aggregation and Economic Time Series Journal of Business & Economic Statistics 13 441 452
  28. 28. Johnes G. Kalinoglou A. Manasova A. 2005 Chaos and the dancing stars: Non-linearity and entrepreneurship Journal of Entreprenuership 1 1 19
  29. 29. Lee D. Schmidt P. 1996 On the Power of KPSS Test of Stationarity Against Fractionally-Integrated Alternatives Journal of Econometrics 73 285 302
  30. 30. Mac Kinnon. J. G. 1994 Approximate Asymptotic Distribution Functions for Unit-root and Co-integration Tests Journal of Business and Economic Statistics 12 167 176
  31. 31. MacKinnon J. G. 1996 Numerical Distribution Functions for Unit-root and co-integration tests Journal of Applied Econometrics 11 601 618
  32. 32. LeBaron B. Arthur W. B. Palmer R. 1999 Time Series Properties of an Artificial Stock Market Journal of Economic Dynamics & Control 23 1487 1516
  33. 33. Vokurka R. J. Flores B. E. Pearce S. L. 1996 Automatic Feature Identification and Graphical Support in Rule-based Forecasting: A comparison International Journal of Forecasting 12 495 512
  34. 34. O’Connor M. Lawrence M. 1992 Time Series Characteristics and the Widths of Judgmental Confidence Intervals International Journal of Forecasting 7 413 420
  35. 35. Collopy F. Armstrong J. S. 1992 Rule-based Forecasting: Development and Validation of an Expert Systems Approach to Combining Time Series Extrapolations Management Science 38 1394 1414
  36. 36. Adya M. Armstrong J. S. Collopy F. Kennedy M. An Application of Rule-based Forecasting to a Situation Lacking Domain Knowledge International Journal of Forecasting 2000 16 477 484
  37. 37. Armstrong J. S. Adya M. Collopy F. 2001 Rule-based Forecasting: Using Judgment in Time-Series Extrapolation In: JS Armstrong (Ed.) Principles of Forecasting: A Handbook for Researchers and Practitioners Boston Kluwer Academic Press
  38. 38. Adya M. Critical Issues in the Implementation of Rule-based Forecasting Systems: Refinement, Evaluation, and Validation PhD Dissertation The Weatherhead School of Management, Case Western Reserve University Cleveland, OH, USA 1997
  39. 39. Armstrong J. S. Collopy F. 1992 The Selection of Error Measures for Generalizing About Forecasting Methods: Empirical Comparisons International Journal of Forecasting 8 69 80
  40. 40. Robey D. Taggert W. 1982 Human Information Processing in Information and Decision Support Systems MIS Quarterly 6 61 73
  41. 41. Marsden J. Pakath R. Wibowo K. 2002 Decision Making Under Time Pressure with Different Information Sources and Performance-based Financial Incentives Decision Support Systems 34 75 97
  42. 42. Vila J. Beccue B. 1995 Effect of Visualization on the Decision Maker When Using Analytic Hierarchy Process Proceedings of the 20th Hawaii Conference on System Sciences, HI
  43. 43. Benbasat I. Lim L. 1992 The Effects of Group, Task, Context, and Technology Variables on the Usefulness of Group Support Systems: A Meta-analysis of Experimental Studies Small Group Research 24 430 462
  44. 44. Kester L. Kirschner P. A. van Merrienboer J. J. G. 2005 The Management of Complex Skill Overload During Complex Cognitive Skill Acquisition by Means of Computer Simulated Problem Solving British Journal of Educational Psychology 75 71 85
  45. 45. Carley K. M. Zhiang L. 1997 A Theoretical Study of Organizational Performance Under Information Distortion Management Science 43 976 997
  46. 46. Payne J. W. Bettman J. R. Johnson E. J. 1993 The adaptive decision maker Cambridge University Press Cambridge, England
  47. 47. Katz I. Assor A. 2007 When Choice Motivates and When It Does Not Educational Psychology Review 19 429 442
  48. 48. Bisantz A. M. Llinas J. Seong Y. Finger R. Jiam J. Empirical Investigations of Trust-related System Vulnerabilities in Aided, Adversarial Decision-making Department of Industrial Engineering SUNY Buffalo 2000
  49. 49. Welch E. Bretschneider S. Rohrbaugh J. 1998 Accuracy of Judgmental Extrapolation of Time Series Data: Characteristics, Causes, and Remediation Strategies for Forecasting International Journal of Forecasting 14 95 110
  50. 50. Dora C. S. Sarkar M. Sundaresh S. Harmanec D. Yeo T. Poh K. Leong T. 2001 Building Decision Support Systems for Treating Severe Head Injuries Proceedings of the 2001 IEEE International Conference on Systems, Man and Cybernetics Tucson, AZ
  51. 51. Lerch F. J. Harter D. E. 2001 Cognitive Support for Real-time Dynamic Decision-making Information Systems Research 12 63 82
  52. 52. Adya M. Lusk E. J. Development and Validation of a Time Series Complexity Taxonomy: Implications for Conditioning Forecasting Support Systems Working Paper 2012
  53. 53. Edmondson R. H. Lawrence M. J. O’Connor M. J. 1988 The Use of Non-time Series Information in Sales Forecasting: A Case Study Journal of Forecasting 7 201 211
  54. 54. Sanders N. R. Ritzman L. P. 1992 The Need for Contextual and Technical Knowledge in Judgmental Forecasting Journal of Behavioral Decision Making 5 39 52
  55. 55. Sanders N. Ritzman L. 2001 Judgmental Adjustments of Statistical Forecasts In JS. Armstrong (Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners Kluwer Academic Publishers Norwell, MA
  56. 56. Holsapple C. Winston A. 1996 Decision Support Systems: A Knowledge-Based Approach St. Paul, MN West Publishing
  57. 57. Simon H. 1997 Models of Bounded Rationality 3 New York, NY MIT Press
  58. 58. Keen P. G. W. Morton M. S. 1978 Decision Support Systems: An Organizational Perspective Addison-Wesley, Reading PA
  59. 59. Keen P. G. W. 1981 Information Systems and Organizational Change Communications of the ACM 24 24 33
  60. 60. Leidner D. E. Elam J. J. 1993 Executive Information Systems: The Impact on Executive Decision Making Journal of Management Information Systems 10 139 155
  61. 61. Silver M. J. 1991 Decisional Guidance for Computer-based Support MIS Quarterly 15 105 133
  62. 62. Alavi M. 1982 An Assessment of the Concept of Decision Support Systems as Viewed by Senior-level Executives MIS Quarterly 6 1 10
  63. 63. Lamberti D. M. Wallace W. A. 1990 Intelligent Interface Design: An Empirical Assessment of Knowledge Presentation in Expert Systems MIS Quarterly 14 279 311
  64. 64. Fazlollahi B. MA Parikh Verma. S. 1997 Adaptive Decision Support Systems Decision Support Systems 20 297 315
  65. 65. Silver M. J. 1990 Decision Support Systems: Directed and Non-directed Change Information Systems Research 1 47 70
  66. 66. Xiao B. Benbasat I. 2007 E-commerce Product Recommendation Agents: Use, Characteristics, and Impact MIS Quarterly 31 137 209
  67. Gerrity T. P. 1971 The Design of Man-machine Decision Systems: An Application to Portfolio Management Sloan Management Review 12 59 75
  68. Vessey I. 1994 The Effect of Information Presentation on Decision Making: A Cost-benefit Analysis Information &amp; Management 27 103 119
  69. Montazemi A. R. Wang F. Nainar S. M. K. Bart C. K. 1996 On the Effectiveness of Decisional Guidance Decision Support Systems 18 181 198
  70. Lee F. J. Anderson J. R. 2001 Does Learning a Complex Task Have to be Complex? A Study in Learning Decomposition Cognitive Psychology 42 267 316
  71. MacGregor D. G. 2001 Decomposition for Judgmental Forecasting and Estimation In J.S. Armstrong (Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners 107 123 Norwell, MA Kluwer Academic Publishers
  72. Plous S. 1993 The Psychology of Judgment and Decision Making New York, NY McGraw-Hill
  73. Webby R. O’Connor M. Lawrence M. 2001 Judgmental Time Series Forecasting Using Domain Knowledge In J.S. Armstrong (Ed.), Principles of Forecasting: A Handbook for Researchers and Practitioners Norwell, MA Kluwer Academic Publishers
  74. Ghahramani Z. Wolpert D. M. 1997 Modular Decomposition in Visuomotor Learning Nature 386 392 395
  75. Jordan M. I. Jacobs R. A. 1994 Hierarchical Mixtures of Experts and the EM Algorithm Neural Computation 6 181 214
  76. Card S. K. Moran T. P. Newell A. 1983 The Psychology of Human-computer Interaction Hillsdale, NJ Erlbaum
  77. Newell A. Rosenbloom P. S. 1981 Mechanisms of Skill Acquisition and the Law of Practice In J. R. Anderson (Ed.), Cognitive Skills and their Acquisition 1 55 Hillsdale, NJ Erlbaum
  78. Sterman J. D. 1989 Misperceptions of Feedback in Dynamic Decision Making Organizational Behavior and Human Decision Processes 43 301 335
  79. Goodman J. S. Wood R. E. Hendrickx M. 2004 Feedback Specificity, Exploration, and Learning Journal of Applied Psychology 89 248 262
  80. DeNisi A. S. Kluger A. N. 2000 Feedback Effectiveness: Can 360-degree Appraisals be Improved? Academy of Management Executive 14 129 139
  81. Kayande U. de Bruyn A. Lilien G. Rangaswamy A. Van Bruggen G. 2006 The Effect of Feedback and Learning on Decision-support System Adoption In Avlonitis, G.J., Papavassiliou, N., Papastathopoulou, P. (Eds.) Proceedings of the 35th EMAC Conference, Athens Brussels European Marketing Academy
  82. Singh D. T. 1998 Incorporating Cognitive Aids into Decision Support Systems: The Case of the Strategy Execution Process Decision Support Systems 24 145 163
  83. Bandura A. Cervone D. 1983 Self-evaluative and Self-efficacy Mechanisms Governing the Motivational Effects of Goal Systems Journal of Personality and Social Psychology 41 586 598
  84. Todd P. Benbasat I. 1999 Evaluating the Impact of DSS, Cognitive Effort, and Incentives on Strategy Selection Information Systems Research 10 356 374
  85. Chen F. Drezner Z. Ryan J. K. Simchi-Levi D. 2000 Quantifying the Bullwhip Effect in a Simple Supply Chain: The Impact of Forecasting, Lead Times, and Information Management Science 46 436 443
  86. Eierman M. Niederman F. Adams C. 1995 DSS Theory: A Model of Constructs and Relationships Decision Support Systems 14 1 26
  87. Guimaraes T. Igbaria M. Lu M. 1992 The Determinants of DSS Success: An Integrated Model Decision Sciences 23 409 430
  88. Grabowski M. Sanborn S. 2001 Evaluation of Embedded Intelligent Real-time Systems Decision Sciences 32 95 124
  89. DeLone W. H. McLean E. R. 1992 Information Systems Success: The Quest for the Dependent Variable Information Systems Research 3 60 95
  90. Lawrence M. Low G. 1993 Exploring Individual User Satisfaction Within User-led Development MIS Quarterly 17 195 208
  91. Seddon P. 1997 A Re-specification and Extension of the DeLone and McLean Model of IS Success Information Systems Research 8 240 253
  92. Lawrence M. Goodwin P. Fildes R. 2002 Influence of User Participation on DSS Use and Decision Accuracy Omega 30 381 392
  93. Goodwin P. Fildes R. Lawrence M. Stephens G. 2011 Restrictiveness and Guidance in Support Systems Omega 39 242 253
  94. Jarvenpaa S. L. 1989 The Effect of Task Demands and Graphic Format on Information Processing Strategies Management Science 35 285 303
  95. Bannister F. Remenyi D. 2000 Acts of Faith: Instinct, Value and IT Investment Decisions Journal of Information Technology 15 231 241
  96. Lin C. Huang Y. Burns J. 2007 Realising B2B e-commerce Benefits: The Link with IT Maturity, Evaluation Practices, and B2BEC Adoption Readiness European Journal of Information Systems 16 806 819
