Open access peer-reviewed chapter

Developing the EBS Management Model to Assist SMEs to Evaluate E-Commerce Success

Written By

Mingxuan Wu, Ergun Gide and Rod Jewell

Submitted: 01 May 2017 Reviewed: 10 April 2018 Published: 05 November 2018

DOI: 10.5772/intechopen.77149

From the Edited Volume

Management of Information Systems

Edited by Maria Pomffyova


Abstract

The literature shows no strong consensus and no well-known theoretical framework for defining and evaluating e-commerce success among small to medium enterprises (SMEs). Exploring more effective methods to describe and evaluate e-commerce success therefore becomes a challenging task. This research seeks to help fill the gap by proposing a new model to evaluate e-commerce success from a business perspective; the measure has been termed e-commerce business satisfaction (EBS). A total of 2401 surveys were successfully sent to SMEs, yielding a usable response rate of 7.54%. Principal component analysis with varimax rotation was then applied within the factor analysis. Building on the 15 critical success factors (CSFs) obtained from previous research, an EBS management model was developed to assist SME business managers in adopting e-commerce systems effectively and in evaluating e-commerce success. The model is categorised into five components: Marketing, Management Support and Customer Acceptance, Website Effectiveness and Cost, Managing Change, and Knowledge and Skills. Further research is needed to determine the weighting of each CSF so that a yardstick measurement method can be developed to assist SMEs in adopting e-commerce successfully.

Keywords

  • e-commerce satisfaction
  • e-commerce success
  • evaluation
  • management model
  • SMEs

1. Introduction

Electronic commerce (e-commerce) is not a new concept, but it has developed rapidly and unpredictably [1]. The number of small to medium enterprises (SMEs) adopting e-commerce has increased significantly in recent years. It has been shown that successful e-commerce/IS use can create net benefits in financial and operational performance for SMEs, even in developing countries [2]. A significant number of SMEs have, however, failed in adopting e-commerce, and many businesses are not satisfied with their e-commerce systems. Research has therefore noted that quantifying the value contribution of e-commerce has become an issue for managers seeking to justify the enormous expenditure involved in new IT investment [3].

Evaluation has been one of the top five research areas in the adoption of e-commerce, along with trust; technology acceptance and technology application; e-commerce task-related application; and e-markets. This finding resulted from an analysis of 1064 e-commerce-related articles and 33,173 references published in leading e-commerce journals between 2006 and 2010 [4]. Researchers have also enunciated the need for evaluating e-commerce success: to avoid repeating failure, learn from experience, indicate actual business benefits, meet the requirement for adoption guidelines, and support further improvement and development [5].

Investigation to date does not clearly show how to evaluate the success of e-commerce systems [6]. In the investigation of e-commerce, many research studies use the IS success model to evaluate e-commerce systems [7]. For example, the original or updated DeLone and McLean model have been widely used for evaluating the degree of IS/e-commerce success [8]. The Technology Acceptance Model (TAM) and its extensions have also been used as reliable and robust models for predicting the user acceptance of e-commerce [9].

The IS success approach may not, however, be methodologically and theoretically feasible for e-commerce among SMEs [10]. The literature has noted difficulties in using such IS models and their extensions [5]. The main difficulty is that the determinants of e-commerce success might differ from the concepts in IS success studies [5, 10]. Other difficulties concern the involvement of top management, issues beyond Internet technology, and lack of experience [5].

In the literature, no strong consensus or well-known comprehensive and integrated theoretical framework currently exists [10, 11, 12], and research frameworks lack a theoretical approach to defining and evaluating e-commerce success among SMEs [10, 11, 12]. Exploring more effective methods to describe and evaluate e-commerce success therefore becomes a challenging task [13, 14]. This research seeks to help fill the gap by proposing a new model to evaluate e-commerce success from a business perspective.

2. CSFs and E-commerce success

2.1. Using business satisfaction for evaluating e-commerce success

In the literature, satisfaction is a very important element of a successful long-term relationship with e-commerce adoption and success [9]. Research has highlighted that satisfaction, rather than system use, is an appropriate measure of e-commerce success [15]. Any unsatisfactory performance on this criterion cannot be compensated for by better performance on one or more other criteria [16]. Satisfaction studies are therefore widely used for a better understanding of e-commerce success.

This research proposes that the term e-commerce business satisfaction (EBS) is a better measure of e-commerce success, where e-commerce satisfaction is discussed from a business perspective and has previously been defined as ‘the overall satisfaction that a business has with an e-commerce system meeting its requirements and expectations’ [17].

2.2. Using CSFs for understanding EBS to evaluate e-commerce success

The concept and approach of CSFs have been studied and applied successfully in a broad range of contexts in many areas of IS/IT research including identifying system needs [11, 18]. Research states that satisfaction with e-commerce is significantly affected by organisational determinant critical success factors (CSFs) [10].

A set of identified CSFs is one of the key implementation strategies employed in performance analysis and strategic management. By focusing on and managing these few key areas of the implementation process, SMEs can measure their e-commerce systems, judge their efficacy, improve them, and predict e-commerce usage, which can contribute to the success of these experience-driven initiatives and help achieve a successful e-commerce implementation outcome [14, 18, 19, 20, 21, 22]. Identifying the CSFs that affect the adoption of e-commerce also makes it possible to assess its future growth [21].

Over the past decade, therefore, a large number of investigations have focused on determining which CSFs affect the successful adoption of e-commerce [23, 24]. Despite the increasing work on e-commerce, however, very few studies have attempted to investigate the effect of proposed CSFs on the implementation of e-commerce [18], and an integrated approach to developing a well-established e-commerce CSFs model is still lacking [25]. This research adopts the CSFs identified in previous work to understand the business satisfaction associated with e-commerce success.

3. Research methodology and the survey results

A blend of research methods, consisting of focus group studies, pilot tests, and surveys, was used, as discussed below.

3.1. Focus group studies

A focus group study involves a formalised process of bringing a small group of people together for an interactive and spontaneous discussion or interview on one particular topic or concept [26]. With origins in sociology, the focus group study became widely used in market research during the 1980s and found more diverse research applications in the 1990s [27]. Focus group studies can help prepare for a survey by providing sufficient information about the survey objective, by defining and improving indicators and by preventing possible errors [28].

Most experts agree that the optimal number of participants in any type of focus group interview is from six to 12 members, in nondirective interactive communication facilitated by a moderator who prepares and uses a loosely constructed set of relevant questions [27, 28, 29]. In this research, two focus groups totalling 18 SME business managers (nine each in Australia and China) were formed to define and improve indicators and to prevent possible errors.

3.2. Pilot tests

A pilot test is an important and essential step in checking the rigour of the survey instrument and the need for any final modification before conducting the survey proper [28, 30, 31, 32]. The objectives of this step were to examine the validity of each item in the survey and to avoid any misleading cultural differences due to inaccurate translation [33], as it is extremely difficult even for experienced social scientists to write a questionnaire [30].

Research advises that a pilot test of 20–50 cases is usually sufficient to discover the major flaws in a questionnaire before they damage the main study [30]. Research further suggests using open questions in pilot tests and then developing closed-question responses from the answers given for use in large-scale surveys [26].

Twenty businesses (10 each in Australia and China) were involved in the pilot tests in this research, which were carried out with open questions to refine the proposed questionnaires and correct any errors.

3.3. Surveys and survey results

The survey samples were selected first. Since most computer programs use standard error algorithms based on the assumption of simple random samples, the standard errors reported in the literature often underestimate sampling error [34]. The goal of sampling is to collect data representative of a population within the limits of random error, which the researcher then uses to generalise findings from the drawn sample back to the population [35]. It is critical that the chosen respondents are representative of the study population [31]. Random sampling was chosen in this research to select the samples [26], proceeding in three stages:

  • Stage 1: random sampling of big clusters.

  • Stage 2: random sampling of small clusters within each selected big cluster.

  • Stage 3: the sampling of elements from within the sampled small clusters.
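The three-stage procedure above can be sketched in code. This is a minimal illustration under assumed inputs: the nested cluster structure and the function name are our own, not part of the chapter's sampling frame.

```python
import random

def multistage_sample(population, n_big, n_small, n_elem, seed=0):
    """Three-stage random cluster sampling, mirroring Stages 1-3 above.

    `population` maps each big cluster to a dict of small clusters,
    each holding a list of elements (e.g. businesses).
    """
    rng = random.Random(seed)
    chosen = []
    # Stage 1: random sampling of big clusters.
    for big in rng.sample(sorted(population), n_big):
        # Stage 2: random sampling of small clusters within the big cluster.
        for small in rng.sample(sorted(population[big]), n_small):
            # Stage 3: sampling of elements from the sampled small cluster.
            chosen.extend(rng.sample(population[big][small], n_elem))
    return chosen
```

Each respondent then has a known chance of selection, which is what allows findings from the drawn sample to be generalised back to the population.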

Research recommends a mailing sequence in which the survey questionnaire is sent first, followed by a reminder about 1 week later [36]. Two follow-up reminder letters should then be sent to non-respondents: the first about 1 week after the questionnaire and the second a week later [26]. If a higher response rate is necessary, phone calls can be made to the non-respondents about 2 weeks after the two reminders [36]. Follow-up notices that personally request non-responders’ participation may increase response rates to some extent [36, 37]. Based on this research advice [32, 36, 37, 38], this research adopted the following data collection sequence:

  • Step 1: sending survey forms with an invitation letter.

  • Step 2: 2 weeks later, sending the first e-mail reminder to non-respondents.

  • Step 3: another 2 weeks later, sending a second e-mail reminder to non-respondents.

  • Step 4: a further 2 weeks later, making phone calls to remaining non-respondents.
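The four-step sequence with its 2-week gaps can be written as a small schedule helper; a sketch only, with an illustrative start date and step labels taken from the list above.

```python
from datetime import date, timedelta

def collection_schedule(start):
    """Return the contact date for each of the four steps above,
    with each step following 2 weeks after the previous one."""
    steps = [
        "survey forms with an invitation letter",
        "first e-mail reminder to non-respondents",
        "second e-mail reminder to non-respondents",
        "phone calls to remaining non-respondents",
    ]
    return {step: start + timedelta(weeks=2 * i) for i, step in enumerate(steps)}
```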

In this research, a total of 2401 surveys were successfully sent to SMEs in Australia (1528) and China (873). The usable response rate was 7.54% (181 out of 2401), comprising 69 Australian SMEs and 112 Chinese SMEs [39]. Consequently, a total of 15 items (item 1 to item 15) were finally identified (Table 1).

Item     Description
Item 1   CEO IT/e-commerce/e-commerce marketing knowledge
Item 2   Senior staff IT/e-commerce knowledge
Item 3   Regular staff training in the appropriate or relevant IT skills
Item 4   Flexibility of e-commerce systems to change depending on the business process
Item 5   Ability to keep up with the rate of technology change (externally)
Item 6   The response time effectiveness/performance of an e-commerce website
Item 7   Trust in the interface design and information displayed on a website
Item 8   Support from top management/decision-maker
Item 9   Support from senior management
Item 10  Customer pressure/acceptance/interest
Item 11  Cost associated with keeping up to date or upgrade of e-commerce system
Item 12  Decision-maker’s effective e-commerce marketing plan
Item 13  Effective e-commerce marketing strategy
Item 14  Adoption of different e-commerce marketing strategies based on different business requirements/needs
Item 15  The consistency of graphics and backgrounds with business culture used on a website

Table 1.

The 15 items identified.

Source from [39].
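The usable response rate reported above follows directly from the counts; a one-line check (the function name is ours):

```python
def usable_response_rate(usable, sent):
    """Usable response rate as a percentage, rounded to two decimal places."""
    return round(100 * usable / sent, 2)
```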

4. Data analysis

During the data analysis procedures, this research conducted an initial reliability analysis and validity analysis first. Principal component analysis with varimax rotation method was then adopted within the factor analysis.

4.1. Reliability analysis

Reliability is the extent to which a question yields the same responses over time, or a scale produces consistent results on repeated measurement [36, 40]. Any summated scale should first be analysed for reliability to ensure its appropriateness before proceeding to an assessment of its validity [41]. In this research, reliability was assessed using internal consistency analysis [36]. The earliest and simplest measure of the internal consistency of a set of items is the split-half reliability of the scale [40, 41]. To assess split-half reliability, the total set of items is divided into two equivalent halves; if the two halves correlate highly, they should produce similar scores [42, 43]. The total scores for the two halves are then correlated, and this correlation is taken as the measure of the reliability of the survey [42, 43].
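As a sketch of the split-half procedure just described, the following divides the items into odd- and even-numbered halves and correlates the respondents' half totals. This is an illustration only: the odd/even split and the function names are our choices, not the chapter's.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(scores):
    """Correlate the totals of the two halves of the scale.

    `scores` is a list of per-respondent item-score lists; the scale is
    split into odd- and even-numbered items.
    """
    first_half = [sum(row[::2]) for row in scores]
    second_half = [sum(row[1::2]) for row in scores]
    return pearson(first_half, second_half)
```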

In practice, internal consistency is usually assessed with the coefficient alpha (also called ‘the reliability coefficient’), or Cronbach’s alpha. Popularised in a 1951 article by Cronbach and based on work in the 1940s by Guttman and others, it is the most common measure of the internal consistency of items in a scale [32, 36, 42, 44, 45] and the most commonly applied estimate of a survey’s reliability [40, 43]. It provides a summary measure of the inter-correlations that exist among a set of items [40, 42, 45, 46].

At the initial reliability analysis stage, Cronbach’s alpha should be treated as the critical statistic. It varies from 0 to 1 [40, 46]: the higher the correlations among the items, the greater the value of Cronbach’s alpha, implying that high scores on one question are associated with high scores on the other questions [36]. If the value of Cronbach’s alpha is low and the item pool is sufficiently large, some items do not equally share in the common core and should be eliminated [42]. Research indicates that a value between 0.80 and 0.95 represents very good reliability, between 0.70 and 0.80 good reliability, between 0.60 and 0.70 fair reliability, and below 0.60 poor reliability [43].

In the reliability test for this research, Cronbach’s alpha was 0.837. This result shows strong evidence of meeting the reliability standards of exploratory research and is considered very good reliability.
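As an illustration of the statistic, Cronbach's alpha can be computed from a respondents-by-items score matrix and the bands quoted above applied to the result. A sketch only; the data layout and function names are our own.

```python
def sample_var(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance),
    where `scores` is a list of per-respondent item-score lists."""
    k = len(scores[0])
    item_vars = [sample_var([row[j] for row in scores]) for j in range(k)]
    total_var = sample_var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def reliability_band(alpha):
    """Bands quoted in the text [43]; the chapter's scale tops out at 0.95."""
    if alpha >= 0.80:
        return "very good"
    if alpha >= 0.70:
        return "good"
    if alpha >= 0.60:
        return "fair"
    return "poor"
```

With the chapter's reported value, `reliability_band(0.837)` falls in the very good band.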

4.2. Validity analysis

The validity of a survey instrument may be defined as the extent to which it accurately measures what it is supposed to measure [36, 40]. There are three basic approaches to establishing validity namely content validity, criterion validity, and construct validity [36, 40, 43].

4.2.1. Content validity

Content validity, sometimes called face validity, measures the extent to which a survey’s content logically appears to reflect what was intended to be measured [43]. It typically involves a systematic review of the survey’s contents to ensure that it includes everything it should and nothing that it should not [46]. Although no scientific measure of the content validity of a survey instrument yet exists [46], content validity is often assessed in practice through approaches such as focus groups and/or pilot test studies [41, 46, 47].

In this research, content validity was assessed subjectively but systematically to establish the appropriateness of the variables used: items not considered appropriate were rejected through two focus group studies (one each in Australia and China) and 20 pilot tests (10 each in Australia and China).

4.2.2. Criterion validity

Criterion validity addresses the ability of a measure to correlate with other standard measures of similar established criteria [43]. Criterion validity may be classified as either concurrent or predictive depending on the time sequence in which the new measurement scale and the criterion measure are correlated [40, 43] as follows:

  • Concurrent criterion validity: a newly devised questionnaire has concurrent validity if there is an existing instrument with which it can be compared [40, 46], or if the new measure is taken at the same time as the criterion measure and is shown to be valid [43].

  • Predictive criterion validity is established when a new measure predicts a future event [36, 43].

However, no method of assessing criterion validity is foolproof, and none can conclusively show whether a concept is truly measuring what it should [36]. As the concern is more about the validity of the use of the survey instrument than its own inherent validity [36], most researchers more commonly rely on construct validity, as discussed below. In this research, criterion validity analysis was not conducted: no similar established surveys were available for comparison, and the measure is not being used to predict a future event.

4.2.3. Construct validity

Construct validity addresses the question of what construct or characteristic the survey is measuring and how an instrument ‘behaves’ when it is used [40, 46], as follows:

  • Convergent validity is the extent to which the survey correlates positively with other measures of the same construct [40]. It assesses the extent to which different data collection methods produce similar results [46]. Convergent validity is also another way of expressing internal consistency, since highly reliable scales exhibit convergent validity [43].

  • Discriminant validity is the extent to which a survey result does not correlate with other constructs from which it is supposed to differ and represents a measure of its distinctiveness [40, 43]. When two item values are correlated above 0.75, discriminant validity between items may be questioned and the items rejected [43].

  • Nomological validity is the extent to which the summated scale makes accurate predictions of other concepts in a theoretically based model [40, 41].
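The 0.75 rule of thumb for discriminant validity quoted above can be expressed as a simple screen over an item correlation matrix. A minimal illustration; the matrix values and labels are invented.

```python
def questionable_pairs(corr, labels, threshold=0.75):
    """Flag item pairs whose absolute correlation exceeds the threshold,
    so that discriminant validity between them may be questioned [43].

    `corr` is a symmetric correlation matrix (list of lists)."""
    flagged = []
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if abs(corr[i][j]) > threshold:
                flagged.append((labels[i], labels[j]))
    return flagged
```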

In this research, convergent validity was in effect assessed through the reliability analysis, and discriminant validity was indirectly established through the factor analysis that follows. Nomological validity was not analysed, as no similar established relationships appear to exist in the literature.

4.3. Factor analysis

The basic rationale of factor analysis is that the variables are correlated because they share one or more common components; if they did not correlate, there would be no need to perform factor analysis, which operates on the correlation matrix of the variables to be factored [36]. For ease of interpretation, principal component analysis with varimax rotation was conducted within the factor analysis. In practice, principal component analysis can be run in SPSS through its factor analysis procedure.
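The pipeline just described, principal components extracted from the correlation matrix and then varimax-rotated, can be sketched with NumPy. This is a minimal illustration, not the chapter's SPSS run: SPSS additionally applies Kaiser normalisation before rotation, and the data in the usage below are synthetic.

```python
import numpy as np

def pca_loadings(X):
    """Component loadings from the correlation matrix of X (n x p),
    keeping components with eigenvalue > 1 (Kaiser criterion)."""
    corr = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]          # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0
    return eigvecs[:, keep] * np.sqrt(eigvals[keep]), eigvals

def varimax(loadings, max_iter=100, tol=1e-6):
    """Plain varimax rotation of a p x k loading matrix."""
    L = loadings
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr * (Lr ** 2).sum(axis=0) / p)
        )
        R = u @ vt
        if s.sum() < d * (1 + tol):
            break
        d = s.sum()
    return L @ R
```

Rotation redistributes variance across the retained components without changing the total, which is why the rotated sums of squared loadings in Table 3 still cumulate to the same 64.843%.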

Table 2 shows the rotated matrix of factor loadings of each variable onto each component. The result is quite good: the 15 items group into five components. For clarity, factor loadings below 0.4 have been suppressed and are not displayed [48].

Component (Cronbach's α)   Factor   Loading        Item
Component 1 (α = 0.743)    F15      0.792          Item 13
                           F14      0.788          Item 14
                           F13      0.67 (0.492)   Item 12
Component 2 (α = 0.661)    F12      0.818          Item 9
                           F11      0.705          Item 8
                           F10      0.461          Item 10
Component 3 (α = 0.696)    F9       0.771          Item 15
                           F8       0.738          Item 11
                           F7       0.522 (0.43)   Item 6
                           F6       0.439          Item 7
Component 4 (α = 0.578)    F5       0.805          Item 5
                           F4       0.753          Item 4
Component 5 (α = 0.662)    F3       0.785          Item 1
                           F2       0.782          Item 2
                           F1       0.606          Item 3
Values in parentheses are secondary loadings on another component.
Extraction method: Principal component analysis.
Rotation method: Varimax with Kaiser normalization.
Rotation converged in 7 iterations.

Table 2.

Rotated component matrix.

Any component with a variance (represented by the eigenvalue) of less than 1.0 was rejected, as it contributes less to the model than the retained components [36]. Table 3 shows that the eigenvalues of components 1–5 are all over 1.0. According to research, the components accepted should account for at least 60% of the cumulative variance [36, 40]. In this research, the cumulative percentage of variance for the five components is 64.843%, which satisfies this normally accepted threshold (see Table 3).

            Initial eigenvalues               Extraction sums of squared loadings   Rotation sums of squared loadings
Component   Total  % of variance  Cumul. %   Total  % of variance  Cumul. %        Total  % of variance  Cumul. %
1 4.759 31.730 31.730 4.759 31.730 31.730 2.031 13.537 13.537
2 1.505 10.036 41.766 1.505 10.036 41.766 2.008 13.388 26.925
3 1.256 8.371 50.137 1.256 8.371 50.137 1.943 12.954 39.879
4 1.185 7.899 58.036 1.185 7.899 58.036 1.907 12.712 52.591
5 1.021 6.807 64.843 1.021 6.807 64.843 1.838 12.252 64.843
6 0.851 5.676 70.519
7 0.751 5.004 75.523
8 0.635 4.232 79.755
9 0.603 4.019 83.774
10 0.523 3.485 87.260
11 0.471 3.143 90.402
12 0.465 3.097 93.499
13 0.404 2.693 96.192
14 0.319 2.124 98.317
15 0.253 1.683 100.000

Table 3.

Total variance explained.

Extraction method: Principal component analysis.
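The two retention checks applied to Table 3, an eigenvalue over 1.0 and at least 60% cumulative variance, can be verified directly from the table's first eigenvalue column (a small sketch; the function name is ours):

```python
def retention_check(eigenvalues, min_cumulative=60.0):
    """Apply the eigenvalue > 1 rule, then report how much of the total
    variance the retained components explain (for p standardised items
    the eigenvalues sum to p)."""
    total = sum(eigenvalues)
    kept = [e for e in eigenvalues if e > 1.0]
    cumulative = 100 * sum(kept) / total
    return len(kept), cumulative, cumulative >= min_cumulative
```

Fed the 15 eigenvalues from Table 3, this retains five components explaining roughly 64.8% of the variance, matching the chapter's 64.843% up to the rounding of the tabulated eigenvalues.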

5. Results and discussion

Based on the above analysis, the 15 CSFs were categorised into five components by the strength of relationship:

  • Component 1: the three factors grouped (F15, F14 and F13) share a single focus on Marketing.

  • Component 2: the three factors grouped (F12, F11 and F10) centre on Management Support and Customer Acceptance.

  • Component 3: the four factors grouped (F9, F8, F7 and F6) focus on Website Effectiveness and Cost.

  • Component 4: the two factors grouped (F5 and F4) share the common factor of Managing Change.

  • Component 5: the three factors grouped (F3, F2 and F1) share a common focus on Knowledge and Skills.

Thus, an EBS model for business managers (the EBS management model) can be developed encompassing the above five components (see Figure 1) [13].

Figure 1.

The EBS management model.

By superimposing the 15 CSFs onto the EBS management model, SME business managers can effectively adopt e-commerce and evaluate e-commerce satisfaction and success from a business perspective (see Figure 2).

Figure 2.

Fifteen CSFs categorised into the EBS management model.

6. Conclusions, limitation and further research

This research set out to explore a new method to describe and evaluate e-commerce success from a business perspective. An EBS management model, built on 15 CSFs, was developed to assist SME business managers in adopting e-commerce systems effectively and in evaluating e-commerce success. The model is categorised into five components: Marketing, Management Support and Customer Acceptance, Website Effectiveness and Cost, Managing Change, and Knowledge and Skills.

The major limitation is that the investigation focuses only on SMEs in Australia and China, which implies that further research should be conducted in other countries. Further research is also needed to determine the weighting of each CSF so that the EBS management model can be updated and extended into a yardstick measurement method to assist SMEs in adopting e-commerce successfully.

Acknowledgments

This research is sponsored by the Fund for Shanxi ‘1331 Project’ Collaborative Innovation Center, China.

References

  1. 1. Choshin M, Ghaffari A. An investigation of the impact of effective factors on the success of e-commerce in small- and medium-sized companies. Computers in Human Behavior. 2017;66:67-74
  2. 2. Ghobakhloo M, Tang SH. Information system success among manufacturing SMEs: Case of developing countries. Information Technology for Development. 2015;21(4):573-600
  3. 3. Lai JY, Kan CW, Ulhas KR. Impacts of employee participation and trust on e-business readiness, benefits, and satisfaction. Information Systems and e-Business Management. 2013;11:265-285
  4. 4. Shiau WL, Dwivedi YK. Citation and co-citation analysis to identify Core and emerging knowledge in electronic commerce research. Scientometrics. 2013;94(3):1317-1337
  5. 5. Wu MX, Gide E. A model to demonstrate the common CSFs in e-commerce business satisfaction for measuring e-commerce success. In: Encyclopaedia of E-Commerce Development, Implementation, and Management. USA; 2016. pp. 480-495
  6. 6. Ghobakhloo M, Hong TS, Standing C. Business-to-business electronic commerce success: A supply network perspective. Journal of Organizational Computing and Electronic Commerce. 2014;24:312-341
  7. 7. Susanto A, Lee H, Zo HJ, Ciganek AP. Factor affecting internet banking success: A comparative investigation between Indonesia and South Korea. Journal of Global Information Management. 2013;21(2):72-95
  8. 8. Wang WT, Wang YS, Liu ER. The stickiness intention of group-buying websites: The integration of the commitment-trust theory and e-commerce success model. Information Management. 2016;53:625-642
  9. 9. Guzzo T, Ferri F, Grifoni P. A model of e-commerce adoption (MOCA): Consumer’s perceptions and Behaviours. Behaviour & Information Technology. 2016;35(3):196-209
  10. 10. Ghobakhloo M, Hong TS, Standing C. B2B e-commerce success among small and medium-sized enterprises: A business network perspective. Journal of Organizational and End User Computing. 2015;27(1):1-32
  11. 11. Chang KP, Graham G. E-business strategy in supply chain collaboration: An empirical study of B2B e-commerce project in Taiwan. International Journal of Electronic Business Management. 2012;10(2):101-111
  12. 12. Agarwal J, Wu T. Factors influencing growth potential of e-commerce in emerging economies: An institution-based N-OLI framework and research propositions. Thunderbird International Business Review. 2015;57(3):197-215
  13. 13. Wu MX, Gide E, Jewell R. The EBS management model: An effective measure of e-commerce satisfaction in SMEs in service industry from management perspective. Electronic Commerce Research. 2014;14(1):71-86
  14. 14. Rouibah K, Lowry PB, Almutairi L. Dimensions of business-to-consumer (B2C) systems success in Kuwait: Testing a modified Delone and Mclean IS success model in an e-commerce context. Journal of Global Information Management. 2015;23(3):41-71
  15. 15. Wang WT, Lu CC. Determinants of success for online insurance web sites: The contributions from system characteristics, product complexity, and trust. Journal of Organizational Computing and Electronic Commerce. 2014;24:1-35
  16. 16. Ramanathan R. E-commerce success criteria: Determining which criteria count most. Electronic Commerce Research. 2010;10:191-208
  17. 17. Gide E, Wu MX. A study to establish e-commerce business satisfaction model to measure e-commerce success in SMEs. International Journal of Electronic Customer Relationship Management. 2007;1(3):307-325
  18. 18. Ram J, Corkindale D, Wu ML. Implementation critical success factors (CSFs) for ERP: Do they contribute to implementation success and post-implementation performance? International Journal of Production Economics. 2013;144:157-174
  19. 19. Wang CB, Mao ZF, Johansen J, Luxhøj JT, O’Kane J, Wang J, Wang LG, Chen XZ. Critical success criteria for B2B e-commerce systems in Chinese Medical Supply Chain. International Journal of Logistics Research and Applications. 2016;19(2):105-124
  20. 20. Krathu W, Pichler C, Xiao GH, Werthner H, Neidhardt J, Zapletal M, Huemer C. Inter-organizational success factors: A cause and effect model. Information Systems and e-Business Management. 2015;13:553-593
  21. 21. Walker JH, Saffu K, Mazurek M. An empirical study of factors influencing e-commerce adoption/non-adoption in Slovakian SMEs. Journal of Internet Commerce. 2016;15(3):89-213
  22. 22. Lowe J, Maggioni I, Sands S. Critical success factors of temporary retail activations: A multi-actor perspective. Journal of Retailing and Consumer Services. 2018;40:74-81
  23. 23. Ahmad MM, Cuenca RP. Critical success factors for ERP implementation in SMEs. Robotics and Computer-Integrated Manufacturing. 2013;29:104-111
  24. 24. Ahmad SZ, Bakar ARA, Faziharudean TM, Zaki KAM. An empirical study of factors affecting e-commerce adoption among small- and medium-sized. Information Technology for Development. 2015;21(4):555-572
  25. 25. Tsironis LK, Gotzamani KD, Mastos TD. E-business critical success factors: Toward the development of an integrated success model. Business Process Management Journal. 2016;23(5):874-896
  26. 26. Neuman WL. Social Research Methods: Qualitative and Quantitative Approaches. USA: Pearson Education Inc; 2000
  27. 27. Cooper DR, Emory CW. Business Research Methods. USA: IRWIN; 1995
  28. 28. Sarantakos S. Social Research, 2nd edn, South Yarra: Macmillan Publishers Australia; 1998
  29. 29. Phipps PA, Butani SJ, Chun YI. Research on establishment-survey questionnaire design. Journal of Business & Economic Statistics. 1995;13(3):337-346
  30. 30. Rossi PH, Wright JD, Anderson AB. The Handbook of Survey Research. USA; 1983
  31. 31. Hopkins WG. Quantitative Research Design. May 4 last update-[online]. 2000. Available at URL: http://www.sportsci.org/jour/0001/wghdesign.html [Accessed: Mar 22 2005]
  32. 32. Garson D. Scales and Standard Measures. 2007. Viewed 24 Oct 2007, [online] http://www2.chass.ncsu.edu/garson/pa765/standard.htm
  33. 33. Wang YD, Emurian HH. Trust in e-commerce: Consideration of Interface design factors. Journal of Electronic Commerce in Organizations. 2005;3(4):42-60
  34. 34. Garson D. Sampling. 2006. Viewed 24 Oct 2007, [online] http://www2.chass.ncsu.edu/garson/pa765/sampling.htm
  35. 35. Barlett JE, Kotrlik JW, Higgins CC. Organizational research: Determining appropriate sample size in survey research. Information Technology, Learning, and Performance Journal. 2001;9(1):43-50
  36. 36. SPSS. Survey Research Using SPSS. 1998. Viewed 20 Mar 2006, [online] www.ts.vcu.edu/manuals/spss/guides/Survey%20Research.pdf
  37. 37. Sheehan K. E-mail survey response rates: A review. Journal of Computer-Mediated Communication. 2001;6(2)
  38. 38. Neuman WL. Social Research Methods: Qualitative and Quantitative Approaches. 5th ed. USA: Pearson Education; 2003
  39. 39. Wu MX, Jewell R, Gide E. An eyeball diagram: Illustrating the common CSFs in e-commerce business satisfaction for successful adoption of e-commerce systems by SMEs. International Journal of Electronic Customer Relationship Management. 2012;6(2):169-192
  40. 40. Malhotra N, Hall J, Shaw M, Oppenheim P. Marketing Research: An Applied Orientation. Australia: Pearson; 2006
  41. 41. Hair JF, Anderson RE, Tatham RL, Black WC. Multivariate Data Analysis. New Jersey: Prentice Hall; 1998
  42. 42. Churchill GA, Iacobucci D. Marketing Research Methodological Foundation. South-Western. USA: Thomson Learning; 2002
  43. 43. Zikmund WG, Babin BJ. Exploring Marketing Research. USA: Thomson Higher Education; 2007
  44. 44. Field A. Chapter 15: Reliability analysis on SPSS. In: Discovering Statistics Using SPSS. London: Sage; 2005
  45. 45. Garson GD. Reliability Analysis. 2008. Viewed 30 May 2008, [online]http://www2.chass.ncsu.edu/garson/PA765/reliab.htm
  46. 46. Kitchenham B, Pfleeger SL. Principles of survey research part 4: Questionnaire evaluation. Software Engineering Notes. 2002;27(3):20-23
  47. 47. Hair JF, Lukas BA, Miller KE, Bush P, Ortinau DJ. Marketing Research. Australia: McGraw-Hill Australia Pty Ltd; 2008
  48. 48. Field A. Chapter 15: Factor analysis on SPSS. In: Discovering Statistics Using SPSS. London: Sage; 2005
