
Computer and Information Science » Numerical Analysis and Scientific Computing » "Efficient Decision Support Systems - Practice and Challenges From Current to Future", book edited by Chiang Jao, ISBN 978-953-307-326-2, Published: September 9, 2011 under CC BY-NC-SA 3.0 license. © The Author(s).

# Identification of Key Drivers of Net Promoter Score Using a Statistical Classification Model

By Daniel R. Jeske, Terrance P. Callanan and Li Guo
DOI: 10.5772/16954


## 1. Introduction

Net Promoter Score (NPS) is a popular metric used in a variety of industries for measuring customer advocacy. Introduced by Reichheld (2003), NPS measures the likelihood that an existing customer will recommend a company to another prospective customer. NPS is derived from a single question that may be included as part of a larger customer survey. The single question asks the customer to use a scale of 0 to 10 to rate their willingness and intention to recommend the company to another person. Ratings of 9 and 10 are used to characterize so-called ‘promoters,’ ratings of 0 through 6 characterize ‘detractors,’ and ratings of 7 and 8 characterize ‘passives.’ The NPS is calculated as the percentage of respondents that are promoters minus the percentage of respondents that are detractors.
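The arithmetic above can be sketched in a few lines (a hypothetical illustration, not code from the chapter; the case study below uses a different, weighted 5-point scale):

```python
def nps(ratings):
    """Percentage of promoters (9-10) minus percentage of detractors (0-6)."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / n

# 4 promoters, 3 passives, 3 detractors out of 10 respondents
print(nps([9, 10, 9, 10, 7, 8, 7, 6, 3, 0]))  # -> 10.0
```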

The idea behind the labels given to customers is as follows. Promoters are thought to be extremely satisfied customers who see little to no room for improvement and consequently would offer persuasive recommendations that could lead to new revenue. Passive ratings, on the other hand, begin to hint at room for improvement, and consequently the effectiveness of a recommendation from a passive may be muted by explicit or implied caveats. Ratings at the low end are thought to be associated with negative experiences that might cloud a recommendation and likely scare off prospective new customers. Additional discussion of the long history of NPS can be found in Hayes (2008).

Some implementations of NPS methodology use reduced 5-point or 7-point scales that align with traditional Likert scales. However it is implemented, the hope is that movements in NPS are positively correlated with revenue growth for the company. While Reichheld’s research presented some evidence of that, other findings are not as corroborative (Keiningham et al., 2007). Regardless of whether there is a predictive relationship between NPS and revenue growth, implementing policies and programs within a company that improve NPS is an intuitively sensible thing to do [see, for example, Vavra (1997)]. A difficult and important question, however, is how to identify the key drivers of NPS. Calculating NPS alone does not do this.

This chapter is an illustrative tutorial that demonstrates how a statistical classification model can be used to identify key drivers of NPS. Our premise is that the classification model, the data it operates on, and the analyses it provides could usefully form components of a Decision Support System that can not only provide both snapshot and longitudinal analyses of NPS performance, but also enable analyses that can help suggest company initiatives aimed toward lifting the NPS.

We assume that the NPS question was asked as part of a larger survey that also probed customer satisfaction levels with respect to various dimensions of the company’s services. We develop a predictive classification model for customer advocacy (promoter, passive or detractor) as a function of these service dimensions. A novelty of our classification model is the optional use of constraints on the parameter estimates to enforce a monotonic property. We provide a detailed explanation of how to fit the model using the SAS software package and show how the fitted model can be used to develop company policies that have promise for improving the NPS. Our primary objective is to teach an interested practitioner how to use customer survey data together with a statistical classifier to identify key drivers of NPS. We present a case study, based on a real-life data collection and analysis project, to illustrate the step-by-step process of building the linkage between customer satisfaction data and NPS.

## 2. Logistic regression

In this section we provide a brief review of logistic and multinomial regression. Allen and Rao (2000) is a good reference that contains more detail than we provide, and additionally has example applications pertaining to customer satisfaction modeling.

### 2.1. Binomial logistic regression

The binomial logistic regression model assumes that the response variable is binary (0/1). This could be the case, for example, if a customer is simply asked the question “Would you recommend us to a friend?” Let $\{Y_i\}_{i=1}^{n}$ denote the responses from $n$ customers, assigning a “1” for Yes and a “0” for No. Suppose a number of other data items (covariates) are polled from the customer on the same survey instrument. These items might measure the satisfaction of the customer across a wide variety of service dimensions and might be measured on a traditional Likert scale. We let $\tilde{x}_i$ denote the vector of covariates for the $i$-th sampled customer and note that it reflects the use of dummy variable coding for covariates that are categorical in scale. For example, if the first covariate is measured on a 5-point Likert scale, its value is encoded into $\tilde{x}_i$ using five dummy variables $\{x_{1j}\}_{j=1}^{5}$, where $x_{1j}=1$ if and only if the Likert response is $j$.

The binomial logistic regression model posits that $Y_i$ is a Bernoulli random variable (equivalently, a binomial random variable with trial size equal to one) with success probability $p_i$, and further, that the success probability is tied to the covariates through the so-called link function $p_i=\exp(\alpha+\tilde{\beta}'\tilde{x}_i)/[1+\exp(\alpha+\tilde{\beta}'\tilde{x}_i)]$, where $\tilde{\beta}$ is a vector of model parameters (slopes). Continuing with the 5-point Likert scale example above, there would be five slopes $\{\beta_{1j}\}_{j=1}^{5}$ associated with the five dummy variables $\{x_{1j}\}_{j=1}^{5}$ used to code the first covariate.
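A minimal sketch of evaluating this link function for the dummy-coded 5-point Likert example (the function name and parameter values are illustrative, not from the chapter):

```python
import math

def success_prob(alpha, beta, x):
    """Logistic link: p = exp(alpha + beta'x) / (1 + exp(alpha + beta'x))."""
    eta = alpha + sum(b * xi for b, xi in zip(beta, x))
    return math.exp(eta) / (1.0 + math.exp(eta))

# A Likert response of 3 on a 5-point item sets the third indicator to 1
p = success_prob(-0.5, [1.9, 1.2, 0.6, 0.2, 0.0], [0, 0, 1, 0, 0])  # ~0.525
```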

Model fitting for the binomial logistic regression model entails estimating the parameters $\alpha$ and $\tilde{\beta}$ via maximum likelihood. The likelihood function for this model is

$$L(\alpha,\tilde{\beta})=\prod_{i=1}^{n}\frac{\exp[(\alpha+\tilde{\beta}'\tilde{x}_i)Y_i]}{1+\exp(\alpha+\tilde{\beta}'\tilde{x}_i)}\qquad(1)$$

and the maximum likelihood estimate (MLE) of $(\alpha,\tilde{\beta})$ is the value, say $(\hat{\alpha},\hat{\tilde{\beta}})$, that maximizes this function. Once the MLE is available, the influence of the covariates can be assessed through the magnitudes (relative to their standard errors) of the individual slope estimates. In particular, it can be ascertained which attributes of customer service have a substantial effect on making the probability of a 'Yes' response high.
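As a sketch of this maximum likelihood step, assuming synthetic data and a single continuous covariate for brevity (the chapter fits its models in SAS; here scipy's general-purpose optimizer stands in, maximizing (1) by minimizing the negative log-likelihood):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data from a known binomial logistic model
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
true_alpha, true_beta = 0.5, 1.5
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(true_alpha + true_beta * x))))

def neg_log_lik(theta):
    """-log L from (1); logaddexp(0, eta) = log(1 + exp(eta)) avoids overflow."""
    alpha, beta = theta
    eta = alpha + beta * x
    return -np.sum(y * eta - np.logaddexp(0.0, eta))

res = minimize(neg_log_lik, x0=[0.0, 0.0], method="BFGS")
alpha_hat, beta_hat = res.x  # should land near (0.5, 1.5)
```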

### 2.2. Multinomial logistic regression

Suppose now that the outcome variable has more than two categories. For example, suppose the responses $\{Y_i\}_{i=1}^{n}$ are measured on the 11-point NPS scale. The multinomial logistic model seeks to represent the probability of each response as a function of the covariates. Since each response is an ordinal categorical variable taking values in the set $\{0,1,\ldots,10\}$, we consider a multinomial logistic regression model with 10 cumulative link functions:

$$\Pr(Y_i\le j)=\frac{\exp(\alpha_j+\tilde{\beta}'\tilde{x}_i)}{1+\exp(\alpha_j+\tilde{\beta}'\tilde{x}_i)},\qquad j=0,1,\ldots,9\qquad(2)$$

where $\{\alpha_j\}_{j=0}^{9}$ are intercept parameters and $\tilde{\beta}$ is again a vector of slope parameters. In order to effect the required non-decreasing behavior (relative to $j$) of the right-hand sides of the link functions, the constraint $\alpha_0\le\alpha_1\le\cdots\le\alpha_9$ is imposed on the intercept parameters. Starting with $\Pr(Y_i=0)=\Pr(Y_i\le 0)$, and then differencing the expressions in (2) as in $\Pr(Y_i=j)=\Pr(Y_i\le j)-\Pr(Y_i\le j-1)$, $j=1,\ldots,10$, we obtain expressions for the individual probabilities of the response as a function of the intercept and slope parameters. Defining the responses $\{Y_{ij}\}_{j=0}^{10}$ to be one (zero) if and only if $Y_i=j$ ($Y_i\ne j$), the likelihood function for this model is

$$L(\tilde{\alpha},\tilde{\beta})=\prod_{i=1}^{n}\prod_{j=0}^{10}\Pr(Y_i=j)^{Y_{ij}}\qquad(3)$$

and the MLE of $(\tilde{\alpha},\tilde{\beta})$ is the value, say $(\hat{\tilde{\alpha}},\hat{\tilde{\beta}})$, that maximizes this function. Once the MLE is available, the magnitudes of the slope estimates (relative to their standard errors) can be used to identify the covariates that push the distribution of the response towards 9s and 10s. We note that the constraint on the intercepts is a standard one. In the next section, we will discuss an additional and novel constraint that can optionally be imposed on the slope parameters.
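The differencing step behind (2) can be sketched as follows (a hypothetical illustration; the intercepts must be non-decreasing for the differences to be valid probabilities):

```python
import math

def category_probs(alphas, beta_dot_x):
    """Turn the cumulative logits of (2) into individual category
    probabilities by differencing: P(Y=j) = P(Y<=j) - P(Y<=j-1).
    `alphas` must be sorted non-decreasing; P(Y <= top category) = 1."""
    cum = [math.exp(a + beta_dot_x) / (1.0 + math.exp(a + beta_dot_x))
           for a in alphas] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Ten sorted intercepts give the 11 NPS-scale probabilities P(Y=0),...,P(Y=10)
probs = category_probs([-4.5, -4.0, -3.5, -3.0, -2.5, -2.0, -1.0, 0.0, 1.0, 2.0], 0.0)
```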

## 3. Case study

Carestream Health, Inc. (CSH) was formed in 2007 when Onex Corporation of Toronto, Canada purchased Eastman Kodak Company’s Health Group and renamed the business Carestream Health. CSH is an annual \$2.5B company and a world leader in medical imaging (digital and film), healthcare information systems, dental imaging and dental practice management software, molecular imaging and non-destructive testing. Its customers include medical and dental doctors and staff and healthcare IT professionals, in settings ranging from small offices and clinics to large hospitals and regional and national healthcare programs. A major company initiative is to create a sustainable competitive advantage by delivering the absolute best customer experience in the industry. Customer recommendations are key to growth in the digital medical space, and no one has been able to earn them consistently well. The foundation for taking advantage of this opportunity is to understand what is important to customers, measure their satisfaction and likelihood to recommend based on their experiences, and drive improvement.

While descriptive statistics such as trend charts, bar charts, averages and listings of customer verbatim comments are helpful in identifying opportunities to improve the Net Promoter Score (NPS), they are limited in their power. First, they lack quantitative measurements of the correlation between elements of event satisfaction and NPS. As a consequence, it is not clear what impact a given process improvement will have on a customer’s likelihood to recommend. Second, they lack the ability to view multi-dimensional relationships: they are limited to single-factor inferences, which may not sufficiently describe the complex relationships between elements of a customer’s experience and their likelihood to recommend.

This section summarizes the use of multinomial logistic regression analyses that were applied to 5056 independent customer experience surveys from Jan 2009 – Jan 2010. Each survey included a question that measured (on a 5-point Likert scale) how likely the customer would be to recommend that colleagues purchase imaging solutions from CSH. Five other questions measured the satisfaction level (on a 7-point Likert scale) of the customer with CSH services obtained in response to an equipment or software problem. Key NPS drivers are revealed through the multinomial logistic regression analyses, and improvement scenarios for specific geographic and business combinations are mapped out. The ability to develop a quantitative model measuring the impact of potential process improvements on NPS significantly enhances the value of the survey data.

### 3.1. CSH Customer survey data

The 5-point Likert response to the question about willingness to recommend is summarized in Table 1 below. CSH calculates a unique net promoter score from responses on this variable using the formula $NPS=\sum_{i=1}^{5}w_i\hat{p}_i$, where $\tilde{w}=(-1.25,-0.875,-0.25,0.75,1.0)$ is a vector of weights and where $\hat{p}_i$ is the estimated proportion of customers whose recommendation score is $i$. Two interesting characteristics of the weight vector are, first, that the penalty for a 1 (respectively, 2) exceeds the benefit of a 5 (respectively, 4), and second, that the negative weight for a neutral score is meant to drive policies toward delighting customers.
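The weighted score can be sketched as follows, taking the weights for scores 1–3 to be negative (consistent with the stated penalties and the negative neutral weight). As a check, plugging in the actual response totals from Table 4 (20, 88, 966, 2082, 1900) reproduces the empirical 61.7% reported in section 3.4:

```python
# Sketch of the CSH weighted score NPS = sum_i w_i * p_i, expressed as a percentage
weights = [-1.25, -0.875, -0.25, 0.75, 1.0]

def weighted_nps(counts):
    """counts[i-1] = number of customers with recommendation score i."""
    n = sum(counts)
    return 100.0 * sum(w * c / n for w, c in zip(weights, counts))

# The actual response totals for scores 1..5 (row totals of Table 4)
print(round(weighted_nps([20, 88, 966, 2082, 1900]), 1))  # -> 61.7
```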

| Recommendation | Interpretation |
| --- | --- |
| 1 | Without being asked, I will advise others NOT to purchase from you |
| 2 | Only if asked, I will advise others NOT to purchase from you |
| 3 | I am neutral |
| 4 | Only if asked, I will recommend others TO purchase from you |
| 5 | Without being asked, I will recommend others TO purchase from you |

### Table 1.

Meaning of Each Level of Recommendation Score

Let $Y\in\{1,2,3,4,5\}$ be a random variable denoting the willingness of a particular customer to recommend CSH. The values $\{p_i\}_{i=1}^{5}$ represent the theoretical probabilities of the possible values for $Y$; that is, $p_i=\Pr(Y=i)$. The multinomial logistic regression model treats the customer demographic variables and the customer satisfaction ratings as the covariates $\tilde{x}_i$, linking their values to the probability distribution of $Y$ so that their influence on the values $\{p_i\}_{i=1}^{5}$ can be ascertained. Since the expected value of NPS is a function of these values, the end result is a model that links NPS to what customers perceive to be important. This linkage can then be exploited to determine targeted pathways for improving NPS via improvement plans that are customer-driven.

The demographic covariates include the (global) region code, country code, business code and the customer job title. The demographic covariates are coded using the standard dummy variable technique. For example, region code utilizes 7 binary variables $\{RC_i\}_{i=1}^{7}$, where

$$RC_i=\begin{cases}1 & \text{if the customer falls in the } i\text{-th global region}\\0 & \text{otherwise.}\end{cases}\qquad(4)$$

Country code utilizes similar dummy variables, but because countries are nested within regions we use the notation $\{CC_j^{(i)}\}$, $i=1,\ldots,7$, $j=1,\ldots,n_i$, where $n_i$ is the number of country codes within the $i$-th region and where

$$CC_j^{(i)}=\begin{cases}1 & \text{if the customer falls in the } j\text{-th country within the } i\text{-th region}\\0 & \text{otherwise.}\end{cases}\qquad(5)$$

In the data set we have $\{n_i\}_{i=1}^{7}=\{3,1,1,7,5,4,2\}$. Business code utilizes two dummy variables $\{BC_i\}_{i=1}^{2}$, where

$$BC_i=\begin{cases}1 & \text{if the customer is aligned with the } i\text{-th business code}\\0 & \text{otherwise.}\end{cases}\qquad(6)$$

Finally, job title utilizes 10 dummy variables $\{JT_i\}_{i=1}^{10}$, where

$$JT_i=\begin{cases}1 & \text{if the customer has the } i\text{-th job title}\\0 & \text{otherwise.}\end{cases}\qquad(7)$$

The customer satisfaction covariates are also coded using the dummy variable scheme. The data on these covariates are the responses to the survey questions identified as q79, q82a, q82b, q82d and q82f. These questions survey the customer satisfaction on ‘Overall satisfaction with the service event,’ ‘Satisfaction with CSH knowledge of customer business and operations,’ ‘Satisfaction with meeting customer service response time requirements,’ ‘Satisfaction with overall service communications,’ and ‘Satisfaction with skills of CSH employees,’ respectively.

Survey questions q82c and q82e, which survey satisfaction with ‘Time it took to resolve the problem once work was started’ and ‘Attitude of CSH employees,’ were also considered as covariates, but they did not prove statistically significant in the model. Their absence from the model does not necessarily imply they are unimportant drivers of overall satisfaction with CSH; more likely, their influence is correlated with the other dimensions of satisfaction that are in the model. Each customer satisfaction covariate is scored by customers on a 7-point Likert scale (where ‘1’ indicates the customer is “extremely dissatisfied” and ‘7’ indicates “extremely satisfied”), and thus each utilizes 7 dummy variables in the coding scheme. We denote these dummy variables as $\{q79_i\}_{i=1}^{7}$, $\{q82a_i\}_{i=1}^{7}$, $\{q82b_i\}_{i=1}^{7}$, $\{q82d_i\}_{i=1}^{7}$, and $\{q82f_i\}_{i=1}^{7}$, respectively, and they are defined as follows:

$$q79_i=\begin{cases}1 & \text{if the customer response to q79 is } i\\0 & \text{otherwise,}\end{cases}\qquad(8)$$

$$q82a_i=\begin{cases}1 & \text{if the customer response to q82a is } i\\0 & \text{otherwise,}\end{cases}\qquad(9)$$

$$q82b_i=\begin{cases}1 & \text{if the customer response to q82b is } i\\0 & \text{otherwise,}\end{cases}\qquad(10)$$

$$q82d_i=\begin{cases}1 & \text{if the customer response to q82d is } i\\0 & \text{otherwise,}\end{cases}\qquad(11)$$

$$q82f_i=\begin{cases}1 & \text{if the customer response to q82f is } i\\0 & \text{otherwise.}\end{cases}\qquad(12)$$
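The dummy coding in (8)–(12) amounts to one-hot encoding of the Likert response; a minimal sketch (function name is illustrative):

```python
def dummy_code(response, levels=7):
    """Non-full-rank dummy coding as in (8)-(12): a k-point Likert
    response j becomes a k-vector with a single 1 in position j."""
    return [1 if response == j else 0 for j in range(1, levels + 1)]

dummy_code(3)            # -> [0, 0, 1, 0, 0, 0, 0]
dummy_code(5, levels=5)  # -> [0, 0, 0, 0, 1]
```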

Assembling all of the covariates together, we then have a total of 77 covariates in $\tilde{x}$. Thus, the vector of slopes $\tilde{\beta}$ in the link equations has dimension $77\times 1$. Combined with the 4 intercept parameters $\{\alpha_i\}_{i=1}^{4}$, the model we have developed has a total of 81 parameters. We note it is conceivable that interactions between the defined covariates could be important contributors to the model. However, interaction effects were difficult to assess with the current data set because of confounding issues. As the data set grows over time, it is conceivable that the confounding issues could be resolved and interaction effects could be tested for statistical significance.

### 3.2. Model fitting and interpretation

The SAS code for obtaining maximum likelihood estimates (MLEs) for the model parameters $\{\alpha_i\}_{i=1}^{4}$ and $\tilde{\beta}$ is shown in Appendix A. Lines 1-4 read in the data, which is stored as a space-delimited text file ‘indata.txt’ located in the indicated directory. All of the input variables on the file are coded as integer values. The PROC LOGISTIC section of the code (lines 5-10) directs the fitting of the multinomial logistic regression model. The class statement specifies that all of the covariates are categorical in nature, and the param=glm option selects the dummy variable coding scheme defined in the previous section. Table 2 summarizes the portion of the SAS output that reports the maximum likelihood estimates for $\{\alpha_i\}_{i=1}^{4}$ and $\tilde{\beta}$. Note that the zero for the slope of the last level of each covariate is a structural zero resulting from the non-full-rank dummy variable coding used when fitting the model.

| Parm. | Est. | Parm. | Est. | Parm. | Est. | Parm. | Est. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| α1 | -7.80 | β18 (CC6(4)) | -.41 | β39 (JT7) | -.15 | β60 (q82b4) | .53 |
| α2 | -5.69 | β19 (CC7(4)) | 0 | β40 (JT8) | -.28 | β61 (q82b5) | .14 |
| α3 | -2.34 | β20 (CC1(5)) | -.11 | β41 (JT9) | -.62 | β62 (q82b6) | .23 |
| α4 | -.045 | β21 (CC2(5)) | .092 | β42 (JT10) | 0 | β63 (q82b7) | 0 |
| β1 (RC1) | .11 | β22 (CC3(5)) | -1.63 | β43 (q79a1) | 1.92 | β64 (q82d1) | 1.27 |
| β2 (RC2) | .59 | β23 (CC4(5)) | .11 | β44 (q79a2) | 2.09 | β65 (q82d2) | .92 |
| β3 (RC3) | 1.23 | β24 (CC5(5)) | 0 | β45 (q79a3) | 1.43 | β66 (q82d3) | .67 |
| β4 (RC4) | -.13 | β25 (CC1(6)) | -.13 | β46 (q79a4) | .84 | β67 (q82d4) | .77 |
| β5 (RC5) | .59 | β26 (CC2(6)) | .34 | β47 (q79a5) | .58 | β68 (q82d5) | .44 |
| β6 (RC6) | .40 | β27 (CC3(6)) | -.23 | β48 (q79a6) | .19 | β69 (q82d6) | .19 |
| β7 (RC7) | 0 | β28 (CC4(6)) | 0 | β49 (q79a7) | 0 | β70 (q82d7) | 0 |
| β8 (CC1(1)) | -.45 | β29 (CC1(7)) | .42 | β50 (q82a1) | 2.68 | β71 (q82f1) | .85 |
| β9 (CC2(1)) | -.60 | β30 (CC2(7)) | 0 | β51 (q82a2) | .71 | β72 (q82f2) | 1.69 |
| β10 (CC3(1)) | 0 | β31 (BC1) | -.21 | β52 (q82a3) | 1.05 | β73 (q82f3) | 1.08 |
| β11 (CC1(2)) | 0 | β32 (BC2) | 0 | β53 (q82a4) | .89 | β74 (q82f4) | .69 |
| β12 (CC1(3)) | 0 | β33 (JT1) | -.097 | β54 (q82a5) | .31 | β75 (q82f5) | .59 |
| β13 (CC1(4)) | .78 | β34 (JT2) | -.25 | β55 (q82a6) | .14 | β76 (q82f6) | .16 |
| β14 (CC2(4)) | -.53 | β35 (JT3) | -.20 | β56 (q82a7) | 0 | β77 (q82f7) | 0 |
| β15 (CC3(4)) | .83 | β36 (JT4) | -.48 | β57 (q82b1) | 1.31 | | |
| β16 (CC4(4)) | -.050 | β37 (JT5) | .11 | β58 (q82b2) | .81 | | |
| β17 (CC5(4)) | .24 | β38 (JT6) | -.47 | β59 (q82b3) | .75 | | |

### Table 2.

Maximum Likelihood Estimates of $\{\alpha_i\}_{i=1}^{4}$ and $\tilde{\beta}$

The section of the PROC LOGISTIC output entitled ‘Type-3 Analysis of Effects’ characterizes the statistical significance of the covariates through p-values obtained by referencing a Wald chi-square test statistic to a corresponding null chi-square distribution. Table 3 shows the chi-square tests and the corresponding p-values, and it is seen that all covariate groups are highly significant contributors in the model.

One way to assess model adequacy for multinomial logistic regression is to use the model to predict Y and then examine how well the predicted values match the true values of Y. Since the output of the model for each customer is an estimated probability distribution for Y, a natural predictor of Y is the mode of this distribution. We note that this predictor considers equal cost for all forms of prediction errors. More elaborate predictors could be derived by assuming a more complex cost model where, for example, the cost of predicting 5 when the actual value is 1 is higher than the cost of predicting 5 when the actual value is 4. Table 4, the so-called confusion matrix of the predictions, displays the cross classification of all 5056 customers based on their actual value of Y and the model-predicted value of Y.

| Covariate Group | Degrees of Freedom | Wald Statistic | p-value |
| --- | --- | --- | --- |
| RC | 6 | 41.2 | < .0001 |
| CC | 16 | 40.9 | < .01 |
| BC | 1 | 7.9 | < .01 |
| JT | 9 | 43.7 | < .0001 |
| q79 | 6 | 84.8 | < .0001 |
| q82a | 6 | 56.5 | < .0001 |
| q82b | 6 | 34.4 | < .0001 |
| q82d | 6 | 34.8 | < .0001 |
| q82f | 6 | 39.9 | < .0001 |

### Table 3.

Statistical Significance of Covariate Groups

| Actual Y | Predicted Y = 1 | 2 | 3 | 4 | 5 | Total |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 3 | 2 | 7 | 4 | 4 | 20 |
| 2 | 3 | 8 | 48 | 22 | 7 | 88 |
| 3 | 2 | 3 | 342 | 486 | 133 | 966 |
| 4 | 0 | 0 | 126 | 1233 | 723 | 2082 |
| 5 | 0 | 0 | 39 | 705 | 1156 | 1900 |
| Total | 8 | 13 | 562 | 2450 | 2023 | 5056 |

### Table 4.

Confusion Matrix of Multinomial Logistic Regression Model
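The mode predictor and the tally behind a confusion matrix like Table 4 can be sketched as follows (hypothetical probabilities; `mode_predict` and `confusion_matrix` are illustrative names, not from the chapter's SAS code):

```python
import numpy as np

def mode_predict(prob_matrix):
    """Predict each customer's Y as the mode of the estimated distribution
    over categories 1..5 (i.e., equal cost for all prediction errors)."""
    return np.argmax(prob_matrix, axis=1) + 1

def confusion_matrix(actual, predicted, k=5):
    """Cross-classify actual vs. predicted Y into a k-by-k table."""
    cm = np.zeros((k, k), dtype=int)
    for a, p in zip(actual, predicted):
        cm[a - 1, p - 1] += 1
    return cm

# Two customers' estimated distributions over Y = 1..5
probs = np.array([[0.10, 0.20, 0.40, 0.20, 0.10],
                  [0.05, 0.05, 0.10, 0.30, 0.50]])
mode_predict(probs)  # -> array([3, 5])
```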

A perfect model would have a diagonal confusion matrix, indicating the predicted value for each customer coincided identically with the true value. Consider the rows of Table 4 corresponding to Y=4 and Y=5. These two rows account for almost 80% of the customers in the sample. It can be seen that in both cases, the predicted value coincides with the actual value about 60% of the time. Neither of these two cases predicts Y=1 or Y=2, and only 4% of the time is Y=3 predicted. The mean values of the predicted Y when Y=4 and Y=5 are 4.28 and 4.59, respectively. The 7% positive bias for the case Y=4 is roughly offset by the 8.2% negative bias for the case Y=5.

Looking at the row of Table 4 corresponding to Y=3, we see that 86% of the time the predicted Y is within 1 of the actual Y. The mean value of the predicted Y is 3.77, indicating a 26% positive bias. Considering the rows corresponding to Y=1 and Y=2, where only about 2% of the customers reside, we see the model struggles to make accurate predictions, often over-estimating the actual value of Y. A hint as to the explanation for the noticeable over-estimation associated with the Y=1, Y=2 and Y=3 customers is revealed by examining their responses to the covariate questions. As just one example, the respective mean scores on question q79 (“Overall satisfaction with the service event”) are 3.8, 4.1 and 5.2. It seems a relatively large number of customers that give a low response to Y are inclined to simultaneously give favorable responses to the covariate questions on the survey. Although this might be unexpected, it can possibly be explained by the fact that the covariate questions are relevant to the most recent service event whereas Y is based on a customer’s cumulative experience.

Overall, Table 4 reflects significant lift afforded by the multinomial logistic regression model for predicting Y. For example, a model that utilized no covariate information would have a confusion matrix whose rows were constant, summing to the row total. In sum, we feel the accuracy of the model is sufficient to learn something about what drives customers to give high responses to Y, though perhaps not sufficient to learn as much about what drives customers to give low responses to Y.

Figure 1 is a graphical display of the slopes for each of the customer satisfaction covariates. The larger the coefficient value, the more detrimental the response level is to NPS. The y-axis is therefore labeled as ‘demerits.’

### Figure 1.

MLEs of Slopes for 7-Point Likert Scale Customer Satisfaction Covariates

In view of the ordinal nature of the customer satisfaction covariates, the slopes, which represent the effect of the Likert scale levels, should decrease monotonically. That is, the penalty for a ‘satisfied’ covariate value should be less than or equal to that of a ‘dissatisfied’ covariate value. As such, it would be logical to have the estimated values of the slopes display the monotone decreasing trend as the response level of the covariates ascends. Figure 1 shows that the unconstrained MLEs for the slopes associated with the customer satisfaction covariates nearly satisfy the desired monotone property, but not exactly. The aberrations are due to data deficiencies or minor model inadequacies and can be resolved by using a constrained logistic regression model introduced in the next section.

### 3.3. Constrained logistic regression

Consider the situation where the $i$-th covariate is ordinal in nature, perhaps because it is measured on a $k$-point Likert scale. The CSH data is a good illustration of this situation, since all the customer satisfaction covariates are ordinal variables measured on a 7-point Likert scale. Let the corresponding group of $k$ slopes for this covariate be denoted by $\{\beta_{ij}\}_{j=1}^{k}$. In order to reflect the information that the covariate levels are ordered, it is quite natural to impose the monotone constraint $\beta_{i1}\ge\beta_{i2}\ge\cdots\ge\beta_{ik}$ on the parameter space. Adding these constraints when finding the MLEs complicates the required maximization of the likelihoods in (1) and (3). In this section, however, we will show how this can be done in SAS with PROC NLP.
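PROC NLP handles this in SAS (Appendix B); as a language-neutral sketch of the same idea, a monotone-constrained maximization can be set up with scipy's SLSQP solver. The example below uses synthetic data and the simpler binomial model of section 2.1 with one 4-level ordinal covariate, full-rank coded with the top level as reference; all names and values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic binomial logistic data; 4-level ordinal covariate, full-rank coded
rng = np.random.default_rng(1)
levels = rng.integers(1, 5, size=800)          # Likert responses 1..4
X = np.eye(4)[levels - 1][:, :3]               # dummies for levels 1..3
true_theta = np.array([0.2, 1.5, 0.9, 0.3])    # alpha, then b1 >= b2 >= b3 >= 0
eta = true_theta[0] + X @ true_theta[1:]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

def nll(theta):
    """Negative log-likelihood of the binomial logistic model (1)."""
    e = theta[0] + X @ theta[1:]
    return -np.sum(y * e - np.logaddexp(0.0, e))

# Monotone constraints b1 >= b2 >= b3 >= 0; the reference level's slope is
# an implied 0, as in the chapter's full-rank parameterization.
cons = [{"type": "ineq", "fun": lambda t, i=i: t[i] - t[i + 1]} for i in (1, 2)]
cons.append({"type": "ineq", "fun": lambda t: t[3]})
fit = minimize(nll, np.zeros(4), method="SLSQP", constraints=cons)
theta_hat = fit.x  # constrained MLE: [alpha, b1, b2, b3]
```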

In order to simplify our use of PROC NLP, it is convenient to work with a full-rank parameterization of the logistic regression model. Because countries are nested within regions, a linear dependency exists between the dummy variables corresponding to regions and countries within regions. We can eliminate the linear dependency by removing region from the model and specifying country to be a non-nested factor. The result of this reparameterization is that instead of 6 degrees of freedom in the model for regions and 16 degrees of freedom for countries nested within regions, we equivalently have 22 degrees of freedom for countries. For the same purpose, we also redefine the dummy variable coding used for the other categorical and ordinal covariates by using a full-rank parameterization scheme. In particular, we use k-1 dummy variables (rather than k) to represent a k-level categorical or ordinal variable. With the full-rank parameterization, the highest level of customer satisfaction has a slope parameter that is fixed at 0. Lines 3-10 in the SAS code shown in Appendix B set up the full-rank parameterization of the logistic regression model.

### Figure 2.

Constrained MLEs of Slopes for 7-Point Likert Scale Customer Satisfaction Covariates

Beginning with line 12 of the SAS code, PROC NLP is used to derive the MLEs of the parameters under the constrained parameter space. The ‘max’ statement (line 13) indicates that the objective function is the log-likelihood of the model and that it is to be maximized. The maximization is carried out using a Newton-Raphson algorithm, and the ‘parms’ statement (line 14) specifies initial values for the intercept and slope parameters. The SAS variables bqj, baj, bbj, bdj and bfj symbolize the slope parameters corresponding to the j-th response level of the customer satisfaction covariates q79, q82a, q82b, q82d and q82f. Similarly, bccj and bbcj denote the slopes associated with the different countries and business codes, with an analogous set of variables for the job titles. The ‘bounds’ and ‘lincon’ statements (lines 15-21) jointly specify the monotone constraints on the intercept parameters and on the slopes of the customer satisfaction covariates. Lines 22-29 define the log-likelihood contribution of each customer which, for the i-th customer, is given by

$$\mathrm{loglik}_i(\tilde{\alpha},\tilde{\beta})=\sum_{j=1}^{5}Y_{ij}\log\Pr(Y_i=j)\qquad(13)$$
| Covariate | Unconstrained MLE | Constrained MLE | Covariate | Unconstrained MLE | Constrained MLE |
| --- | --- | --- | --- | --- | --- |
| q79a1 | 1.92 | 2.01 | q82d1 | 1.27 | 1.27 |
| q79a2 | 2.09 | 2.01 | q82d2 | .92 | .96 |
| q79a3 | 1.43 | 1.43 | q82d3 | .67 | .73 |
| q79a4 | .84 | .82 | q82d4 | .77 | .73 |
| q79a5 | .58 | .57 | q82d5 | .44 | .42 |
| q79a6 | .19 | .19 | q82d6 | .19 | .19 |
| q79a7 | Structural 0 | Implied 0 | q82d7 | Structural 0 | Implied 0 |
| q82a1 | 2.68 | 2.56 | q82f1 | .85 | 1.28 |
| q82a2 | .71 | .98 | q82f2 | 1.69 | 1.28 |
| q82a3 | 1.05 | .98 | q82f3 | 1.08 | 1.07 |
| q82a4 | .89 | .88 | q82f4 | .69 | .69 |
| q82a5 | .31 | .30 | q82f5 | .59 | .58 |
| q82a6 | .14 | .14 | q82f6 | .16 | .16 |
| q82a7 | Structural 0 | Implied 0 | q82f7 | Structural 0 | Implied 0 |
| q82b1 | 1.31 | 1.28 | | | |
| q82b2 | .81 | .85 | | | |
| q82b3 | .75 | .77 | | | |
| q82b4 | .53 | .56 | | | |
| q82b5 | .14 | .21 | | | |
| q82b6 | .23 | .21 | | | |
| q82b7 | Structural 0 | Implied 0 | | | |

### Table 5.

Unconstrained and Constrained Slope MLEs of Customer Satisfaction Covariates

Table 5 provides a side-by-side comparison of the constrained and unconstrained MLEs for the slopes of the customer satisfaction covariates, and Figure 2 is a plot that shows the monotone behavior of the constrained estimates. There is very little difference between the unconstrained and constrained MLEs for the demographic covariates. Recall that for the unconstrained MLEs, the zero for the slope of the last level of each covariate is a structural zero resulting from the non-full-rank dummy variable coding used when fitting the model. In the case of the constrained MLEs, the slopes of the last levels of the covariates are implied zeros resulting from the full-rank dummy variable coding used when fitting the model. Table 5 shows that incorporating the constraints does not lead to a substantial change in the estimated slopes. In an indirect way, this provides a sanity check on the proposed model. We will use the constrained estimates for the remainder of the case study.

### 3.4. Model utility

A purely empirical way to compute NPS is to use the observed distribution (based on all 5,056 survey responses) of $Y$ for $\tilde{p}$ in the formula $NPS=\sum_{i=1}^{5}w_i p_i$, and this yields 61.7%. Consider now filling out the covariate vector $\tilde{x}$ with the sample frequencies for the observed demographic covariates and with the observed sample distributions for the sub-element covariates. Using this $\tilde{x}$ with the model yields a predicted NPS of 65.7%. The close agreement between the data-based and model-based NPS scores is additional evidence that the model fits the data well, and it also instills confidence in using the model to explore “What If?” scenarios as outlined in Figure 3. Figure 3 defines sixteen “What If?” scenarios, labels them with brief descriptions, and then shows the expected NPS score if the scenario is implemented. Table 6 contains a longer description of how each scenario was implemented. Each scenario can be evaluated on the basis of how much boost it gives to the expected NPS as well as the feasibility of establishing a company program that could make the hypothetical scenario real.
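A sketch of how a fitted model of this form converts a covariate profile into an expected NPS, combining the cumulative-link category probabilities with the CSH weights. The intercepts in the example are the MLEs from Table 2; treating $\tilde{\beta}'\tilde{x}=0$ as the profile is an illustrative simplification, not a result from the chapter:

```python
import math

WEIGHTS = [-1.25, -0.875, -0.25, 0.75, 1.0]

def expected_nps(alphas, beta_dot_x):
    """Expected NPS implied by the model: sum_j w_j * P(Y=j), with the five
    category probabilities obtained by differencing the four cumulative
    logits of the section 3.1 model."""
    cum = [math.exp(a + beta_dot_x) / (1.0 + math.exp(a + beta_dot_x))
           for a in alphas] + [1.0]
    probs = [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, 5)]
    return 100.0 * sum(w * p for w, p in zip(WEIGHTS, probs))

# Fewer demerits (smaller beta'x) shift mass toward 5s and raise the score
expected_nps([-7.8, -5.69, -2.34, -0.045], 0.0)
```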

We have illustrated potential pathways for improving the overall NPS score, but the same can be done with specific sub-populations in mind. For example, if the first region were under study, one could simply adjust the demographic covariates as illustrated in section 3.4.2 before implementing the scenario adjustments.

### Figure 3.

Predicted NPS for Different Scenarios

| Scenario | Description |
| --- | --- |
| 1 | For each of q79, q82a, q82b, q82d and q82f, alter the distribution of responses by reassigning the probability of a neutral response (4) equally to the probabilities of responses (5), (6) and (7) |
| 2 | Replace the response distribution for sub-elements q82a, q82b and q82d with what was observed for q82f (the sub-element with the most favorable response distribution) |
| 3 | Make the response distribution for each of q82a, q82b, q82d and q82f perfect by placing all the probability on response (7) |
| 4 | Improve the response distribution for each of q82a, q82b, q82d and q82f by placing all the probability equally on responses (6) and (7) |
| 5 | Improve the response distribution for q79 by placing all the probability on response (7) |
| 6 | Improve the response distribution for q82a by placing all the probability on response (7) |
| 7 | Improve the response distribution for q82b by placing all the probability on response (7) |
| 8 | Improve the response distribution for q82d by placing all the probability on response (7) |
| 9 | Improve the response distribution for q82f by placing all the probability on response (7) |
| 10 | Improve the response distribution for each of q79, q82a, q82b, q82d and q82f by distributing the probability of response (1) equally on responses (2)-(7) |
| 11 | Improve the response distribution for each of q79, q82a, q82b, q82d and q82f by distributing the sum of the probabilities of responses (1) and (2) equally on responses (3)-(7) |
| 12 | Improve the response distribution for each of q79, q82a, q82b, q82d and q82f by distributing the sum of the probabilities of responses (1), (2) and (3) equally on responses (4)-(7) |
| 13 | Simulate making Business Code 2 as good as Business Code 1 by setting BC = (1, 0) |
| 14 | Improve the response distributions of q79, q82a, q82b, q82d and q82f by replacing them with the average across the different Region Codes, excluding the worst Region Code |
| 15 | Improve the response distributions of q79, q82a, q82b, q82d and q82f by replacing them with the average across the different Region Codes, excluding the two worst Region Codes |
| 16 | Improve the response distributions of q79, q82a, q82b, q82d and q82f by replacing them all with the observed, respective, distributions for Region Code 2 (the region with the most favorable response distribution) |

### Table 6.

Implementation Detail for Each Scenario
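
As a concrete illustration of how the reallocation rules in Table 6 operate, the sketch below (plain Python, with an entirely hypothetical starting response distribution) implements Scenarios 1 and 12: the probability mass of the listed source responses is moved equally onto the target responses.

```python
def reallocate(dist, sources, targets):
    """Move the total probability of `sources` equally onto `targets`.
    `dist` maps Likert responses 1-7 to their probabilities."""
    moved = sum(dist[s] for s in sources)
    out = {r: (0.0 if r in sources else p) for r, p in dist.items()}
    for t in targets:
        out[t] += moved / len(targets)
    return out

# Hypothetical response distribution for one covariate, e.g. q82a
q82a = {1: 0.05, 2: 0.05, 3: 0.10, 4: 0.20, 5: 0.25, 6: 0.20, 7: 0.15}

# Scenario 1: neutral response (4) reassigned equally to (5), (6), (7)
s1 = reallocate(q82a, sources=[4], targets=[5, 6, 7])

# Scenario 12: sum of responses (1)-(3) distributed equally on (4)-(7)
s12 = reallocate(q82a, sources=[1, 2, 3], targets=[4, 5, 6, 7])
```

Either transformation leaves the distribution valid (probabilities still sum to one) while shifting mass toward the favorable end of the scale.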

## 4. Discussion

Alternative measures of customer advocacy include customer satisfaction (CSAT) and the Customer Effort Score (CES) (Dixon et al., 2010). CES is measured on a 5-point scale and is intended to capture the effort a customer must expend to resolve an issue through a contact-center or self-service channel. Dixon et al. (2010) compared the power of CSAT, NPS and CES to predict service customers' intention to do repeat business, increase their spending, and speak positively about the company. They concluded that CSAT was a relatively poor predictor and CES the strongest, with NPS ranking in the middle.

The choice of customer advocacy measure depends on many factors, such as the type of company-to-customer relationship, the degree to which recommendations (for or against a company) influence a purchase decision, and whether the measure will be complemented by other customer feedback. Gaining an in-depth understanding of customers' experiences, and of how to improve them, may require multiple indicators. In the end, what is most critical is the action taken to drive improvements that customers value.

Our case study demonstrates the feasibility of using a multinomial logistic regression model to identify key drivers of NPS, and the same methodology could clearly be employed with alternative measures of customer advocacy. Improvement teams at CSH have used this model to prioritize projects according to their expected impact on NPS. A novel aspect of our model development was the imposition of monotone constraints on the slope parameters of the ordinal covariates. Our illustrative SAS code showing how to impose these constraints on the maximum likelihood estimates should help practitioners who wish to do the same.

## 5. Appendix A

```sas
/* Unconstrained cumulative-logit fit via PROC LOGISTIC */
data indata;
   infile 'C:\CarestreamHealth\indata.txt';
   input RC CC BC JT Y q79 q82a q82b q82d q82f;
run;

proc logistic data=indata;
   class RC CC BC JT
         q79 q82a q82b q82d q82f / param=glm;
   model Y = RC CC(RC) BC JT
             q79 q82a q82b q82d q82f;
run;
```
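
The model fitted above is a cumulative-logit ("proportional odds") model: four ordered intercepts and a common linear predictor determine the five response-category probabilities. A minimal sketch in plain Python, using as hypothetical intercepts the starting values that appear in Appendix B:

```python
import math

def category_probs(alphas, tp):
    """Five response-category probabilities under the cumulative-logit model:
    P(Y <= j) = exp(alp_j + tp) / (1 + exp(alp_j + tp)), j = 1..4."""
    cum = [math.exp(a + tp) / (1.0 + math.exp(a + tp)) for a in alphas]
    # Category probabilities are successive differences of the cumulative ones
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, 4)] + [1.0 - cum[3]]

# Hypothetical ordered intercepts and linear-predictor value
probs = category_probs([-7.0, -5.0, -2.0, -1.0], tp=1.2)
```

Because the cumulative probabilities telescope, the five category probabilities are nonnegative and sum to one whenever the intercepts are ordered, which is exactly what the LINCON constraints on alp1 through alp4 in Appendix B enforce.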

## 6. Appendix B

```sas
/* Monotone-constrained maximum likelihood fit via PROC NLP */
data indata;
   set indata;
   /* dummy-code the categorical covariates (CC level 11 is the reference) */
   array cc{23} cc1-cc23; do i=1 to 23; if CC=i then cc{i}=1; else cc{i}=0; end;
   if BC=1 then bc1=1; else bc1=0;
   array jt{9} jt1-jt9; do i=1 to 9; if JT=i then jt{i}=1; else jt{i}=0; end;
   /* dummy-code the 7-point Likert covariates (level 7 is the reference) */
   array q{6} q1-q6; do i=1 to 6; if q79=i  then q{i}=1; else q{i}=0; end;
   array a{6} a1-a6; do i=1 to 6; if q82a=i then a{i}=1; else a{i}=0; end;
   array b{6} b1-b6; do i=1 to 6; if q82b=i then b{i}=1; else b{i}=0; end;
   array d{6} d1-d6; do i=1 to 6; if q82d=i then d{i}=1; else d{i}=0; end;
   array f{6} f1-f6; do i=1 to 6; if q82f=i then f{i}=1; else f{i}=0; end;
run;

proc nlp data=indata;
   max loglik;
   parms alp1=-7, alp2=-5, alp3=-2, alp4=-1,
         bcc1-bcc10=0, bcc12-bcc23=0, bbc1=0, bj1-bj9=0,
         bq1-bq6=0, ba1-ba6=0, bb1-bb6=0, bd1-bd6=0, bf1-bf6=0;
   /* slopes for Likert level 6 must be nonnegative (level 7 slope is zero) */
   bounds 0 <= bq6, 0 <= ba6, 0 <= bb6, 0 <= bd6, 0 <= bf6;
   /* ordered intercepts */
   lincon 0 <= alp4-alp3, 0 <= alp3-alp2, 0 <= alp2-alp1;
   /* monotone (nonincreasing) slopes across Likert levels */
   lincon 0 <= bq5-bq6, 0 <= bq4-bq5, 0 <= bq3-bq4, 0 <= bq2-bq3, 0 <= bq1-bq2;
   lincon 0 <= ba5-ba6, 0 <= ba4-ba5, 0 <= ba3-ba4, 0 <= ba2-ba3, 0 <= ba1-ba2;
   lincon 0 <= bb5-bb6, 0 <= bb4-bb5, 0 <= bb3-bb4, 0 <= bb2-bb3, 0 <= bb1-bb2;
   lincon 0 <= bd5-bd6, 0 <= bd4-bd5, 0 <= bd3-bd4, 0 <= bd2-bd3, 0 <= bd1-bd2;
   lincon 0 <= bf5-bf6, 0 <= bf4-bf5, 0 <= bf3-bf4, 0 <= bf2-bf3, 0 <= bf1-bf2;
   /* linear predictor */
   tp = cc1*bcc1+cc2*bcc2+cc3*bcc3+cc4*bcc4+cc5*bcc5+cc6*bcc6+cc7*bcc7
      + cc8*bcc8+cc9*bcc9+cc10*bcc10+cc12*bcc12+cc13*bcc13+cc14*bcc14
      + cc15*bcc15+cc16*bcc16+cc17*bcc17+cc18*bcc18+cc19*bcc19+cc20*bcc20
      + cc21*bcc21+cc22*bcc22+cc23*bcc23+bc1*bbc1
      + jt1*bj1+jt2*bj2+jt3*bj3+jt4*bj4+jt5*bj5+jt6*bj6+jt7*bj7+jt8*bj8+jt9*bj9
      + q1*bq1+q2*bq2+q3*bq3+q4*bq4+q5*bq5+q6*bq6
      + a1*ba1+a2*ba2+a3*ba3+a4*ba4+a5*ba5+a6*ba6
      + b1*bb1+b2*bb2+b3*bb3+b4*bb4+b5*bb5+b6*bb6
      + d1*bd1+d2*bd2+d3*bd3+d4*bd4+d5*bd5+d6*bd6
      + f1*bf1+f2*bf2+f3*bf3+f4*bf4+f5*bf5+f6*bf6;
   /* cumulative-logit probabilities */
   pi1=exp(alp1+tp)/(1+exp(alp1+tp)); pi2=exp(alp2+tp)/(1+exp(alp2+tp));
   pi3=exp(alp3+tp)/(1+exp(alp3+tp)); pi4=exp(alp4+tp)/(1+exp(alp4+tp));
   /* per-observation log-likelihood contribution */
   if Y=1 then loglik=log(pi1);
   if Y=2 then loglik=log(pi2-pi1);
   if Y=3 then loglik=log(pi3-pi2);
   if Y=4 then loglik=log(pi4-pi3);
   if Y=5 then loglik=log(1-pi4);
run;
```
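
The same constrained fit can be prototyped outside SAS. The sketch below, assuming scipy is available, maximizes the ordinal log-likelihood of Appendix B under the same monotonicity constraints (ordered intercepts; nonincreasing, nonnegative Likert slopes) via SLSQP. For brevity it uses one 7-point Likert covariate and entirely synthetic data; the variable names (`X`, `b_hat`, etc.) are ours, not the chapter's.

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

# Synthetic analogue of the Appendix B setup: one Likert covariate
# dummy-coded with level 7 as the reference (slopes b1..b6) and a
# 5-category ordinal response. All data and parameters are hypothetical.
rng = np.random.default_rng(1)
n = 400
likert = rng.integers(1, 8, size=n)                       # responses 1..7
X = (likert[:, None] == np.arange(1, 7)[None, :]).astype(float)

true_alpha = np.array([-3.0, -1.5, 0.0, 1.5])             # ordered intercepts
true_b = np.array([2.5, 2.0, 1.5, 1.0, 0.5, 0.2])         # nonincreasing slopes

def cat_probs(alpha, tp):
    """Five category probabilities from the cumulative-logit model."""
    cum = 1.0 / (1.0 + np.exp(-(alpha[None, :] + tp[:, None])))   # P(Y<=j)
    return np.column_stack([cum[:, 0], np.diff(cum, axis=1), 1.0 - cum[:, -1]])

Y = np.array([rng.choice(5, p=p) for p in cat_probs(true_alpha, X @ true_b)])

def negloglik(theta):
    alpha, b = theta[:4], theta[4:]
    p = cat_probs(alpha, X @ b)[np.arange(n), Y]
    return -np.sum(np.log(p + 1e-12))

# Linear constraints: alp1<=alp2<=alp3<=alp4 and b1>=b2>=...>=b6>=0,
# mirroring the LINCON and BOUNDS statements in Appendix B.
A = np.zeros((9, 10))
for k in range(3):
    A[k, k], A[k, k + 1] = -1.0, 1.0            # alpha_{k+2} - alpha_{k+1} >= 0
for k in range(5):
    A[3 + k, 4 + k], A[3 + k, 5 + k] = 1.0, -1.0  # b_{k+1} - b_{k+2} >= 0
A[8, 9] = 1.0                                     # b6 >= 0
cons = LinearConstraint(A, lb=np.zeros(9))

x0 = np.concatenate([[-2.0, -1.0, 0.0, 1.0], np.zeros(6)])
fit = minimize(negloglik, x0, method='SLSQP', constraints=[cons],
               options={'maxiter': 300})
b_hat = fit.x[4:]                                 # constrained slope estimates
```

As with the PROC NLP formulation, the constraints guarantee that the estimated slopes decrease (weakly) as the Likert response moves toward its reference level, so predicted advocacy can never worsen as satisfaction improves.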

## References

1 - Allen, D. R. & Rao, T. R. N. (2000). Analysis of Customer Satisfaction Data, ASQ Quality Press, ISBN 978-0873894531, Milwaukee.
2 - Dixon, M., Freeman, K. & Toman, N. (2010). Stop Trying to Delight Your Customers, Harvard Business Review, July-August 2010, http://hbr.org/magazine.
3 - Hayes, B. E. (2008). Measuring Customer Satisfaction and Loyalty, 3rd edition, ASQ Quality Press, ISBN 978-0873897433, Milwaukee.
4 - Keiningham, T., Cooil, B., Andreassen, T. & Aksoy, L. (2007). A Longitudinal Examination of Net Promoter and Firm Revenue Growth, Journal of Marketing, Vol. 71, No. 3 (July 2007), pp. 39-51, ISSN 0022-2429.
5 - Reichheld, F. (2003). The One Number You Need to Grow, Harvard Business Review, Vol. 81, No. 12 (December 2003), pp. 46-54.
6 - Vavra, T. G. (1997). Improving Your Measurement of Customer Satisfaction: Your Guide to Creating, Conducting, Analyzing and Reporting Customer Satisfaction Measurement Programs, ASQ Quality Press, ISBN 978-0873894050, Milwaukee.