Open access peer-reviewed chapter

A Service Management Metric with Origin in Plant Management

Written By

Robert G. Batson

Submitted: 21 May 2020 Reviewed: 09 June 2020 Published: 07 July 2020

DOI: 10.5772/intechopen.93139

From the Edited Volume

Concepts, Applications and Emerging Opportunities in Industrial Engineering

Edited by Gary Moynihan


Abstract

The discipline of industrial engineering (IE) originated in the US with Frederick W. Taylor, who first applied what he termed scientific management to machine shop management in the 1880s. IE expanded world-wide with applications of work measurement to all manner of manufacturing, then to services. A century later, as the Japanese practice of Total Productive Maintenance (TPM) became known in the US, the associated equipment management metric Overall Equipment Effectiveness (OEE) became well-known as a production metric that could be applied to individual manufacturing machines, production lines, and the overall production system. In this chapter, illustrations of the calculation of OEE are provided, along with modification of the three OEE inputs (availability, performance efficiency, and quality rate) to create a new service management metric, Overall Service Effectiveness (OSE). Definitions and measures of service quality are reviewed. The first published application of OSE was to a city public transportation system, and is reviewed as a prototype. Essentially, applications of OSE require the industrial engineer to define service-specific measures of availability, processing rate, and quality at the management's level of interest: work station, process, or system. The data collected to calculate OSE will also point toward actions that would improve OSE.

Keywords

  • service systems
  • service management
  • performance measurement
  • overall equipment effectiveness (OEE)
  • overall service effectiveness (OSE)

1. Introduction

The discipline we now call industrial engineering (IE) originated in the US with the practices of Frederick W. Taylor at the Midvale Steel Company in the 1880s as he progressed from machinist, to time clerk, to machine shop foreman, ultimately becoming chief engineer upon receiving a mechanical engineering degree in 1883. His participation in the American Society of Mechanical Engineers (ASME) provided him with the opportunity to present his shop management practices which were referred to as “work measurement” when applied to a specific work task (manual labor such as shoveling; skilled labor such as lathe operation). Broader applications to groups of workers in a plant or service organization (educational organizations, government agencies, the ASME) became known world-wide as Scientific Management [1], especially after Taylor testified before the US Interstate Commerce Commission in 1911.

Henry L. Gantt was recruited to perform work measurement at Midvale Steel under Taylor's guidance, and served as a consultant on speed and feed problems in metal cutting at Bethlehem Steel. Gantt modified one of Taylor's published practices (the piece-rate system) to account for productivity factors outside the worker's control. Gantt became an independent consultant and ultimately lectured on IE at four US universities. Another early practitioner of IE was Morris L. Cooke, whom Taylor funded to work on the efficiency and effectiveness of the ASME, the Carnegie Foundation, and the municipal government of Philadelphia. Frank B. Gilbreth originated the practice of work measurement in the construction trades, though he never attended college. His approach came to be known as time and motion study, which he first applied to bricklaying (a trade he learned as an apprentice). He insisted on division of labor between the brick mason (skilled labor) and the unskilled workers who "set up" the mason with bricks and fresh mortar; the specific location of the bricks and mortar relative to the mason, and even the consistency of the mortar, could be planned to make the mason as productive as possible. Furthermore, with appropriate design of the motions the mason should use, he demonstrated that the mason could increase the number of bricks laid in a given time by a factor of three. At age 27, Gilbreth founded (1895) a highly successful construction firm wherein all work was designed using time and motion study, but gave it up at age 44 to become a full-time management consultant. Frank's wife, Lillian M. Gilbreth, was a PhD psychologist who assisted Frank in the preparation of six books between 1908 and 1917 to disseminate what he had learned about the broad topic of performance measurement, starting with the worker and broadening to the work processes and the overall work system.

As Japan began to recover from the destruction of its industrial base during WWII, and to transition from an essentially agrarian society to an economic powerhouse, its industrial/production engineers originated many practices now considered part of modern industrial engineering. Starting in the 1970s and intensifying in the 1980s, there was significant debate in the US and other advanced economies in the West concerning what was enabling the Japanese to capture larger and larger market share in technological products such as automobiles, televisions, and copy machines. A US IE professor, Richard J. Schonberger, spent a significant amount of time in Japan and authored several books [2, 3, 4] detailing his interviews and observations from visiting top-performing Japanese manufacturing firms. In Japanese Manufacturing Techniques [2], he revealed the following nine "hidden lessons in simplicity":

  1. Fewer suppliers

  2. Reduced part counts

  3. Focused factories (focus on a narrow line of products)

  4. Scheduling to a rate, instead of scheduling by lots

  5. Fewer racks on the plant floor

  6. More frequent deliveries (in-plant moves, as well as deliveries from suppliers)

  7. Smaller plants

  8. Shorter distances, less reporting, fewer inspectors, less buffer stock

  9. Fewer job classifications.

In World Class Manufacturing [3], Schonberger claimed production management in the US had become overly focused on "managing by the numbers," by which he meant measuring plant performance at too high a level (revenue, fixed and variable costs, profit) to really uncover hidden efficiency and quality effects; in contrast, WCM mandates simplification and direct action: do it, judge it, measure it, diagnose it, fix it, manage it, on the factory floor. He provided a variety of examples with diagrams/photographs in [4], World Class Manufacturing Casebook. In [3] Schonberger observed that in 1980, the first US WCM thrusts followed two parallel paths: a quality path with a goal of zero defects, known in Japan as Total Quality Control (TQC) and in the US as Total Quality Management (TQM); and a just-in-time (JIT) productivity path, as a means of coping with high-variety, small-lot, short lead-time production. JIT aims to have every operation make the needed items at the right time, in the right quantities, at low cost. JIT pursues a goal of one-piece flow, small-lot production, with minimal inventory throughout the system. A third WCM practice, which supports both TQC and JIT, is known as Total Productive Maintenance (TPM) and will be described below when we introduce the metric Overall Equipment Effectiveness (OEE). The three practices are synergistic; for instance, JIT will fail if incoming part quality is not (nearly) perfect and processing equipment does not have OEE near 100%.

Most US industrial engineers first learned details of TPM through the 1988 English version of Nakajima's TPM: Introduction to Total Productive Maintenance [5]. According to Nakajima, "TPM is an innovative approach to maintenance that optimizes equipment effectiveness, eliminates breakdowns, and promotes autonomous operator maintenance through day-to-day activities involving the total workforce." Seven years later, two North American practitioners, Charles Robinson and Andrew Ginder, produced the well-known Implementing TPM: The North American Experience [6] and therein defined TPM as "a plant improvement methodology which enables continuous and rapid improvement of the manufacturing process through the use of employee involvement, employee empowerment, and closed-loop measurement of results." In the decade following Robinson and Ginder [6], publications on TPM practices and cases [7] and "Lean TPM" [8] appeared in the US and Great Britain. In the recent past, Ortiz contributed The TPM Playbook: A Step-by-Step Guideline for the Lean Practitioner [9] and Peng, in response to the digital revolution in production, published Equipment Maintenance in the Post-Maintenance Era: A New Alternative to Total Productive Maintenance (TPM) [10]. All of these references include a section or entire chapter explaining the measurement of Overall Equipment Effectiveness; furthermore, all suggest an engineer or improvement team could focus on OEE for a single machine (workstation), OEE for a production line or process, and OEE for the overall plant. Comparisons across shifts, days, months, etc. would detect improving or deteriorating performance. Furthermore, comparisons across similar machines, processes, or plants could be very useful to department, process, or plant managers.

OEE for a given machine, line, or plant is the product of availability, performance efficiency (actual processing rate as a ratio to the design "ideal"), and quality rate (proportion of good products produced, i.e., the yield). Because each of these inputs is measured as a percentage, the closer OEE is to 100%, the better; world-class OEE is considered 85% or higher (some authors say 90% or higher). OEE is formally calculated using the following expressions, each expressed as a percentage:

Availability = (Loading Time − Downtime) / Loading Time  (E1)

Performance Efficiency = (Theoretical Cycle Time × Processed Amount) / Operating Time  (E2)

Quality Rate = (Processed Amount − Defect Amount) / Processed Amount  (E3)

OEE = Availability × Performance Efficiency × Quality Rate  (E4)

The calculation of each of these quantities is illustrated by example in the references by Nakajima [5], and Robinson and Ginder [6]. The example in Nakajima further illustrates why the three input quantities to OEE are seldom calculated to be 100%. Essentially, the use of OEE in TPM uncovers the “Six Big Losses” which become the focus of improvement efforts by an individual engineer, or a team. The Six Big Losses are grouped as follows:
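As a concrete illustration, the calculation in E1–E4 can be sketched in a few lines of code; the shift length, cycle time, and unit counts below are invented for the example, not taken from the cited references.

```python
def oee(loading_time, downtime, theoretical_cycle_time,
        processed_amount, defect_amount):
    """Compute OEE from the E1-E4 inputs (times in minutes, amounts in units)."""
    operating_time = loading_time - downtime
    availability = operating_time / loading_time                          # E1
    performance = (theoretical_cycle_time * processed_amount) / operating_time  # E2
    quality_rate = (processed_amount - defect_amount) / processed_amount  # E3
    return availability * performance * quality_rate                      # E4

# Hypothetical shift: 460 min loaded, 60 min down, ideal cycle 0.8 min/unit,
# 400 units processed, 8 defective.
print(round(oee(460, 60, 0.8, 400, 8), 4))  # prints 0.6817
```

Note how the multiplication compounds even modest losses: inputs of roughly 87%, 80%, and 98% yield an OEE of only about 68%, well short of the 85% world-class threshold.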

Losses that determine equipment availability

  1. Equipment failure losses (requiring corrective maintenance)

  2. Set-up and adjustment losses

Losses that determine performance efficiency

  3. Idling and minor stoppages (e.g., clearing a jammed workpiece, or stopping for a visitor)

  4. Reduced speed (e.g., running slower to avoid overheating, or avoid early job completion)

Losses that determine rate of quality products

  5. Defects and rework

  6. Reduced yield due to start-up losses (either due to nature of process, or company policy)

Some examples of how to improve OEE above status quo, based on the six big losses as numbered above, would be:

  1. Study equipment failure and repair records. Use the Pareto Principle to identify which machines are causing downtime, in rank order: then, for the least available machine, identify the machine elements that are the causes of downtime, in rank order. Focus improvement efforts on the most problematic machines, and once identified, the most problematic machine elements.
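The Pareto ranking described in step 1 can be sketched as follows; the machine names and downtime hours are hypothetical, chosen only to show the rank-ordering and cumulative-percentage computation.

```python
# Hypothetical downtime log: hours lost per machine over one month.
downtime_hours = {"lathe-3": 42, "press-1": 7, "mill-2": 19, "grinder-4": 3}

# Rank machines worst-first (Pareto order) and report each machine's
# cumulative share of total downtime.
ranked = sorted(downtime_hours.items(), key=lambda kv: kv[1], reverse=True)
total = sum(downtime_hours.values())
cumulative = 0
for machine, hours in ranked:
    cumulative += hours
    print(f"{machine}: {hours} h ({100 * cumulative / total:.0f}% cumulative)")
```

The same two-level analysis would then be repeated within the worst machine, ranking its elements (spindle, feed drive, coolant pump, etc.) by the downtime each causes.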

  2. To reduce set-up time, there is a Japanese practice originally known as Single-Minute Exchange of Die (SMED), meaning work to reduce change-over times to less than 10 minutes (single digits) with a goal of single-minute change-overs; in the US, this practice is called Quick Changeover Technology and, for example, has been observed in casting machines in pipe shops and welding machines for tubular steel products. This practice is essentially a specialized time and motion study, again carried out by a single engineer or improvement team, and the savings from avoided set-up losses can be substantial.

  3. Periods of idling and instances of minor stoppages should be recorded (total time lost, situation, and/or causes), so that the most frequent causes can be identified and eliminated.

  4. Reduced speed losses—there may be good reasons for running equipment at less than ideal processing rate, such as to avoid overstressing the equipment or safety concerns for the operator or other workers in the vicinity. In instances where work crews intentionally slow down, management needs to re-plan schedules so every worker can get in a full shift, having come to work intending to be paid for a full shift.

  5. Defects and rework must be recorded and carefully examined to determine root causes, and then immediate corrective actions taken (standards modified or adjusted) to hopefully prevent the same problems in the future. Follow-up is critical by the engineer, manager, or team to verify the action installed is working and has become the standard.

  6. Start-up losses may be unavoidable with the materials, machine, and set-up required; or, they may indicate a company policy that is outdated (current machine can now produce acceptable quality with first unit produced, or could with appropriate attention from engineering, maintenance, and production).

For more details on data collection and the calculation/application of OEE, see three references focused specifically on OEE: Muchiri and Pintelon's 2008 literature review and practical application discussion [11]; Hansen's Overall Equipment Effectiveness [12]; and Stamatis's The OEE Primer: Understanding Overall Equipment Effectiveness, Reliability, and Maintainability [13]. As with the world-wide spread of Scientific Management, the use of OEE by industrial engineers has spread to practitioners in Europe (e.g., an application in France to large-scale production of ductile iron pipe [14]) and in Asia (e.g., applications to sugar mills in India [15, 16]).


2. Service management

A simple definition of service is "work performed for someone else." In other words, services are all those economic activities in which the primary output is neither a good nor a construction, so services are for the most part intangible. Of course, services may occur internal to a company, educational institution, medical facility, or government agency, or in service encounters between individuals, or in exchanges between larger-scale entities, such as two companies, or perhaps a university and a government agency. In general, services cannot be inventoried. When considering services in the context of the overall economy, a big distinction is that product-oriented sectors of an economy always produce a tangible product, whereas a service may or may not terminate in a tangible product. Also, services are rendered on demand (either instant demand or scheduled demand), often with the customer present and involved in the service. So, the reliability characteristic "ready on demand" is critical to high-quality service. Once service begins, uninterrupted service may be another customer expectation. Variability of service in response to specific customer requests may be intentional: information gets transformed into customized action in an attempt to satisfy the requests. It may also be unclear which party "owns" the service. In contrast, a manufactured product is tangible, produced and consumed at different locations at different times, expected to be of consistent quality, to be inventoried with like items, and to have clear ownership as it changes hands.

The Service Sector of the US economy in 2019 accounted for 79% of US employment. The percentages of service workers in other advanced economies are approximately 75% in Great Britain, 65% in France, and 60% in Germany and Japan. Service is considered the Tertiary Sector, following the Primary Sector "Extractive" industries (Fishing, Agriculture, Mining, Oil and Gas, etc.) and the Secondary Sector "Transformative" industries (Manufacturing and Construction). The service sector is highly diverse; see the following groupings used by the US Dept. of Labor:

  • Trade, Transportation, and Utilities

    • Wholesale Trade (NAICS 42)

    • Retail Trade (NAICS 44-45)

    • Transportation and Warehousing (NAICS 48-49)

    • Utilities (NAICS 22)

  • Information (NAICS 51)

  • Financial Activities

    • Finance and Insurance (NAICS 52)

    • Real Estate and Rental and Leasing (NAICS 53)

  • Professional and Business Services

    • Professional Scientific and Technical Services (NAICS 54)

    • Management of Companies and Enterprises (NAICS 55)

    • Administrative and Support; Waste Management and Remediation (NAICS 56)

  • Education and Health Services

    • Educational Services (NAICS 61)

    • Health Care and Social Assistance (NAICS 62)

  • Leisure and Hospitality

    • Arts, Entertainment, and Recreation (NAICS 71)

    • Accommodations and Food Services (NAICS 72)

  • Other Services (except Public Administration) (NAICS 81)

  • Government

Characteristics of services are:

  • The product is intangible, for the most part

  • Usually performed in real time, with the customer present and often participating

  • Seldom inventoried, so must be delivered on the customer’s schedule

  • Something of value is provided (like manufacturing) but in a more immediate, personalized manner.

Quality of a product (service, good, or software) has been defined by Juran [17] as "fitness for use, in the intended environment" and by Deming [18] as "meets or exceeds customer expectations." Therefore, a broad definition of service quality might be "fitness for use as determined by those features of the service that the customer considers to be beneficial." As explained in Jain and Gupta [19], "services require a distinct framework for quality explication and measurement," and measuring service quality "involves evaluation of the outcome (i.e., what the customer actually receives from the service) and the process of service act (i.e., the manner in which the service is delivered)." Another line of research at that time concerned measurement of service expectations, pre- and post-consumption [20]. Parasuraman et al. [21] created the SERVQUAL scales and questionnaire organized around these five measures of service quality:

  • Reliability—the ability to perform the desired service dependably and accurately

  • Assurance—the knowledge and courtesy of employees and their ability to convey trust and confidence

  • Tangibles—the appearance of physical facilities, equipment, personnel, and communication materials

  • Empathy—the provision of caring, individualized attention to the customer

  • Responsiveness—the willingness to help customers and provide prompt service.

In a later publication [22], these same three authors presented a list of nine dimensions classifying how customers perceive service quality:

  • Tangibles—physical appearance

  • Reliability—performed as promised, consistently

  • Responsiveness

  • Competence

  • Credibility

  • Security/Safety

  • Access—easy to do business with

  • Communication—keeping customer informed

  • Understanding customer needs.

Pitt et al. [23] concluded that SERVQUAL is an appropriate instrument for researchers seeking a measure of information system service quality.

We have collected together six widely-known customer reactions to service quality:

  • Poor or inattentive service costs companies about 10% of volume annually (until corrected)

  • 96% of unhappy customers never complain, but 90% never return. Each unhappy customer tells at least nine others.

  • Each happy customer tells five others, who may become future customers.

  • The best opportunity to increase sales and market share is through your present customer base

  • Customer perception of quality of service depends heavily on employee’s job satisfaction (dis-satisfaction)

  • Service personnel are critically dependent on systems (often computer-based) to deliver quality to the customer.

Let us consider several examples of the last point, which is very important for the industrial engineer:

  • A college instructor depends on:

    • The student registration system to provide accurate class rolls and a means to report final grades to the registrar

    • The classroom assignment system to provide a lecture hall to match the enrollment

    • The textbook ordering system to order, receive, and distribute the correct class materials in the correct quantities, on time

    • The classroom audio-visual system and computer software/internet access provided in the lecture hall.

  • A medical doctor seeing patients in his/her clinic depends on:

    • The patient appointment scheduling system

    • The measurement of patient vital signs by nurses as the visit begins, with computer access

    • Blood testing machines and/or radiological scans done prior to the visit, with computer access

    • Computer access to records of previous ailments and treatments, surgeries, vaccinations, etc.

    • Equipment he/she may use during the patient encounter.

  • A service representative at a cable television/internet provider depends on:

    • An information system showing the customer’s current service details, including start-up and service end dates

    • An information system showing additional or alternative services available to the customer based on location, with costs and time frame for change in service

    • A billing system showing balances in accounts, due dates, penalties for late payments, etc.

  • A bank teller serving customers who walk into the branch and queue for service depends on:

    • An information system showing a customer’s accounts, safe deposit boxes, progress on any money electronically moved from or to customer’s accounts

    • An information system tracking cash transactions (deposits, disbursements, exchanges, etc.) completed by the teller and what the status of their cash drawer should be

    • In a large bank, a formal schedule of teller work assignments and for each, their schedule of breaks and lunch, and an out-of-office schedule for the current day and perhaps for the weeks or months ahead.


3. Overall service effectiveness

Overall Service Effectiveness (OSE) was first described in Berhan [24], and is the focus of the remainder of this chapter. The OSE metric for services extends the OEE production metric developed along with TPM as described earlier in the chapter. For the reader’s convenience, we shall demonstrate how OSE is a simple rewrite of the formulas for OEE and its three input components in a manner that fits service transactions an industrial engineer might be challenged to design or improve, using OSE as a guide.

In the equations below, the term "units" could be units of a manufactured good or quantities of a service completed. Examples of the latter might be: queries to an information system; patients seen by a doctor or dentist; customer transactions at a bank, in person or electronic; riders transported by a bus or aircraft, or by the bus line or airline. Note these are all situations which an industrial engineer might encounter, and traditional IE tools such as queuing theory or system simulation might be in use. Agreeing with the OSE equations of Berhan [24], we shall use:

Availability = (Total Time Available − Downtime) / Total Time Available  (E5)

Performance = Number of Units Produced / Ideal Number of Units Produced  (E6)

Quality = (Number of Units Produced − Number of Defects) / Number of Units Produced  (E7)

OSE = Availability × Performance × Quality  (E8)

An example for an urban transportation system described in Berhan [24] adapted the equations for the three inputs above to the specifics of the service operation as follows:

Availability = (Total Transport Time Available − Lost Transport Time) / Total Transport Time Available  (E9)

Performance = Number of Passengers Transported / Target Number of Passengers Transportable  (E10)

Quality = (Number of Passengers Transported − Number of Dissatisfied Passengers) / Number of Passengers Transported  (E11)

Note that the bus service, like many encountered in modern society, is a "knowledge-embedded service" [25], one in which the customer value is embedded in the system that provides the service, so human-machine system reliability is a key component of the availability input to OSE for such services. Here, the driver's knowledge of the route and of how to operate the bus matters as much as the bus's mechanical reliability.

Using real data from the public transport (bus) system in Addis Ababa, Ethiopia, Berhan first computed planned downtime (lunch breaks and shift changes), downtime and speed losses, performance efficiency losses, and finally quality and yield losses; he then computed:

Availability = 78.85%  (E12)

Performance = 74.89%  (E13)

Quality = 70.83%  (E14)

which yielded a system-wide effectiveness measure of

OSE = 78.85% × 74.89% × 70.83% = 41.83%  (E15)

showing this service system needs significant improvement in order to be rated “world-class”.
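The arithmetic in E15 is easy to reproduce from the published inputs; only Berhan's three percentages (E12–E14) are used here.

```python
# Berhan's published inputs for the Addis Ababa bus system (E12-E14).
availability, performance, quality = 0.7885, 0.7489, 0.7083

# E8 applied to those inputs.
ose = availability * performance * quality
print(f"OSE = {ose:.2%}")  # prints OSE = 41.83%
```

The same three-line computation applies unchanged at the bus, route, or system level once the three inputs are measured at that level.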

Just like in manufacturing, the OSE could also have been calculated for each bus individually, or for groups of buses that act together to cover a given route or sector within the city. Hence, OSE would be useful for service performance improvement at the bus, route, or (as demonstrated) system level.


4. Conclusions

This chapter provided background on the application of work measurement to services, starting with Taylor and his associates, and tracing the evolution from the plant performance metric Overall Equipment Effectiveness to an innovative service performance metric Overall Service Effectiveness (OSE). As illustrated in the analysis of an existing city bus system, the details used to compute the OSE inputs (availability, performance efficiency, quality rate) point toward actions that would improve OSE toward 100%. When designing a new service system (e.g., bus line, bank layout, fast food restaurant) the OSE metric can be used along with other industrial engineering tools (e.g., classic queuing formulas, systems simulation, engineering economy) to arrive at the most cost-effective layout, equipment/software, and staffing to handle forecast service demands.

References

  1. Taylor FW. Principles of Scientific Management. New York: Harper & Brothers; 1911
  2. Schonberger RJ. Japanese Manufacturing Techniques: Nine Hidden Lessons in Simplicity. New York: The Free Press; 1982
  3. Schonberger RJ. World Class Manufacturing. New York: The Free Press; 1986
  4. Schonberger RJ. World Class Manufacturing Casebook. New York: The Free Press; 1987
  5. Nakajima S. TPM: Introduction to Total Productive Maintenance. Cambridge, MA: Productivity Press; 1988
  6. Robinson CJ, Ginder AP. Implementing TPM: The North American Experience. Cambridge, MA: Productivity Press; 1995
  7. Bernstein RF, editor. TPM Collected Practices and Cases. Cambridge, MA: Productivity Press; 2005
  8. Rich N, McCarthy D. Lean TPM: A Blueprint for Change. 2nd ed. Waltham, MA: Butterworth-Heinemann; 2015
  9. Ortiz CA. The TPM Playbook: A Step-by-Step Guideline for the Lean Practitioner. New York: Productivity Press; 2016
  10. Peng K. Equipment Maintenance in the Post-Maintenance Era: A New Alternative to Total Productive Maintenance (TPM). Boca Raton, FL: CRC Press; 2018
  11. Muchiri P, Pintelon L. Performance measurement using overall equipment effectiveness (OEE): Literature review and practical application discussion. International Journal of Production Research. 2008;46(13):3517-3535
  12. Hansen RC. Overall Equipment Effectiveness. New York: Industrial Press, Inc.; 2011
  13. Stamatis DH. The OEE Primer: Understanding Overall Equipment Effectiveness, Reliability, and Maintainability. Boca Raton, FL: CRC Press; 2010
  14. Charaf K, Ding H. Is overall equipment effectiveness universally applicable? The case of Saint-Gobain. International Journal of Economics and Finance. 2015;7(2):241-252
  15. Singh K, Singh S. Eliminating obstacles in TPM implementation in sugar mill industry. International Journal of Advance Research and Innovation. 2018;6(4):332-334
  16. Singh S, Singh K, Mahajan V, Singh G. Justification of overall equipment effectiveness (OEE) in Indian sugar mill industry for attaining core excellence. International Journal of Advance Research and Innovation. 2020;8(1):34-36
  17. Juran JM. Juran on Leadership for Quality. New York: The Free Press; 1989
  18. Deming WE. Out of the Crisis. Cambridge, MA: The MIT Press; 1982
  19. Jain SK, Gupta G. Measuring service quality: SERVQUAL vs. SERVPERF scales. Vikalpa. 2004;29(2):25-38
  20. Polonsky M, Higgs B, Hollick M. Measuring expectations: Pre and post consumption: Does it matter? Journal of Retailing and Consumer Services. 2005;12(1):49-64
  21. Parasuraman A, Zeithaml V, Berry LL. SERVQUAL: A multiple-item scale for measuring customer perceptions of service quality. Journal of Retailing. 1988;64(1):12-40
  22. Zeithaml V, Parasuraman A, Berry LL. Delivering Service Quality: Balancing Customer Perceptions and Expectations. New York: The Free Press; 1990
  23. Pitt LF, Watson RT, Kavan CB. Service quality: A measure of information system effectiveness. MIS Quarterly. 1995;19(2):173-187
  24. Berhan E. Overall service effectiveness on urban public transport system in the City of Addis Ababa. British Journal of Applied Science and Technology. 2016;12(5):1-9
  25. Krause DR, Scannell TV. Supplier development practices: Product- and service-based industry comparisons. The Journal of Supply Chain Management. 2002;38(2):13-21
