
Week 5
After reading Chapter 14 from the attached textbook, answer the question below. The answer should be in your own words; do not copy from any other sources. The definitions of the Pareto chart and histogram should be based on the attached textbook and journal article.
Please answer all parts of the question thoroughly. The assignment should be in APA format, with strictly no plagiarism.

Briefly describe a Pareto chart and a histogram. How are these charts similar? How are they different? Describe a specific situation where one chart would be the better choice for quality management and control than the other. Explain why.
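While writing the comparison, it may help to see the mechanical difference between the two charts sketched in a few lines of Python. The defect categories and measurement values below are invented for illustration; what the sketch shows is the standard distinction: a Pareto chart orders categorical frequencies from largest to smallest and tracks a cumulative percentage, while a histogram bins continuous measurements in value order.

```python
from collections import Counter

# Hypothetical defect log: one entry per observed defect.
defects = (["scratch"] * 42 + ["dent"] * 27 + ["misalignment"] * 18 +
           ["discoloration"] * 8 + ["other"] * 5)

def pareto_table(observations):
    """Categories sorted by frequency (descending), with cumulative percentages."""
    counts = Counter(observations)
    total = sum(counts.values())
    table, running = [], 0
    for category, n in counts.most_common():
        running += n
        table.append((category, n, round(100 * running / total, 1)))
    return table

def histogram_bins(values, bin_width):
    """Fixed-width bins ordered along the measurement axis, not by frequency."""
    bins = Counter((v // bin_width) * bin_width for v in values)
    return sorted(bins.items())

# Pareto: the first few rows reveal the 'vital few' defect categories.
for row in pareto_table(defects):
    print(row)

# Histogram: shaft diameters in hundredths of a millimetre (hypothetical data).
widths = [98, 101, 102, 99, 100, 104, 97, 101, 103, 99]
print(histogram_bins(widths, 2))
```

The same structures feed the bar heights when the charts are actually drawn; the contrast to note is that the Pareto table is sorted by count while the histogram bins are sorted by measured value.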


Int. J. Prod. Res., 1999, Vol. 37, No. 6, 1403-1426

An empirical examination of quality tool deployment patterns and their impact on performance

R. HANDFIELD*, J. JAYARAM and S. GHOSH

Although research suggests that quality management initiatives often fail to meet managers' expectations, few studies consider that an inappropriate choice of quality tools may adversely affect the results. This paper analyses the pattern of quality tool deployment and its impact on performance using a sample of 313 North American and European firms. The analysis reveals that four primary types of quality tool applications occur: Human Resource (HR) tools, Design tools, Discipline tools and Measurement tools. Several significant relationships between these dimensions and quality performance were found, suggesting that successful tool deployment often depends on competitive conditions and internal strategies.

1. Introduction

A recent academic study found that very few quality initiatives have consistent and universal effects on firm performance (American Quality Foundation and Ernst & Young 1992). Several reasons have been offered by researchers to explain this paradox. One group of researchers suggests that indices of firm performance such as market share, return on assets, etc., are difficult to measure, especially in diversified organizations (Pennings 1984, Kaplan and Norton 1992). Consequently, multi-industry empirical studies often reveal that even 'successful' quality initiatives have little or no impact on performance, due to psychometric properties associated with measuring the dependent variable of firm performance. Another line of thought is that firms may simply have a business strategy which is misaligned with their current competitive conditions. Wruck and Jensen (1994) document the case of Sterling Chemical, which had a successful TQM program in place, yet experienced a decline in firm performance largely because of adverse industry and market factors. Still another possibility is that firms may emphasise a quality strategy that is poorly matched with their competitive environment. The final possibility is that an otherwise appropriate quality strategy may be deployed with an inappropriate set of quality tools and technologies.

The problem of finding empirical support for the relationship between quality deployment and performance is a major driver of this research. This study examines

0020-7543/99 $12.00 © 1999 Taylor & Francis Ltd.

Revision received May 1998.
Department of Marketing and Supply Chain Management, Eli Broad Graduate School of Management, Michigan State University, N370 North Business Complex, East Lansing, MI 48824, USA. Tel: +1 (517) 353-6381; Fax: +1 (517) 432-1112; e-mail: handfield@pilot.msu.edu.
Charles H. Lundquist College of Business, 1208 University of Oregon, Eugene, OR 97403-1208, USA.
Department of Management, Georgia Institute of Technology, 225 North Avenue NW, Atlanta, GA 30084, USA. Tel.: +1 (404) 894-4927.

* To whom correspondence should be addressed.

the impact of quality tool selection/deployment on performance by first grouping a number of tools used in a number of industries into classes (using factor analysis). We then investigate the relationship between these quality tool groups and a set of performance indicators (quality strategy, quality performance and business performance). These relationships are tested using data obtained through field interviews in 14 firms and from a large-scale survey of 313 US firms in several industries. The study concludes with the strategic implications for quality and technology managers faced with the task of allocating resources, developing quality objectives, and evaluating the benefits that result from the deployment of quality tools and technologies.

2. Literature review

Three major themes can be discerned from the literature on quality tools. First, there have been many attempts to group quality tools into comprehensive categories. In most cases these categorisations are based on inductive case-based descriptions by practitioners and consultants (Brocka and Brocka 1992, Greene 1993). Second, the literature emphasizes that the types of tool that firms employ have evolved over time (Lee and Ebrahimpour 1985, Lascelles and Dale 1988, Wilkinson et al. 1995, Mann and Kehoe 1994). Firms are not using the same tools as they did a decade ago. Third, there has been an increasing emphasis on empirical studies geared towards understanding the types of quality tools used by companies and the relative importance of the different tools (Modaress and Ansari 1989, Larson and Sinha 1995). Nevertheless, these studies do not yet reveal any significant results regarding the types of tools to be used in different business situations.

Brocka and Brocka (1992) proposed a spiral model which consists of the four key principles of vision, empowerment, continuous evaluation, and customer orientation as a basis for understanding the role of quality tools in TQM implementation. Examples of tools grouped into the eight categories are as follows:

graphical tools (cause-effect diagrams, quality function deployment)
company-wide techniques (benchmarking, quality circles, quality function deployment)
data analysis tools (control charts, design of experiments)
problem identification tools (cause-effect diagrams, control charts)
decision-making tools (auditing, nominal group technique)
modeling tools (workflow analysis, quality function deployment)
preventive tools (control charts, foolproofing)
creativity tools (quality circles, brainstorming).
In examining these categories, it is clear that some tools can be grouped in more than one category. For example, quality function deployment has been categorized as a graphical tool, a company-wide technique, a problem identification tool, and a modeling tool.

In a somewhat more comprehensive typology, Greene (1993) described 98 quality tools and classified them into the following groups: Group dynamics tools; Statistical tools; Management tools; Implementation tools; Process tools; Knowledge tools; Advanced statistics tools; Systems tools; Management-by-events tools; Customer understanding tools; Commitment tools; Innovation tools; Software tools; and Social connectionism tools. Although comprehensive in scope, a large number of the quality tools in this typology are not widely used by companies.

1404 R. Handfield et al.

Quality tools can also be classified using an evolutionary approach. Traditionally, the use of quality programs and tools has emphasized reactive stances. For example, tools such as Statistical Process Control and Reliability Engineering were employed with the objective of detecting defects. In later years, tools such as zero defects programs and Total Quality Control (TQC) were sought to prevent defects and assure quality. Garvin (1987) was the first to point out that a strategic quality initiative requires a deliberate and system-wide use of quality tools and techniques that enable firms to differentiate their products from those of their competitors.

Empirical studies addressing the use of quality tools can be grouped into three categories: comparisons of practices across countries; comparisons of practices within a country; and comparisons of practices across functions in firms. American firms tend to use statistical tools such as control charts, acceptance sampling and histograms (Lee and Ebrahimpour 1985). However, Japanese firms emphasized tools such as Pareto charts and checksheets. In a later study, American firms that adopted Japanese techniques were found to employ tools such as quality planning, worker participation, and teamwork techniques (Ebrahimpour and Lee 1988).

A similar change in the emphasis on quality tools over time can be found in studies that examined practices within countries. In the UK, firms traditionally emphasized inspection-oriented techniques and paid little attention to cross-functional techniques such as quality circles and value analysis (Lascelles and Dale 1988). Over time, this pattern changed to an increasing emphasis on customer-driven techniques such as customer satisfaction surveys and quality improvement projects (Wilkinson et al. 1995, Mann and Kehoe 1994). In Canadian firms, the current usage of tools such as TQM, benchmarking and SPC appears to be motivated by strategic concerns (Larson and Sinha 1995).

Some studies have pointed to differences in practices across functions within a company. Modaress and Ansari (1989) found that the manufacturing function tended to use post facto tools such as inspection, SPC and process capability. However, the design and engineering function used proactive tools such as design of experiments (DOE) and Pareto charts.

To summarize, there has been a global interest in the types of quality tools and techniques employed by firms to achieve a number of competitive objectives. The relative use of quality tools varies significantly across countries, across industries and within firms. To date, no attempt has been made to link the latent pattern of quality tool usage to specific quality or business performance objectives. For instance, firms often have a broad set of objectives underlying their quality management strategy (e.g. improving design manufacturability versus process improvement versus inspection for visual defects). There is therefore a need to examine the relationship between various quality strategies and the resulting pattern of quality tool deployment. An analysis of the relationships among quality tools, quality performance and business performance is also required to understand when certain tools can be used successfully. Finally, the possibility of industry-specific effects in the relationships among quality tools, quality strategies, quality performance and business performance needs to be explored. This paper seeks to address these gaps in the quality management literature.

Quality tool deployment patterns 1405

3. Quality strategies and tools

3.1. Patterns of quality tool deployment
As can be seen from the discussion in section 2, the concepts associated with the different quality strategies are well known. However, the patterns of quality tool deployment are often unclear to quality managers. This was made clear in an interview with a quality manager at a large Fortune 500 computer manufacturer who made the following observation: 'We have had successes and failures in deploying a variety of quality tools. Sometimes they work, and sometimes they don't. It appears to be a hit-and-miss process.'

Although it appears that the pattern of quality tool deployment is a firm-specific phenomenon, we are exploring the possibility that there are common groups of quality tools that tend to be used as a set across a large number of firms. Formally this proposition is stated as follows.

Proposition 1: There exist a small finite number of underlying dimensions in the pattern of quality tool deployment across organizations.

Moreover, we posit that there exists a grouping of tool deployment patterns that vary based upon the objectives being sought. These strategies are described next.

3.2. Quality strategies
Historically, firms in the early stages of TQM implementation employ quality tools in a random manner (Handfield and Ghosh 1994). In contrast, firms influenced by Japanese quality control management recognized that quality excellence can be achieved by employing a series of interdependent strategies: designing for quality; process control; process improvement and inspection (Handfield 1989).

3.2.1. Design quality
This strategy involves recognizing that product design makes major contributions to the three major business outcomes of cost, quality and timeliness (Fleischer and Liker 1992). This recognition leads to the idea that quality is designed into the product at least as much as it is built in during manufacture (Boothroyd and Dewhurst 1987, Dean and Susman 1989, Hauser and Clausing 1988, Tushman 1979, Whitney 1988). This strategy is typically used in the engineering and product design phase.

3.2.2. Process control
The strategy of process control is basically a precursor to the achievement of the broader objective of process management (Anderson et al. 1994). The methodological aspects of process management require the use of quality control tools, preventive maintenance and uniform production workloads (Mizuno 1988, Garvin 1984, 1983, Hayes 1981). Process control and measurement is required for the stability and reliability of the manufacturing process.

3.2.3. Process improvement
The strategy of process improvement is the logical step after process control. Examples of process improvement techniques recommended by experts include Taguchi methods, quality function deployment, and defect prevention (Taguchi 1979, Akao 1990, Sarazen 1990). A study conducted by the American Quality Foundation and Ernst & Young (1992) reported that only three management practices had a significant impact on performance, regardless of industry, country, or starting position. These three practices include process improvement methods, strategic plan deployment, and supplier certification programs.

3.2.4. Inspection
The strategy of inspecting incoming, in-process and finished goods for defects completes the cycle. The inspection and process control strategies are precursor activities in the evolution of quality control. These activities have been superseded by design improvement methods (Fortuna 1990). Advanced companies producing piece parts typically employ design, whereas service-oriented firms utilize process control. Today, the intense competitive environment dictates that most firms cannot afford to ignore both product and process design improvements.

There also appears to be a new and evolving pattern to the deployment of quality strategies. For example, it has been emphasized that quality managers in leading-edge companies have shifted their attention from inspecting quality to designing quality into products and services (Georgantzas et al. 1995). The four strategies identified here form a basis for selection of quality tools. Tools such as Quality Function Deployment, quality circles, equipment calibration testing and design for manufacturability are more likely to be used by TQM proponents emphasizing design quality and process improvement, as opposed to tools such as acceptance sampling and SPC, which are pursued by inspection-oriented firms (Radhakrishnan and Srinidhi 1994, Sower et al. 1993, Flynn et al. 1995). In contrasting the strategies of high performers with those of low performers, the American Quality Foundation and Ernst & Young study (1992) found that low-performing firms tend to inspect-in quality, while high-performing firms tend to design-in quality. However, the literature is not consistent with respect to the type of strategies (design versus inspection) that firms should pursue. In an interesting contrast, Radhakrishnan and Srinidhi (1994) found that quality management decisions relating to designing-in and inspecting-in are context-specific. They found that partial acceptance sampling (an inspection tool) was optimal in certain contexts. Moreover, the implementation of quality strategies such as inspection, design quality, process control and process improvement should be related to the types of tools deployed within the organization. This is stated in the form of the following proposition.

Proposition 2: The pattern of quality tool deployment is related to the type of quality strategy adopted by organizations.

As noted in the literature review, few studies have examined the relationship between the pattern of quality tool deployment and quality performance. We therefore explore this relationship through the following proposition.

Proposition 3: The pattern of quality tool deployment is related to quality performance in organizations.

We also posit that the deployment of certain quality tools has the potential to affect overall firm performance measures such as market share, return on assets and growth measures. This relationship is suggested in a recent study which found that conformance to specifications was significantly related to three overall firm performance measures: return on investment (ROI) growth, sales growth, and return on sales (ROS) growth (Forker et al. 1996). Our study posits that the relationship between quality strategy and business performance may be mediated by the types of quality tools used.

Proposition 4: The pattern of quality tool deployment is related to business performance.

The four stated propositions attempt to identify overall patterns in the use of quality tools. The proliferation of quality management concepts and tools across a variety of industries is evident, given that previous winners of various quality awards (including the Baldrige Award and New York's state quality award) span a variety of organizations, including a hotel, a rock supplier, the New York State Police, a school system, and architects (Godfrey 1993). Recognizing that the pattern of tool deployment may vary across industries, the four propositions stated above are tested both across multiple industries and within three subsamples: the automotive, electronics and consumer products industries.

4. Methodology

4.1. Survey instrument
A comprehensive list of quality tools and techniques was compiled using a two-stage technique. In the first stage, a careful review of the literature was conducted to identify an a priori set of tools and techniques used in different settings and industries. In particular, care was taken to include a wide variety of tools and techniques such as graphical tools, problem identification tools, modeling tools, preventive tools and creativity tools (Brocka and Brocka 1992). In the second stage, in-depth case studies of 14 North American and European manufacturing organizations were used to selectively reduce the list of quality tools to include those that were commonly used. In selecting these 14 firms, care was taken to include a diverse sample of firms that were in various stages of TQM implementation. In most cases, the quality director or vice-president at the corporate office was contacted and interviewed. All the interviews were carried out on-site using a structured interview protocol. The field notes from these interviews were used as the basis for choosing the quality tools for the questions in this survey.

The two-stage technique yielded a list of 38 quality tools and techniques (contact the authors for a set of definitions for these tools); 27 of the 38 quality tools included in this research are frequently mentioned in the quality literature. The remaining 11 tools were being employed in a number of firms interviewed in stage one of the study.

For each of the 38 quality tools and techniques, respondents were asked to: rate the implementation status of the tool (on a 4-point scale, with 4 being 'high use' and 1 being 'not used'); and rate the tool's impact on quality improvement (on a 7-point scale, with 7 being 'high impact' and 1 being 'low impact'). Respondents were also asked to allocate 100 points among four quality strategies (inspection, process control, process improvement, and design). The allocation of points represented the relative emphasis of the firm's quality strategy.
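As a sketch of the data this instrument yields, one respondent's record might look like the structure below. The tool names and point values are hypothetical; only the scale ranges (1-4 and 1-7) and the 100-point constant-sum constraint come from the survey description above.

```python
# Hypothetical single-respondent record mirroring the survey design.
response = {
    "tool_ratings": {
        "histograms":      {"implementation": 4, "impact": 6},
        "pareto_analysis": {"implementation": 3, "impact": 5},
    },
    "strategy_points": {"inspection": 20, "process_control": 30,
                        "process_improvement": 25, "design": 25},
}

def validate(resp):
    """Check each rating is on its scale and the strategy allocation sums to 100."""
    for tool, r in resp["tool_ratings"].items():
        assert 1 <= r["implementation"] <= 4, tool
        assert 1 <= r["impact"] <= 7, tool
    assert sum(resp["strategy_points"].values()) == 100
    return True

def relative_emphasis(resp):
    """Convert the constant-sum allocation into strategy shares."""
    return {k: v / 100 for k, v in resp["strategy_points"].items()}
```

A constant-sum allocation forces trade-offs between the four strategies, which is why it captures relative emphasis rather than independent ratings.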

4.2. Sample
The firms used in the sample were identified through the American Society for Quality Control (ASQC). A list of 3000 quality directors and vice-presidents was obtained, and a sub-sample of 1469 manufacturing firms in the automotive, chemical, computer, construction, consumer products, defense electronics, industrial products, medical device, packaging, pharmaceutical, paperboard, semiconductor and telecommunications industries was identified. Two mailings with one follow-up reminder produced 351 surveys (a 23% response rate). Of these surveys, 38 contained missing data for the variables used in this study, and were removed from the sample, resulting in a final sample size of 313 firms (a 21% response rate). The firms were located in all 50 American states. The firms were from all of the following industry groups: automotive, building materials, computer, consumer products, defense and aerospace, electronics, food, chemicals, scientific, service industries and others. There is evidence to show that quality management techniques can vary significantly within the same organization across business units (Benson et al. 1991). Consequently, it was deemed appropriate to select the division level as the unit of analysis in this study.

5. Results

5.1. Descriptive statistics
Prior to analysis, the sample was screened for outliers and data errors. The means, standard deviations, and correlations among the quality tool factors, quality strategies, quality performance and business performance are shown in tables 1(a)-(d). The factor analysis results which were used in developing the quality tool constructs are discussed later.


Variable                   Mean    Std. Dev.  Minimum  Maximum  Sample size

1 Designing quality        25.858  15.565     0        80       313
2 Inspection               22.645  18.121     0        100      313
3 Process control          27.359  13.234     0        80       313
4 Process improvement      24.138  12.663     0        100      313
5 HR factor                4.324   1.385      1        7        155
6 Measurement factor       4.253   1.295      1        7        172
7 Design factor            3.527   1.488      1        7        126
8 Discipline factor        4.944   1.284      1        7        232
9 Customer rejects         0.060   0.428      -1       +1       255
10 Defect rates            0.079   0.352      -1       +1       225
11 Rework rates            0.070   0.335      -1       +1       191
12 Scrap rates             0.047   0.310      -1       +1       232
13 Market share            5.155   1.403      1        7        310
14 ROA                     4.813   1.321      1        7        305
15 MS growth               4.860   1.434      1        7        307
16 Sales growth            4.938   1.455      1        7        307
17 ROA growth              4.683   1.327      1        7        303
18 Production costs        4.482   1.159      1        7        305
19 Customer service        5.360   1.090      2        7        308
20 Product quality         5.594   1.009      2        7        308
21 Competitive position    5.271   1.165      1        7        310
22 Customer relationship   5.023   1.147      1        7        310

Items 1-4 are measured on a scale from 0 to 100; items 5-8 and 13-22 on a scale from 1 to 7; items 9-12 on a scale from -1 to +1.

Table 1(a). Descriptive statistics of quality tool factors, quality strategy and performance items.

In table 1(a) the descriptive statistics (mean, standard deviation, minimum and maximum values) for the items comprising the quality tool factors, quality strategies and performance variables are presented. Table 1(b) shows the correlations of the four quality strategies (design, inspection, process control and process improvement) with the four quality tool factors (Human Resources, Measurement, Design and Discipline). As can be seen from this table, all quality strategies are correlated with one another at a p < 0.05 significance level. In table 1(c) the correlations between quality tool factors and indicators of quality performance are presented. In table 1(d) the correlations between quality tool factors and indicators of business performance are presented. The implications of these statistics for the results will be discussed following the presentation of the proposition testing.

5.2. Measurement
The items forming the quality tool factor scales were subject to a process of item purification by testing for unidimensionality within quality tool factors using factor analyses, and by testing for internal consistency using Cronbach's alpha (Cronbach 1951). All scales within the quality tool factors were found to be unidimensional. As can be seen from table 2, the scales for the quality tool factors were internally consistent and reliable, with Cronbach's alphas ranging from 0.79 to 0.84. Consistent with the technique of item purification suggested by Churchill (1979), items within a scale were eliminated if their corrected item-total correlation (first column in table 2) was less than 0.45. A high score for an item's corrected item-total correlation indicates that all items within a domain of a concept have an equal

                        1      2      3      4      5      6      7

1 Designing quality
2 Inspection            0.455
3 Process control       0.336  0.505
4 Process improvement   0.259  0.516  0.246
5 HR factor             0.246  0.377  0.136  0.159
6 Measurement factor    0.101  0.368  0.236  0.247  0.567
7 Design factor         0.233  0.322  0.151  0.078  0.631  0.653
8 Discipline factor     0.148  0.274  0.178  0.087  0.693  0.636  0.633

n = 107. Significant at 0.05. Significant at 0.10.

Table 1(b). Correlations for quality tool factors and quality strategy items.

                        1      2      3      4      5      6      7

1 Customer rejects
2 Defect rates          0.605
3 Rework rates          0.396  0.548
4 Scrap rates           0.476  0.582  0.539
5 HR factor             0.086  0.169  0.004  0.071
6 Measurement factor    0.140  0.042  0.079  0.089  0.631
7 Design factor         0.149  0.054  0.057  0.045  0.727  0.651
8 Discipline factor     0.279  0.216  0.081  0.041  0.705  0.670  0.735

n = 69. Significant at 0.05. Significant at 0.10.

Table 1(c). Correlations for quality tool factors and quality performance items.


                          1      2      3      4      5      6      7      8      9      10     11     12     13

1 Market share
2 ROA                     0.311
3 MS growth               0.246  0.449
4 Sales growth            0.214  0.390  0.781
5 ROA growth              0.122  0.668  0.576  0.643
6 Production costs        0.023  0.044  0.068  0.103  0.104
7 Customer service        0.009  0.239  0.262  0.281  0.173  0.052
8 Product quality         0.178  0.171  0.218  0.260  0.106  0.152  0.512
9 Competitive position    0.533  0.362  0.472  0.467  0.309  0.020  0.194  0.437
10 Customer relationship  0.097  0.347  0.271  0.317  0.384  0.054  0.353  0.298  0.176
11 HR factor              0.112  0.141  0.297  0.271  0.248  0.129  0.137  0.191  0.106  0.387
12 Measurement factor     0.243  0.035  0.205  0.156  0.071  0.038  0.128  0.251  0.188  0.248  0.567
13 Design factor          0.004  0.065  0.241  0.217  0.210  0.037  0.210  0.244  0.119  0.279  0.646  0.644
14 Discipline factor      0.042  0.188  0.254  0.169  0.199  0.085  0.223  0.214  0.129  0.219  0.690  0.631  0.636

n = 104. Significant at 0.05. Significant at 0.10.

Table 1(d). Correlations for quality tool factors and business performance items.

amount of common core, and hence responses to these similar items should have high intercorrelations. The third column in table 2 indicates that for all the items included in the final instrument, the non-inclusion of each item results in a reduction of internal consistency (as can be seen from the reduction in alpha values). Overall, the measurement analyses indicated that the scales were reliable. After the item purification process, an examination of the items within the four quality tool factors revealed that the items had high content validity. For example, the five items under the human resources factor (HR) indicate the deliberate deployment of methods that empower and recognize employees in quality improvement efforts.
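The two purification statistics described above can be sketched in plain Python. The item scores below are invented; only the machinery (Cronbach's alpha, and the corrected item-total correlation with its 0.45 cut-off) follows the procedure the authors describe.

```python
import statistics as st

def cronbach_alpha(items):
    """items: one list of scores per item, respondents in the same order."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(st.variance(it) for it in items) / st.variance(totals))

def pearson(xs, ys):
    """Pearson correlation of two equal-length score lists."""
    mx, my = st.mean(xs), st.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (len(xs) * st.pstdev(xs) * st.pstdev(ys))

def corrected_item_total(items, i):
    """Correlate item i with the total of the *other* items in the scale."""
    rest = [sum(scores) - scores[i] for scores in zip(*items)]
    return pearson(items[i], rest)

def purify(items, cutoff=0.45):
    """Keep only items whose corrected item-total correlation meets the cut-off."""
    return [i for i in range(len(items)) if corrected_item_total(items, i) >= cutoff]

# Hypothetical scores for a three-item scale, five respondents each.
scale = [[1, 2, 3, 4, 5], [2, 2, 3, 5, 5], [1, 3, 3, 4, 4]]
```

With these highly correlated hypothetical items, all three survive the 0.45 cut-off and the scale's alpha is well above the 0.79 floor reported in table 2.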

5.3. Testing of propositions
A combination of factor analysis, simple regression analysis, stepwise regression analysis, and subsample testing was used to test the four propositions identified in earlier paragraphs.

5.3.1. Proposition 1
The results of testing Proposition 1 are shown in table 3. An exploratory factor analysis using principal components with varimax rotation was conducted on the set of 38 quality tools identified earlier in the section on developing the survey instrument. Only factors that accounted for variances greater than one (i.e. eigenvalues > 1) were extracted. Four factors were extracted that accounted for 63.3%

Factor, with items                         Corrected item-total correlation  Cronbach's alpha  Alpha if item is deleted  Sample size

HR factor                                                                    0.8282                                      155
  Workers perform final inspection         0.6990                                              0.7658
  Workers perform in-process inspection    0.6954                                              0.7712
  Worker responsible for defect does rework 0.6607                                             0.7770
  Employee suggestion program              0.5516                                              0.8078
  Quality circles                          0.5152                                              0.8227

Measurement factor                                                           0.8429                                      172
  Histograms                               0.6897                                              0.8006
  Pareto analysis                          0.6614                                              0.8058
  Process capability studies               0.7447                                              0.7808
  Regression                               0.4928                                              0.8499
  Statistical process control              0.6639                                              0.8047

Design factor                                                                0.8054                                      126
  Quality function deployment              0.6980                                              0.7347
  Zero defects program                     0.5849                                              0.7684
  Design of experiments                    0.6294                                              0.7542
  Failure mode and effects analysis        0.4529                                              0.8073
  Design for manufacturability             0.5910                                              0.7664

Discipline factor                                                            0.7352                                      232
  Continuous improvement programs          0.5857                                              0.6252
  Total quality management program         0.6006                                              0.5984
  Preventive maintenance                   0.5005                                              0.7177

Table 2. Reliabilities of items for the four quality tool factors.

of the total variation in the observed variables. Table 3 shows the total and the cumulative variance for each extracted factor. To interpret the factors, only items which had 'strong' factor loadings (greater than 0.5 in absolute value, shown in bold) were included (Norusis 1990). The resulting four factors may be interpreted, respectively, as Human Resource Tools, Measurement Tools, Design Tools, and Discipline Tools. Note that the names provided to the four latent factors are by no means unique. However, regardless of the nomenclature used, the substantive import of the pattern of tools is uniform. It may also be noted that the item 'employee involvement in quality planning' did not load uniquely on one of the factors. As can be seen from table 3, this tool loaded high on both the HR factor and the Discipline factor. Similarly, the item 'profit sharing with employees' loaded high on both the HR factor and the Design factor. Since cross loading can be a problem, especially in summated scales, we excluded both these items in arriving at the four quality tool factors.

Proposition 1 posits that a small number of underlying dimensions in the pattern of quality tool deployment exists across a large number of organizations. The results from table 3 indicate that there exist four fundamental types of quality tool deployment: Human Resource Tools; Measurement Tools; Design Tools; and Discipline Tools.
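The extraction rule used here (retain only components with eigenvalue greater than one, the Kaiser criterion) can be sketched in pure Python on a small hypothetical correlation matrix. Varimax rotation and the 0.5 loading cut-off are omitted, and the matrix below is invented rather than taken from the study's data.

```python
import math

# Hypothetical 4-variable correlation matrix: variables 1-2 and 3-4 cluster together.
R = [[1.0, 0.8, 0.1, 0.1],
     [0.8, 1.0, 0.1, 0.1],
     [0.1, 0.1, 1.0, 0.7],
     [0.1, 0.1, 0.7, 1.0]]

def jacobi_eigenvalues(a, max_rotations=100):
    """Eigenvalues of a symmetric matrix via classical Jacobi rotations."""
    a = [row[:] for row in a]
    n = len(a)
    for _ in range(max_rotations):
        # Locate the largest off-diagonal element.
        p, q, big = 0, 1, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                if abs(a[i][j]) > big:
                    big, p, q = abs(a[i][j]), i, j
        if big < 1e-12:
            break
        # Rotation angle that zeroes a[p][q] in the similarity transform.
        theta = 0.5 * math.atan2(2 * a[p][q], a[p][p] - a[q][q])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):  # update columns p and q
            akp, akq = a[k][p], a[k][q]
            a[k][p], a[k][q] = c * akp + s * akq, c * akq - s * akp
        for k in range(n):  # update rows p and q
            apk, aqk = a[p][k], a[q][k]
            a[p][k], a[q][k] = c * apk + s * aqk, c * aqk - s * apk
    return sorted((a[i][i] for i in range(n)), reverse=True)

# Kaiser criterion: retain components whose eigenvalue exceeds 1.
retained = [ev for ev in jacobi_eigenvalues(R) if ev > 1.0]
# For this matrix, two components survive, jointly explaining 3.5/4 = 87.5%
# of the total variance, mirroring the paper's eigenvalue > 1 extraction step.
```

Because the trace of a correlation matrix equals the number of variables, the sum of the retained eigenvalues divided by that count gives the proportion of variance explained, the quantity reported in table 3.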

The set of items under Human Resource Tools (inspection and rework, and involvement in the form of employee suggestions and participation in teams such


Variables                                  Factor 1 (HR)  Factor 2 (Measurement)  Factor 3 (Design)  Factor 4 (Discipline)

Workers perform final inspection           0.795          0.152                   0.128              0.192
Workers perform in-process inspection      0.731          0.292                   0.049              0.298
Worker responsible for defect does rework  0.671          0.152                   0.313              0.294
Employee suggestion program                0.468          0.114                   0.112              0.524
Quality circles                            0.506          0.172                   0.383              0.342
Histograms                                 0.199          0.842                   0.144              0.089
Pareto analysis                            0.136          0.803                   0.089              0.269
Process capability studies                 0.102          0.691                   0.381              0.311
Regression                                 0.307          0.626                   0.390              -0.038
Statistical process control                0.100          0.651                   0.175              0.446
Quality function deployment                0.139          0.187                   0.788              0.209
Zero defects program                       0.431          0.141                   0.685              0.017
Design of experiments                      0.177          0.419                   0.621              0.271
Failure mode and effects analysis          -0.137         0.219                   0.599              0.287
Design for manufacturability               0.277          0.246                   0.540              0.393
Continuous improvement programs            0.
