Faculty of Hotel and Tourism Management, Universiti Teknologi MARA, Selangor, Malaysia
*Corresponding author: [email protected]
ABSTRACT. Researchers in the social sciences have long debated the operationalisation of formative and reflective
measurement in Partial Least Squares Structural Equation Modeling (PLS-SEM). This paper aims to offer guidance on
formative and reflective measurement model assessment in PLS-SEM. First, this paper explores and discusses the
similarities and differences between the formative and reflective measurement models. Next, this paper reviews the
practice of measurement model assessment for formative and reflective constructs based on the latest methodological
background. Finally, this paper proposes a set of guidelines for classifying formative and reflective constructs and
the steps in assessing formative and reflective measurement models. This paper addresses the issue of measurement
misspecification in PLS-SEM assessment by providing logical guidelines for researchers. This paper also helps to
explain and suggest appropriate PLS-SEM assessment procedures for researchers as they plan future research projects.
1. INTRODUCTION
Partial Least Squares Structural Equation Modeling (PLS-SEM) is a second-generation
data analysis technique in the family of structural equation modelling ([1]; [2]). Unlike the covariance-based
approaches to SEM, PLS-SEM is a prediction-oriented approach, usually used
for exploratory research and also appropriate for confirmatory research ([3]; [4]). Lauro and Vinzi
([5]) suggested that PLS-SEM is particularly useful for causal-predictive analysis in situations of
high complexity and low theoretical information availability. Meanwhile, other researchers used
the PLS-SEM approach because of its advantages over the covariance-based approach ([2]; [3]). The
benefits of this soft-modelling approach include its ability to accommodate theoretical conditions,
measurement conditions, distributional considerations, and practical considerations ([3]).
Besides, PLS-SEM is also an exploratory statistical tool that is able to process primary or
secondary data ([6]). Meanwhile, other researchers claimed that the PLS-SEM approach suits
prediction-oriented objectives, handles non-normal data distributions and accommodates small sample
sizes ([7]; [8]; [9]; [10]). Table 1.1 illustrates how the PLS-SEM approach compares with the
Covariance-based Structural Equation Modeling (CB-SEM) approach.
Table 1.1. Comparing the Partial Least Squares (PLS) to the Covariance-based (CB) approach of SEM
Criterion | PLS-SEM | CB-SEM
As shown above, PLS-SEM aims to explain variance, similar to basic regression analysis. It is
therefore essential to note that PLS-SEM provides coefficient of determination (R2) values in addition
to indicating the significant relationships among the constructs, which together denote how well the
model performs. Among the advantages of PLS-SEM over basic regression is its ability to handle
several independent variables at one time, even when they display multicollinearity ([3]; [6]). Besides,
PLS-SEM shares some of the assumptions of regression, namely those concerning outliers and nonlinear
data relationships. Lastly, PLS-SEM's minimal demands regarding measurement scales, sample size and
residual distributions allow it to be used whether or not the hypothesised relationships exist, and it
can also be used to suggest propositions for later testing ([6]; [7]).
PLS-SEM involves a two-step approach in which the measurement model is estimated before the
structural model is analysed. It relies on an iterative algorithm that first solves the blocks of the
measurement model separately and then estimates the path coefficients in the structural model. This paper
focuses on the differences between, and the assessment of, the formative and reflective measurement models.
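To make this two-step logic concrete, the sketch below illustrates, under simplifying assumptions, how an iterative PLS algorithm alternates between outer and inner approximations for a minimal model with one block of indicators predicting another. The function name pls_two_block, the Mode A (centroid) weighting scheme and the assumption of pre-standardised indicators are illustrative choices, not the authors' implementation; dedicated PLS-SEM software should be used in practice.

```python
import numpy as np

def standardize(v):
    """Z-standardise a vector (mean 0, unit variance)."""
    return (v - v.mean()) / v.std(ddof=0)

def pls_two_block(X1, X2, tol=1e-6, max_iter=300):
    """Minimal Mode-A PLS sketch for a single structural path LV1 -> LV2.

    X1, X2 : (n, k) arrays of standardised indicators for the two blocks.
    Returns the outer weights of each block and the structural path coefficient.
    """
    n = X1.shape[0]
    w1 = np.ones(X1.shape[1])
    w2 = np.ones(X2.shape[1])
    for _ in range(max_iter):
        # Step 1 (outer approximation): latent scores as weighted indicator sums.
        y1 = standardize(X1 @ w1)
        y2 = standardize(X2 @ w2)
        # Step 2 (inner approximation, centroid scheme): each latent score is
        # replaced by its neighbour, signed by their correlation.
        s = np.sign(np.corrcoef(y1, y2)[0, 1])
        z1, z2 = s * y2, s * y1
        # Step 3 (Mode A update): new outer weights are the covariances of the
        # indicators with the inner proxy of their own block.
        w1_new, w2_new = X1.T @ z1 / n, X2.T @ z2 / n
        if max(np.abs(w1_new - w1).max(), np.abs(w2_new - w2).max()) < tol:
            w1, w2 = w1_new, w2_new
            break
        w1, w2 = w1_new, w2_new
    # Second step: estimate the structural path by regressing LV2 on LV1
    # (with standardised scores this is simply their correlation).
    y1, y2 = standardize(X1 @ w1), standardize(X2 @ w2)
    path = float(np.corrcoef(y1, y2)[0, 1])
    return w1, w2, path
```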
2. MEASUREMENT MODEL
A measurement model is the component of the overall model in which the latent constructs are
specified. Measurement models, as discussed in the psychological, sociological and management
literature, identify various instances where reflective and formative measures differ. The most
common distinction between reflective and formative measures concerns the relationship between the
construct and its measurement items ([11]; [12]; [13]; [14]; [15]). In the reflective mode, causality
runs from the construct to the measurement items, and it is the other way around for the formative
mode. Figure 1 below exhibits the differences between formative and reflective constructs.
In a formative construct, the latent variable is considered a consequence of its respective indicators
and, because the latent variable is defined by its indicators, changing/replacing a formative
indicator will alter the meaning of the latent variable ([17]).
Alternatively, in a reflective construct, indicators are deemed consequences of the latent
variable to which they belong, which means the items are manifested by the construct ([2]; [18]; [19]).
Reflective indicators are interchangeable and, to a certain extent, can even be removed.
Another critical differentiation between the two models is whether the measurement items
are correlated. In the formative model, the measurement items do not need to be highly correlated,
while the reflective model stipulates that all measurement items must be highly correlated.
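As a simple numerical illustration of this last point (a sketch with simulated data, not drawn from the paper), reflective indicators generated from a common latent cause end up highly inter-correlated, whereas indicators that merely form a composite need not correlate at all:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

# Reflective case: one latent variable "causes" all three indicators,
# so the indicators are highly inter-correlated.
latent = rng.normal(size=n)
reflective = np.column_stack(
    [0.9 * latent + 0.3 * rng.normal(size=n) for _ in range(3)]
)

# Formative case: three independent indicators jointly define a composite,
# so the indicators themselves show near-zero inter-correlations.
formative = rng.normal(size=(n, 3))
composite = formative @ np.array([0.5, 0.3, 0.2])

print(np.corrcoef(reflective, rowvar=False).round(2))  # large off-diagonal values
print(np.corrcoef(formative, rowvar=False).round(2))   # off-diagonals near zero
```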
3. CONSTRUCT CLASSIFICATION
Since the research data are collected prior to model specification, the next step involves
classifying the constructs as either formative or reflective. While some scholars argue that no
construct is inherently reflective or formative, others suggest that a construct must be specified as
either reflective or formative based on theoretical considerations ([14]; [15]; [20]). The rationale of
these theoretical considerations is to develop items that measure the actual construct. The choice
of measurement would affect the content, parsimony and criterion validity of the measurement
model ([13]). Other researchers suggested that using an incorrect measurement model undermines
the content validity of the constructs, misrepresents the structural relationships between
them, and ultimately lowers the usefulness of the research findings ([14]; [15]; [20]).
After reviewing the literature, this study found that three criteria are applicable in classifying
the constructs: (i) the nature of the construct; (ii) the direction of causality between the indicators
and the latent construct; and (iii) the characteristics of the indicators used to measure the construct.
3.1. Nature of the construct. Based on a reflective model, the latent construct is present (in an
absolute sense) independently of the measures ([21]). This aspect is in line with many business
and related methodological studies that use reflective measurement ([2]; [22]). Alternatively, for
the formative model, the latent construct is dependent on a constructive, operational or
instrumental interpretation ([23]; [24]). It is vital to highlight that, because formative
indicators define the latent variable, they are not interchangeable. However, only limited
examples of formative models were found in the social science literature, specifically with regards
to secondary data ([14]; [25]; [26]). These researchers argued that secondary data tend to be very
descriptive, may be challenging to obtain, and often do not measure all the variables that are
important to the research construct. Despite these limitations, they also noted that secondary data
allow researchers to test complex hypotheses involving multiple variables and large samples, which
facilitates the use of statistical analysis.
3.2. The direction of causality. The direction of causality between the construct and the indicators
is the second consideration in deciding whether the measurement model is reflective or formative
([13]; [16]). Reflective models assume that causality flows from the construct to the indicators;
hence, when there is a change in the construct, there will be a change in the indicators as well.
The reverse is true for formative models, where causality flows from the indicators to the
construct: a change in the indicators results in a change in the construct under study. It is also
essential to note that the different causal directions have significant implications for the
treatment of measurement error as well as for model estimation ([13]). Indeed, the main difference
between formative and reflective models lies in the treatment of measurement error, which in turn
may affect the parameter estimates.
3.3. Indicator characteristics. To classify a construct as
reflective or formative, the differences with regard to specific indicator characteristics need to be
analysed. For a reflective model, the content validity of the construct is not affected by the
inclusion or exclusion of one or more indicators from its domain; the indicators are
interchangeable because they share a common theme ([4]; [27]; [28]). However, in the case of formative
models, the types and the number of indicators representing the construct affect
the construct itself, and thus the conceptual meaning of the construct can change when an
indicator is added or removed. Indicators that represent the construct conceptually are
nevertheless considered adequate from the viewpoint of empirical prediction.
Based on the above measurement and theoretical considerations, research constructs can
be classified into either a formative or a reflective measurement model ([13]; [14]; [15]). Table 3.1
describes the justification process used to determine whether the construct under study is reflective
or formative.
Table 3.1. Classification of the construct under study
Nature of the construct | Direction of causality | Indicator characteristics | Decision
Latent construct exists (in an absolute sense) independently of the measures | Causality flows from the construct to the indicators; a change in the construct causes a change in the indicators | The construct is not sensitive to the indicators; inclusion or exclusion does not materially alter the content validity of the construct | Reflective
4. REFLECTIVE MEASUREMENT MODEL ASSESSMENT
The reflective measurement model is assessed using the following criteria:
Internal consistency reliability: do not use Cronbach's alpha; composite reliability > 0.70 (Bagozzi and Yi (1988) [18]).
Convergent validity: average variance extracted (AVE) > 0.50 (Bagozzi and Yi (1988) [18]).
Discriminant validity (Fornell-Larcker criterion): each construct's AVE should be higher than its squared correlation with any other construct (Fornell and Larcker (1981) [30]).
Discriminant validity (cross loadings): each indicator should load highest on the construct it is intended to measure (Chin (1999) [7]).
A threshold value of 0.70 was applied in assessing the internal consistency of the model,
specifically to determine the minimum factor loading of each item ([18]). Items
with loadings below 0.70 were removed when doing so raised the composite reliability above the
threshold value ([3]). Meanwhile, convergent validity was determined using the widely accepted
average variance extracted (AVE) ([3]). An AVE above 0.50 indicates that, on average, each construct
explains more than half of the variance of its measurement items ([14]; [18]).
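Both statistics can be computed directly from the standardised outer loadings; the snippet below is a minimal sketch of the usual formulas, with purely hypothetical example loadings:

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability from standardised outer loadings (threshold > 0.70)."""
    lam = np.asarray(loadings, dtype=float)
    error_var = 1.0 - lam ** 2                 # indicator error variances
    return lam.sum() ** 2 / (lam.sum() ** 2 + error_var.sum())

def average_variance_extracted(loadings):
    """AVE: mean of the squared standardised loadings (threshold > 0.50)."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

loadings = [0.82, 0.76, 0.88, 0.71]            # hypothetical standardised loadings
print(round(composite_reliability(loadings), 3))
print(round(average_variance_extracted(loadings), 3))
```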
The Fornell and Larcker ([30]) criterion was used to examine discriminant validity at the
construct level, whereas discriminant validity at the item level was examined using Chin's
criterion ([7]). Implementing this two-fold technique for testing discriminant validity is
supported by various researchers, who suggested that the variance extracted estimates should
be greater than the squared correlation estimates ([8]; [9]; [13]; [18]). Lately, many researchers have
proposed the heterotrait-monotrait ratio of correlations (HTMT) to assess discriminant validity
in PLS-SEM ([31]). The HTMT can achieve higher specificity and sensitivity rates than the
cross-loadings and the Fornell-Larcker criterion. HTMT values below 0.85 indicate no discriminant
validity problems; that is, the HTMT criterion did not detect problematic correlations among the
latent constructs ([31]).
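Both construct-level checks can be scripted once the item scores and AVEs are available. The sketch below assumes a pandas DataFrame of indicator scores and uses the simple-average form of the HTMT; the function names and the 0.85 cut-off simply mirror the discussion above:

```python
import numpy as np
import pandas as pd

def fornell_larcker_ok(ave_a, ave_b, lv_correlation):
    """Each construct's AVE must exceed its squared correlation with the other."""
    return ave_a > lv_correlation ** 2 and ave_b > lv_correlation ** 2

def htmt(data: pd.DataFrame, items_a, items_b):
    """Heterotrait-monotrait ratio of correlations for two constructs.

    items_a / items_b are the indicator column names of each construct;
    values below 0.85 suggest no discriminant validity problem.
    """
    items_a, items_b = list(items_a), list(items_b)
    corr = data[items_a + items_b].corr().abs()
    # Mean correlation between indicators of different constructs.
    hetero = corr.loc[items_a, items_b].to_numpy().mean()
    # Mean within-construct correlations, excluding the diagonal.
    mono_a = corr.loc[items_a, items_a].to_numpy()
    mono_b = corr.loc[items_b, items_b].to_numpy()
    mean_a = mono_a[np.triu_indices_from(mono_a, k=1)].mean()
    mean_b = mono_b[np.triu_indices_from(mono_b, k=1)].mean()
    return hetero / np.sqrt(mean_a * mean_b)
```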
5. FORMATIVE MEASUREMENT MODEL ASSESSMENT
The formative measurement model is assessed using the following criteria:
Construct validity: estimate the indicator weights to measure the contribution of each formative indicator to the variance of the latent variable (Petter et al. (2007) [33]).
Indicator reliability: calculate the outer loadings of the formative construct; if the item loadings are relatively high (> 0.50), the indicator should be retained (Hair et al. (2012) [9]).
Multicollinearity among the formative indicators is assessed using the variance inflation factor
(VIF); the higher the value, the greater the correlation of the variable with the other variables ([34]).
For formative measures, a common rule of thumb is that VIF values greater than 5 represent high
multicollinearity ([20]). Recently, other researchers recommended that multicollinearity be considered
present only if the VIF value is higher than 10 ([6]; [31]).
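A VIF can be obtained for each formative indicator by regressing it on the remaining indicators of the same construct; the helper below is a minimal ordinary-least-squares sketch, and the 5 and 10 cut-offs mentioned in the comment simply restate the rules of thumb above:

```python
import numpy as np
import pandas as pd

def variance_inflation_factors(indicators: pd.DataFrame) -> pd.Series:
    """VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing indicator j
    on the remaining indicators of the same formative construct."""
    X = indicators.to_numpy(dtype=float)
    vifs = {}
    for j, name in enumerate(indicators.columns):
        y = X[:, j]
        others = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        r2 = 1.0 - ((y - others @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        vifs[name] = 1.0 / (1.0 - r2)
    # Values above 5 (or, more leniently, 10) signal multicollinearity.
    return pd.Series(vifs)
```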
Next, it is important to note that Cronbach's alpha and the composite reliability are not
estimated, since formative indicators are not assumed to be internally consistent ([7]; [35]).
Moreover, the AVE is not calculated, given that formative indicators are not expected to
demonstrate convergent validity ([35]). Therefore, to test for construct validity, the indicator
weights are estimated to measure the contribution of each formative indicator to the variance of
the latent variable. The item weights indicate whether or not an indicator explains a significant
portion of the variance of a formative construct ([36]; [37]). This step is in line with other
researchers who suggested that indicator weights can be used to test construct validity
([12]; [13]; [15]).
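The significance of the weights is typically judged with bootstrapped t-values or confidence intervals. The sketch below assumes the latent variable scores have already been obtained (for example, exported from PLS-SEM software, an assumption not stated in the paper) and simply resamples the multiple-regression weights of the formative indicators:

```python
import numpy as np

def bootstrap_indicator_weights(X, lv_scores, n_boot=5000, seed=1):
    """Bootstrap the weights of formative indicators X on the latent scores.

    Returns point estimates, bootstrap t-values and 95% percentile confidence
    intervals (intercept dropped). Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    y = np.asarray(lv_scores, dtype=float)
    n = len(y)
    Xc = np.column_stack([np.ones(n), X])
    estimate, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    boot = np.empty((n_boot, Xc.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample cases with replacement
        boot[b], *_ = np.linalg.lstsq(Xc[idx], y[idx], rcond=None)
    t_values = estimate / boot.std(axis=0, ddof=1)
    ci_low, ci_high = np.percentile(boot, [2.5, 97.5], axis=0)
    return estimate[1:], t_values[1:], ci_low[1:], ci_high[1:]
```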
Lastly, researchers should also examine indicator reliability. The outer loadings of the
formative construct should be tested to confirm indicator reliability. When an indicator's
weight is not significant but the corresponding item loading is relatively high (> 0.50), the
indicator should be retained, as proposed by several researchers ([9]; [29]). This ensures that
measurements are prioritised according to their reliability when making estimations
([15]; [38]).
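This combined decision rule can be expressed as a small helper; the 1.96 critical value and the 0.50 loading cut-off are assumed conventional thresholds rather than values prescribed by the paper:

```python
def retain_formative_indicator(weight_t_value, outer_loading,
                               t_crit=1.96, loading_cut=0.50):
    """Retain an indicator if its weight is significant, or if the weight is
    non-significant but its outer loading is still relatively high."""
    if abs(weight_t_value) >= t_crit:
        return True                       # significant weight: retain
    return outer_loading >= loading_cut   # high loading despite weak weight: retain
```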
6. PROPOSITION
In any study, it is vital to acknowledge the different types of measurement models and
understand the criteria involved when it comes to determining the measurement models’ mode.
The formulation of the measurement model depends on the direction of the relationships that are
present between the latent variables and their corresponding manifest variables. In general,
two distinct types of measurement model are available, namely (i) the reflective model, also called
the outward-directed model, and (ii) the formative model, also called the inward-directed model.
The distinction between measurement models for reflective and formative measures must therefore
be made explicit. It is important to note that reflective measures generally represent causality
from the construct to the specified measurement items, whereas formative measures assume the
opposite. This study found that each measurement model
must be tested by assessing the validity and reliability of the items and constructs used in each
(reflective and formative) model. These specific steps must be taken to ensure that only reliable
and valid constructs and measures are used, prior to assessing the nature of the relationships
proposed by the research hypotheses. This study, therefore, suggests the following guidelines for
researchers in assessing the differences between reflective and formative constructs as per Table
6.1 below.
Table 6.1. Guidelines for assessing reflective and formative constructs
Reflective construct: measures have similar sign and significance of relationships with the antecedents/consequences as the construct.
Formative construct: measures may not have similar significance of relationships with the antecedents/consequences as the construct.
Reflective construct: measurement error is taken into account at the measurement (indicator) level; error terms in the indicators can be identified.
Formative construct: measurement error is taken into account at the construct level; the error term cannot be identified if the formative measurement model is estimated in isolation.
Reflective construct: assessments comprise internal consistency reliability, indicator reliability, convergent validity, and discriminant validity (Fornell-Larcker criterion, cross loadings, and the heterotrait-monotrait ratio of correlations (HTMT)).
Formative construct: assessments comprise multicollinearity, construct validity, and indicator reliability.
As per Table 6.1, the most typical distinction between reflective and formative measures
has to do with the relationship between the construct and its measurement items. The reflective
modes indicate causality from constructs to measurement items, whereas formative modes reflect
the opposite. In formative measurement models, the latent variable is considered a consequence
of its respective indicators and because the latent variable is defined by its indicators,
changing/replacing a formative indicator will alter the meaning of the latent variable.
Alternatively, in reflective measurement models, indicators are regarded as the consequences of
the latent variable to which they belong, which means items are manifested by the construct. The
reflective indicators can be used interchangeably and can, to a certain extent, even be discarded.
Another critical differentiation between the two models has to do with whether or not the
measurement items are correlated. In the formative model, it is not essential for all measurement
items to be highly correlated, while the reflective model stipulates that all measurement items
need to have a high level of correlation.
7. CONCLUSION
This paper proposes a set of guidelines for classifying formative and reflective
constructs and the steps for assessing formative and reflective measurement models. In
addition, this paper confirms that there are apparent differences between reflective and formative
constructs and that construct identification and validation depend on the type of construct
specified by the researcher. This paper proposes to quantitative researchers that the decision
whether to use formative or reflective indicators should be based on theoretical grounds.
Misspecification of measurement models may affect research outcomes or mislead future research.
Conflicts of Interest: The author(s) declare that there are no conflicts of interest regarding the
publication of this paper.
Acknowledgement: This research work was supported by the Universiti Teknologi MARA
Malaysia.
References
[1] J.F. Hair, ed., A primer on partial least squares structural equations modeling (PLS-SEM), SAGE, Los
Angeles, 2014.
[2] R.B. Kline, Principles and practice of structural equation modeling, Guilford Press, New York, 2015.
[3] J.F. Hair, C.M. Ringle, M. Sarstedt. PLS-SEM: Indeed a silver bullet. J. Market. Theory Practice.
19(2)(2011), 139-152.
[4] X. Wang, L.M. Jessup, P.F. Clay. Measurement model in entrepreneurship and small business
research: a ten year review. Int. Entrepren. Manage. J. 11(1)(2015), 183-212.
[5] V.E. Vinzi, C.N. Lauro, S. Amato, PLS Typological Regression: Algorithmic, Classification and
Validation Issues, in: H.-H. Bock, et al. (Eds.), New Developments in Classification and Data Analysis,
Springer-Verlag, Berlin/Heidelberg, 2005: pp. 133–140.
[6] J.F. Hair Jr., L.M. Matthews, R.L. Matthews, M. Sarstedt, PLS-SEM or CB-SEM: updated guidelines on
which method to use. Int. J. Multivar. Data Anal. 1(2)(2017), 107-123.
[7] W.W. Chin, P.R. Newsted, Structural equation modeling analysis with small samples using partial
least squares. Stat. Strat. Small Sample Res. 1(1)(1999), 307-341.
[8] R.R. Sinkovics, ed., New challenges to international marketing, Emerald, London, 2009.
[9] J.F. Hair, M. Sarstedt, C.M. Ringle, J.A. Mena. An assessment of the use of partial least squares
structural equation modeling in marketing research. J. Acad. Market. Sci. 40(3)(2012), 414-433.
[11] A. Diamantopoulos, P. Riefler, K.P. Roth. Advancing formative measurement models. J. Bus. Res.
61(12)(2008), 1203-1218.
[12] A. Diamantopoulos, Incorporating formative measures into covariance-based structural equation
models. MIS Quart. 35(2011), 335-358.
[13] K.A. Bollen, A. Diamantopoulos, Notes on measurement theory for causal-formative indicators: A
reply to Hardin. Psychol. Meth. 22(2017), 605–608.
[14] T. Coltman, T.M. Devinney, D.F. Midgley, S. Venaik. Formative versus reflective measurement
models: Two applications of formative measurement. J. Bus. Res. 61(12)(2008), 1250-1262.
[15] E.A. Khan, M.N.A. Dewan, M.M.H. Chowdhury. Reflective or formative measurement model of
sustainability factor? A three industry comparison. Corp. Owner. Control. 13(2)(2016), 83-92.
[16] K. Bollen, R. Lennox. Conventional wisdom on measurement: A structural equation perspective.
Psychol. Bull. 110(2)(1991), 305-314.
[17] A. Diamantopoulos, H.M. Winklhofer. Index construction with formative indicators: An alternative
to scale development. J. Mark. Res. 38(2)(2001), 269-277.
[18] R.P. Bagozzi, Y. Yi. On the evaluation of structural equation models. J. Acad. Market. Sci.
16(1)(1988), 74-94.
[19] C.B. Jarvis, S.B. MacKenzie, P.M. Podsakoff. A critical review of construct indicators and measurement
model misspecification in marketing and consumer research. J. Consumer Res. 30(2)(2003), 199-218.
[20] A. Diamantopoulos, J.A. Siguaw. Formative versus reflective indicators in organisational measure
development: A comparison and empirical illustration. Br. J. Manage. 17(4)(2006), 263-282.
[21] D. Borsboom, A.O.J. Cramer, R.A. Kievit, A.Z. Scholten, S. Franić. The end of construct validity. In R.
W. Lissitz (Ed.), The concept of validity: Revisions, new directions, and applications (p. 135–170). IAP
Information Age Publishing, 2009.
[22] R.G. Netemeyer, W.O. Bearden, S. Sharma, Scaling procedures: issues and applications, Sage
Publications, Thousand Oaks, Calif, 2003.
[23] J.B. Wilcox, R.D. Howell, E. Breivik, Questions about formative measurement. J. Bus. Res. 61(12)(2008),
1219-1228.
[24] D. Borsboom, G.J. Mellenbergh, J. van Heerden, The theoretical status of latent variables. Psychol.
Rev. 110(2)(2003), 203-219.
[25] D.R. Allen, T. Finlayson, A. Abdul-Quader, A. Lansky. The role of formative research in the National
HIV Behavioral Surveillance System. Public Health Rep. 124(1)(2009), 26-33.
[26] J.R. Macnamara. Research in public relations: A review of the use of evaluation and formative
research. Asia-Pac. Public Relat. J. 1(1992), 2-11.
[27] J.C. Nunnally, I.H. Bernstein. Psychological theory. McGraw-Hill, New York, 1994.
[28] J. Hulland, Use of partial least squares (PLS) in strategic management research: A review of four recent
studies. Strat. Manage. J. 20(2)(1999), 195-204.
[29] J.F. Hair, J.J. Risher, M. Sarstedt, C.M. Ringle. When to use and how to report the results of PLS-SEM.
Eur. Bus. Rev. 31(1)(2019), 2-24.
[30] C. Fornell, D.F. Larcker. Structural equation models with unobservable variables and measurement
error: Algebra and statistics. J. Mark. Res. 18(3)(1981), 382-388.
[31] J. Henseler, C.M. Ringle, M. Sarstedt. A new criterion for assessing discriminant validity in variance-
based structural equation modeling. J. Acad. Market. Sci. 43(1)(2015), 115-135.
[33] S. Petter, D. Straub, A. Rai, Specifying formative constructs in information systems research. MIS
Quart. 31(4)(2007), 623-656.
[35] W.W. Chin. The partial least squares approach to structural equation modeling. Mod. Meth. Bus. Res.
295(2)(1998), 295-336.
[36] G.R. Franke, K.J. Preacher, E.E. Rigdon. Proportional structural effects of formative indicators. J. Bus.
Res. 61(12)(2008), 1229-1237.
[37] N. Roberts, J. Thatcher, Conceptualizing and testing formative constructs: tutorial and annotated
example, SIGMIS Database. 40(2009), 9–39.
[38] K.A. Bollen, A. Diamantopoulos. In defense of causal-formative indicators: A minority report.
Psychol. Meth. 22(3)(2017), 581-596.