Development and Validation of Learning Agility Instrument
Received: August 25, 2023 Revised: October 19, 2023 Accepted: November 20, 2023
*Corresponding Author
KEYWORDS: Cognitive Perspective; Learning Agility; Psychometric; Police Woman

ABSTRACT

This study aimed to develop a learning agility measuring instrument based on De Meuse's (2015) theory with a sample of Government Employees of the Indonesian National Police and members of Polda Metro Jaya (N=60). Developing an Indonesian version of the learning agility measuring tool is important for adapting the construct to the character of Indonesian culture so that it can be used widely and practically by organizational management. The data were collected using an online questionnaire survey distributed via Jotform. Instrument development proceeded from construct operationalization through item development, readability testing by expert judgment, a pilot test to eliminate invalid items, and a field study. Based on the item-discrimination test in the pilot study, the researchers eliminated 35 of the initial 95 items, so 60 items continued to the next testing stage with the large sample. The average per-item reliability value was ≥ 0.9, a figure showing consistent results with the same measuring instrument (test-retest reliability) when testing on the small sample as well as the large sample. Construct validity was assessed using an item-homogeneity technique with the Product Moment correlation, revealing p-values ≤ 0.05 for all items. Additionally, the Pearson correlation values for all items were ≥ 0.279 (r-table) with N=50. Therefore, all items in the second phase of the trial can be deemed valid and suitable for use. Moreover, positive correlations were observed among the dimensions. The factor loading test indicated that the standardized estimate for each dimension exceeded 0.71, confirming that each dimension contributes significantly to the latent construct.
This is an open-access article under the CC–BY-SA license.
Introduction
Executing complicated initiatives requires high-potential workers who are open, willing to learn, and
flexible (Hallenbeck & De Meuse, 2008). According to Charan, Hewitt, and SHRM (2006), a crucial part of talent
management is creating a structured procedure for evaluating and identifying high-potential personnel.
Learning agility is one characteristic that has drawn much attention as a predictor of high potential.
Jurnal Spirits e-ISSN 2622-3236
Vol. 14, No. 1, November 2023, pp. 14-33 p-ISSN 2087-7641
The idea of learning agility answers the question of what personal qualities are required for someone to get
the most from such a developmental experience. All employees have talent, but each person's skill is unique in
type and quantity. Learning agility is a multifaceted idea, similar to the complexity of developmental activities.
This skill area connects to the notion of learning agility. Learning agility is the capacity and willingness to quickly
absorb and apply new information to succeed in novel and challenging leadership settings. Everyone needs to
develop their agility, and it can be enhanced further if we choose to do so. As a result, learning from complex
and demanding work experiences involves more than just having a learning objective (Hallenbeck & De Meuse,
2008).
According to Lombardo and Eichinger (2000), learning agility is the readiness and capacity to absorb
knowledge from experience and then use it to succeed under novel circumstances or for the first time. Over the
past few years, learning agility has gained wide acceptance as a strategy for helping human resource specialists
and organizational executives make talent decisions. There are a few opinions about learning agility, how to
evaluate it, when to utilize it, and how closely it relates to other concepts (De Meuse, 2015).
The authors intend to develop a learning agility assessment tool from theory by drawing on diverse studies
that address the concept of learning agility (De Meuse, 2015). The assessment tool is constructed so that the
measurement outcomes can be applied effectively to identifying, selecting, and developing high-potential talent
and to selecting people for leadership positions, whether executives, managers, or supervisors.
The need for organizations to uncover and develop people who can continually let go of knowledge, views,
and ideas that are out-of-date and learn new ones has led researchers to build learning agility measurement
tools (Ferry, 2011). It is critical to stay current with learning and development trends in the modern world,
where change is the only constant. In order to remain competent in their current roles and ultimately succeed
in an organization, people must continue to develop themselves (Burke, 2018). Individuals must be capable of
growing in their abilities amid a development process that moves at such a rapid pace. A learning agility
instrument is a psychological measurement tool that must gauge an individual's development to determine whether a
person is growing. Learning agility may be one of the abilities that human resource professionals perceive as most
important when identifying qualified candidates for an organization. Therefore, industrial parties and human
resource professionals can use the measurement instrument established here.
When choosing employees, maximizing their potential, and keeping them on board, businesses or
organizations should pay particular attention to their learning agility. Agility in learning can also have a
favorable effect on an organization's performance. According to DeRue et al. (2012), Eichinger and Lombardo
(2004), and Smith (2015), learning agility is linked to improved performance and promotions in addition to
predicting the potential and future outcomes of leaders and members. The correct future leaders, essential to
organizational success, can be found and developed using learning agility (DeRue et al., 2012; Silzer & Church,
2009). The outcomes of developing the learning agility concept can be a valuable resource when hiring talent,
Fitriani et al (Development And Validation Of Learning …..) 15
such as whether to consider internal candidates for management promotions, whom to include in high-
potential leadership programs, or whether to bring in external leaders. The findings from this analysis, however,
offer diagnostic advice for the areas of leadership that still require work for development purposes.
Research on learning agility is continually expanding. Because existing measures are not readily applicable
everywhere in the world, numerous studies have generated new instruments to adapt the characteristics of
learning agility (Gravett & Caldwell, 2016). The learning agility construct from De Meuse's (2015) theory may
be found at https://thetalentx7.com/. However, that instrument has not been adapted for use everywhere, and
little research on it has been carried out in Indonesia. The researchers therefore plan to adapt the learning
agility construct from De Meuse's (2015) theory to Indonesian culture and language. The researchers developed
the learning agility concept because organizations must now shift their focus to discovering and developing
individuals who can continually let go of skills, perspectives, and ideas that are no longer relevant and learn
new ones (Ferry, 2011). The rapid progress of technology likewise places a premium on personal improvement
and learning agility. In today's world, where change is the only constant, keeping up with learning and
development trends is important. People must continue to develop themselves; those who do can remain
competent in their current and eventual positions, while those who do not will struggle to survive in an
organization (Burke, 2018). A learning agility measuring tool was previously developed by Nurnaifah Selvia
Wardhani, Marina Sulastiana, and Rezki Ashriyana in July 2022 based on Lombardo and Eichinger's (2000)
theory, which contains 4 (four) dimensions of learning agility, namely people agility, results agility, mental
agility, and change agility, with employees as subjects, aiming to increase organizational agility. The sample in
that research comprised 235 respondents aged 18 to 45 years who were fresh graduates, job seekers, or
employees of government agencies, State-Owned Enterprises (BUMN), or private institutions. The present
research continues that work by using the more recent theory of De Meuse (2015) with seven dimensions,
whereas the construct developed by the previous researchers still had four dimensions based on the theory of
Lombardo and Eichinger (2000).
De Meuse's (2015) theory was used to produce the learning agility construct, which has 7 (seven)
components: cognitive perspective, interpersonal acumen, change alacrity, drive to excel, self-insight,
environmental mindfulness, and feedback responsiveness. Recent research findings support the significance of
including measures of "environmental mindfulness" and "feedback responsiveness" in a thorough assessment
of learning agility based on De Meuse's (2015) construct (Hülsheger et al., 2013; Sheldon et al., 2014). Table 1
below shows this information.
Table 1
Definition of the Learning Agility Dimension
Dimension Indicator
Cognitive Perspective People can think critically and strategically, approach organizational
challenges from a high-level and comprehensive viewpoint, and
Method
This study aims to create a tool for measuring the learning agility construct from De Meuse's
(2015) theory. This study falls under quantitative research and development (R&D). According to
Azwar (2014), quantitative research emphasizes analysis in the form of numerical data
processed using statistical methods. This form of development research focuses on creating or
validating specific products and evaluating their efficacy (Sugiyono, 2018). This development research
aims to learn more about valid and reliable measuring tools for learning agility. In this study, there
were 50 participants in the large sample trial and 20 people in the pilot study. The National Police
(Government Employees) and the Polda Metro Jaya Police Department in Indonesia are the samples.
The sample was chosen based on the requirements of having at least two years of service and
permanent employment status. Ease of data collection was the basis for the decision to sample from
police facilities, as the research results will be used at the institution studied. The sampling
approach was non-probability sampling with a convenience sampling method. According to Sugiyono
(2018), convenience sampling is a sampling approach based on chance, meaning that workers who
happen to cross paths with the researchers may be used as samples provided the individual is judged
suitable as a data source. The data were gathered using Jotform to distribute an online questionnaire
survey.
Table 2
Demography

Characteristic   Data     n    %
Gender           Male     32   62.7%
                 Female   18   35.3%
Stage 2, the researchers conducted an integrated examination of the literature on learning agility. The
literature review was part of an integrated methodology that enables the generation of perspectives on
novel conceptual models. Integrated literature research is a fundamental technique for reasoning and
explanation through data analysis; the theory and relevant information concerning current phenomena
are then clarified. No matter how extensive the prior research, an integrated literature review can offer a
fresh perspective or way of thinking.
Stage 3, choosing the sample to be measured: it is necessary to identify the subjects to be
measured before creating a questionnaire. The participants in this study are already-employed
workers.
Stage 4, involves identifying the components of the Learning Agility construct from De Meuse's
(2015) theory, which includes seven indicators: cognitive perspective, Interpersonal Acumen,
change alacrity, self-insight, drive to excel, environmental mindfulness, and feedback
responsiveness.
Stage 5, Gathering Items. The questionnaire items are prepared and divided into favorable and
unfavorable statements. Favorable items operationally define behavior that supports the subject's
behavioral attributes, whereas unfavorable items conflict with, or do not support, the behavioral
attributes targeted by the indicators.
Stage 6, Creating a score or rating for each question item: a Likert scale will be used to process the
questionnaire data, specifically to weight the assessment of each item. The variables being measured
are translated into indicators.
Stage 7, After creating the items in the learning agility construct, the researchers conducted a
readability test with 20 students from Esa Unggul University's Faculty of Psychology. An expert panel of
three psychologists familiar with the development of psychological assessment methods and with the
concept of learning agility was used for content validity. Twenty of the 95 items created were revised.
The 60 retained items were then entered into the learning agility questionnaire, which is available as an
online survey through the Jotform platform.
Stage 8, Conducting a Survey/Pilot Study. Following Arikunto (2013), this study is required to
determine whether the study's questionnaire is practicable. In order to gather data for this study, a
sample of 20 respondents was used.
Stage 9, Item Validity and Reliability Test. A reliability test concerns the consistency of a measure or
measuring tool. Reliability can mean that measurements made with the same measuring tool yield the
same results (test-retest) or, for more subjective measurements, that two raters produce results
reasonably close to each other (inter-rater reliability). Additionally, a validity test is performed to
demonstrate how well the measurement tool measures what it is intended to measure. A test is said to
have high validity if it fulfills its intended measurement purpose or produces precise and accurate
measurement results.
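The internal-consistency coefficient reported later in the paper (Cronbach's Alpha) can be illustrated with a short computation. This is a minimal sketch on made-up Likert responses, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 respondents answering 4 Likert items (1-5)
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])
print(round(cronbach_alpha(scores), 3))  # prints 0.921
```

Alpha approaches 1 as items covary strongly relative to their individual variances, which is why a set of items that all correlate well with the total score yields high coefficients.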
Stage 10, Establishing Measurement Instrument Norms: to interpret the quantitative data obtained
in qualitative terms, the researcher creates measuring instrument norms from the learning agility
construct after obtaining valid and reliable items.
Result
The statistical information from the questionnaire results came from 50 participants and was
processed using the SPSS 25 program. Testing of the data included the following:
Normality Test
The normality test in this study used the Kolmogorov-Smirnov method (K-S test) to determine
whether the data tested are normally distributed.
Table 3
Normality Data

Test of Normality (Learning Agility)
N                                            50
Normal Parameters         Mean               277.52
                          Std. Deviation     29.519
Most Extreme Differences  Absolute           0.120
                          Positive           0.120
                          Negative           -0.066
Test Statistic                               0.120
Asymp. Sig. (2-tailed)                       0.069
From the results of the normality test, the Asymp. Sig. (2-tailed) value is 0.069 ≥ 0.05, so it can be
concluded that the data are normally distributed.
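The K-S procedure behind Table 3 can be sketched as follows. The scores here are simulated with the reported mean and SD (not the study's raw data), and plain `scipy.stats.kstest` omits the Lilliefors correction SPSS applies, so its p-value differs somewhat from SPSS output:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated stand-in for the 50 learning agility total scores
# (mean and SD chosen to mirror Table 3: M = 277.52, SD = 29.519)
scores = rng.normal(loc=277.52, scale=29.519, size=50)

# One-sample K-S test against a normal distribution parameterized by the
# sample's own mean and standard deviation
d_stat, p_value = stats.kstest(scores, "norm",
                               args=(scores.mean(), scores.std(ddof=1)))
print(f"D = {d_stat:.3f}, p = {p_value:.3f}")
```

A p-value ≥ 0.05 here, as in Table 3, is read as no evidence against normality.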
Homogeneity Test
The homogeneity test uses the One-Way ANOVA test with a 5% significance level, examining the
significance value.
Table 4
Homogeneity Data

Test of Homogeneity of Variances (Learning Agility)
                                       Levene Statistic   df1   df2      Sig.
Based on Mean                          6.970              1     48       0.011
Based on Median                        7.078              1     48       0.011
Based on Median and with adjusted df   7.078              1     39.741   0.011
Based on trimmed mean                  6.987              1     48       0.011
From the results of the homogeneity test, the significance value is 0.011 ≤ 0.05, which indicates
that the assumption of homogeneous variances is not met.
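The four rows of Table 4 are Levene's test computed with different centering choices; scipy's `center` argument mirrors SPSS's "Based on Mean / Median / trimmed mean" variants. A sketch with hypothetical groups of the same sizes as the gender split in Table 2:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical groups (sizes matching Table 2's gender split) with
# deliberately unequal spreads, echoing the non-homogeneous result
group_a = rng.normal(270, 20, size=32)
group_b = rng.normal(285, 45, size=18)

for center in ("mean", "median", "trimmed"):
    w, p = stats.levene(group_a, group_b, center=center)
    print(f"{center:>8}: W = {w:.3f}, p = {p:.3f}")
```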
Table 5
ANOVA Data

ANOVA (Learning Agility)
                 Sum of Squares   df   Mean Square   F       Sig.
Between Groups   14522.605        1    1452.605      1.690   0.200
Within Groups    412455.875       48   859.289
Total            426998.480       49
From the results of the ANOVA test, the significance value is 0.200 ≥ 0.05, so the group means do
not differ significantly.
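The one-way ANOVA in Table 5 compares mean learning agility scores across groups. A minimal sketch with hypothetical groups of the same sizes, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical groups with equal population means, echoing the
# non-significant result in Table 5
group_a = rng.normal(275, 30, size=32)
group_b = rng.normal(275, 30, size=18)

f_stat, p_value = stats.f_oneway(group_a, group_b)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```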
Item Discriminating
Following the completion of the precondition tests, the normality and homogeneity
tests—another item test—were conducted. The item differential power test aims to choose
items whose measuring function matches the test's (Azwar, 2012). Test items will be eliminate
if they are deemed low or poor quality. The researcher used a sample of 20 participants in a
pilot study to examine the items' differential power. The primary goal of the pilot study was to
evaluate the questionnaire's value as a survey tool for researchers and respondents (Hartono,
2010).
Testing the discriminating power of the items in the learning agility construct is carried out by
correlating the score of each item with the total score using the Pearson Product Moment correlation
technique. According to Azwar (2012), the criterion for selecting items based on item-total correlations
is rix ≥ 0.30.
From the results of the Pearson Product-Moment correlation test, several items have rix < 0.30,
namely items 9, 18, 22, 23, 24, 27, 28, 29, 33, 39, 43, 44, 46, 47, 48, 52, 60, 64, 66, 67, 69, 72, 73, 75,
76, 77, 78, 81, 83, 85, 88, 90, 91, 93, and 94. These items were no longer used in subsequent testing.
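The selection rule above (drop items with rix < 0.30) can be sketched as follows. The data are fabricated for illustration; this version uses the corrected item-total correlation, which excludes the item from the total so the item's own variance does not inflate the coefficient:

```python
import numpy as np
from scipy import stats

def select_items(responses, cutoff=0.30):
    """Split item indices into keep/drop by corrected item-total correlation."""
    responses = np.asarray(responses, dtype=float)
    total = responses.sum(axis=1)
    keep, drop = [], []
    for j in range(responses.shape[1]):
        # correlate the item with the total score *excluding* the item itself
        r, _ = stats.pearsonr(responses[:, j], total - responses[:, j])
        (keep if r >= cutoff else drop).append(j)
    return keep, drop

# Fabricated matrix: 30 respondents x 4 items; the last item is pure noise
rng = np.random.default_rng(3)
ability = rng.normal(0, 1, size=30)
good = np.clip(np.round(3 + ability[:, None] + rng.normal(0, 0.5, (30, 3))), 1, 5)
noise = rng.integers(1, 6, size=(30, 1)).astype(float)
keep, drop = select_items(np.hstack([good, noise]))
print("kept:", keep, "dropped:", drop)
```

Items driven by the common trait clear the 0.30 cut-off easily, while an item unrelated to the trait tends to fall below it.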
Kaiser-Meyer-Olkin (KMO) and Bartlett's Test
The KMO test and Bartlett's test are factor-analysis tests used to identify the structure of the
correlations among the questions (indicators), so that relationships with latent dimensions can be
identified without knowing the theoretical basis behind them. The cut-off value of the Kaiser-Meyer-
Olkin Measure of Sampling Adequacy is ≥ 0.5.
Table 6
Kaiser-Meyer-Olkin Measure of Sampling Adequacy Data

Kaiser-Meyer-Olkin Measure of Sampling Adequacy           0.815
Bartlett's Test of Sphericity   Approx. Chi-Square        235.569
                                df                        21
                                Sig.                      0.000
From the KMO MSA test, a value of 0.815 ≥ 0.5 was obtained, and Bartlett's Test of Sphericity was
significant (Sig. 0.000 ≤ 0.05), so factor analysis can be performed to assess the validity of the
measurements in this study.
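Both statistics can be computed directly from the correlation matrix. This sketch (on simulated one-factor data, not the study's) uses the standard formulas: Bartlett's chi-square is -(n - 1 - (2p + 5)/6)·ln|R|, and KMO compares squared correlations with squared partial correlations:

```python
import numpy as np
from scipy import stats

def kmo_and_bartlett(data):
    """Overall KMO measure and Bartlett's sphericity test for a data matrix."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)

    # Bartlett: chi2 = -(n - 1 - (2p + 5)/6) * ln|R|, df = p(p - 1)/2
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) / 2
    p_value = stats.chi2.sf(chi2, df)

    # KMO: partial correlations come from the inverse correlation matrix
    inv = np.linalg.inv(corr)
    partial = -inv / np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    off = ~np.eye(p, dtype=bool)
    kmo = (corr[off] ** 2).sum() / ((corr[off] ** 2).sum()
                                    + (partial[off] ** 2).sum())
    return kmo, chi2, p_value

# Simulated data: 50 respondents, 7 dimension scores driven by one factor
rng = np.random.default_rng(4)
factor = rng.normal(0, 1, size=(50, 1))
data = factor + rng.normal(0, 0.6, size=(50, 7))
kmo, chi2, p = kmo_and_bartlett(data)
print(f"KMO = {kmo:.3f}, chi2 = {chi2:.1f}, p = {p:.4f}")
```

When the indicators share substantial common variance, KMO exceeds the 0.5 cut-off and Bartlett's test rejects sphericity, licensing factor analysis as in the study.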
After the KMO and Bartlett's tests were carried out, the researcher continued with the item
validity test, which aims to determine whether the data obtained from the research are valid. Validity
indicates the degree of accuracy between the data that occur in the object and the data collected by the
researcher. The researcher carried out the validity testing process twice, on the small sample and on
the large sample, in order to compare the measurement results and their degree of accuracy across the
two samples.
The scale's items are deemed invalid if the probability (Sig.) associated with the item's Pearson
product-moment correlation with the total score is greater than 0.05, and valid if that probability is at
most 0.05.
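The r-table cut-off of 0.279 quoted for N = 50 is not arbitrary: it is the critical value of Pearson's r at α = .05 (two-tailed), recoverable from the t distribution:

```python
from scipy import stats

# Critical r for N = 50 at alpha = .05 (two-tailed), via the t distribution
n = 50
t_crit = stats.t.ppf(1 - 0.025, df=n - 2)          # critical t for df = 48
r_table = t_crit / (t_crit ** 2 + (n - 2)) ** 0.5  # convert t back to r
print(round(r_table, 3))  # prints 0.279
```

Any observed item-total correlation at or above this value is significant at the .05 level for this sample size, which is why the r ≥ 0.279 and p ≤ 0.05 criteria coincide.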
Table 9
Pearson Product Moment between Item and dimension (Pilot Test)
Cognitive Perspective r p Interpretation
Item 1 0.607 0.005 Valid
Item 2 0.681 0.001 Valid
Item 3 0.584 0.007 Valid
Item 4 0.688 0.001 Valid
Item 5 0.597 0.005 Valid
Item 6 0.627 0.003 Valid
Item 7 0.799 0.000 Valid
Item 8* 0.429 0.059 Not valid
Item 10 0.690 0.001 Valid
Item 11 0.504 0.023 Valid
Item 12 0.808 0.000 Valid
Item 13 0.664 0.001 Valid
Item 14 0.715 0.000 Valid
Item 15 0.675 0.001 Valid
Interpersonal Acumen r p Interpretation
Item 16 0.733 0.000 Valid
Item 17 0.824 0.000 Valid
Item 19 0.733 0.000 Valid
Item 20* 0.359 0.120 Not valid
Item 21 0.499 0.025 Valid
Item 25 0.688 0.001 Valid
Item 26 0.735 0.000 Valid
Item 30* 0.362 0.117 Not valid
Change Alacrity r p Interpretation
Item 31 0.696 0.001 Valid
Item 32 0.648 0.002 Valid
Table 10
Pearson Product Moment between Item and dimension (field test)
Cognitive Perspective r p Interpretation
Item 1 0.740 0.000 Item valid
Item 2 0.809 0.000 Item valid
Item 3 0.738 0.000 Item valid
Item 4 0.746 0.000 Item valid
Item 5 0.669 0.000 Item valid
Item 6 0.723 0.000 Item valid
Item 7 0.755 0.000 Item valid
Item 8 0.622 0.000 Item valid
Item 10 0.678 0.000 Item valid
Item 11 0.692 0.000 Item valid
Item 12 0.808 0.000 Item valid
Item 13 0.772 0.000 Item valid
Item 14 0.743 0.000 Item valid
Item 15 0.732 0.000 Item valid
Interpersonal Acumen r p Interpretation
Item 16 0.743 0.000 Item valid
Item 17 0.861 0.000 Item valid
Item 19 0.847 0.000 Item valid
Item 20 0.649 0.000 Item valid
Item 21 0.759 0.000 Item valid
Item 25 0.806 0.000 Item valid
Item 26 0.819 0.000 Item valid
Item 30 0.473 0.001 Item valid
Change Alacrity r p Interpretation
Item 31 0.782 0.000 Item valid
Item 32 0.794 0.000 Item valid
Item 34 0.822 0.000 Item valid
Item 35 0.696 0.000 Item valid
Item 36 0.844 0.000 Item valid
Item 37 0.545 0.000 Item valid
Item 38 0.734 0.000 Item valid
Item 40 0.776 0.000 Item valid
Item 41 0.750 0.000 Item valid
Item 42 0.796 0.000 Item valid
Item 45 0.447 0.001 Item valid
Table 11
Pearson Product Moment between dimension and total score (Pilot Test)
Dimension r p Interpretation
Cognitive Perspective 0.938 0.000 Positive correlation
Interpersonal Acumen 0.750 0.000 Positive correlation
Change Alacrity 0.740 0.000 Positive correlation
Drive to Excel 0.905 0.000 Positive correlation
Table 12
Pearson Product Moment between dimension and total score (Field Test)
Dimension r p Interpretation
Cognitive Perspective 0.874 0.000 Positive correlation
Interpersonal Acumen 0.779 0.000 Positive correlation
Change Alacrity 0.842 0.000 Positive correlation
Drive to Excel 0.889 0.000 Positive correlation
Self-Insight 0.632 0.000 Positive correlation
Environmental Mindfulness 0.796 0.000 Positive correlation
Feedback Responsiveness 0.720 0.000 Positive correlation
Using the findings of the Product-Moment validity testing, each item's correlation with its
dimension and each dimension's correlation with the total score were calculated. The validity results
of the small sample and the large sample differ. From the small-sample testing, the researcher
obtained several items with probability values > 0.05, namely items 8, 20, 30, 50, 55, 59, 68, and 79;
these items were classified as invalid. Additionally, the Self-Insight dimension correlated negatively
with the learning agility construct. However, after the items were revised and validity was tested again
on the large sample, all items' Product-Moment validity values had Sig. (2-tailed) ≤ 0.05. Moreover, the
Pearson correlation values of all items were ≥ 0.279 (r-table) with N=50. All items can therefore be
said to be valid and feasible to use, and the dimensions are positively correlated and can be included in
the learning agility construct.
After item validity was established through the Pearson product-moment correlations, 60 items
were feasible and valid. The researcher then conducted a First-Order Confirmatory Factor Analysis
(CFA) to identify the correct numerical model and explain the relationship between the set of items
and the construct they measure. The items become indicators of the construct, assessed through low
measurement-error values and high factor loadings. According to Hu and Bentler (1999), a good
structural model should be assessed with: X² and its relationship with df, one absolute fit index (e.g.,
GFI, RMSEA, or SRMR), one incremental fit index (e.g., CFI or TLI), one goodness-of-fit index (e.g., GFI),
and one badness-of-fit index (e.g., RMSEA or SRMR).
Figure 1 shows the correlation of the estimated numbers from the loading factor of the learning
agility construct with its dimensions.
Table 13
Model fit result

Variable           X²       df   p         NNFI    CFI     RMSEA
Learning agility   49.625   14   ≤ 0.001   0.808   0.872   0.226
From the goodness-of-fit measurement results in Table 13, most of the goodness-of-fit criteria do
not meet the cut-off values. This situation follows the opinion of Ghozali (2006), who states that when
some goodness-of-fit parameters do not meet the requirements, the other parameters can be
examined; if most of the parameters meet the requirements, the model can be stated to satisfy the
goodness-of-fit assumption.
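As a sanity check on Table 13, the RMSEA can be approximately recovered from the reported χ² and df with the standard formula, assuming N = 50; the small gap from the reported 0.226 presumably reflects the software's exact estimator:

```python
import math

chi2, df, n = 49.625, 14, 50  # chi-square and df from Table 13; N assumed 50
# RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1)))
rmsea = math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))
print(round(rmsea, 3))  # prints 0.228, close to the reported 0.226
```

Against Hu and Bentler's (1999) conventions (roughly CFI/TLI ≥ 0.95 and RMSEA ≤ 0.06), both the reported and recomputed values support the observation above that most fit criteria are not met.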
Table 14
Factor loading result

Factor                  Indicator   Estimate   p         Std. Est. (all)
Cognitive Perspective               6.315      ≤ 0.001   0.812
The results of the factor loading test show that the standardized estimates for each dimension are
≥ 0.71. According to Tabachnick and Fidell (2007), a factor loading ≥ 0.71 indicates an excellent result
for a dimension.
Measurement Norm
At the measurement level, the learning agility construct generates numerical scores. For
interpretation, however, it can only generate categories or groups of scores at the ordinal level. The
interpretation of psychological scale scores is normative: a score is compared against a theoretical
population mean, allowing qualitative interpretation of quantitative measurement results. The norms
employed in this study are within-group norms, broken down into three categories: high, medium,
and low. The components are the range, the hypothetical mean, the hypothetical standard deviation,
and the hypothetical maximum value. The norms were created using Azwar's (2012) formula.
The normal curve is used to categorize the research's norm. The learning agility construct
consists of 60 items, each with a response option of 5 on a Likert scale.
Figure 2. Measurement Norm of Learning Agility (a normal curve divided into Low, Medium, and High regions)
Table 15
Norm Categorization of the Learning Agility Measuring Tool
Low : 0 – 140
Medium : 140 – 220
High : 220 – 300
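The cut-offs in Table 15 follow directly from Azwar's (2012) hypothetical-norm formulas for a 60-item, five-option scale: hypothetical mean = (min + max)/2 and hypothetical SD = range/6, with category boundaries one SD either side of the mean. A minimal sketch:

```python
# Azwar-style hypothetical norms for a 60-item scale scored 1-5 per item
n_items, low_opt, high_opt = 60, 1, 5

x_min = n_items * low_opt     # lowest attainable total score: 60
x_max = n_items * high_opt    # highest attainable total score: 300
mean_h = (x_min + x_max) / 2  # hypothetical mean: 180
sd_h = (x_max - x_min) / 6    # hypothetical SD: 40

low_cut = mean_h - sd_h       # 140
high_cut = mean_h + sd_h      # 220
print(f"Low: X < {low_cut:.0f}")
print(f"Medium: {low_cut:.0f} <= X <= {high_cut:.0f}")
print(f"High: X > {high_cut:.0f}")
```

Note that the lowest attainable total is 60 rather than the 0 shown in Table 15, since each of the 60 items scores at least 1.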
Discussion
The data in psychological measurement tools are valuable as the foundation for predictions or as
part of the fundamental considerations for decisions about the subject through interpretation and
diagnosis (Azwar, 2022). Psychological measuring tools, which contain the dimensions and indicators
of developing individuals, can be used as the basis for interpreting the dimensional findings.
Based on the overall results of the psychometric property testing, the learning agility dimensions
constructed from the items in this research are proven valid and reliable: all dimensions are positively
correlated, with factor loading values > 0.71. It can therefore be concluded that the learning agility
measuring tool can be used for practical purposes such as management decision-making, and for
research needs with similar themes.
Conclusion
Small and large samples were used to construct the measuring instrument from the learning
agility construct. Based on the results of the item discrimination test, the researcher eliminated 35 of
the original 95 items, bringing the number of items that could be tested further to 60. The reliability
values showed that each item had a value of at least 0.9, with a mean of 0.946 in the small sample; this
places the items in the very high correlation category. The researcher revised the items for the large
sample, expecting the revised items and the repeated reliability testing to show consistent numbers
with the same measuring instrument across the small and large samples. After revising the items for
the large sample, the researcher conducted another reliability test, with Cronbach's Alpha increasing
to 0.968. The conclusion is that the items tested are reliable and can be used in data collection.
The researcher next performed a validity test using the findings of the reliability test, conducting
the validity testing on both the small sample and the large sample. The validity testing involved
correlating each item with its dimension score and each dimension with the total score. The validity
results of the small sample and the large sample differ: from the small-sample testing, the researcher
obtained several items with probability values > 0.05, namely items 8, 20, 30, 50, 55, 59, 68, and 79,
so those items were invalid. Furthermore, the Self-Insight dimension correlated negatively with the
learning agility construct. However, after item revision and repeated validity testing with the large
sample, all items' Product-Moment validity values had Sig. (2-tailed) ≤ 0.05. Moreover, the Pearson
correlation values of all items were ≥ 0.279 (r-table) with N=50. All items can therefore be said to be
valid and feasible to use, and the dimensions are positively correlated and can be included in the
learning agility construct.
The researcher then ran a First-Order Confirmatory Factor Analysis (CFA) with a goodness-of-fit
test, using the results of the item validity test that had been acquired. The outcomes revealed that,
while the fit indices came close to the established cut-offs, they still need improvement. However,
other indicators, particularly the factor loading values, included some excellent indexes. According to
the CFA analysis findings, the conclusion is that the measuring tool developed from De Meuse's (2015)
theory complies with basic psychometric principles.
Suggestion
Based on the validity and reliability testing results, we found that the items were valid and
reliable. However, this research has limitations: the sample is still relatively small and came only from
police officers with a minimum of two years of work experience. We therefore suggest that future
research increase the sample size and then conduct convergent and divergent validity tests,
considering that the validity testing in this study only examined item homogeneity.
Reference
Arikunto, S. (2013). Prosedur Penelitian Suatu Pendekatan Praktik. Jakarta: Rineka Cipta.
Azwar, S. (2012). Reliabilitas dan Validitas. Yogyakarta: Pustaka Pelajar.
Azwar, S. (2014). Metode Penelitian. Yogyakarta: Pustaka Pelajar.
Azwar, S. (2022). Penyusunan Skala Psikologi (edisi 2). Yogyakarta: Pustaka Pelajar.
Burke, W. (2018). Technical Report: Burke Learning Agility Inventory™ v3.3. EASI Consult.
Charan, R. (2005). Ending the CEO succession crisis. Harvard Business Review, 83(2), 72-81.
De Meuse, K. P., Dai, G., Hallenbeck, G. S., & Tang, K. Y. (2008). Using learning agility to identify high potentials
around the world. Korn/Ferry Institute, 1-22.
De Meuse, K. P., & Feng, S. (2015). The Development and Validation of the TALENTx7 Assessment: A Psychological
Measure of Learning Agility. Shanghai, China: Leader’s Gene Consulting.
DeRue, D. S., Ashford, S. J., & Myers, C. G. (2012). Learning agility: In search of conceptual clarity and theoretical
grounding. Industrial and Organizational Psychology: Perspectives on Science and Practice, 5, 258-
279.
Eichinger, R. W., & Lombardo, M. M. (2004). Learning agility as a prime indicator of potential. Human Resource
Planning, 27, 12–15.
Ferry, K. (2011). Learning Agility Self-Assessment. The Art Science of Talent,2(3), 1–3.
Ghozali, I. (2006). Aplikasi Analisis Multivariate dengan Program SPSS. Semarang: Badan Penerbit UNDIP.
Gravett, L. S., & Caldwell, S. A. (2016). Learning Agility: The Impact on Recruitment and Retention. In Learning
Agility: The Impact on Recruitment and Retention.
Hartono, Jogiyanto. (2010). Metodologi Penelitian Bisnis: Salah Kaprah dan Pengalaman-Pengalaman. Edisi
Pertama. BPFE. Yogyakarta.
Hewitt Associates. (2005). The top companies for leaders. The Journal of the Human Resource Planning Society,
28(3), 18-23.
Hu, L.-t., & Bentler, P. M. (1999). Cut-off criteria for fit indexes in covariance structure analysis: Conventional
criteria versus new alternatives. Structural Equation Modeling, 6(1), 1-55.
Hülsheger, U. R., Alberts, H. J. E. M., Feinholdt, A., & Lang, J. W. B. (2013). Benefits of mindfulness at work: The
role of mindfulness in emotion regulation, emotional exhaustion, and job satisfaction. Journal of
Applied Psychology, 98, 310–325.
Lombardo, M. M., & Eichinger, R. W. (2000). High potentials as high learners. Journal Human Resource
Management, 39(4), 321-330.
Sheldon, O. J., Dunning, D., & Ames, D. R. (2014). Emotionally unskilled, unaware, and uninterested in learning
more: reactions to feedback about deficits in emotional intelligence. Journal of Applied Psychology, 99,
125–137.
Smith, B. C. (2015). How does learning agile business leadership differ? Exploring a revised model of the
construct of learning agility in relation to executive performance. In Columbia University. Columbia
University.
SHRM. (2006). Succession Planning: Survey Report. Alexandria, Virginia: Society for Human Resource
Management.
Silzer, R., & Church, A. H. (2009). The pearls and perils of identifying potential. Industrial and organizational
psychology: Perspectives on Science and Practice, 2, 377–412.
Sugiyono. (2018). Metode Penelitian Kuantitatif. Bandung: Alfabeta
Tabachnick, B. G., & Fidell, L. S. (2007). Using Multivariate Statistics (5th ed.). New York: Allyn and Bacon.