Social Research Instruments

This document presents an introduction to the concepts and instruments of social research. It defines instruments as tools to collect information and differentiates between techniques, which are procedures, and instruments, which are tangible resources. It explains the common components of instruments and classifies questions according to their form, content, and levels of measurement.

UNIVERSITY OF CARABOBO

FACULTY OF ECONOMICS AND SOCIAL SCIENCES


SCHOOL OF INDUSTRIAL RELATIONS
LA MORITA CAMPUS

Instruments of
Social Research

Prof.: Laura Maldonado

Members:


Cesar García CI 15,601,099
María José Zavala Orozco CI 25,708,361

March, 2020
Research instruments: Concepts.
A wide variety of authors on research methodology define or conceptualize
what research instruments are within the social sciences.
To begin, there is the definition given by Duarte and Parra (2014: 92), who
indicate that a research instrument is "the tool used by the researcher to
collect information from the selected sample and be able to solve the research
problem."
Likewise, Rodríguez (2008: 59) points out that instruments are “devices, tools
or apparatus that the researcher uses to record the information obtained through a
technique.”
For his part, Ávila (2000: 65) states that the instruments of a research study
"are tools that allow us to carefully observe the phenomenon, fact or case, take
and record information about it, and then analyze it."
Based on these authors, it can be concluded that research instruments are
the resources a researcher uses to collect information about the object of study,
providing the input for its analysis and the subsequent solution of the research
problem.

Differences between research techniques and instruments.

For Duarte and Parra (2014), the technique is the set of rules that allow
something to be done well, while the instrument is the means that allows the
information to be recorded.
For Ávila (2000), the technique is the procedure used to achieve a certain
objective, while the instrument is the tool (resource), the "with what," used to
achieve it.
Likewise, for Hurtado (2012), the method used in the research influences
both the technique and the instrument.
For this author, the technique is a resource that the researcher has to
approach the facts and access their knowledge. These techniques are supported
by instruments that can be very varied, such as: the notebook for recording
observations and facts, the field diary, questionnaires, interview scripts, among
others.

As said previously, each method or paradigm calls for different techniques
and instruments.
It can be said that the main difference between a technique and an
instrument is that the technique is not tangible, since it is a resource the
researcher has, while the instruments are tangible elements with which data
collection is evidenced; that is, the instruments materialize and record the
information. Below is a summary table of some techniques and instruments.

Taken from Hurtado (2012: 57)


Parts and/or components of research instruments
Hurtado (2012) points out that, whatever the research instrument, it must
have an organization, which she summarizes in three parts:
Cover: a kind of header where the identification of the place or research
center to which the researcher belongs and the logo of the institution can be
placed. Sometimes the instrument's own logo or a symbol is added to identify it.
Presentation: where the researcher states the purpose of the instrument to be
applied. It is mainly used in questionnaires.
Instructions: where it is explained how to use the instrument.

Types of questions and their classification

Rodríguez (2008) states that a question is any form of expression that
involves inquiring into something. A question is also called an item (in the
Spanish-language literature, a "reactivo"). It is a statement or sentence written
in interrogative or affirmative form that constitutes the fundamental body of the
instrument to be constructed.
The author indicates that when these questions are intended for the
instrument called a questionnaire, they can be classified as follows:
A) Sociodemographic questions (sex, age, religion, profession, among others).
B) Content questions, referring to the variables under study.

For their part, authors such as Bisquerra (2012) and Murillo (2004) indicate
that the questions are classified as follows:

According to the form of response


 Unstructured or open questions: open-response questions in which
respondents answer in their own words. They are useful for exploratory
research and as opening questions in a questionnaire. Example: Which air
transport company do you consider provides the best service? For analysis,
after obtaining the answers, the researcher lists the names of the airlines
mentioned and assigns each a code (score).
 Structured or closed questions: questions that present a group of
pre-established response alternatives. These can be:
a) Multiple choice questions: those in which a series of answers is offered
and the participant is asked to select one or more of the alternatives. Example:
Which clothing items have you purchased most often in the last six months at the
CLOTHES store?
a) Women's clothing _____
b) Men's clothing _____
c) Children's clothing _____
d) Cosmetics _____
e) Jewelry _____
f) Shoes _____
g) Others (please specify) _______________

b) Dichotomous questions: items that provide only two (2) response
alternatives, such as true-false, yes-no, agree-disagree, present-absent,
among others. Example: I have attended a dental consultation in the last three
months:
Yes ____ No ____
This type of question can frequently be presented as a multiple choice
question if it is complemented with a neutral alternative such as: no opinion, I
don't know, both or neither, does not apply, etc.
c) Scale questions: questions whose answers are given through a
pre-established scale, whether developed by the researcher, a Likert scale, or
another. Example: You intend to take a trip to Europe in the next six months,
answered on a scale ranging from "totally agree" to "totally disagree."
According to their content:
 Action: They deal with the actions of the interviewees. Ex. Do
you go to the movies? Do you smoke?
 Intention : They inquire about the intentions of the
respondents. Ex. Are you going to vote?
 Opinion: They ask for the respondent's opinion on certain
topics. Ex. What do you think about…?
 Information: They analyze the degree of knowledge of the
respondents on certain topics.
 Reasons : They try to know the reason for certain opinions or
actions.

For authors such as Ávila (2000), Bisquerra (2012), and Duarte and Parra
(2014), these types of questions offer several advantages.

Measurement levels: concept and examples

Ruiz (1998) indicates that measurement means "assigning numbers to
objects and events according to rules." However, in the field of the social
sciences, measurement is "the process of linking abstract concepts with
empirical indicators."
The measurement of the variables can be carried out through four
measurement scales. Two of the scales measure categorical variables and the
other two measure numerical variables.
The scales are: nominal, ordinal, interval and ratio. They help classify
variables, design questions, measure variables, and even indicate the type of
statistical analysis appropriate for the data.
a) Nominal Measurement.
At this level of measurement, distinctive categories are established that do
not imply a specific order. For example, if the unit of analysis is a group of
people, the category sex can be established with two levels, male (M) and
female (F); respondents only have to indicate theirs, and there is no real order
between the levels. Thus, if numbers are assigned to these levels they serve
only for identification and can be assigned indistinctly: 1=M, 2=F, or the
numbers can be inverted without affecting the measurement: 1=F and 2=M.
In summary, on the nominal scale, numbers are assigned to events for the
purpose of identifying them. There is no quantitative reference.

b) Ordinal Measurement.
Categories are established with two or more levels that imply an inherent
order among themselves. The ordinal measurement scale is quantitative because
it allows events to be ordered according to the greater or lesser possession of an
attribute or characteristic. For example, in basic-level schools, students are
usually lined up by height: an order is established, but it provides no
measurements of the subjects. Classifying a group of people by the social class
to which they belong implies a prescribed order that goes from highest to
lowest. These scales support the assignment of numbers based on a prescribed
order.
The most common forms of ordinal variables are attitudinal items that
establish a series of levels expressing agreement or disagreement with respect
to some referent. For example, when faced with the item "The Venezuelan
economy should be dollarized," the respondent can mark his or her response
according to the following alternatives:

___ Totally agree
___ Agree
___ Indifferent
___ Disagree
___ Totally disagree

The previous response alternatives can be coded with numbers ranging from
one to five that suggest a pre-established order but do not imply a distance
between one number and another. Attitude scales are ordinal scales.
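As a minimal sketch of this coding step, the mapping below uses the agreement labels from the example above; the dictionary and function names are illustrative, not drawn from the cited authors:

```python
# Illustrative sketch: coding Likert-type ordinal responses with the
# pre-established codes 1-5. Names and data are hypothetical.
LIKERT_CODES = {
    "Totally agree": 5,
    "Agree": 4,
    "Indifferent": 3,
    "Disagree": 2,
    "Totally disagree": 1,
}

def code_responses(responses):
    """Map each verbal response to its pre-established ordinal code."""
    return [LIKERT_CODES[r] for r in responses]

answers = ["Agree", "Totally disagree", "Indifferent"]
print(code_responses(answers))  # -> [4, 1, 3]
```

Consistent with the ordinal level, the codes convey order but say nothing about the distance between adjacent alternatives.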

c) Interval Measurement.
Interval measurement has the characteristics of nominal and ordinal
measurement and, in addition, establishes the distance between one
measurement and another. The interval scale applies to continuous variables but
lacks an absolute zero point. The most representative example of this type of
measurement is the thermometer: when it registers zero degrees Celsius it
indicates the freezing point of water, and when it registers 100 degrees Celsius
it indicates the boiling point. The zero point is arbitrary, not real, which means
that at this point there is no absence of temperature.

d) Ratio Measurement.
A ratio measurement scale includes the characteristics of the three previous
levels of measurement (nominal, ordinal, and interval) and determines the exact
distance between the intervals of a category. Additionally, it has an absolute
zero point; that is, at the zero point the characteristic or attribute being
measured is absent. Variables such as income, age, and number of children are
examples of this type of scale. The ratio level of measurement applies to both
continuous and discrete variables.
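To make the four levels concrete, the following sketch illustrates each scale with small invented examples (all variable names and values are hypothetical):

```python
# Illustrative examples of the four measurement scales described above.

# Nominal: numbers are mere labels; swapping them changes nothing.
sex_coding_a = {"M": 1, "F": 2}
sex_coding_b = {"M": 2, "F": 1}           # an equally valid labeling

# Ordinal: order matters, distances do not.
social_class = ["low", "middle", "high"]  # inherent order, no distance

# Interval: equal distances, but the zero point is arbitrary.
celsius = [0, 50, 100]
fahrenheit = [c * 9 / 5 + 32 for c in celsius]  # 0 °C becomes 32 °F:
                                                # zero is not "no temperature"

# Ratio: an absolute zero exists, so ratios are meaningful.
incomes = [0, 500, 1000]                  # 1000 is genuinely twice 500
print(fahrenheit)  # -> [32.0, 122.0, 212.0]
```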
Aspects to consider when writing questions for a research instrument
There are various criteria for writing the questions that make up an
instrument. Drawing together several authors (Ávila, 2000; Murillo, 2004;
Palella and Martins, 2003), the rules to follow include:

 Clearly define the topic being addressed.
 Use common words appropriate to the vocabulary level of the
participants; avoid technical terms.
 Avoid questions that make the respondent uncomfortable.
 It is not advisable to make generalizations; the items must be specific
and should not force the participant to calculate estimates.
 The questions, especially those that measure attitudes and lifestyles,
are written as statements with which participants indicate their degree
of agreement or disagreement.
 Use positive (affirmative) and negative propositions.
 Take care of the writing and spelling used.
 Avoid writing biased questions or leading answers that predispose
the interviewee toward some possible form of response. Questions
must be presented in a balanced way.
 Omitting possible answer options is also a form of bias.
 Regarding the final number of questions that make up an instrument, it
should not be too short, because information is lost, nor too long,
because it can become tedious.

Standardized and/or normalized instruments.


Most studies use instruments that are not standardized but are developed
specifically for the research itself.
However, there are studies that use standardized instruments, for example
those related to the health field and some studies in the educational field. In this
sense, for Ruiz (1998), standardization is the process through which, during the
development process of a measurement instrument, the invariant conditions under
which it must be used are established.
Standardization covers three aspects:
- Content
- Administration and scoring conditions
- Application standards

In the standardization process, the standards for the instrument's application
and for the interpretation of its results are determined. Thus, a test must be
applied under certain conditions, which must be met both by those who
administer it and by those to whom it is applied.

- Standardization of content: all subjects must be examined using the same
or equivalent items so that their performance can be compared directly, since
the results will be based on equivalent samples of items.
- Standardization of administration and scoring conditions: the instrument is
administered and scored under invariant conditions. In this way a large part of
the error variance can be eliminated.
- Standardization by application standards: pre-established criteria are used
as a point of comparison between the individual scores obtained by the subjects
on the tests and the performance of a representative group taken as a
reference.
Validity: concept, types and usefulness
In the fields of metrology, psychometrics, and statistics, validity refers to the
ability of a measuring instrument to quantify, significantly and adequately, the
trait it was designed to measure. Within the research field, validity, according to
Ruiz (1998), refers to the degree to which an instrument actually measures the
variable it intends to measure. This author classifies validity into three types:

- Content validity: defines whether a test lives up to its claims; it thus refers
to an operational definition of a variable. A test is said to meet the conditions of
content validity if it constitutes an adequate and representative sample of the
contents and scope of the construct or dimension to be evaluated.
- Criterion validity: evaluates whether a test reflects a certain set of skills. It
refers to the degree of effectiveness with which a variable of interest (the
criterion) can be predicted or forecast from the scores on a test. It is common
for personnel selection processes to use instruments that aim to predict the
future performance of job candidates from the answers obtained.
- Construct validity: a more complex concept. It refers to the degree to
which the measuring instrument meets the assumptions that would be expected
of an instrument designed to measure precisely what it was intended to
measure. It can be considered a general concept that encompasses the other
types of validity.
The importance or usefulness of validity within an investigation is that it
allows one to measure what one really wants to measure and represents the
possibility that a research method is capable of answering the questions
formulated. Through validity, the presentation of the content is reviewed and the
indicators are contrasted with the items (questions) that measure the
corresponding variables. Validity is understood as the fact that a test is
conceived, developed, and applied in such a way that it measures what it sets
out to measure.
Reliability: concept, types and usefulness

Reliability, for Ruiz (1998), refers to the level of accuracy and consistency of
the results obtained when the instrument is applied a second time under
conditions as similar as possible. He states that the key question for determining
the reliability of a measurement instrument is: if phenomena or events are
measured over and over again with the same instrument, are the same or very
similar results obtained? If the answer is affirmative, the instrument can be said
to be reliable.
Among the methods to estimate reliability, there are:
- Test-retest method: one way to estimate the reliability of a test or
questionnaire is to administer it twice to the same group and correlate the
scores obtained. This procedure is not suitable for knowledge tests, but rather
for measures of physical and athletic abilities, personality, and motor skills. The
coefficient obtained is called the stability coefficient because it denotes the
consistency of the scores over time.
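A minimal sketch of the test-retest idea, using invented scores for two administrations of the same instrument (the `pearson` helper and the data are illustrative only):

```python
# Sketch: test-retest reliability as the Pearson correlation between two
# administrations of the same instrument to the same group.
def pearson(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

first = [12, 15, 9, 20, 17]    # invented first-administration scores
second = [13, 14, 10, 19, 18]  # same group, second administration
stability = pearson(first, second)  # the "stability coefficient"
print(round(stability, 3))
```

A value close to 1 indicates that scores remained consistent over time.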
- Common split-half method (Hemitest): this method computes the
correlation coefficient between the scores on the two halves of the test or
questionnaire. It assumes that the two halves are parallel, with equal length and
equal variance. The reliability of the full test is then estimated through the
Spearman-Brown coefficient, r = 2r_hh / (1 + r_hh), where r_hh is the
correlation between the two halves.
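The split-half procedure can be sketched as follows, assuming an invented matrix of item scores and an odd/even split of the items; the Spearman-Brown step applies the standard correction r_total = 2r / (1 + r):

```python
# Sketch: split-half reliability with the Spearman-Brown correction.
# Item scores are invented; the odd/even split is one common way of
# forming the two "parallel" halves.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# rows = respondents, columns = items (6 items here)
scores = [
    [3, 4, 3, 5, 4, 4],
    [1, 2, 2, 1, 2, 1],
    [4, 5, 5, 4, 5, 5],
    [2, 2, 3, 2, 3, 2],
]
odd_half = [sum(row[0::2]) for row in scores]   # items 1, 3, 5
even_half = [sum(row[1::2]) for row in scores]  # items 2, 4, 6
r_halves = pearson(odd_half, even_half)
r_total = 2 * r_halves / (1 + r_halves)  # Spearman-Brown correction
print(round(r_total, 3))
```

The correction is needed because the half-test correlation understates the reliability of the full-length instrument.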
- Rulon's split-half method: also divides the test into halves, but does not
necessarily assume equal variances in the sub-tests.
- Guttman's split-half method: also called an internal consistency coefficient.
- Cronbach's alpha coefficient: to evaluate the reliability or homogeneity of
the questions or items, Cronbach's alpha coefficient is commonly used when the
response alternatives are polychotomous, as in Likert-type scales. It can take
values between 0 and 1, where 0 means null reliability and 1 represents total
reliability.
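A minimal sketch of the alpha computation, using the usual formula α = k/(k−1) · (1 − Σs²ᵢ / s²ₜ) on invented Likert scores (names and data are illustrative):

```python
# Sketch: Cronbach's alpha for polychotomous (Likert-type) items.
def variance(values):
    """Population variance of a list of scores."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

# rows = respondents, columns = items
scores = [
    [4, 5, 4, 4],
    [2, 1, 2, 1],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
]
k = len(scores[0])  # number of items
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
total_var = variance([sum(row) for row in scores])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))
```

As the text notes, values near 1 indicate high internal consistency and values near 0 indicate null reliability.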
- Kuder-Richardson 20 (KR-20) method: allows reliability to be obtained from
the data of a single application of the test; it is an internal consistency
coefficient. It can be used in questionnaires with dichotomous items whose
alternatives are scored as correct or incorrect.
- Kuder-Richardson 21 (KR-21) method: also obtains reliability from the data
of a single application of the test; its basic assumption is that all items have
equal variance. It is likewise an internal consistency coefficient.
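Both Kuder-Richardson coefficients can be sketched on an invented matrix of dichotomous answers; KR-20 uses each item's proportion of correct answers (p) and incorrect answers (q = 1 − p), while KR-21 replaces them with the test mean under the equal-variance assumption:

```python
# Sketch: KR-20 and KR-21 from a single administration of a test with
# dichotomous items. The answer matrix is invented illustrative data.
def variance(values):
    """Population variance of a list of scores."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

# rows = respondents, columns = items (1 = correct, 0 = incorrect)
answers = [
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 1, 0],
]
k = len(answers[0])                 # number of items
n = len(answers)                    # number of respondents
totals = [sum(row) for row in answers]
var_total = variance(totals)

# KR-20: sum of p*q over items, where p is each item's difficulty
ps = [sum(row[i] for row in answers) / n for i in range(k)]
pq = sum(p * (1 - p) for p in ps)
kr20 = (k / (k - 1)) * (1 - pq / var_total)

# KR-21: assumes all items have equal variance, so only the mean is used
mean_total = sum(totals) / n
kr21 = (k / (k - 1)) * (1 - mean_total * (k - mean_total) / (k * var_total))
print(round(kr20, 3), round(kr21, 3))
```

Because KR-21's equal-variance assumption rarely holds exactly, it generally gives a value no higher than KR-20.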
Below is a summary table of the methods to estimate the reliability of an
instrument.

Taken from Palella and Martins (2003:155)


Reliability and validity are essential qualities that must be present in all
scientific data collection instruments. In the words of Chávez (2001: 71), if the
instruments meet these requirements there will be a certain guarantee of the
results obtained in a given study and, therefore, the conclusions can be credible
and worthy of greater confidence.

Order of the questions and length of the instruments.

According to authors such as Ávila (2000), Palella and Martins (2003),
Murillo (2004), and Duarte and Parra (2014), there is no rule regarding the size
that a particular instrument, such as a questionnaire, should have. However, the
authors agree that if it is too short, information is lost, and if it is too long, it can
become tedious. The size of the instrument will therefore depend on the number
of variables and dimensions to be measured, the interest of the respondents,
and the way in which it is administered.
Regarding the order of the questions in a questionnaire, the authors agree
that it is convenient to start with questions that are easy to answer, so that the
respondent settles into the situation and becomes more favorably disposed
toward it. The questionnaire can also start with demographic questions.
The questions that measure each variable and dimension can then be
placed, taking into account the way in which the researcher wants to present
them.

Pilot test and/or pretest: concept and importance

For Chávez (2001), a pilot test is a validation procedure for the instrument
that is carried out before applying it to the final sample of a study.
To do so, the instrument designed for the research is applied to a small
group of the population, under the same conditions as the real field work. A
small group of subjects is recommended who do not belong to the selected
sample but do belong to the population, or a group with characteristics similar
to those of the study sample. In this way the reliability of the instrument can be
estimated.
One of the functions of the pilot test is to re-evaluate the clarity with which
the items are written. Although experts may have helped evaluate this
characteristic, they are not the target population, so the pilot application should
take place in the presence of the person who created the instrument, in order to
clarify any concepts written in it that the target population does not understand.
The purpose of building an instrument is for it to be used through the data
collection technique called a survey, where the instrument is able to explain
itself, so that its final application does not require the presence of the researcher
or the person who created it.

Process and preparation of the instrument

To build the instrument to be used in data collection, Ruiz (2002)
recommends following these steps or phases:
1) List the variables that you want to measure.
2) Determine the purpose of the instrument: make decisions about what we
want it for.
3) Decide on the type of instrument; this is the next decision in the process
of designing and developing the measuring instrument.
4) Conceptualize the construct: it is essential to carry out a detailed and
careful review of the specialized literature in order to define the construct.
5) Operationalize the construct: in this phase the construct is translated into
specific procedures through a set of tasks, items, or questions that allow the
construct to be empirically validated.
6) Integrate the instrument: it is necessary to think about the number of
items required, the type of question, the spatial organization of the information,
the precision of the instructions, the clarity of the wording, the time needed to
answer it, among others.
7) Carry out the pilot test: the first version of the instrument is applied to a
sample with characteristics similar to those of the population.
8) Technical study: includes item analysis, estimation of the reliability of the
measure, study of the validity of the instrument, standardization, and
normalization.

REFERENCES

Ávila, Rosa (2000). Research Methodology. Peru: Estudios y Ediciones publishing house.

Bisquerra, Rafael (2012). Educational Research Methods. Barcelona, Spain: La Muralla.

Chávez, Nolverto (2001). Introduction to Educational Research. Maracaibo: no publisher.

Duarte, José and Parra, Eglee (2014). What You Should Know About a Research Paper (3rd ed.). Maracay: Morles.

Hurtado de B., Jacqueline (2012). Holistic Research Methodology. Caracas: Sypal Foundation.

Murillo, Javier (2004). Data Collection Techniques I: Questionnaires and Attitude Scales. Spain: Autonomous University of Madrid, Faculty of Teacher Training and Education. (Online document). Available: www.uam.es/personal_pdi/stmaria/jmurillo/Metodos/Ap_Instrumentos.doc

Palella, Santa and Martins, Feliberto (2003). Quantitative Research Methodology. Caracas: Fedupel.

Rodríguez, Magín (2008). Successful Strategy for Research. Maracay: Fedupel.

Ruiz, Carlos (1998). Educational Research Instruments. Venezuela: CIDG, C.A.
