Research Methods and Measurement Levels
Measurement:
Measurement is the process of observing and recording the observations that are
collected as part of a research effort. There are two major issues that will be
considered here.
First, we need to understand the fundamental ideas involved in measuring. Here we consider
two major measurement concepts. Under Levels of Measurement, we examine the meaning of the
four major levels of measurement: nominal, ordinal, interval, and ratio. Then we move
on to the reliability of measurement, including consideration of true score theory and
a variety of reliability estimators.
Second, we need to understand the different types of measures that you might use in social
research. We consider four broad categories of measurements. Survey
research includes the design and implementation of interviews and
questionnaires. Scaling involves consideration of the major methods of developing and
implementing a scale. Qualitative research provides an overview of the broad range
of non-numerical measurement approaches. And unobtrusive measures presents a
variety of measurement methods that don’t intrude on or interfere with the context of
the research.
LEVELS OF MEASUREMENT
There are different levels of measurement. These levels differ as to how closely they
approach the structure of the number system we use. It is important to understand the
level of measurement of variables in research, because the level of measurement
determines the type of statistical analysis that can be conducted, and, therefore, the
type of conclusions that can be drawn from the research.
Nominal Level
A nominal level of measurement uses symbols to classify observations into categories
that must be both mutually exclusive and exhaustive. Exhaustive means that there
must be enough categories that all the observations will fall into some category.
Mutually exclusive means that the categories must be distinct enough that no observations
will fall into more than one category. This is the most basic level of measurement; it is
essentially labeling. It can only establish whether two observations are alike or
different, for example, sorting a deck of cards into two piles: red cards and black cards.
In a survey of boaters, one variable of interest was place of residence. It was measured
by a question on a questionnaire asking for the zip code of the boater's principal place
of residence. The observations were divided into zip code categories. These categories
are mutually exclusive and exhaustive. Every respondent lives in some zip code category
(exhaustive), and no boater lives in more than one zip code category (mutually
exclusive). Similarly, the sex of the boater was determined by a question on the
questionnaire. Observations were sorted into two mutually exclusive and exhaustive
categories, male and female. Observations could be labeled with the letters M and F, or
the numerals 0 and 1.
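As an illustration, nominal coding can be sketched in Python. The observations and the category codes below are hypothetical; the point is that the codes are arbitrary labels, and the check enforces the exhaustiveness requirement.

```python
def code_nominal(observations, codes):
    """Map each observation to its category code. The codes carry no
    order or magnitude; they only classify. Raise an error if a value
    falls outside the scheme (the categories must be exhaustive)."""
    coded = []
    for obs in observations:
        if obs not in codes:
            raise ValueError(f"category scheme not exhaustive: {obs!r}")
        coded.append(codes[obs])
    return coded

responses = ["M", "F", "F", "M"]
print(code_nominal(responses, {"M": 0, "F": 1}))  # [0, 1, 1, 0]
```

Swapping the codes (M=1, F=0) would change nothing about the measurement, which is exactly what makes the variable nominal.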
The variable of marital status may be measured by two categories, married and
unmarried. But these must each be defined so that all possible observations will fit into
one category but no more than one: legally married, common-law marriage, religious
marriage, civil marriage, living together, never married, divorced, informally separated,
legally separated, widowed, abandoned, annulled, etc.
In nominal measurement, all observations in one category are alike on some property,
and they differ from the objects in the other category (or categories) on that property
(e.g., zip code, sex). There is no ordering of categories (no category is better or worse,
or more or less than another).
Ordinal Level
An ordinal level of measurement uses symbols to classify observations into categories
that are not only mutually exclusive and exhaustive; in addition, the categories have
some explicit relationship among them.
For example, observations may be classified into categories such as taller and shorter,
greater and lesser, faster and slower, harder and easier, and so forth. However, each
observation must still fall into one of the categories (the categories are exhaustive) but
no more than one (the categories are mutually exclusive). Beef, for example, is graded as
select, choice, or prime; the military uses ranks to distinguish categories of soldiers.
Most of the commonly used questions which ask about job satisfaction use the ordinal
level of measurement. For example, asking whether one is very satisfied, satisfied,
neutral, dissatisfied, or very dissatisfied with one's job is using an ordinal scale of
measurement.
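The satisfaction item above can be sketched as ordered codes. The rank numbers are illustrative: they preserve order, but nothing in the scale says the distance between adjacent categories is equal.

```python
# The ordered categories of the job-satisfaction item, lowest to highest.
SATISFACTION = ["very dissatisfied", "dissatisfied", "neutral",
                "satisfied", "very satisfied"]

def ordinal_code(response):
    """Return the rank (0-4) of a response on the ordered scale."""
    return SATISFACTION.index(response)

# Order comparisons are meaningful at the ordinal level...
assert ordinal_code("very satisfied") > ordinal_code("neutral")

# ...so responses can be sorted by rank.
print(sorted(["neutral", "very satisfied", "dissatisfied"],
             key=ordinal_code))  # ['dissatisfied', 'neutral', 'very satisfied']
```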
Interval Level
An interval level of measurement classifies observations into categories that are not
only mutually exclusive and exhaustive, and have some explicit relationship among
them, but the relationship between the categories is known and exact. This is the first
quantitative application of numbers.
In the interval level, a common and constant unit of measurement has been established
between the categories. For example, the commonly used measures of temperature are
interval level scales. We know that a temperature of 75 degrees is one degree warmer
than a temperature of 74 degrees, just as a temperature of 42 degrees is one degree
warmer than a temperature of 41 degrees.
Numbers may be assigned to the observations because the relationship between the
categories is assumed to be the same as the relationship between numbers in the
number system. For example, 74+1=75 and 41+1=42.
The intervals between categories are equal, but they originate from some arbitrary
origin; that is, there is no meaningful zero point on an interval scale.
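A quick numeric illustration of the arbitrary origin: converting between two interval scales (Fahrenheit and Celsius) preserves equal intervals but not ratios, which is why a statement like "twice as warm" is meaningless at this level.

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to Celsius (both interval scales)."""
    return (f - 32) * 5 / 9

# Equal one-degree intervals survive the change of scale:
assert abs((f_to_c(75) - f_to_c(74)) - (f_to_c(42) - f_to_c(41))) < 1e-9

# Ratios do not: 80 degrees F is not "twice as warm" as 40 degrees F.
print(round(f_to_c(80) / f_to_c(40), 6))  # 6.0, not 2.0
```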
Ratio Level
The ratio level of measurement is the same as the interval level, with the addition of a
meaningful zero point. There is a meaningful and non-arbitrary zero point from which
the equal intervals between categories originate.
For example, weight, area, speed, and velocity are measured on a ratio level scale. In
public policy and administration, budgets and the number of program participants are
measured on ratio scales.
In many cases, interval and ratio scales are treated alike in terms of the statistical tests
that are applied.
Variables measured at a higher level can always be converted to a lower level, but not
vice versa. For example, observations of actual age (ratio scale) can be converted to
categories of older and younger (ordinal scale), but age measured as simply older or
younger cannot be converted to measures of actual age.
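The age example can be sketched directly. The cutoff of 40 is an arbitrary choice for the illustration; the point is that the conversion is one-way.

```python
def to_age_group(age, cutoff=40):
    """Collapse a ratio-level age into an ordinal category. The reverse
    conversion is impossible: 'older' alone cannot recover the actual age."""
    return "older" if age >= cutoff else "younger"

ages = [23, 57, 40, 31]
print([to_age_group(a) for a in ages])  # ['younger', 'older', 'older', 'younger']
```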
Questionnaires & Instruments:
A questionnaire is a research tool featuring a series of questions used to collect useful
information from respondents. These instruments include written or oral questions and may
follow an interview-style format. Questionnaires may be qualitative or quantitative and
can be conducted online, by phone, on paper or face-to-face, and questions don’t
necessarily have to be administered with a researcher present.
Questionnaires feature either open or closed questions and sometimes employ a
mixture of both. Open-ended questions enable respondents to answer in their own
words in as much or as little detail as they desire. Closed questions provide
respondents with a series of predetermined responses they can choose from.
Advantages of Questionnaires
Some of the many benefits of using questionnaires as a research tool include:
Practicality: Questionnaires enable researchers to strategically manage their
target audience, questions and format while gathering large data quantities on
any subject.
Cost-efficiency: You don’t need to hire surveyors to deliver your survey questions
— instead, you can place them on your website or email them to respondents at
little to no cost.
Speed: You can gather survey results quickly and effortlessly using mobile tools,
obtaining responses and insights in 24 hours or less.
Comparability: Researchers can use the same questionnaire yearly and
compare and contrast research results to gain valuable insights and minimize
translation errors.
Scalability: Questionnaires are highly scalable, allowing researchers to
distribute them to demographics anywhere across the globe.
Standardization: You can standardize your questionnaire with as many
questions as you want about any topic.
Respondent comfort: When taking a questionnaire, respondents are
completely anonymous and not subject to stressful time constraints, helping
them feel relaxed and encouraging them to provide truthful responses.
Easy analysis: Questionnaires often have built-in tools that automate analyses,
making it fast and easy to interpret your results.
Disadvantages of Questionnaires
Questionnaires also have their disadvantages, such as:
Answer dishonesty: Respondents may not always be completely truthful with
their answers — some may have hidden agendas, while others may answer how
they think society would deem most acceptable.
Question skipping: Unless you require an answer to every survey question,
you run the risk of respondents leaving questions unanswered.
Interpretation difficulties: If a question isn’t straightforward enough,
respondents may struggle to interpret it accurately. That’s why it’s important to
state questions clearly and concisely, with explanations when necessary.
Survey fatigue: Respondents may experience survey fatigue if they receive too
many surveys or a questionnaire is too long.
Analysis challenges: Though closed questions are easy to analyze, open
questions require a human to review and interpret them. Try limiting open-
ended questions in your survey to gain more quantifiable data you can evaluate
and utilize more quickly.
Unconscientious responses: If respondents don’t read your questions
thoroughly or completely, they may offer inaccurate answers that can impact
data validity. You can minimize this risk by making questions as short and
simple as possible.
Types of Questionnaires in Research
There are various types of questionnaires in survey research, including:
Postal: Postal questionnaires are paper surveys that participants receive
through the mail. Once respondents complete the survey, they mail it back
to the organization that sent it.
In-house: In this type of questionnaire, researchers visit respondents in their
homes or workplaces and administer the survey in person.
Telephone: With telephone surveys, researchers call respondents and conduct
the questionnaire over the phone.
Electronic: Perhaps the most common type of questionnaire, electronic surveys
are presented via email or through a different online medium.
A research instrument is a tool used to obtain, measure, and analyze data from subjects
around the research topic.
Decide which instrument to use based on the type of study you are conducting:
quantitative, qualitative, or mixed-method. For instance, for a quantitative study, you
may decide to use a questionnaire or a scale, and for a qualitative study, you may
choose an interview guide.
What is sampling?
Sampling is a technique of selecting individual members or a subset of the population
to make statistical inferences from them and estimate characteristics of the whole
population. Different sampling methods are widely used by researchers in market
research so that they do not need to research the entire population to collect
actionable insights.
It is also a time-saving and cost-effective method, and hence forms the basis of
any research design. Sampling techniques can be applied in research survey software
to derive results efficiently.
For example, if a drug manufacturer would like to research the adverse side effects of
a drug on the country’s population, it is almost impossible to conduct a research study
that involves everyone. In this case, the researcher selects a sample of people
from each demographic and then studies them, giving him/her indicative feedback
on the drug’s behavior.
Below, we discuss the various probability and non-probability sampling methods
that you can implement in any market research study.
Probability sampling gives every member of the population a known, nonzero chance of
selection. For example, in a population of 1000 members, every member will have a 1/1000
chance of being selected to be a part of a sample. Probability sampling eliminates bias
in the population and gives all members a fair chance to be included in the sample.
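The equal-chance idea can be sketched with the standard library's random.sample; the member IDs and the sample size of 50 here are purely illustrative.

```python
import random

# 1000 hypothetical member IDs; each has the same chance of
# appearing in the final sample, drawn without replacement.
population = list(range(1, 1001))
sample = random.sample(population, k=50)

assert len(set(sample)) == 50                  # 50 distinct members
assert all(m in population for m in sample)    # all drawn from the population
```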
One probability method is cluster sampling, in which the population is divided into
groups and whole groups are sampled. For example, if the United States government
wishes to evaluate the number of immigrants living in the mainland US, it can divide
the population into clusters based on states such as California, Texas, Florida,
Massachusetts, Colorado, Hawaii, etc. This way of conducting a survey will be more
effective, as the results will be organized by state and provide insightful
immigration data.
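A minimal sketch of one-stage cluster sampling along these lines, with hypothetical respondent IDs grouped by state:

```python
import random

# Hypothetical respondents grouped by state; the states are the clusters.
respondents = {
    "California": ["r1", "r2", "r3"],
    "Texas": ["r4", "r5"],
    "Florida": ["r6", "r7", "r8"],
    "Colorado": ["r9"],
}

def cluster_sample(clusters, n_clusters):
    """One-stage cluster sampling: randomly pick whole clusters and
    survey every member inside each picked cluster."""
    chosen = random.sample(list(clusters), n_clusters)
    return {state: clusters[state] for state in chosen}

picked = cluster_sample(respondents, 2)
print(picked)  # two whole states with all of their respondents
```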
Four types of non-probability sampling illustrate the purpose of this sampling method:
Convenience sampling: This method is dependent on the ease of access to
subjects such as surveying customers at a mall or passers-by on a busy street.
It is usually termed convenience sampling because of the researcher’s ease
of carrying it out and getting in touch with the subjects. Researchers have
nearly no authority to select the sample elements; selection is done purely
on proximity, not representativeness. This non-probability sampling method
is used when there are time and cost limitations in collecting feedback,
such as in the initial stages of research when resources are limited.
For example, startups and NGOs usually conduct convenience sampling at a
mall to distribute leaflets of upcoming events or promotion of a cause – they
do that by standing at the mall entrance and giving out pamphlets randomly.
Probability and non-probability sampling differ in several respects:

Sample: In probability sampling, since there is a method for deciding the sample, the
population demographics are conclusively represented. In non-probability sampling,
since the sampling method is arbitrary, the population demographics representation is
almost always skewed.

Time taken: Probability sampling takes longer to conduct, since the research design
defines the selection parameters before the market research study begins.
Non-probability sampling is quick, since neither the sample nor the selection criteria
of the sample are defined in advance.

Hypothesis: In probability sampling, there is an underlying hypothesis before the study
begins, and the objective of this method is to prove the hypothesis. In non-probability
sampling, the hypothesis is derived after conducting the research study.
2. Discover data
Discovery means getting to know the data and understanding what has to be done before
the data becomes useful in a particular context. Discovery is a big task, but Talend’s
data preparation platform offers visualization tools which help users profile and
browse their data.
3. Cleanse and validate data
Cleaning up the data is traditionally the most time consuming part of the data
preparation process, but it’s crucial for removing faulty data and filling in gaps.
Important tasks here include:
Removing extraneous data and outliers.
Filling in missing values.
Conforming data to a standardized pattern.
Masking private or sensitive data entries.
Once data has been cleansed, it must be validated by testing for errors in the data
preparation process up to this point. Oftentimes, an error in the system will become
apparent during this step and will need to be resolved before moving forward.
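The cleansing tasks listed above can be sketched in Python. The 3-standard-deviation outlier rule and mean imputation below are illustrative choices, not the only valid ones.

```python
import statistics

def cleanse(values):
    """Minimal cleansing sketch: drop outliers beyond 3 standard
    deviations (treating them as missing), then fill all missing
    entries (None) with the mean of the remaining values."""
    present = [v for v in values if v is not None]
    mean, sd = statistics.mean(present), statistics.stdev(present)
    # Mark extreme values as missing.
    kept = [v if v is None or abs(v - mean) <= 3 * sd else None
            for v in values]
    # Impute all missing entries with the mean of what survived.
    fill = statistics.mean([v for v in kept if v is not None])
    return [fill if v is None else v for v in kept]

print(cleanse([1, 2, 3, None, 2]))  # [1, 2, 3, 2.0, 2]
```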
4. Transform and enrich data
Transforming data is the process of updating the format or value entries in order to
reach a well-defined outcome, or to make the data more easily understood by a wider
audience. Enriching data refers to adding and connecting data with other related
information to provide deeper insights.
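As a small illustration of transformation, the sketch below conforms mixed date entries to a single ISO format; the set of accepted input formats is an assumption for the example.

```python
from datetime import datetime

def standardize_date(raw):
    """Transform mixed date entries into one standardized ISO format,
    a common target when conforming data to a single pattern."""
    for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%d %b %Y"):
        try:
            return datetime.strptime(raw, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue  # try the next known format
    raise ValueError(f"unrecognized date format: {raw!r}")

print(standardize_date("03/15/2024"))  # 2024-03-15
```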
5. Store data
Once prepared, the data can be stored or channeled into a third party application—
such as a business intelligence tool—clearing the way for processing and analysis to
take place.
What is Data Exploration?
Data exploration definition: Data exploration refers to the initial step in data analysis,
in which data analysts use data visualization and statistical techniques to describe dataset
characteristics, such as size, quantity, and accuracy, in order to better understand
the nature of the data.
Data exploration techniques include both manual analysis and automated data
exploration software solutions that visually explore and identify relationships between
different data variables, the structure of the dataset, the presence of outliers, and the
distribution of data values in order to reveal patterns and points of interest, enabling
data analysts to gain greater insight into the raw data.
Data is often gathered in large, unstructured volumes from various sources and data
analysts must first understand and develop a comprehensive view of the data before
extracting relevant data for further analysis, such as univariate, bivariate, multivariate,
and principal components analysis.
Data Exploration Tools
Manual data exploration methods entail either writing scripts to analyze raw data or
manually filtering data into spreadsheets. Automated data exploration tools, such as
data visualization software, help data scientists easily monitor data sources and
perform big data exploration on otherwise overwhelmingly large datasets. Graphical
displays of data, such as bar charts and scatter plots, are valuable tools in visual data
exploration.
A popular tool for manual data exploration is Microsoft Excel spreadsheets, which can
be used to create basic charts for data exploration, to view raw data, and to identify the
correlation between variables. To identify the correlation between two continuous
variables in Excel, use the function CORREL() to return the correlation. To identify the
correlation between two categorical variables in Excel, the two-way table method, the
stacked column chart method, and the chi-square test are effective.
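For readers working outside Excel, the quantity CORREL() returns (the Pearson correlation coefficient) can be computed directly; this is a minimal sketch with illustrative data.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two continuous variables,
    the same quantity Excel's CORREL() returns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson([1, 2, 3, 4], [2, 4, 6, 8]), 4))  # 1.0 (perfect positive)
```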
Humans process visual data better than numerical data; therefore, it is extremely
challenging for data scientists and data analysts to assign meaning to thousands of
rows and columns of data points and communicate that meaning without any visual
components.
Data visualization in data exploration leverages familiar visual cues such as shapes,
dimensions, colors, lines, points, and angles so that data analysts can effectively
visualize and define the metadata, and then perform data cleansing. Performing the
initial step of data exploration enables data analysts to better understand and visually
identify anomalies and relationships that might otherwise go undetected.
For example, the data preparation process usually includes standardizing data formats,
enriching source data, and/or removing outliers.
Additionally, as data and data processes move to the cloud, data preparation moves
with it for even greater benefits, such as:
Accelerated data usage and collaboration — Doing data prep in the cloud
means it is always on, doesn’t require any technical installation, and lets
teams collaborate on the work for faster results.
You Will Know Your Target Customers Better: Data analysis tracks how well
your products and campaigns are performing within your target
demographic. Through data analysis, your business can get a better idea of
your target audience’s spending habits, disposable income, and most likely
areas of interest. This data helps businesses set prices, determine the length
of ad campaigns, and even help project the quantity of goods needed.
Reduce Operational Costs: Data analysis shows you which areas in your
business need more resources and money, and which areas are not
producing and thus should be scaled back or eliminated outright.
You Get More Accurate Data: If you want to make informed decisions, you
need data, but there’s more to it. The data in question must be accurate. Data
analysis helps businesses acquire relevant, accurate information, suitable for
developing future marketing strategies, business plans, and realigning the
company’s vision or mission.
Data Requirement Gathering: Ask yourself why you’re doing this analysis,
what type of data analysis you want to use, and what data you are planning
on analyzing.
Data Cleaning: Not all of the data you collect will be useful, so it’s time to clean
it up. This process is where you remove white spaces, duplicate records, and
basic errors. Data cleaning is mandatory before sending the information on
for analysis.
Data Analysis: Here is where you use data analysis software and other tools
to help you interpret and understand the data and arrive at conclusions. Data
analysis tools include Excel, Python, R, Looker, Rapid Miner, Chartio,
Metabase, Redash, and Microsoft Power BI.
Data Interpretation: Now that you have your results, you need to interpret
them and come up with the best courses of action, based on your findings.
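The Data Cleaning step above can be sketched as follows; stripping white space and dropping duplicate and empty records are the basic errors the step describes.

```python
def clean_records(records):
    """Data cleaning sketch: strip white space, drop empty entries,
    and remove duplicate records before analysis."""
    seen, cleaned = set(), []
    for rec in records:
        rec = rec.strip()               # remove surrounding white space
        if rec and rec not in seen:     # skip empties and duplicates
            seen.add(rec)
            cleaned.append(rec)
    return cleaned

print(clean_records(["  alice ", "alice", "bob", ""]))  # ['alice', 'bob']
```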
Data analysis, therefore, plays a key role in distilling this information into a more
accurate and relevant form, making it easier for researchers to do their job.
Data analysis also provides researchers with a vast selection of different tools, such as
descriptive statistics, inferential analysis, and quantitative analysis.
So, to sum it up, data analysis offers researchers better data and better ways to analyze
and study said data.
Prescriptive Analysis: Mix all the insights gained from the other data analysis
types, and you have prescriptive analysis. Sometimes, an issue can’t be solved
solely with one analysis type, and instead requires multiple insights.
Text Analysis: Also called “text mining,” text analysis uses databases and data
mining tools to discover patterns residing in large datasets. It transforms raw
data into useful business information. Text analysis is arguably the most
straightforward and the most direct method of data analysis.
Displaying data in research is the last step of the research process. It is important
to display data accurately because it helps present the findings of the research
effectively to the reader. The purpose of displaying data in research is to make the
findings more visible and to make comparisons easy. When the researcher presents the
research to the research committee, the committee will easily understand the findings
from the displayed data. The readers of the research will also be able to understand
it better. Without displayed data, the data looks too scattered and the reader cannot
make inferences.
There are basically two ways to display data: tables and graphs. Tabulated data and
graphical representations should both be used to give a more accurate picture of the
research. In quantitative research it is necessary to display data; in qualitative
research, on the other hand, the researcher decides whether there is a need to display
data or not. The researcher can use appropriate software to help tabulate and display
the data in the form of graphs. Microsoft Excel is one such example; it is a
user-friendly program that you can use to help display the data.
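Both display forms can be produced even without dedicated software. This sketch tabulates frequencies and renders a rudimentary text “graph”; a real report would use Excel or a charting tool, and the sample responses are hypothetical.

```python
from collections import Counter

def text_bar_chart(observations):
    """Tabulate category frequencies and render them as a simple text
    bar chart: the table and the graph in one display."""
    counts = Counter(observations)
    lines = []
    for category, n in sorted(counts.items()):
        lines.append(f"{category:<12} {n:>3} {'#' * n}")
    return "\n".join(lines)

data = ["satisfied", "neutral", "satisfied", "dissatisfied", "satisfied"]
print(text_bar_chart(data))
```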