The document discusses various methods for collecting data, including primary and secondary sources. It notes that primary data is collected first-hand through methods like surveys, interviews, and observation, while secondary data has already been collected from sources like published reports and government records. The document also outlines considerations for data collection plans such as identifying relevant sources and variables. It provides examples of common data collection techniques in health research like medical records, laboratory tests, and questionnaires.

DATA COLLECTION

Instructor: Crizylen Mae P. Lahoylahoy
TABLE OF CONTENTS

01 INTRODUCTION
02 DATA COLLECTION METHODS
03 VALIDITY AND RELIABILITY

01 INTRODUCTION
DATA
● Data are commonly understood as factual information concerning people, their behavior, events, and situations, derived through systematic methods, which explains the rationale behind certain assumptions, decisions, and actions.

● Data can be defined as the quantitative or qualitative value of a variable (e.g., numbers, images, words, figures, facts, or ideas).

● Data are the lowest unit of information from which other measurements and analyses can be derived.

● Data are one of the most important and vital aspects of any research study.
DATA COLLECTION

● A method by which information related to the study is gathered through suitable means.

● Accurate and systematic data collection is critical to conducting scientific research.

● Includes document review, observation, questioning, measuring, or a combination of different methods.
Factors to be Considered Before
Collection of Data (plan)
● Objectives and scope of the enquiry (research question).

● Sources of information (type, accessibility).

● Quantitative expression (measurement/scale).

● Techniques of data collection.

● Unit of collection.
UNDERSTANDING THE PROBLEM

Data Requirements → Determine Sources of Information → Data Collection Design → Data Acquisition Plan
(Inputs: source data, budget)
DATA COLLECTION PLAN

Research question / research hypothesis
→ Identify types of data and variables
→ Types of measurements and scales
→ Instruments and methods
→ Written permissions
→ Pilot testing → Revise
→ Implementation: 1. data collection forms; 2. operational procedures
Sources of Data

• Internal sources: primary data, secondary data
• External sources: primary data, secondary data
Internal vs. External Sources of Data

INTERNAL
o Many institutions and departments have information about their regular functions, kept for their own internal purposes.
o When that information is used in a survey, it is called an internal source of data.
o Examples: routine surveillance, hospital records.

EXTERNAL
o Information collected from an outside source.
o Such data are either primary or secondary.
o This type of information can be collected by census or sampling.
I. PRIMARY DATA

o Data collected from first-hand experience are known as primary data. Primary data are more reliable and authentic, and have not been published anywhere.

o Primary data have not been changed or altered by another party; therefore their validity is greater than that of secondary data.
METHODS OF COLLECTING PRIMARY DATA

• Direct personal investigation (interviewing)
• Indirect oral investigation
• Investigation through observation
• Measurements (lab results)
• Case studies
• Experimentation
PRIMARY DATA

MERITS
 Targeted issues are addressed
 Data interpretation is better
 High accuracy of data
 Addresses specific research issues
 Greater control

DEMERITS
 Cost
 Time
 More personnel/resources
 Inaccurate feedback
 Requires training and skill; laborious
II. SECONDARY DATA

 Data that have already been collected by others.
 Sources: journals, periodicals, research publications, official records, etc.
 May be available in published or unpublished form.
 Resorted to when primary data sources/methods are infeasible or inaccessible.
METHOD OF COLLECTION: SECONDARY DATA

PUBLISHED: international, government, corporation, and institutional sources
UNPUBLISHED
SECONDARY DATA

MERITS
 Quick and cheap
 Wider geographical area
 Longer orientation period
 Leads toward primary data

DEMERITS
 May not fulfill specific research needs
 Poor accuracy
 Not up to date
 Poor accessibility in some cases
PRIMARY VS. SECONDARY DATA

PRIMARY DATA
 Real-time data
 Sure about the sources
 Can answer the research question
 Costs time and money
 Can avoid bias
 More flexible

SECONDARY DATA
 Past data
 Not sure about the sources
 Helps refine the research problem
 Cheap and quick
 Bias cannot be ruled out
 Less flexible
Data Sources for Health Research
Primary or secondary sources

 Birth and death records


 Medical records at physician offices, hospitals, nursing homes, etc.
 Medical databases within various agencies, universities, and
institutions.
 Physical exams and laboratory testing
 Disease registries
 Self-report measures: interviews and questionnaires
Research Data: Considerations
Data collection vs. data analysis
• Poor data collection and management can render a perfectly
executed trial useless
• Bad data practices carry resource and ethical costs
• Good practices:
– What are data?
– How are they represented?
– How are they stored for retrieval and use?
Primary Research Methods and Techniques

PRIMARY RESEARCH

QUANTITATIVE DATA
• Experiments
• Surveys
  - Personal interview (intercepts)
  - Mail
  - In-house, self-administered
  - Telephone, fax, email, Web
• Mechanical observation
• Simulation

QUALITATIVE DATA
• Focus groups
• Individual in-depth interviews
• Human observation
• Case studies
Differentiation between data collection techniques and tools

TECHNIQUES → TOOLS
 Using available data → data compilation sheet
 Observation → checklist, eyes, watch, scales, microscope, pen and paper
 Interviewing → interview schedule, agenda, questionnaire, recorder
 Self-administered questionnaire → questionnaire
DATA COLLECTION METHODS
1. SELF-REPORTED DATA
 Some information can be gathered only by asking people questions (i.e., it is not easily observable).
 Self-report measures are estimates of true scores:

True score + Measurement error = Survey response
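The true-score model above can be illustrated with a short simulation. This is a minimal sketch, not part of the original lecture; all numbers are made-up illustrative values, and it assumes normally distributed true scores and measurement error:

```python
import random

random.seed(42)

# Classical test theory: survey response = true score + measurement error.
# Simulate 1000 respondents whose true scores sit on a 0-100 scale.
true_scores = [random.gauss(70, 10) for _ in range(1000)]

# Each survey response adds random error (mood, memory, wording, ...).
responses = [t + random.gauss(0, 5) for t in true_scores]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Reliability = true-score variance / observed-score variance;
# with these assumed parameters the theoretical value is 10^2 / (10^2 + 5^2) = 0.8.
reliability = variance(true_scores) / variance(responses)
print(round(reliability, 2))
```

The measurement error inflates the spread of the observed responses, which is why the observed scores are only an estimate of the true scores.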


PITFALLS OF SELF-REPORTED INFORMATION

Susceptible to the respondent’s:
A. Mood
B. Motivation
C. Memory
D. Understanding

Also susceptible to:

• Context: the circumstances of the interview
• Social desirability: choosing answers that are viewed favorably
COMMON TYPES OF QUESTIONS

Open-ended
What health conditions do you have?

Closed-ended
Which of the following conditions do you currently have? Say yes or no to each.
- Diabetes?
- Asthma?
- Hypertension?
COMMON TYPES OF QUESTIONS
I- Response options
• Nominal – unordered response categories (e.g. male, female)
• Ordinal – ordered response categories (e.g. excellent, good, fair,
poor)

II- Type of information


• Factual – objectively verifiable facts and events
• Subjective – knowledge, perceptions, feelings, judgment
1.1 DOCUMENT REVIEW
A qualitative (sometimes quantitative) research project may require
review of documents such as:

 Course syllabi
 Faculty journals
 Meeting minutes
 Strategic plans
 Newspapers
SURVEY
Self-reported data collection methods: computerized vs. paper-based

Interviewer administration (human): in person, telephone (both modes)
Self-administration: Web, smartphone, tablet (both modes)
Interactive voice response: telephone, Web (computerized only; not applicable on paper)

COMPUTERIZED
Pros:
1. Faster data availability
2. Can handle complex skip patterns
3. Can be tailored to severity of symptoms or situation (computerized adaptive testing)
Cons:
1. Data can get lost if the system crashes
2. Requires a power source

PAPER-BASED
Pros:
1. Interviewer can answer respondent questions and probe for adequate answers
2. Can be administered to illiterate/low-reading-level respondents
3. Easier to reach the poor, homeless, etc.
4. Builds rapport
5. People feel more anonymous
6. Can use visual aids
Cons:
1. Expensive
2. Longer data collection period
3. Interviewer presence/technique can bias results
A. PERSONAL INTERVIEW
Interviews consist of collecting data by asking questions.
• Data can be collected by listening to individuals, recording, filming
their responses, or a combination of methods.

There are four types of interviews:

• Structured interview
• Semi-structured interview
• In-depth interview
• Focus group discussion
STRUCTURED INTERVIEW
• In structured interviews, the questions as well as their order are scheduled in advance.
• Your additional intervention consists of giving more explanation to clarify a question (if needed), and asking the respondent to provide more explanation if the answer they give is vague (probing).

SEMI-STRUCTURED AND IN-DEPTH INTERVIEWS
• Semi-structured interviews include a number of planned questions, but the interviewer has more freedom to modify the wording and order of the questions.
• The in-depth interview is less formal and the least structured; the wording and the questions are not predetermined. This type of interview is more appropriate for collecting complex information with a higher proportion of opinion-based content.
INTERVIEW

PROS
1. Collects complete information with greater understanding.
2. More personal than a questionnaire, giving higher response rates.
3. Allows more control over the order and flow of questions.
4. Necessary changes to the interview schedule can be made based on initial results (not possible in a questionnaire study/survey).

CONS
1. Data analysis is difficult, especially when there is a lot of qualitative data.
2. Interviewing can be tiresome for large numbers of participants.
3. Risk of bias is high, due to fatigue and to becoming too involved with interviewees.
B. TELEPHONE INTERVIEW

PROS
1. Lower costs
2. Can ensure uniform data collection
3. Shorter data collection period
4. Cell phones are the best way to reach transient people

CONS
1. Omits persons without phones
2. Phone accessibility
3. Needs a complex statistical framework
4. Cannot use visual aids
5. Many people do not answer their phones
2. PAPER-AND-PEN SELF-ADMINISTERED

PROS
1. Anonymity
2. Can use longer, more complex response categories
3. Can use visual aids
4. Consistent across respondents
5. Covers a large geographic area
6. Length is easy to see (plus or minus?)

CONS
1. Requires literate respondents
2. No interviewer to clarify questions or probe for complete answers
3. Low response rates
4. No control over who actually completes the form
5. Slow data return and manual data entry
3. WEB OR SMARTPHONE-ADMINISTERED SURVEY

PROS
1. Anonymity
2. Better for sensitive items
3. Timely data
4. Lower cost
5. Can use long lists of response categories
6. Can use visual aids
7. Any time/location
8. Covers a large geographic area
9. Can use complex skip patterns

CONS
1. Varying degrees of computer skills, access, connection speeds, and configurations
2. Challenge to verify informed consent
3. Concern about multiple responses from the same person
4. Difficult to track non-responders
5. Could be a biased sample
4. FOCUS GROUP DISCUSSION
• Focus group is a structured discussion with the purpose of stimulating
conversation around a specific topic.
• Focus group discussion is led by a facilitator who poses questions and the
participants give their thoughts and opinions.
• Focus group discussion makes it possible to cross-check one individual’s opinion against the other opinions gathered.
• A well-organized and well-facilitated FGD is more than a question-and-answer session.
• In a group situation, members tend to be more open and the dynamics
within the group and interaction can enrich the quality and quantity of
information needed.
FOCUS GROUP DISCUSSION: PRACTICAL ISSUES
The ideal size of the Focus groups:
• 8-10 participants
• 1 Facilitator
• 1 Note-taker

Preparation for the Focus Group


• Identifying the purpose of the discussion
• Identifying the participants
• Developing the questions

Running the Focus Group


1. Opening the Discussion
2. Managing the discussion
3. Closing the focus group
4. Follow-up after the focus group
5. OBSERVATION
OBSERVATION is a technique that involves systematically selecting,
watching and recording behavior and characteristics of living beings,
objects or phenomena.

• Without training, our observations will heavily reflect our personal


choices of what to focus on and what to remember.
• You need to heighten your sensitivity to details that you would
normally ignore and at the same time to be able to focus on
phenomena of true interest to your study.
TYPES OF OBSERVATION
Observation of human behavior

1. Participant observation:
The observer takes part in the situation he or she observes – Example: a
doctor hospitalized with a broken hip, who now observes hospital
procedures ‘from within’

2. Non-participant observation:
The observer watches the situation, openly or concealed, but does not
participate
OPEN
– (e.g., ‘shadowing’ a health worker with his/her permission
during routine activities)

CONCEALED
– (e.g., ‘mystery clients’ trying to obtain antibiotics without
medical prescription)

OBSERVATIONS OF OBJECTS
– For example, the presence or absence of operating-room handwashing facilities and their state of cleanliness
VALIDITY AND RELIABILITY
Validity and reliability are two very important concepts in research. Although the two terms are often used interchangeably in everyday speech, they refer to distinct properties of a measurement.
VALIDITY
Refers to the accuracy of an indicator to measure or represent a
particular concept.

It may be classified into three types, namely


a. Face validity
b. Internal validity
c. External validity
a. FACE VALIDITY
• A subjective but careful judgment that an “indicator appears, on the face of it, to measure the concept it is supposed to measure” (Grimm and Wozniak, 1990:167).
• It is a representation of an indicator or of a concept being
measured based on its outside appearance.
• This is the most common method of measuring validity.
b. INTERNAL VALIDITY
• The “conformity or consistency among terms, definitions, measurement and findings within a research report” (Guy, Edgley, Arafat, and Allen, 1987:450; Creswell, 2014).
• In thesis writing this type of validity is what thesis panel
members usually look into.
c. EXTERNAL VALIDITY
• The correct correspondence between the terms and findings of a research operation and the real world to which it is presumed to apply (Guy et al., 1987:449).
RELIABILITY
Refers to the consistency of the outcomes of an indicator when used repeatedly with a particular group of respondents.
In other words, an indicator is assumed to be reliable when it gives consistent measurements on various occasions with the same respondents.
METHODS OF MEASURING VALIDITY
a. Predictive Validity
b. Construct Validity
c. Criterion-Related Validity
a. Predictive Validity Test
• Measures the probability that a given indicator would be able to predict a given
behavior or situation, which it intends to measure (Grimm and Wozniak, 1990)

Example:
The assumption behind the civil service eligibility requirement in employment is that
eligible individuals are more likely to work efficiently than those who are not.
Eligibility then is used as an indicator of competence.
b. Construct Validity Test
• A method of measuring the accuracy of an indicator to represent or measure a given concept by comparing its relationship with a particular construct, or several constructs, that are proven to be good indicators of such a concept (Grimm and Wozniak, 1990).
c. Criterion-Related Validity Test
• A method of measuring the accuracy of an indicator by comparing the scores of a given group on its items with their scores on another known and presumed valid indicator of the concept being evaluated (Grimm and Wozniak, 1990).

For instance, the validity of a given instrument may be measured by doing the following steps:
a. Administer the instrument to a particular group;
b. Administer the known and presumed valid measure of the concept being evaluated to the same group;
c. Compare the results of the two instruments;
d. Do a correlation test.
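The steps above amount to correlating two sets of scores from the same group. A minimal sketch in Python (the scores are made-up illustrative values, not from any real instrument):

```python
import math

# Hypothetical scores for the same group of 8 respondents on two instruments:
# the new instrument and a known, presumed-valid measure of the same concept.
new_instrument = [12, 15, 11, 18, 14, 20, 9, 16]
valid_measure = [14, 17, 12, 19, 15, 21, 10, 18]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A high positive correlation supports criterion-related validity.
r = pearson_r(new_instrument, valid_measure)
print(round(r, 2))
```

With these illustrative scores the two instruments rank respondents almost identically, so the correlation comes out close to 1.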
MEASURES OF RELIABILITY

a. Test-Retest Method
b. Parallel or Alternative Reliability Method
c. Split-Half Reliability Method
a. Test-Retest Method
The most common method of assessing the reliability of an indicator
or instrument is by conducting a series of pre-test to a given group.
The instrument is assumed to be reliable when the scores of the
individuals are consistently the same in repeated testing.
b. Parallel or Alternative Method
The reliability of an instrument is assessed by constructing two equivalent (parallel) forms of the instrument and administering both forms to the same group.
The instrument is assumed to be reliable when the scores of the individuals on the two forms are consistent, that is, highly correlated.
c. Split-Half Reliability Test
Usually applied in psychological and attitudinal tests.

The following steps are suggested in performing this technique:


a. Administer a scale or index containing several items, which are designed
to measure a concept, to a group;
b. Divide the scale containing several items into two sets after the entire set
has been administered to a group; and
c. Compare the results of the two sets using correlation (Grimm and
Wozniak, 1990)

The instrument is assumed to be reliable when the scores of the respondents


in the two halves of the divided instrument are similar or positively related.
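The three steps above can be sketched in Python. This is an illustration only: the item scores are made-up values, and the split used here (odd- vs. even-numbered items) is one common way of dividing the scale:

```python
import math

# Hypothetical responses: 6 respondents x 8 items, each item scored 1-5,
# all items designed to measure the same concept (step a).
scale = [
    [4, 5, 4, 4, 5, 4, 5, 4],
    [2, 1, 2, 2, 1, 2, 1, 2],
    [3, 3, 4, 3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5, 5, 4, 5],
    [1, 2, 1, 1, 2, 1, 2, 1],
    [4, 4, 3, 4, 4, 3, 4, 4],
]

# Step b: divide the scale into two halves (odd- vs. even-numbered items)
# and total each respondent's score on each half.
half_a = [sum(row[0::2]) for row in scale]
half_b = [sum(row[1::2]) for row in scale]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Step c: a high positive correlation between the two halves
# indicates that the instrument is reliable.
r = pearson_r(half_a, half_b)
print(round(r, 2))
```

In practice the half-scale correlation is often adjusted upward (e.g., with the Spearman-Brown formula) to estimate the reliability of the full-length instrument, since each half contains only half the items.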
