ASSIGNMENT SOLUTIONS GUIDE (2016-2017)
M.E.C.-9
Research Methods in Economics
Disclaimer/Special Note: These are just samples of the answers/solutions to some of the questions given in the Assignments. These sample answers/solutions are prepared by private teachers/tutors/authors for the help and guidance of the student, to give an idea of how he/she can answer the questions given in the Assignments. We do not claim 100% accuracy of these sample answers, as these are based on the knowledge and capability of the private teacher/tutor. Sample answers may be seen as a guide/help for reference while preparing answers to the questions given in the assignment. As these solutions and answers are prepared by private teachers/tutors, the chances of error or mistake cannot be denied. Any omission or error is highly regretted, though every care has been taken while preparing these sample answers/solutions. Please consult your own teacher/tutor before you prepare a particular answer, and for up-to-date and exact information, data and solutions. Students must read and refer to the official study material provided by the university.
Note: Answer all the questions. While questions in Section A carry 20 marks each (to be answered in about 700 words each), those in Section B carry 12 marks each (to be answered in about 500 words each). In case of numerical questions, the word limit does not apply.
Section-A
Q. 1. What do you mean by the term ‘paradigm’? How does the acquisition of paradigm status by any discipline make it a mature science? Give illustrations in support of your answer.
Ans. A paradigm is defined as a universally recognized achievement that provides model problems and solutions to a community of practitioners for a given time period. So, a paradigm specifies the ultimate constituents of the sphere of reality that a given science inquires into; recognizes the model problems; mentions the different possible solutions; provides the necessary strategies and techniques for solving the problems; and gives some examples of how to solve a given problem.
In other words, a paradigm refers either to an example or a model to be followed or to an established system or
way of doing things. The concept of paradigm was introduced by Kuhn.
KARL POPPER’S PHILOSOPHY OF SCIENCE
Positivists made an effort to work out a sophisticated version of Inductivism; Popper worked out a sophisticated version of the hypothetico-deductive approach. He was the first to react against the positivist philosophy of science.
According to Popper, the basic task of the philosophy of science is not to solve the problem of induction as proposed by the positivists, for two reasons. First, the problem of induction cannot be solved; second, it need not be solved, as the method of science is not inductive in nature. Popper claimed that the central task of the philosophy of science is to solve Kant’s problem, i.e. to identify the line of demarcation between science and non-science.
According to Popper, falsifiability is the line between science and non-science. To call any body of knowledge science, it must satisfy the criterion of falsifiability: any statement must be falsifiable to be called scientific. Scientific theories are falsifiable in the sense that they explicitly mention the conditions under which they will be declared false. In simple words, a model scientific theory, law or statement must mention the conditions under which it will be falsified; it must state the testable implications under which it becomes false. A religious theory about the world is unfalsifiable, as the conditions under which it would prove false cannot be stated and the implications of the theory cannot be explicitly expressed. Religious theories, however, generally accept that they are non-scientific and hence not falsifiable, whereas theories like Marxism, which are non-scientific but claim to be scientific, become pseudo-scientific.
Popper also gave the hallmark of what he considered an adequate model of scientific method. He called his model the Hypothetico-Deductive Model: in his opinion, the method of science is hypothetico-deductive in nature, not inductive.
Let us elaborate on the differences between these methodological models:
(a) According to Inductivists, observations are theory-independent and therefore indubitable; in other words, their theory-independence gives them a probability of one. They agree that actual theories may contain something more than the observations, so the result may deviate from this probability value to some extent; this calls for verification. Popper, on the other hand, rejects this view and claims that theories are not winnowed from facts but are pure inventions created by human minds. Therefore, the initial probability of a theory is zero.
It implies that Inductivists claim that scientific tests find out the truth of a scientific theory: if a test gives a positive result, the theory is accepted as true. According to Popper, however, scientific tests cannot establish the truth of a scientific theory even if the test gives a positive result. In other words, according to Inductivists a theory can be accepted as true if it passes the test, but in Popper’s scheme a theory always remains tentative, as it can be proved wrong at any time by some other evidence or logic. The basic difference between positivism and Popper’s scheme is that between verificationism and falsificationism. The contrast between Inductivism and the Hypothetico-Deductive scheme can be understood by comparing them with two systems of law. One starts with the assumption that an accused is innocent until proved guilty, and without evidence he must be taken as innocent; this can be compared to Inductivism. According to the latter, we must start with the assumption that the accused is guilty and take him as guilty until proved innocent; this can be compared to hypothetico-deductivism.
The Steps of Scientific Procedure: In Popper’s scheme, we follow these steps to solve a problem:
(a) Suggest a hypothesis as a tentative solution;
(b) Deduce the test implications for the solution;
(c) Test the solution and check if it passes or fails the tests;
(d) Solution is considered to be corroborated if repeated attempts to falsify it fail.
Popper gave the following reasons in support of his claim that the Hypothetico-Deductive model is superior to Inductivism.
(i) It justifies the spirit of science: the aim of scientific testing is to falsify our theories, and scientific theories, however well corroborated, remain permanently tentative. In other words, Popper claims that scientific theories are permanently vulnerable to possible falsification. The special status given to science derives from the fact that science is open-minded and anti-dogmatic. The Hypothetico-Deductive model gives a central place to such an attitude and hence is superior to Inductivism.
(ii) Science could not have progressed further if it had followed the principles of Inductivism. If we accept something as true until we find situations in which it is not true, we keep searching for such situations, which imposes heavy restrictions on the field of science and makes the application of scientific theories very narrow. But when a scientist reaches conclusions on the basis of hypothetico-deductivism, he rejects the theory when he gets any evidence that falsifies it, and he does not try to tailor it to suit the supporting observations. He looks for an alternative that yields fresh test implications. The fast advancement of science and technology has become possible only because scientists adopted the Hypothetico-Deductive scheme and did not defend their theories merely to keep them consistent with the facts.
(iii) Popper further claims that the Hypothetico-Deductive scheme avoids the predicament faced by Inductivism in the face of Hume’s challenge. Hume argued that the principle of induction cannot be justified on logical grounds; if Hume is right and the method of science is inductive, science is irrational. According to the Hypothetico-Deductivist view, falsification is the central scheme of science: one single negative instance is sufficient to falsify a theory, while no amount of positive instances can prove a theory permanently true. A theory is true only as long as no negative instance is brought against it; hence it always remains tentative. Hume claimed that a generalization cannot be conclusively verified, and hence Inductivism cannot be accepted as the method of science.
(iv) A theory is rejected as false only when an alternative theory is available at hand, one which has more testable implications and a greater number of testable implications already satisfied. This further proves the convergent nature of the growth of science, because a new theory contains some part of the old theory and in this sense provides continuity in the growth of science.
(v) Inductivists claim that observations come first and then a theory is established, while if someone asked Popper what comes first, a theory or observations, he would say that the question is similar to asking what comes first, the seed or the tree.
Let us list the main theses of Popper’s philosophy of science:
(i) Science is different from, superior to and an ideal for all other disciplines in terms of quality. (Scientism)
(ii) The difference, superiority and ideal-hood possessed by science can be traced to its possession of a method. (Methodologism)
(iii) All sciences have one common method irrespective of their subject matter. (Methodological Monism)
(iv) All sciences use the hypothetico-deductive method. (Hypothetico-deductivism)
(v) Science is different from other disciplines as its statements can be systematically falsified.
(vi) Scientific observations are theory-dependent.
(vii) Theories are pure inventions of the human mind.
(viii) There is interdependence between theory and observations.
(ix) There may be more than one theory corresponding to a given set of observation statements.
(x) Science is not value-neutral and value commitments are not purely objective; therefore the fact-value dichotomy is unacceptable.
(xi) Popper accepted the Deductive-Nomological pattern for all scientific explanations.
(xii) Science aims at providing an account of the observable world in terms of unobservable entities, and of unobservable entities in terms of further unobservable entities.
(xiii) Science keeps progressing in terms of nearness to truth. The progress of science is continuous in the sense that a new theory contains some aspects of the old theory.
(xiv) Science is objective because theories are inter-subjectively testable.
(xv) Science is rational as well as irrational. It is not rational because the principle of induction cannot be verified in a rational manner; it is rational in the sense that it embodies critical thinking.
THOMAS KUHN’S PHILOSOPHY OF SCIENCE
Thomas Kuhn was another important philosopher of science, who propounded the ideas of normal science and extraordinary science and thereby led to the concept of scientific revolution. His work The Structure of Scientific Revolutions is a milestone in twentieth-century philosophy of science.
Kuhn claimed that there are two stages in the life of every major science.
(a) Pre-paradigmatic Stage: In this stage there is a pluralism of ideas, i.e. divergent modes of practising the discipline co-exist.
(b) Paradigmatic Stage: This is the stage in which pluralism disappears.
Kuhn opined that there are some areas, like the arts, philosophy and medicine, which can never reach the second stage, as plurality cannot disappear from these disciplines. Kuhn further explained that this transition takes place through the acquisition of a paradigm. In fact, a discipline becomes a science only when it reaches the second stage. Social sciences are imperfect sciences, as there is no consensus among social scientists on fundamental concepts; therefore the social sciences have distinct schools and many opinions on different concepts.
Therefore, a science becomes mature, i.e. moves to the second stage, when it acquires a paradigm.
A paradigm can be defined as a universally recognized achievement that provides model problems and solutions to a community of practitioners for a given time period. So, a paradigm performs the following functions:
(a) It specifies what are the ultimate constituents of that sphere of reality which any given science is inquiring
into.
(b) It recognizes the model problems.
(c) It mentions different possible solutions.
(d) It provides the necessary strategies and techniques for solving the problems.
(e) It also gives some examples on how to solve any given problem.
In simple words, a paradigm is the disciplinary matrix of a professional group. When a science is able to develop a paradigm, it develops a ‘normal science tradition’, according to Kuhn.
Certainly, scientific practice is not exhausted by ‘normal science’. It is possible that an existing paradigm fails to promote fruitful, interesting and smooth normal science; in such a case, that science is said to be in crisis. When this crisis deepens, it leads to the replacement of the existing paradigm by a new one. This process of replacement is called a ‘scientific revolution’. Scientific revolutions are thus the tradition-shattering complements to the tradition-bound activity of normal science. When a science enters the second stage, therefore, it has the two features of normal science and revolutions: science is revolutionary occasionally, but in general it remains normal. We can say that science is ordinarily normal and is interrupted in between by revolutions, which bring about a change in its paradigm.
Kuhn made an effort to account for normal science, which is smooth, pre-established and directional. It gives no space for radical thinking and never questions the paradigm. Professional training in science is based on accepting the paradigm as given and to be followed. The student should equip himself to promote the cause of the paradigm by giving it greater precision and further elaboration. Normal science does not aim at anything basically new; it only adds more elaboration to the existing paradigm.
Now the question arises: if normal science never questions the paradigm, how can research under a paradigm be a particularly effective way of inducing paradigm change? Let us elaborate on it.
Normal science keeps on following a paradigm whose validity is accepted without question. It is possible that we encounter many hurdles in the process of solving puzzles; it is at this point that we talk of anomalies. An anomaly is a situation in which there is no chance of solving a puzzle within the framework of the given paradigm. One or two anomalies are not sufficient to reject a paradigm. When there are many major anomalies, the paradigm is replaced with a new one, but certainly before we reject the old paradigm we must have a new paradigm in hand. When a sufficient number of anomalies has gathered against a paradigm, it is said to be crisis-ridden. There is no quantitative or objective measurement to draw the line between major and minor anomalies; it is for the community of practitioners of the discipline to decide whether a given anomaly is just a puzzle or an indicator of the paradigm being crisis-ridden. Once a paradigm is declared crisis-ridden, search and research begin for a new and better paradigm. During this search, the scientific debate becomes radical. The crisis-ridden paradigm is given up only after a new paradigm is accepted to replace it.
In the process of search for an alternative paradigm, scientists make a comparative analysis of competing
theories. The choice between paradigms is not based on logic and experimentation. It is settled by the consensus of
the concerned scientific community. In other words, it is influenced by the value judgements that are held by that
particular scientific community. Different scientists may put forward their own arguments for and against available
paradigms. They might advocate that the selected theory solves the important problems. But these are all value
judgements as no objective criteria are there to discriminate between important and simple problems. Kuhn claims
that the question of values can be answered only in terms of criteria that are outside the boundary of normal science
totally and it is externality of the criteria that makes paradigm debate revolutionary. In simple words, it is not
possible to justify the choice of criteria on the basis of mathematical equations, experimentation or logic but it
needs to consider the basic values held by the scientific community of that enterprise. Therefore, he gave importance to the concept of the scientific community. It means that we need to define scientific practice in terms of paradigms and paradigmatic changes, and paradigmatic changes in terms of the scientific community that shares and brings about these changes. But the concept of a scientific community is sociological in nature; therefore the ultimate terms for the explication of scientific activity are sociological in nature.
Kuhn claims that two paradigms speak two different languages: even when two paradigms use the same term, it carries different meanings in the two paradigms. Kuhn compares a paradigm shift to a gestalt shift. He therefore concludes that scientific progress is ongoing and that there is no absolute standard of truth which can stand forever. Truth is intra-paradigmatic, i.e. it is true in relation to a particular paradigm, and there is no truth lying outside all paradigms.
Q. 2. What is the distinction between positive measures and normative measures of inequality? Discuss Atkinson’s index and Dalton’s index as normative measures of inequality.
Ans. Positive Measures: If dispersion is present in a distribution due to unequal values, inequality is said to exist in the distribution. Those measures of inequality which are based on the statistical properties of the distribution are known as positive measures of inequality. For example, the relative share of total income is given by f_i x_i / (Nµ), where f_i denotes the frequency of occurrence of income x_i, and N = ∑ f_i.
Relative Range
A measure of relative dispersion, the relative range, is also a measure of inequality. It is defined as the relative difference between the highest income and the lowest income, and is given by
RR1 = (Max_i x_i − Min_i x_i) / µ
If income is equally distributed, then RR1 = 0.
If one person received all the income, then RR1 is maximum.
If one wants to make the index lie in the interval between 0 and 1, the relative range is defined as
RR2 = (Max_i x_i − Min_i x_i) / (Nµ)
However, the major drawback of these range-based measures is that they are not based on all the values; hence they do not reflect the change in inequality when income is transferred between two non-extreme recipients.
Normative Measures
Measures of inequality which are articulated through the explicit incorporation of a social welfare function, or of social welfare considerations, are known as normative measures of inequality. Dalton pointed out the issues related to the measurement of inequality: according to him, the economist is interested ‘not in the distribution as such, but in the effects of the distribution upon the distribution (and total amount) of economic welfare which may be derived from income’. Another fact is that though inequality is defined in terms of economic welfare, it has to be measured in terms of income. The main points around which the discussion revolves are:
(1) The relationship between income of a person and his welfare.
(2) The relationship between personal income-welfare functions, and
(3) The relationship between personal welfare and social welfare.
Dalton’s index and Atkinson’s index are the two sub-approaches within the normative approach. In Atkinson’s index, the new idea of equally distributed equivalent income is introduced, and the present level of income is compared with that equally distributed level of income; in Dalton’s approach, present social welfare is compared with that which could be obtained by distributing the total income equally. Sen’s index is a generalization of Atkinson’s index, while Theil’s index is based on information theory.
Dalton Index
According to Dalton’s assumptions, for a given amount of total income an equal distribution is preferable to an unequal one in terms of social welfare. It implies that for a given total income, the economic welfare of the society will be maximum when all incomes are equal. The inequality of any given distribution may therefore be defined via the ratio:
D1 = ∑_{i=1}^{N} U(x_i) / ∑_{i=1}^{N} U(µ) = ∑_{i=1}^{N} U(x_i) / [N U(µ)]
D1 equals one under complete equality, and Dalton’s measure of inequality is D = 1 − D1.
For calculating numerical magnitude, Dalton used Bernoulli’s hypothesis, according to which proportionate additions to income make equal additions to personal welfare, i.e.
dU_i = dx_i / x_i  =>  U_i = log x_i + c_i
Now, if every person has the same functional relationship (c_i = c for all i), Dalton’s index can be given as
D = 1 − (log µ_g + c) / (log µ + c)
where µ_g is the geometric mean of personal incomes and µ the arithmetic mean.
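Under the Bernoulli hypothesis the index can be computed directly. The sketch below uses a hypothetical income vector and takes the constant c = 0; both are assumptions for illustration only:

import math

# Minimal sketch of Dalton's index with U(x) = log x + c.
incomes = [5000, 8000, 12000, 20000, 55000]   # hypothetical income data
c = 0.0                                       # illustrative constant in the welfare function
N = len(incomes)
mu = sum(incomes) / N                         # arithmetic mean

# Ratio of actual welfare to the welfare of an equal distribution.
d1 = sum(math.log(x) + c for x in incomes) / (N * (math.log(mu) + c))
D = 1 - d1                                    # Dalton's inequality index

# Equivalent closed form via the geometric mean mu_g.
mu_g = math.exp(sum(math.log(x) for x in incomes) / N)
assert abs(D - (1 - (math.log(mu_g) + c) / (math.log(mu) + c))) < 1e-12
print(f"Dalton index D = {D:.4f}")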
Atkinson Index
Atkinson took a different view and did not accept Dalton’s measure, as D is not invariant with respect to positive linear transformations of the personal income-welfare functions. Through the equally distributed equivalent income, Atkinson redefined the index in such a way that the measurement would be invariant with respect to permitted transformations of the welfare numbers. Both distributions, the original and the new one, are supposed to yield the same level of welfare.
The Atkinson index is defined as one minus the ratio of the equally distributed equivalent mean income µ′ to the actual mean income µ. Thus, mathematically,
A = 1 − (µ′/µ)
Here 0 ≤ A ≤ 1: A = 0 under complete equality, and A approaches 1 under complete inequality. The index embodies an inequality-aversion parameter ε through the underlying welfare function; for ε ≠ 1, µ′ = [(1/N) ∑_i x_i^(1−ε)]^(1/(1−ε)).
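A minimal sketch of the computation, assuming a hypothetical income vector and an illustrative inequality-aversion parameter ε = 0.5 (ε = 1 corresponds to the logarithmic case, where µ′ is the geometric mean):

import math

# Minimal sketch of Atkinson's index.
incomes = [5000, 8000, 12000, 20000, 55000]   # hypothetical income data
epsilon = 0.5                                 # illustrative inequality aversion
N = len(incomes)
mu = sum(incomes) / N                         # actual mean income

if epsilon == 1:
    # Equally distributed equivalent income is the geometric mean.
    mu_e = math.exp(sum(math.log(x) for x in incomes) / N)
else:
    mu_e = (sum(x ** (1 - epsilon) for x in incomes) / N) ** (1 / (1 - epsilon))

A = 1 - mu_e / mu                             # Atkinson index
print(f"Atkinson index (eps = {epsilon}) = {A:.4f}")

Raising ε makes the index more sensitive to transfers at the lower end of the distribution.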
Section-B
Q. 3. ‘Research design deals with a logical problem and not a logistical problem’-Explain with the help of
an example.
Ans. Logical Truth and Material Truth: A statement that is logically true need not be materially true as well. For example:
Saints have no tongue.
Mahatma is a saint.
Therefore, Mahatma has no tongue.
The argument is logically valid, but there cannot be a person without a tongue; the statements in this case are used metaphorically.
Deductive Fallacies: Deductive fallacies are those where premises do not ensure that the given conclusion will
be arrived at. There are three main types of deductive fallacies:
(a) Formal (logical) fallacies: These fallacies affirm the consequent. For example,
Correct reasoning, or Modus Ponens:
If P then Q.
P.
Therefore, Q.
Formal Fallacy (affirming the consequent):
If P then Q.
Q.
Therefore, P.
(b) Verbal Fallacies (Fallacies of Composition): It involves a statement, which is true for a particular person or
event and it is generalized to include a group or all. For example, “If one stands up he can watch the show better; it
implies that if everyone stands up they can watch the show better.” In this case, what is true for an individual is not
true for the whole group.
(c) Material Fallacies: These are of many types.
(i) Post hoc ergo propter hoc (after this, therefore because of this): the fallacy of assuming that if one event follows another, the succeeding event is necessarily caused by the preceding event.
(ii) “Argument by analogy”: a statement which claims that when two events X and Y are similar, what is true for X is also true for Y; but it is not necessarily so. For example: China was overpopulated; it could control its population by the one-child norm. India is also overpopulated; therefore India can also control its birth rate by following the one-child norm.
(iii) “Appeal to Authority”: It refers to a statement in which the truth is sought to be asserted by referring to an
authority. For example, we are not bodies but souls playing parts through bodies because it is given in Holy Gita. So,
Gita is authority in this statement and it is claimed to be a truth that we are not bodies but souls playing part through
the bodies.
Q. 4. State the various assumptions of the Classical Linear Regression Model. How would you detect heteroskedasticity? What are its consequences?
Ans. The proof that OLS generates the best (minimum-variance) linear unbiased estimates is known as the Gauss-Markov theorem, but the proof requires several assumptions. These assumptions, known as the classical linear regression model (CLRM) assumptions, are the following:
• The model parameters are linear, meaning the regression coefficients don't enter the function being estimated as exponents (although the variables can have exponents).
• The values for the independent variables are derived from a random sample of the population, and they contain variability.
• The explanatory variables don't have perfect collinearity (that is, no independent variable can be expressed as a linear function of any other independent variables).
• The error term has zero conditional mean, meaning that the average error is zero at any specific value of the independent variable(s).
• The model has no heteroskedasticity (meaning the variance of the error is the same regardless of the independent variable's value).
• The model has no autocorrelation (the error term doesn't exhibit a systematic relationship over time).
In econometrics, an informal way of checking for heteroskedasticity is a graphical examination of the residuals. If you want to use graphs for this purpose, you first choose an independent variable that's likely to be responsible for the heteroskedasticity. Then you can construct a scatter diagram with the chosen independent variable and the squared residuals from your OLS regression. As for the consequences: under heteroskedasticity the OLS coefficient estimates remain unbiased, but they are no longer efficient, and the usual OLS standard errors are biased, so the t-tests and F-tests based on them become unreliable.
The following figure illustrates the typical pattern of the residuals if the error term is homoskedastic.
[Figure: scatter of squared residuals ε̂i² against X, showing a band of roughly constant spread.]
The next figure exhibits the potential existence of heteroskedasticity with various relationships between the
residual variance (squared residuals) and the values of the independent variable X. Each graph represents a specific
example, but the possible heteroskedasticity patterns are limitless because the core problem in this case is the
changing of the residual variances as the value of the independent variable X changes.
[Figure: six panels plotting squared residuals ε̂i² against X, each showing residual variance that changes with X, i.e. typical heteroskedasticity patterns.]
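The graphical check can be sketched as follows; the simulated data-generating process (error spread deliberately growing with X) and the use of NumPy and matplotlib are assumptions for illustration, not part of the original answer:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1, 10, n)
y = 2 + 3 * x + rng.normal(0, x)              # error std. dev. grows with x

X = np.column_stack([np.ones(n), x])          # regressor matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS coefficients
resid_sq = (y - X @ beta) ** 2                # squared residuals

plt.scatter(x, resid_sq, s=10)
plt.xlabel("X")
plt.ylabel("squared residuals")
plt.title("Spread widening with X suggests heteroskedasticity")
plt.show()

A fanning-out pattern in this plot corresponds to the heteroskedastic panels above; a flat band corresponds to the homoskedastic case.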
Q. 5. State the various uses of factor analysis for conducting research in social sciences. Discuss the methods
for estimating the parameters of a factor model.
Ans. Factor analysis is a statistical method used to describe variability among observed, correlated variables in
terms of a potentially lower number of unobserved variables called factors. For example, it is possible that variations
in six observed variables mainly reflect the variations in two unobserved (underlying) variables. Factor analysis
searches for such joint variations in response to unobserved latent variables. The observed variables are modelled as
linear combinations of the potential factors, plus "error" terms. Factor analysis looks for independent dimensions, which limits its applicability in the biological sciences. Followers of factor-analytic methods believe that the information gained about the interdependencies between observed variables can be used later to reduce the set of variables in a dataset. Factor analysis is not used to any significant degree in physics, biology and chemistry, but is used very heavily in psychometrics, personality theories, marketing, product management and operations research. Users of factor analysis believe that it helps to deal with data sets where there are large numbers of observed variables that are thought to reflect a smaller number of underlying/latent variables.
Factor analysis is related to principal component analysis (PCA), but the two are not identical. There has been
significant controversy in the field over differences between the two techniques. Clearly though, PCA is a more
basic version of exploratory factor analysis (EFA) that was developed in the early days prior to the advent of high-
speed computers. From the point of view of exploratory analysis, the eigenvalues of PCA are inflated component
loadings, i.e., contaminated with error variance.
The purpose of factor analysis is to characterize the correlations between the variables x_a, of which the observations x_ai are a particular instance or set of observations. The variables are first standardized:
z_ai = (x_ai − µ_a) / σ_a
where the sample mean is
µ_a = (1/N) ∑_i x_ai
and the sample variance is
σ_a² = (1/N) ∑_i (x_ai − µ_a)²
The standardized variables are then expressed as linear combinations of common factors F_pi plus error terms ε_ai. The factors are taken to be standardized and mutually uncorrelated,
(1/N) ∑_i F_pi F_qi = δ_pq
where δ_pq is the Kronecker delta (0 when p ≠ q and 1 when p = q). The errors are assumed to be independent of the factors:
∑_i F_pi ε_ai = 0
The parameters of the model (the factor loadings and the error variances) are commonly estimated by the principal factor (principal component) method or by maximum likelihood.
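As a sketch of estimation in practice, the snippet below simulates six observed variables driven by two latent factors and recovers the loadings by maximum likelihood; the simulated data, the choice of two factors and the use of scikit-learn's FactorAnalysis are assumptions for illustration (the principal factor method would be a common alternative estimator):

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 300
f = rng.normal(size=(n, 2))                   # two latent factors
load = rng.normal(size=(2, 6))                # true loadings (unknown in practice)
x = f @ load + 0.5 * rng.normal(size=(n, 6))  # six observed variables plus noise

z = (x - x.mean(axis=0)) / x.std(axis=0)      # standardize, as in the text
fa = FactorAnalysis(n_components=2).fit(z)    # maximum-likelihood factor analysis
print(fa.components_)                         # estimated factor loadings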
Or
Explain the various steps involved in processing and analysis of qualitative data.
Ans. Data Collection at Different Levels: Using social resource maps, discussions etc., data can be gathered at the village level, and through semi-structured interviews at the household level. Other qualitative approaches at the village level include preference ranking and the opinions of villagers, which are basically reliable for a smaller region only. Another reliable qualitative technique is RRA/PRA. The qualitative method investigates the why and how of decision-making, not just what, where and when; hence smaller but focused samples are more often needed than large samples.
Tools for Qualitative Data Collection
Depending upon the nature of qualitative study, the tools which are used for data collection are:
1. Social mapping–of the village, i.e. collecting basic information such as the name of the household head, possession of agricultural land, LPG, TV etc. of the villagers.
2. Resource mapping/transect walk–This tool is helpful in providing information such as the types of trees around the village, the cropping pattern, the type of fuel used, etc. It is teamwork involving the contribution of villagers as well.
3. Wealth ranking of households–The criteria of wealth ranking include types of house and farm animals
owned, size of house, type of vehicle owned etc.
4. Preference ranking–It provides insights into usage pattern in order to guide and control the related programme.
5. Semi-structured interviews–The semi-structured interviews take the form of a conversation, in which the researchers uncover new clues and open up new dimensions.
6. Group discussions–In this approach, a small number of subjects are brought together to discuss topics related
to programme.
Participatory Mapping: People, Health, Nutrition
Rural people have a greater capacity to map, model, quantify and rank than has been commonly supposed. Actual or potential applications for health- and nutrition-related programmes include participatory social, demographic and health mapping of villages; seasonal analysis of deprivation and disease incidence; ranking of wealth and well-being; matrix ranking; and time lines and trend analysis. To realize the potential of these approaches, outside professionals must overcome trained disabilities by being humble and showing respect, thus facilitating learning from people. Rapid procedures in health and nutrition have largely evolved separately, with surprisingly little cross-fertilization with other schools and practitioners of RRA, and have evidently drawn upon medical anthropology, emphasizing qualitative assessment.
Seasonal Analysis: The seasonal round guides the analysis of qualitative data on annual cycles of activities, increasing understanding of exposures and their variation. Generally, villagers in India have the ability to estimate and rank conditions like the number of days of rain, the amount of rain, soil moisture etc. The occurrence of disease by season is also one of the conditions indicated.
Ranking of Wealth and Well-Being: It is the simplest method to estimate relative wealth in the society. Various
NGOs are using this method to identify the poorest and those most at risk.
Matrix Ranking and Scoring: Listing and ranking of, for example, livestock species by the attributes or outputs for which they are most valued. There is a variety of techniques to score or rank aspects of livelihoods and production systems (for example, animal feeds) against different criteria (for example, nutritive quality, palatability, availability).
Ranking is simply putting things in order, from best to worst, smallest to largest, etc. Scoring involves farmers
assigning a numerical value to each of a set of things, perhaps between zero and ten. This is classically done on a
matrix, drawn on the ground using local materials or on paper.
Time Lines and Trends: These are either verbal or visual chronologies of important trends or events that influence a topic under consideration. After studying past events, this approach provides a framework for discussing the changes that have taken place.
Techniques of Data Collection
The process involves a combination of different methods and techniques, depending upon the research study and the presence of various interlinked factors. Data can be collected from both primary and secondary sources.
sources.
Secondary Data Collection
Secondary data are data that are collected by persons or agencies for purposes other than solving the problem at
hand.
The sources of secondary data can be:
(a) Various Government Publications: Government and private organisations collect data related to business, trade, prices, consumption, production, industries, income, health, etc. These are a very dominant source of secondary data. The Central Statistical Organisation, the National Sample Survey Organisation, the Office of the Registrar General and Census Commissioner of India, the Directorate of Economics and Statistics, and the Labour Bureau (Ministry of Labour) are some of the agencies behind these government publications.
(b) Foreign or International Publication Bodies: Governments around the world and international agencies frequently publish reports on the data collected by them on various aspects. For example, the UN’s Statistical Yearbook, the Demographic Yearbook, etc. can be named in this category.
(c) The Other Sources of Secondary Data: Journals and Newspapers, diaries, letters etc.
PRIMARY DATA COLLECTION
Semi-structured Interviews: These are the backbone of primary data collection, including that for livestock
research. The term Semi-structured is the key component – checklists should be used flexibly to aid the interviewers
and to ensure that nothing relevant is missed out, rather than as questionnaires. The essential part of such interviews
is to develop a relationship with the community, which is done by listening to the people and by discussing their
problems.
Group Discussions: This approach is advantageous over interviews, as more information can be obtained from a comparatively large number of people in less time. It is also used for confirming information. The selected group must be homogeneous in nature.
Focus Group Discussions: A focus group is a form of qualitative research in which a group of people are asked
about their perceptions, opinions, beliefs, and attitudes towards a product, service, concept, advertisement, idea, or
packaging. Questions are asked in an interactive group setting where participants are free to talk with other group
members. A focus group is composed of 6 to 8 individuals selected at random on the basis of certain characteristics relevant to the research. The objective is to obtain information on participants' beliefs or perceptions about the topic of interest.
Direct Observations: The observation method is best suited and typically used to investigate the after-effects of any natural or man-made disaster. For example, if an area suffers from flood, earthquake or famine, the government or an aid agency may desire to know the extent of people's suffering and their needs. A team of trained investigators is sent to the affected area to observe things directly. The information obtained is valuable for checking differences between knowledge and actual practices.
Key Informants: Key informants are a major source of information. The principal advantages of the key informant technique relate to the quality of data that can be obtained in a relatively short period of time. But the informants are unlikely to represent, or even understand, the majority view of the individuals in their community, and any difference in status between informant and researcher can result in an uncomfortable interaction. The identification of key informants may be in error, because some societies may attract people who wish to improve their status but do not have the necessary skills of a true key informant. Informants have been differentiated from informers, who are more likely to be biased and to have their own agenda. Government officials, local heads of personal services, local shop owners etc. can be key informants.
Village or Community Profiles: This approach involves making notes of the location and other relevant information. During initial efforts at developing a relationship with the people of a village, it is useful to make a map and to write the information down on paper. It is a handy tool and has proved extremely valuable during data collection.
Aerial Surveys: Aerial surveys can be used while collecting qualitative data, but they are a quite expensive technique. With advances in technology, aerial surveying has stepped into the digital age, providing digital aerial photography, electronic mapping, photogrammetric and Geographic Information System (GIS) services. These methods are used to take photographs of the area under consideration.
ANALYSIS AND REPORT WRITING
In data analysis, the notes which have been taken by different members are compared before writing the final report, as every member has his or her own perspective. The transcription and analysis of qualitative data are more challenging and require judgement skills.
and requires judgement skills. Systematic coding and ethnographic summarization are the two basic approaches,
which are extremely helpful in the analysis of the qualitative data. The coding process enables researchers quickly to
retrieve and collect together all the text and other data that they have associated with some thematic idea so that they
can be examined together and different cases can be compared in that respect while ethnographic summarization is
a powerful way of opening up and extending understandings of how human beings live in the world. It is a relational
approach to social life in which the researcher is fully implicated. The ethnographer requires the skills of patience, endurance, perspicacity and diplomacy. The analysis of data can be done manually or by computer using word-processing programmes. Computers facilitate the work, but total dependency on them is unwise, as it carries the risk of an oversimplified analysis. Investigators also use content analysis, which is considered a scholarly methodology in the humanities by which texts are studied as to authorship, authenticity, or meaning. After the
ethnographic summarization the analysis concentrates on the nature of research question and nature of those people
for whom the research is conducted. If the research question is exploratory then the emphasis is given on alternative
possibilities, while in case of testing a hypothesis, the best answer of the problem is chosen.
Method of Analysis: Identification of themes and incidence density are the two methods of content analysis.
Theme identification is associated with themes, patterns and responses, in which a variable is listed according to a given group. The group is the unit of analysis and an effective way of deciding the important characteristics of the problem under concern. Graphic representation is preferred if the findings are interpreted for community members
themselves as it is easily understood by them. But for complex data the method of ethnographic summary is adopted.
An ethnographic understanding is developed through close exploration of several sources of data. Using these data
sources as a foundation, the ethnographer relies on a cultural frame of analysis.
Incidence density is defined as the number of times a theme is mentioned within each group. Under this method of analysis, the narrative text is coded using theme identification.
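A minimal sketch of incidence-density counting on hypothetical coded transcripts (the group names and theme codes are invented purely for illustration):

from collections import Counter

# Coded transcripts: each group maps to the list of theme mentions found in its text.
coded = {
    "women_group": ["water", "fuel", "water", "health"],
    "farmers_group": ["credit", "water", "seeds", "credit"],
}
for group, themes in coded.items():
    print(group, dict(Counter(themes)))       # theme -> number of mentions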
General Findings: For qualitative data, we are primarily interested in detailed and specific findings. If you are using a mixed approach, which we strongly recommend, make sure to triangulate your findings or describe how the findings supplement each other and help paint a more complete picture. Adoption of the Rapid Results Approach
(RRA) as a programme implementation instrument within the public service and beyond is therefore consistent with
the quest for results, capacity enhancement, and ultimately ensuring delivery of development results. RRA achieves
systematic change through a series of small-scale, results producing and momentum-building initiatives. In addition
Rapid Results Approach taps into the human desire to succeed, creates real empowerment, motivation and innovation
in working towards results. It strengthens accountability and commitment for results and unleashes and enhances
implementation capacity. The premise of the Rapid-Results Approach is to create a context for organizational learning
and for enhancing implementation capacity, by helping organizations work on sharply defined 100-day initiatives
that dovetail into short, medium and long term plans. It is therefore a management tool that accelerates implementation
and the achievement of performance goals. In so doing it helps challenge leaders to continually adapt and refine their
overall implementation strategy based on what works and what does not work on the ground. Rapid Results Approach
has been implemented in different country situations in different parts of the world and in different sectors of the
economy.
Q. 6. Discuss the characteristics of data flowing from the agencies involved in compilation of agricultural
data.
Ans. Agricultural Data : In the context of the strategy for agricultural development, knowledge of the detailed
structure and characteristics of agricultural holdings is imperative for responsive and efficient planning and
implementation of the programmes. For this purpose, it is essential to have information by operational holdings as
distinct from ownership holdings. Information by ownership holding is no doubt useful to have an idea of the
distribution of wealth but information by operational holdings is more important for implementation of the Agricultural
Development Programmes. As such an operational holding is defined as “all land which is used wholly or partly for
agricultural production and is operated as one technical unit by one person alone or with others without regard to
title, legal form, size or location”. The main sources of data related to agriculture can be obtained from the Directorate
of Economics and Statistics (DESMOA) in the Department of Agriculture and Cooperation and the Animal Husbandry
Statistics Division (AHSD) of the Department of Animal Husbandry, Dairy and Fisheries in the Ministry of Agriculture.
“Cost of Cultivation in India” and the monthly bulletin “Agricultural Situation in India” are other publications which publish agricultural data. Some of the important data include the quinquennial agricultural census, the livestock census and the input survey; the cost of cultivation studies; the annual estimates of crop production; and the integrated sample survey to estimate the production of major livestock products.
Agricultural Census: Agriculture Census forms part of a broader system of collection of Agricultural Statistics.
It is a large-scale statistical operation for the collection and derivation of quantitative information about the structure
of agriculture in the country. An agricultural operational holding is the ultimate unit for taking decision for development
of Agriculture at micro level. It is for this reason that an operational holding is taken as the statistical unit of data
collection for describing the structure of agriculture. Through Agriculture Census, it is endeavored to collect basic
data on important aspects of agricultural economy for all the operational holdings in the country. Aggregation of data
is done at various levels of administrative units. The agricultural census was started in 1970-71, and the seventh census relates to 2001. It is carried out in three phases. The first is the preparation of a list of all holdings with data
on primary characteristics like area, location (code) of the holding. The second is the collection of detailed data on
the irrigation status, the cropping pattern and tenancy particulars of the holding. The third phase i.e. the Input Survey
is associated with the collection of data on the pattern of input-use across crops, regions and size-groups of holdings.
The availability of infrastructural facilities and use of chemical fertilizers, organic manures, pesticides etc. are the
inputs covered. The results of the agricultural census and input surveys are available at tehsil level, on computerized CD-ROM and on the internet as well.
Studies on Cost of Cultivation: The costs of cultivation of principal crops are published by DESMOA in its publication “Cost of Cultivation in India” and also in ASG, which help in estimating the crop-wise and State-wise costs of cultivation and production in respect of 29 principal crops. These estimates are used by the Commission for Agricultural Costs and Prices (CACP) for the analysis of a wide spectrum of data on variables like market prices, productivity of the crops concerned, and domestic and global inter-crop price parity, so that it can give its recommendations to Government on Minimum Support
Prices (MSP). The study of the cost of cultivation is required for determining the index of the terms of trade between
the agricultural and non-agricultural sectors. This is defined as:
Index of the Terms of Trade (ITT) = [Index of Prices Received (IPR) / Index of Prices Paid (IPP)] × 100. For example, if IPR = 120 and IPP = 100, ITT = 120, indicating that prices received by farmers have risen relative to prices paid.
Annual Estimates of Crop Production: DESMOA makes and releases annual estimates of area, production and yield for the principal crops: foodgrains, oilseeds, sugarcane, fibres and important commercial and horticultural crops. Estimates of area are based on a reporting system combining complete coverage and coverage by sample, while estimates of yield are based on a system of crop-cutting experiments and General Crop Estimation Surveys. It takes time to prepare these estimates.
The first estimate of the kharif crop is made in the middle of September and the second advance estimate of the
kharif crop and the first estimates of the rabi crop is made in January. The third advance estimate is made at the end
of March or the beginning of April and the fourth in June. The methodology for estimating area, yield of crops and
for making advance estimates are given in ASG.
Livestock Census: By livestock we mean farm animals like cattle, buffaloes, sheep, goats, pigs, horses and ponies, mules, donkeys, camels, yaks, dogs and rabbits. The seventeenth livestock census, the latest, was conducted in 2003, covering district-wise information on livestock, poultry, fisheries and also agricultural implements. The results of the 2003 livestock census are available on the website of the Department of Animal Husbandry in a query-based format. Some of the data regarding livestock are published in ASG and BAHS.
Data on Production of Major Livestock Products: AHSD provides the data on animal husbandry, dairying and fisheries. These are published in BAHS. The latest data, up to 2003-04, present:
(i) Long time series and state-wise short time series of estimates of production of milk, eggs, meat and wool.
(ii) State-wise and national estimates of per capita availability of milk and eggs.
(iii) Contribution of cows, buffaloes and goats to milk production, and of fowls and ducks to egg production, in different states in 2003-04.
(iv) Average annual growth rates of production of major livestock products: milk, eggs and wool.
(v) Short time series on imports and exports of livestock and their products, and on area under fodder crops, pastures and grazing in different states.
(vi) A short time series on the number of artificial inseminations performed in different states, and many more.
Agricultural Statistics at a Glance (ASG): Agricultural statistics covers rainfall statistics, area statistics
comprising the data on land use, area and production and yield statistics of various crops produced in the state. The
Agricultural Statistics System is very comprehensive and provides data on a wide range of topics such as crop area
and production, land use, irrigation, land holdings, agricultural prices and market intelligence, livestock, fisheries,
forestry, etc. It has been subjected to review several times since independence so as to make it adaptive to contemporary
changes in agricultural practices. Some of the important expert groups, which examined the working of the system
are: the Technical Committee on Coordination of Agricultural Statistics, the National Commission on Agriculture,
the High Level Evaluation Committee.
Crop and land use statistics form the backbone of the Agricultural Statistics System. Reliable and timely
information on crop area, crop production and land use is of great importance to planners and policy-makers for
efficient agricultural development and for taking decisions on procurement, storage, public distribution, export,
import and many other related issues. ASG provides data on land, cropping pattern, water, seeds, kinds of soil
present, fertilizers, pesticides etc.
Other kinds of data on the agricultural sector presented in ASG are related to procurement of grains like rice,
wheat and other crops like cotton, raw jute, oil seeds, pulses, and onion by public agencies, minimum support prices
(MSP) for different agricultural commodities, per capita availability of important articles of consumption, conversion
factors between primary and secondary agricultural commodities etc.
Other Sources of Data on Irrigation: The data on irrigation presented in ASG are a combination of a reporting system and sample surveys, provided by DESMOA. The hydrological data on all the important rivers in the country are provided by the Central Water Commission (CWC) under the Ministry of Water Resources, collected through 877 hydrological observation sites. Three irrigation censuses were conducted in 1986-87, 1993-94 and 2000-01, and their reports were released in 1993, 2000 and 2006. These reports provide information such as the ground water and surface water works used for irrigation, crop-wise utilization of the irrigation potential created, and the way water is distributed in the field: sprinkler, drip, open channel or underground irrigation.
Other Data on the Agricultural Sector: Data on forest cover are provided by the Forest Survey of India (FSI), which has used Remote Sensing (RS) technology since 1987. Data on the production of industrial wood and fuel wood
are available with the Principal Chief Conservator of Forests in the Ministry of Environment and Forests. The data
on wholesale prices of food and non-food articles, livestock products by quality, wholesale prices in international
markets of various commodities and markets for the current and earlier months are published in the “Agricultural
Situation in India”, a monthly publication of DESMOA, as a title “Statistical Tables”. Farm Harvest Prices of
Principal Crops in India is another publication of DESMOA, that gives farm harvest prices collected from the field
at the end of each crop season. Despite of these the “Indian Agriculture in Brief - 2000” and the “Bulletin on Food
Statistics 1998-2009 published by DESMOA also gave information related to agricultural sectors. The “Indian
N
Horticulture Data Base”, the Horticulture Production Yearbook both published by the Department of Agriculture
and Cooperation and The National Bank for Agriculture and Rural Development (NABARD) are another sources of
agricultural data. Statistics on area, production and yield of crops, are also available as time series in the Economic
Survey, Monthly Statistical Abstract of India and the RBI Handbook of Statistics on Indian Economy. The RBI
Handbook also provides information regarding the short and long-term loan accounts of farmers. Institutions like
SCBs, Co-operatives, RRBs and the Rural Electrification Corporation (REC) provide indirect credit to agriculture
and allied activities.
National Accounts Statistics (NAS) publishes data on the contribution of agriculture and its sub-sectors to GDP
and other measures of national/domestic product. NAS also provides information on value of output of various
agricultural crops including those of drugs, fibres, vegetables, livestock products like milk group products, meat
group products, eggs, wool, silkworm cocoons, inland fish and marine fish. The data on capital formation in agriculture,
animal husbandry, forestry and fishing is also available on the website of NAS.
Q. 7. What is Action Research? Briefly discuss the various steps involved in conducting Action Research.
Ans. Action research involves actively participating in a change situation, often via an existing organization,
whilst simultaneously conducting research. Action research can also be undertaken by larger organizations or
institutions, assisted or guided by professional researchers, with the aim of improving their strategies, practices and
knowledge of the environments within which they practice. As designers and stakeholders, researchers work with
others to propose a new course of action to help their community improve its work practices.
The steps and processes involved in planned change through action research are as follows. Action research is depicted as a cyclical process of change:
1. The cycle begins with a series of planning actions initiated by the client and the change agent working
together. The principal elements of this stage include a preliminary diagnosis, data gathering, feedback of results,
and joint action planning. In the language of systems theory, this is the input phase, in which the client system
becomes aware of problems as yet unidentified, realizes it may need outside help to effect changes, and shares with
the consultant the process of problem diagnosis.
2. The second stage of action research is the action, or transformation, phase. This stage includes actions relating
to learning processes (perhaps in the form of role analysis) and to planning and executing behavioral changes in the
client organization. As shown in Figure 1, feedback at this stage would move via Feedback Loop A and would have
the effect of altering previous planning to bring the learning activities of the client system into better alignment with
change objectives. Included in this stage is action-planning activity carried out jointly by the consultant and members
of the client system. Following the workshop or learning sessions, these action steps are carried out on the job as part
of the transformation stage.
3. The third stage of action research is the output or results phase. This stage includes actual changes in behaviour
(if any) resulting from corrective action steps taken following the second stage. Data are again gathered from the
client system so that progress can be determined and necessary adjustments in learning activities can be made.
Minor adjustments of this nature can be made in learning activities via Feedback Loop B, as seen in the figure below.
[Figure 1: The action research cycle: Input (action planning), Transformation (action steps), Output (results), linked by feedback loops.]
Or
Distinguish between any three of the following:
(i) Deductive approach and inductive approach for formulation of hypothesis.
Ans. Hypothetico-Deductive Model: There was a time when no role was given to theoretical explanations. The hypothetico-deductive model was put forward by Hempel and Carnap; it not only describes the structure of theories but also provides answers to the questions of the status and functions of theories.
The model concentrates on the problem of the structure of a theory. The propositions in a deductive system are seen as arranged in an order. The hypotheses are categorized into two kinds: higher-level hypotheses, which refer to theoretical entities and are not required to be tested directly; and lower-level hypotheses, which involve observable phenomena and can be tested against reality. The H-D model is concerned with the problem of the status of theoretical terms, which are not directly testable but need to be tested indirectly to gain successful confirmation of the theory in which the phenomena occur.
In the H-D model, theories are compared with the observed data. This model turns the old realist-instrumentalist controversy into a moot debate. Realists claim that a model which does not provide real entities for the terms it uses cannot be accepted; instrumentalists, on the other hand, believe that theories are only predictive in nature. The H-D model has accommodated both these concerns in one model and thereby defused the controversy. It rejects the idea that theories have no role and are only instruments. It emphasizes the following roles of theories:
(a) It has brought about generalizations in the specification of scientific laws.
(b) There is a feature of “formal simplicity” in the theories which enables them to make use of ‘powerful
mathematical machinery’.
(c) Theories enable a scientist to discover the interdependencies between observable facts and thereby help in the practical function of a theory.
(d) Theories are intellectual devices which serve the function of explanation of a phenomenon.
Therefore, we can say that H-D model has provided for far more substantial role for theories and theoretical
terms in comparison to its predecessors.
Inductive Probabilistic (I-P) Model: This model was developed by Hempel in the 1960s. Its characteristics are similar to those of the D-N model, except that the explanation consists of sentences describing the requisite initial conditions together with statistical laws, which confer a high inductive probability, rather than deductive certainty, upon the explanandum.
(iii) Cross section data and Time Series data
Ans. Time Series Data: A time series is defined as a series in which data are collected for different time periods. The interval between two successive time periods may remain the same or vary, but it generally remains the same to make comparisons more relevant. The time interval is known as the frequency of the time series. If the time interval is less than one year, it is called a high-frequency time series; if it is more than or equal to one year, it is called a low-frequency time series. Time series data have the severe limitation of non-stationarity, which gives birth to nonsense correlation.
Cross Section Data: In this type of data, we collect observations on the same variable at the same point of time but across different units. For example, suppose we are talking of the HDI indices of different countries in a particular year. Heterogeneity is the main issue with this type of data.
In statistics, a time series is a sequence of data points, typically measured at successive time instants spaced at uniform intervals. Time series data have a natural temporal ordering. This makes time series analysis distinct from other common data analysis problems, in which there is no natural ordering of the observations.
Cross-sectional data, or a cross section (of a study population), in statistics is a type of one-dimensional data set. Cross-sectional data refer to data collected by observing many subjects (such as individuals, firms or countries/regions) at the same point of time, or without regard to differences in time. Analysis of cross-sectional data usually consists of comparing the differences among the subjects.
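The contrast can be made concrete with toy data (all numbers invented purely for illustration): a time series follows one unit over time, while a cross section covers many units at one point of time.

import pandas as pd

# Time series: one country observed over several years.
ts = pd.Series([2.1, 2.5, 2.8, 3.0],
               index=[2001, 2002, 2003, 2004], name="gdp_growth")

# Cross section: many countries observed in the same year.
cs = pd.DataFrame({"country": ["A", "B", "C"],
                   "hdi_2003": [0.61, 0.74, 0.58]})
print(ts)
print(cs)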
(v) Sampling and non-sampling error.
Ans. Sampling Error: This error is attributed to fluctuations of sampling. Sampling error is due to the fact that only a subset of the population has been used to estimate the population parameters and draw inferences in a sample survey; it is completely absent in the census method.
Non-sampling Errors: These are due to certain causes which can be traced, and they may arise at any stage of the enquiry, viz., planning and execution of the survey, and collection, processing and analysis of the data. Non-sampling errors are thus present both in censuses and in sample surveys.
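Sampling error can be illustrated by simulation on a synthetic population (purely illustrative): each random sample yields a slightly different mean, and the fluctuation of these sample means around the population mean is the sampling error.

import numpy as np

rng = np.random.default_rng(42)
population = rng.exponential(scale=100, size=100_000)  # synthetic population

# Draw several small samples; each sample mean differs from the population mean.
sample_means = [rng.choice(population, size=50).mean() for _ in range(5)]
print("population mean:", round(population.mean(), 1))
print("sample means   :", [round(m, 1) for m in sample_means])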