C1 Education in Research
Research in Education
Summary
The aim of this chapter is to explore the concept of educational research, its purposes and processes.
It presents research as an integral and essential part of professional practice, and it explores some of
the key issues which researchers need to consider when planning their research, such as questions of
subjectivity and scope; positivist and interpretivist approaches; and the distinction between assumptions and hypotheses. In the process, it introduces some vocabulary that will be useful to you as a researcher. By the end of this chapter you should be able to formulate your own answers to the questions: 'Why do we do educational research?' and 'How do we articulate and pursue our research questions?'
Key words used in this chapter: qualitative, quantitative, subjectivity, interpretivist, positivist.
If someone asked us why we do educational research, we would probably come up with an answer along the
lines of: ‘We carry out research into education in order to help us – and others – to a better understanding of
what constitutes effective teaching and learning.’ The focus of the research may be about issues as disparate
as funding, student behaviour, inclusion, teacher education or social justice; but in the end the purpose of the
enquiry – the use to which its discoveries are put – will normally be to improve the effectiveness of our professional practice and the systems within which we operate to support learners in their learning. We shall be
arguing throughout this book that research, both into our own professional practice and into the impact which
policies have upon its context and content, is central to the concept of teaching as a profession.
Mo teaches Geography to 11- to 16-year-olds. He has noticed over the past three years that the girls consistently perform better than the boys, both in their homework and in classroom activities. This pattern is apparent across all year groups. Mo is undertaking a Master's degree in education as part of his continuing professional development, and decides that he would like to focus on this issue as part of his research project. As a
first step he must formulate his research question. After some thought, he comes up with the following:
• Why do the boys in my classes achieve less well overall than the girls?
We shall return to the wording of Mo's question shortly, but first take a moment to think of a question
about your own teaching, or the way your institution operates, or the functioning of the education system as a whole – the question you would most of all like to discover an answer to at this moment. For
example, it might be:
• To what extent do my head teacher's (or principal's) beliefs and priorities affect my classroom practice?
Or it might be:
• If national standards of achievement for post-16 learners are continuing to rise, why does classroom behaviour seem to be deteriorating?
Or it might be:
• Will rearranging the seating in my classroom improve my students' behaviour?
Take some time to think of your own particular question, and then write it down before reading on.
Now let's look again at Mo's question – ‘Why do the boys in my classes achieve less well overall than the
girls?’ – and consider it in more detail. Why, for example, might he need the answer, and what use could he
make of it?
An obvious reason for wishing to answer this question is that it might help him to arrive at strategies which
would enable him to help the boys improve their attainment levels and work to their full potential as the girls
already appear to be doing. Even as he formulates the question, he might well have some possible answers
in mind. For example, he might suspect that the subject matter, or the teaching resources, or even his own
style of teaching are likely to engage the girls more easily than the boys. The answer that he might have in
mind we can call his hypothesis. It might be a conclusion which he's come to as a result of his own observation and experience; or it might be an idea which has been proposed by another researcher whose work he's
read in an academic journal, and which he wants to test out for himself. On the other hand, he may have no
preconceived idea of the answer at all. In that case, his research will not be testing a hypothesis, but will be
a case of collecting data in order to form a possible answer or set of answers – answers which may in some
cases take him by surprise.
Having formulated his question, his next step will be to look at what has been published on this topic by other
teachers and academics. He may find that some possible answers or theories have already been suggested,
tested and even ‘proved’. If this is the case he may feel that his curiosity is satisfied. On the other hand, he
may decide to see whether these answers really do work when applied to his own professional practice. In
other words, he will continue to pursue the research question. The steps he will have taken to this point are: reflection on his practice, formulation of a question, and review of the literature. His next task will be to collect the necessary evidence; analyse it to discover what it tells him; compare what he finds with what was claimed in the literature; and draw some conclusions. If we summarise the whole process, the steps in his research journey now look like this:
Step 1. Reflection on his professional practice in order to identify a question. Mo has been thinking
about his teaching and the main issue which concerns him is the achievement level of boys, compared to girls.
Step 2. Formulation of the question. He decides to formulate this as a research question in order
to explore possible ways in which the issue could be addressed: ‘Why do the boys in my classes
achieve less well overall than the girls?’ One advantage of phrasing the research title as a question
is that it will help him to keep his focus when he is exploring the literature and planning the collection
and analysis of his data.
Step 3. Review of the literature which already exists on this topic. He uses the key words, ‘boys’
achievement’ to find articles in academic journals which throw some light on this question. He uses
the same key words to search specialist publications such as the Times Educational Supplement
(TES) for recent, relevant reports. He discovers a number of theories relating to boys’ achievement.
One of these suggests that boys respond less well than girls to continuous assessment, and tend to
perform better in time-constrained tests and examinations.
Step 4. Design of a research process to answer the question, or to test existing theories, for himself.
In the light of the literature, he decides to rephrase and refine his research question, giving it a sharper focus. It becomes: 'Do the boys in my class perform better on timed tests and less well on coursework than girls?' Now he will test this theory by introducing some timed tests as supplements to the
students’ coursework.
Step 5. Collection of data. He assesses the class using both timed tests and coursework.
Step 6. Analysis of data. He analyses the results of the timed tests and the coursework according to
student gender, and also compares the results of individual students in both assessment methods.
He finds that, overall, boys score better in the tests than girls, and that the majority of individual boys’
scores in the tests are higher than their scores in the coursework.
Step 7. Conclusion. He finds that his own research appears to confirm the claim that boys achieve
better in time-constrained tests than in continuous assessment, and resolves that he will look at ways
in which a greater element of timed testing could be introduced into school-based coursework.
He may – and we hope he will – want to add a further step, which is:
Step 8. The dissemination of his findings so that other educational professionals can learn and benefit from them.
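Steps 5 to 7 of Mo's journey amount to a small piece of quantitative analysis. As a hedged illustration of the kind of comparison Mo might run, here is a minimal Python sketch; the students, scores and gender labels are entirely invented for the example and are not taken from Mo's study.

```python
# Hypothetical records: (student, gender, timed-test score, coursework score).
records = [
    ("A", "boy", 62, 51), ("B", "boy", 58, 49), ("C", "boy", 70, 66),
    ("D", "girl", 64, 71), ("E", "girl", 59, 68), ("F", "girl", 73, 75),
]

def mean_scores(gender):
    """Average timed-test and coursework scores for one gender."""
    rows = [r for r in records if r[1] == gender]
    tests = sum(r[2] for r in rows) / len(rows)
    coursework = sum(r[3] for r in rows) / len(rows)
    return tests, coursework

# Step 6(a): compare overall averages by gender.
for g in ("boy", "girl"):
    t, c = mean_scores(g)
    print(f"{g}s: mean test score {t:.1f}, mean coursework score {c:.1f}")

# Step 6(b): how many individual boys scored higher in the test than in coursework?
boys_better_in_tests = sum(1 for r in records if r[1] == "boy" and r[2] > r[3])
print("boys scoring higher in tests than in coursework:", boys_better_in_tests)
```

With real data, Step 7 (the conclusion) would rest on whether both the group averages and the individual comparisons point the same way, as they happen to do in these invented figures.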
Activity
Mo's was a fairly straightforward example of practitioner research. Now let's look again at the other
examples that we listed:
To what extent do my head teacher's (or principal's) beliefs and priorities affect my classroom practice?
If national standards of achievement of post-16 learners are continuing to rise, why does classroom
behaviour seem to be deteriorating?
Will rearranging the seating in my classroom improve my students' behaviour?
• To what extent would each of these three research questions fit neatly into the model of the research process we've just looked at?
• Which of these three research questions do you think would be easiest to answer, and why?
You should give yourself some time to think carefully about this and make some notes. When you have
some ideas that you would be happy to share, you might find it useful to discuss them with someone
else, such as a colleague, a fellow student or a mentor.
So, is there anything about each of these research questions which might make it difficult to fit them neatly
into our first model of the research process? We suggest that you compare the notes you've made with what
follows.
1. To What Extent Do My Head Teacher's (or Principal's) Beliefs and Priorities Affect My Classroom Practice?
This question poses a number of problems. First, how will you reliably identify your head teacher's beliefs and
priorities? Are you going to assume they are implicit in his or her policies or actions or conversation? Won't
this raise problems of interpretation? How would you know that your understanding of the head teacher's beliefs was the same as your colleagues' understanding of them? Whose would be 'true'? Perhaps you could
interview the head teacher as part of your research, and ask him or her to describe their values, priorities and
beliefs. Ah, but then how would you know they were telling you what they genuinely believed, and not what
they thought you thought they ought to say? Or what the local authority or government thought they ought to
say? How could you ever really reliably identify those ‘beliefs and priorities’?
Then, how could you measure how they affect your own classroom practice? How could you know? What
would your evidence be? In what sense would it have any reliability? Could it ever have any status as a ‘fact’
or a ‘truth’?
Here we see that this type of research question, which sets out to enquire about personal values, motives and beliefs, is of a different order to one which compares student achievement under different types of assessment. It is the sort of question that might need rephrasing or reframing before we can attempt to answer it.
You might like to think of ways in which this question could be adapted in order to make it more accessible to
research.
2. Will Rearranging the Seating in My Classroom Improve My Students' Behaviour?
This question fits rather better into our initial model of the research process. Nevertheless, there are two
things about it which distinguish it from the 'boys' achievement' type of question. First, there is the fact that it
will probably lead us to try a number of different strategies. We can keep changing the seating arrangement
until we've tried every possible pattern. If there's no improvement in behaviour, the answer to our question
is probably ‘no’; but if one particular configuration coincides with an improvement in behaviour and does so
every time we try it, we have arrived at a ‘yes’ answer. And it is exactly that element of ‘trial and error’ which
is the first distinguishing feature of this question. It encourages us to try out strategies to find one that works.
The second feature which distinguishes it from the 'boys' achievement' type of question is that whatever answer we find is probably particular to that class of ours at that time. We're not trying to explore or answer a
wider question about ‘all students and all classes’ which we've found in the literature. We're looking to see
whether an intervention or action which we take will have a desired effect on that class. It may not work
with other classes, or for other teachers, but we are looking at our own teaching, here and now. This type of
question falls into the category of action research, and we shall come back to it in a later chapter.
In terms of which would be the easiest of those three questions to answer, this investigation about room layout
would probably be the one. And it is easiest to answer because it involves trying actions or strategies to see
what ‘works’. It doesn't rely on subjective assessments and value judgements which may call its results into
question.
Activity
• Or does it? Can you see how and where this question might well involve some degree of subjectivity?
• And if you've spotted the potential difficulty, have you also spotted a way to reduce the potential
for unreliable, subjective judgements in this research?
Take some time to think carefully about these two questions and make some notes. When you have
some ideas that you would be happy to share, you might find it useful to discuss them with someone
else, such as a colleague, a fellow student or a mentor.
Let's look now at the third question in our list. It was this:
3. If National Standards of Achievement at Post-16 Are Continuing to Rise, Why Does Classroom Behaviour
Seem to Be Deteriorating?
You will probably have noticed straight away that this question is different from the others in terms of scope.
It is asking a question which refers to trends on a national scale: trends in post-16 achievement and student behaviour. As the question stands, it is not one which you could address within the parameters of, for example, an MA research skills module. If you wanted to explore this issue as part of a postgraduate programme you would be advised to scale it down to your own institution or group of schools. The question then
might become: ‘Why are standards of student behaviour in the school deteriorating even though standards
of achievement at post-16 within the school are continuing to rise?’ This limits the scope of the enquiry to
within manageable parameters. It provides you with a question to which you have some possibility of finding
an answer, if you can design your research in a way that enables you to collect relevant data.
Activity
• Can you see anything which is still problematic about the revised version of the question?
When you have some ideas that you would be happy to share, you might find it useful to discuss them
with someone else, before reading on to compare your ideas with what we have to say.
You probably noticed very quickly that our revised question – ‘Why are standards of student behaviour in
the school deteriorating even though standards of achievement at post-16 within the school are continuing to
rise?’ – still makes an assumption which may not be easy to substantiate. While it should be straightforward
to find and present documentary evidence that post-16 achievement at the school has shown a steady improvement (if it has), it is another matter entirely to substantiate the assumption that standards of behaviour
have declined.
Activity
If you did not identify this issue as a problem, take your time to think it through now, until you feel confident that you understand why it could present difficulties for a researcher. Discuss it with a colleague,
mentor or fellow student if you wish.
Let's go back now, at last, to the research question we asked you to write down for yourself earlier in this
chapter – the question relating to your own professional practice for which you'd most like to find an answer.
Activity
• Consider whether your question fits within the model of the research process which we drew up
around the ‘boys’ achievement’ question.
• If it does, fill in the steps as we did for that question earlier. Here they are again:
1. Reflection on professional practice in order to identify a question
2. Formulation of the question
3. Review of the existing literature
4. Design of a research process
5. Collection of data
6. Analysis of data
7. Conclusion.
If your question doesn't fit the model process very well, or raises some of the questions or problems
we identified earlier, such as issues about:
• subjectivity;
• reliability;
• unquestioned assumptions;
• scope or scale,
then the best way to start on this task is to identify where your question does and does not fit the model
we've been looking at in this chapter, and to reflect on alternative ways of collecting data which help
you to throw some light on possible answers. This will make a useful reference point for some of the
theories and approaches we'll be covering in the chapters which follow.
This is a book about qualitative research. To explore what is meant by this term, let's take as our starting
point a short extract from one of the books listed in the recommended reading for this chapter:
It remains a mystery to me why those who work in education should attempt to aspire towards science when scientific methods, processes and codes of conduct at best are unclear and at worst
lack the objectivity, certainty, logicality and predictability which are falsely ascribed to them. Surely
educational research would do better to aspire to being systematic, credible, verifiable, justifiable,
useful, valuable and ‘trustworthy’ (Lincoln and Guba, 1985). (Wellington, 2000: 14)
There is a lot of argument packed into this one short paragraph. For one thing, Wellington is suggesting that
even ‘scientific’ research – often held up as a model of factual and disinterested objectivity – is not necessarily as objective as it is claimed to be. He then goes on to argue that ‘objectivity’ is anyway not necessarily the
best test of good research, and that other characteristics and qualities may be more important. The qualities
he mentions here are: systematic, credible, verifiable, justifiable, useful, valuable and ‘trustworthy’.
The point Wellington is making is an important one, because qualitative educational research, like research
in the other social sciences and the humanities, is sometimes subjected to criticism from those who favour
a quantitative or scientific model of research for being ‘too subjective’ or too much based on feelings and
personal responses. Feelings and personal responses are not accepted by such critics as being reliable data
in the same sense that numbers or percentages or anything else measurable in figures are. And of course
there's very little educational researchers can do about this, since by its very nature education is concerned
with human beings; and human beings are not predictable or static in the same way that inert materials or
fixed numbers are. So what Wellington is arguing is that we shouldn't feel we have to apologise for the fact
that our research in education is not often conducted like a laboratory experiment with measurements and
control groups, or the fact that our findings are not often reducible to repeatable formulae; but that we should
set ourselves standards (such as those listed above) which are appropriate to the more people-centred approach which our research often takes.
Although the notion of objectivity in educational research is sometimes problematic, this is not to say that
educational research is never about numbers and percentages, nor that it should be. When we undertake
research which measures something – responses, numbers of students, examination results – in finite terms,
and in which we present our findings in terms of numerical data, we call this quantitative research. An easy
way to remember this is that quantitative research measures quantity (although of course this is rather an
oversimplification).
Activity
Have a look again at Mo's research question and decide whether or not we could class this as quantitative research.
Do the boys in my class perform better on timed tests and less well on coursework than girls?
Of course, the answer to that last question is: ‘It depends what data you decide to gather and how you
gather it’.
The way we planned to investigate this research question earlier in the chapter made it a straightfor-
ward quantitative enquiry. We planned to measure and compare results of tests and coursework, and
to present our data as numerical scores and comparisons. So if you answered on those grounds that,
‘Yes. It's quantitative research’ you were right.
But what if we had decided to collect the data in a different way? What if we had asked the boys and girls
which means of assessment they preferred, or which they thought they performed best at, or whether they
thought they performed better than the opposite gender? What then? We might still have come out with some
sets of figures for our results: ‘so many boys said this’; ‘so many girls said that’. So would this still make it
quantitative research? And what if, instead of presenting the findings in terms of figures, we had chosen instead to quote the sort of things that the students had told us about their assessment preferences?
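To make that distinction concrete, here is a short Python sketch showing how the very same set of answers could be reported quantitatively, as counts, or qualitatively, as the students' own words. The responses below are invented for illustration and are not drawn from the chapter.

```python
# Hypothetical questionnaire responses: (gender, preferred assessment, comment).
responses = [
    ("boy", "timed test", "I like knowing it's all over in an hour."),
    ("boy", "timed test", "Coursework drags on for weeks."),
    ("girl", "coursework", "I can redraft until I'm happy with it."),
    ("girl", "coursework", "Tests make me panic."),
    ("girl", "timed test", "I revise well under pressure."),
]

# Quantitative presentation: tally the answers into counts by gender and preference.
counts = {}
for gender, preference, _ in responses:
    counts[(gender, preference)] = counts.get((gender, preference), 0) + 1
for (gender, preference), n in sorted(counts.items()):
    print(f"{n} {gender}(s) preferred the {preference}")

# Qualitative presentation: quote the students' own accounts instead.
for gender, preference, comment in responses:
    print(f'{gender}, prefers {preference}: "{comment}"')
```

The underlying data is identical in both loops; what changes is whether we reduce it to figures or present it as the participants' own words.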
Activity
Now look again at your own research question which you formulated earlier. You may have revised
or refined it since then. You may find it useful now to identify what sort of data is going to help you to
answer it: quantitative or qualitative? And why? The simplified table (see Table 1.1) may help you to
arrive at an answer.
Table 1.1
Qualitative data: stories, accounts and observations, presented in the form of, for example, pictures.
Quantitative data: numbers, percentages and scores, presented in the form of, for example, tables.
So, how do we use those terms, quantitative and qualitative? Well, we've seen that a quantitative approach to
research is often regarded as more ‘objective’ than a qualitative, people-focused approach. And we've seen
that data gathered in the process of research may be described as quantitative if it deals with finite measurements and numbers, and qualitative if it focuses on presenting or interpreting people's views, interactions or
values. We must be careful, however, not to assume that ‘quantitative’ means the same thing as ‘positivist’.
The positivist stance is usually typified by a relatively objective style and approach, and searches for ‘facts’
which can be generalised; whereas a more typical approach in small-scale educational research would be an
interpretivist one which acknowledges some degree of subjectivity in the researcher and other participants,
may be written in the first person, and seeks to throw light on a particular case or situation. But it may involve
the collection of qualitative or quantitative data, or both.
Table 1.2 is another simple table and summarises the main contrasts between the positivist and the interpretivist approach. It will help you to identify which sort of research you are dealing with when reading journal
articles, and it will help you to identify your standpoint in the research which you plan to undertake.
Interpretivist:
• Investigates by focusing on case studies and on people as individuals and groups, and on their histories
• Often uses the first person – I, me – when writing up the research (e.g. ‘I conducted an interview’)
• Researcher acknowledges their own viewpoint, values and preconceptions and explains the measures they have taken to prevent these from contaminating the data
• Its purpose is to throw some light on and develop understanding of particular cases and situations
• The researcher may treat others involved as participants in, rather than subjects of, the research

Positivist:
• Investigates using the model of the natural sciences
• Writes in the passive voice (e.g. ‘An interview was conducted’)
• Researcher claims the research has been conducted with maximum objectivity
• Its purpose is to discover ‘facts’ which can be generalised
• The researcher treats those involved as subjects of the research
Activity
‘Do the boys in my class perform better on timed tests and less well on coursework than girls?’
Using Table 1.2 above, you might find it useful to reflect on how (a) a positivist and (b) an interpretivist
researcher might approach this research and what claims they might make for the findings.
Let's look more closely now at some of the specialised vocabulary we've encountered so far, and some that
may be new to you. In research, the terminology we use is an important set of tools, and – as with any skilled
job – those tools have to be used with precision. In the dialogue below, Mo and his research tutor, Alia, dis-
cuss Mo's methodology – that is, his rationale for the data collection methods he plans to use.
Mo So when I'm planning to write this up, I need to write about method and methodology?
I'm not sure I quite understand what the distinction is.
Alia OK. Method is a description of what you do, in practical terms, to collect your data. So in
your case you're going to be explaining about introducing timed tests and comparing the results
by gender with coursework results, and so on. The methodology section is where you
provide a theoretical and philosophical justification for this choice. In other words, having
described the WHAT, you explain the WHY. Why you've designed the research in this way; why
you've chosen these participants and this number of participants; why you've opted for this
method of data collection. And you can draw on three main sources for this explanation. One
source would be literature about research: works such as Opie (2004), Wellington (2000),
Cohen et al. (2011) and so on, where the advantages and disadvantages of various procedures
and methods are discussed. Another would be published research which you've found
in the research journals – and, of course, in your case this has already played a large part in
shaping your question.
Mo So are you saying I need to write a watertight justification for the method I've used?
Alia No. Because no method is ‘watertight’. There are always potential flaws and doubts. But
what you do need to do is to demonstrate, as far as possible, that you're aware of these.
So, for example, when you write about trying out the timed tests you should acknowledge the
possibility that any improvement in the boys’ performance might be coincidental, or might be
due to the novelty of the activity rather than the nature of the test itself. You should acknowledge the potential for unreliability in your method and explain the steps you've taken to minimise this. In your case, you're going to repeat the tests to see whether the data is replicated
– in other words that the first time wasn't a fluke.
Mo Right. I think I've got that now. Method is what and how, and methodology is why. But can
we just go back to epistemology for a minute?
Alia Yes, of course. Epistemology is the term we use for the study or the theory of what
constitutes knowledge. For example, if you were to measure this room with an accurate tape
measure you would have a firm basis on which to make a claim about knowledge of its dimensions. But if you had never been inside the room and you simply asked someone who
had been inside to tell you how big they thought it was, your claim to knowledge about its
measurements would be on much shakier ground. Similarly, if you were to look for evidence
about boys’ achievement in tests and coursework by simply asking the boys’ opinions about
it, the epistemological questions there would be: ‘How can you know what they say is true?
How can you justify the opinions they express as a basis for a claim to knowledge?’
Mo So doesn't this call into question all data that's collected by means of interviews and open-
ended questionnaires?
Alia Well, yes, in a way it does. This is one of the difficulties the qualitative researcher faces.
In education our research naturally focuses on learning and teaching, which in themselves
are lived experiences. The sort of data we need is often only obtained by listening to people
or observing them or asking them questions. If we could get the answers by simply measuring or weighing people, life would be much simpler! But the questions we explore are often
complicated ones requiring data which draws on people's accounts of themselves and their
experiences. So we have to think hard and write clearly about how we justify the claim that
our findings constitute ‘knowledge’.
Mo And how does reliability fit into all this?
Alia It fits in very neatly. When we talk about research outcomes being ‘reliable’ we mean
that the same data would have emerged from the enquiry if it had been conducted by a dif-
ferent researcher, or by the same researcher using different data collecting methods. With
your research, for example, you will be making a comparison of test scores and assignment
grades and looking at correlations with gender. It should be the case that any other teacher or
researcher scrutinising your data will draw the same results or conclusions from it. So you're
on fairly safe ground. Your evidence should be pretty reliable. But if you were obtaining data
by using interviews, for example, your analysis of the data might be more open to influence
by your own preconceived ideas, and a different researcher might draw alternative conclusions or arguments from the same data.
Mo But the boys in my sample might perform differently in tests on different days. So the
results might not be reliably consistent. If I'd tested them the week before or the week afterwards, for example, the data could have been different.
Alia Yes, that's a possibility. You've tried to address that, though, by doing the tests more
than once. And that's good. The important thing is that you write all this up accurately in your
methodology section, so that you demonstrate that you're aware of the need for reliability
and that you've taken steps to improve it as far as possible.
Mo And do I need to worry about bias in my data?
Alia Yes and no. It depends on the context. For example, if you were using questionnaires,
and out of 100 only 15 were completed and returned, you would have to consider the possibility that your data will be biased.
Mo Why?
Alia Well, because it's possible – even probable – that the respondents who bothered to complete and return them were people with more interest in the topic of your enquiry than those
who didn't bother. And this would mean that they might hold views in common which wouldn't
have been apparent in the other 85 responses, had you had them. So that would be an instance in which your data might be biased. It could be like trying to find out people's opinion
of cats by only asking cat owners.
And then there's researcher bias. That would be where you only see in the data what you're
looking for and ignore anything else. Or where you choose only to question cat owners because you like cats yourself. Or where you have a policy axe to grind and only present those
aspects of your data which support your view.
Mo Got it. Thanks. So there was one other thing I wanted to ask you about and that's to do
with the sort of claims I can make about my research when I write it up. If I do demonstrate
that boys at my school do better in tests than in coursework, am I allowed to make a point
from that about male pupils in general?
Alia The short answer to that is ‘No’. But I need to qualify that a little bit. The thing about doing
small-scale research as part of your professional development is that it's often conducted
within one institution – your own. So at every stage of your research paper or assignment you
need to be absolutely explicit about that, all the way through from the title to the conclusions
you draw at the end. That way, you're being up front about the scale of your enquiry. And this
doesn't only mean explaining that the context is one school, but also that it's a particular year
group and a particular subject. It may be, for example, that boys perform well in maths tests,
but not in English tests. In other words, results of research done with a maths group will not
necessarily have transferability to the context of other subjects in the curriculum. And, in the
same way, we may find that conclusions drawn from work with a Year 9 group aren't transferable when we look to apply them to Year 10. So, findings may be specific to a particular
group at a particular stage in their education at a particular school. But that doesn't make the
research any the less valid and useful for you and your institution. And although your results
may not be generalisable to other schools or situations, your research may prove to be illuminative to a wider range of practitioners, because it may help shed light on issues in their
own institutions and provide them with a point of comparison. That's what you found, after all,
didn't you, when you were reading other people's research about boys’ and girls’ attainment?
You read what they had discovered in their own schools and decided to investigate whether
the same applied in your own.
Mo So I could claim that my results confirm the findings of the research I read?
Alia That may still be too strong a claim. But you do hope to test whether they are replicable
and, if they are, that would speak strongly for the reliability of that research and of your own.
Mo And all of these issues we've talked about need to be mentioned in my methodology?
Alia Not just mentioned, but discussed in an informed way. Just chucking in the terminology
won't cut the mustard!
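Alia's questionnaire example can be put into figures. This minimal Python sketch uses her numbers (100 questionnaires sent, 15 returned) together with an invented assumption – that 30 of the 100 recipients are strongly interested in the topic and account for 12 of the 15 returns – to show how self-selection can skew the respondent group:

```python
# Alia's figures: 100 questionnaires sent, only 15 completed and returned.
sent = 100
returned = 15

# Invented assumption for illustration: 30 of the 100 recipients are strongly
# interested in the topic, and they account for 12 of the 15 returns.
interested_in_population = 30
interested_among_respondents = 12

response_rate = returned / sent                               # 0.15
population_share = interested_in_population / sent            # 0.30
respondent_share = interested_among_respondents / returned    # 0.80

print(f"response rate: {response_rate:.0%}")
print(f"interested share of the whole population: {population_share:.0%}")
print(f"interested share of the respondents: {respondent_share:.0%}")
```

Under these assumed figures, the respondents are far from a miniature of the population: the interested minority makes up 80% of the returns but only 30% of the people surveyed – Alia's 'cat owners' problem in numbers.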
We began this chapter by looking at the question of why we undertake educational research. Our answer so far has been that we can use it to inform and enhance our practice. But it serves another
purpose, too. By involving us in reflection upon our own practice and how we can monitor, regulate
and improve it, it marks us out as professionals. Carr and Kemmis (1986: 9–10) argue that:
1. Which three key features do the authors identify as distinguishing what we mean by a ‘profession’?
3. How would you apply their argument to the context or sector in which you yourself teach?
Although this was written over a quarter of a century ago, we believe it is as relevant now as it has
ever been. The status of the professional working in the field of education and training is enhanced
by engagement in research and by the dissemination of good practice to the wider community. And
we would add to this a belief, which we shall return to in the chapters which follow, that qualitative
research in education is valuable above all for its potential to change lives for the better, both those of
teachers and of learners, and of the community at large.
Key Points
• Why we should see research as a defining aspect of professional engagement with practice.
Carr, W. and Kemmis, S. (1986) Becoming Critical: Education, Knowledge and Action Research. Lewes:
Falmer Press.
Cohen, L., Manion, L. and Morrison, K. (2011) Research Methods in Education. 7th edn. London: Routledge.
Lincoln, Y.S. and Guba, E.G. (1985) Naturalistic Inquiry. Beverly Hills, CA: Sage.
Tisdall, E.K.M., Davis, J.M. and Gallagher, M. (2009) Researching with Children and Young People. London: Sage.
McNiff, J. and Whitehead, J. (2009) Doing and Writing Action Research. London: Sage.
Wellington, J. (2000) Educational Research: Contemporary Issues and Practical Approaches. London: Continuum.
Wilson, E. (2009) School-based Research: A Guide for Education Students. London: Sage.
https://doi.org/10.4135/9781473957602