A Rubric For Assessing Mathematical Modelling Problems
Mathematical modelling is a vital competency for students of all ages. In this study, we aim to
fill the research gap concerning valid and reliable tools for assessing and grading mathematical
modelling problems, particularly those reflecting multiple steps of the modelling cycle. We
present in this paper the design of a reliable and valid assessment tool aimed at gauging the
level of mathematical modelling associated with real-world modelling problems in a scientific-
engineering context. The study defines and bases the central modelling processes on the
proficiency levels identified in PISA Mathematics. A two-dimensional rubric was developed,
reflecting the combined assessment of the type and level of a modelling process. We identified
criteria that enable a clear comparison and differentiation among the different levels across each
of the modelling processes. These criteria allow for concrete theoretical definitions of the various
modelling processes, introducing a well-defined mathematical modelling framework from a
didactical viewpoint, which can potentially contribute to promoting modelling competencies
or the understanding of modelling by teachers and students. Theoretical, methodological and
practical implications are discussed.
1. Introduction
Mathematics is one of the most effective and important tools for providing solutions to real-world
situations and challenges, as well as for dealing with everyday situations as citizens of a modern society
(Blum & Niss, 1991; Li, 2013; Maaß et al., 2018). In the context of Science, Technology, Engineering
and Mathematics (STEM), mathematics is considered a basic scientific discipline, making it possible to
solve a wide range of problems arising from real-world situations or relationships related to the STEM
field (Common Core State Standards Initiative, 2010). Mathematical modelling provides a method for
better understanding of real-world situations. It is defined as a cyclic process that involves translating
from mathematics to reality and vice versa (Lesh & Doerr, 2003; Blum & Leiß, 2006; Kaiser & Sriraman,
2006; Niss et al., 2007).
Modelling is used in mathematics education in various ways, reflecting different purposes and
goals. At one end of the spectrum is educational modelling, which uses real-world situations to foster
understanding of mathematical concepts. On the other end of the spectrum is realistic or applied
modelling that aligns with PISA objectives (Giberti & Maffia, 2020). PISA, the Programme for
International Student Assessment of the Organization for Economic Co-operation and Development
(OECD), is one of the best-known assessment tools for mathematical modelling. Since 2012, the
modelling cycle has been particularly emphasized in PISA as an additional reporting category for
student proficiency (Stacey, 2015). The applied modelling view is more pragmatic, claiming that abstract
mathematics can be exploited to better understand real-life situations. In this approach, the modelling
process emphasizes the authentic aspect of the problem, thus presenting to the students the challenge
of each PISA item. However, it does not shed light on the characteristics of a real-life problem that invites
the application of mathematical modelling competency (Stacey, 2015).
In this study, we aim to design a reliable and valid assessment tool to gauge the level of mathematical
modelling associated with authentic modelling problems derived from real-world situations, in particular
within a scientific-engineering context.
to the real model. When engaging in this process, the learner undergoes a process of extraction of the
mathematics essential for analyzing, defining and solving the problem, using activities such as
identification of the problem's mathematical aspects, identification of meaningful variables, identification
of constraints, making assumptions, and choice of a suitable representation or construction of a new one.
Investigation of the model lies entirely within the mathematical world and simply means working
mathematically to solve the mathematical model. This process relates to the ability to implement
mathematical concepts, facts and algorithms in an effort to solve mathematically phrased problems and
reach mathematical conclusions. During the process of investigating the mathematical model, one must
carry out the necessary mathematical processes for getting results and finding a mathematical solution.
These processes include making arithmetic calculations, solving equations, carrying out symbolic
manipulations, extracting mathematical information from tables and figures, working with different
representations and analyzing data.
The final step is the interpretation of the mathematical result(s), making sure they respond well to
the real-world model. The interpretation process includes thinking about solutions and interpreting the
mathematical results. This action relates to the translation of the mathematical solutions discovered at
the end of the investigation process, returning to the problem’s real-life context, and a determination of
whether the results suit the given context. The process includes assessment of solutions or mathematical
deductions according to the context of the problem, and a determination of whether or not the results are
reasonable and logical in the given situation.
This simplified model of the modelling cycle corresponds with modelling cycles presented in
mathematics standards (e.g., Common Core State Standards Mathematics, 2010), and the modelling
cycle used in the PISA framework (OECD, 2013). The PISA conceptual framework defines
mathematical literacy as ‘an individual’s capacity to formulate, employ and interpret mathematics in a
variety of contexts’ (OECD, 2019, p. 75). This definition overlaps with the three main processes, reflected
in the model presented in Fig. 2. In terms of terminology, the mathematization step displayed in arrow
b corresponds with ‘formulate’, investigation of the model (displayed in arrow c) corresponds with
‘employ’ or ‘compute’, and interpretation (displayed in arrow d) corresponds with ‘interpret’.
The conceptual framework for this study is the simplified model of the modelling cycle. Due to its
strong connection to the PISA conceptual framework, and as the process of rubric design is based on the
proficiency levels identified in PISA mathematics, hereinafter we refer to the main steps of the modelling
cycles as Formulate, Employ and Interpret.
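To make the three processes concrete in these terms, the following minimal sketch traces one pass through the cycle in Python on a classic toy modelling problem of our own choosing (the bus scenario is illustrative and is not an i-MAT task; the function names simply mirror the Formulate, Employ and Interpret terminology).

```python
import math

# Toy situation (illustrative only): 402 students must be transported by
# buses that seat 50 passengers each. How many buses should be ordered?

def formulate(students: int, seats_per_bus: int) -> tuple[int, int]:
    # Formulate: identify the relevant quantities and set up the model,
    # here the requirement  buses * seats_per_bus >= students.
    return students, seats_per_bus

def employ(students: int, seats_per_bus: int) -> float:
    # Employ: work within mathematics -> buses >= students / seats_per_bus.
    return students / seats_per_bus

def interpret(mathematical_result: float) -> int:
    # Interpret: translate back to reality. 8.04 buses is mathematically
    # sound but not realisable, so the result is rounded up.
    return math.ceil(mathematical_result)

students, seats = formulate(402, 50)
print(interpret(employ(students, seats)))  # 9, not 8.04
```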
In this paper, we describe the development of a rubric for assessment of mathematical modelling
problems in a scientific-engineering context, i.e., authentic problems emphasizing the applicability of
mathematics in this context, with particular emphasis on technological and engineering developments.
The problems are defined as ‘authentic’, as they are based on existing workplace problems (workplace
mathematics) for scientists and engineers, and ‘simplified’ so that they suit the world of teachers and
students (Bakker, 2014; Kohen & Orenstein, 2021). The examples used to describe the rubric design are
retrieved from a pool of mathematical modelling tasks designed as part of the i-MAT (Integrated Math
& Technology) program, in which mathematics teaching materials suitable for the Israeli curriculum are
trajectory in order to know its exact trajectory and hypothesized hit site. This calculation is based on the
manner of solving a parabolic equation, i.e., solving an equation of a quadratic function, which is at the
heart of the Israeli ninth grade curriculum.
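To illustrate the kind of computation such a task invites, the following sketch models the rocket's height as a quadratic function and recovers the peak and the hypothesized hit site; the coefficients are invented for illustration and are not taken from the actual i-MAT task.

```python
import math

# The flight path is modelled as a parabola, h(x) = a*x^2 + b*x + c,
# with made-up coefficients (not the values used in the i-MAT task).
a, b, c = -0.05, 4.0, 0.0

# The axis of symmetry gives the highest point of the trajectory.
x_peak = -b / (2 * a)
h_peak = a * x_peak**2 + b * x_peak + c

# The hypothesized hit site is where the parabola meets the x axis, i.e.,
# a solution of the quadratic equation h(x) = 0.
disc = b**2 - 4 * a * c
roots = [(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)]

# Interpretation step: x = 0 is the launch point, not a hit site, so only
# the positive root is meaningful in the real-world context.
hit_site = max(r for r in roots if r > 0)
print(f"peak at x={x_peak:.1f}, height={h_peak:.1f}; hit site at x={hit_site:.1f}")
```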
However, and as reported in previous research (Stacey, 2015), regarding some of the levels, we found
it difficult to map the different statements of the modelling processes, as some of them did not clearly
belong to a specific process. In addition, when looking at each of the three processes separately, we found
it difficult to clearly differentiate among the sentences classified as belonging to each level, specifically
relating to levels next to each other. These challenges arose since the PISA proficiency levels aim to
represent the increase in skills and abilities by locating cut-off points across levels rather than attributing
them to the modelling processes. Thus, the second stage of the rubric design process included combining
sentences or sub-sentences from each pair of levels expressing similar belonging to a specific process. In
addition, when necessary, we changed the wording of the sentence so that it best expresses its belonging
to a particular process. An example of this can be seen in the formulation process in the move between
Level 1 and Level 2. The sentence classified for the formulation process at level 1 was ‘Students can
answer questions involving familiar contexts where all relevant information is present and defined. They
are able to identify information.’, and the sentence classified for this process at level 2 was ‘Students can
interpret and recognize situations in contexts that require no more than direct inference. They can extract
relevant information from a single source and make use of a single representational mode.’ In this case,
it is difficult to point to a concrete difference between the levels, particularly as the difference between
a familiar context and a context necessitating only a direct inference is not sufficiently clear to distinguish
two different levels of formulation. Additionally, we found that ‘interpret and recognize’ in the context
of formulating a mathematical model out of a real-world situation can simply be changed to ‘identify’.
Thus, in the second stage of rubric design, the lower level of the formulation process, which represents
the combination of the definitions appearing at level 1 and level 2, was defined as ‘Students can identify
situations involving familiar contexts necessitating a direct inference only. All relevant information is
present and the questions are clearly defined; they are able to extract relevant information from a single
source and make use of a single representational mode.’ This stage was finalized with the two lower
levels (1–2) combined into the low level, the third and fourth levels (3–4) combined into the intermediate
level and the two highest levels (5–6) combined into the high level.
In the third stage of the rubric design, we adapted the sentences so that they expressed the level of
the problem’s impetus for modelling rather than the students’ modelling abilities, since the rubric’s
aim is to assess the problems themselves rather than students’ performance.
3.2.1. Rubric validation—phase 1. The first phase included content validation by the authors of this paper
and seven graduate students who had changed their academic or career path from science or engineering
fields to mathematics education and were pursuing an advanced degree in the field. Thus, these students
had the ability to validate the transition from the
scientific-engineering context to the mathematical context and vice versa, for the modelling problems
that are at the base of this study.
The rubric for assessment of mathematical modelling problems in a scientific-engineering context
was validated during the mathematics education seminar course that was conducted during the spring
semester of 2020 and was held online as a result of an extensive transition to online learning in all
educational institutions in Israel due to COVID-19 restrictions. The course included both synchronous
and asynchronous lessons and was taken by graduate students retraining to teach mathematics from
engineering fields at a leading Israeli university. The validation process was carried out during two class
sessions, two academic hours long each. The first session included an exposure to the mathematical
proficiency scale of PISA’s conceptual framework (Table 1, left column) by the instructor of the course
(the first author). The students were then divided into three Zoom breakout rooms (2–3 students per
room) and were asked to match the sentences in the table to one of the three mathematical modelling
processes (formulating, employing or interpreting) and the appropriate level (low, intermediate or high),
similarly to the first stage of rubric design. Following this, a class discussion of the classifications ensued,
which ended with all the students in complete agreement. After this process was completed, the rubric
was presented and was found to match the agreement the class had reached at a 90% level.

Table 1. The initial stage of rubric design for assessment of mathematical modelling problems

Level 1
Original proficiency level (OECD, 2018): At Level 1, students can answer questions involving familiar contexts where all relevant information is present and the questions are clearly defined. They are able to identify information and to carry out routine procedures according to direct instructions in explicit situations. They can perform actions that are almost always obvious and follow immediately from the given stimuli.
Formulation process: At Level 1, students can answer questions involving familiar contexts where all relevant information is present and the questions are clearly defined. They are able to identify information.
Employment process: ...and to carry out routine procedures according to direct instructions in explicit situations. They can perform actions that are almost always obvious and follow immediately from the given stimuli.
Interpretation process: N/A

Level 2
Original proficiency level (OECD, 2018): At Level 2, students can interpret and recognize situations in contexts that require no more than direct inference. They can extract relevant information from a single source and make use of a single representational mode. Students at this level can employ basic algorithms, formulae, procedures or conventions to solve problems involving whole numbers. They are capable of making literal interpretations of the results.
Formulation process: At Level 2, students can interpret and recognize situations in contexts that require no more than direct inference. They can extract relevant information from a single source and make use of a single representational mode.
Employment process: Students at this level can employ basic algorithms, formulae, procedures or conventions to solve problems involving whole numbers.
Interpretation process: They are capable of making literal interpretations of the results.

Level 3
Original proficiency level (OECD, 2018): At Level 3, students can execute clearly described procedures, including those that require sequential decisions. Their interpretations are sufficiently sound to be a base for building a simple model. ...
Formulation process: Their interpretations are sufficiently sound to be a base for building a simple model. Students at this level can ...
Employment process: At Level 3, students can execute clearly described procedures, including those that require sequential decisions.
Interpretation process: Their solutions reflect that they have engaged in basic interpretation and reasoning.

Level 4
Original proficiency level (OECD, 2018): At Level 4, students can work effectively with explicit models for complex concrete situations that may involve constraints or call for making assumptions. They can select and integrate different representations, including symbolic, linking them directly to aspects of real-world situations. Students at this level can utilize their limited range of skills and can reason with some insight, in straightforward contexts. They can construct and communicate explanations and arguments based on their interpretations, arguments and actions.
Formulation process: At Level 4, students can work effectively with explicit models for complex concrete situations that may involve constraints or call for making assumptions. They can select and integrate different representations, including symbolic, linking them directly to aspects of real-world situations.
Employment process: Students at this level can utilize their limited range of skills and can reason with some insight, in straightforward contexts.
Interpretation process: They can construct and communicate explanations and arguments based on their interpretations, arguments and actions.

Level 5
Original proficiency level (OECD, 2018): At Level 5, students can develop and work with models for complex situations, identifying constraints and specifying assumptions. They can select, compare and evaluate appropriate problem-solving strategies for dealing with complex problems related to these models. Students at this level can work strategically using broad, well-developed thinking and reasoning skills, appropriate ...
Formulation process: At Level 5, students can develop and work with models for complex situations, identifying constraints and specifying assumptions.
Employment process: They can select, compare and evaluate appropriate problem-solving strategies for dealing with complex problems related to these models. Students at this level can work strategically using broad, well-developed thinking and reasoning skills, appropriate linked ...
Interpretation process: They begin to reflect on their work and can formulate and communicate their interpretations and reasoning.

Level 6
Original proficiency level (OECD, 2018): At Level 6, students can conceptualize, generalize and utilize information based on their investigations and modelling of complex problem situations and can use their knowledge in relatively non-standard contexts. They can link different information sources and representations and flexibly translate among them. Students at this level are capable of advanced mathematical thinking and reasoning. These students can apply this insight and understanding, along with a mastery of symbolic and formal mathematical operations and relationships, to develop new approaches and strategies for attacking novel situations. Students at this level can reflect on their actions and can formulate and precisely communicate their actions and reflections regarding their findings, interpretations, arguments and the appropriateness of these to the original situation.
Formulation process: At Level 6, students can conceptualize, generalize and utilize information based on their investigations and modelling of complex problem situations, and can use their knowledge in relatively non-standard contexts. They can link different information sources and representations and flexibly translate among them.
Employment process: Students at this level are capable of advanced mathematical thinking and reasoning. These students can apply this insight and understanding, along with a mastery of symbolic and formal mathematical operations and relationships, to develop new approaches and strategies for attacking novel situations.
Interpretation process: Students at this level can reflect on their actions and can formulate and precisely communicate their actions and reflections regarding their findings, interpretations, arguments and the appropriateness of these to the original situation.
3.2.2. Rubric validation—phase 2. The second phase included a process of content validation between the
second author and two mathematics education experts who are part of the R&D group that designs the
problems assessed in this study as part of the i-MAT program.
At this stage, the three experts underwent a similar process to the one the graduate students had
undergone, i.e., mapping of the sentences from the mathematical proficiency scale to one of the three
mathematical modelling processes (formulating, employing or interpreting) and the appropriate level
(low, intermediate or high). These experts showed a 96% match with the classification made by the
authors of this study during the design process of the rubric. The experts further performed the next two
stages of rubric design, i.e., the combining of sentences or sub-sentences from each pair of levels for each
of the modelling processes, and the transition to the terminology that expressed the problem’s impetus for
modelling. At these stages, major changes were made to simplify the definitions from a mathematical
point of view, in order to emphasize and highlight the differences among the levels for each process.
Also, when necessary, the wording of the sentences taken from the proficiency scale was updated. At the
end of the process, we had an initial validated rubric. Table 2 presents the section of the rubric describing
the formulation process.
After the validation of the initial rubric, the experts managed to identify criteria that allowed them to
make a clear comparison and differentiation among the three levels—low, intermediate and high. For
example, for the formulation process, four criteria were identified and defined, relating to each of the
three levels. Moreover, the three judges claimed that in some cases, not all criteria should be observed
in a problem when determining the level of each of the modelling processes. The division into separate
criteria, therefore, enabled a clear distinction between the various levels, and a refined process of coding
a problem’s level, i.e., observing a high level in one criterion will mark the whole process as high,
regardless of whether other criteria show an intermediate level or a low level.
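As a sketch of this coding rule, the snippet below takes the maximum level observed across the coded criteria (the numeric encoding is our own illustrative choice; the criterion names anticipate the four formulation criteria described next).

```python
LEVELS = {"low": 1, "intermediate": 2, "high": 3}

def process_level(criterion_codings: dict[str, str]) -> str:
    # The level of a modelling process is the maximum level observed
    # across its criteria: one high-level criterion marks the process high,
    # regardless of intermediate or low codings on the other criteria.
    return max(criterion_codings.values(), key=LEVELS.get)

codings = {
    "identification of situations": "intermediate",
    "identification of constraints and assumptions": "high",
    "familiarity with the mathematical context": "low",
    "type of information extraction": "intermediate",
}
print(process_level(codings))  # 'high'

# The same maximum rule later settles multi-section problems: the section
# that raises a process's level determines the problem's final level.
section_levels = ["intermediate", "intermediate", "high"]
print(max(section_levels, key=LEVELS.get))  # 'high'
```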
In this classification, the formulation process was defined on the basis of four criteria. The first
was identification of situations, relating to the degree to which the question invites identification of,
or generalization to, an existing model. The second criterion is identification of constraints and
assumptions, which relates to considering constraints and making appropriate assumptions, if needed.
The third criterion is familiarity with the mathematical context, relating to the familiarity and clarity of
the mathematical context. The fourth criterion is type of information extraction, which refers to the
extent to which the information is accessible and clear.

Table 3. Definition of the type of information extraction criterion in the formulation process, following
the experts’ content validation stage (low, intermediate and high level definitions)
After identification of the joint criterion of information extraction, we updated the definitions. This
process was based on a discussion among the judges regarding examples taken from the Iron Dome task
and others, which allowed one to concretely examine the wording of each criterion at each level, as well
as carry out an assessment of the accuracy of the rubric’s assessment process relating to the different
problems. Accordingly, some sentences were reworded. The major change consisted of rephrasing the
sentences mapped to the intermediate and high levels in order to emphasize the method of information
extraction, so that each starts with the phrase ‘extraction of...,’ as in the low-level definition. Other minor
rephrasings were made to sharpen the distinction and the transitions between levels.
For the lowest level, we emphasized that the relevant
information is presented explicitly, in order to highlight that in this level all the information needed is
presented clearly. For the intermediate level, the phrase ‘connection among information sources’ was
rewritten to say ‘direct integration of different information sources’, in order to differentiate between
the integration done at the intermediate level, which is a direct integration of information sources
or representations, and the integration done at the high level, where we need a connection between
information sources/representations, not necessarily through direct integration. For the high level, we cut
out the definition of effort, as it was found to be unclear and/or did not allow for an objective problem
definition. In addition, we added a uniform wording regarding information sources, in order to create
uniformity with the other levels. In this way we created clear cross-sections of the criteria, emphasizing
the differences between the different levels for each criterion. Table 3 presents the definitions made to
the type of information extraction criterion in the formulation process.
The rubric’s validation process ended when all three judges fully agreed regarding all the criteria and
their cross-sections, characterizing the classification into the different levels.
In an effort to examine the match among the different codings, the judges were asked to assess the
modelling level of the various examples based on the rubric, i.e., assessing the modelling level for each of
the modelling processes separately, and to mark in the rubric the criteria they used to code each process
at the different levels. When there was no agreement among the judges regarding a particular process,
they used the coded criteria to justify their choice, so that the discussion of the coding took place on the
basis of the different criteria. As a measure of inter-rater reliability, kappas (Fleiss & Cohen, 1973) were
calculated for each example. The kappa values ranged from 0.609 to 1, demonstrating an
intermediate-to-high reliability level. In addition, in order to examine agreement percentages, we carried
out an internal reliability test using the Krippendorff method (Hayes & Krippendorff, 2007), which
incorporates inter-rater agreement and accommodates the number of judges and multiple coding
levels. The test was carried out with 10,000 bootstrap samples using SPSS and demonstrated a high
reliability level (Krippendorff’s alpha = 0.84), with an agreement percentage of 80%.
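For readers who wish to reproduce this kind of reliability check, here is a minimal sketch using Fleiss' kappa as implemented in statsmodels; the ratings below are invented for illustration and are not the study's data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows: coded items (e.g., the three modelling processes of one example);
# columns: the three judges; values: 0 = low, 1 = intermediate, 2 = high.
ratings = np.array([
    [2, 2, 2],   # formulate: full agreement on 'high'
    [1, 2, 1],   # employ: one judge disagrees
    [2, 2, 2],   # interpret: full agreement on 'high'
])

# aggregate_raters turns rater-wise codes into an items x categories
# count table, the input format fleiss_kappa expects.
table, _ = aggregate_raters(ratings)
print(fleiss_kappa(table, method="fleiss"))

# Krippendorff's alpha can be obtained from the third-party `krippendorff`
# package, e.g.
# krippendorff.alpha(reliability_data=ratings.T, level_of_measurement="ordinal"),
# where reliability_data is a raters x units array.
```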
Below we present one of the Iron Dome examples analyzed (Example #3) (Fig. 5), its solution (Fig. 6),
and the coding the three judges gave it. It is important to mention that the rubric aims at the assessment
of a full example that includes a few sections rather than each section separately, as the examples are
constructed in a graduated manner on the basis of several inter-connected sections. For example, it is
possible that the solution of section b will rely on the formulating process carried out in section a or a
part of it, so that in such a case it is difficult to determine a separate analysis for each section. Thus,
it was decided that the assessment using the rubric would be carried out on a full example, with the final
modelling level determined across the sections, so that if a certain section raises the level of one of the
modelling processes, the final level is determined according to that section.
Figure 6 presents a suggested solution for this example.
Regarding the suggested solution, we note a rise from an intermediate to a high level of formulation for
the last two sections. In section ‘a’, we need identification and matching between the problem’s data
and a mathematical model (a quadratic function and its symmetry axis); the mathematical context is
familiar and the relevant information is provided clearly. In section ‘b’, the information needed to solve
the question is clearly presented, the mathematical context is familiar and clear and the only thing
necessary is information identification. In section ‘c’, we need to integrate information received
in former sections and match the data to a quadratic function equation; thus, in this case the level
of formulation is intermediate. In order to solve section ‘d’, two stages are required: the first is the
understanding that there is a need to find the point of intersection of the parabola with the x axis, the
second that there is a need to connect the different representations (graphic/algebraic) while considering
the constraints of an impossible zero point and the rocket’s hit site. In addition, in section ‘e’, there is a
need to generalize the model, while considering constraints from several information sources; thus it is
a formulation of a model describing a complex situation.
Table 4 presents the analysis done on the example from the Iron Dome task by the three experts, with
grades they assigned to each modelling process, along with the criteria by which they determined the
coding.
Table 4. Coding of example #3 from the Iron Dome task by three experts
As can be seen in Table 4, there was almost complete agreement between experts #1 and #3 regarding
the coding of each of the levels. The difference between them and expert #2 stemmed from the assessment
of the employment level. Expert #2 assessed this level in the example as high, when experts #1 and #3
assessed it as intermediate. After all the assessments were received, the experts met in an effort to discuss
their analysis, regarding both agreements and disagreements.
At the beginning of the meeting a summary of the results received from the different analyses was
presented by the second author of this study (Expert #1). This was followed by a deep discussion
of the elements the experts had agreed upon, making sure that their assessments regarding a specific
level stemmed from the same reason. The next stage included a discussion of their coding of each
of the modelling processes, particularly discussing the point they had disagreed upon, in an effort to
reach full agreement. Regarding the coding of the formulation process, while experts #1 and #2 based
their justification for choosing the high level on both criteria of ‘Identification of constraints and
assumptions’ and ‘Identification of situations’, expert #3 claimed that the criterion ‘Identification of
constraints and assumptions’ was sufficient for her to determine the high level of the formulation process
the problem invites. However, she also agreed that compatibility and generalization of the model were
necessary in order to deal with the situation presented in section ‘e’, where a new quadratic function
should be created, thus an agreement was reached regarding this criterion, leading to similar coding.
Regarding the coding of the employment level, expert #2 assessed the level as high, claiming that it
requires processes necessitating a series of informed decisions. Experts #1 and #3 claimed that the
question’s employment level was intermediate, as in sections ‘d’ and ‘e’, once the formulation process is
undertaken successfully, the mathematical steps needed to solve these sections require a series of
decisions and simple strategies (trial and error) that include examination of a number of quadratic
functions until one is found that meets the criteria. As a supportive argument, they offered the following
example: had the question been worded as ‘What is the range of possible y values of the
highest point for which the rocket falls in open areas?’, then, indeed, the employment process would
be defined at the high level, as in this situation a process of development of new strategies and
a series of informed decisions is needed. Expert #2 accepted these claims and agreed that the
employment process is, indeed, at an intermediate level.
Table 5. The rubric for assessing mathematical modelling problems in a scientific-engineering context
(for each modelling process and criterion, the low, intermediate and high level definitions complete the
stem ‘The problem invites ...’)
Relating to the coding of the interpretation process, experts #1 and #2 agreed regarding the same two
criteria, which led to the problem being classified as being at a high level. The third expert coded the
level of the interpretation process as high based on one criterion only, which the others agreed with. The
criterion she did not mark was ‘reporting mathematical solution’. In the ensuing discussion, experts #1
and #2 justified the need for this criterion in coding the level of the interpretation process by noting that
section ‘d’ calls for a re-evaluation of the mathematical solution within the context of the real world, since
one of the solutions of this section needs to be rejected for a reason that comes from the real world rather
than from the mathematical one. Expert #3 accepted this claim and agreed that this criterion also testifies
to a high level of the interpretation process. A similar process was carried out for the other examples,
yielding the final version of the rubric. Table 5 presents the full, final rubric.
as reflecting a different modelling level. In this connection between the modelling cycle and the
assessment of the level of each modelling process based on the PISA proficiency levels, this study offers
a theoretical contribution to the relation between educational modelling and applied modelling, which is
described in the literature as distinct (Kaiser & Sriraman, 2006).
Additionally, during the rubric’s validation process by experts, criteria that enable a clear comparison
and differentiation among the different levels across each of the modelling processes were identified.
Although these criteria are addressed in the proficiency scale, they are not explicitly declared, nor
separated; thus differences among the levels are inconclusive, and the assigning of each proficiency
To sum up, this study targets stakeholders who have an understanding of what lies behind authentic
mathematical modelling problems, both in their conceptualization and in the practical issues of effective
design and implementation of such problems. This requires a useful measurement methodology that
enables clarity regarding what and how to measure in such problems. The present study presents the
design process of an innovative assessment tool—a rubric for assessing the mathematical modelling level
of authentic problems, allowing us to characterize and categorize mathematical modelling problems. This
study can contribute to the theoretical understanding of the connection between authentic mathematical
modelling problems in a scientific-engineering context, reflecting applied modelling, and different
Acknowledgment
We would like to thank Dr. Ortal Nitzan, Mrs. Hadas Handelman, and all the graduate students who contributed to
the design of the rubric. Their role and contribution to this research are greatly appreciated.
References
Bakker, A. (2014) Characterising and developing vocational mathematical knowledge. Educ. Stud. Math., 86,
151–156.
Blum, W. (1996) Anwendungsbezüge im Mathematikunterricht—trends und perspektiven. Schrift. Didakt. Math.,
23, 15–38.
Blum, W. (2011) Can modelling be taught and learnt? Some answers from empirical research. Trends in Teaching
and Learning of Mathematical Modelling (G. Kaiser, W. Blum, R. Borromeo Ferri & G. Stillman eds).
Dordrecht: Springer, pp. 15–30.
Blum, W. & Leiß, D. (2006) “Filling up”—the problem of independence-preserving teacher interventions in lessons
286 Z. KOHEN AND Y. GHARRA-BADRAN
with demanding modelling tasks. In M. Bosch (Ed.), Proceedings of the Fourth Congress of the European
Society for Research in Mathematics Education (CERME 4). Barcelona, Spain: Universitat Ramon Llull
Editions, pp. 1623–1633.
Blum, W. & Niss, M. (1991) Applied mathematical problem solving, modelling, applications, and links to other
subjects—state, trends and issues in mathematics instruction. Educ. Stud. Math., 22, 37–68.
Blum, W., Galbraith, P., Henn, H.-W. & Niss, M. (eds.) (2007) Modelling and Applications in Mathematics
Education. New York: Springer.
Blum, W. (2015) Quality teaching of mathematical modelling: What do we know, what can we do? The
Proceedings of the 12th International Congress on Mathematical Education (S. J. Cho ed). Cham: Springer,
pp. 73–96.
Niss, M., Blum, W. & Galbraith, P. (2007, 2007) Introduction. Modelling and Applications in Mathematics
Education. The 14th ICMI Study (W. Blum, P. Galbraith, H.-W. Henn & M. Niss eds). New York, NY:
Springer Science + Business Media, LLC, pp. 3–32.
Organisation for Economic Cooperation and Development (OECD) (2013). PISA 2012 Assessment and
Analytical Framework. Paris: OECD Publishing, https://doi.org/10.1787/9789264190511-en.
Organisation for Economic Cooperation and Development (OECD) (2018). PISA 2021 Mathematics
Framework (Draft). Retrieved from http://www.oecd.org/pisa/publications/
Organisation for Economic Cooperation and Development (OECD) (2019). PISA 2018 Assessment and
Analytical Framework. Paris: OECD Publishing.
Zehavit Kohen, Ph.D., is an Assistant Professor at the Faculty of Education in Science and Technology at the
Technion and the head of the Mathematics Teachers Education & Development lab. Dr Kohen is the academic
director of the i-MAT program. Her research focuses on the design and integration of i-MAT materials in professional
communities of leading teachers and of teachers who integrate these materials in their classes, as well as the effect
on students’ learning. Her doctoral research (Summa Cum Laude, 2011) focused on developing pedagogical self-
regulation among preservice teachers in a technological environment, supported by reflection in different foci. During
the academic year 2015–2016, she was a Visiting Scholar at the Center to Support Excellence in Teaching at
Stanford University, where she investigated the professional development of early-career mathematics teachers who
participated in the Hollyhock Fellowship Program. During 2012–2017, she was a researcher at the Technion
Research and Development Foundation and at the Neaman Institute, Israel. Her research work focused on assessment
of STEM education and on the choice of and retention in STEM careers.
Yasmin Gharra-Badran is a member of the i-MAT assessment team. She is an MA student in the Faculty of
Education in Science and Technology and is working on her thesis in mathematics education under the supervision
of Dr Zehavit Kohen. Her research focuses on the development and validation of a unique rubric for assessing the
quality of mathematical modelling problems, which are embedded in hi-tech and technology real-world situations.
Today, Yasmin is a mathematics teacher and coordinator in the Sindiana school in Givat Haviva. She teaches
students in grades 10–12 and prepares them for the 5-point matriculation exam in mathematics. Yasmin has a BSc
in mathematics, statistics and operations research from the Technion.