Individual-Based Models of Cultural Evolution: A Step-by-Step Guide Using R, 1st Edition

This book serves as a comprehensive guide to creating individual-based models (IBMs) of cultural evolution using R, aimed at researchers from various disciplines. It covers fundamental concepts of cultural evolution, advanced topics, and provides example code to facilitate understanding and application. The book emphasizes the importance of formal modeling in social sciences and offers step-by-step instructions for building models that simulate cultural dynamics.



Contents

Introduction

SECTION I
Basics

1 Unbiased transmission
2 Unbiased and biased mutation
3 Biased transmission: Direct bias
4 Biased transmission: Frequency-dependent indirect bias
5 Biased transmission: Demonstrator-based indirect bias
6 Vertical and horizontal transmission
7 Multiple traits models

SECTION II
Advanced topics: The evolution of cultural evolution

8 Rogers’ paradox
9 Rogers’ paradox: A solution

SECTION III
Advanced topics: Cultural inheritance

10 Reproduction and transformation
11 Social learning of social learning rules
12 Trait interdependence

SECTION IV
Advanced topics: Culture and populations

13 Demography
14 Social network structure
15 Group structured populations and migration

References
Index
Introduction

Aim of the book


The field of cultural evolution has emerged in the last few decades as a thriving, interdisciplinary effort to understand cultural change and cultural diversity within an evolutionary framework, using evolutionary tools, concepts, and methods. Given its roots in evolutionary biology, much of cultural evolution is grounded in, or inspired by, formal models. Yet many researchers interested in cultural evolution come from backgrounds that lack training in formal models, such as psychology, anthropology, and archaeology.
This book aims to partly address this gap by showing readers how to create individual-based models (IBMs, also known as agent-based models, or ABMs) of cultural evolution. We provide example code written in the programming language R, which has been widely adopted in the scientific community. We will go from very simple models of the basic processes of cultural evolution, such as biased transmission and cultural mutation, to more advanced topics such as the evolution of social learning, demographic effects, and social network analysis. Where possible we recreate existing models from the literature, so that readers can better understand those models and perhaps even extend them to address questions of their own interest.

What is cultural evolution?


The theory of evolution is typically applied to genetic change. Darwin pointed out that the
diversity and complexity of living things can be explained in terms of a deceptively simple
process: (1) organisms vary in their characteristics, (2) these characteristics are inherited
from parent to offspring, and (3) those characteristics that make an organism more likely to
survive and reproduce will tend to increase in frequency over time. That’s pretty much it.
Since Darwin, biologists have filled in many of the details of this abstract idea. Geneticists
have shown that heritable ‘characteristics’ are determined by genes and have worked out
where genetic variation comes from (e.g. mutation, recombination, migration) and how
genetic inheritance works (e.g. via Mendel’s laws, and DNA). The details of selection have
been explored, revealing the many reasons why some genes spread and others don’t. Others
realised that not all biological change results from selection; it can also result from random
processes like population bottlenecks (genetic drift).
The modern theory of cultural evolution began from the observation that culture con-
stitutes a similar evolutionary process to that outlined previously. ‘Culture’ is defined as
information that passes from one individual to another socially, rather than genetically.
This could include what we colloquially call knowledge, beliefs, ideas, attitudes, customs,
words or values. These are all learned from others via various ‘social learning’ mechanisms

DOI: 10.4324/9781003282068-1

such as imitation and spoken/written language. The key point is that social learning is an
inheritance system. Cultural characteristics (or cultural traits) vary across individuals, they
are passed from individual to individual, and in many cases, some traits are more likely to
spread than others. This is Darwin’s insight, applied to culture. Cultural evolution research-
ers think that we can use similar evolutionary concepts, tools, and methods to explain the
diversity and complexity of culture, just as biologists have done for the diversity and com-
plexity of living forms. We hope that the models in this book will help the reader to under-
stand many of the aforementioned principles, by creating simulations of various aspects of
cultural evolution.
Importantly, we do not need to assume that cultural evolution is identical to genetic
evolution. Many of the details will necessarily be different. To take an obvious example, we
inherit genetic information in the form of DNA only once and only from our two parents.
On the other hand, we can acquire cultural traits throughout our entire life from various
sources: our parents, teachers, long-dead authors’ books, and even strangers on the internet.
Cultural evolution researchers seek to build models and do research to fill in these details.
In the last part of the book we will also explore models that go beyond a strict analogy with biological evolution, focusing on features such as the fact that the ‘rules’ that regulate transmission can themselves culturally evolve, or that processes other than inheritance can create and stabilise culture.

Why model?
A formal model is a simplified version of reality, written in mathematical equations or com-
puter code. Formal models are useful because reality is complex. We can observe changes
in species or cultures over time, or particular patterns of biological or cultural diversity, but
there are always a vast array of possible causes for any particular pattern or trend and huge
numbers of variables interacting in many different ways. A formal model is a highly simplified recreation of a small part of this complex reality, containing only those few elements
and processes that the modeller suspects to be important. A model, unlike reality, can be
manipulated and probed to better understand how each part works. No model is ever a
complete recreation of reality. That would be pointless: we would have replaced a complex,
incomprehensible reality with a complex, incomprehensible model. Instead, models are
useful because of their simplicity.
Formal modelling is rare in the social sciences (with some exceptions, such as econom-
ics). Social scientists tend to be sceptical that very simple models can tell us anything
useful about something as immensely complex as human culture. But the clear lesson from
biology is that models are extremely useful in precisely this situation. Biologists face simi-
lar complexity in the natural world. Despite this, models are useful. Population genetics
models of the early 20th century helped to reconcile new findings in genetics with Darwin’s
theory of evolution. Ecological models helped understand interactions between species,
such as predator-prey dynamics. These models are hugely simplified: population genetics models typically make assumptions like infinitely large populations and random mating.
Even though these assumptions are of course unrealistic, the models are still capable of
producing useful predictions.
Another way to look at this is that all social scientists use models, but only some use
formal models. Most theories in the social sciences are purely verbal models. The problem
is that words can be imprecise, and verbal models contain all kinds of hidden or unstated
assumptions. The advantage of formal modelling is that we are forced to precisely specify
every element and process that we propose and make all of our assumptions explicit. In
comparison to verbal models, maths and programming code do not accept any ambiguity.
Models can also help to understand the consequences of our theories. Social systems,
like many others, are typically under the influence of several different interacting forces.
In isolation, the effects of these forces can be easy to predict. However, when several forces
interact the resulting dynamics quickly become non-trivial, which is why these systems are
sometimes referred to as complex systems. With verbal descriptions, figuring out the effects
of interactions is left to our insights. With formal models, we can set up systems with these
forces and observe the dynamics of their interactions.

Why individual-based models?


There are several different types of formal models. Some models describe the behaviour of
a system at the population level, tracking overall frequencies or other descriptive statistics
of traits without explicitly modelling individuals. For example, a model can specify that
the frequency of a cultural trait A at time t depends on its frequency at time t − 1. Perhaps
it doubles at each time step. Other models, instead, describe the behaviour of a system
at the individual level, explicitly modelling the individual entities that possess the traits.
Imagine the same question, but now we specify that, in a population of N individuals, each individual observes, at each time step, a random number of other individuals and, if at least one of them has trait A, copies that trait.
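To make the contrast concrete, here is a minimal sketch in R (a toy illustration of the two hypothetical examples above, not a model from this book; for simplicity each individual observes a single random demonstrator):

```r
# Population level: track only the frequency p of trait A, doubling each step
p <- 0.05
for (t in 2:4) p[t] <- min(1, 2 * p[t - 1])
p  # 0.05 0.10 0.20 0.40

# Individual level: explicitly model N individuals; each observes one random
# other individual and copies trait A if that demonstrator has it
N <- 1000
has_A <- sample(c(TRUE, FALSE), N, replace = TRUE, prob = c(0.05, 0.95))
demonstrator <- sample(1:N, N, replace = TRUE)
has_A <- has_A | has_A[demonstrator]
mean(has_A)  # close to, but not exactly, double the starting frequency
```

Note that the individual-level version is stochastic: unlike the deterministic recursion, its outcome varies from run to run.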
Another distinction concerns models that are analytically tractable and models that
are not. The former are mathematical models that consist of sets of equations that can
be solved to find specific answers (e.g. equilibria points). Our population-level model
described earlier would fit this description. A big advantage of these models is that they
can provide insight into the dynamics of a system for a wide range of parameters, or exact
results for specific questions. However, this approach requires the studied dynamics to be
rather simple. It would be more difficult (or perhaps impossible) to write and analytically
solve the systems of equations necessary to describe the behaviours of the single individuals
in the second model.
Often, when we want or need to describe the behaviour at the individual level – if,
for example, individuals differ in their characteristics, exhibit learning or adaptation, or
are embedded in social networks – trying to write a system of equations may not be the
best strategy. Instead, we need to write code and let the computer program run. These are
individual-based models. These models are both individual level (i.e. they specify the char-
acteristics of the individuals and some rules by which those individuals interact or change
over time) and simulations (i.e. they are not solved analytically, but simulated through a
computer program).
Simulations have greater flexibility than analytical models. Due to their structure, they
are often more intuitive to understand, especially for people with little training in math-
ematics. However, it is also important to be aware of their downsides. For example, gener-
alisations are often not possible, and statements only hold for parameters (or sets thereof)
that have been simulated. Another potential downside is that the high flexibility of simu-
lations can quickly lead to models that are too complex, and it can be hard to understand
what is happening inside the model. That’s why, hopefully, our IBMs are simple enough to
understand and provide a gateway into cultural evolution modelling.

How to use this book – the programming


All the code in this book is written in R. Originally R had a strong focus on statistical
data analysis. Its growing user base has turned R into a more general-purpose programming
language. While R is used less often for modelling, it is widely taught in many university
departments and is the subject of lots of online tutorials and support forums. It is quite likely that many readers already have some experience of R for data analysis and visualisation, and building on that experience is easier than learning another programming language for IBMs. Also, if your IBMs run in R, you can use the same language to analyse the output and plot the results.
All the code for running the simulation is included in the book, and commented, often
line by line. As a reader, you can therefore read the book and run the code alongside. For convenience, all the code can be found in the online version of this book at https://acerbialberto.com/IBM-cultevo/. Of course, you can just read the book, but running the code as you go will give you more direct experience of how the code executes and will allow you to play around with parameters and commands. The best way of learning – especially modelling! – is to try it out yourself.
We assume that the reader has basic knowledge of R (and RStudio, which provides a
powerful user interface for R), including installing it, setting it up, updating it, installing
packages and running code. We strived to proceed gradually from very simple to more com-
plex code and to explain all the non-obvious newly introduced programming techniques,
but a basic knowledge of R as a programming language, for example, the use of variables,
data frames, functions, subsetting, and loops, will greatly facilitate the reading.
In the book, we use the tidyverse package. In particular, we use the tidyverse-typical data
structures (tibbles rather than data frames) and the ggplot graphic system (rather than the
base R plot function). These are user-friendly and widely employed, and they will make
it easier to manipulate data and create professional-looking visualisations. The tidyverse,
however, has not been created with IBMs in mind. We have therefore not religiously stuck
to tidyverse, and we also use functions, data structures and programming styles that go
beyond the tidyverse (in Chapter 7, for example, we show how matrices are more effective
than tibbles in computationally heavy simulations).
Aside from the tidyverse package, we have limited the number of additional packages
needed to run the simulations wherever possible. The few packages needed to compile some
of the code are explicitly introduced in the book when needed.

How to use this book – the simulations


The book is intended – as the title suggests – as a step-by-step guide. If you are interested
in modelling cultural evolution, or in modelling in general, and you do not have previous
experience, you should go through the simulations we describe chapter by chapter. The
chapters build in complexity both from the programming and from the conceptual point of
view. Alternatively, if you are interested in specific models – and you feel confident in your
programming skills – feel free to go straight to the relevant chapter.
We organise the book in the following way. We start by presenting IBM versions of some
of the now-classic mathematical and population-level models described in the foundational
cultural evolution books, such as Robert Boyd and Peter Richerson’s Culture and the Evo-
lutionary Process and Luigi-Luca Cavalli-Sforza and Marc Feldman’s Cultural Transmission
and Evolution. The models do not add conceptually to the original analytical treatments.
However, they show how to use them to develop IBMs, and they provide several general
tools to build models that describe cultural evolution. Some of the subsequent chapters
develop aspects that are possible only with IBMs, for example, simulating cultural dynamics
with many different traits (Chapter 7).
We then move to what we call ‘Advanced topics’. These chapters deal with more recent
work in cultural evolution and include different perspectives, or they concern analyses
that are not customary in cultural evolution modelling (for example, network analysis in
Chapter 14).
The book does not present new models, views, or findings on cultural evolution. Instead, we try to provide some up-to-date possibilities that IBMs can offer cultural evolutionists. If, while reading the book, you are struck by an idea for a new model or an adaptation of one of the models that we present here, we have succeeded in our mission.

Conventions and formatting


In general, we follow the tidyverse style guide for naming functions and variables and formatting code.
Names of functions and variables use underscores to separate words and lowercase letters, for example, previous_population, biased_mutation. If in the same chapter we have more than one function for the same model (for example, because we gradually add parameters), they are numbered as unbiased_transmission_1(), unbiased_transmission_2(), and so on.
For the text, we use the following style conventions:

• names of functions and data structures are in fixed-width font, e.g. unbiased_transmission(), population, output
• technical terms are in quotes, e.g. ‘geoms’, ‘chr’
• names of variables are in italics, e.g. p, generation

Further reading
For some recent general books on cultural evolution, you can check Mesoudi (2011),
Morin (2015), Henrich (2016), Laland (2017), and Acerbi (2019).
Seminal books are by Cavalli-Sforza and Feldman (1981) and Boyd and Richerson
(1985).
For more on the virtues of formal models for social scientists, with a cultural evolution perspective, see Smaldino (2017). Smaldino (2020) is dedicated to good practices for translating verbal theories into formal, especially individual-based, models.
A good introduction to R programming is Grolemund (2014). Another general introduction, with a specific focus on the tidyverse logic, is Wickham and Grolemund (2017).
Section I

Basics
1 Unbiased transmission

We start by simulating a simple case of unbiased cultural transmission. We will detail each
step of the simulation and explain the code line by line. In the following chapters, we will
reuse most of this initial model, building up the complexity of our simulations.

1.1 Initialising the simulation


Here we will simulate a case where each of N individuals possesses one of two mutually
exclusive cultural traits. We denote these alternative traits as A and B. For example, A
might be eating a vegetarian diet, and B might be eating a non-vegetarian diet. In reality,
traits are seldom as clear-cut (e.g. what about pescatarians?), but models are designed to cut away all the complexity to give tractable answers to simplified situations.
Our model has non-overlapping generations. That is, in each generation all N individu-
als die and are replaced with N new individuals. Again, this is an extreme but common
assumption in evolutionary models. It provides a simple way of simulating change over
time. Generations here could correspond to biological generations, but could equally be
‘cultural generations’ (or learning episodes).
Each new individual of each new generation picks a member of the previous generation
at random and copies their cultural trait. This is known as unbiased oblique cultural trans-
mission. It is unbiased because traits are copied entirely at random. The term oblique means
that members of one generation learn from those of the previous, non-overlapping, genera-
tion. This is different from, for example, horizontal cultural transmission, where individuals
copy members of the same generation, and vertical cultural transmission, where offspring
copy their biological parents.
Given the two traits A and B and an unbiased oblique cultural transmission, how is their
average frequency in the population going to change over time? To answer this question,
we need to keep track of the frequency of both traits. We will use p to indicate the propor-
tion of the population with trait A, which is simply the number of all individuals with trait
A divided by the number of all individuals. Because we only have two mutually exclusive
traits in our population, we know that the proportion of individuals with trait B must be
(1 − p). For example, if 70% of the population have trait A (p = 0.7), then the remaining
30% must have trait B (i.e. 1 − p = 1 − 0.7 = 0.3).
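This bookkeeping is a single line of R. As a standalone sketch (a toy population of ten individuals, anticipating the code introduced below):

```r
traits <- c("A", "B", "A", "A", "B", "A", "A", "A", "B", "A")
p <- sum(traits == "A") / length(traits)
p      # 0.7: proportion with trait A
1 - p  # 0.3: proportion with trait B
```

Because the traits are mutually exclusive, tracking p alone fully describes the population's composition.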
The output of the model will be a plot showing p over all generations up to the last
generation. Generations (or time steps) are denoted by t, where generation one is t = 1,
generation two is t = 2, up to the last generation t = tmax.

DOI: 10.4324/9781003282068-3

First, we need to specify the fixed parameters of the model. These are quantities that we
decide on at the start and do not change during the simulation. In this model, these are N
(the number of individuals) and t_max (the number of generations). Let’s start with N =
100 and t_max = 200:

N <- 100
t_max <- 200

Now we need to create our individuals. The only information we need to keep about our
individuals is their cultural trait (A or B). We’ll call population the data structure con-
taining the individuals. The type of data structure we have chosen here is a tibble. This is
a more user-friendly version of a data.frame. Tibbles, and the tibble command, are part of
the tidyverse library, which we need to call before creating the tibble. We will use other
commands from the tidyverse throughout the book.
Initially, we’ll give each individual either an A or B at random, using the sample()
command. This can be seen in the following code chunk. The sample() command takes
three arguments (i.e. inputs or options). The first argument lists the elements to pick at
random, in our case, the traits A and B. The second argument gives the number of times
to pick, in our case N times, once for each individual. The final argument says to replace
or reuse the elements specified in the first argument after they’ve been picked (otherwise
there would only be one copy of A and one copy of B, so we could only assign traits to two
individuals before running out). Within the tibble command, the word trait denotes the
name of the variable within the tibble that contains the random As and Bs, and the whole
tibble is assigned the name population.

library(tidyverse)
population <- tibble(trait = sample(c("A", "B"), N, replace = TRUE))

We can see the cultural traits of our population by simply entering its name in the R console:

population

## # A tibble: 100 x 1
## trait
## <chr>
## 1 A
## 2 A
## 3 B
## 4 A
## 5 B
## 6 B
## 7 B
## 8 B
## 9 A
## 10 A
## # ... with 90 more rows

As expected, there is a single column called trait containing As and Bs. The type of the
column, in this case <chr> (i.e. character), is reported below the name.

A specific individual’s trait can be retrieved using the square bracket notation in R. For
example, individual 4’s trait can be retrieved by typing:

population$trait[4]
## [1] "A"

This matches the fourth row in the previous table.


We also need a tibble to record the output of our simulation, that is, to track the trait frequency
p in each generation. This will have two columns with tmax rows, one row for each generation.
The first column is simply a counter of the generations, from 1 to tmax. This will be useful for plotting the output later. The other column should contain the values of p for each generation.
At this stage we don’t know what p will be in each generation, so for now let’s fill the
output tibble with ‘NA’s, which is R’s symbol for Not Available, or missing value. We can
use the rep() (repeat) command to repeat ‘NA’ tmax times. We’re using ‘NA’ rather than,
say, zero, because zero could be misinterpreted as p = 0, which would mean that all individu-
als have trait B. This would be misleading, because at the moment we haven’t yet calculated
p, so it’s non-existent, rather than zero.
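The distinction matters as soon as we summarise the output. A toy illustration of why ‘NA’ is safer than zero here:

```r
# With NA, not-yet-computed generations are simply excluded from summaries
p_with_na <- c(0.5, NA, NA)
mean(p_with_na, na.rm = TRUE)  # 0.5

# With zeros, the same summary would be wrongly dragged down,
# as if most individuals had trait B
p_with_zero <- c(0.5, 0, 0)
mean(p_with_zero)  # ~0.17
```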

output <- tibble(generation = 1:t_max, p = rep(NA, t_max))

We can, however, fill in the first value of p for our already-created first generation of individuals, held in population. The following command sums the number of As in population and divides it by N to get a proportion rather than an absolute number. It then puts this proportion in the first slot of p in output, the one for the first generation, t = 1. We can again write the name of the tibble, output, to see that it worked.

output$p[1] <- sum(population$trait == "A") / N


output

## # A tibble: 200 x 2
## generation p
## <int> <dbl>
## 1 1 0.54
## 2 2 NA
## 3 3 NA
## 4 4 NA
## 5 5 NA
## 6 6 NA
## 7 7 NA
## 8 8 NA
## 9 9 NA
## 10 10 NA
## # ... with 190 more rows

This first value of p will be close to 0.5, meaning that around 50 individuals have trait A and
50 have trait B. Even though sample() returns either trait with equal probability, this does
not necessarily mean that we will get exactly 50 As and 50 Bs. This happens with simula-
tions and finite population sizes: they are probabilistic (or stochastic), not deterministic.
Analogously, flipping a coin 100 times will not always give exactly 50 heads and 50 tails.
Sometimes we will get 51 heads, sometimes 49, and so on. To see this in our simulation, you
can rerun all of the previous code and you will get a different p.
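You can check the coin-flipping analogy directly with sample() (a quick sketch; the exact counts will differ on every run):

```r
flips <- sample(c("heads", "tails"), 100, replace = TRUE)
sum(flips == "heads")  # close to 50, but rarely exactly 50

# Repeating the experiment many times shows the stochastic spread
heads <- replicate(1000,
  sum(sample(c("heads", "tails"), 100, replace = TRUE) == "heads"))
range(heads)  # typically something like 35 to 65
```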

1.2 Execute generation turnover many times


Now that we have set up the population, we can simulate what individuals do in each generation. We iterate these actions over tmax generations. In each generation, we will:

• copy the current individuals to a separate tibble called previous_population to use as demonstrators for the new individuals; this allows us to implement oblique transmission with its non-overlapping generations, rather than mixing up the generations
• create a new generation of individuals, each of whose trait is picked at random from the previous_population tibble
• calculate p for this new generation and store it in the appropriate slot in output

To iterate, we’ll use a for-loop, using t to track the generation. We’ve already done generation 1 so we’ll start at generation 2. The random picking of models is done with sample() again, but this time picking from the traits held in previous_population.
Note that we have added comments briefly explaining what each line does. This is perhaps superfluous when the code is this simple, but it’s always good practice. Code often
grows organically. As code pieces are cut, pasted, and edited, they can lose their context.
Explaining what each line does lets other people – and a future, forgetful you – know
what’s going on.

for (t in 2:t_max) {
  # Copy the population tibble to previous_population tibble
  previous_population <- population

  # Randomly copy from previous generation’s individuals
  population <- tibble(trait = sample(previous_population$trait, N, replace = TRUE))

  # Get p and put it into the output slot for this generation t
  output$p[t] <- sum(population$trait == "A") / N
}

Now we should have 200 values of p stored in output, one for each generation. You can
list them by typing output, but more effective is to plot them.

1.3 Plotting the model results


We use ggplot() to plot our data. The syntax of ggplot may be slightly obscure at first, but
it forces us to have a clear picture of the data before plotting.
In the first line in the following code, we are telling ggplot that the data we want to plot
is in the tibble output. Then, with the command aes(), we declare the ‘aesthetics’ of the
plot, that is, how we want our data mapped in our plot. In this case, we want the values of p
on the y-axis, and the values of generation on the x-axis (this is why we created a column
in the output tibble, to keep the count of generations).

We then use geom_line(). In ggplot, ‘geoms’ describe what kind of visual representation should be plotted: lines, bars, boxes, and so on. This visual representation is independent of the mapping that we declared before with aes(). The same data, with the same
mapping, can be visually represented in many different ways. In this case, we are telling
ggplot to plot the data as a line graph. You can change geom_line() in the following code
to geom_point() to turn the graph into a scatter plot (there are many more geoms, and
we will see some of them in later chapters).
The remaining commands are mainly to make the plot look nicer. For example, with
ylim we set the y-axis limits to be between 0 and 1, that is, all the possible values of p. We
also use one of the basic black-and-white themes (theme_bw). ggplot automatically labels
the axis with the name of the tibble columns that are plotted. With the command labs()
we provide a more informative label for the y-axis.

ggplot(data = output, aes(y = p, x = generation)) +
  geom_line() +
  ylim(c(0, 1)) +
  theme_bw() +
  labs(y = "p (proportion of individuals with trait A)")

Figure 1.1 Random fluctuations of the proportion of trait A under unbiased cultural transmission

Unbiased transmission, or random copying, is by definition random. Hence, different runs of our simulation will generate different plots. If you rerun all the code you will get something different. In all cases, the proportion of individuals with trait A starts around 0.5 and then oscillates stochastically. Occasionally, p will reach and then stay at 0 or 1. At p = 0 there are no As and every individual possesses B. At p = 1 there are no Bs and every individual
possesses A. This is a typical feature of cultural drift. Analogous to genetic drift, in small populations and in the absence of migration and mutation (or innovation, in the case of cultural evolution), traits can be lost purely by chance after some generations.

1.4 Write a function to wrap the model code


What would happen if we increase population size N? Are we more or less likely to lose one
of the traits? Ideally, we would like to repeat the simulation to explore this idea in more
detail. As noted earlier, individual-based models like this one are probabilistic (or stochastic), thus it is essential to run simulations many times to understand what happens. With
our code scattered about in chunks, it is hard to quickly repeat the simulation. Instead, we
can wrap it all up in a function:

unbiased_transmission_1 <- function(N, t_max) {
  population <- tibble(trait = sample(c("A", "B"), N, replace = TRUE))

  output <- tibble(generation = 1:t_max, p = rep(NA, t_max))

  output$p[1] <- sum(population$trait == "A") / N

  for (t in 2:t_max) {
    # Copy individuals to previous_population tibble
    previous_population <- population

    # Randomly copy from previous generation
    population <- tibble(trait = sample(previous_population$trait, N, replace = TRUE))

    # Get p and put it into output slot for this generation t
    output$p[t] <- sum(population$trait == "A") / N
  }
  # Export data from function
  output
}

With the function() command we tell R to run the several lines of code enclosed in the two curly braces. Additionally, we declare two arguments that we will hand over whenever we execute the function, here N and t_max. As you can see, we have used the same code snippets that we already ran earlier. In addition, unbiased_transmission_1() ends with the line output. This means that this tibble will be exported from the function when it is run. This is useful for storing data from simulations wrapped in functions; otherwise, that data is lost after the function is executed.
When you run the previous code there will be no output in the terminal. All you have done is define the function, not actually run it: we have just told R what to do when we call the function. Now we can easily change the values of N and tmax. Let’s first try the same values of N and tmax as before, and save the output from the simulation into data_model.

data_model <- unbiased_transmission_1(N = 100, t_max = 200)

Let us also create a function to plot the data, so we do not need to rewrite all the plotting instructions each time. While this may seem impractical now, it is convenient to separate the function that runs the simulation from the function that plots the data, for various reasons. With more complicated models, we do not want to rerun a simulation just because we want to change some detail in the plot. It also makes conceptual sense to keep the raw output of the model separate from the various ways we can visualise it, or from further analyses we want to perform on it. As previously, the code is identical to what we already wrote:

plot_single_run <- function(data_model) {
  ggplot(data = data_model, aes(y = p, x = generation)) +
    geom_line() +
    ylim(c(0, 1)) +
    theme_bw() +
    labs(y = "p (proportion of individuals with trait A)")
}

When we now call plot_single_run() with the data_model tibble we get the following plot:

plot_single_run(data_model)

Figure 1.2 Random fluctuations of the proportion of trait A under unbiased cultural transmission
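With both functions defined, exploring the effect of population size on drift takes a single line (a sketch; every run will differ, but with a small N one trait typically goes to fixation quickly):

```r
# With N = 10, p usually hits 0 or 1 within relatively few generations
# and stays there
plot_single_run(unbiased_transmission_1(N = 10, t_max = 200))
```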
