NOTES FOR MATH 184A
STEVEN V SAM
1. Pigeon-hole principle
1.1. Basic version. The following is really obvious, but is a very important tool. The
proof illustrates how to make “obvious” things rigorous. It is important to always keep this
in mind especially in this course when many things you might want to use sound obvious.
There are many interesting ways to use this theorem which are not obvious.
Theorem 1.1 (Pigeon-hole principle (PHP)). Let n, k be positive integers with n > k. If n
objects are placed into k boxes, then there is a box that has at least 2 objects in it.
Proof. We will do proof by contradiction. So suppose that the statement is false. Then each
box has either 0 or 1 object in it. Let m be the number of boxes that have 1 object in it.
Then there are m objects total and hence n = m. However m ≤ k since there are k boxes,
but this contradicts our assumption that n > k.
Note that the objects can be anything and the boxes don’t literally have to be boxes.
Example 1.2. • Simple example: If we have 4 flagpoles and we put up 5 flags, then
there is some flagpole that has at least 2 flags on it.
• Draw 10 points in a square with unit side length. Then there is some pair of them that are less than .48 distance apart. There's some content here since the corners on opposite ends have distance $\sqrt{2} \approx 1.4$. Also, if we only have 9 points, we could arrange them in a $3 \times 3$ grid with spacing .5; the closest pairs of points are then .5 away from each other, so it is important that we have at least 10 points.
To see why the statement holds, divide the square into 9 equal subsquares. Then some little square has to contain at least 2 points in it (is it ok if the points are on the boundary segments?). Each subsquare has side length 1/3, and so the maximum distance between 2 points in the same subsquare is given by the length of its diagonal (why?), which is $\sqrt{(1/3)^2 + (1/3)^2} = \sqrt{2}/3 \approx 0.4714$.
Then some triangle must contain at least 3 points. Furthermore, each triangle fits
into a semicircle of radius 0.5.
In a crude sense, Ramsey theory is the natural generalization of PHP (we likely won’t
discuss it, see Chapter 13 for a start if you’re interested).
2. Induction
Induction is a proof technique that I expect that you’ve seen and grown familiar with in
a course on introduction to proofs. We will review it here.
2.1. Weak induction. Induction is used when we have a sequence of statements P(0), P(1), P(2), ... labeled by non-negative integers that we'd like to prove. For example, P(n) could be the statement: $\sum_{i=0}^n i = n(n+1)/2$. In order to prove that all of the statements P(n) are true using induction, we need to do 2 things:
• Prove that P(0) is true.
• Assuming that P(n) is true, use it to prove that P(n + 1) is true.
Let’s see how that works for our example:
• P(0) is the statement $\sum_{i=0}^0 i = 0 \cdot 1/2$. Both sides are 0, so the equality is valid.
• Now we assume that P(n) is true, i.e., that $\sum_{i=0}^n i = n(n+1)/2$. Now we want to prove that $\sum_{i=0}^{n+1} i = (n+1)(n+2)/2$. Add n + 1 to both sides of the original identity. Then the left side becomes $\sum_{i=0}^{n+1} i$ and the right side becomes $n(n+1)/2 + n + 1 = (n+1)(n/2+1) = (n+1)(n+2)/2$, so the new identity we want is valid.
Since we’ve completed the two required steps, we have proven that the summation identity
holds for all n.
Remark 2.1. Why does this work? It is intuitively clear: if we wanted to know that P (3)
is true, then we start with P (0), which is true by the first step. By the second step, we
know P (1) holds, and again by applying the second step, we then have P (2) and P (3). This
can be repeated for any value n. In more rigorous terms, this works because the natural
numbers are well-ordered: any subset of the natural numbers has a minimum element. To
see why this is relevant, assume induction doesn’t work: then let S be the set of n such that
P (n) is false. By well-ordering, S has a minimal element, call it N . If N = 0, then we’ve
contradicted the first step of induction. Otherwise, P(N − 1) is true since N − 1 ∉ S, but
now we’ve contradicted the second step of induction.
This may seem like more work than is necessary, but actually induction can be carried
out on index sets besides the natural numbers as long as there is some kind of well-ordering
floating around. We won’t get into this generalization though.
Remark 2.2. We have labeled the statements starting from 0, but sometimes it’s more
natural to start counting from 1 instead, or even some larger integer. The same reasoning
as above will apply for these variations. The first step “Prove that P (0) is true” is then
replaced by “Prove that P (1) is true” or wherever the start of your indexing occurs.
For the next statement, let’s clarify some terminology. A finite set of size n is a collection
of n different objects (n could be 0 in which case we call it the empty set and denote it ∅).
It could be {1, 2, . . . , n} or something more strange like {1, ?, U }. The names of the elements
aren’t really important. A subset T of a set S is another set all of whose elements belong to
S. We write this as T ⊆ S. We allow the possibility that T is empty and also the possibility
that T = S.
Theorem 2.3. There are $2^n$ subsets of a set of size n.
For example, if S = {1, ?, U}, then there are $2^3 = 8$ subsets, and we can list them:
∅, {1}, {?}, {U}, {1, ?}, {1, U}, {U, ?}, {1, ?, U}.
Proof. Let P(n) be the statement that any set of size n has exactly $2^n$ subsets.
We check P(0) directly: if S has 0 elements, then S = ∅, and the only subset is S itself, which is consistent with $2^0 = 1$.
Now we assume P(n) holds and use it to show that P(n + 1) is also true. Let S be a set of size n + 1. Pick an element x ∈ S and let S′ be the subset of S consisting of all elements that are not equal to x, i.e., S′ = S \ {x}. Then S′ has size n, so by induction the number of subsets of S′ is $2^n$. Now, every subset of S either contains x or it does not. Those which do not contain x can be thought of as subsets of S′, so there are $2^n$ of them. To count those that do contain x, we can take any subset of S′ and add x to it. This accounts for all of them exactly once, so there are also $2^n$ subsets that contain x. All together we have $2^n + 2^n = 2^{n+1}$ subsets of S, so P(n + 1) holds.
Continuing with our example, if x = 1, then the subsets not containing x are ∅, {?}, {U}, {?, U}, while those that do contain x are {1}, {1, ?}, {1, U}, {1, ?, U}. There are $2^2 = 4$ of each kind.
A natural followup is to determine how many subsets have a given size. In our previous
example, there is 1 subset of size 0, 3 of size 1, 3 of size 2, and 1 of size 3. We’ll discuss this
problem in the next section.
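As a quick sanity check (not part of the original notes), here is a short Python sketch that enumerates all subsets of the small set {1, ?, U} and tallies them by size; the counts 1, 3, 3, 1 and the total $2^3 = 8$ match the discussion above.

from itertools import combinations

S = ["1", "?", "U"]

# Enumerate all subsets by choosing every possible size k = 0, 1, ..., |S|.
subsets = [set(c) for k in range(len(S) + 1) for c in combinations(S, k)]

print(len(subsets))  # 8 == 2**3, matching Theorem 2.3

# Tally subsets by size: expect 1 of size 0, 3 of size 1, 3 of size 2, 1 of size 3.
by_size = {k: sum(1 for T in subsets if len(T) == k) for k in range(len(S) + 1)}
print(by_size)  # {0: 1, 1: 3, 2: 3, 3: 1}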
Some more to think about:
• Show that $\sum_{i=0}^n i^2 = n(n+1)(2n+1)/6$ for all n ≥ 0.
• Show that $\sum_{i=0}^n 2^i = 2^{n+1} - 1$ for all n ≥ 0.
• Show that $4n < 2^n$ whenever n ≥ 5.
What happens with $\sum_{i=0}^n i^3$ or $\sum_{i=0}^n i^4$, or...? In the first two cases, we got polynomials in n on the right side. You'll show on homework that this always happens.
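The three statements above are easy to spot-check numerically before attempting the induction proofs. A minimal Python sketch (my own addition, using the third statement as reconstructed above, $4n < 2^n$):

# Check the three exercise statements for n = 0, 1, ..., 20 (n >= 5 for the last one).
for n in range(21):
    assert sum(i**2 for i in range(n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    assert sum(2**i for i in range(n + 1)) == 2**(n + 1) - 1
    if n >= 5:
        assert 4 * n < 2**n
print("all checks passed")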
2.2. Strong induction. The version of induction we just described is sometimes called
“weak induction”. Here’s a variant sometimes called “strong induction”. We have the same
setup: we want to prove that a sequence of statements P (0), P (1), P (2), . . . are true. Then
strong induction works by completing the following 2 steps:
• Prove that P (0) is true.
• Assuming that P (0), P (1), . . . , P (n) are all true, use them to prove that P (n + 1) is
true.
You should convince yourself that this isn’t really anything logically distinct from weak
induction. However, it can sometimes be convenient to use this variation.
Some examples to think about:
• Every positive integer can be written in the form 2n m where n ≥ 0 and m is an odd
integer.
• Every integer n ≥ 2 can be written as a product of prime numbers.
• Define a function f on the natural numbers by f(0) = 1, f(1) = 2, and f(n + 1) = f(n − 1) + 2f(n) for all n ≥ 1. Show that $f(n) \le 3^n$ for all n ≥ 0 (a quick numeric check appears after this list).
• A chocolate bar is made up of unit squares in an n × m rectangular grid. You can
break up the bar into 2 pieces by breaking on either a horizontal or vertical line.
Show that you need to make nm − 1 breaks to completely separate the bar into 1 × 1
squares (if you have 2 pieces already, stacking them and breaking them counts as 2
breaks).
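Here is a brief Python sketch (not from the notes) that computes the recursively defined f and confirms the bound $f(n) \le 3^n$ for small n; it does not replace the induction proof, it just makes the claim concrete.

# f(0) = 1, f(1) = 2, f(n+1) = f(n-1) + 2*f(n) for n >= 1.
def f_values(n_max):
    vals = [1, 2]
    for n in range(1, n_max):
        vals.append(vals[n - 1] + 2 * vals[n])
    return vals

vals = f_values(20)
for n, v in enumerate(vals):
    assert v <= 3**n, (n, v)
print(vals[:6])  # [1, 2, 5, 12, 29, 70]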
Given two functions f : X → Y and g : Y → X, we say that they are inverses if f ◦ g is the
identity function on Y , i.e., f (g(y)) = y for all y ∈ Y , and if g ◦ f is the identity function on
X, i.e., g(f (x)) = x for all x ∈ X. You should have seen the following before; if not, we’ll
leave it as an exercise.
3.2. 12-fold way, introduction. We have k balls and n boxes. Roughly speaking, this chapter is about counting the number of ways to put the balls into boxes. We can think of each assignment as a function f from the set of balls to the set of boxes. Phrased this way, we will be examining how many ways there are to do this if we require f to be injective, or surjective, or completely arbitrary. Are the boxes supposed to be considered different or interchangeable (we also use the terminology distinguishable and indistinguishable)? And same with the balls, are they considered different or interchangeable? All in all, this will give us 12 different problems to consider, arranged in a table with 12 entries (the 12-fold way) that we will fill in as we go.
Example 3.6. (1) Suppose we are given 2 red flowers and 1 yellow flower. Aside from their color, the flowers look identical. We want to count how many ways we can display them in a single row. There are 3 objects total, so we might say there are 3! = 6 such ways. But consider what the 6 different ways look like:
RRY, RRY, RYR, RYR, YRR, YRR.
Since the two red flowers look identical, we don't actually care which one comes first. So there are really only 3 different ways to do this – the answer 3! has included each different way twice, but we only wanted to count them a single time.
(2) Consider a larger problem: 10 red flowers and 5 yellow flowers. There are too many to list, so we consider a different approach. As above, if we naively count, then we would get 15! permutations of the flowers. But note that for any given arrangement, the 10 red flowers can be reordered in any way to get an identical arrangement, and same with the yellow flowers. So in the list of 15! permutations, each arrangement is being counted 10! · 5! times. The number of distinct arrangements is then $\frac{15!}{10!\,5!}$.
(3) The same reasoning allows us to generalize. If we have r red flowers and y yellow flowers, then the number of different ways to arrange them is $\frac{(r+y)!}{r!\,y!}$.
(4) How about more than 2 colors of flowers? If we threw in b blue flowers, then again the same reasoning gives us $\frac{(r+y+b)!}{r!\,y!\,b!}$ different arrangements.
Now we state a general formula, which again can be derived by the same reasoning as in
(2) above. Suppose we are given n objects, which have one of k different types (for example,
our objects could be flowers and the types are colors). Also, objects of the same type are
considered identical. For convenience, we will label the “types” with numbers 1, 2, . . . , k and
let ai be the number of objects of type i (so a1 + a2 + · · · + ak = n).
Theorem 3.7. The number of ways to arrange the n objects in the above situation is
$$\frac{n!}{a_1!\,a_2! \cdots a_k!}.$$
As an exercise, you should adapt the reasoning in (2) to give a proof of this theorem.
The quantity above will be used a lot, so we give it a symbol, called the multinomial coefficient:
$$\binom{n}{a_1, a_2, \dots, a_k} := \frac{n!}{a_1!\,a_2! \cdots a_k!}.$$
In the case when k = 2 (a very important case), it is called the binomial coefficient. Note that in this case, $a_2 = n - a_1$, so for shorthand, one often just writes $\binom{n}{a_1}$ instead of $\binom{n}{a_1, a_2}$.
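To make the formula concrete, here is a small Python helper (my own sketch; the function name is just illustrative) that computes a multinomial coefficient and checks it against brute-force enumeration of the flower arrangements of Example 3.6 on a smaller instance.

from itertools import permutations
from math import factorial

def multinomial(*parts):
    """n! / (a_1! * a_2! * ... * a_k!) where n = sum of the parts."""
    n = sum(parts)
    result = factorial(n)
    for a in parts:
        result //= factorial(a)
    return result

# Example 3.6 with smaller numbers: 4 red and 2 yellow flowers.
flowers = "R" * 4 + "Y" * 2
distinct = set(permutations(flowers))      # brute-force set of distinct arrangements
print(len(distinct), multinomial(4, 2))    # both print 15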
3.4. Words. A word is a finite ordered sequence whose entries are drawn from some set
A (which we call the alphabet). The length of the word is the number of entries it has.
Entries may repeat, there is no restriction on that. Also, the empty sequence ∅ is considered
a word of length 0.
Example 3.8. Say our alphabet is A = {a, b}. The words of length ≤ 2 are:
∅, a, b, aa, ab, ba, bb.
Theorem 3.9. If |A| = n, then the number of words in A of length k is $n^k$.
The number of k-element subsets of [n] is $\binom{n}{k} = \frac{n!}{(n-k)!\,k!}$. There are many ways to prove this, but we'll just do one for now:
Proof. In the last section on words, we identified subsets of [n] with words of length n on {0, 1}, with a 1 in position i if and only if i belongs to the subset. So the number of subsets of size k is exactly the number of words with exactly k instances of 1. This is the same as arranging n − k 0's and k 1's from the section on permutations. In that case, we saw the answer is $\frac{n!}{(n-k)!\,k!}$.
Corollary 3.13. $\sum_{k=0}^n \binom{n}{k} = 2^n$.
Proof. The left hand side counts the number of subsets of [n] of some size k where k ranges from 0 to n. But all subsets of [n] are accounted for and we've seen that $2^n$ is the number of all subsets of [n].
Here's an important identity for binomial coefficients (we interpret $\binom{n}{-1} = 0$):
Proposition 3.14 (Pascal's identity). For any k ≥ 0, we have
$$\binom{n}{k-1} + \binom{n}{k} = \binom{n+1}{k}.$$
Proof. The right hand side is the number of subsets of [n + 1] of size k. There are 2 types of such subsets: those that contain n + 1 and those that do not. Note that the subsets that do contain n + 1 are naturally in bijection with the subsets of [n] of size k − 1: to get such a subset, delete n + 1. Those that do not contain n + 1 are naturally already in bijection with the subsets of [n] of size k. The two sets don't overlap and their sizes are $\binom{n}{k-1}$ and $\binom{n}{k}$, respectively.
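A short Python check (my own sketch) of Pascal's identity, computed both from math.comb and by splitting the k-element subsets of [n + 1] according to whether they contain n + 1, mirroring the bijective proof above.

from itertools import combinations
from math import comb

n, k = 6, 3
universe = range(1, n + 2)                     # [n + 1] = {1, ..., n + 1}
subsets = list(combinations(universe, k))      # all k-element subsets

with_last = [S for S in subsets if n + 1 in S]        # correspond to (k-1)-subsets of [n]
without_last = [S for S in subsets if n + 1 not in S]  # correspond to k-subsets of [n]

assert len(with_last) == comb(n, k - 1)
assert len(without_last) == comb(n, k)
assert comb(n, k - 1) + comb(n, k) == comb(n + 1, k)
print(len(subsets), comb(n + 1, k))  # both 35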
An important variation of subset is the notion of a multiset. Given a set S, a multiset
of S is like a subset, but we allow elements to be repeated. Said another way, a subset of S
can be thought of as a way of assigning either a 0 or 1 to an element, based on whether it
gets included. A multiset is then a way to assign some non-negative integer to each element,
where numbers bigger than 1 mean we have picked them multiple times.
Example 3.15. There are 10 multisets of [3] of size 3:
{1, 1, 1}, {1, 1, 2}, {1, 1, 3}, {1, 2, 2}, {1, 2, 3},
{1, 3, 3}, {2, 2, 2}, {2, 2, 3}, {2, 3, 3}, {3, 3, 3}.
Aside from exhaustively checking, how do we know that's all of them? Here's a trick: given a multiset, sort its elements, add 1 to the second smallest element (counting repetitions) and add 2 to the largest. What happens to the above:
{1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {1, 3, 4}, {1, 3, 5},
{1, 4, 5}, {2, 3, 4}, {2, 3, 5}, {2, 4, 5}, {3, 4, 5}.
We get all of the 3-element subsets of [5]. The process is reversible using subtraction, so
there is a more general fact here.
Theorem 3.16. The number of k-element multisets of [n] is
$$\binom{n+k-1}{k}.$$
Proof. We adapt the example above to find a bijection between k-element multisets of [n] and
k-element subsets of [n + k − 1]. Given a multiset S, sort the elements as s1 ≤ s2 ≤ · · · ≤ sk .
From this, we get a subset {s1 , s2 + 1, s3 + 2, . . . , sk + (k − 1)} of [n + k − 1]. On the other
hand, given a subset T of [n + k − 1], sort the elements as t1 < t2 < · · · < tk . From this, we
get a multiset {t1 , t2 − 1, t3 − 2, . . . , tk − (k − 1)} of [n]. We will omit the details that these
are well-defined and inverse to one another.
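The bijection in the proof is easy to implement. The sketch below (my own addition) maps every 3-element multiset of [3] to a 3-element subset of [5] and checks that the map is injective and that the counts agree with $\binom{n+k-1}{k}$.

from itertools import combinations_with_replacement
from math import comb

n, k = 3, 3
multisets = list(combinations_with_replacement(range(1, n + 1), k))  # sorted k-multisets of [n]

# The bijection from the proof: (s_1 <= ... <= s_k)  ->  {s_1, s_2 + 1, ..., s_k + (k-1)}.
images = [tuple(s + i for i, s in enumerate(m)) for m in multisets]

assert len(set(images)) == len(multisets)                      # injective
assert all(1 <= t[0] and t[-1] <= n + k - 1 for t in images)   # lands inside [n + k - 1]
print(len(multisets), comb(n + k - 1, k))                      # both 10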
Some additional things:
• From the formula, we see that $\binom{n}{k} = \binom{n}{n-k}$. This would also be implied if we could construct a bijection between the k-element subsets and the (n − k)-element subsets of [n]. Can you find one?
• What other entries of the 12-fold way table can be filled in now?
• Given variables x, y, z, we can form polynomials. A monomial is a product of the form $x^a y^b z^c$, and its degree is a + b + c. How many monomials in x, y, z are there of degree d? What if we have n variables $x_1, x_2, \dots, x_n$?
4. Partitions and compositions
4.1. Compositions. Below, n and k are positive integers.
Definition 4.1. A sequence of non-negative integers (a1 , . . . , ak ) is a weak composition
of n if a1 + · · · + ak = n. If all of the ai are positive, then it is a composition. We call k
the number of parts of the (weak) composition.
Theorem 4.2. The number of weak compositions of n with k parts is $\binom{n+k-1}{n} = \binom{n+k-1}{k-1}$.
Proof. We will construct a bijection between weak compositions of n with k parts and n-
element multisets of [k]. First, given a weak composition (a1 , . . . , ak ), we get a multiset
which has the element i exactly ai many times. Since a1 + · · · + ak = n, this is an n-element
multiset of [k]. Conversely, given an n-element multiset S of [k], let $a_i$ be the number of times
that i appears in S, so that we get a weak composition (a1 , . . . , ak ) of n.
Example 4.3. We want to distribute 20 pieces of candy (all identical) to 4 children. How many ways can we do this? If we order the children and let $a_i$ be the number of pieces of candy that the ith child receives, then $(a_1, a_2, a_3, a_4)$ is just a weak composition of 20 into 4 parts, so we can identify all ways with the set of all weak compositions. So we know that the number of ways is $\binom{20+4-1}{20} = \binom{23}{20}$.
What if we want to ensure that each child receives at least one piece of candy? First, hand each child 1 piece of candy. We have 16 pieces left, and we can distribute them as we like, so we're counting weak compositions of 16 into 4 parts, or $\binom{19}{16}$.
As we saw with the previous example, given a weak composition (a1 , . . . , ak ) of n, we can
think of it as an assignment of n indistinguishable objects into k distinguishable boxes, so
this fills in one of the entries in the 12-fold way. A composition is an assignment which is
required to be surjective, so actually this takes care of 2 of the entries.
Corollary 4.4. The number of compositions of n into k parts is $\binom{n-1}{k-1}$.
Proof. If we generalize the argument in the last example, we see that compositions of n into
k parts are in bijection with weak compositions of n − k into k parts.
Corollary 4.5. The total number of compositions of n (into any number of parts) is $2^{n-1}$.
The answer suggests that we should be able to find a bijection between compositions of n
and subsets of [n − 1]. Can you find one?
4.2. Set partitions. (Weak) compositions were about indistinguishable objects into distin-
guishable boxes. Now we reverse the roles and consider distinguishable objects into indis-
tinguishable boxes.
Definition 4.6. Let X be a set. A partition of X is an unordered collection of nonempty
subsets S1 , . . . , Sk of X such that every element of X belongs to exactly one of the Si . The
Si are the blocks of the partition. Partitions of sets are also called set partitions to
distinguish from integer partitions, which will be discussed next.
Example 4.7. Let X = {1, 2, 3}. There are 5 partitions of X:
{{1, 2, 3}}, {{1, 2}, {3}}, {{1, 3}, {2}}, {{2, 3}, {1}}, {{1}, {2}, {3}}.
When we say unordered collection of subsets, we mean that {{1, 2}, {3}} and {{3}, {1, 2}}
are to be considered the same partition.
The notation above is a little cumbersome, so we can also write the above partitions as
follows:
123, 12|3, 13|2, 23|1, 1|2|3.
The number of partitions of X with k blocks only depends on the number of elements of
X. So for concreteness, we will usually assume that X = [n].
Example 4.8. If we continue with our previous example of candy and children: imagine
the 20 pieces of candy are now labeled 1 through 20 and that the 4 children are all identical
clones. The number of ways to distribute candy to them so that each gets at least 1 piece
of candy is then the number of partitions of [20] into 4 blocks.
Definition 4.9. We let S(n, k) be the number of partitions of [n] into k blocks. These are
called the Stirling numbers of the second kind. By convention, we define S(0, 0) = 1.
Note that S(n, k) = 0 if k > n.
So S(n, k) is, by definition, an answer to one of the 12-fold way entries: how many ways
to put n distinguishable objects into k indistinguishable boxes. It will be generally hard to
get nice, exact formulas for S(n, k), but we can do some special cases:
Example 4.10. For n ≥ 1, S(n, 1) = S(n, n) = 1. For n ≥ 2, $S(n, 2) = 2^{n-1} - 1$ and $S(n, n-1) = \binom{n}{2}$. Can you see why?
We also have the following recursive formula:
Theorem 4.11. If k ≤ n, then
S(n, k) = S(n − 1, k − 1) + kS(n − 1, k).
Proof. Consider two kinds of partitions of [n]. The first kind is when n is in its own block. In
that case, if we remove this block, then we obtain a partition of [n − 1] into k − 1 blocks. To
reconstruct the original partition, we just add a block containing n by itself. So the number
of such partitions is S(n − 1, k − 1).
The second kind is when n is not in its own block. This time, if we remove n, we get a partition of [n − 1] into k blocks. However, it's not possible to reconstruct the original partition from this alone, because we no longer remember which block n belonged to. So in fact, there are k different ways to try to reconstruct the original partition. This means that the number of such partitions is kS(n − 1, k).
If we add both answers, we account for all possible partitions of [n], so we get the identity
we want.
Here’s a table of small values of S(n, k):
n\k   1    2    3    4    5
1     1    0    0    0    0
2     1    1    0    0    0
3     1    3    1    0    0
4     1    7    6    1    0
5     1   15   25   10    1
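The recurrence of Theorem 4.11 translates directly into code. This Python sketch (my own addition) builds the table above and can be extended to any n.

def stirling2_table(n_max):
    """S(n, k) for 0 <= k <= n <= n_max via S(n, k) = S(n-1, k-1) + k*S(n-1, k)."""
    S = [[0] * (n_max + 1) for _ in range(n_max + 1)]
    S[0][0] = 1
    for n in range(1, n_max + 1):
        for k in range(1, n + 1):
            S[n][k] = S[n - 1][k - 1] + k * S[n - 1][k]
    return S

S = stirling2_table(5)
for n in range(1, 6):
    print(S[n][1:6])
# [1, 0, 0, 0, 0]
# [1, 1, 0, 0, 0]
# [1, 3, 1, 0, 0]
# [1, 7, 6, 1, 0]
# [1, 15, 25, 10, 1]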
We define B(n) to be the number of partitions of [n] into any number of blocks. This is the nth Bell number. By definition,
$$B(n) = \sum_{k=0}^n S(n, k).$$
The Bell numbers satisfy the recurrence $B(n+1) = \sum_{i=0}^n \binom{n}{i} B(i)$ for all n ≥ 0.
Proof. We separate all of the set partitions of [n + 1] based on the number of elements in the block that contains n + 1. Consider those where the size is j. To count the number of these, we need to first choose the other elements to occupy the same block as n + 1. These numbers come from [n] and there are j − 1 to be chosen, so there are $\binom{n}{j-1}$ ways to do this. We then have to choose a set partition of the remaining n + 1 − j elements, and there are B(n + 1 − j) many of these. So the number of such partitions is $\binom{n}{j-1} B(n+1-j)$. The possible values for j are between 1 and n + 1, so we get the identity
$$B(n+1) = \sum_{j=1}^{n+1} \binom{n}{j-1} B(n+1-j).$$
Re-index the sum by setting i = n + 1 − j and use the identity $\binom{n}{n-i} = \binom{n}{i}$ to get the desired identity.
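A short Python sketch (my own addition) computing Bell numbers both as $\sum_k S(n,k)$ and via the recurrence just proved, as a consistency check.

from math import comb

def stirling2(n, k, _cache={}):
    """Stirling numbers of the second kind via the standard recurrence."""
    if (n, k) in _cache:
        return _cache[(n, k)]
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    val = stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)
    _cache[(n, k)] = val
    return val

def bell_from_stirling(n):
    return sum(stirling2(n, k) for k in range(n + 1))

# Bell recurrence: B(n+1) = sum_{i=0}^{n} C(n, i) * B(i), with B(0) = 1.
bell = [1]
for n in range(10):
    bell.append(sum(comb(n, i) * bell[i] for i in range(n + 1)))

assert all(bell[n] == bell_from_stirling(n) for n in range(11))
print(bell[:8])  # [1, 1, 2, 5, 15, 52, 203, 877]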
4.3. Integer partitions. Now we come to the situation where both balls and boxes are
indistinguishable. In this case, the only relevant information is how many boxes are empty,
how many contain exactly 1 ball, how many contain exactly 2 balls, etc. We use the following
structure:
Y(λ) = [Young diagram figure omitted]
In general, it is a left-justified collection of boxes with λi boxes in the ith row (counting from
top to bottom).
The transpose (or conjugate) of a partition λ is the partition whose Young diagram is obtained by flipping Y(λ) across the main diagonal. For example, the transpose of (4, 2, 1) is (3, 2, 1, 1).
Note that we get the parts of a partition from a Young diagram by reading off the row lengths. The transpose is obtained by instead reading off the column lengths. The notation is $\lambda^T$. If we want a formula: $\lambda^T_i = |\{j \mid \lambda_j \ge i\}|$.
Note that $(\lambda^T)^T = \lambda$. A partition λ is self-conjugate if $\lambda = \lambda^T$.
Example 4.15. Some self-conjugate partitions: (4, 3, 2, 1), (5, 1, 1, 1, 1), (4, 2, 1, 1). [Young diagrams omitted.]
Theorem 4.16. The number of partitions λ of n with ℓ(λ) ≤ k is the same as the number of partitions µ of n such that all $\mu_i \le k$.
Proof. We get a bijection between the two sets by taking transpose. Details omitted.
Theorem 4.17. The number of self-conjugate partitions of n is equal to the number of
partitions of n using only distinct odd parts.
Proof. Given a self-conjugate partition, take all of the boxes in the first row and column of
its Young diagram. Since it’s self-conjugate, there are an odd number of boxes. Use this as
the first part of a new partition. Now remove those boxes and repeat. For example, starting
there is certainly a way to make an assignment, but they’re all the same: we can’t
tell the boxes apart, so it doesn’t matter where the balls go.
(9) These are set partitions of [k] into n blocks.
(10) These are the number of integer partitions of k where the number of parts is ≤ n. Remember that $p_i(k)$ is the notation for the number of integer partitions of k into i parts.
(11) The reasoning here is the same as (8).
(12) These are the number of integer partitions of k into n parts.
This says that the total number of subsets of [n] is $2^n$, which is a familiar fact from before.
Corollary 5.3. For n > 0, we have $0 = \sum_{i=0}^n (-1)^i \binom{n}{i}$.
Proof. Substitute x = −1 and y = 1 into the binomial theorem.
If we rewrite this, it says that the number of subsets of even size is the same as the number of subsets of odd size. It is worth finding a more direct proof of this fact which does not rely on the binomial theorem.
Corollary 5.4. $n 2^{n-1} = \sum_{i=0}^n i \binom{n}{i}$.
Proof. Take the derivative of both sides of the binomial theorem with respect to x to get $n(x+y)^{n-1} = \sum_{i=0}^n i \binom{n}{i} x^{i-1} y^{n-i}$. Now substitute x = y = 1.
It is possible to interpret this formula as the size of some set so that both sides are different ways to count the number of elements in that set. Can you figure out how to do that? How about if we took the derivative twice with respect to x? Or if we took it with respect to x and then with respect to y?
5.2. Multinomial theorem.
Theorem 5.5 (Multinomial theorem). For n, k ≥ 0, we have
$$(x_1 + x_2 + \cdots + x_k)^n = \sum_{\substack{(a_1, a_2, \dots, a_k) \\ a_i \ge 0 \\ a_1 + \cdots + a_k = n}} \binom{n}{a_1, a_2, \dots, a_k} x_1^{a_1} x_2^{a_2} \cdots x_k^{a_k}.$$
Proof. The proof is similar to the binomial theorem. Consider expanding the product $(x_1 + \cdots + x_k)^n$. To do this, we first have to pick one of the $x_i$ from the first factor, pick another one from the second factor, etc. To get the term $x_1^{a_1} x_2^{a_2} \cdots x_k^{a_k}$, we need to have picked $x_1$ exactly $a_1$ times, picked $x_2$ exactly $a_2$ times, etc. We can think of this as arranging n objects, where $a_i$ of them have "type i". In that case, we've already discussed that this is counted by the multinomial coefficient $\binom{n}{a_1, a_2, \dots, a_k}$.
By performing substitutions, we can get a bunch of identities that generalize the ones from the previous section. I'll omit the proofs; try to fill them in.
$$k^n = \sum_{\substack{(a_1, \dots, a_k),\ a_i \ge 0 \\ a_1 + \cdots + a_k = n}} \binom{n}{a_1, a_2, \dots, a_k},$$
$$0 = \sum_{\substack{(a_1, \dots, a_k),\ a_i \ge 0 \\ a_1 + \cdots + a_k = n}} (1-k)^{a_1} \binom{n}{a_1, a_2, \dots, a_k},$$
$$n k^{n-1} = \sum_{\substack{(a_1, \dots, a_k),\ a_i \ge 0 \\ a_1 + \cdots + a_k = n}} a_1 \binom{n}{a_1, a_2, \dots, a_k}.$$
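These three identities are easy to spot-check numerically. The Python snippet below (my own sketch) sums over all weak compositions of n into k parts for one small choice of n and k.

from itertools import product
from math import factorial

def multinomial(parts):
    result = factorial(sum(parts))
    for a in parts:
        result //= factorial(a)
    return result

n, k = 5, 3
comps = [a for a in product(range(n + 1), repeat=k) if sum(a) == n]  # weak compositions of n

assert sum(multinomial(a) for a in comps) == k**n
assert sum((1 - k)**a[0] * multinomial(a) for a in comps) == 0
assert sum(a[0] * multinomial(a) for a in comps) == n * k**(n - 1)
print("identities hold for n = 5, k = 3")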
6. Inclusion-exclusion
Example 6.1. Suppose we have a room of students, and 14 of them play basketball, 10 of
them play football. How many students play at least one of these? We can’t answer the
question because there might be students who play both. But we can say that the total
number is 24 minus the amount in the overlap.
Alternatively, let B be the set who play basketball and let F be the set who play football.
Then what we’ve said is:
|B ∪ F | = |B| + |F | − |B ∩ F |.
New situation: there are additionally 8 students who play hockey. Let H be the set of
students who play hockey. What information do we need to know how many total students
there are?
Here the overlap region is more complicated: it has 4 regions, which suggests that we need 4 more pieces of information. The following formula works:
|B ∪ F ∪ H| = |B| + |F | + |H| − |B ∩ F | − |B ∩ H| − |F ∩ H| + |B ∩ F ∩ H|.
To see this, the total diagram has 7 regions and we need to make sure that students in each
region get counted exactly once in the right side expression. For example, consider students
who play basketball and football, but don’t play hockey. They get counted in B, F , B ∩ F
with signs +1, +1, −1, which sums up to 1. How about students who play all 3? They get
counted in all terms with 4 +1 signs and 3 −1 signs, again adding up to 1. You can check
the other 5 to make sure the count is right.
The examples above have a generalization to n sets, though the diagram is harder to draw
beyond 3.
Theorem 6.2 (Inclusion-Exclusion). Let $A_1, \dots, A_n$ be finite sets. Then
$$|A_1 \cup \cdots \cup A_n| = \sum_{j=1}^n (-1)^{j-1} \sum_{1 \le i_1 < i_2 < \cdots < i_j \le n} |A_{i_1} \cap A_{i_2} \cap \cdots \cap A_{i_j}|.$$
Proof. We just need to make sure that every element $x \in A_1 \cup \cdots \cup A_n$ is counted exactly once on the right hand side. Let $S = \{s_1, \dots, s_k\}$ be all of the indices such that $x \in A_{s_r}$. Then x belongs to $A_{i_1} \cap \cdots \cap A_{i_j}$ if and only if $\{i_1, \dots, i_j\} \subseteq S$. So the relevant contributions of x to the right hand side sum to $\sum_{j=1}^k (-1)^{j-1} \binom{k}{j} = 1 - \sum_{j=0}^k (-1)^j \binom{k}{j} = 1 - 0 = 1$ by Corollary 5.3, as desired.
The number of derangements of [n] (bijections f : [n] → [n] with f(i) ≠ i for all i) is $\sum_{j=0}^n (-1)^j \frac{n!}{j!}$.
Proof. It turns out to be easier to count the number of permutations which are not derangements and then subtract that from the total number of permutations. For i = 1, . . . , n, let $A_i$ be the set of bijections f such that f(i) = i. Then the set of non-derangements is $A_1 \cup \cdots \cup A_n$. To apply inclusion-exclusion, we need to count the size of $A_{i_1} \cap \cdots \cap A_{i_j}$ for some choice of indices $i_1, \dots, i_j$. This is the set of bijections f : [n] → [n] such that $f(i_1) = i_1, \dots, f(i_j) = i_j$. The remaining information to specify f is its values outside of $i_1, \dots, i_j$, which we can interpret as a bijection of $[n] \setminus \{i_1, \dots, i_j\}$ to itself. So there are (n − j)! of them. So we get
$$|A_1 \cup \cdots \cup A_n| = \sum_{j=1}^n (-1)^{j-1} \sum_{1 \le i_1 < \cdots < i_j \le n} |A_{i_1} \cap \cdots \cap A_{i_j}| = \sum_{j=1}^n (-1)^{j-1} \binom{n}{j} (n-j)! = \sum_{j=1}^n (-1)^{j-1} \frac{n!}{j!}.$$
Remember that we have to subtract this from n!. So the final answer simplifies as so:
$$n! - \sum_{j=1}^n (-1)^{j-1} \frac{n!}{j!} = \sum_{j=0}^n (-1)^j \frac{n!}{j!}.$$
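As a sanity check on the alternating-sum formula, the Python sketch below (my own addition) counts derangements by brute force for small n and compares with $\sum_{j=0}^n (-1)^j n!/j!$.

from itertools import permutations
from math import factorial

def derangements_brute(n):
    """Count permutations of [n] with no fixed points by direct enumeration."""
    return sum(1 for p in permutations(range(n)) if all(p[i] != i for i in range(n)))

def derangements_formula(n):
    return sum((-1)**j * factorial(n) // factorial(j) for j in range(n + 1))

for n in range(1, 8):
    assert derangements_brute(n) == derangements_formula(n)
print([derangements_formula(n) for n in range(1, 8)])  # [0, 1, 2, 9, 44, 265, 1854]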
The problem with formulas coming from inclusion-exclusion is the alternating sign. It can generally be hard to estimate the behavior of the quantity as n grows. For example, binomial coefficients $\binom{n}{i}$ (for fixed i) limit to infinity as n goes to infinity. However, the alternating sum $\sum_{i=0}^n (-1)^i \binom{n}{i}$ is 0. For derangements, we can use the following observation. We have a formula for the exponential function
$$e^x = \sum_{i=0}^{\infty} \frac{x^i}{i!}.$$
If we plug in x = −1 and only take the terms up to i = n, then we get the number of derangements divided by n!, i.e., the proportion of permutations that are derangements. From calculus, taking the first n terms of a Taylor expansion is supposed to be a good approximation for a function, so as n → ∞, the proportion of permutations that are derangements tends to $e^{-1} \approx .368$, or roughly 36.8%.
We can also use inclusion-exclusion to get an alternating sum formula for Stirling numbers.
Theorem 6.4. For all n ≥ k > 0,
$$S(n, k) = \frac{1}{k!} \sum_{i=0}^k (-1)^i \binom{k}{i} (k-i)^n = \sum_{i=0}^k (-1)^i \frac{(k-i)^n}{i!\,(k-i)!}.$$
Proof. As we discussed before, k!S(n, k) counts the number of surjective functions f : [n] → [k]. So we will count this quantity. For i = 1, . . . , k, let $A_i$ be the set of functions f : [n] → [k] such that i is not in the image of f. The surjective functions are the complement of $A_1 \cup \cdots \cup A_k$ in the set of all functions (there are $k^n$ total functions). To apply inclusion-exclusion, we need to count the size of $A_{i_1} \cap \cdots \cap A_{i_j}$ for $1 \le i_1 < \cdots < i_j \le k$. This is the set of functions so that $\{i_1, \dots, i_j\}$ are not in the image; equivalently, this is identified with the set of functions $f : [n] \to [k] \setminus \{i_1, \dots, i_j\}$, so there are $(k-j)^n$ of them. So we can apply inclusion-exclusion to get
$$|A_1 \cup \cdots \cup A_k| = \sum_{j=1}^k (-1)^{j-1} \sum_{1 \le i_1 < \cdots < i_j \le k} |A_{i_1} \cap \cdots \cap A_{i_j}| = \sum_{j=1}^k (-1)^{j-1} \binom{k}{j} (k-j)^n.$$
Subtracting this from $k^n$ (the j = 0 term) gives $k! S(n,k) = \sum_{j=0}^k (-1)^j \binom{k}{j} (k-j)^n$. Now divide both sides by k! to get the first equality. The second equality comes from canceling the k! against the binomial coefficient.
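A quick Python consistency check (my own sketch) of Theorem 6.4 against the recurrence S(n, k) = S(n−1, k−1) + k·S(n−1, k) from Section 4.

from math import comb, factorial

def stirling2_formula(n, k):
    """Inclusion-exclusion formula from Theorem 6.4 (the sum is an exact multiple of k!)."""
    return sum((-1)**i * comb(k, i) * (k - i)**n for i in range(k + 1)) // factorial(k)

def stirling2_rec(n, k):
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling2_rec(n - 1, k - 1) + k * stirling2_rec(n - 1, k)

assert all(stirling2_formula(n, k) == stirling2_rec(n, k)
           for n in range(1, 9) for k in range(1, n + 1))
print(stirling2_formula(5, 3))  # 25, matching the table in Section 4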
This is what you get if you just distribute like normal. As a special case, if $a_i = 0$ for i > 0, we just get
$$a_0 B(x) = \sum_{n \ge 0} a_0 b_n x^n.$$
For example, if $A(x) = B(x) = \sum_{n \ge 0} x^n$, then
$$A(x) + B(x) = \sum_{n \ge 0} 2x^n, \qquad A(x)B(x) = \sum_{n \ge 0} (n+1)x^n.$$
A formal power series A(x) is invertible if there is a power series B(x) such that A(x)B(x) = 1. In that case, we write $B(x) = A(x)^{-1} = 1/A(x)$ and call it the inverse of A(x). If it exists, then B(x) is unique.
Example 7.2. Let $A(x) = \sum_{n \ge 0} x^n$ and B(x) = 1 − x. Then A(x)B(x) = 1, so B(x) is the inverse of A(x). For that reason, we will use the expression
$$\frac{1}{1-x} = \sum_{n \ge 0} x^n.$$
However, the formal power series x is not invertible: the constant term of xB(x) is 0 no matter what B(x) is, so there is no way that an inverse exists.
Theorem 7.3. A formal power series A(x) is invertible if and only if its constant term is
nonzero.
This looks like it could have problems with infinite sums, but because A(x) has no constant term, for each d, the coefficient of $x^d$ is 0 in $A(x)^n$ whenever n > d, so to compute the coefficient of $x^d$ in the above expression, we only do finitely many multiplications and additions.
Example 7.4. Let d be a positive integer, $A(x) = x^d$ and $B(x) = \sum_{n \ge 0} x^n$. Then $B(A(x)) = \sum_{n \ge 0} x^{dn}$. We can do this substitution into the identity
$$(1 - x)B(x) = 1$$
to get
$$(1 - x^d) \sum_{n \ge 0} x^{dn} = 1,$$
so $\frac{1}{1-x^d} = \sum_{n \ge 0} x^{dn}$.
We can also take the derivative D of a formal power series. We define it as follows:
$$(DA)(x) = A'(x) = \sum_{n \ge 0} n a_n x^{n-1} = \sum_{n \ge 0} (n+1) a_{n+1} x^n.$$
Example 7.5. We have $\frac{1}{1-x} = \sum_{n \ge 0} x^n$. Taking the derivative of the left side gives $\frac{1}{(1-x)^2}$. Taking the derivative of the right side gives $\sum_{n \ge 0} n x^{n-1} = \sum_{n \ge 0} (n+1) x^n$. We've already seen that these two expressions are equal.
How would we simplify $B(x) = \sum_{n \ge 0} n x^n$? We have a few options. First:
$$B(x) = \sum_{n \ge 0} (n+1)x^n - \sum_{n \ge 0} x^n = \frac{1}{(1-x)^2} - \frac{1}{1-x} = \frac{1 - (1-x)}{(1-x)^2} = \frac{x}{(1-x)^2}.$$
Or more directly:
$$B(x) = x \sum_{n \ge 0} n x^{n-1} = x \cdot \frac{1}{(1-x)^2}.$$
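Formal power series manipulations like these are easy to test by working with truncated coefficient lists. The Python sketch below (my own addition; the helper names are ad hoc) multiplies and differentiates truncated series and recovers the expansions of $1/(1-x)^2$ and $x/(1-x)^2$ used in Example 7.5.

N = 10  # work with coefficients of x^0, ..., x^(N-1)

def mul(A, B):
    """Product of two truncated power series given as coefficient lists."""
    C = [0] * N
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            if i + j < N:
                C[i + j] += a * b
    return C

def deriv(A):
    """Formal derivative: the coefficient of x^n becomes (n+1)*a_{n+1}."""
    return [(n + 1) * A[n + 1] for n in range(N - 1)] + [0]

geom = [1] * N                       # 1/(1-x) = 1 + x + x^2 + ...
print(deriv(geom)[:N - 1])           # [1, 2, ..., 9]: expansion of 1/(1-x)^2
print(mul(geom, geom)[:N - 1])       # same thing, since (1/(1-x))^2 = 1/(1-x)^2
x_series = [0, 1] + [0] * (N - 2)
print(mul(x_series, mul(geom, geom))[:N - 1])  # [0, 1, 2, ...]: x/(1-x)^2 = sum of n*x^n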
We will use $e^x$ to denote the formal power series $\sum_{n \ge 0} \frac{1}{n!} x^n$.
We won't really go into the meaning of the formal power series $(1+x)^m$ for general real numbers m. However, when m is a non-negative integer, this agrees with the ordinary binomial theorem with y = 1. When m is a negative integer, the meaning is $(1+x)^m = 1/(1+x)^{-m}$.
For fractional m, we can also interpret them. For example, $(1+x)^{1/2} = \sqrt{1+x}$, which represents a formal power series whose square is equal to 1 + x. In other words,
$$\left( \sum_{n \ge 0} \binom{1/2}{n} x^n \right)^2 = 1 + x.$$
This will be useful in later calculations. Let’s work out a few cases.
Example 7.7. Consider m = −1. We know from before that
$$\frac{1}{1-x} = \sum_{n \ge 0} x^n.$$
We should also be able to get this from the binomial theorem with m = −1. We have
$$\binom{-1}{n} = \frac{(-1)(-2) \cdots (-1-n+1)}{n!} = \frac{(-1)^n n!}{n!} = (-1)^n.$$
More generally, consider m = −d for some positive integer d. Then from what we just did, we have
$$(1+x)^{-d} = \left( \sum_{n \ge 0} (-1)^n x^n \right)^d.$$
The right side could be expanded, possibly by using induction on d, but we'd have to know a pattern before we could proceed. Instead, let's use the binomial theorem directly:
$$\binom{-d}{n} = \frac{(-d)(-d-1) \cdots (-d-n+1)}{n!} = (-1)^n \frac{(d+n-1)(d+n-2) \cdots d}{n!} = (-1)^n \frac{(d+n-1)!}{(d-1)!\,n!} = (-1)^n \binom{d+n-1}{n}.$$
This gives us the identities
$$\frac{1}{(1+x)^d} = \sum_{n \ge 0} (-1)^n \binom{d+n-1}{n} x^n, \qquad \frac{1}{(1-x)^d} = \sum_{n \ge 0} \binom{d+n-1}{n} x^n.$$
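The second identity matches the weak-composition count $\binom{n+d-1}{n}$ from Theorem 4.2 (a coefficient of $x^n$ in the d-fold product of geometric series records how n is split among the d factors). The Python sketch below (my own addition) verifies the coefficients of $1/(1-x)^d$ by repeated truncated multiplication.

from math import comb

N, d = 12, 4

def mul(A, B):
    C = [0] * N
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            if i + j < N:
                C[i + j] += a * b
    return C

series = [1] + [0] * (N - 1)       # the constant series 1
geom = [1] * N                     # 1/(1-x)
for _ in range(d):
    series = mul(series, geom)     # build 1/(1-x)^d by repeated multiplication

assert series == [comb(d + n - 1, n) for n in range(N)]
print(series[:6])  # [1, 4, 10, 20, 35, 56] for d = 4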
For a positive integer n, let n!! denote the product of all positive odd integers between 1 and n when n is odd, and the product of all positive even integers between 2 and n when n is even (with the convention (−1)!! = 1). Keep in mind this does not mean we do the factorial twice. With our new notation, we have
$$\binom{1/2}{n} = \frac{(-1)^{n-1} (2n-3)!!}{2^n n!}.$$
Remember that this means that
$$\left( \sum_{n \ge 0} \frac{(-1)^{n-1} (2n-3)!!}{2^n n!} x^n \right)^2 = 1 + x.$$
To check that by hand, we could expand the left side, but it would be a lot of work.
In the previous example, we found a square root of the formal power series 1 + x. Because $(-1)^2 = 1$, if we multiplied that solution by −1, we'd get another solution. Are there more? If we were talking about numbers, then no. The same holds for formal power series too. More generally, if we're trying to solve a quadratic equation
$$A(x)t^2 + B(x)t + C(x) = 0$$
where A(x), B(x), C(x) are formal power series, then there are at most two different solutions t in formal power series (there could be only one or none). We won't prove this because it's beyond the scope of this course, but we will use this later to solve some problems.
Conveniently, the quadratic formula applies in this situation. If A(x) is invertible (remember this is equivalent to having nonzero constant term), we get (at most) two solutions:
$$t = \frac{-B(x) \pm \sqrt{B(x)^2 - 4A(x)C(x)}}{2A(x)}.$$
If $B(x)^2 = 4A(x)C(x)$, then there's only one solution. If A(x) is not invertible, then the situation is more subtle, but we won't deal with this case.
8.1. Linear recurrence relations. Our first application of ordinary generating functions is to solve linear recurrence relations. A sequence of numbers is said to satisfy a linear recurrence relation of order d if there are scalars $c_1, \dots, c_d$ such that $c_d \ne 0$, and for all n ≥ d, we have
$$a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_d a_{n-d}.$$
We’ve seen this idea before, although in slightly different forms.
Example 8.1. The Fibonacci numbers Fn are given by the sequence 1, 1, 2, 3, 5, 8, 13, 21, . . . .
This isn’t really telling you what the general Fn is, so instead let me say that for all n ≥ 2,
we have
Fn = Fn−1 + Fn−2 .
Together with the initial conditions F0 = 1, F1 = 1, this is enough information to calculate
any Fn . So (by definition), the Fibonacci numbers satisfy a linear recurrence relation of
order 2.
Remember the recurrence is only valid for n ≥ 2, so we have to separate out the first two terms. Now comes an important point: the last two sums are almost the same as A(x) if we re-index them:
$$\sum_{n \ge 2} a_{n-1} x^n = \sum_{n \ge 1} a_n x^{n+1} = x \sum_{n \ge 1} a_n x^n = xA(x) - a_0 x,$$
$$\sum_{n \ge 2} a_{n-2} x^n = \sum_{n \ge 0} a_n x^{n+2} = x^2 A(x).$$
In particular,
$$A(x) = a_0 + a_1 x + c_1 x A(x) - c_1 a_0 x + c_2 x^2 A(x).$$
We can rewrite this as
$$(8.6) \qquad A(x) = \frac{a_0 + (a_1 - c_1 a_0)x}{1 - c_1 x - c_2 x^2}.$$
We want to factor the denominator. To do this, plug in $t \mapsto x^{-1}$ into (8.3) and multiply by $x^2$ to get
$$1 - c_1 x - c_2 x^2 = (1 - r_1 x)(1 - r_2 x).$$
Now we can apply partial fraction decomposition to (8.6) to write
$$A(x) = \frac{\alpha_1}{1 - r_1 x} + \frac{\alpha_2}{1 - r_2 x}$$
for some constants $\alpha_1, \alpha_2$. But these terms are both geometric series, so we can further write
$$A(x) = \sum_{n \ge 0} \alpha_1 r_1^n x^n + \sum_{n \ge 0} \alpha_2 r_2^n x^n.$$
The coefficient of $x^n$ on the left side is $a_n$ and the coefficient of $x^n$ on the right side is $\alpha_1 r_1^n + \alpha_2 r_2^n$, so $a_n = \alpha_1 r_1^n + \alpha_2 r_2^n$.
In the case of a repeated root ($r_1 = r_2$), we instead end up with an expression of the form
$$A(x) = \sum_{n \ge 0} \beta_1 r_1^n x^n + \sum_{n \ge 0} \beta_2 (n+1) r_1^n x^n.$$
For a general linear recurrence of order d with repeated roots, we have to do something like we did in Theorem 8.7. For example, if d = 5 and the roots are $r_1$ with multiplicity 3 and $r_2$ with multiplicity 2 (and $r_1 \ne r_2$), then we would have
$$a_n = (\beta_1 + \beta_2 n + \beta_3 n^2) r_1^n + (\beta_4 + \beta_5 n) r_2^n$$
for some constants $\beta_1, \dots, \beta_5$. This should look familiar to you if you've ever solved a linear homogeneous differential equation with constant coefficients.
I'll leave it to you to formulate the general case, though we won't be doing anything more with it in this class.
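To make the method concrete, here is a Python sketch (my own addition) for the order-2 case with distinct real roots: it finds the roots of $t^2 = c_1 t + c_2$, solves for the constants $\alpha_1, \alpha_2$ from the initial conditions, and checks the closed form $a_n = \alpha_1 r_1^n + \alpha_2 r_2^n$ against direct iteration, using the Fibonacci numbers of Example 8.1.

import math

# Recurrence a_n = c1*a_{n-1} + c2*a_{n-2} with initial values a0, a1 (Fibonacci here).
c1, c2, a0, a1 = 1, 1, 1, 1

# Roots of t^2 = c1*t + c2, i.e. t^2 - c1*t - c2 = 0 (assumed distinct and real here).
disc = math.sqrt(c1 * c1 + 4 * c2)
r1, r2 = (c1 + disc) / 2, (c1 - disc) / 2

# Solve alpha1 + alpha2 = a0 and alpha1*r1 + alpha2*r2 = a1.
alpha1 = (a1 - a0 * r2) / (r1 - r2)
alpha2 = a0 - alpha1

# Compare the closed form with direct iteration of the recurrence.
seq = [a0, a1]
for n in range(2, 15):
    seq.append(c1 * seq[-1] + c2 * seq[-2])
closed = [round(alpha1 * r1**n + alpha2 * r2**n) for n in range(15)]
print(seq == closed, seq[:8])  # True [1, 1, 2, 3, 5, 8, 13, 21]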
Example 8.8. If $a_n = n!$, we can think of this as the number of ways to order the elements of [n]. So $\sum_{n \ge 0} n!\,x^n$ is the ordinary generating function for orderings.
If $b_n = 2^n$, we can think of this as the number of ways of picking some subset of elements of [n] to be considered special.
Example 8.9. From the last example: $a_n + b_n = n! + 2^n$ is the number of ways of either putting an ordering on the elements of [n] or picking a subset of the elements to be special. This may seem weird if we try to interpret $a_n + a_n$, but one way to keep this in check is to think of 2 different people as doing the task. So $a_n + a_n = 2 \cdot n!$ can be thought of as the number of ways of either letting person 1 order the elements of [n] or letting person 2 order the elements of [n].
Example 8.10. A class consists of n days. We want to split the lectures of the class into two pieces: the first half is the theoretical part and the second part is the laboratory part. The theoretical part needs 1 day for a guest lecturer while the laboratory part needs 2 days. How many ways can we plan out this course?
Let $a_n = n$ be the number of ways of picking a day for a guest lecturer for a course with n days and let $b_n = \binom{n}{2}$ be the number of ways of picking two days for a guest lecturer. Then define $A(x) = \sum_{n \ge 0} a_n x^n$ and $B(x) = \sum_{n \ge 0} b_n x^n$. The coefficient of $x^n$ in A(x)B(x) is the answer to our counting problem. We can compute both series (for B(x), take the derivative of $\frac{1}{1-x}$ twice and then multiply by $\frac{x^2}{2}$):
$$A(x) = \sum_{n \ge 0} n x^n = \frac{x}{(1-x)^2},$$
$$B(x) = \sum_{n \ge 0} \binom{n}{2} x^n = \frac{x^2}{(1-x)^3}.$$
Then the product is
$$A(x)B(x) = \frac{x^3}{(1-x)^5}.$$
Apply the general binomial theorem:
$$x^3 (1-x)^{-5} = x^3 \sum_{n \ge 0} \binom{-5}{n} (-x)^n = x^3 \sum_{n \ge 0} \binom{n+4}{n} x^n.$$
The coefficient of $x^n$ in the above expression is $\binom{n+1}{n-3} = \binom{n+1}{4}$. Can you see a direct way to get that answer?
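A quick numeric check of this example (my own sketch): convolve the sequences $a_n = n$ and $b_n = \binom{n}{2}$ and compare with $\binom{n+1}{4}$.

from math import comb

N = 15
a = [n for n in range(N)]             # ways to pick the guest-lecture day in the first part
b = [comb(n, 2) for n in range(N)]    # ways to pick two days in the second part

# Coefficient of x^n in A(x)B(x) is the convolution sum over i of a_i * b_{n-i}.
conv = [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N)]

assert conv == [comb(n + 1, 4) for n in range(N)]
print(conv[:8])  # [0, 0, 0, 0, 5, 15, 35, 70]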
Example 8.11. Let $p_{\le k}(n)$ be the number of integer partitions of n with at most k parts. To make the following cleaner, we use the convention that $p_{\le k}(0) = 1$. Using the transpose of partitions, this is also the number of integer partitions of n using only the numbers 1, . . . , k, and we will instead use this interpretation. We want a simple expression for $\sum_{n \ge 0} p_{\le k}(n) x^n$.
When k = 1, we get $p_{\le 1}(n) = 1$ for all n, so $\sum_{n \ge 0} p_{\le 1}(n) x^n = \frac{1}{1-x}$.
Now consider k = 2. To do this, we make the following unusual observation: let $b_n$ be the number of ways of writing n as a sum of 2's (so $b_n = 1$ if n is even and 0 otherwise) and let $a_n$ be the number of ways of writing n as a sum of 1's (so $a_n = 1$ for all n). If we have a partition of n using only 1's and 2's, we can think of this as splitting [n] into two consecutive pieces of size i and n − i, and writing the first as a sum of 1's and the second as a sum of 2's. Thus, we get
$$p_{\le 2}(n) = \sum_{i=0}^n a_i b_{n-i}.$$
We have $\sum_{n \ge 0} b_n x^n = \sum_{n \ge 0} x^{2n} = \frac{1}{1-x^2}$, so
$$\sum_{n \ge 0} p_{\le 2}(n) x^n = \frac{1}{(1-x)(1-x^2)}.$$
We can repeat this: let $c_n$ be the number of ways of writing n as a sum of 3's. Then
$$p_{\le 3}(n) = \sum_{i=0}^n p_{\le 2}(i) c_{n-i}$$
and $\sum_{n \ge 0} c_n x^n = \sum_{n \ge 0} x^{3n} = \frac{1}{1-x^3}$, which leads us to
$$\sum_{n \ge 0} p_{\le 3}(n) x^n = \frac{1}{(1-x)(1-x^2)(1-x^3)}.$$
By induction on k, we can prove that
$$\sum_{n \ge 0} p_{\le k}(n) x^n = \prod_{i=1}^k \frac{1}{1-x^i} = \frac{1}{(1-x)(1-x^2) \cdots (1-x^k)}.$$
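The product formula can be checked by expanding the truncated product and comparing against a direct count of partitions with parts of size at most k. The Python sketch below (my own addition) does this for k = 3.

N, k = 20, 3

def mul(A, B):
    C = [0] * N
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            if i + j < N:
                C[i + j] += a * b
    return C

# Expand 1/((1-x)(1-x^2)...(1-x^k)) as a truncated product of geometric series in x^i.
series = [1] + [0] * (N - 1)
for i in range(1, k + 1):
    geom_i = [1 if n % i == 0 else 0 for n in range(N)]   # 1/(1-x^i)
    series = mul(series, geom_i)

def count_partitions_max_part(n, m):
    """Number of partitions of n with all parts <= m (equivalently, at most m parts)."""
    if n == 0:
        return 1
    if m == 0:
        return 0
    return sum(count_partitions_max_part(n - m * j, m - 1) for j in range(n // m + 1))

assert series == [count_partitions_max_part(n, k) for n in range(N)]
print(series[:10])  # [1, 1, 2, 3, 4, 5, 7, 8, 10, 12]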