Modern Algebra SEO Lecture Notes
Contents
0 Introduction
  0.1 Logic
  0.2 Appendix B: Sets, Functions, Numbers
  0.3 Appendix D: Equivalence Relations
  0.4 Appendix C: Math Induction
1 Arithmetic in Z Revisited
  1.1 Integers
  1.2 Divisibility in Z
  1.3 Primality in Z
3 Rings
  3.1 Definition and Examples of Rings
  3.2 Algebra in Rings
  3.3 Ring Homomorphisms
4 Arithmetic in F[x]
  4.1 Polynomials
  4.2 Divisibility in F[x]
  4.3 Primality (Irreducibility) in F[x]
  4.4 Polynomial Functions
7 Groups
  7.1 Groups
  7.2 Properties of Groups
  7.3 Subgroups
0 Introduction
This is not a complete set of lecture notes for Math 448, Modern Algebra I. Additional material will
be covered in class and discussed in the textbook. These notes are currently under development
as a port from a previous version, so typos and formatting errors are inevitable. Check back
frequently for updates.
0.1 Logic
In this section, we give an informal overview of logic and proofs. For a more formal introduction
see any logic textbook.
Definition. If Q follows from no premises in a formal axiom system, we say that Q is provable in
the system. A provable statement is called a theorem.
And finally, the definition we’ve all been waiting for!
Notation. If Q is provable from premises P1, . . . , Pn in a formal system we can denote this symbolically as
P1, . . . , Pn ⊢ Q
It is also commonplace to refer to such an expression as a theorem. To prove such a theorem is to
give a proof of Q in the same formal system where additionally the premises are ‘Given’ as axioms.
Remarks:
• An element is either in a set or it is not in a set, it cannot be in a set more than once.
• It is not necessary that we know specifically which element of the domain an expression
represents, only that it represents some unspecified element in that set.
• We do not have to know if a statement is true or false, just that it is either true or false.
• If a statement contains n variables, x1 , . . . xn , then to solve the statement is to find the set of
all n-tuples (a1 , . . . , an ) such that each ai is an element of the domain of xi and the statement
becomes true when x1 , . . . , xn are replaced by a1 , . . . , an respectively. In this situation, each
such n-tuple is called a solution of the statement.
• In formal mathematics, ‘true’ means ‘provable’.
Definition. Lambda expressions can be applied to an expression a having the same type as x to form a new expression, (λx, E)(a), which has the same type as E. These can be further simplified to the expression obtained by replacing all occurrences[3] of x in E with a.
Remark. If we give a name to a lambda expression, e.g., define f to be λx, E, then the expression (λx, E)(a) is just the usual notation for function application f (a).[4]
Definition. Two lambda expressions are said to be equivalent if they simplify to the same or
equivalent things when applied to any argument.
Remark. Renaming all occurrences of x in λx, E with a new identifier always produces a lambda
expression that is equivalent to the original. Another common situation where we can simplify a
lambda expression λx, E is when the expression E does not contain x. In this situation (λx, E)(a)
simplifies to just E for every a, and thus we can say that λx, E simplifies to just E in that case.
[2] These refer to free occurrences - see below.
[3] See footnote 2. Also no free identifier in a should become bound as a result of the substitution.
[4] Indeed, in precalculus they usually write f (x) = x³ instead of writing f = (λx, x³), but the latter is usually what they mean.
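Python's `lambda` gives a concrete, if informal, model of this notation (an illustrative sketch only: Python evaluates an application rather than performing symbolic substitution, and the names `cube`, `const`, and `cube2` are ours):

```python
# f = (λx, x³): applying it to 2 "simplifies" to 2³ = 8.
cube = lambda x: x ** 3
print(cube(2))   # f(2) = 8

# A lambda whose body E does not mention x ignores its argument,
# matching the remark that (λx, E)(a) simplifies to E for every a.
const = lambda x: 42
print(const(0) == const("anything"))   # True

# Renaming the bound variable (alpha substitution) yields an
# equivalent lambda expression: λy, y³ behaves exactly like λx, x³.
cube2 = lambda y: y ** 3
print(all(cube(n) == cube2(n) for n in range(10)))   # True
```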
P1 (show)
⋮
Pk (show)
...................................
Q1 (conclude)
⋮
Qn (conclude)
In this notation, the rule looks like a template that we can fill in to create our proofs. In particular,
the lines marked with a (show) need to be justified with a rule of inference that is supplied as
a reason for that line, and those marked with (conclude) can be justified with the given rule of
inference.
Some rules of inference have a premise of the form
(P1, . . . , Pk ⊢ Q)
This is not a statement in the formal system itself, but rather the assertion that Q can be proven
from P1 , . . . , Pk in the formal system. We call an expression of this form a subproof or environment.
Such a premise is satisfied by including a subproof in a proof that shows that Q can be proved
from the given premises (which do not need to be justified by a rule of inference). We denote this
in recipe notation as an indented ‘assume-block’ as illustrated below.
For example, consider the proof-by-cases rule
φ or ψ, (φ ⊢ ρ), (ψ ⊢ ρ) ⊢ ρ
where φ, ψ, and ρ are any mathematical statements. Then we would express this rule in recipe
notation as
Proof by Cases
φ or ψ (show)
Assume φ
ρ (show)
←
Assume ψ
ρ (show)
←
...................................
ρ (conclude)
In this, everything between an Assume and the following ← (the ‘end assumption’ symbol) is a
subproof that demonstrates the corresponding premise in the rule of inference. We indent such
assumption blocks in our proofs. Subproofs can be nested, and the level of indentation corresponds
to the level of nesting. Assumptions (lines that start with Assume) do not need to be justified by
a rule of inference. We say that they are given. Lines marked with (show) must be justified. Lines
marked with (conclude) are justified by the rule itself.
Note that we do include the word "Assume" in the proof itself, but not the words "show" or
"conclude", which are just instructions to the proof author (as opposed to the reader) for how to
justify the indicated lines.
Natural Deduction
We now turn our attention to a formal axiom system that is based on one first formulated by
Gerhard Gentzen in 1934 as a formal system that closely imitates the way mathematicians actually
reason when writing traditional expository proofs.
Propositional Logic
The Statements of Propositional Logic
Definition. Let φ, ψ be statements. Then the five expressions “¬φ”, “φ and ψ”, “φ or ψ”, “φ ⇒
ψ”, and “φ ⇔ ψ” are also statements whose truth values are completely determined by the truth
values of φ and ψ as shown in the following table:
φ  ψ  |  ¬φ  |  φ and ψ  |  φ or ψ  |  φ ⇒ ψ  |  φ ⇔ ψ
T  T  |   F  |     T     |     T    |    T    |    T
T  F  |   F  |     F     |     T    |    F    |    F
F  T  |   T  |     F     |     T    |    T    |    F
F  F  |   T  |     F     |     F    |    T    |    T
We can also write ’not’ for ¬, ’if and only if’ for ⇔, and ’implies’ for ⇒. A statement of the form
’φ ⇒ ψ’ is called a conditional statement or an implication, and can be written in English as ’φ implies
ψ’, ’if φ then ψ’, ’ψ follows from φ’, or ’ψ, if φ’.
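The truth table above can be generated mechanically. The following Python sketch is illustrative (the helper name `implies` is ours; it implements the ⇒ column, which is false only when the hypothesis is true and the conclusion false):

```python
from itertools import product

def implies(p, q):
    """Material implication: p ⇒ q is false only when p is true and q is false."""
    return (not p) or q

# One row per truth assignment to (φ, ψ), in the order T T, T F, F T, F F.
rows = []
for p, q in product([True, False], repeat=2):
    rows.append((p, q, not p, p and q, p or q, implies(p, q), p == q))

header = ("φ", "ψ", "¬φ", "φ and ψ", "φ or ψ", "φ ⇒ ψ", "φ ⇔ ψ")
print(" | ".join(f"{h:^7}" for h in header))
for row in rows:
    print(" | ".join(f"{'T' if v else 'F':^7}" for v in row))
```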
We use parentheses to indicate the order in which the operators are applied, in order to avoid
the confusion that ‘P or Q ⇒ R and S’ might actually mean something like P or (Q ⇒ (R and S)).
In order to cut down on parentheses, we assign a precedence order for our operators, meaning we
apply the operators in the following order (from highest to lowest).
Precedence of Notation
¬, and, or, ⇒, ⇔ (highest to lowest)
and+                            φ, ψ ⊢ (φ and ψ)
and−                            (φ and ψ) ⊢ φ
                                (φ and ψ) ⊢ ψ
or+                             φ ⊢ (φ or ψ)
                                ψ ⊢ (φ or ψ)
or− (proof by cases)            (φ or ψ), (φ ⇒ ρ), (ψ ⇒ ρ) ⊢ ρ
⇒+                              (φ ⊢ ψ) ⊢ (φ ⇒ ψ)
⇒− (modus ponens)               (φ ⇒ ψ), φ ⊢ ψ
⇔+                              (φ ⇒ ψ), (ψ ⇒ φ) ⊢ (φ ⇔ ψ)
⇔−                              (φ ⇔ ψ) ⊢ (φ ⇒ ψ)
                                (φ ⇔ ψ) ⊢ (ψ ⇒ φ)
not+ (proof by contradiction)   (φ ⊢ →←) ⊢ not φ
not− (proof by contradiction)   (not φ ⊢ →←) ⊢ φ
→←+                             φ, (not φ) ⊢ →←
We can also list these rules in template notation that mirrors how they are used in proofs.
Propositional Logic

and+
φ (show)
ψ (show)
........................................................
φ and ψ (conclude)

and−
φ and ψ (show)
........................................................
φ (conclude)
ψ (conclude)

⇒+
Assume φ
    ψ (show)
←
........................................................
φ ⇒ ψ (conclude)

⇒− (modus ponens)
φ (show)
φ ⇒ ψ (show)
........................................................
ψ (conclude)

⇔+
φ ⇒ ψ (show)
ψ ⇒ φ (show)
........................................................
φ ⇔ ψ (conclude)

⇔−
φ ⇔ ψ (show)
........................................................
φ ⇒ ψ (conclude)
ψ ⇒ φ (conclude)

or+
φ (show)
........................................................
φ or ψ (conclude)
ψ or φ (conclude)

or− (proof by cases)
φ or ψ (show)
φ ⇒ ρ (show)
ψ ⇒ ρ (show)
........................................................
ρ (conclude)

→←+
φ (show)
¬φ (show)
........................................................
→← (conclude)

copy
φ (show)
........................................................
φ (conclude)
Remarks:
• The symbol ← is an abbreviation for “end assumption”.
• The symbol →← is called “contradiction” and represents the logical constant false.
• The word Assume is actually entered as part of the proof itself, it is not just an instruction in
the recipe like ’(show)’ and ’(conclude)’.
• The inputs "Assume" and "←" are not themselves statements that you prove or are given, but
rather are inputs to rules of inference that may be inserted into a proof at any time. There is
no useful reason, however, to insert such statements unless you intend to use one of the rules
of inference that requires them as an input.
• The statement following an Assume is the same as any other statement in the proof and can
be used as an input to a rule of inference.
• Statements in an Assume-← block can be used as inputs to rules of inference whose conclusion
is also inside the same block only. Once an Assume is closed with a matching ←, only the
entire block can be used as an input to a rule of inference. The individual statements within
a block are no longer valid outside the block. We usually indent an Assume-← block to
keep track of which statements are valid under which assumptions.
Example 2. Let P and Q be statements. Prove the following case of DeMorgan’s Law, namely that
¬P or ¬Q ⇒ ¬(P and Q)
Proof.
1. Assume ¬P or ¬Q
2.     Assume ¬P
3.         Assume P and Q
4.             P                          by and−; 3
5.             →←                         by →←+; 2,4
6.         ←
7.         ¬(P and Q)                     by not+; 3,5,6
8.     ←
9.     ¬P ⇒ ¬(P and Q)                    by ⇒+; 2,7,8
10.    Assume ¬Q
11.        Assume P and Q
12.            Q                          by and−; 11
13.            →←                         by →←+; 10,12
14.        ←
15.        ¬(P and Q)                     by not+; 11,13,14
16.    ←
17.    ¬Q ⇒ ¬(P and Q)                    by ⇒+; 10,15,16
18.    ¬(P and Q)                         by or−; 1,9,17
19. ←
20. ¬P or ¬Q ⇒ ¬(P and Q)                 by ⇒+; 1,18,19
□
Notice that when a rule of inference has a subproof for a premise, we indicate this by citing the
line numbers for the assumption, the conclusion, and the end of assumption block indicator (←)
e.g., as shown in line 7 above.
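Formal proofs like the one above can also be double-checked semantically by evaluating the statement at every truth assignment. The sketch below is a brute-force model check, which is a different thing from (and no substitute for) a natural-deduction proof; the helper names are ours:

```python
from itertools import product

def implies(p, q):
    """Material implication: false only when p is true and q is false."""
    return (not p) or q

def is_tautology(stmt, nvars):
    """Return True if stmt holds under every assignment of truth values
    to its nvars variables (a semantic check, not a formal proof)."""
    return all(stmt(*vals) for vals in product([True, False], repeat=nvars))

# The statement proved in Example 2, and the two exercise statements:
assert is_tautology(lambda P, Q: implies((not P) or (not Q), not (P and Q)), 2)
assert is_tautology(lambda P, Q: implies(not (P and Q), (not P) or (not Q)), 2)
assert is_tautology(lambda P, Q: (not (P or Q)) == ((not P) and (not Q)), 2)
print("All three De Morgan statements hold under every truth assignment.")
```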
Exercise 3. Give a formal proof for the reverse case of DeMorgan’s Law, namely that
¬(P and Q) ⇒ ¬P or ¬Q
Exercise 4. Give a formal proof for yet another case of DeMorgan’s Law, namely that
¬(P or Q) ⇔ ¬P and ¬Q
Predicate Logic
We can extend Propositional Logic by adding more statements and rules of inference to those we
already have in our formal system. This extended formal system is called Predicate Logic.
Quantifiers
The symbol λ in the lambda expression (λx, E) is an example of a quantifier. The thing that all
quantifiers have in common is that they bind variables. If W is an expression that does not contain
any quantifiers, then every occurrence of every identifier that appears in the expression is said to
be a free occurrence of that identifier.
If a quantifier appears in an expression, there are one or more variables that it binds. All occurrences
of the variables that are in the scope of the quantifier (usually everything to the right of it until a
scope delimiter for that quantifier is encountered) are called bound variables.
Definition. The symbols ∀ and ∃ are quantifiers. The symbol ∀ is called “for all”, “for every”, or
“for each”. The symbol ∃ is called “for some” or “there exists”.
We will encounter more quantifiers beyond just these two and λ.
Statements
Every statement of Propositional Logic is still a statement of Predicate Logic. In addition we define
the following statements.
Definition. If x is any variable and W is a lambda expression that simplifies to a statement when
applied to any expression having the same type as x, then (∀x, W(x)) and (∃x, W(x)) are both
statements.
We say that the scope of the quantifier in (∀x, W(x)) and (∃x, W(x)) is everything inside the outer
parentheses. Sometimes these parentheses are omitted when the scope is clear from context. All
occurrences of x throughout the scope are said to be bound by the quantifier.
Variable declaration
Before using a free identifier for the first time in any expression in our proofs we should tell the
reader what that identifier represents. There are four ways to introduce a new free identifier.
1. It can be declared to be a variable (a variable declaration).
2. It can be declared to be a constant (a constant declaration).
3. It can be defined as temporary new notation, usually as an abbreviation for a larger expression
(a notational definition).
4. It can occur free in an expression preceding the proof itself, such as in the statement of the
theorem, in a premise that is given, or declared globally prior to the start of the proof (globally
declared).
Bound variables do not have to be declared. They can be any identifier you like, as long as that
identifier is not in the scope of more than one quantifier that binds it.
Rules of Inference
The rules of inference for these two quantifiers are as follows.
Predicate Logic∗

∀+
Let s be arbitrary (variable declaration)
    φ(s) (show)
←
........................................................
∀x, φ(x) (conclude)

∀−
∀x, φ(x) (show)
........................................................
φ(t) (conclude)

∃+
φ(t) (show)
........................................................
∃x, φ(x) (conclude)

∃−
∃x, φ(x) (show)
........................................................
For some c, (constant declaration)
φ(c) (conclude)

∗ Restrictions and Remarks
• In ∀+, s must be a new variable in the proof, cannot appear as a free variable in any assumption
or premise, and W(s) cannot contain any constants which were produced by the ∃− rule. The
indentation and ← symbol indicate the scope of the declaration of s. Variables s and x must
have the same type.
• In ∀− and ∃+, no free variable in t may become bound when t is substituted for x in W(x).
Variable x and expression t must have the same type.
• In ∃+, t can be an expression, and W(x) can be the expression obtained by replacing one or
more of the occurrences of t with x. The identifier x cannot occur free in W(t). Variable x and
expression t must have the same type.
• In ∃−, c must be a new identifier in the proof. Also W(c) must immediately follow the
constant declaration for c in the proof. The scope of the declaration continues indefinitely or
until the end of the scope of any subproof block or variable declaration scope that contains
the constant declaration. Variable x and constant c must have the same type.
One consequence of this is that it enforces the restriction on ∀+ that prohibits any constant
declared with ∃− to appear in W(s) because after the application of ∀+ any free occurrence
of c is no longer in the scope of the original declaration (and therefore undeclared).
Equality
Finally, we can complete our definition of logic by adding the rules of inference for equality.
Definition. The equality symbol, =, is defined by the following two rules of inference.
reflexivity      ⊢ (x = x)
substitution     (x = y), φ ⊢ (φ with one or more free occurrences of x replaced by y)
Equality

Reflexivity
........................................................
x = x (conclude)

Substitution∗
x = y (show)
φ (show)
........................................................
φ with any free occurrences of x replaced by y (conclude)
Name Definition
Set builder notation∗      x ∈ { y : φ(y) } ⇔ φ(x)
Subset                     A ⊆ B ⇔ ∀x, x ∈ A ⇒ x ∈ B
Set equality               A = B ⇔ A ⊆ B and B ⊆ A
Power set                  P(A) = { B : B ⊆ A }
Intersection               x ∈ A ∩ B ⇔ x ∈ A and x ∈ B
Union                      x ∈ A ∪ B ⇔ x ∈ A or x ∈ B
Set difference             x ∈ B − A ⇔ x ∈ B and x ∉ A
Complement                 x ∈ A′ ⇔ x ∉ A
Indexed intersection       x ∈ ⋂_{i∈I} Ai ⇔ ∀i, i ∈ I ⇒ x ∈ Ai
Indexed union              x ∈ ⋃_{i∈I} Ai ⇔ ∃i, i ∈ I and x ∈ Ai
Two convenient             ∀x ∈ A, φ(x) ⇔ ∀x, x ∈ A ⇒ φ(x)
abbreviations              ∃x ∈ A, φ(x) ⇔ ∃x, x ∈ A and φ(x)
Partition of a set         P is a partition of A ⇔ (∀S ∈ P, S ≠ ∅ and S ⊆ A) and A = ⋃_{S∈P} S
                           and ∀S ∈ P, ∀T ∈ P, S = T or S ∩ T = ∅
Solution set of W∗∗        { s : W(s) } where W is a lambda expression that returns a statement

∗ Set builder notation and indexed union and intersection are quantifiers that bind the variables y and i in their respective definitions. Thus, for example, y and i can be replaced by alpha substitution.
∗∗ To solve a statement is to find its solution set. The values of s in the solution set must have the same type as the input to W. For multivariable statements the solution set is the set of all ordered tuples that make it true.
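Several entries of the table can be sanity-checked on small finite sets using Python's built-in `set` type. This is an informal illustration; in particular the universe `U` used for the complement is our own assumption, since A′ presupposes an ambient set:

```python
A = {1, 2, 3}
B = {2, 3, 4}
U = {1, 2, 3, 4, 5}   # assumed ambient set for the complement

assert A & B == {2, 3}            # intersection: x ∈ A ∩ B ⇔ x ∈ A and x ∈ B
assert A | B == {1, 2, 3, 4}      # union
assert B - A == {4}               # set difference
assert U - A == {4, 5}            # complement of A relative to U
assert A <= U                     # subset: A ⊆ U

# Partition check: every block is nonempty and ⊆ A, the blocks cover A,
# and distinct blocks are disjoint.
P = [{1}, {2, 3}]
assert all(S and S <= A for S in P)
assert set().union(*P) == A
assert all(S == T or not (S & T) for S in P for T in P)
print("All table entries check out on this example.")
```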
Cartesian Products
Name Definition
Ordered pairs         (x, y) = (u, v) ⇔ x = u and y = v
Ordered n-tuple       (x1, . . . , xn) = (y1, . . . , yn) ⇔ x1 = y1 and · · · and xn = yn
Cartesian product     A × B = { (x, y) : x ∈ A and y ∈ B }
Cartesian product     A1 × · · · × An = { (x1, . . . , xn) : x1 ∈ A1 and · · · and xn ∈ An }
Power of a set        Aⁿ = A × A × · · · × A where there are n occurrences of A in the
                      Cartesian product
Functions
Name Definition
Def of function          f : A → B ⇔ f ⊆ A × B and ∀x ∈ A, ∃!y, (x, y) ∈ f
Alt. function notation   A −f→ B ⇔ f : A → B (f written over the arrow)
Def of f(x)              f : A → B ⇒ (f(x) = y ⇔ (x, y) ∈ f)
Domain                   f : A → B ⇒ A is the domain of f
Codomain                 f : A → B ⇒ B is the codomain of f
Function equality        f = g ⇔ f : A → B and g : A → B and ∀a ∈ A, f(a) = g(a)
Image (of a set)         f : A → B and S ⊆ A ⇒ f(S) = { f(x) : x ∈ S }
Range                    f : A → B ⇒ f(A) is the range of f
Identity map             idA : A → A and ∀x, idA(x) = x
Composition              A −f→ B and B −g→ C ⇒ A −(g◦f)→ C and ∀x, (g ◦ f)(x) = g(f(x))
Injective (one-to-one)   f is injective ⇔ ∀x ∈ A, ∀y ∈ A, f(x) = f(y) ⇒ x = y
Surjective (onto)        f is surjective ⇔ ∀y ∈ B, ∃x ∈ A, y = f(x)
Bijective                f is bijective ⇔ f is injective and f is surjective
Inverse                  g is an inverse of f ⇔
                         f : A → B and g : B → A and f ◦ g = idB and g ◦ f = idA
Invertible               f is invertible ⇔ ∃g, g is an inverse of f
Inverse image            f : A → B and S ⊆ B ⇒ f inv(S) = { x ∈ A : f(x) ∈ S }
Binary operation         Any function ∗ : G × G → G is called a binary operation on G

∗ Another way to define a function is to say that it is a triple (f, A, B) where f is a lambda expression, A is a set of elements the type f can be applied to, and B is a set of elements of the type f outputs. Note that f(a) represents the same element in both definitions.
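For finite sets, the table's definitions can be tested directly by representing f as its set of pairs, which in Python is a `dict`. The helper names below are ours, an informal sketch of the definitions rather than part of the formal development:

```python
# A finite function f : A → B stored as its set of pairs {(x, f(x))}.
A = {1, 2, 3}
B = {"a", "b", "c"}
f = {1: "a", 2: "b", 3: "c"}   # bijective
g = {1: "a", 2: "a", 3: "b"}   # neither injective nor surjective

def is_function(f, A, B):
    """f ⊆ A × B and every x ∈ A is paired with exactly one y (dict keys are unique)."""
    return set(f) == A and set(f.values()) <= B

def is_injective(f):
    """Distinct inputs give distinct outputs."""
    return len(set(f.values())) == len(f)

def is_surjective(f, B):
    """The range f(A) equals the codomain B."""
    return set(f.values()) == B

assert is_function(f, A, B) and is_injective(f) and is_surjective(f, B)
assert is_function(g, A, B) and not is_injective(g) and not is_surjective(g, B)
print("f is a bijection; g is neither injective nor surjective.")
```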
Sequences
Definition. A finite sequence is a function t : In → A where n is a natural number and A is a set.
An infinite sequence is a function t : N+ → A where A is a set. In either case, t (k) is called the kth
term of the sequence.
Remark. It is often convenient to say that t is a finite (resp infinite) sequence if t : On → A (resp.
t : N → A). In this case we say that t (k) is the k + 1st term of the sequence.
Notation. We often write a finite sequence t by listing its terms,
t1, t2, t3, . . . , tn
and similarly write an infinite sequence as
t1, t2, t3, . . .
Remark. Sometimes for readability we might want to enclose a sequence in parenthesis. For
example, we might write “Let t = (1, 2, 3, 4)” instead of “Let t = 1, 2, 3, 4”. In this sense there is
really no distinction between n-tuples and finite sequences.
t0, t1, . . . , tk−1, tk, . . . , tk+n−1
denotes the infinite sequence t such that ti = t_(k+((i−k) Mod n)) for all i ≥ k.
x R y ⇔ (x, y) ∈ R (infix notation)
and
R(x, y) ⇔ (x, y) ∈ R (prefix notation)
Definition. Let R be an equivalence relation on A and a ∈ A. Then the equivalence class of a, denoted,
[a]R , is the set
[a]R = { x : x R a } (equivalence class)
Notation. We often abbreviate [a]R by [a] when the relation R is clear from context.
Theorem. Let R be an equivalence relation on A. Then
∀a, b ∈ A, [a] = [b] ⇔ a R b
and
∀a, b ∈ A, [a] = [b] or [a] ∩ [b] = ∅
We summarize these definitions along with a few others regarding relations in the following table.
Relations∗
∗ Where ∼ is a relation on a set A.

Peano Postulates
In all of the axioms the quantified variables have natural number type, so that in particular we can
only apply the ∀− rule for expressions which also are type natural number. In N4 above and in the
following, P (n) is a statement about a natural number variable n (i.e., P is a lambda expression that
returns a statement when applied to a natural number variable n). Axiom N4 is called mathematical
induction, or simply induction. While not strictly necessary, the following definitions are useful.
Definition (base ten representation). We define the usual base ten representations of natural
numbers such that 1 = σ(0), 2 = σ(1), 3 = σ(2), 4 = σ(3),. . . and so on.
Strong Induction
Theorem (Strong Induction). Let P(n) be any statement about a natural number variable n. Then
(P(0) and ∀k, (∀j ≤ k, P(j)) ⇒ P(σ(k))) ⇒ ∀n, P(n).
Note that for both standard induction and strong induction we can replace the P(0) with P(a) for
some a ∈ N in which case the resulting conclusion is valid for all n ≥ a. This gives us the following
flavors of induction which can be stated in recipe notation.
Induction
induction
P(0) (show)
Let k ∈ N (variable declaration)
    Assume P(k)
        P(k + 1) (show)
    ←
←
........................................................
∀n, P(n) (conclude)

strong induction
P(0) (show)
Let k ∈ N (variable declaration)
    Assume ∀j ≤ k, P(j)
        P(k + 1) (show)
    ←
←
........................................................
∀n, P(n) (conclude)
1 Arithmetic in Z Revisited
1.1 Integers
Theorem (Well Ordering Axiom). Every nonempty set of natural numbers contains a least element,
i.e.
∀S ⊆ N, S ≠ ∅ ⇒ ∃m ∈ S, ∀n ∈ S, m ≤ n.
Notation. If S is a nonempty set of natural numbers, we denote its least element by min (S).
Remark. It can be shown that the following are equivalent: Math Induction, Strong Math Induction,
and the Well Ordering Axiom.
Theorem (Division Algorithm for Integers). Let a, b ∈ Z with b > 0. Then there exist unique
integers q, r ∈ Z such that
a = qb + r and 0 ≤ r < b
Definition. In the Division Algorithm Theorem, we call q the quotient and r the remainder when
a is divided by b. In this situation we also define
a quo b = q
a mod b = r
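The quotient and remainder can be computed in Python. This is an illustrative sketch, not part of the notes' formal development; it relies on the fact that Python's built-in `divmod` uses floored division, which yields exactly the remainder condition 0 ≤ r < b when b > 0:

```python
def quo_mod(a, b):
    """Return (q, r) with a = q*b + r and 0 <= r < b, assuming b > 0.
    Python's divmod floors the quotient, which gives exactly this r."""
    assert b > 0, "the Division Algorithm here requires b > 0"
    q, r = divmod(a, b)
    return q, r

assert quo_mod(17, 5) == (3, 2)      # 17 = 3·5 + 2
assert quo_mod(-17, 5) == (-4, 3)    # -17 = (-4)·5 + 3, and 0 ≤ 3 < 5
```

Note that for a negative dividend the quotient is rounded down (not toward zero) so that the remainder stays nonnegative, matching the theorem's requirement.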
Number Theory

well ordering theorem
S ⊆ N (show)
S ≠ ∅ (show)
........................................................
For some m ∈ S, (constant declaration)
∀s ∈ S, m ≤ s (conclude)

def of min
S ⊆ N (show)
S ≠ ∅ (show)
........................................................
min(S) ∈ S (conclude)

def of min
S ⊆ N (show)
S ≠ ∅ (show)
s ∈ S (show)
........................................................
min(S) ≤ s (conclude)
1.2 Divisibility in Z
Definition (divides). Let a, b ∈ Z and b ≠ 0. Then
b | a ⇔ ∃q ∈ Z, a = qb
Definition (even and odd). Let a ∈ Z. We say that a is even if and only if 2 | a, and we say that a is
odd if and only if a is not even.
Theorem (Bézout’s Lemma). Let a, b ∈ Z not both zero, and d = gcd (a, b). Then ∃s, t ∈ Z, sa+tb =
d and d is the smallest positive integer of this form.
Corollary (alt def of gcd). Let a, b, d ∈ Z, a ≠ 0 or b ≠ 0. Then d = gcd (a, b) if and only if
1. d > 0
2. d | a and d | b
3. ∀c ∈ Z, c | a and c | b ⇒ c | d
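Bézout's Lemma is constructive: the extended Euclidean algorithm produces the coefficients s and t along with the gcd. The following Python sketch is ours (the name `bezout` is not from the notes), shown here for nonnegative inputs not both zero:

```python
def bezout(a, b):
    """Extended Euclidean algorithm: return (d, s, t) with
    d = gcd(a, b) = s*a + t*b, for a, b >= 0 not both zero."""
    old_r, r = a, b          # remainder sequence
    old_s, s = 1, 0          # coefficients of a
    old_t, t = 0, 1          # coefficients of b
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

d, s, t = bezout(12, 18)
assert d == 6 and s * 12 + t * 18 == 6   # gcd(12, 18) = 6 = (-1)·12 + 1·18
```

Each step preserves the invariant old_r = old_s·a + old_t·b, which is why the final remainder comes with its Bézout coefficients.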
Divisibility in Z

divides
a, b, q ∈ Z (show)
a = qb (show)
........................................................
b | a (conclude)

divides
a, b ∈ Z (show)
b | a (show)
........................................................
For some q ∈ Z, (constant declaration)
a = qb (conclude)

Divisibility in Z (cont.)

gcd
a, b, d ∈ Z (show)
a ≠ 0 or b ≠ 0 (show)
d > 0 (show)
d | a and d | b (show)
Let c ∈ Z (variable declaration)
    Assume c | a and c | b
        c ≤ d (show)
    ←
←
........................................................
d = gcd(a, b) (conclude)

gcd
d = gcd(a, b) (show)
........................................................
a ≠ 0 or b ≠ 0 (conclude)
d > 0 (conclude)
d | a (conclude)
d | b (conclude)

gcd
d = gcd(a, b) (show)
c | a (show)
c | b (show)
........................................................
c ≤ d (conclude)
1.3 Primality in Z
Definition. Let p ∈ Z − { 0, ±1 }. We say that p is prime if and only if ∀c ∈ Z, c | p ⇒ c ∈ {±1, ±p}.
Definition. Let p ∈ Z. We say p is composite if and only if p ∉ { 0, ±1 } and p is not prime.
Remark. Notice that the numbers 0, 1, − 1 are neither prime nor composite. Hence “composite” does
not mean “not prime”.
Theorem. Let p ∈ Z − { 0, ±1 }. Then
p is prime ⇔ ∀b, c ∈ Z, p | bc ⇒ p | b or p | c
Theorem (Fundamental Theorem of Arithmetic). Every n ∈ Z − { 0, ±1 } can be written uniquely in the form
n = ±p1^e1 p2^e2 · · · pk^ek
where pi is the ith positive prime, k and ek are positive integers, and each ei ∈ N.
Notation. It is commonplace to write the prime factorization of an integer by omitting any prime
factor whose exponent is zero in the expression given by the Fundamental Theorem. Thus we can
say that the prime factorization of n is
n = ±p1^e1 p2^e2 · · · pk^ek
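The factorization can be computed by trial division. The helper below is our own illustrative sketch (adequate only for modest n ≥ 2); it returns the (prime, exponent) pairs with zero exponents already omitted:

```python
def factor(n):
    """Trial-division prime factorization of an integer n >= 2,
    returned as a list of (prime, exponent) pairs in increasing order."""
    factors = []
    p = 2
    while p * p <= n:
        e = 0
        while n % p == 0:   # divide out p as many times as possible
            n //= p
            e += 1
        if e:
            factors.append((p, e))
        p += 1
    if n > 1:               # whatever remains is itself prime
        factors.append((n, 1))
    return factors

assert factor(360) == [(2, 3), (3, 2), (5, 1)]   # 360 = 2³ · 3² · 5
```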
Primality in Z

prime
p ∈ Z − {0, ±1} (show)
Let c ∈ Z (variable declaration)
    Assume c | p
        c ∈ {±1, ±p} (show)
    ←
←
........................................................
p is prime (conclude)

prime
p is prime (show)
........................................................
p ∉ {0, ±1} (conclude)

prime
p is prime (show)
c | p (show)
........................................................
c ∈ {±1, ±p} (conclude)

Primality in Z (cont.)

composite
p is not prime (show)
p ∉ {0, ±1} (show)
........................................................
p is composite (conclude)

composite
p is composite (show)
........................................................
p is not prime (conclude)
p ∉ {0, ±1} (conclude)
For some a, b, (constant declaration)
1 < a, b < |p| and p = ±ab (conclude)
Definition. Let n ∈ N+ and a, b ∈ Z. Then
a ≡ₙ b ⇔ n | a − b
Remark. Note that in the definition of Zn, [x] is the equivalence class of x with respect to ≡ₙ.
In the following table all variables have type integer, n is a positive integer, and the equivalence
classes are for the relation ≡ₙ.
Congruence in Z

≡ₙ
n | a − b (show)
........................................................
a ≡ₙ b (conclude)

≡ₙ
a ≡ₙ b (show)
........................................................
n | a − b (conclude)

Zn
x ∈ Zn (show)
........................................................
For some a ∈ Z, (constant declaration)
x = [a] (conclude)

Zn
k, j ∈ {0, 1, 2, · · · , n − 1} (show)
k ≠ j (show)
........................................................
[k] ≠ [j] (conclude)
2.2 Arithmetic in Zn
Theorem. Let n ∈ N+ and a, b, c, d ∈ Z with a ≡ₙ b and c ≡ₙ d. Then
a + c ≡ₙ b + d
and
a · c ≡ₙ b · d
Equivalently,
[a + c] = [b + d]
and
[a · c] = [b · d]
Remark. We usually use infix notation when applying binary operators to their arguments, i.e., we
write (a f b) instead of f (a, b).
Definition. Let n ∈ N+ .
⊕ = { ((A, B), C) : ∃a, b ∈ Z, A = [a], B = [b], and C = [a + b] }
⊙ = { ((A, B), C) : ∃a, b ∈ Z, A = [a], B = [b], and C = [a · b] }
Remark. This theorem allows us to use infix notation to write the definitions more conveniently in
this form:
[a] ⊕ [b] = [a + b]
[a] ⊙ [b] = [a · b]
Arithmetic in Zn

modular arithmetic
a, b, c, d ∈ Z (show)
a ≡ₙ b (show)
c ≡ₙ d (show)
........................................................
a + c ≡ₙ b + d (conclude)
a · c ≡ₙ b · d (conclude)

modular arithmetic
a, b ∈ Z (show)
........................................................
[a] ⊕ [b] = [a + b] (conclude)
[a] ⊙ [b] = [a · b] (conclude)
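That ⊕ and ⊙ are well defined, i.e. independent of which representatives of [a] and [b] are chosen, can be spot-checked in Python for a particular modulus (n = 6 is our choice for this informal sketch):

```python
n = 6

def cls(a):
    """Canonical representative of the class [a] in Z6: the remainder mod 6."""
    return a % n

# [a] ⊕ [b] = [a + b] and [a] ⊙ [b] = [a · b] give the same class no matter
# which representatives of [a] and [b] we start from.
for a in range(-12, 12):
    for b in range(-12, 12):
        assert cls(a + b) == cls(cls(a) + cls(b))
        assert cls(a * b) == cls(cls(a) * cls(b))
print("⊕ and ⊙ are well defined on Z6 (checked over many representatives).")
```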
2.3 Algebra in Zn
As is frequently the convention, we will sometimes write st as an abbreviation for s ⊙ t as long
as it is clear from context what the missing multiplication is.
Any number that has a multiplicative inverse is called a unit. Two nonzero numbers whose product
is zero are called zero divisors. In these terms the following theorem says that p is prime precisely
when Zp has no zero divisors, and equivalently, every nonzero element of Zp has a multiplicative
inverse.
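This claim can be checked informally by brute force. The helpers below are our own names, and the search for an inverse is exhaustive rather than clever; the point is only to see the dichotomy between units and zero divisors in small Zn:

```python
def units(n):
    """Classes [a] of Zn that have a multiplicative inverse (found by search)."""
    return {a for a in range(1, n) if any((a * x) % n == 1 for x in range(n))}

def zero_divisors(n):
    """Nonzero classes [a] with [a]·[b] = [0] for some nonzero [b]."""
    return {a for a in range(1, n) if any((a * b) % n == 0 for b in range(1, n))}

assert units(6) == {1, 5}
assert zero_divisors(6) == {2, 3, 4}          # 2·3 ≡ 0, 4·3 ≡ 0 (mod 6)
assert zero_divisors(7) == set()              # 7 prime: no zero divisors
assert units(7) == {1, 2, 3, 4, 5, 6}         # every nonzero class is a unit
print("In Z7 every nonzero element is a unit; Z6 has zero divisors.")
```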
3 Rings
3.1 Definition and Examples of Rings
Definition (ring). A ring is a triple (R, +, ·) where R is a set and +, · are binary operations on R such
that for all x, y, z ∈ R,
1. x + (y + z) = (x + y) + z (associativity of +)
2. x+y= y+x (commutativity of +)
3. ∃t ∈ R, ∀x ∈ R, t + x = x = x + t (identity of +)
4. ∃u ∈ R, x + u = t (inverse of +)
5. x · (y · z) = (x · y) · z (associativity of ·)
6. x · (y + z) = (x · y) + (x · z) and (y + z) · x = (y · x) + (z · x) (distributivity of ·, +)
Remark. The t in #4 refers to any t described in #3, so that technically #4 should say:
∀t ∈ R, (∀x ∈ R, t + x = x = x + t) ⇒ ∀x ∈ R, ∃u ∈ R, x + u = t
Notation. We write 0R for the unique additive identity of a ring (R, +, ·).
Lemma (uniq of add inv). Let (R, +, ·) be a ring and u, v, x ∈ R. If u+x = 0R = x+u and v+x = 0R = x+v
then u = v (i.e. the additive inverse of x in a ring is unique)
Definition (subtraction). Let (R, +, ·) be a ring and a, b ∈ R. Then a − b is defined to be a + (−b).
Types of Rings
Definition (commutative ring). A ring (R, +, ·) is a commutative ring if and only if ∀a, b ∈ R, ab = ba.
Definition (ring with identity). A ring (R, +, ·) is a ring with identity if and only if ∃i ∈ R, ∀x ∈
R, ix = x = xi.
Lemma (uniq of mult identity). Let (R, +, ·) be a ring and u, v ∈ R. If
∀x ∈ R, ux = x = xu and vx = x = xv
then u = v (i.e., the multiplicative identity of a ring with identity is unique).
Notation. If R is a ring with identity we write 1R for the unique multiplicative identity of R.
Lemma (uniq of mult inverse). Let (R, +, ·) be a ring with identity 1R and x, u, v ∈ R. If
ux = 1R = xu and vx = 1R = xv
then u = v (i.e., the multiplicative inverse of x, when it exists, is unique).
Notation. If R is a ring with identity we write x−1 for the unique multiplicative inverse of x in R.
Definition (integral domain). A ring (R, +, ·) is an integral domain if and only if it is a commutative
ring with identity 1R ≠ 0R and ∀a, b ∈ R, ab = 0R ⇒ a = 0R or b = 0R.
Definition (field). A ring (R, +, ·) is a field if and only if it is a commutative ring with identity 1R ≠ 0R
and ∀a ∈ R − { 0R }, ∃x ∈ R, ax = 1R (i.e., every nonzero element has a multiplicative inverse).
Rings

ring
+ : R × R → R (show)
· : R × R → R (show)
Let x, y, z ∈ R (variable declaration)
    x + (y + z) = (x + y) + z (show)
    x + y = y + x (show)
    ∃0R ∈ R, ∀x, 0R + x = x = x + 0R (show)
    ∃−x ∈ R, −x + x = x + (−x) = 0R (show)
    x · (y · z) = (x · y) · z (show)
    x · (y + z) = x · y + x · z (show)
    (y + z) · x = y · x + z · x (show)
←
........................................................
(R, +, ·) is a ring (conclude)

ring
(R, +, ·) is a ring (show)
x, y, z ∈ R (show)
........................................................
x + (y + z) = (x + y) + z (conclude)
x + y = y + x (conclude)
0R ∈ R (conclude)
0R + x = x = x + 0R (conclude)
−x ∈ R (conclude)
−x + x = x + (−x) = 0R (conclude)
x · (y · z) = (x · y) · z (conclude)
x · (y + z) = (x · y) + (x · z) (conclude)
(y + z) · x = (y · x) + (z · x) (conclude)

Rings (cont.)

field
(R, +, ·) is a commutative ring (show)
(R, +, ·) is a ring with identity (show)
1R ≠ 0R (show)
Let x ∈ R − {0R} (variable declaration)
    ∃y ∈ R, x · y = 1R (show)
←
........................................................
(R, +, ·) is a field (conclude)

field
(R, +, ·) is a field (show)
........................................................
(R, +, ·) is a commutative ring (conclude)
(R, +, ·) is a ring with identity (conclude)
1R ≠ 0R (conclude)

field
(R, +, ·) is a field (show)
x ∈ R − {0R} (show)
........................................................
x⁻¹ ∈ R (conclude)
x · x⁻¹ = x⁻¹ · x = 1R (conclude)

subtraction
(R, +, ·) is a ring (show)
x, y ∈ R (show)
........................................................
x − y = x + (−y) (conclude)
Subrings
Definition (subring). Let (R, +, ·) be a ring and S ⊆ R. (S, +, ·) is a subring of (R, +, ·) if and only if
(S, +, ·) is a ring (where + and · denote the restrictions of the original +, · to S).
Theorem (Cartesian Product of Rings). Let (R, +, ·), (S, ∔, •) be rings and define
(r, s) ⊕ (u, v) = (r + u, s ∔ v)
(r, s) ⊙ (u, v) = (r · u, s • v)
for all (r, s), (u, v) ∈ R × S. Then (R × S, ⊕, ⊙) is a ring.
Remark. In the previous theorem if we use + for the addition in both rings R, S and abbreviate
products by concatenation, then the previous definitions become simply
(r, s) ⊕ (u, v) = (r + u, s + v)
(r, s) ⊙ (u, v) = (ru, sv)
subring theorem
(R, +, ·) is a ring (show)
S ⊆ R (show)
S ≠ ∅ (show)
Let x, y ∈ S (variable declaration)
x−y∈S (show)
x·y∈S (show)
←
........................................................
(S, +, ·) is a subring of (R, +, ·) (conclude)
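The subring theorem's test is easy to run by brute force on subsets of Zn; this sketch (helper names invented, not from the notes) checks nonemptiness and closure under subtraction and multiplication:

```python
# A small sketch of the subring theorem's test, applied inside Z_12.

def is_subring(S, n):
    """Check the subring theorem's conditions for S ⊆ Z_n:
    S nonempty, closed under subtraction and multiplication mod n."""
    if not S:
        return False
    return all((x - y) % n in S and (x * y) % n in S
               for x in S for y in S)

multiples_of_3 = {0, 3, 6, 9}   # a subring of Z_12
odds = {1, 3, 5, 7, 9, 11}      # not closed under subtraction: 3 - 1 = 2
```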
Theorem (the Algebra Theorem I). Let (R, +, ·) be a ring and a, b, c ∈ R. Then
1. a + b = a + c ⇔ b = c
2. a + b = c ⇔ a = c − b
3. a + c = c ⇔ a = 0R
4. a = b ⇔ a − b = 0R
5. −(a + b) = (−a) + (−b)
6. −(a − b) = −a + b
7. (−a)(−b) = ab
8. If R has identity then (−1R ) a = −a
Corollary (to the Sign Theorem). Let (R, +, ·) be a ring and a, b, c ∈ R. If a ≠ 0R and a = bc then
b ≠ 0R and c ≠ 0R .
Notation. Let (R, +, ·) be a ring, a ∈ R, and n ∈ N+ . Then
an = a · a · · · · · a (n factors)
and
na = a + a + · · · + a (n summands)
Lemma. Let (R, +, ·) be a ring with identity and a, x, y ∈ R. Then
ax = 1R and ya = 1R ⇒ x = y
Corollary (uniqueness of multiplicative inverse). Let (R, +, ·) be a ring with identity and a, x, y ∈ R. Then
ax = xa = 1R and ya = ay = 1R ⇒ x = y
Definition (unit). Let (R, +, ·) be a ring with identity and a ∈ R. If a has a multiplicative inverse
then we say a is a unit in R.
Definition (U (R)). Let (R, +, ·) be a ring with identity. The set of all units of R is denoted U (R).
Definition (associate). Let (R, +, ·) be a commutative ring with identity and a, b ∈ R. We say a is an
associate of b if and only if a = ub for some u ∈ U (R). If a is an associate of b we write a ∼ b.
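As a concrete illustration of units and associates (the helpers below are ours, not the book's), in Z12 the units are exactly the classes relatively prime to 12:

```python
from math import gcd

def units(n):
    """U(Z_n): the classes a with a multiplicative inverse, i.e. gcd(a, n) = 1."""
    return {a for a in range(n) if gcd(a, n) == 1}

def associates(b, n):
    """All a with a = u*b mod n for some unit u: the associates of b in Z_n."""
    return {(u * b) % n for u in units(n)}

# U(Z_12) = {1, 5, 7, 11}, and the associates of 2 in Z_12 are {2, 10}.
```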
Theorem (the Algebra Theorem II). Let (R, +, ·) be a ring with identity and a, b, x, y ∈ R, and
a ∈ U (R). Then
1. ax = b ⇔ x = a−1 b
2. xa = b ⇔ x = ba−1
3. a−1 ∈ U (R) and (a−1 )−1 = a
Remark. Remember the BAN ON FRACTIONS! You may not write b/a instead of a−1 b or ba−1
because in a non-commutative ring these last two expressions might not be equal! So the symbol
b/a is undefined for elements in an arbitrary ring.
Theorem (the Algebra Thm III). Let (R, +, ·) be an integral domain, a, b, c ∈ R, and a ≠ 0R . Then
ab = ac ⇒ b = c
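A quick computation illustrates why the integral-domain hypothesis matters: in Z6 (which has zero divisors) cancellation fails, while in Z7 it holds:

```python
# Cancellation fails in Z_6: 2*1 = 2*4 (mod 6) but 1 != 4.
a, b, c, n = 2, 1, 4, 6
assert (a * b) % n == (a * c) % n   # ab = ac in Z_6
assert b != c                       # ...yet b != c: no cancellation

# In Z_7 (a field, hence an integral domain) the implication does hold:
n = 7
for a in range(1, n):
    for b in range(n):
        for c in range(n):
            if (a * b) % n == (a * c) % n:
                assert b == c
```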
Definition (zero divisor). Let (R, +, ·) be a ring and a ∈ R. Then a is called a zero divisor of R if
and only if
a ≠ 0R and ∃b ∈ R, b ≠ 0R and (ab = 0R or ba = 0R )
Remark. As usual in mathematics, we will often omit parentheses for associative operations such
as the addition and multiplication in a ring. We also use the precedence of operators with the
ring multiplication having a higher precedence than ring addition so that e.g. a + bc means a + (bc)
and not (a + b)c.
Algebra in Rings

unit & inverse (to prove):
(R, +, ·) is a ring with identity (show)
a, x ∈ R (show)
ax = xa = 1R (show)
........................................................
a is a unit of (R, +, ·) (conclude)
x = a−1 (conclude)

unit & inverse (to use):
(R, +, ·) is a ring with identity (show)
a is a unit of (R, +, ·) (show)
........................................................
a−1 ∈ R (conclude)
a · a−1 = a−1 · a = 1R (conclude)

associate (to prove):
(R, +, ·) is a comm. ring with identity (show)
a, b ∈ R (show)
u ∈ U (R) (show)
a = ub (show)
........................................................
a ∼ b (conclude)

associate (to use):
(R, +, ·) is a comm. ring with identity (show)
a, b ∈ R (show)
a ∼ b (show)
........................................................
For some u ∈ U (R), (constant declaration)
a = ub (conclude)
Definition. Let (R, +, ·), (S, ⊕, ⊙) be rings. Then ring R is isomorphic to ring S if and only if there
exists a bijective function f : R → S such that
1. ∀a, b ∈ R, f (a + b) = f (a) ⊕ f (b)
2. ∀a, b ∈ R, f (a · b) = f (a) ⊙ f (b)
If R is isomorphic to S we write R ≅ S.
Definition. Let (R, +, ·), (S, ⊕, ⊙) be rings and f : R → S. The map f is a homomorphism (or ring
homomorphism) if and only if
1. ∀a, b ∈ R, f (a + b) = f (a) ⊕ f (b)
2. ∀a, b ∈ R, f (a · b) = f (a) ⊙ f (b)
Remark. Note that in most situations we use +, · for the addition and multiplication (and
concatenation for ·) in both R and S so that requirements #1, #2 in the definitions of isomorphism and
homomorphism above would be written:
1. ∀a, b ∈ R, f (a + b) = f (a) + f (b)
2. ∀a, b ∈ R, f (a · b) = f (a) · f (b)
in this notation.
Ring Homomorphisms

ring homomorphism (to prove):
(R, +, ·) is a ring (show)
(S, ⊕, ⊙) is a ring (show)
f : R → S (show)
Let x, y ∈ R (variable declaration)
    f (x + y) = f (x) ⊕ f (y) (show)
    f (x · y) = f (x) ⊙ f (y) (show)
←
........................................................
f is a ring homomorphism (conclude)

ring homomorphism (to use):
(R, +, ·) is a ring (show)
(S, ⊕, ⊙) is a ring (show)
f : R → S is a ring homomorphism (show)
x, y ∈ R (show)
........................................................
f (x + y) = f (x) ⊕ f (y) (conclude)
f (x · y) = f (x) ⊙ f (y) (conclude)
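The recipe above can be spot-checked numerically for the reduction map from Z to Zn (an illustrative sketch; the exhaustive check over a finite sample is ours, not a proof):

```python
# Check the two homomorphism conditions for f: Z -> Z_6, f(r) = r mod 6,
# on a finite sample of inputs.

def f(r, n=6):
    return r % n

sample = range(-20, 21)
additive = all(f(a + b) == (f(a) + f(b)) % 6 for a in sample for b in sample)
multiplicative = all(f(a * b) == (f(a) * f(b)) % 6 for a in sample for b in sample)
```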
4 Arithmetic in F[x]
4.1 Polynomials
Definition (eventually zero). Let (R, ⊕, ⊙) be a ring. An infinite sequence of elements of R,
a0 , a1 , a2 , . . . , an , . . .
is said to be eventually zero if and only if there exists N ∈ N such that for all i ≥ N, ai = 0R .
Definition (polynomial). Let (R, ⊕, ⊙) be a ring. A polynomial with indeterminate x and coefficients
in R is an expression of the form
a0 + a1 x + a2 x^2 + · · · + an x^n
where a0 , a1 , . . . , an ∈ R. In summation notation,
a0 + a1 x + a2 x^2 + · · · + an x^n = Σ_{i=0}^{n} ai x^i
If some coefficient ai = 0R we can omit the summand ai xi when writing the polynomial. Similarly,
if R has identity, we can abbreviate 1R xi as simply xi . Finally, we can also permute the order of the
summands in a polynomial to obtain another equivalent expression.
Definition. Two polynomials are equal if and only if their corresponding sequences of coefficients
are equal.
Definition (R[x]). Let (R, ⊕, ⊙) be a ring. Then R[x] is the set of all polynomials with indeterminate
x and coefficients in R.
Remark. Notice that we can consider R to be a subset of R[x] by identifying a ∈ R with the constant
polynomial a in R[x].
Definition. Let (R, ⊕, ⊙) be a ring and P, Q ∈ R[x]. Then there exist a0 , . . . , an , b0 , . . . , bm ∈ R such
that P = a0 + a1 x + · · · + an x^n and Q = b0 + b1 x + · · · + bm x^m . Define ak = 0R for k > n, bk = 0R for
k > m, and s = max(m, n). Then
P ⊕ Q = Σ_{i=0}^{s} (ai + bi ) x^i
and
P ⊙ Q = Σ_{k=0}^{n+m} ( Σ_{i=0}^{k} ai b_{k−i} ) x^k
Remark. This is just the ordinary addition and multiplication of polynomials, except with the
coefficients in an arbitrary ring. We usually write +, · (or concatenation) for ⊕, ⊙ when it is clear
from context.
Remark. The book uses f (x) to denote an arbitrary element of R[x], but this notation can easily be
confused with the value of a function f at x, so we will simply write f for an arbitrary polynomial
in R[x].
Theorem (Div Alg in F[x]). Let F be a field, f, g ∈ F[x], and g ≠ 0F[x] . Then there exist unique
polynomials q, r ∈ F[x] such that
f = gq + r and (r = 0F[x] or deg(r) < deg(g))
Remark. In the Division Algorithm Theorem for polynomials, we call q the quotient and r the
remainder when f is divided by g just as we did in the integer case.
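Here is one way to carry out the division algorithm in Zp[x] for a prime p (an illustrative sketch, not the book's algorithm; polynomials are coefficient lists [a0, a1, ..., an], and the inverse of the leading coefficient is computed with Fermat's little theorem):

```python
def poly_divmod(f, g, p):
    """Return (q, r) with f = g*q + r and (r = 0 or deg r < deg g), over Z_p."""
    f = f[:]
    inv = pow(g[-1], p - 2, p)            # LC(g)^(-1) mod p (p prime)
    q = [0] * max(len(f) - len(g) + 1, 1)
    while len(f) >= len(g) and any(f):
        shift = len(f) - len(g)
        coef = (f[-1] * inv) % p
        q[shift] = coef
        for i, gi in enumerate(g):        # subtract coef * x^shift * g from f
            f[i + shift] = (f[i + shift] - coef * gi) % p
        while len(f) > 1 and f[-1] == 0:  # trim leading zeros
            f.pop()
    return q, f

# Divide x^3 + 2x + 1 by x + 1 over Z_3: quotient x^2 + 2x, remainder 1.
q, r = poly_divmod([1, 2, 0, 1], [1, 1], 3)
```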
In the following recipes, (R, ⊕, ⊙) is a ring.
Polynomials

polynomial (to use):
f ∈ R[x] − { 0R } (show)
........................................................
For some n ∈ N, a0 , . . . , an ∈ R, (constant declaration)
f = a0 + · · · + an x^n and an ≠ 0R (conclude)

polynomial equality (to prove):
f, g ∈ R[x] (show)
Let i ∈ N (variable declaration)
    coeff( f, i) = coeff(g, i) (show)
←
........................................................
f = g (conclude)

degree (to prove):
f = a0 + a1 x + · · · + an x^n ∈ R[x] (show)
an ≠ 0R (show)
........................................................
deg( f ) = n (conclude)

degree (to use):
f ∈ R[x] (show)
deg( f ) = n (show)
........................................................
coeff( f, n) ≠ 0R (conclude)
∀i > n, coeff( f, i) = 0R (conclude)
Definition (divides). Let F be a field and f, g ∈ F[x]. Then
f | g ⇔ ∃q ∈ F[x], g = q f
If f | g we say f divides g.
Lemma. Let F be a field and f, g ∈ F[x] − { 0F[x] }. If f | g then deg( f ) ≤ deg(g).
Lemma. Let F be a field, f ∈ F[x] − { 0F }, and c = LC( f ). Then c−1 ∈ F and c−1 f is monic.
Definition (gcd). Let F be a field, f, g, d ∈ F[x], and either f ≠ 0F[x] or g ≠ 0F[x] . Then d = gcd( f, g)
if and only if
1. d is monic
2. d | f and d | g
3. ∀c ∈ F[x], c | f and c | g ⇒ deg(c) ≤ deg(d)
Remark. Technically the symbol gcd (a, b) is not well defined until we show that there is only one
such polynomial in the following theorem. Until then we can say that d is a gcd(a, b) if it satisfies
the three properties listed above.
Theorem (Bézout for polynomials). Let F be a field, f, g, d ∈ F[x], ( f ≠ 0F[x] or g ≠ 0F[x] ), and
d = gcd( f, g). Then ∃s, t ∈ F[x], s f + tg = d and d is the unique monic polynomial of smallest degree
that is of this form.
Corollary (alt def of gcd). Let F be a field, f, g, d ∈ F[x], and either f ≠ 0F[x] or g ≠ 0F[x] . Then
d = gcd( f, g) if and only if
1. d is monic
2. d | f and d | g
3. ∀c ∈ F[x], c | f and c | g ⇒ c | d
Lemma. Let F be a field, f, g, q, r ∈ F[x], g ≠ 0F[x] , and f = gq + r. Then
gcd( f, g) = gcd(g, r)
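The identity gcd(f, g) = gcd(g, r) turns directly into the Euclidean algorithm for Zp[x], sketched here with invented helper names (p prime; the answer is rescaled to be monic, as the definition of gcd requires):

```python
def poly_mod(f, g, p):
    """Remainder of f divided by g over Z_p (coefficient lists [a0, ..., an])."""
    f = f[:]
    inv = pow(g[-1], p - 2, p)            # LC(g)^(-1) mod p
    while len(f) >= len(g) and any(f):
        coef = (f[-1] * inv) % p
        shift = len(f) - len(g)
        for i, gi in enumerate(g):
            f[i + shift] = (f[i + shift] - coef * gi) % p
        while len(f) > 1 and f[-1] == 0:
            f.pop()
    return f

def poly_gcd(f, g, p):
    """Monic gcd(f, g) in Z_p[x], by repeated use of gcd(f, g) = gcd(g, r)."""
    while any(g):
        f, g = g, poly_mod(f, g, p)
    inv = pow(f[-1], p - 2, p)            # rescale so the gcd is monic
    return [(c * inv) % p for c in f]

# gcd(x^2 - 1, x^2 + 2x + 1) over Z_3 is x + 1 (the list [1, 1]).
```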
Divisibility in F[x]

divides (to prove):
a, b, q ∈ F[x] (show)
a = qb (show)
........................................................
b | a (conclude)

divides (to use):
a, b ∈ F[x] (show)
b | a (show)
........................................................
For some q ∈ F[x], (constant declaration)
a = qb (conclude)

gcd (to prove):
a, b, d ∈ F[x] (show)
a ≠ 0F or b ≠ 0F (show)
d is monic (show)
d | a and d | b (show)
Let c ∈ F[x] (variable declaration)
    Assume c | a and c | b
        deg(c) ≤ deg(d) (show)
    ←
←
........................................................
d = gcd(a, b) (conclude)

gcd (to use):
d = gcd(a, b) (show)
........................................................
a ≠ 0F or b ≠ 0F (conclude)
d is monic (conclude)
d | a (conclude)
d | b (conclude)

gcd (to use):
d = gcd(a, b) (show)
c | a (show)
c | b (show)
........................................................
deg(c) ≤ deg(d) (conclude)
Theorem (units in R[x]). Let R be an integral domain. Then
U (R[x]) = U (R)
i.e., the units in R[x] are the constant polynomials u where u is a unit of R.
Corollary (units in F[x]). Let F be a field. The units of F[x] are the nonzero constant polynomials,
i.e., U (F[x]) = F − { 0F }.
Lemma (alt def of ∼). Let F be a field and f, g ∈ F[x] − { 0F[x] }. Then
f | g and g | f ⇔ f ∼ g
Lemma (associates have same degree). Let F be a field and a, b ∈ F[x] − { 0F }. If a ∼ b then deg(a) = deg(b).
Definition (irreducible). Let F be a field and p ∈ F[x] − F. We say p is irreducible if and only if
∀c ∈ F[x], c | p ⇒ c ∈ U (F[x]) or c ∼ p.
Definition. Let F be a field and p ∈ F[x]. We say p is reducible if and only if p is non-constant and
p is not irreducible.
Remark. The definitions of irreducible and reducible in F[x] correspond to the definitions of prime
and composite in Z.
Theorem (alternate def of reducible). Let F be a field and p ∈ F[x]. We say p is reducible if and
only if there exist g, h ∈ F[x] such that
1. p = gh
2. 0 < deg(g) < deg(p)
Remark. Note that in the previous theorem, since 0 < deg(g) < deg(p) it follows that 0 < deg(h) <
deg(p) also.
Corollary (linear polynomials are irreducible). Let F be a field and p ∈ F[x]. If deg(p) = 1 then p
is irreducible.
Theorem (alternate def of irreducible). Let F be a field and p ∈ F[x]. The following are equivalent
(T.F.A.E.).
1. p is irreducible
2. ∀b, c ∈ F[x], p | bc ⇒ p | b or p | c
3. ∀r, s ∈ F[x], p = rs ⇒ r ∈ U (F[x]) or s ∈ U (F[x]).
Corollary. Let F be a field, p ∈ F[x] irreducible, n ∈ N+ , and a1 , . . . , an ∈ F[x]. Then
p | a1 a2 · · · an ⇒ p | ai for some i ∈ { 1, 2, . . . , n }
Theorem (Fundamental Theorem of Arithmetic for F[x]). Let F be a field. Every nonconstant
polynomial f ∈ F[x] can be expressed as a product of irreducible polynomials in the form
f = c p1^e1 p2^e2 p3^e3 · · · pk^ek
where c ∈ F, each pi is a distinct monic irreducible polynomial in F[x], and each ei ∈ N. This expression
is unique up to reordering of the factors.
Note that in the following we identify F with the constant polynomials in F[x]. For example,
F[x] − F is the set of polynomials with positive degree.
Irreducibility in F[x]

irreducible (to prove):
p ∈ F[x] − F (show)
Let c ∈ F[x] (variable declaration)
    Assume c | p
        c ∈ U (F[x]) or c ∼ p (show)
    ←
←
........................................................
p is irreducible (conclude)

irreducible (to use):
p is irreducible (show)
........................................................
deg p > 0 (conclude)

irreducible (to use):
p is irreducible (show)
c | p (show)
........................................................
c ∈ U (F[x]) or c ∼ p (conclude)

reducible (to prove):
c is not irreducible (show)
c ∉ F (show)
........................................................
c is reducible (conclude)

reducible (to use):
c is reducible (show)
........................................................
c is not irreducible (conclude)
c ∉ F (conclude)
Definition (root). Let R be a commutative ring, f ∈ R[x], and a ∈ R. We say a is a root of f if and
only if f (a) = 0R .
Theorem (Remainder Theorem). Let F be a field, f ∈ F[x], and a ∈ F. Then there exists q ∈ F[x]
such that
f = q · (x − a) + f (a)
i.e. the remainder when f is divided by x − a is f (a).
Corollary (Factor Theorem). Let F be a field, f ∈ F[x], and a ∈ F. Then a is a root of f if and only if
(x − a) is a factor of f .
Corollary (to the Remainder Theorem II). Let F be a field, f ∈ F[x]. If deg( f ) ≥ 2 and f is
irreducible then f has no roots in F.
Corollary (to the Remainder Theorem III). Let F be a field, f ∈ F[x]. If deg f = 2 or deg( f ) = 3
then
f is irreducible ⇔ f has no roots in F.
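For degree 2 or 3 this corollary gives a finite irreducibility test over F = Zp: just search for roots. A sketch (helper names invented):

```python
def poly_eval(f, a, p):
    """Evaluate f (coefficient list [a0, ..., an]) at a, over Z_p."""
    return sum(c * pow(a, i, p) for i, c in enumerate(f)) % p

def is_irreducible_deg2or3(f, p):
    """Root test for irreducibility; valid only when deg(f) is 2 or 3."""
    assert len(f) - 1 in (2, 3) and f[-1] % p != 0
    return all(poly_eval(f, a, p) != 0 for a in range(p))

# x^2 + 1 is irreducible over Z_3 (no root), but reducible over Z_5
# since 2^2 + 1 = 5 ≡ 0 (mod 5).
```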
Corollary (to the Remainder Theorem IV). Let F be a field, f ∈ F[x] − { 0F }, and n = deg( f ).
Then f has at most n roots in F.
Corollary (to the Remainder Theorem V). Let F be an infinite field and f, g ∈ F[x]. Then
f = g ⇔ f and g induce the same polynomial function on F
Polynomial Functions

Remainder Theorem (to use):
f ∈ F[x] (show)
a ∈ F (show)
........................................................
For some q ∈ F[x], (constant declaration)
f = q · (x − a) + f (a) (conclude)

Remainder Theorem (to use):
f, g ∈ F[x] (show)
F is infinite (show)
........................................................
f = g ⇔ f and g induce the same function on F (conclude)
Definition (congruence mod p). Let F be a field and f, g, p ∈ F[x] with deg(p) > 0. Then
f ≡ g (mod p) ⇔ p | f − g
Remark. The textbook writes f = g (mod p) for f ≡ g (mod p).
Remark. Note that in the definition of F[x]/(p), [ f ] is the equivalence class of f with respect to
≡ (mod p). We will write F[x]p = F[x]/(p).
In the following table, all free variables have type F[x] and equivalence classes are with respect to
≡ (mod p).
Congruence in F[x]

≡ (mod p) (to prove):
p | f − g (show)
........................................................
f ≡ g (mod p) (conclude)

≡ (mod p) (to use):
f ≡ g (mod p) (show)
........................................................
p | f − g (conclude)

F[x]p (to use):
z ∈ F[x]p (show)
........................................................
For some f ∈ F[x], (constant declaration)
z = [ f ] (conclude)

F[x]p (to use):
k, j ∈ { f ∈ F[x] : f = 0F or deg( f ) < deg(p) } (show)
k ≠ j (show)
........................................................
[k] ≠ [ j] (conclude)
Theorem (polynomial modular arithmetic). Let F be a field, f, g, h, i, p ∈ F[x], and deg(p) > 0. If
f ≡ h (mod p) and g ≡ i (mod p) then
f + g ≡ h + i (mod p)
and
f · g ≡ h · i (mod p)
Remark. This theorem allows us to use infix notation to write the definitions more conveniently in
this form:
[ f ] ⊕ [g] = [ f + g]
[ f ] ⊙ [g] = [ f · g]
Theorem (F[x]p is a ring). Let F be a field, p ∈ F[x], and deg(p) > 0. Then (F[x]p , ⊕, ⊙) is a
commutative ring with identity, and 1F[x]p = [1F ].
Theorem (F is a subring of F[x]p ). Let F be a field, p ∈ F[x], deg(p) > 0, and define
F∗ = { [c] : c ∈ F }
Then (F∗ , ⊕, ⊙) is a subring of F[x]p and F∗ is isomorphic to F.
Remark. We often identify c ∈ F with [c] ∈ F[x]p and simply say that F is a subring of F[x]p .
Arithmetic in F[x]p

modular arithmetic (to prove):
a, b, c, d, p ∈ F[x] (show)
a ≡ b (mod p) (show)
c ≡ d (mod p) (show)
........................................................
a + c ≡ b + d (mod p) (conclude)
a · c ≡ b · d (mod p) (conclude)

modular arithmetic (to use):
a, b ∈ F[x] (show)
........................................................
[a] ⊕ [b] = [a + b] (conclude)
[a] ⊙ [b] = [a · b] (conclude)
Theorem (F[x]p for irreducible p). Let F be a field, p ∈ F[x], deg(p) > 0. The following are
equivalent (T.F.A.E.).
1. p is irreducible.
2. F[x]p is a field.
3. F[x]p is an integral domain.
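As a concrete instance of this theorem, taking F = Z2 and p = x^2 + x + 1 (irreducible over Z2) gives a field F[x]p with four elements. The sketch below (our own encoding of classes as coefficient pairs, not from the notes) checks that every nonzero class has an inverse:

```python
def gf4_mul(u, v):
    """Multiply [u], [v] in Z_2[x]/(x^2 + x + 1); a pair (a0, a1) encodes
    the class [a0 + a1 x]. Reduction rule: x^2 = x + 1."""
    a0, a1 = u
    b0, b1 = v
    c0 = a0 * b0                    # constant term
    c1 = a0 * b1 + a1 * b0          # x term
    c2 = a1 * b1                    # x^2 term -> replaced by x + 1
    return ((c0 + c2) % 2, (c1 + c2) % 2)

elements = [(0, 0), (1, 0), (0, 1), (1, 1)]
# Every nonzero class has a multiplicative inverse, so F[x]_p is a field:
inverses = {u: v for u in elements[1:] for v in elements[1:]
            if gf4_mul(u, v) == (1, 0)}
```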
Definition. Let F be a field, p, f ∈ F[x], deg(p) > 0, n ∈ N+ , a0 , . . . , an ∈ F, and f = a0 + a1 x + · · · + an x^n .
Define f : F[x]p → F[x]p by
f (z) = [a0 ] ⊕ ([a1 ] ⊙ z) ⊕ · · · ⊕ ([an ] ⊙ z^n ) for all z ∈ F[x]p
Remark. If we identify F with F∗ ⊆ F[x]p , then this function f is just an extension of our original
function f from F to F[x]p .
Theorem (extension field). Let F be a field, p ∈ F[x], and p irreducible. Then F[x]p is an extension
field of F which contains a root of p.
Corollary (existence of extension fields). Let F be a field, f ∈ F[x], and deg( f ) > 0. There exists
an extension field K of F containing a root of f .
Theorem (ideal generated by c1 , . . . , cn ∈ R). Let R be a commutative ring with identity and
c1 , . . . , cn ∈ R. The set
I = { r1 c1 + r2 c2 + · · · + rn cn : r1 , . . . , rn ∈ R }
is an ideal of R.
Definition (principal and finitely generated ideals). The ideal I in the previous theorem is called
the ideal generated by { c1 , . . . , cn }. If n = 1 then I is called a principal ideal. Since { c1 , . . . , cn } is a
finite set, we say that I is finitely generated.
Remark. Note that in the definition of R/I, [r] is the equivalence class of r with respect to ≡ (mod I).
Theorem (equivalence class mod I). Let R be a ring, a ∈ R, and I an ideal of R. Then
[a] = { a + i : i ∈ I }
Definition (coset). The set
a + I = { a + i : i ∈ I } = [a]
is called the left coset of a mod I. The notation a + I is called coset notation for the equivalence
class [a].
Theorem (modular arithmetic mod I). Let R be a ring, I an ideal of R, and a, b, c, d ∈ R. If
a ≡ b (mod I) and c ≡ d (mod I) then
a + c ≡ b + d (mod I)
and
ac ≡ bd (mod I)
Remark. This theorem allows us to use infix notation to write the definitions more conveniently in
this form:
[a] ⊕ [b] = [a + b]
[a] ⊙ [b] = [a · b]
(a + I) ⊕ (b + I) = (a + b) + I
(a + I) ⊙ (b + I) = ab + I
Theorem (R/I is a ring). Let R be a ring and I an ideal of R. Then (R/I, ⊕, ⊙) is a ring.
Definition (quotient ring). Let R be a ring and I an ideal of R. Then (R/I, ⊕, ⊙) is called a quotient
ring.
Notation. As in Zn and F[x]p we will often abbreviate [a] as a. We will also often abbreviate ⊕ as
+ and ⊙ as ×, ·, or concatenation.
Theorem (kernel and injectivity). Let R, S be rings and f : R → S a ring homomorphism. Then
f is injective ⇔ Ker( f ) = { 0R }
Definition (quotient map). Let R be a ring, I an ideal of R, and define f : R → R/I by ∀r ∈ R, f (r) =
[r]. The map f is called the quotient map (or natural homomorphism).
Theorem (First Isomorphism Theorem). Let R, S be rings and f : R → S a surjective ring
homomorphism. Then
S ≅ R/ Ker( f )
Quotient Ring
f : R → S a ring homomorphism (show)
f is surjective (show)
........................................................
S ≅ R/ Ker( f ) (conclude)
7 Groups
7.1 Groups
Definition (group). Let G be a set and ∗ : G × G → G a binary operator. The pair (G, ∗) is a group
if and only if
1. ∀a, b, c ∈ G, a ∗ (b ∗ c) = (a ∗ b) ∗ c (associative)
2. ∃e ∈ G, ∀a ∈ G, a ∗ e = a = e ∗ a (identity)
3. ∀a ∈ G, ∃d ∈ G, a ∗ d = e = d ∗ a (inverses)
Remark. We will often abbreviate a ∗ b by ab. We will also often refer to the group (G, ∗)
as simply G.
Remark. The e in condition #3 refers to any e satisfying condition #2, so technically it should be
written
∀e ∈ G, (∀a ∈ G, a ∗ e = a = e ∗ a) ⇒ (∀a ∈ G, ∃d ∈ G, a ∗ d = e = d ∗ a)
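The three group axioms can be checked by brute force for a small finite operation; the helper below is an invented illustration, not a recipe from the notes:

```python
def is_group(G, op):
    """Brute-force check of closure, associativity, identity, inverses."""
    G = list(G)
    closed = all(op(a, b) in G for a in G for b in G)
    assoc = all(op(a, op(b, c)) == op(op(a, b), c)
                for a in G for b in G for c in G)
    ids = [e for e in G if all(op(a, e) == a == op(e, a) for a in G)]
    if not (closed and assoc and ids):
        return False
    e = ids[0]
    return all(any(op(a, d) == e == op(d, a) for d in G) for a in G)

# (Z_6, +) is a group; (Z_6, ·) is not, since 0 has no inverse.
Z6_add = is_group(range(6), lambda a, b: (a + b) % 6)
Z6_mul = is_group(range(6), lambda a, b: (a * b) % 6)
```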
Types of Groups
Definition (abelian group). A group (G, ∗) is abelian if and only if ∀a, b ∈ G, a ∗ b = b ∗ a (i.e., ∗ is
commutative).
Definition (finite group). A group (G, ∗) is finite if and only if G is a finite set.
Definition (cardinality). If S is a finite set, then # (S) denotes the number of elements in the finite
set S. Two sets (finite or infinite) have the same cardinality if and only if there is a bijection between
them.
Remark. The book writes | S | for the number of elements in S, but we will use #(S).
Definition (order of a group). If (G, ∗) is a finite group then #(G) is called the order of the group.
Examples of Groups
Theorem (additive group of a ring). Let (R, +, ·) be a ring. Then (R, +) is a group.
Theorem (group of units in a ring). Let (R, +, ·) be a ring with identity. Then (U (R) , ·) is a group.
Definition (dihedral group). The dihedral group Dn is the group of symmetries of a regular n-gon Pn , i.e.,
Dn = Sym (Pn )
Theorem (direct product). Let (G, ∗) and (H, ·) be groups and define ⊙ : (G × H) × (G × H) → G × H
by
(a, b) ⊙ (c, d) = (a ∗ c, b · d)
for all (a, b), (c, d) ∈ G × H. Then (G × H, ⊙) is a group.
Definition (direct product group). The group (G × H, ⊙) is called the direct product of the groups
G and H.
Groups

Group (to prove):
∗ : G × G → G (show)
e ∈ G (show)
Let a, b, c ∈ G (variable declaration)
    a ∗ (b ∗ c) = (a ∗ b) ∗ c (show)
    a ∗ e = e ∗ a = a (show)
    u ∈ G (show)
    a ∗ u = u ∗ a = e (show)
←
........................................................
(G, ∗) is a group (conclude)

Group (to use):
(G, ∗) is a group (show)
a, b, c ∈ G (show)
........................................................
∗ : G × G → G (conclude)
a ∗ (b ∗ c) = (a ∗ b) ∗ c (conclude)
eG ∈ G (conclude)
eG ∗ a = a ∗ eG = a (conclude)
a−1 ∈ G (conclude)
a ∗ a−1 = eG = a−1 ∗ a (conclude)
Notation. Let (G, ∗) be a group. Then eG denotes the unique identity element of G.
Notation. Let (G, ∗) be a group and a ∈ G. Then a−1 denotes the unique inverse of a.
Notation. Let (G, ∗) be a group, a ∈ G, and n ∈ N+ . Then
an = a ∗ a ∗ · · · ∗ a (n factors)
and
a−n = a−1 ∗ a−1 ∗ · · · ∗ a−1 (n factors)
and
a0 = eG
Theorem (laws of exponents). Let (G, ∗) be a group, a ∈ G, and n, m ∈ Z. Then
an am = an+m
and
(an )m = anm
Notation (Additive notation). For abelian groups we sometimes write ∗ as +, an as na, and a−1
as −a.
Definition (order of an element). Let (G, ∗) be a group, k ∈ N+ , and a ∈ G. We say a has order k if
and only if k is the smallest positive integer such that ak = eG . In other words, a has order k if
ak = eG and ∀j ∈ N+ , a j = eG ⇒ j ≥ k
If a has order k for some k ∈ N+ we say a has finite order, otherwise we say a has infinite order. If
a has finite order we define | a | to be the order of a.
Corollary (to order theorem). Every element of a finite group has finite order.
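The corollary can be observed directly in the finite group U(Z9) under multiplication mod 9 (helper names invented):

```python
from math import gcd

def order(a, n):
    """Smallest k in N+ with a^k ≡ 1 (mod n); a must be a unit mod n."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

# U(Z_9) = {1, 2, 4, 5, 7, 8}; every element has finite order,
# e.g. 2 has order 6 and 8 has order 2.
U9 = [a for a in range(9) if gcd(a, 9) == 1]
orders = {a: order(a, 9) for a in U9}
```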
Properties of Groups

order of an element (to prove):
n ∈ N+ (show)
a ∈ G (show)
an = eG (show)
Let m ∈ N+ (variable declaration)
    Assume am = eG
        n ≤ m (show)
    ←
←
........................................................
a has order n (conclude)

order of an element (to use):
a has order n in group G (show)
........................................................
an = eG (conclude)

order of an element (to use):
a has order n in group G (show)
am = eG (show)
........................................................
n | m (conclude)
7.3 Subgroups
Definition (subgroup). Let (G, ∗) be a group and H ⊆ G. Then (H, ∗) is a subgroup of (G, ∗) if and
only if (H, ∗) is a group (where ∗ denotes the restriction of the original ∗ to H).
Definition (proper subgroup). Let (H, ∗) be a subgroup of (G, ∗). Then (H, ∗) is a proper subgroup
of (G, ∗) if and only if H ≠ G and H ≠ { eG }.
Theorem (subgroup theorem). Let (G, ∗) be a group, H ⊆ G, and H , ∅. Then (H, ∗) is a subgroup
of (G, ∗) if and only if
1. ∀a, b ∈ H, ab ∈ H
2. ∀a ∈ H, a−1 ∈ H
Theorem (subgroup theorem II). Let (G, ∗) be a group and H ⊆ G a finite nonempty set. Then
(H, ∗) is a subgroup of (G, ∗) if and only if
∀a, b ∈ H, ab ∈ H
Cyclic groups
Definition (cyclic subgroup). Let (G, ∗) be a group and a ∈ G. Define
hai = { an : n ∈ Z }
Theorem (cyclic groups are abelian). Let (G, ∗) be a group, a ∈ G. Then (hai , ∗) is an abelian
subgroup of (G, ∗).
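A short computation (invented helper) exhibits cyclic subgroups inside U(Z9) under multiplication mod 9; in a finite group the nonnegative powers of a already give all of ⟨a⟩:

```python
def cyclic_subgroup(a, n):
    """<a> in U(Z_n): collect powers of a until they repeat."""
    H, x = set(), 1
    while x not in H:
        H.add(x)
        x = (x * a) % n
    return H

# <2> is all of U(Z_9) = {1, 2, 4, 5, 7, 8}, so U(Z_9) is cyclic with
# generator 2, while <8> = {1, 8} is a proper (abelian) subgroup.
```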
Definition. Let S ⊆ G and (G, ∗) be a group. The subgroup generated by S is the smallest subgroup
of G which contains S. It is denoted by hSi.
Theorem (subgroup generated by S). Let S ⊆ G and (G, ∗) a group. Then hSi is the set of all
products of elements of S and their inverses.
Notation. If S ⊆ G we write S−1 for the set of all inverses of elements of S, i.e.,
S−1 = { s−1 : s ∈ S }
abelian (to prove):
Let a, b ∈ G (variable declaration)
    a ∗ b = b ∗ a (show)
←
........................................................
(G, ∗) is abelian (conclude)

abelian (to use):
(G, ∗) is abelian (show)
a, b ∈ G (show)
........................................................
a ∗ b = b ∗ a (conclude)
Theorem (classification of cyclic groups). Every infinite cyclic group is isomorphic to (Z, +).
Every finite cyclic group of order n is isomorphic to (Zn , +).
Theorem (properties of group homomorphisms). Let (G, ∗), (H, ·) be groups, f : G → H a group
homomorphism, and a ∈ G. Then
1. f (eG ) = eH
2. f (a−1 ) = f (a)−1
3. ( f (G), ·) is a subgroup of (H, ·)
4. If f is injective then G ≅ f (G)
Corollary (Cayley’s theorem for finite groups). Every group of order n is isomorphic to a subgroup
of Sn .
Group Homomorphisms

group homomorphism (to prove):
(G, ∗) is a group (show)
(H, ·) is a group (show)
f : G → H (show)
Let x, y ∈ G (variable declaration)
    f (x ∗ y) = f (x) · f (y) (show)
←
........................................................
f is a group homomorphism (conclude)

group homomorphism (to use):
(G, ∗) is a group (show)
(H, ·) is a group (show)
f : G → H is a group homomorphism (show)
x, y ∈ G (show)
........................................................
f (x ∗ y) = f (x) · f (y) (conclude)
f (eG ) = eH (conclude)
f (x−1 ) = f (x)−1 (conclude)
Definition (congruence mod K). Let (K, ∗) be a subgroup of (G, ∗) and a, b ∈ G. Then
a ≡ b (mod K) ⇔ a ∗ b−1 ∈ K
Theorem (equivalence class mod K). Let (K, ∗) be a subgroup of (G, ∗) and a ∈ G. Then
[a] = { ka : k ∈ K }
Definition (right coset). The set
Ka = { ka : k ∈ K } = [a]
is called the right coset of a mod K (or a right coset of K). The notation Ka is called coset notation
for the equivalence class [a].
Theorem (cosets are the same size). Let (K, ∗) be a subgroup of (G, ∗) and a ∈ G. Then there exists
a bijection f : K → Ka. Thus, if K is finite, then every coset of K has the same number of elements.
Definition. Let (K, ∗) be a subgroup of (G, ∗). Define [G : K] to be the number of distinct right cosets
of K. The number [G : K] is called the index of K in G.
Theorem (Lagrange). Let (K, ∗) be a subgroup of a finite group (G, ∗). Then
#(G) = #(K)[G : K]
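Lagrange's theorem can be spot-checked numerically in G = U(Z15), where #(G) = 8 (an illustrative sketch; the helper names are ours):

```python
from math import gcd

n = 15
G = {a for a in range(n) if gcd(a, n) == 1}   # U(Z_15), #(G) = 8

def gen(a):
    """Cyclic subgroup <a> of U(Z_n) under multiplication mod n."""
    H, x = set(), 1
    while x not in H:
        H.add(x)
        x = (x * a) % n
    return H

# Every cyclic subgroup's size divides #(G) = 8, as Lagrange predicts:
sizes = {a: len(gen(a)) for a in G}
```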
Classification of Groups I

Lagrange (to use):
(G, ∗) is a finite group (show)
(K, ∗) is a subgroup of G (show)
........................................................
#(G) = #(K)[G : K] (conclude)
#(K) | #(G) (conclude)

Lagrange (to use):
(G, ∗) is a finite group (show)
a ∈ G (show)
........................................................
| a | divides #(G) (conclude)
a#(G) = eG (conclude)
Definition (disjoint cycles). Two cycles (a1 a2 · · · ak ), (b1 b2 · · · bm ) ∈ Sn are disjoint if and only if
{ a1 , . . . , ak } ∩ { b1 , . . . , bm } = ∅.
Theorem (even and odd permutations). No element of Sn is both a product of an even number of
transpositions and also a product of an odd number of transpositions.
subset (to prove):
Let x ∈ A (variable declaration)
    x ∈ B (show)
←
........................................................
A ⊆ B (conclude)

subset (to use):
A ⊆ B (show)
x ∈ A (show)
........................................................
x ∈ B (conclude)

intersection (to prove):
x ∈ A (show)
x ∈ B (show)
........................................................
x ∈ A ∩ B (conclude)

intersection (to use):
x ∈ A ∩ B (show)
........................................................
x ∈ A (conclude)
x ∈ B (conclude)

union (to prove):
x ∈ A or x ∈ B (show)
........................................................
x ∈ A ∪ B (conclude)

union (to use):
x ∈ A ∪ B (show)
........................................................
x ∈ A or x ∈ B (conclude)

complement (to prove):
x ∉ A (show)
........................................................
x ∈ A′ (conclude)

complement (to use):
x ∈ A′ (show)
........................................................
x ∉ A (conclude)
partition (to prove):
Let S ∈ P (variable declaration)
    S ⊆ A (show)
←
Let S, T ∈ P (variable declaration)
    Assume S ≠ T
        S ∩ T = { } (show)
    ←
←
Let x ∈ A (variable declaration)
    For some S ∈ P,
        x ∈ S (show)
←
........................................................
P is a partition of A (conclude)

partition (to use):
P is a partition of A (show)
S ∈ P (show)
........................................................
S ⊆ A (conclude)

partition (to use):
P is a partition of A (show)
S, T ∈ P (show)
........................................................
S ∩ T = { } or S = T (conclude)

partition (to use):
P is a partition of A (show)
x ∈ A (show)
........................................................
For some S ∈ P, (constant declaration)
x ∈ S (conclude)
Power of a set

........................................................
An = A × · · · × A (n copies) (conclude)
image (to prove):
f : A → B (show)
S ⊆ A (show)
x ∈ S (show)
........................................................
f (x) ∈ f (S) (conclude)

image (to use):
f : A → B (show)
S ⊆ A (show)
y ∈ f (S) (show)
........................................................
For some x ∈ S, (constant declaration)
y = f (x) (conclude)

composition (to use):
f : A → B (show)
g : B → C (show)
........................................................
(g ◦ f ) : A → C (conclude)

composition (to use):
f : A → B (show)
g : B → C (show)
x ∈ A (show)
........................................................
(g ◦ f )(x) = g( f (x)) (conclude)

injective (to prove):
f : A → B (show)
Let x, y ∈ A (variable declaration)
    Assume f (x) = f (y)
        x = y (show)
    ←
←
........................................................
f is injective (conclude)

injective (to use):
f : A → B (show)
f is injective (show)
f (x) = f (y) (show)
........................................................
x = y (conclude)

surjective (to prove):
f : A → B (show)
Let b ∈ B (variable declaration)
    ∃a ∈ A, f (a) = b (show)
←
........................................................
f is surjective (conclude)

surjective (to use):
f : A → B (show)
f is surjective (show)
b ∈ B (show)
........................................................
For some a ∈ A, (constant declaration)
b = f (a) (conclude)

bijective (to prove):
f is surjective (show)
f is injective (show)
........................................................
f is bijective (conclude)

bijective (to use):
f is bijective (show)
........................................................
f is surjective (conclude)
f is injective (conclude)
symmetric (to prove):
Let x, y ∈ A (variable declaration)
    Assume x ∼ y
        y ∼ x (show)
    ←
←
........................................................
∼ is symmetric (conclude)

symmetric (to use):
∼ is symmetric (show)
x ∼ y (show)
........................................................
y ∼ x (conclude)

transitive (to prove):
Let x, y, z ∈ A (variable declaration)
    Assume x ∼ y and y ∼ z
        x ∼ z (show)
    ←
←
........................................................
∼ is transitive (conclude)

transitive (to use):
∼ is transitive (show)
x ∼ y (show)
y ∼ z (show)
........................................................
x ∼ z (conclude)