
CSIT123 Computing and Cyber Security Fundamentals
Week 4: Information Processing in Humans and Machines and Artificial Intelligence (Part 3)
Dr. Huseyin Hisil and Dr. Xueqiao Liu

Initially prepared by Dr. Dung Duong


Overview
● Digital Logic Structures
● The von Neumann Model
● Artificial Intelligence
Logic structures
● Two kinds:
○ store information
○ do not store information
■ Referred to as decision elements or combinatorial logic structures
● Combinatorial logic structures
○ Outputs depend strictly on the combination of input values
being applied to the structure right now
○ Not at all dependent on any past history, since no information is
stored internally
○ Three useful logic circuits: one-bit adder, decoder, mux.
Combinational Logic Circuits
● A One-Bit Adder (a.k.a. a Full Adder, FA for short)
• Binary addition proceeds from right to left, one column at a time,
adding the two digits from the two values plus the carry in, and generating a sum
digit and a carry to the next column.

Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus
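A minimal Python sketch (not from the textbook) of the behaviour described above, built only from AND, OR, and NOT helpers. The four-bit adder on the next slide simply chains four of these full adders, rippling the carry from right to left.

```python
# Illustrative full adder and ripple-carry adder built from AND/OR/NOT primitives.

def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

def XOR(a, b):
    # XOR expressed with AND, OR, NOT: (a AND NOT b) OR (NOT a AND b)
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    """One column of binary addition: returns (sum, carry_out)."""
    s = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return s, carry_out

def four_bit_adder(a_bits, b_bits):
    """Ripple-carry adder: a_bits/b_bits are lists [bit3, bit2, bit1, bit0]."""
    carry = 0
    result = []
    for a, b in zip(reversed(a_bits), reversed(b_bits)):  # right to left
        s, carry = full_adder(a, b, carry)
        result.insert(0, s)
    return result, carry

# Example: 0110 (6) + 0011 (3) = 1001 (9), carry out 0
print(four_bit_adder([0, 1, 1, 0], [0, 0, 1, 1]))
```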


Combinational Logic Circuits
● A Four-Bit Adder

Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus


Combinational Logic Circuits
● A Four-Bit Adder
Combinational Logic Circuits
● A Four-Bit Adder / Subtractor
Combinational Logic Circuits
● Decoder
• Property: exactly one of its outputs is 1 and all the rest are 0s.
• The one output that is logically 1 is the output corresponding to the input pattern
that it is expected to detect. In general, decoders have n inputs and 2^n outputs.
• We say the output line that detects the input pattern is asserted. That is, that output
line has the value 1, rather than 0 as is the case for all the other output lines.
• Use: determine how to interpret a bit pattern.

Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus
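A behavioural sketch of a decoder, assuming a simple Python model rather than gates: for n input bits, exactly one of the 2^n output lines is asserted.

```python
# Illustrative n-to-2^n decoder: exactly one output is 1 (asserted).

def decoder(inputs):
    """inputs: list of n bits (most significant first). Returns 2^n outputs."""
    n = len(inputs)
    index = int("".join(str(b) for b in inputs), 2)
    outputs = [0] * (2 ** n)
    outputs[index] = 1        # the line matching the input pattern is asserted
    return outputs

# A two-input decoder: input pattern 10 asserts output line 2
print(decoder([1, 0]))   # [0, 0, 1, 0]
```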


Combinational Logic Circuits
● Mux (Multiplexer or data selector)
• Property: select one of the inputs (A or B) and connect it to the output
• The select signal S determines which input is connected to output
• In general, a mux with 2^n inputs requires n select lines

Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus


Combinational Logic Circuits
● Mux (Multiplexer or data selector)
• The following 4-input mux requires 2 select lines. Which input does it
select?
• Can you construct the gate-level representation for an eight-input mux?
How many select lines must you have?

Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus
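A behavioural sketch of a mux in Python (the ordering of inputs A–D against select values 0–3 is an assumption, not taken from the figure): the n select bits pick one of the 2^n inputs.

```python
# Illustrative multiplexer: n select lines choose one of 2^n inputs.

def mux(inputs, select):
    """inputs: list of 2^n data bits; select: list of n select bits (MSB first)."""
    index = int("".join(str(b) for b in select), 2)
    return inputs[index]

# A four-input mux needs two select lines; here S = 10 selects input C.
a, b, c, d = 0, 0, 1, 0
print(mux([a, b, c, d], [1, 0]))   # 1 (input C)
# An eight-input mux would need three select lines, since 2^3 = 8.
```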


Combinational Logic Circuits
● The Programmable Logic Array (PLA)
• It consists of an array of AND gates (called an
AND array) followed by an array of OR gates
(called an OR array)
•The number of AND gates corresponds to the
number of input combinations (rows) in the truth
table. For n-input logic functions, we need a PLA with
2^n n-input AND gates. The number of OR gates
corresponds to the number of logic functions we
wish to implement, that is, the number of output
columns in the truth table. We say we program the
connections from AND gate outputs to OR gate
inputs to implement our desired logic functions.
Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus
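A rough behavioural sketch of the PLA idea in Python: each output function is "programmed" by listing which truth-table rows (AND-gate outputs) connect to its OR gate. Using the full-adder functions here is just an example choice.

```python
# Illustrative PLA: 2^n AND gates (one per truth-table row) feed programmable OR gates.

from itertools import product

def make_pla(programmed_rows):
    """programmed_rows: one set per output function, containing the input rows
    (tuples of bits) whose AND-gate outputs are connected to that OR gate."""
    def evaluate(inputs):
        return [1 if tuple(inputs) in rows else 0 for rows in programmed_rows]
    return evaluate

# Program a PLA for the full adder's two outputs (inputs a, b, carry_in).
sum_rows   = {(0,0,1), (0,1,0), (1,0,0), (1,1,1)}   # rows with an odd number of 1s
carry_rows = {(0,1,1), (1,0,1), (1,1,0), (1,1,1)}   # rows with two or more 1s
adder = make_pla([sum_rows, carry_rows])

for row in product([0, 1], repeat=3):
    print(row, adder(list(row)))   # [sum, carry_out] for each truth-table row
```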
Combinational Logic Circuits
● Logical Completeness
• Because a PLA consists of only AND gates, OR gates, and inverters, any
logic function can be implemented, provided that enough AND, OR, and
NOT gates are available.
• We say that the set of gates {AND, OR, NOT} is logically complete
because we can build a circuit to carry out the specification of any truth
table we wish without using any other kind of gate.
• Question: Is there any single two-input logic gate that is logically
complete? E.g., is the NAND gate logically complete? Hint: Can I
implement a NOT gate with a NAND gate? If yes, can I then implement
an AND gate using a NAND gate followed by a NOT gate? If yes, can I
implement an OR gate using just AND gates and NOT gates? If all of the
above is true, then the NAND gate is logically complete, and I can
implement any desired logic function as described by its truth table
with a barrel of NAND gates.
Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus
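A quick Python sketch answering the question above: starting from NAND alone, NOT, AND, and OR can all be built, which is why {NAND} is logically complete.

```python
# Illustrative check that NAND alone is logically complete.

def NAND(a, b):
    return 1 - (a & b)

def NOT(a):
    return NAND(a, a)            # NOT from a single NAND

def AND(a, b):
    return NOT(NAND(a, b))       # NAND followed by NOT

def OR(a, b):
    return NAND(NOT(a), NOT(b))  # De Morgan: a OR b = NOT(NOT a AND NOT b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))
```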
Basic Storage Elements
● The R-S Latch
• It can store one bit of information, a 0 or a 1.
• Two 2-input NAND gates are connected such that the output of each is
connected to one of the inputs of the other. The remaining inputs S and
R are normally held at a logic level 1.
• R: “reset” or “clear” the element, i.e., set it to zero.
• S: “set” the element, i.e., set it to one.
(Figure: R-S latch built from two cross-coupled NAND gates, shown with example input and output values.)
Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus
Basic Storage Elements
● The R-S Latch
• The Quiescent State: We describe the quiescent (or quiet) state of a
latch as the state when the latch is storing a value, either 0 or 1.
(Figure: the R-S latch in its quiescent state storing 1 and storing 0, and the effect of momentarily driving R to 0 (clear) or S to 0 (set).)
Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus
Basic Storage Elements
● The Gated D Latch
• To be useful, it is necessary to control when a latch is set and when it is
cleared. A simple way to accomplish this is with the gated D latch, which
consists of the R-S latch plus additional gating on its inputs.
• The latch is set to the value of D, but only when WE (write enable) is
asserted (i.e., WE == 1). When WE is not asserted (i.e., WE == 0), the
signals S and R are both equal to 1, so the value stored in the latch
remains unchanged.

Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus
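A behavioural (not gate-level) sketch of the gated D latch in Python, assuming the write-enable semantics described above; the class and method names are illustrative only.

```python
# Illustrative behavioural model of a gated D latch (R-S latch plus input gating).

class GatedDLatch:
    def __init__(self):
        self.stored = 0                 # the one bit held by the internal R-S latch

    def drive(self, d, we):
        # When WE is asserted the latch follows D; when WE is 0, both S and R
        # presented to the R-S latch are 1 and the stored bit is unchanged.
        if we == 1:
            self.stored = d
        return self.stored

latch = GatedDLatch()
print(latch.drive(d=1, we=1))   # 1  -> latch set while write enable is asserted
print(latch.drive(d=0, we=0))   # 1  -> D ignored, value unchanged
print(latch.drive(d=0, we=1))   # 0  -> latch cleared
```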


The Concept of Memory
● Addressability
• The number of bits stored in each memory location is the memory’s
addressability. A 2-gigabyte memory (written 2GB) is a memory
consisting of 2,147,483,648 memory locations, each containing one
byte (i.e., eight bits) of storage.
• Most memories are byte-addressable. The reason is historical; most
computers got their start processing data, and one character stroke on
the keyboard corresponds to one 8-bit ASCII code. If the memory is
byte-addressable, then each ASCII character occupies one location in
memory, allowing it to be accessed and changed easily.
The Concept of Memory
● A 2^2-by-3-Bit Memory
• The memory has an address space of four locations and an
addressability of three bits. A memory of size 2^2 requires two bits to
specify the address. We describe the two-bit address as A[1:0]. A
memory of addressability three stores three bits of information in each
memory location. We describe the three bits of data as D[2:0]. In both
cases, our notation A[high:low] and D[high:low] reflects the fact that
we have numbered the bits of address and data from right to left, in
order, starting with the rightmost bit, which is numbered 0. The
notation [high:low] means a sequence of high − low + 1 bits such that
“high” is the bit number of the leftmost (or high) bit number in the
sequence and “low” is the bit number of the rightmost (or low) bit
number in the sequence.
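A small Python model of the 2^2-by-3-bit memory described above (the class and method names are illustrative): four addressable locations, each storing three bits.

```python
# Illustrative model of a 2^2-by-3-bit memory: address A[1:0], data D[2:0].

class SmallMemory:
    def __init__(self):
        self.locations = [[0, 0, 0] for _ in range(4)]   # address space = 4 locations

    def write(self, address, data_bits):
        self.locations[address] = list(data_bits)        # addressability = 3 bits

    def read(self, address):
        return self.locations[address]

mem = SmallMemory()
mem.write(0b10, [1, 0, 1])          # store D[2:0] = 101 at address A[1:0] = 10
print(mem.read(0b10))               # [1, 0, 1]
```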
The Concept of Memory

Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus


Sequential Logic Circuits
● What are Sequential Logic Circuits?
• Sequential logic circuits both process information (i.e., make decisions)
and store information.
• Sequential logic circuits are used to implement a very important class of
mechanisms called finite state machines. We use finite state machines
in essentially all branches of engineering, e.g., as controllers of
electrical systems, mechanical systems, and aeronautical systems.
• In the von Neumann model, a finite state machine is at the heart of the
computer.
Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus
Sequential Logic Circuits
● A Simple Example: The Combination Lock
○ Sequential lock: R13-L22-R3 can open the lock; R22-L13-R3 cannot. Whether it opens depends on the history (sequence) of rotations.
○ Combinational lock (four wheels): whether the lock opens or not is independent of past rotations of the four wheels; the lock does not care at all about past rotations.

Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus


Sequential Logic Circuits
● The Concept of State
• For the sequential lock to work, it must identify several relevant situations, as
follows:
•A. The lock is not open, and NO relevant operations have been performed.
•B. The lock is not open, but the user has just completed the R13 operation.
•C. The lock is not open, but the user has just completed R13, followed by L22.
•D. The lock is open, since the user has just completed R13, followed by L22, followed by R3.
• We have labelled these four situations A, B, C, and D. We refer to each of these
situations as the state of the lock.
• The notion of state is a very important concept in computer engineering, and
actually, in just about all branches of engineering. The state of a mechanism,
more generally, the state of a system, is a snapshot of that system in which all
relevant items are explicitly expressed, i.e., the state of a system is a snapshot
of all the relevant elements of the system at the moment the snapshot is
taken.
Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus
Sequential Logic Circuits
● The Finite State Machine and Its State Diagram
• The behaviour of each system can be specified by a finite state machine
and represented as a state diagram. A finite state machine consists of
five elements:
• a finite number of states
• a finite number of external inputs
• a finite number of external outputs
• an explicit specification of all state transitions
• an explicit specification of what determines each external output
• The set of states represents all possible situations (or snapshots) that
the system can be in. Each state transition describes what it takes to
get from one state to another.
Sequential Logic Circuits
● The Finite State Machine and Its State Diagram
• A state diagram for the combination lock:

•In short, the next state is determined by the combination of the current
state and the current external input.
•In all the systems we will study, the output values will be specified solely by
the current state of the system.
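A sketch of the combination-lock finite state machine in Python, using the states A–D from the earlier slide; treating any out-of-sequence rotation as a return to state A is an assumption made for the example.

```python
# Illustrative FSM for the sequential combination lock: states A, B, C, D;
# inputs are the dial operations; the output (open or not) depends only on the state.

TRANSITIONS = {
    # (current state, input) -> next state; any other input returns to A
    ("A", "R13"): "B",
    ("B", "L22"): "C",
    ("C", "R3"):  "D",
}

def next_state(state, dial_input):
    return TRANSITIONS.get((state, dial_input), "A")

def run(inputs):
    state = "A"
    for dial_input in inputs:
        state = next_state(state, dial_input)
    return state, state == "D"   # the output is determined solely by the current state

print(run(["R13", "L22", "R3"]))   # ('D', True)  -> lock opens
print(run(["R22", "L13", "R3"]))   # ('A', False) -> lock stays closed
```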
Sequential Logic Circuits
● The Synchronous Finite State Machine
• Up to now, a transition from a current state to a next state in our finite
state machine happened when it happened. That is, there is no fixed
amount of time between successive inputs to the finite state machine.
This is the case we have discussed. These systems are asynchronous
because there is nothing synchronizing when each state transition must
occur.
• However, almost no computers work that way. On the contrary,
computers are synchronous because the state transitions take place,
one after the other, at identical fixed units of time. They are controlled
by a synchronous finite state machine.
Sequential Logic Circuits
The Clock
● A synchronous finite state machine transitions from its current state to its next
state after an identical fixed interval of time.
● Control of that synchronous behaviour is in part the responsibility of the clock
circuit.
● A clock circuit produces a signal (THE clock), whose value alternates between
0 volts and some specified fixed voltage.
● In digital logic terms, the clock is a signal whose value alternates between 0
and 1.
Sequential Logic Circuits
The Clock
● The figure below shows the value of the clock signal as a function of time.
● Each of the repeated, identical intervals is referred to as a clock
cycle. A clock cycle starts when the clock signal transitions from 0 to 1 and
ends the next time the clock signal transitions from 0 to 1.
● When people say their laptop computers run at a frequency of 2 gigahertz,
they are saying their laptop computers perform two billion pieces of work
each second since 2 gigahertz means two billion clock cycles each second,
each clock cycle lasting for just one-half of a nanosecond. The synchronous
finite state machine makes one state transition each clock cycle.
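A tiny Python sketch of the arithmetic and behaviour described above: the period of a 2 GHz clock, and a trivial synchronous FSM (a two-bit counter, chosen only for illustration) that makes exactly one transition per clock cycle.

```python
# Illustrative arithmetic from the slide: at 2 GHz, one clock cycle lasts 0.5 ns.

frequency_hz = 2_000_000_000             # 2 gigahertz = two billion cycles per second
cycle_time_ns = 1e9 / frequency_hz
print(cycle_time_ns, "ns per clock cycle")   # 0.5 ns

# A trivial synchronous FSM: a two-bit counter advancing once per clock cycle.
state = 0
for clock_cycle in range(5):
    state = (state + 1) % 4              # exactly one state transition per cycle
    print("cycle", clock_cycle + 1, "-> state", state)
```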
Von Neumann Model and its Basic Components
● Von Neumann Model
• To get a task done by a computer, we need
•a computer program that specifies what computer
must do to perform the task + the computer that is to
carry out the task
•A computer program consists of a set of instructions,
each specifying a well-defined piece of work for the
computer to carry out
•The instruction is the smallest piece of work specified
in a computer program. That is, the computer either
carries out the work specified by an instruction, or it
does not. The computer does not have the luxury of
carrying out only a piece of an instruction.
Image Source: https://www.thedailybeast.com/egyptian-scrolls-reveal-hangover-cure, https://libwww.freelibrary.org/digital/item/57905
Von Neumann Model and its Basic Components
● Memory
• A more realistic memory for one of today’s computer systems is 2^34 by 8 bits
•That is, a typical memory in today’s world of computers consists of 2^34 distinct memory locations,
each of which is capable of storing 8 bits of information. We say that such a memory has an
address space of 2^34 uniquely identifiable locations, and an addressability of 8 bits. We refer to
such a memory as a 16-gigabyte memory (abbreviated, 16 GB). The “16 giga” refers to the 2^34
locations, and the “byte” refers to the 8 bits stored in each location. The term is 16 giga because 16
is 2^4 and giga is the term we use to represent 2^30 = 1,073,741,824, which is approximately one
billion; 2^4 times 2^30 = 2^34. A byte is the word we use to describe 8 bits.
•Two characteristics of a memory location: its address and what is stored there.
• E.g., a memory consisting of 8 locations
•Its addresses are shown at the left, in binary 0 to 7.
•Each location contains 8 bits of information.
•Value 6 is stored in the memory location whose address is 4.
•Value 4 is stored in the memory location whose address is 6.
•What is the address space of it, and what the addressability of it?
Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus
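A short Python check of the sizing arithmetic above, plus a model of the 8-location example memory (the dictionary representation is just an illustration).

```python
# Illustrative check of the sizing arithmetic on this slide.

locations = 2 ** 34                   # address space: 2^34 locations
addressability_bits = 8               # each location stores one byte
print(locations == 2 ** 4 * 2 ** 30)  # True: 2^4 * 2^30 = "16 giga"
print(locations // 2 ** 30)           # 16 -> a 16 GB byte-addressable memory

# The small example memory: 8 locations, each holding 8 bits.
small_memory = {address: 0 for address in range(8)}
small_memory[4] = 6                   # value 6 stored at address 4
small_memory[6] = 4                   # value 4 stored at address 6
# Answer to the question: address space = 8 locations, addressability = 8 bits.
```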
Von Neumann Model and its Basic Components
● Memory
• To read the contents of a memory location, we first place the address of
that location in the memory’s address register (MAR) and then
interrogate the computer’s memory. The information stored in the
location having that address will be placed in the memory’s data
register (MDR).
• To write (or store) a value in a memory location, we first write the
address of the memory location in the MAR, and the value to be stored
in the MDR. We then interrogate the computer’s memory with the
write enable signal asserted. The information contained in the MDR will
be written into the memory location whose address is in the MAR.
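A behavioural Python sketch of the MAR/MDR protocol described above; the class and method names are assumptions for illustration, not part of any real machine.

```python
# Illustrative sketch of reading and writing memory through the MAR and MDR.

class Memory:
    def __init__(self, size):
        self.cells = [0] * size
        self.mar = 0     # memory address register
        self.mdr = 0     # memory data register

    def read(self, address):
        self.mar = address                 # 1. place the address in the MAR
        self.mdr = self.cells[self.mar]    # 2. interrogate memory; value lands in the MDR
        return self.mdr

    def write(self, address, value):
        self.mar = address                 # 1. address into the MAR
        self.mdr = value                   # 2. value into the MDR
        self.cells[self.mar] = self.mdr    # 3. interrogate memory with write enable asserted

mem = Memory(8)
mem.write(4, 6)
print(mem.read(4))   # 6
```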
Von Neumann Model and its Basic Components
● Processing Unit: carries out actual processing of information
• Functional Units: each performs one particular operation (divide, square root, etc.)
• The simplest processing unit is the ALU. ALU is the abbreviation for Arithmetic and Logic Unit, so called because it
is usually capable of performing basic arithmetic functions (like ADD and SUBTRACT) and basic logic operations
(like bit-wise AND, OR, and NOT).
• Word Length: ALU processes data elements of a fixed size (word length of
computer)
• The data elements are called words. E.g., to perform ADD, the ALU receives two words as inputs and produces
a single word (the sum) as output. Each ISA (instruction set architecture) has its own word length, depending
on the intended use of the computer.
• Most microprocessors today that are used in PCs or workstations have a word length of 64 bits (Intel’s “Core”
processors) or 32 bits (Intel’s “Pentium III” processors). Even most microprocessors now used in cell phones
have 64-bit word lengths, e.g., Apple’s A7 through A11 processors, and Qualcomm’s Snapdragon processors.
However, the microprocessors used in very inexpensive applications often have word lengths of as little as 16
or even 8 bits.
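An illustrative Python sketch of a fixed-word-length ALU; the 16-bit word length and the three operations shown are assumptions chosen for brevity, not a description of any particular ISA.

```python
# Illustrative ALU operating on fixed-size words: results wrap to the word length.

WORD_LENGTH = 16
MASK = (1 << WORD_LENGTH) - 1

def alu(opcode, a, b=0):
    if opcode == "ADD":
        return (a + b) & MASK
    if opcode == "AND":
        return a & b & MASK
    if opcode == "NOT":
        return ~a & MASK
    raise ValueError("unsupported operation")

print(alu("ADD", 0xFFFF, 1))   # 0      -> the carry out of bit 15 is discarded
print(alu("NOT", 0x0F0F))      # 0xF0F0 -> bit-wise complement within one word
```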
Von Neumann Model and its Basic Components
● Processing Unit: carries out actual processing of information
● Registers: temporary storage in order to avoid the much longer access time
•Store operands and results of functional units, size of each register = size of values
processed by the ALU, i.e., each contain one word.
•Current microprocessors typically contain 32 registers, each consisting of 32 or 64
bits, depending on the architecture. However, the importance of temporary
storage for values that most modern computers will need shortly means many
computers today have an additional set of special-purpose registers consisting of
128 bits of information to handle special needs.
Von Neumann Model and its Basic Components
● Input and Output
• Peripherals: devices exist for the purposes of input and output
•In order for a computer to process information, the information must get into the
computer. E.g., mouse, keyboard, digital scanners, and shopping mall kiosks to help
you navigate the shopping mall.
•In order to use the results of that processing, those results must be displayed in
some fashion outside the computer. E.g., monitor, printer, LED displays, disks, and
shopping mall kiosks.
Von Neumann Model and its Basic Components
● Control Unit
• It is like the conductor of an orchestra, in charge of making all the other
parts of the computer play together.
•Keeps track of both where we are within process of executing the program and where we
are in the process of executing each instruction.
•To keep track of which instruction is being executed, the control unit has an instruction
register to contain that instruction.
•To keep track of which instruction is to be processed next, the control unit has a register
that contains the next instruction’s address.
•For historical reasons, that register is called the program counter (PC), although a better
name for it would be the instruction pointer, since the contents of this register is
“pointing” to the next instruction to be processed. Intel does in fact call that register the
instruction pointer, but the simple elegance of that name has not caught on.
Instruction Processing
● The Instruction: the most basic unit of computer processing
• The central idea in the von Neumann model of computer processing is that
program and data are both stored as sequences of bits in the computer’s
memory, and program is executed one instruction at a time under the
direction of the control unit.
• Two parts: the opcode (what the instruction does) and the operands (who it
does it to!)
• Three kinds of instructions: operate, data movement, and control, although
many ISAs have some special instructions that are necessary for those ISAs.
•Operate instructions operate on data, e.g., ADD, AND and NOT
•Data movement instructions move information from the processing unit to and from memory and
to and from input/output devices.
•Control instructions are necessary for altering the sequential processing of instructions. That is,
normally the next instruction executed is the instruction contained in the next memory location. If
a program consists of instructions 1,2,3,4...10 located in memory locations A, A+1, A+2, ...A+9,
normally the instructions would be executed in the sequence 1,2,3...10. However, sometimes we
will want to change the sequence. Control instructions enable us to do that.
Instruction Processing
● The Instruction Cycle (NOT the Clock Cycle!)
• Instructions are processed under the direction of the control unit in a
very systematic, step-by-step manner. The entire sequence of steps
needed to process an instruction is called the instruction cycle. The
instruction cycle consists of six sequential phases, each phase requiring
zero or more steps. (Zero steps reflects the fact that most computers have
been designed such that not all instructions require all six phases.)
•FETCH
•DECODE
•EVALUATE ADDRESS
•FETCH OPERANDS
•EXECUTE
•STORE RESULT
Instruction Processing
● Fetch: Obtains next instruction and loads it into control unit’s instruction register (IR)
○ Load the MAR with the contents of the PC, and simultaneously increment the PC.
○ Interrogate memory, resulting in the instruction being placed in the MDR.
○ Load the IR with the contents of the MDR.

Watch: youtube.com/watch?v=cNN_tTXABUA
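A minimal Python sketch of the three FETCH steps listed above; the addresses and instruction encodings are made up purely for illustration.

```python
# Illustrative FETCH phase: MAR <- PC (PC incremented), MDR <- memory[MAR], IR <- MDR.

memory = {0x3000: 0x1234, 0x3001: 0x5678}   # instruction stream (made-up encodings)
pc = 0x3000

mar = pc                 # 1. load the MAR with the PC, and simultaneously increment the PC
pc = pc + 1
mdr = memory[mar]        # 2. interrogate memory: the instruction is placed in the MDR
ir = mdr                 # 3. load the IR with the contents of the MDR

print(hex(ir), hex(pc))  # 0x1234 0x3001
```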
Instruction Processing
● Decode
○ Examines instruction in order to figure out what microarchitecture is being asked
to do.
○ First identify the opcode. Then depending on opcode, identify other operands
from the remaining bits.
● Evaluate Address
○ Computes the address of the memory location that is needed to process the
instruction.
○ Not all instructions access memory to load or store data.
Instruction Processing
● Fetch Operands
• This phase obtains the source operands needed to process the instruction.
● Execute
• This phase carries out the execution of the instruction.
•E.g., for the ADD instruction, this phase consists of the step of performing the
addition in the ALU.
● Store Result
• The result is written to its designated destination.
•Write address to MAR, and data to MDR + assert WRITE signal to memory.
•E.g., for the ADD instruction, in many computers this action is performed during
the EXECUTE phase. That is, an ADD instruction can fetch its source operands,
perform the ADD in the ALU, and store the result in the destination register all in a
single clock cycle. In other words, in this case a separate STORE RESULT phase is
not needed.
Instruction Processing
● Changing the Sequence of Execution
○ A computer program is usually executed in sequence, i.e., first instruction is executed, then
second instruction is executed, followed by third instruction, and so on.
○ Sometimes we want to change the sequence of instruction execution, and control instructions
let us do that, e.g., for loops, if-then constructs, and function calls.
○ Each instruction cycle starts with loading the MAR with the PC. Thus, if we wish to change the
sequence of instructions executed, we must change the contents of the PC between the time it
is incremented (during the FETCH phase of one instruction) and the start of the FETCH phase of
the next instruction.
○ Control instructions perform that function by loading the PC during the EXECUTE phase, which
wipes out the incremented PC that was loaded during the FETCH phase. The result is that, at
the start of the next instruction cycle, when the computer accesses the PC to obtain the
address of an instruction to fetch, it will get the address loaded during the previous instruction’s
EXECUTE phase, rather than the next sequential instruction in the computer’s program.
○ The most common control instruction is the conditional branch (BR), which either changes the
contents of the PC or does not change the contents of the PC, depending on the result of a
previous instruction (usually the instruction that is executed immediately before the conditional
branch instruction).
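A toy instruction-cycle loop in Python (not a real ISA) showing how a conditional branch loaded during EXECUTE overwrites the PC incremented during FETCH, changing the execution sequence; the opcodes and program are invented for this sketch.

```python
# Illustrative toy instruction cycle with a conditional branch.

program = [
    ("LOAD", 3),        # acc <- 3
    ("ADD", -1),        # acc <- acc - 1
    ("BRPOS", 1),       # if acc > 0, branch back to address 1
    ("HALT", None),
]

pc, acc = 0, 0
while True:
    ir = program[pc]            # FETCH: instruction at the address held in the PC
    pc = pc + 1                 #        PC incremented during FETCH
    opcode, operand = ir        # DECODE
    if opcode == "LOAD":        # EXECUTE
        acc = operand
    elif opcode == "ADD":
        acc = acc + operand
    elif opcode == "BRPOS":
        if acc > 0:
            pc = operand        # branch: wipes out the incremented PC
    elif opcode == "HALT":
        break

print(acc)   # 0 -> the loop ran until the accumulator reached zero
```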
Instruction Processing
● Control of the Instruction Cycle
○ The instruction cycle is controlled by a synchronous
finite state machine.
■ Each state corresponds to one machine cycle of activity
that takes one clock cycle to perform
■ The processing controlled by each state is described
within the node representing that state
■ The arcs show the next state transitions

Image Source: https://www.mheducation.com.au/ise-introduction-to-computing-systems-from-bits-gates-to-c-c-beyond-9781260565911-aus


End of slides
