Midterm Review

The document contains quiz questions and solutions related to network protocols, including HTTP, TCP, and circuit switching. It discusses concepts such as round trip time (RTT), packet switching versus circuit switching, and the role of web caching in reducing delays. Additionally, it covers the structure of the Internet, the importance of protocols, and various types of delays in packet-switched networks.


Quiz Questions

Quiz 2
Suppose within your Web browser you click on a link to obtain a Web page. Let's suppose that the
Web page associated with the link is a simple HTML document that references 8 objects.
Suppose the RTT between the local host and the Web server is RTT = 80 msecs. Assumption:
zero transmission time for HTML document and the referenced objects. Non-persistent HTTP is
used. There are no parallel TCP connections.
How much time elapses from when the client clicks on the link until the base HTML document and
all 8 additional objects are received from the web server at the client?
● 800 msecs
● 160 msecs
● 1280 msecs
● 1440 msecs
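With non-persistent HTTP and no parallel connections, each object costs 2 RTTs: one for TCP setup and one for the HTTP request/response. A quick sketch of the arithmetic, using the numbers from the question:

```python
RTT = 80  # msecs

# Non-persistent HTTP, no parallel connections: each object costs
# 1 RTT for TCP setup + 1 RTT for the HTTP request/response.
per_object = 2 * RTT

objects = 1 + 8  # base HTML document + 8 referenced objects
total = objects * per_object
print(total)  # 1440 msecs
```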
Download delays for 100 objects (HTTP 1.1 with local web caching).
Consider an HTTP 1.1 client and server. The RTT delay between the client and server
is 2 seconds. Suppose the time a server needs to transmit an object into its outgoing
link is 3 seconds.
There is also a local web cache, as shown in the figure below, with negligible (zero)
propagation delay and object transmission time. The client makes 100 requests
one after the other, waiting for a reply before sending the next request. All requests
first go to the cache (which also has a 2.0 sec. RTT delay to the server but zero RTT to
the client).
How much time elapses between the client transmitting the first request, and the
receipt of the last requested object, assuming no use of the IF-MODIFIED-SINCE
header line anywhere, and assuming that 50% of the objects requested are "hits"
(found) in the local cache?
Note: HTTP 1.1 is being used, which means the client sets up a TCP connection to the
server only once.
Solution
1 TCP connection setup + 50 × (time if hit) + 50 × (time if missed)

= 2 + 50 × 0 + 50 × (2 + 3) = 252 seconds

where, in (2 + 3), 2 is the RTT delay between the cache and the server, and 3 is the
time to transmit each object into the outgoing link by the server.
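The same accounting, checked numerically (all numbers come from the problem statement):

```python
rtt_server = 2.0    # seconds, RTT between cache/client and the server
transmit = 3.0      # seconds, server transmission time per object
setup = rtt_server  # one TCP connection setup (HTTP 1.1: set up only once)

hits = 50 * 0.0                          # cache hits: negligible delay
misses = 50 * (rtt_server + transmit)    # cache misses: fetched from the server
total = setup + hits + misses
print(total)  # 252.0 seconds
```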
SMTP-HTTP-HTTP
HTTP-HTTP-HTTP
SMTP-SMTP-HTTP
SMTP-SMTP-SMTP
Question 10: rdt2.2
What is the state of the sender at t = 0? What is the state of the receiver at t = 5? What is the
value of d? What is the value of c?
Midterm question: Stop and Wait in action
Question 11: rdt3.0
protocol

A: Pkt 0
B: Ack 0
C: Pkt 0
D: Ack 0
E: Pkt 0
F: Pkt 0 (duplicate)
G: Ack 0
H: Pkt 1
I: Ack 0
J: Pkt 1
K: Ack 1
Question 13: Calculating checksum

11010010 01111010
10011010 01011001
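The Internet checksum adds the 16-bit words in ones'-complement arithmetic (any carry out of the most significant bit wraps back into the sum) and then complements the result. A sketch applied to the two words above:

```python
def internet_checksum(words):
    """Ones'-complement sum of 16-bit words, then complement the result."""
    total = 0
    for w in words:
        total += w
        # Wrap any carry out of the most significant bit back into the sum.
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

words = [0b1101001001111010, 0b1001101001011001]
print(format(internet_checksum(words), "016b"))  # 1001001100101011
```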
Question 14
Web caching reduces the delay for all objects (cached and non-cached) requested by
a user, not just for some of the objects (cached objects).

True

False
Quiz 1
Question 12

Consider the figure below, showing a link-layer frame heading from a host to a
router. There are three header fields shown. Match the name of a header with a
header label shown in the figure.

H1: ?

H2: ?

H3: ?
Suppose we have four different servers connected to four different
clients over three links, as shown in the following figure. The four pairs
share a common link with a transmission capacity of R = 300 Mbps.
The four links from the servers to the shared link have a transmission
capacity of RS = 30 Mbps. Each of the four links from the shared middle
link to a client has a transmission capacity of RC = 100 Mbps.

What is the maximum achievable end-end throughput (in Mbps) for
each of the four client-to-server pairs, assuming that the middle link is
fairly shared (divides its transmission rate equally)?

● 30 Mbps
● 75 Mbps
● 300 Mbps
● 100 Mbps
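Each pair's end-to-end throughput is the minimum capacity along its path; with fair sharing, the middle link contributes R/4 per pair. A quick sketch:

```python
R_s = 30    # Mbps, each server's access link
R = 300     # Mbps, shared middle link
R_c = 100   # Mbps, each client's access link
pairs = 4

# Fair sharing: each pair gets R / pairs of the middle link's capacity.
throughput = min(R_s, R / pairs, R_c)
print(throughput)  # 30 Mbps: the server access link is the bottleneck
```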
Consider the two cases given below:

● A circuit-switched scenario in which N users, each requiring a bandwidth of 25 Mbps, must share a link of
capacity 100 Mbps.
● A packet-switched scenario in which M users share a 100 Mbps link, where each user again requires 25 Mbps
when transmitting, but only needs to transmit 20 percent of the time.

When circuit switching is used, what is the maximum number of users that can be supported?

25

10

4
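Under circuit switching, the link supports (link capacity) / (per-user bandwidth) users. For the packet-switching case, the interesting quantity is how often more than that many users transmit at once; the sketch below uses M = 10 purely as an illustration, since M is not fixed in this excerpt:

```python
from math import comb

link = 100      # Mbps
per_user = 25   # Mbps when transmitting
max_circuit = link // per_user
print(max_circuit)  # 4 users under circuit switching

# Packet switching: each user transmits 20% of the time. With M users
# (M = 10 here, purely illustrative), the probability that more than 4
# are active simultaneously -- demand exceeding the link -- is binomial:
M, p = 10, 0.2
p_overload = sum(comb(M, k) * p**k * (1 - p)**(M - k) for k in range(5, M + 1))
print(round(p_overload, 4))  # 0.0328
```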
Consider the circuit-switched network below. There are four circuit switches: A, B, C, and
D. Suppose there are 13 circuits between A and B, 20 circuits between B and C, 12
circuits between C and D, and 13 circuits between D and A.

Suppose that the maximum number of connections is ongoing on every link. When another
call connection request arrives at the network, will it be accepted? Answer
Yes or No.

Yes

No
● Question
[Figure: circuit-switched network with four nodes A, B, C, D]
○ If each link has a 1 Mbps transmission rate, what is the
transmission rate of an end-to-end circuit-switched
connection between A and C?

A. 250 kbps
B. 1 Mbps
C. 4 Mbps
Circuit Switching Characteristics

● Question
[Figure: circuit-switched network with four nodes A, B, C, D]
○ What is the maximum number of
simultaneous connections that can
be in progress at any one time in
this network?

A. 4
B. 8
C. 16
D. 32
Question
Simple HTTP GET request response time. Suppose an HTTP client makes a request to the www.wlu.ca web server. The client
has never before requested a given base object, nor has it communicated recently with the www.wlu.ca server. You can assume,
however, that the client host knows the IP address of www.wlu.ca.

How many round trip times (RTTs) are needed from when the client first makes the request to when the base page is completely
downloaded, assuming the time needed by the server to transmit the base file into the server's link is equal to 1/2 RTT and that
the time needed to transmit the HTTP GET into the client's link is zero? (Note: You should take into account any TCP setup time
required before the HTTP GET is actually sent by the client, the time needed to request the object, and the time needed for the
server to transmit the requested object. You can assume the propagation delay is zero.)
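One common way to account for the RTTs here, as a sketch under the stated assumptions (TCP setup, then the request, then the server's transmission time):

```python
# All times are in units of RTT.
tcp_setup = 1.0        # SYN + SYNACK; the final ACK travels with the GET
request = 1.0          # HTTP GET out, first bits of the response back
server_transmit = 0.5  # time to push the base file onto the server's link

total = tcp_setup + request + server_transmit
print(total)  # 2.5 RTTs
```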
Slides Summary
Note
● These slides summarize some of the important concepts in Chapters 1
and 2, and Chapter 3 (Sections 3.1 to 3.5)
● It does NOT mean only the content of these slides will be asked in the
midterm
Chapter 1
Internet
● Nuts and bolts view of the Internet
○ A collection of hardware and software components executing protocols
○ A collection of billions of computing devices, and packet switches interconnected by links
● Service View of the Internet
○ A place I go for information, entertainment, and to communicate with people
○ A platform for building network applications.
■ infrastructure that provides services to the applications
■ Enables distributed applications.
○ Distributed applications:
■ run on end systems and exchange data via the computer network
■ Web surfing, e-mail, instant messaging, Internet phone, distributed games, peer-to-
peer file sharing, television distribution, and video conferencing.
■ More to come
Packet Switching
● When one end system sends data to another end system, the sending end
system breaks the data into chunks, called packets.
● the Internet transports each packet separately, routing a packet to its
destination using a destination address that is written into the packet.
○ Analogy: Similar to the process of delivering post-office mail
● Forwarding (packet switching)
○ Forwarding incoming packets to outgoing links by using packet’s destination address
■ determine on which link it should forward the packet.
○ on a packet by packet basis
● Packet switches “store and forward” packets:
○ before forwarding a packet on an outgoing link, packet switch first receives and stores the entire
packet.
Protocol
● A protocol defines the format and order of messages exchanged between two
or more communication entities, as well as the actions taken on the
transmission and/or receipt of a message or other event
● Used extensively in computer networks
● human protocol and a computer network protocol
What’s a protocol?

● Analogy: a human protocol and a
computer network protocol (Web
browser and a Web server)
○ the Web browser first sends an
introductory message to the server
○ the server responds with its own
introductory message
○ the browser then sends another
message, requesting a specific Web
page
○ finally, the server sends a last message,
which includes the requested Web page.
Circuit Switching
● Used in traditional digital telephone networks
○ Used in part of cable access network
● In circuit switching
○ before transmitting data between two end systems, the network establishes a dedicated end-
to-end connection between the two end systems and reserves bandwidth in each link along
the connection.
○ Inefficient:
■ The reserved connection bandwidth is “wasted” whenever the two end systems are not
sending data
Packet Switching Versus Circuit Switching: Example
● There are 10 users
● One user generates one thousand 1,000-bit packets, while the other users are
idle
● TDM circuit switching: each frame has 10 slots of 1 ms each
○ link capacity: 1 Mbps → 1,000 bits are transferred in one slot → one thousand 1,000-bit
packets are transferred in 10 seconds
● Packet switching
○ the user can send at 1 Mbps, since
no other user is generating packets → the transfer completes in 1 second

Packet switching has better performance
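The two cases above can be checked with a few lines of arithmetic:

```python
bits = 1000 * 1000   # one thousand 1,000-bit packets
link = 1_000_000     # 1 Mbps link

# TDM circuit switching: 10 slots per frame, so one user's share is 1/10
# of the link rate (100 kbps).
tdm_rate = link / 10
tdm_time = bits / tdm_rate
print(tdm_time)  # 10.0 seconds

# Packet switching: with no other user active, the full link is available.
ps_time = bits / link
print(ps_time)   # 1.0 second
```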


Physical media and access networks
● The communication links in a computer network may have different physical
media types
○ Wired links
■ copper wire: Dial-up modem links, DSL, and most Ethernet links
■ coaxial cable: cable access network links
■ fiber optics: Long-haul Internet backbone links
○ Wireless links
■ wi-fi links
■ bluetooth links
■ satellite links
○ A tremendous variety of media types can be found in the Internet today.
● An access link is a link that connects the end system to the Internet. Access
links can be copper wire, coaxial cable, fiber optics or wireless
Network of Networks
● The Internet is a network of networks
○ The Internet consists of many interconnected networks, each of which is called an Internet
Service Provider (ISP)
○ Each ISP is itself a network of packet switches and communication links
● The ISPs are roughly organized in a hierarchy
○ The ISPs at the bottom of the hierarchy:
■ access ISPs, such as residential ISPs, university ISPs, and enterprise ISPs.
○ The ISPs at the top of the hierarchy:
■ tier-1 ISPs: typically include long-haul intra- and inter-continental fiber links.
○ Tier-n ISPs provide service – for a price – to tier-(n+1) ISPs.
○ Each ISP is independently managed.
■ All networks use a common protocol suite called the Internet Protocol: IP
Protocol Layer
● To deal with complexity, the protocols are organized into layers
○ A typical computer network makes use of many, many protocols
■ hundreds of protocols.
● The protocol layers are arranged in a “stack”
○ Example: Internet protocols stack:
■ Internet organizes its protocols into five layers
■ Application, transport, network, link, and physical
○ The protocols of layer n use the services provided by the protocols at layer n-1 (the layer
below)
○ The application layer is the highest layer in the protocol stack
■ all other layers provide services to the applications
■ applications are the most important reason for the existence of computer networks
Encapsulation/Decapsulation
● Encapsulation
○ When the sender-side application-layer process passes an application-level data unit (an
application message) to the transport layer, that message becomes the payload of the
transport layer segment, which also contains additional transport-layer header information,
e.g., information that will allow the transport layer at the receiver side to deliver the message
to the correct receiver-side application
■ Analogy: Transport layer segment as an envelope with some information on the envelope (the segment’s
header fields) and the application layer payload as a message within the envelope.
○ The transport layer passes the transport layer segment to the network layer
■ The segment becomes the payload of the network layer datagram, which has additional fields used by the
network layer (e.g., the address of the receiver).
■ Analogy: the network layer as an envelope, with some information on the outside of the network-layer envelope
○ Finally, the network layer datagram is passed to the link layer, which encapsulates the datagram
within a link-layer frame
Encapsulation/Decapsulation
● Encapsulation/Decapsulation
○ a protocol at layer n will look at the header information on the envelope
○ Forwarding:
■ The protocol may pass the envelope back down to the lower layer (e.g., to forward it to
another node)
○ Decapsulation:
■ open the envelope, extract the upper-layer payload, and pass that payload
up to layer n+1.
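The envelope analogy can be sketched in code. This is a toy illustration; the header fields below are made up and are not real protocol formats:

```python
# Each layer wraps the payload from the layer above with its own header
# ("envelope"). Decapsulation opens the envelopes in reverse order.

def encapsulate(message):
    segment = {"header": {"dst_port": 80}, "payload": message}          # transport
    datagram = {"header": {"dst_addr": "10.0.0.2"}, "payload": segment}  # network
    frame = {"header": {"dst_mac": "aa:bb:cc:dd:ee:ff"}, "payload": datagram}  # link
    return frame

def decapsulate(frame):
    datagram = frame["payload"]    # link layer opens its envelope
    segment = datagram["payload"]  # network layer opens its envelope
    return segment["payload"]      # transport layer delivers the message

msg = "GET /index.html"
print(decapsulate(encapsulate(msg)) == msg)  # True
```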
Encapsulation
Decapsulation
Four Sources of Packet Delay
Delay, Loss and Throughput in packet switched Networks

● Computer networks move data with delay and lose packets.
● A packet can be transmitted on a link only if there is no other packet currently
being transmitted on the link and if there are no other packets preceding it in
the queue
Propagation and transmission delay

● Propagation and transmission delays play a critical role in the performance of
many distributed applications
● The propagation delay over a link:
○ The time it takes a bit to travel from one end of the link to the other
○ It is equal to the length of the link divided by propagation speed of the link’s physical medium
● The transmission delay of a link
○ Relates to packets and not bits
○ transmission delay =
■ the number of bits in the packet / transmission rate of the link (bandwidth or capacity)
○ It is the amount of time it takes to push the packet onto the link
○ Once a bit is pushed onto a link it needs to propagate to the other end
○ Total delay across a link = transmission delay + propagation delay
Transmission delay
● The amount of time required to push all of the packet’s bits into the link
○ L: packet length (bits)
○ R: transmission rate for a link (aka bandwidth, aka capacity)
○ How fast we can put bits onto the link
○ dtrans: transmission delay
■ time needed to transmit an L-bit packet into the link: dtrans = L / R
● typically on the order of microseconds to milliseconds in practice
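The two link delays combine as sketched below; the link parameters are illustrative, not from the text:

```python
# Total delay across one link = transmission delay + propagation delay.
L = 8000          # packet length, bits (1,000 bytes)
R = 1_000_000     # link transmission rate, bps (1 Mbps)
d = 2_500_000     # link length, metres
s = 2.5e8         # propagation speed in the medium, m/s

d_trans = L / R   # time to push all the packet's bits onto the link
d_prop = d / s    # time for one bit to travel the length of the link
print(d_trans)    # 0.008 seconds (8 ms)
print(d_prop)     # 0.01 seconds (10 ms)
print(d_trans + d_prop)  # total ≈ 0.018 s
```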


Queueing delay and packet loss
● Queueing delay:
○ Many packets can arrive at a packet switch roughly at the same time. If these packets need to
be forwarded on the same outbound link, all but one of these packets will have to “queue,”
that is wait for their turns to be transmitted
● Packet loss:
○ if the queue of packets becomes very large, the packet switch’s buffer may become
exhausted, causing packets to be dropped or “lost”
● Queuing delay and packet loss can severely impact the performance of an
application
Chapter 2
Application Layer
Application-layer protocol
● An application-layer protocol specifies how processes send and receive messages
● Distributed applications are composed of processes communicating with each
other
● The details of the communication are specified in these protocols
● HTTP
○ Get
○ Post
● DNS
● SMTP
HTTP Request Message
Client/server versus peer-to peer
● The two approaches taken for structuring a network application
○ client/server paradigm
■ a client process requests service (by sending one or more messages) to a server process
■ The server process implements a service by reading the client requests, performing
some action (e.g., finding a web page in the case of an HTTP server), and sending one or
more messages in reply (in the case of HTTP, returning the requested object)
○ peer-to-peer approach
■ the two ends of the protocol are equals, e.g., as in a telephone call.
Services provided by the Internet’s transport layer
● The only two services available to an Internet application:
○ reliable, congestion-controlled data transfer (TCP)
○ unreliable data transfer (UDP)
● No minimum guaranteed transfer rate
● No bound on the delay from source to destination.
HTTP: request/response interaction
● A classical client/server approach
○ A client (web browser) makes a request with a GET message
○ A web server provides a reply
● uses TCP to provide for reliable transfer of the GET request from client-to-
server, and the reply from server-to-client
○ a TCP connection must first be set up
○ This means that a TCP setup request is first sent from the TCP in the client to the TCP in the
server, with the TCP server replying back to the TCP client
○ Following TCP connection setup, the HTTP GET message can be sent over the TCP
connection from client-to-server, and the reply received.
HTTP: request/response interaction
● Different versions of HTTP:
○ HTTP/1.0: non-persistent HTTP; a new TCP connection must be set up for each request
○ HTTP/1.1: persistent HTTP; multiple HTTP GET messages can be sent over a single TCP
connection
■ better performance than non-persistent HTTP:
● no need to set up a new TCP connection for each HTTP request beyond the first
● pipelining: multiple HTTP requests can be outstanding at once
○ HTTP/2
○ HTTP/3: runs over UDP
Caching
● Saving a local copy of a requested piece of information (web document, DNS
translation pair) that is retrieved from a distant location, so that if the same
piece of information is requested again, it can be retrieved from the local
cache, rather than having to retrieve the information from the distant location
● Caching can improve performance by decreasing response time (since the
local cache is closer to the requesting client) and avoiding the use of scarce
resources
● Other examples:
○ browser caching:
○ write down a phone number on a piece of paper and keep it in my pocket, rather than have to
look up the number again in the phone book.
DNS
● both an application and a protocol
○ The name-IP-address translation service is performed at DNS servers, just as any application
provides a service via a server.
○ The DNS service is a very special network service
■ without it the network would not be able to function
■ it is implemented in very much the same way as other network applications
■ complexity at network’s “edge”
● The DNS is an application-layer protocol
○ hosts, name servers communicate to resolve names (address/name translation)
○ allows hosts to query the distributed database
Sockets
● TCP sockets: accept(), and the creation of a new socket
○ The one “tricky” thing with TCP sockets is that a new socket is created when a TCP server
returns from an accept() system call. We’ve called the socket on which the server waits when
performing the accept as a “welcoming socket.” The socket returned from the accept() is used
to communicate back to the client that was connected to the server via the accept()
● UDP socket: “send and pray”; on the receiving side, datagrams from many
senders arrive on the same socket
○ Since UDP provides an unreliable data transfer service, a sender that sends a datagram via a
UDP socket has no idea if the datagram was ever received by the receiver (unless the
receiver is programmed to send a datagram back that acknowledges that the original
datagram was received). On the receiving side, datagrams from many different senders can be
received on the same socket.
Pull versus push protocol
● How does one application process get data to/from another application
process
● pull
○ the receiver must explicitly request (“pull”) the information
■ web
● push
○ the data holder sends the information to the receiver without the receiver explicitly asking for
the data
○ SMTP:
■ when an email is “pushed” from sender to receiver
Chapter 3
Transport Layer
Logical communication between processes
● Transport layer provides logical communication between processes
○ Application processes use the logical communication provided by the transport layer to send
messages to each other, free from the worries of the details of the network infrastructure used to
carry these messages.
● Network layer protocol provides logical communication between hosts
● Household analogy
● An application protocol lives only in the end systems and is not present in the
network core.
● A computer network may offer more than one transport protocol to its
applications, each providing a different service model.
● The transport layer protocols in the Internet: UDP and TCP
○ provide two entirely different service models to applications
Transport vs. network layer services and protocols
● network layer: logical communication between hosts
● transport layer: logical communication between processes
○ relies on, enhances, network layer services
● Household analogy
○ 12 kids in Ann’s house sending letters to 12 kids in Bill’s house:
○ processes = kids
○ app messages = letters in envelopes
○ transport protocol = Ann and Bill who demux to in-house siblings
○ hosts = houses
○ network-layer protocol = postal service
Internet checksum: example
example: add two 16-bit integers

Note: when adding numbers, a carryout from the most significant bit needs to be added to the result
Multiplexing and demultiplexing
● demultiplexing: The mechanism of passing the payload to the appropriate
socket
○ A receiving host may be running more than one network application process
○ When a host receives a packet, it must decide to which of its ongoing processes it is to pass
the packet’s payload
○ In particular, when a transport-layer protocol in a host receives a segment from the network
layer, it must decide to which socket it is to pass the segment’s payload
● Multiplexing:
○ At the source host
○ the job of gathering data chunks from different sockets, adding header information (for
demultiplexing at the receiver), and passing the resulting segments to the network layer
Connectionless and Connection-Oriented Demultiplexing
● Multiplexing and demultiplexing in TCP and UDP
○ Using port number in the header field of UDP and TCP segments
■ Every UDP and TCP segment has a field for a source port number and another field for
a destination port number.
● Multiplexing and demultiplexing in UDP vs. TCP
○ UDP: each UDP socket is assigned a port number, and when a segment arrives to a host, the
transport layer examines the destination port number in the segment and directs the segment
to the corresponding socket.
○ TCP: a TCP socket is identified by the four-tuple: (source IP address, source port number,
destination IP address, destination port number)
■ When a TCP segment arrives from the network to a host, the host uses all four values
to direct (demultiplex) the segment to the appropriate socket.
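Connection-oriented demultiplexing can be sketched as a lookup on the full four-tuple; the addresses and socket names below are made up for illustration:

```python
# A TCP host maps the full four-tuple to a socket, so two connections to
# the same server port from different clients reach different sockets.
sockets = {
    ("10.0.0.1", 5001, "192.168.0.9", 80): "socket A",
    ("10.0.0.2", 5001, "192.168.0.9", 80): "socket B",  # same ports, different source IP
}

def demux(src_ip, src_port, dst_ip, dst_port):
    # Direct an arriving segment to the socket for its connection.
    return sockets[(src_ip, src_port, dst_ip, dst_port)]

print(demux("10.0.0.2", 5001, "192.168.0.9", 80))  # socket B
```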
Transport Layers in the Internet
● UDP
● TCP
UDP
● UDP is a no-frills, bare-bones protocol, allowing the application to talk almost directly
with the network layer.
● Services:
○ multiplexing/demultiplexing
○ error checking
○ The UDP segment has only four header fields
■ source port number, destination port number, length of the segment, and checksum
● Why an application may choose to use UDP:
○ finer control of what data is sent in a segment
○ It has no connection establishment; it has no connection state at servers; and it has less packet header
overhead than TCP
■ DNS is an example of an application protocol that uses UDP. DNS sends its queries and answers
within UDP segments, without any connection establishment between the communicating entities.
Reliable Data Transfer
● Network layer in the internet provides unreliable data transfer
○ when the transport layer in the source host passes a segment to the network layer, the network
layer does not guarantee it will deliver the segment to the transport layer in the destination host.
The segment could get lost and never arrive at the destination.
● A transport layer could provide reliable data transfer (RDT)
○ guarantee process-to-process message delivery even when the underlying network layer is
unreliable.
● Idea:
○ The receiver acknowledges the receipt of a packet
○ The sender retransmits the packet if it does not receive the acknowledgement
○ handles bit errors as well as packet loss
○ mechanisms:
■ acknowledgements, timers, checksums, sequence numbers, and acknowledgement numbers
Reliable Data Transfer
● The textbook incrementally develops an RDT stop-and-wait protocol in
Section 3.4.
● rdt 1.0
● rdt 2.0
● rdt 2.1
● rdt 2.2
● rdt 3.0
Pipelined Reliable Data Transfer
● Stop-and-wait protocol is inefficient
○ The source sends one packet at a time, only sending a new packet once it has received an
acknowledgment for the previous packet.
○ poor throughput performance, particularly if either the transmission rate, R, or the round-trip
time, RTT, is large.
● Solution: pipelined protocol
○ the sender is allowed to send multiple packets without waiting for an acknowledgment
○ Pipelining requires an increased range in sequence numbers and additional buffering at
sender and receiver.
○ Two pipelined RDT protocols:
■ Go-Back-N (GBN)
■ Selective Repeat (SR)
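The inefficiency of stop-and-wait can be quantified as sender utilization: the fraction of time the sender is actually transmitting. A sketch with illustrative numbers:

```python
# Sender is busy for L/R seconds out of every RTT + L/R seconds.
L = 8000        # bits per packet
R = 1e9         # 1 Gbps link
RTT = 0.030     # 30 ms round-trip time

d_trans = L / R
utilization = d_trans / (RTT + d_trans)
print(utilization)  # about 0.00027: the link sits idle almost all the time

# Pipelining N packets at a time raises utilization roughly N-fold
# (until the link saturates at 1.0).
N = 100
print(min(1.0, N * utilization))
```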
Pipelined Reliable Data Transfer
● Go-Back-N (GBN) vs Selective Repeat (SR)
○ Similarities:
■ Both protocols limit the number of outstanding unacknowledged packets the sender can
have in the pipeline
○ Differences:
■ GBN uses cumulative acknowledgments, acknowledging all packets up to (but not
including) the first missing packet. A single packet error can cause GBN to retransmit a large
number of packets.
■ In SR, the receiver individually acknowledges correctly received packets. SR has better
performance than GBN, but is more complicated, both at sender and receiver.
TCP
● TCP is connection oriented
○ Before one process can send application data to the other process, the two processes must “handshake” with
each other by sending to each other (a total of) three empty TCP segments
○ The process initiating the TCP handshake is called the client.
○ The process waiting to be contacted is the server.
○ After the 3-packet handshake is complete, a connection is said to be established and the two processes can
send application data to each other
● TCP is a byte-stream protocol
○ A segment need not contain exactly one application-layer message
■ It may contain, for example, only a portion of a message, or multiple messages
● A TCP connection has a send buffer and a receive buffer
○ On the send side, the application sends bytes to the send buffer, and TCP grabs bytes from the send buffer to
form a segment.
○ On the receive side, TCP receives segments from the network layer, deposits the bytes in the segments in the
receive buffer, and the application reads bytes from the receive buffer.
TCP
● TCP is reliable: it employs a Reliable Data Transfer (RDT) protocol
○ TCP’s RDT service ensures that the byte stream that a process reads out of its receive buffer is
exactly the byte stream that was sent by the process at the other end of the connection.
● TCP Reliable data transfer mechanisms
○ pipelined
○ cumulative acknowledgments
○ sequence numbers and acknowledgment numbers
○ a timer, and a dynamic timeout interval.
■ In order to set the timeout in its RDT protocol, TCP uses a dynamic RTT estimation
algorithm.
● Retransmissions at the sender are triggered by two different mechanisms:
○ timer expiration
○ triple duplicate acknowledgments.
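The dynamic timeout mentioned above is computed from an exponentially weighted moving average of RTT samples. A sketch of the standard estimator (the alpha and beta values are the usual recommendations, and the sample values are made up for illustration):

```python
alpha, beta = 0.125, 0.25  # typical recommended weights

def update(est_rtt, dev_rtt, sample_rtt):
    # EWMA of the RTT, an EWMA of its deviation, and the timeout interval.
    est_rtt = (1 - alpha) * est_rtt + alpha * sample_rtt
    dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample_rtt - est_rtt)
    timeout = est_rtt + 4 * dev_rtt
    return est_rtt, dev_rtt, timeout

est, dev = 0.100, 0.010   # seconds, illustrative initial estimates
for sample in [0.110, 0.095, 0.150]:
    est, dev, timeout = update(est, dev, sample)
print(round(timeout, 3))
```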
