What is used by TCP and UDP to track multiple individual conversations between clients and servers?

The transport layer provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers.  The transport layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control.  Some protocols are stateful and connection oriented.  This means that the transport layer can keep track of the segments and retransmit those that fail.

Purpose of the Transport Layer

The following are the primary responsibilities of the transport layer:

  • Tracking the individual communications between applications on the source and destination hosts
  • Segmenting data and managing each piece
  • Reassembling the segments into streams of application data
  • Identifying the different applications
  • Performing flow control between end users
  • Enabling error recovery
  • Initiating a session

The transport layer enables applications on devices to communicate, as shown in Section 4-2.

Enabling Applications on Devices to Communicate

Tracking Individual Conversations

Any host can have multiple applications that are communicating across the network.  Each of these applications will be communicating with one or more applications on remote hosts.  It is the responsibility of the transport layer to maintain the multiple communication streams between these applications.

Consider a computer connected to a network that is simultaneously receiving and sending e-mail and instant messages, viewing websites and conducting a Voice over IP (VoIP) phone call, as shown in Section 4-3.  Each of the applications is sending and receiving data over the network at the same time.  However, data from the phone call is not directed to the web browser, and text from an instant message does not appear in an e-mail.

Tracking the Conversations

The transport layer segments the data and manages the separation of data for different applications.  Multiple applications running on a device receive the correct data.

The application layer passes large amounts of data to the transport layer.  The transport layer has to break the data into smaller pieces, better suited for transmission.  These pieces are called segments.

This process includes the encapsulation required on each piece of data.  Each piece of application data requires headers to be added at the transport layer to indicate to which communication it is associated.

Segmentation of the data, as shown in Section 4-4, in accordance with transport layer protocols, provides the means to both send and receive data when running multiple applications concurrently on a computer.  Without segmentation, only one application, the streaming video for example, would be able to receive data.  You could not receive e-mails, chat on instant messenger, or view web pages while also viewing the video.

Because networks can provide multiple routes that can have different transmission times, data can arrive in the wrong order.  By numbering and sequencing the segments, the transport layer can ensure that these segments are reassembled into the proper order.  
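The following Python sketch is a rough illustration of that numbering idea, not the actual TCP mechanism: it splits a byte stream into numbered pieces, shuffles them to simulate out-of-order arrival, and reassembles them by sequence number.  The segment size and sample data are arbitrary.

    # Illustrative sketch only: number the pieces, deliver them out of order,
    # and reassemble them by sequence number.
    import random

    data = b"A large block of application data handed down to the transport layer."
    SEGMENT_SIZE = 10  # arbitrary size chosen for the example

    # Segmentation: attach a sequence number (here, a byte offset) to each piece.
    segments = [(seq, data[seq:seq + SEGMENT_SIZE])
                for seq in range(0, len(data), SEGMENT_SIZE)]

    random.shuffle(segments)  # simulate segments arriving in a different order

    # Reassembly: sort by sequence number and concatenate the pieces.
    reassembled = b"".join(piece for _, piece in sorted(segments))
    assert reassembled == data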

At the receiving host, each segment of data must be reassembled into the proper order and then directed to the appropriate application.

The protocols at the transport layer describe how the transport layer header information is used to reassemble the data pieces into in-order data streams to be passed to the application layer.

Identifying the Applications

To pass data streams to the proper applications, the transport layer must identify the target application.  To accomplish this, the transport layer assigns an identifier to an application.

The TCP/IP protocols call this identifier a port number.  Each software process that needs to access the network is assigned a port number unique in that host.  This port number is used in the transport layer header to indicate to which application that piece of data is associated.
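Conceptually, the transport layer uses the destination port number like a lookup key to hand each arriving piece of data to the correct process.  The toy Python sketch below shows that demultiplexing idea; the port-to-application table and application descriptions are invented for illustration, although the port defaults themselves are standard.

    # Toy demultiplexing table: destination port -> receiving application.
    # The application descriptions are invented; the port defaults are standard.
    handlers = {
        80: "web server process",
        25: "e-mail (SMTP) server process",
        53: "DNS server process",
    }

    def deliver(destination_port, payload):
        """Hand the payload to whichever application owns the destination port."""
        app = handlers.get(destination_port)
        if app is None:
            raise ValueError(f"no application listening on port {destination_port}")
        print(f"delivering {len(payload)} bytes to the {app}")

    deliver(80, b"GET / HTTP/1.1\r\n\r\n")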

At the transport layer, each particular set of pieces flowing between a source application and a destination application is known as a conversation.  Dividing data into small parts, and sending these parts from the source to the destination, enables many different communications to be interleaved (multiplexed) on the same network.

The transport layer is the link between the application layer and the lower layers that are responsible for network transmission.  This layer accepts data from different conversations and passes it down to the lower layers as manageable pieces that can be eventually multiplexed over the media.

Applications do not need to know the operational details of the network in use.  The applications generate data that is sent from one application to another, without regard to the destination host type, the type of media over which the data must travel, the path taken by the data, the congestion on a link, or the size of the network.

Additionally, the lower layers are not aware that multiple applications are sending data over the network.  Their responsibility is to deliver data to the appropriate device.  The transport layer then sorts these pieces before delivering them to the appropriate application.

Network hosts have limited resources, such as memory or bandwidth.  When the transport layer is aware that these resources are overtaxed, some protocols can request that the sending application reduce the rate of data flow.  This is done at the transport layer by regulating the amount of data the source transmits as a group.  Flow control can prevent the loss of segments on the network and avoid the need for retransmission.

As the protocols are discussed in the chapter, this service will be explained in more detail.

For many reasons, it is possible for a piece of data to become corrupted, or lost, as it is transmitted over the network.  The transport layer can ensure that all pieces reach their destination by having the source device retransmit any data that is lost.

The transport layer can provide connection orientation by creating a session between the applications.  These connections prepare the applications to communicate with each other before any data is transmitted.  Within these sessions, the data for a communication between the two applications can be closely managed.

Multiple transport layer protocols exist to meet the requirements of different applications.  For example, users require that an e-mail or web page be completely received and presented for the information to be considered useful.  Slight delays are considered acceptable to ensure that the complete information is received and presented.

In contrast, occasionally missing small parts of a telephone conversation might be considered acceptable.  You can either infer the missing audio from the context of the conversation or ask the other person to repeat what he said.  This is considered preferable to the delays that would result from asking the network to manage and resend missing segments.  In this example, the user, not the network, manages the resending or replacement of missing information.

In today's converged networks, where the flow of voice, video, and data travels over the same network, applications with very different transport needs can be communicating on the same network.  The different transport layer protocols have different rules allowing devices to handle these diverse data requirements.  

Some protocols, such as UDP (User Datagram Protocol), provide just the basic functions for efficiently delivering the data pieces between the appropriate applications.  These types of protocols are useful for applications whose data is sensitive to delay.  Other protocols, such as TCP, provide additional functions such as reliable delivery; while these functions provide more robust communication, they have additional overhead and make larger demands on the network.

To identify each segment of data, the transport layer adds to the piece a header containing binary data.  This header contains fields of bits.  The values in these fields enable different transport layer protocols to perform different functions.

SUPPORTING RELIABLE COMMUNICATION

Recall that the primary function of the transport layer is to manage the application data for the conversations between hosts.  However, different applications have different requirements for their data, and therefore different transport protocols have been developed to meet these requirements.

TCP is a transport layer protocol that can be implemented to ensure reliable delivery of the data.  In networking terms, reliability means ensuring that each piece of data that the source sends arrives at the destination.  At the transport layer, the three basic operations of reliability are:

  • Tracking transmitted data
  • Acknowledging received data
  • Retransmitting any unacknowledged data

The transport layer of the sending host tracks all the data pieces for each conversation and retransmits any data that the receiving host did not acknowledge.  These reliability processes place additional overhead on network resources:  because of the acknowledgment, tracking, and retransmission operations, more control data is exchanged between the sending and receiving hosts.  This control information is contained in the Layer 4 header.

This creates a trade-off between the value of reliability and the burden it places on the network.  Application developers must choose which transport protocol type is appropriate based on the requirements of their applications.  At the transport layer, protocols specify methods for either reliable, guaranteed delivery or best-effort delivery.  In the context of networking, best-effort delivery is referred to as unreliable, because the destination does not acknowledge whether it received the data.

Transport Layer Protocols

Image:  OSI Transport Layer Network Fundamentals – Chapter 4. (n.d.). Retrieved February 2, 2015, from http://www.uobabylon.edu.iq/eprints/paper_9_19459_27.pdf

Applications such as databases, web pages, and e-mail require that all the sent data arrive at the destination in its original condition for the data to be useful.  Any missing data could cause a corrupt communication that is either incomplete or unreadable.  These applications are designed to use a transport layer protocol that implements reliability.  The additional network overhead is considered to be required for these applications.

Other applications are more tolerant of the loss of small amounts of data.  For example, if one or two segments of a video stream fail to arrive, it would only create a momentary disruption in the stream.  This can appear as distortion in the image but might not even be noticeable to the user.  Imposing overhead to ensure reliability for this application could reduce the usefulness of the application.  The image in a streaming video would be greatly degraded if the destination device had to account for lost data and delay the stream while waiting for its arrival.  It is better to render the best image possible at the time with the segments that arrive and forgo reliability.  If reliability is required for some reason, these applications can provide error checking and retransmission requests.

The two most common transport layer protocols of the TCP/IP protocol suite are Transmission Control Protocol (TCP) and User Datagram Protocol (UDP).  Both protocols manage the communication of multiple applications.  The differences between the two are the specific functions that each protocol implements.

User Datagram Protocol (UDP)

UDP is a simple, connectionless protocol, described in RFC 768.  It has the advantage of providing low-overhead data delivery.  The pieces of communication in UDP are called datagrams.  UDP sends datagrams as "best effort".
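A minimal Python sketch of this best-effort behavior: the sender simply transmits one datagram and moves on, with no connection setup, no acknowledgment, and no retransmission.  The address and payload are placeholders; nothing needs to be listening for the send to succeed locally.

    import socket

    # Connectionless UDP socket: one datagram sent "best effort".
    # 127.0.0.1:5005 is a placeholder address chosen for the example.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"one self-contained datagram", ("127.0.0.1", 5005))
    sock.close()
    # No handshake, no acknowledgment, no retransmission: delivery is not guaranteed.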

Applications that use UDP include:

  • Domain Name System (DNS)
  • Video streaming
  • Voice over IP (VoIP)
Section 4-6

TCP and UDP Headers

Image:  OSI Transport Layer Network Fundamentals – Chapter 4. (n.d.). Retrieved February 2, 2015, from http://www.uobabylon.edu.iq/eprints/paper_9_19459_27.pdf

Transmission Control Protocol (TCP)

TCP is a connection-oriented protocol, described in RFC 793.  TCP incurs additional overhead to gain functions.  Additional functions specified by TCP are same-order delivery, reliable delivery, and flow control.  Each TCP segment has 20 bytes of overhead in the header encapsulating the application layer data, whereas each UDP segment has only 8 bytes of overhead.
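The 20-byte versus 8-byte difference can be made concrete by packing the fixed header fields directly.  The Python sketch below uses placeholder field values; the layouts follow the fixed headers defined in RFC 793 (TCP) and RFC 768 (UDP).

    import struct

    # TCP fixed header: source port, destination port, sequence number,
    # acknowledgment number, data offset/flags, window, checksum, urgent pointer.
    tcp_header = struct.pack("!HHIIHHHH",
                             49152, 80,         # source and destination ports
                             0, 0,              # sequence and acknowledgment numbers
                             (5 << 12) | 0x02,  # data offset = 5 words, SYN flag set
                             65535, 0, 0)       # window, checksum, urgent pointer

    # UDP header: source port, destination port, length, checksum.
    udp_header = struct.pack("!HHHH", 49152, 53, 8, 0)

    print(len(tcp_header), len(udp_header))  # 20 8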

The following applications use TCP:

  • Web browsers
  • E-mail
  • File transfers

Port Addressing

Consider the earlier example of a computer simultaneously receiving and sending e-mail, instant messages, web pages, and a VoIP phone call.

The TCP- and UDP-based services keep track of the various applications that are communicating.  To differentiate the segments and datagrams for each application, both TCP and UDP have header fields that can uniquely identify these applications.

Identifying the Conversations

The header of each segment or datagram contains a source and destination port.  The source port number is the number for this communication associated with the originating application on the local host.  The destination port number is the number for this communication associated with the destination application on the remote host.

Port numbers are assigned in various ways, depending on whether the message is a request or a response.  Server processes have static port numbers assigned to the service daemons running on the remote host, while clients dynamically choose a port number for each conversation.  The client software must know what port number is associated with the server process on the remote host.  This destination port number is configured, either by default or manually.  For example, when a web browser application makes a request to a web server, the browser uses TCP and port number 80 unless otherwise specified.  TCP port 80 is the default port assigned to web-serving applications.  Many common applications have default port assignments.

The source port in a segment or datagram header of a client request is randomly generated.  As long as it does not conflict with other ports in use on the system, the client can choose any port number.  This port number acts like a return address for the requesting application.  The transport layer keeps track of this port and the application that initiated the request so that when a response is returned, it can be forwarded to the correct application.  The requesting application port number is used as the destination port number in the response coming back from the server.

The combination of the transport layer port number and the network layer IP address assigned to the host uniquely identifies a particular process running on a specific host device.  This combination is called a socket.  Occasionally, you can find the terms port numbers and sockets used interchangeably.  In the context of this book, the term socket refers only to the unique combination of IP address and port number.  A socket pair, consisting of the source and destination IP address and port numbers, is also unique and identifies the conversation between the two hosts.

For example, an HTTP web page request being sent to a web server (port 80) running on a host with a Layer 3 IPv4 address of 192.168.1.20 would be destined to socket 192.168.1.20:80.

If the web browser requesting the web page is running on host 192.168.100.48 and the dynamic port number assigned to the web browser is 49152, the socket for the web page would be 192.168.100.48:49152.
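A small Python sketch that opens a single TCP connection and prints the resulting socket pair.  The destination example.com on port 80 is only illustrative, and the local ephemeral port chosen by the operating system will differ on every run.

    import socket

    # Open one TCP connection; the OS picks a dynamic (ephemeral) source port.
    with socket.create_connection(("example.com", 80), timeout=5) as conn:
        local_ip, local_port = conn.getsockname()[:2]    # e.g. 192.168.100.48:49152
        remote_ip, remote_port = conn.getpeername()[:2]  # e.g. the web server, port 80
        print(f"socket pair: {local_ip}:{local_port} <-> {remote_ip}:{remote_port}")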

The unique identifiers are the port numbers, and the process of identifying the different conversations through the use of port numbers is shown in Section 4-8.

Identifying Conversations

Image:  OSI Transport Layer Network Fundamentals – Chapter 4. (n.d.). Retrieved February 2, 2015, from http://www.uobabylon.edu.iq/eprints/paper_9_19459_27.pdf


Port Addressing Types and Tools

The Internet Assigned Numbers Authority (IANA)  assigns port numbers.  IANA is a standards body that is responsible for assigning various addressing standards.

The different types of port numbers are as follows (a small classification sketch in Python follows the list):

  • Well-known ports (numbers 0 to 1023)
  • Registered ports (numbers 1024 to 49151)
  • Dynamic or private ports (numbers 49152 to 65535)
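As a rough Python sketch, the ranges above translate directly into a small classification helper:

    def port_type(port: int) -> str:
        """Classify a TCP/UDP port number using the IANA ranges listed above."""
        if 0 <= port <= 1023:
            return "well-known"
        if 1024 <= port <= 49151:
            return "registered"
        if 49152 <= port <= 65535:
            return "dynamic or private"
        raise ValueError("port numbers range from 0 to 65535")

    print(port_type(80), port_type(1812), port_type(49152))
    # well-known registered dynamic or private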

The following sections describe the three types of port numbers and examples of when both TCP and UDP might use the same port number.  You also learn about the netstat network utility.

Well-known ports (numbers 0 to 1023) are reserved for services and applications.  They are commonly used for applications such as HTTP (web server), POP3/SMTP (e-mail server), and Telnet.  By defining these well-known ports for server applications, client applications can be programmed to request a connection to that specific port and its associated service.  Table 4-1 lists some well-known ports for TCP and UDP.

Table 4-1  Well-Known Ports

Well-Known Port   Application                              Protocol
20                File Transfer Protocol (FTP) Data        TCP
21                File Transfer Protocol (FTP) Control     TCP
23                Telnet                                   TCP
25                Simple Mail Transfer Protocol (SMTP)     TCP
69                Trivial File Transfer Protocol (TFTP)    UDP
80                Hypertext Transfer Protocol              TCP
110               Post Office Protocol 3 (POP3)            TCP
194               Internet Relay Chat (IRC)                TCP
443               Secure HTTP (HTTPS)                      TCP
520               Routing Information Protocol (RIP)       UDP

Table:  Network Fundamentals, CCNA Exploration Companion Guide.

Registered ports (numbers 1024 to 49151) are assigned to user processes or applications.  These processes are primarily individual applications that a user has chosen to install, rather than common applications that would receive a well-known port.  When not used for a server resource, a client can dynamically select a registered port as its source port.  Table 4-2 lists registered ports for TCP and UDP.

Table 4-2  Registered Ports

Registered Port   Application                                                                Protocol
1812              RADIUS Authentication Protocol                                             UDP
1863              MSN Messenger                                                              TCP
2000              Cisco Skinny Client Control Protocol (SCCP), used in VoIP applications     UDP
5004              Real-Time Transport Protocol (RTP), a voice and video transport protocol   UDP
5060              Session Initiation Protocol (SIP), used in VoIP applications               UDP
8008              Alternate HTTP                                                             TCP
8080              Alternate HTTP                                                             TCP

Table:  Network Fundamentals, CCNA Exploration Companion Guide.

Dynamic or private ports (numbers 49152 to 65535), also known as ephemeral ports, are usually assigned dynamically to client applications when initiating a connection.  It is not common for a client to connect to a service using dynamic or private ports (although some peer-to-peer file-sharing programs do).

Some applications can use both TCP and UDP.  For example, the low overhead of UDP enables DNS to serve many client requests very quickly.  Sometimes, however, sending the requested information can require the reliability of TCP.  In this case, both protocols use the well-known port number 53 with this service.  Table 4-3 lists examples of registered and well-known TCP and UDP common ports.

Table 4-3  Common Ports

Common Port   Application                    Port Type
53            DNS                            Well-known TCP/UDP common port
161           SNMP                           Well-known TCP/UDP common port
531           AOL Instant Messenger, IRC     Well-known TCP/UDP common port
1433          MS SQL                         Registered TCP/UDP common port
2948          WAP (MMS)                      Registered TCP/UDP common port

Table:  Network Fundamentals, CCNA Exploration Companion Guide.

Sometimes it is necessary to know which active TCP connections are open and running on a networked host.  netstat is an important network utility that you can use to verify those connections.  netstat lists the protocol in use, the local address and port number, the destination address and port number, and the state of the connection.

Unexplained TCP connections can indicate that something or someone is connected to the local host, which is a major security threat.  Additionally, unnecessary TCP connections can consume valuable system resources, thus slowing the host's performance.  Use netstat to examine the open connections on a host when performance appears to be compromised.  Many useful options are available for the netstat command.
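For example, a script can capture the same listing with a quick sketch like the one below.  It assumes netstat is available on the PATH; the -a (all connections and listening ports) and -n (numeric addresses and ports) options are common to both the Windows and Linux versions of the tool.

    import subprocess

    # Run "netstat -an" and print the raw listing of connections and listening ports.
    result = subprocess.run(["netstat", "-an"], capture_output=True, text=True)
    print(result.stdout)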

SEGMENTATION AND REASSEMBLY:  DIVIDE AND CONQUER

Chapter 2, "Communicating over the Network," explained how an application passes data down through the various protocols to create a protocol data unit (PDU) that is then transmitted on the medium.  The data passed down from the application layer is segmented into pieces at the transport layer.  A UDP piece is called a datagram, and a TCP piece is called a segment.  A TCP header provides source and destination ports, sequencing, acknowledgments, and flow control, whereas a UDP header provides only source and destination ports.  At the destination host, this process is reversed until the data can be passed up to the application.  Section 4-9 provides an example.

Transport layer functions

Image:  OSI Transport Layer Network Fundamentals – Chapter 4. (n.d.). Retrieved February 2, 2015, from http://www.uobabylon.edu.iq/eprints/paper_9_19459_27.pdf

Some applications transmit large amounts of data--in some cases, many gigabytes.  Sending all this data in one large piece would be impractical.  A large piece of data could take minutes or even hours to send, and no other network traffic could be transmitted at the same time.  In addition, if errors occurred during the transmission, the entire data file would be lost or would have to be re-sent.  Network devices would not have memory buffers large enough to store this much data while it is being transmitted or received.  The size of the segment varies depending on the networking technology and specific physical medium in use.

Dividing application data into segments both ensures that data is transmitted within the limits of the media and that data from different applications can be multiplexed onto the media.  TCP and UDP handle segmentation differently.

In TCP, each segment header contains a sequence number.  This sequence number allows the transport layer function on the destination host to reassemble segments in the order in which they were transmitted.  This ensures that the destination application has the data in the exact form the sender intended.

Although services using UDP also track the conversations between applications, they are not concerned with the order in which the information was transmitted or in maintaining a connection.  The UDP header does not include a sequence number.  UDP is a simpler design and generates less overhead than TCP, resulting in a faster transfer of data.

Information can arrive in a different order than it was transmitted because different packets can take different paths through the network.  An application that uses UDP must tolerate the fact that data might not arrive in the order in which it was sent.

TCP:  COMMUNICATING WITH RELIABILITY

TCP is often referred to as a connection-oriented protocol, a protocol that guarantees reliable and in-order delivery of data from sender to receiver.

Making Conversations Reliable

The key distinction between TCP and UDP is reliability.  The reliability of TCP communication is performed using connection-oriented sessions.  Before a host using TCP sends data to another host, the transport layer initiates a process to create a connection with the destination.  This connection enables the tracking of a session, or communication stream between the hosts.  This process ensures that each host is aware of and prepared for the communication.  A complete TCP conversation requires the establishment of a session between the hosts in both directions.

After a session has been established, the destination sends acknowledgments to the source for the segments that it receives.  These acknowledgments form the basis of reliability within the TCP session.  If the source does not receive an acknowledgment within a predetermined amount of time, it retransmits that data to the destination.

Part of the additional overhead of using TCP is the network traffic generated by acknowledgments and retransmissions.  The establishment of the sessions creates overhead in the form of additional segments being exchanged.  Additional overhead also results from keeping track of acknowledgments and from the retransmission process the host must undertake if no acknowledgment is received.

Reliability is achieved by having fields in the TCP segment, each with a specific function. 

Application processes run on servers.  These processes wait until a client initiates communication with a request for information or other services.

Each application process running on the server is configured to use a port number, either by default or manually by a system administrator.  An individual server cannot have two services assigned to the same port number within the same transport layer service.  A host running a web server application and a file transfer application cannot have both configured to use the same port (for example, TCP port 8080).
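A short Python sketch of that restriction: binding a second TCP listening socket to a port that is already in use fails with an "address already in use" error.  Port 8080 is simply the example used above.

    import socket

    first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    first.bind(("127.0.0.1", 8080))   # the first service claims TCP port 8080
    first.listen()

    second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        second.bind(("127.0.0.1", 8080))  # a second service tries the same port
    except OSError as err:
        print("second bind failed:", err)  # e.g. "Address already in use"
    finally:
        second.close()
        first.close()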

When an active server application is assigned to a specific port, that port is considered to be "open" on the server.  This means that the transport layer accepts and processes segments addressed to that port.  Any incoming client request addressed to the correct socket is accepted, and the data is passed to the server application.  There can be many ports open simultaneously on a server, one for each active server application.  It is common for a server to provide more than one service, such as a web server and an FTP server, at the same time.

One way to improve security is to restrict server access to only those ports associated with the services and applications that should be accessible to authorized requestors.

Clients Sending TCP Requests

Image:  OSI Transport Layer Network Fundamentals – Chapter 4. (n.d.). Retrieved February 2, 2015, from http://www.uobabylon.edu.iq/eprints/paper_9_19459_27.pdf

TCP Connection Establishment and Termination

When two hosts communicate using TCP, a connection is established before data can be exchanged.  After the communication is completed, the sessions are closed and the connection is terminated.  The connection and session mechanisms enable TCP's reliability function.

Each host tracks the data segments within a session and exchanges information about what data has been received, using the information in the TCP header.

Each connection represents two one-way communication streams, or sessions.  To establish the connection, the hosts perform a three-way handshake.  Control bits in the TCP header indicate the progress and status of the connection.  The three-way handshake performs the following functions:

  • Establishes that the destination device is present on the network
  • Verifies that the destination device has an active service and is accepting requests on the destination port number that the initiating client intends to use for the session
  • Informs the destination device that the source client intends to establish a communication session on that port number

In TCP connections, the host serving as a client initiates the session to the server.  The three steps in TCP connection establishment are as follows:

    1. The initiating client sends a segment containing an initial sequence value, which serves as a request to the server to begin a communications session.
    2. The server responds with a segment containing an acknowledgment value equal to the received sequence value plus 1, plus its own synchronizing sequence value.  The acknowledgment value is 1 greater than the sequence number because there is no data contained to be acknowledged.  This acknowledgment value enables the client to tie the response back to the original segment that it sent to the server.
    3. The initiating client responds with an acknowledgment value equal to the sequence value it received plus 1.  This completes the process of establishing the connection.
Section 4-11

TCP Connection Establishment

Image:  OSI Transport Layer Network Fundamentals – Chapter 4. (n.d.). Retrieved February 2, 2015, from http://www.uobabylon.edu.iq/eprints/paper_9_19459_27.pdf
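A minimal sketch of the sequence and acknowledgment values exchanged in these three steps, using made-up initial sequence numbers (real ISNs are chosen randomly by each host):

    # Illustrative values only; real initial sequence numbers are random.
    client_isn = 100
    server_isn = 300

    # Step 1: client -> server, SYN carrying the client's ISN.
    syn = {"flags": {"SYN"}, "seq": client_isn}

    # Step 2: server -> client, SYN-ACK. ACK = client ISN + 1, plus the server's own ISN.
    syn_ack = {"flags": {"SYN", "ACK"}, "seq": server_isn, "ack": syn["seq"] + 1}

    # Step 3: client -> server, ACK = server ISN + 1. Both one-way sessions are now open.
    ack = {"flags": {"ACK"}, "seq": client_isn + 1, "ack": syn_ack["seq"] + 1}

    print(syn, syn_ack, ack, sep="\n")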

To understand the three-way handshake process, it is important to look at the various values that the two hosts exchange.  Within the TCP segment header, the following six 1-bit fields contain control information used to manage the TCP processes:

  • URG:  Urgent pointer field significant
  • ACK:  Acknowledgment field significant
  • PSH:  Push function
  • RST:  Reset the connection
  • SYN:  Synchronize sequence numbers
  • FIN:  No more data from sender
These fields are referred to as flags because each of these fields is only 1 bit and, therefore, has only two values:  1 or 0.  When a bit value is set to 1, it indicates what control information is contained in the segment.
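A small Python sketch that decodes these six control bits from the flags field; the bit positions follow RFC 793, and the sample value 0x12 is what a SYN-ACK segment carries.

    # Control bit positions in the TCP header flags field (RFC 793).
    TCP_FLAGS = {"FIN": 0x01, "SYN": 0x02, "RST": 0x04,
                 "PSH": 0x08, "ACK": 0x10, "URG": 0x20}

    def decode_flags(flags_byte: int) -> list[str]:
        """Return the names of the control bits that are set to 1."""
        return [name for name, bit in TCP_FLAGS.items() if flags_byte & bit]

    print(decode_flags(0x12))  # ['SYN', 'ACK'] -- the second step of the handshake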

The next sections describe each step of the three-way handshake in more detail.

A TCP client begins the three-way handshake by sending a segment with the SYN control flag set, indicating an initial value in the sequence number field of the header.  This initial value for the sequence number, known as the initial sequence number (ISN), is randomly chosen and is used to begin tracking the flow of data from the client to the server for this session.  The ISN in the header of each segment is increased by 1 for each byte of data sent from the client to the server as the data conversation continues.  At this point in the handshake, the SYN control flag is set and the relative sequence number is at 0.

The TCP server needs to acknowledge the receipt of the SYN segment from the client to establish the session from the client to the server.  To do so, the server sends a segment back to the client with the ACK flag set, indicating that the acknowledgment number is significant.  With this flag set in the segment, the client recognizes this is an acknowledgment that the server received the SYN from the TCP client.

The value of the acknowledgment number field is equal to the client ISN plus 1.  This establishes a session from the client to the server.  The ACK flag will remain set for the balance of the session.  The conversation between the client and the server is two one-way sessions:  one from the client to the server and the other from the server to the client.  In this second step of the three-way handshake, the server must initiate the response from the server to the client.  To start this session, the server uses the SYN flag in the same way that the client did.  It sets the SYN control flag in the header to establish a session from the server to the client.  The SYN flag indicates that the initial value of the sequence number field is in the header.  This value will be used to track the flow of data in this session from the server back to the client.

Finally, the TCP client responds with a segment containing an ACK that is the response to the TCP SYN sent by the server.  This segment does not include user data.  The value in the acknowledgment number field contains one more than the ISN received from the server.  After both sessions are established between client and server, all additional segments exchanged in this communication will have the ACK flag set.

You can add security to the data network by doing the following:

  • Denying the establishment of TCP sessions
  • Allowing sessions to be established for specific services only
  • Allowing traffic only as a part of already established sessions

You can implement this security for all TCP sessions or only for selected sessions.

To close a connection, the FIN control flag in the segment header must be set.  To end each one-way TCP session, a two-way handshake is used consisting of a FIN segment and an ACK segment.  Therefore, to terminate a single conversation supported by TCP, four exchanges are needed to end both sessions:

    1. When the client has no more data to send in the stream, it sends a segment with the FIN flag set.
    2. The server sends an ACK to acknowledge the receipt of the FIN to terminate the session from client to server.
    3. The server sends a FIN to the client to terminate the server-to-client session.
    4. The client responds with an ACK to acknowledge the FIN from the server.
Section 4-12

TCP Connection Termination:  FIN ACK

Image:  OSI Transport Layer Network Fundamentals – Chapter 4. (n.d.). Retrieved February 2, 2015, from http://www.uobabylon.edu.iq/eprints/paper_9_19459_27.pdf

The terms client and server are used in this description for simplicity, but the termination process can be initiated by any two hosts that complete the session.

When the client end of the session has no more data to transfer, it sets the FIN flag in the header of a segment.  Next, the server end of the connection sends a normal segment containing data with the ACK flag set, using the acknowledgment number to confirm that all bytes of data have been received.  When all segments have been acknowledged, the session is closed.

The session in the other direction is closed using the same process.  The receiver indicates that there is no more data to send by setting the FIN flag in the header of a segment sent to the source.  A return acknowledgment confirms that all bytes of data have been received and that the session is, in turn, closed.

It is also possible to terminate the connection by a three-way handshake.  When the client has no more data to send, it sends a FIN to the server.  If the server also has no more data to send, it can reply with both the FIN and ACK flags set, combining two steps into one.  The client replies with an ACK.

TCP Acknowledgment with Windowing

One of TCP's functions is to make sure that each segment reaches its destination.  The TCP services on the destination host acknowledge the data that they have received to the source application.

The segment header sequence number and acknowledgment number are used together to confirm receipt of the bytes of data contained in the segments.  The sequence number identifies the first byte of data in the current segment, relative to the start of the session.  TCP uses the acknowledgment number in segments sent back to the source to indicate the next byte in the session that the receiver expects to receive.  This is called expectational acknowledgment.

The source is informed that the destination has received all bytes in the data stream up to, but not including, the byte indicated by the acknowledgment number.  The sending host is then expected to send a segment that uses a sequence number that is equal to the acknowledgment number.

Remember, each connection is actually two one-way sessions.  Sequence numbers and acknowledgment numbers are being exchanged in both directions.

In Section 4-13, the host on the left is sending data to the host on the right.  It sends a segment containing 10 bytes of data for this session and a sequence number equal to 1 in the header.

Host B receives the segment at Layer 4 and determines that the sequence number is 1 and that it has 10 bytes of data.  Host B then sends a segment back to host A to acknowledge the receipt of this data.  In this segment, the host sets the acknowledgment number to 11 to indicate that the next byte of data it expects to receive in this session is byte number 11.

When host A receives this acknowledgment, it can now send the next segment containing data for this session starting with byte number 11.
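In other words, the arithmetic behind this exchange is simply "acknowledgment number = received sequence number + number of bytes received."  A tiny Python sketch using the numbers from the example above:

    def expected_ack(sequence_number: int, bytes_received: int) -> int:
        """Expectational acknowledgment: the next byte the receiver expects."""
        return sequence_number + bytes_received

    # Host B receives a segment with sequence number 1 carrying 10 bytes of data,
    # so it acknowledges with 11, the next byte it expects from host A.
    print(expected_ack(1, 10))  # 11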

Acknowledgment of TCP Segments

Image:  OSI Transport Layer Network Fundamentals – Chapter 4. (n.d.). Retrieved February 2, 2015, from http://www.uobabylon.edu.iq/eprints/paper_9_19459_27.pdf

Looking at this example, if host A had to wait for acknowledgment of the receipt of each 10 bytes, the network would have a lot of overhead.  To reduce the overhead of these acknowledgments, multiple segments of data can be sent and acknowledged with a single TCP message in the opposite direction.  This acknowledgment contains an acknowledgment number based on the total number of bytes received in the session.

For example, starting with a sequence number of 2000, if 10 segments of 1000 bytes each were received, an acknowledgment number of 12001 would be returned to the source.

The amount of data that a source can transmit before an acknowledgment must be received is called the window size.  Window size is a field in the TCP header that enables the management of lost data and flow control.

No matter how well designed a network is, data loss will occasionally occur.  Therefore, TCP provides methods of managing these segment losses, including a mechanism to retransmit segments with unacknowledged data.

A destination host service using TCP usually acknowledges only data for contiguous sequence bytes.  If one or more segments are missing, only the data in the segments that complete the contiguous stream is acknowledged.  For example, if segments with sequence numbers 1500 to 3000 and 3400 to 3500 were received, the acknowledgment number would be 3001, because the segments with sequence numbers 3001 to 3399 have not been received.
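One way to express that rule in Python: only the bytes that form an unbroken run from the last acknowledged point can be acknowledged, so the first gap determines the acknowledgment number.  The ranges below match the example above.

    def cumulative_ack(received_ranges, start):
        """Return the acknowledgment number: one past the last contiguous byte."""
        next_expected = start
        for first, last in sorted(received_ranges):
            if first > next_expected:   # a gap: stop at the missing data
                break
            next_expected = max(next_expected, last + 1)
        return next_expected

    # Bytes 1500-3000 and 3400-3500 received; 3001-3399 are missing.
    print(cumulative_ack([(1500, 3000), (3400, 3500)], start=1500))  # 3001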

When TCP at the source host has not received an acknowledgment after a predetermined amount of time, it will go back to the last acknowledgment number that it received and retransmit data from that point forward.  The retransmission process is not specified by RFC 793 but is left up to the particular implementation of TCP.

For a typical TCP implementation, a host can transmit a segment, put a copy of the segment in a retransmission queue, and start a timer.  When the data acknowledgment is received, the segment is deleted from the queue.  If the acknowledgment is not received before the timer expires, the segment is retransmitted.
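A rough sketch of that bookkeeping, with the timer reduced to a simple timestamp comparison.  The timeout value is arbitrary, and real TCP implementations are considerably more elaborate.

    import time

    RETRANSMIT_TIMEOUT = 1.0   # seconds; arbitrary value for the sketch
    retransmission_queue = {}  # sequence number -> (segment bytes, time sent)

    def send_segment(seq, segment):
        """Transmit a segment and keep a copy until it is acknowledged."""
        # (actual transmission omitted in this sketch)
        retransmission_queue[seq] = (segment, time.monotonic())

    def acknowledge(ack_number):
        """Drop every queued segment whose data the acknowledgment covers."""
        for seq in [s for s in retransmission_queue if s < ack_number]:
            del retransmission_queue[seq]

    def retransmit_expired():
        """Re-send any segment that has waited longer than the timeout."""
        now = time.monotonic()
        for seq, (segment, sent_at) in list(retransmission_queue.items()):
            if now - sent_at > RETRANSMIT_TIMEOUT:
                send_segment(seq, segment)  # restarts the timer for this segment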

Hosts today can also employ an optional feature called selective acknowledgments.  If both hosts support selective acknowledgments, it is possible for the destination to acknowledge bytes in noncontiguous segments, and the host would only need to retransmit the missing data.

TCP Congestion Control:  Minimizing Segment Loss

TCP provides congestion control through the use of flow control and dynamic window sizes.

Flow control assists the reliability of TCP transmission by adjusting the effective rate of data flow between the two services in the session.  When the source is informed that the specified amount of data in the segments is received, it can continue sending more data for this session.

The window size field in the TCP header specifies the amount of data that can be transmitted before an acknowledgment must be received.  The initial window size is determined during the session startup through the three-way handshake.

The TCP feedback mechanism adjusts the effective rate of data transmission to the maximum flow that the network and destination device can support without loss.  TCP attempts to manage the rate of transmission so that all data will be received and retransmissions will be minimized.

TCP Segment Acknowledgement and Window Size

Image:  OSI Transport Layer Network Fundamentals – Chapter 4. (n.d.). Retrieved February 2, 2015, from http://www.uobabylon.edu.iq/eprints/paper_9_19459_27.pdf

Another way to control the data flow is to use dynamic window sizes.  When network resources are constrained, TCP can reduce the window size to require that received segments be acknowledged more frequently.  This effectively slows the rate of transmission because the source must wait for data to be acknowledged.

The TCP receiving host sends the window size value to the sending TCP to indicate the number of bytes that it is prepared to receive as part of this session.  If the destination needs to slow the rate of communication because of limited buffer memory, it can send a smaller window size value to the source as part of an acknowledgment.  If a receiving host has congestion, it can respond to the sending host with a segment with a reduced window size.

After periods of transmission with no data losses or constrained resources, the receiver will begin to increase the window size field.  This reduces the overhead on the network because fewer acknowledgments need to be sent.  Window size will continue to increase until data loss occurs, which will cause the window size to be decreased.
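A minimal sketch of how a receiver might compute the window value it advertises from its free buffer space; the buffer capacity here is an arbitrary stand-in for real receive-buffer tuning.

    RECEIVE_BUFFER_CAPACITY = 65535  # bytes; arbitrary figure for the sketch

    def advertised_window(bytes_buffered_unread: int) -> int:
        """Advertise however much buffer space is still free (never negative)."""
        return max(0, RECEIVE_BUFFER_CAPACITY - bytes_buffered_unread)

    # Lightly loaded receiver: a large window lets the sender keep transmitting.
    print(advertised_window(1000))   # 64535
    # Congested receiver: a small window slows the sender down.
    print(advertised_window(60000))  # 5535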

Why is UDP well suited as the transport layer protocol for video applications?

Because UDP doesn't require the error-correction mechanisms used by TCP, it is faster than TCP.  It is therefore well suited to streaming media, where retransmitting a corrupted segment won't provide any benefit.

What protocol header information is used at the transport layer to identify a target application?

The transport layer (TCP, SCTP, and UDP) reads the header to determine which application layer protocol must receive the data. Then, TCP, SCTP, or UDP strips off its related header. TCP, SCTP, or UDP sends the message or stream to the receiving application. The application layer receives the message.