A friend uses a satellite connection for internet access. Streaming media and file downloads are impressive, but busy web pages are slow.
I have made some notes...
High-bandwidth satellite links aren't always faster than low-bandwidth land-based links; it depends on the type of network traffic.
The real problem with satellite access is latency: the round-trip time.
A keystroke needs about a quarter of a second to travel from the uplink, via the satellite, to the downlink, and then the response needs the same time to travel the same way back again.
In a typical internet client/server interaction, the client generates a request, the server responds, and only then does the client generate the next request. As a result, unless protocols and software are tuned for long-delay links (using windowing protocols and the like), a 28k phone link that uses landlines only may well be faster than a 1.5 Mbit satellite link with delays in the order of 300 to 500 milliseconds. Only in larger data transfers (FTP, image transfers, etc.) does delay become less of an issue.
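To see why, here is a back-of-the-envelope model in Python (my own simplification: it counts only round-trip delay plus serialization time, ignoring handshakes, slow-start, and header overhead):

```python
def transfer_time(payload_bytes, bandwidth_bps, rtt_s, round_trips=1):
    """Rough time for a request/response exchange:
    round-trip delay plus time to serialize the payload onto the wire."""
    return round_trips * rtt_s + payload_bytes * 8 / bandwidth_bps

# A small 512-byte web object, one request/response exchange:
phone = transfer_time(512, 28_800, 0.05)      # 28.8k modem, ~50 ms terrestrial RTT
sat   = transfer_time(512, 1_500_000, 0.55)   # 1.5 Mbit satellite, ~550 ms RTT
# The modem wins for small transfers; for a 1 MB file the satellite wins easily.
```

The RTT figures are illustrative assumptions, but the shape of the result holds: for small, chatty transfers the round trip dominates, and raw bandwidth barely matters.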
The following metrics are used to measure various aspects of network protocol performance.
RTT, expressed in milliseconds, is the elapsed time for a request to go from
node 'A' to node 'B,' and for the reply from 'B' to return to 'A.' The RTT is
the total time for the trip. The forward and reverse path times do not need to
be the same.
RTT depends on the network infrastructure in place, the distance between nodes, network conditions, and packet size. Packet size, congestion, and payload compressibility have a significant impact on RTT for slower links. Other factors can affect RTT, including forward error correction and data compression, which introduce buffers and queues that increase RTT.
Goodput, measured in bits-per-second, shows useful application data successfully processed by the receiver. It measures effective or useful throughput and includes only application data, not packet, protocol, or media headers.
Protocol Overhead, expressed as a percentage, is the number of non-application bytes (protocol and media framing) divided by the total number of bytes. In this article, overhead is calculated for both directions, but it can be calculated separately for each direction.
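As a rough illustration of that calculation (assuming roughly 54 bytes of Ethernet, IP, and TCP headers per packet, and ignoring options, preamble, and inter-frame gaps):

```python
def overhead_pct(payload_bytes, packets, header_bytes_per_packet=54):
    """Protocol overhead as defined above: non-application bytes
    divided by total bytes, expressed as a percentage."""
    header_total = packets * header_bytes_per_packet
    return 100 * header_total / (header_total + payload_bytes)

# One keystroke per packet (Telnet-style): ~98% overhead.
# One full 1460-byte segment: under 4% overhead.
tiny = overhead_pct(1, 1)
full = overhead_pct(1460, 1)
```

This is why packet size matters so much: the same headers wrap one byte or fourteen hundred.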
The bandwidth-delay product is the product of the bits-per-second bandwidth of the network and the RTT, or delay, in seconds. This equates to the number of bits it takes to fill the "network pipe." If this number is large, the TCP/IP stack must be able to deal with a large amount of unacknowledged data in order to keep the pipeline full. This is an important end-to-end metric for streaming applications.
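In Python, for the satellite numbers above:

```python
def bdp_bits(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bits in flight needed to keep the pipe full."""
    return bandwidth_bps * rtt_s

# 1.5 Mbit satellite link with a 500 ms RTT:
bits = bdp_bits(1_500_000, 0.5)   # 750,000 bits, i.e. roughly 91 KB
```

So the stack must keep roughly 91 KB of unacknowledged data in flight just to keep this link busy, far more than over a low-latency terrestrial path.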
There are also two basic types of network applications, transactional and
streaming. These application types could also be called Interactive and Batch
Processing applications. Transactional applications are "stop and go"
applications. The protocol operations seen are usually request-reply in nature
and operations may need to be ordered, although not in all cases. Some examples
of transactional applications include synchronous RPC and some HTTP exchanges.
With streaming applications, the objective is to move data, with little concern for data ordering. Many traditionally transactional applications can also be streamed. Some examples of streaming applications include network backup and FTP.
The design of a network application should be cognisant of the network and protocol characteristics.
The TCP/IP protocol has a number of built-in limitations. Most of these
limitations are only visible when running a poorly written application. How
these limitations affect the application depends on whether it is a transactional
or streaming application.
Transactional applications are affected by the overhead required for connection establishment and termination. For example, each time a connection is established on an Ethernet network, three packets of about 60 bytes each must be sent and approximately 1 RTT is required for this exchange. When termination of a connection occurs, four packets are exchanged. This is compounded when an application opens and closes connections often.
In addition, when a connection is established, "slow-start" takes place. This artificially limits the number of data segments that can be sent before acknowledgement of those segments is received, a mechanism designed to limit network congestion. When a connection over Ethernet is first established, regardless of the receiver's window size, a 4-kilobyte (KB) transmission can take 3 to 4 RTTs due to slow-start.
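A toy model of classic slow-start shows where those RTTs go (a simplification: real stacks vary the initial window, segment sizes, and acknowledgement timing):

```python
def slow_start_rtts(total_bytes, mss=1460, initial_window=1):
    """RTTs needed to deliver total_bytes under classic slow-start,
    where the congestion window (in segments) doubles each RTT."""
    window, sent, rtts = initial_window, 0, 0
    while sent < total_bytes:
        sent += window * mss       # one window's worth per round trip
        window *= 2                # exponential growth phase
        rtts += 1
    return rtts

# 4 KB over Ethernet-sized segments: 2 RTTs of data,
# plus ~1 RTT for the connection handshake.
```

On a 500 ms satellite link, each of those RTTs is half a second, so even a tiny transfer on a fresh connection costs well over a second.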
The Nagle Algorithm is a TCP/IP optimization that can also limit data transfer speed on a connection. It reduces protocol overhead for applications that send small amounts of data, such as a Telnet session sending a single character at a time. Rather than immediately sending a packet with a lot of header bytes and little data, the stack waits for more data from the application, or for an acknowledgement, before proceeding.
Delayed acknowledgements, or "Delayed ACK", were also designed into TCP/IP to enable more efficient "piggybacking" of acknowledgements when return data was forthcoming from the receiving side application. However, if this data is not forthcoming and the sending side is waiting for an acknowledgement, delays of about 200 ms per send can be experienced.
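Applications that genuinely need small sends to go out immediately can disable Nagle per socket with the standard TCP_NODELAY option. A minimal Python sketch on the loopback interface:

```python
import socket

# A throw-away loopback connection, just to demonstrate the per-socket switch.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())

# Disable the Nagle Algorithm so small writes go out immediately,
# trading extra packets (and overhead) for lower per-send latency.
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = client.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)

client.close()
server.close()
```

Use this sparingly: on a high-overhead link like a satellite hop, turning Nagle off means more tiny packets on the wire.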
When a TCP connection is closed, the connection resources at the node that initiated the close are put into a wait state, called TIME-WAIT, to guard against data corruption if duplicate packets linger in the network (ensuring both ends are done with the connection). This can deplete resources required per-connection (RAM and Ports) when applications frequently open and close connections.
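For servers that restart and need to re-bind a port still held in TIME-WAIT, the usual mitigation is SO_REUSEADDR. Note what it does and does not do: it eases re-binding a local address, but it does not make frequent client-side open/close cycles free, since each closed connection still lingers. A Python sketch:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Allow binding a local address that a TIME-WAIT connection still holds.
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("127.0.0.1", 0))
s.listen(1)
reuse = s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
s.close()
```

The better fix for applications, as the paragraph above implies, is simply to reuse connections rather than churning through them.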
In addition to being affected by delayed ACK and other congestion avoidance schemes, streaming applications can also be affected by a default receive window that is too small on the receiving end.
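A common rule of thumb is to size the receive buffer to at least the bandwidth-delay product. A Python sketch using the satellite figures from earlier (1.5 Mbit and 500 ms, both illustrative assumptions):

```python
import socket

# Bandwidth-delay product in bytes for a 1.5 Mbit link with a 500 ms RTT.
bdp_bytes = int(1_500_000 * 0.5 / 8)   # 93,750 bytes of data in flight

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask for a receive buffer big enough that the sender never stalls
# waiting for window updates on this long-delay link.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()
# Many kernels round, double, or cap the requested size, so read the
# value back rather than assuming the request was honoured exactly.
```

With the default window of a few tens of kilobytes, a streaming sender on this link spends much of its time idle; a BDP-sized window lets it keep the pipe full.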
Why are we playing with satellites? We live in Milton Keynes, a modern city in England with dire broadband coverage.
I'm doing something about it!
Comments? Email me