4 Factors of Network Latency
When thinking about a computer network, we often focus on uptime and availability. Is the network up and running? Will you be able to access your data on demand? Although uptime is very important, it is only one half of the equation. The other half, and some might argue the more important half, is the speed at which data travels. How quickly and efficiently your data can travel across the network from one point to another matters just as much as whether the network is available at all.
Although discussions of uptime may receive the most attention in technology news and on corporate websites, low latency is the backbone of an effective network. Vast amounts of data from disparate locations must be processed, queued, and sent across town or around the world quickly and efficiently to keep businesses competitive.
So, what does latency actually mean, what are the primary causes of network delays, and what are some of the ways MULTACOM helps to reduce it?
What is Latency?
Network latency can be defined with either a point-to-point or a round-trip explanation. It can refer to the time it takes a packet of data to travel from one point to another. It can also refer to a round trip, in which the packet must travel from a source to some other point and back to the source again. Since neither trip is instantaneous, the delay the data experiences along the way is the latency.
In a real-world environment in which data travels from New York City to Tokyo, Japan, the data passes through multiple links and gateways on the way to its destination. Because there are devices like switches and routers along the route, and many other packets of data traveling similar routes, there will be a delay in how quickly your data arrives. Latency is typically measured in milliseconds and is always a number greater than zero.
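One simple way to see round-trip latency for yourself is to time a TCP handshake to a server. Here is a minimal Python sketch; the hostname and port are placeholders, not anything specific to the networks discussed here:

    import socket
    import time

    def measure_rtt_ms(host, port=443):
        """Round-trip latency, approximated as the time to complete a TCP handshake."""
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; the handshake itself is our round trip
        return (time.perf_counter() - start) * 1000  # convert seconds to milliseconds

    print("RTT: %.1f ms" % measure_rtt_ms("example.com"))

Run it against servers in different cities and the number will grow with distance and with the number of hops in between.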
Primary Causes of Network Delays
Add together the delay at each link and gateway and you have the total network latency. Latency is caused by four key factors:
1. Data Processing
The first factor in network latency is processing. This is the time a router or switch needs to examine a packet's header, determine exactly where it is going, select the best route, and assign the packet to the appropriate link queue for transmission.
As an example, if you are scheduled to travel to a client’s office in another city, you need to understand where you currently are and where you are going to determine how best to get to the highway, train station or airport which can speed you on your way. Being able to select the best route and following the signs or GPS instructions as you go (hands free of course), will help ensure that you make it to your destination with no problems.
The more time it takes a router to evaluate where the data packet is going, the more time is wasted in getting it to its destination, resulting in increased latency.
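To make the processing step concrete, here is a toy longest-prefix-match lookup in Python. The prefixes and link names are hypothetical, but the logic mirrors what a router's forwarding table does, and the time this lookup takes is the processing delay:

    import ipaddress

    # A toy forwarding table: prefix -> outbound link (all entries hypothetical)
    FORWARDING_TABLE = {
        ipaddress.ip_network("10.0.0.0/8"): "link-A",
        ipaddress.ip_network("10.1.0.0/16"): "link-B",
        ipaddress.ip_network("0.0.0.0/0"): "link-default",  # catch-all route
    }

    def select_link(dest_ip):
        """Longest-prefix match: the most specific matching route wins."""
        addr = ipaddress.ip_address(dest_ip)
        matches = [net for net in FORWARDING_TABLE if addr in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return FORWARDING_TABLE[best]

    print(select_link("10.1.2.3"))  # -> "link-B", the most specific match

Real routers do this in specialized hardware precisely because doing it slowly, for every packet, would add up to noticeable latency.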
2. Queueing
The second factor which can cause delays is queueing. Whether you work in a small business with three employees or a large enterprise with 5,000 employees, your network is in constant use – archiving old records, visiting websites, sending emails, listening to music, using cloud-based business applications, and watching online videos, just to name a few.
It appears this is all happening at the same time, but it is not. Each of these activities generates packets of data which travel back and forth, waiting in queues at different points along the network before proceeding to their destination. If they were actually all sent at once, the traffic jam of data could create a network bottleneck or dropped packets. This can happen because a router is overloaded, because it prioritizes more important traffic first, because the traffic is suspected to be part of a DDoS attack, or for other reasons.
To prevent this traffic jam, packets of data are placed in queues where they wait until they are released and can continue on their way to an email recipient, data center server or other destination. Since a data packet can be stopped and placed in a queue at more than one point along the route, each delay that it experiences adds more latency and extends the time it takes the data to get to its recipient.
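The waiting can be simulated in a few lines. In this Python sketch, with arrival times and a per-packet service time invented purely for illustration, each packet on a first-in, first-out link must wait for the packets ahead of it:

    def queueing_delays(arrival_times, service_time):
        """FIFO queue: each packet waits until the link finishes the packets ahead of it."""
        delays = []
        link_free_at = 0.0
        for arrival in arrival_times:
            start = max(arrival, link_free_at)   # wait if the link is still busy
            delays.append(start - arrival)       # time spent sitting in the queue
            link_free_at = start + service_time  # link is busy transmitting this packet
        return delays

    # Three packets arriving 1 ms apart on a link that needs 2 ms per packet:
    print(queueing_delays([0.0, 1.0, 2.0], 2.0))  # -> [0.0, 1.0, 2.0] (ms)

Notice how each successive packet waits longer than the one before it; that compounding wait is queueing latency.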
3. Transmission
The third factor which can cause delays is transmission. Simply put, transmission delay is the time between when the first bit of a packet and the last bit of that packet are pushed onto the link. Expressed as a question: how quickly can you get your data onto your network's link or "on the wire" so it can be sent to its destination?
This is where the idea of bandwidth comes into play. The speed with which you can do this is determined by the speed of your link. If you can increase your network's bandwidth, you can increase that speed and reduce the transmission latency on your network. If that is not possible, you will need to send less data.
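The underlying arithmetic is simple: transmission delay is the packet size divided by the link speed. A quick Python illustration with standard example numbers:

    def transmission_delay_ms(packet_bytes, link_bits_per_sec):
        """Time to push every bit of one packet onto the wire."""
        return packet_bytes * 8 / link_bits_per_sec * 1000  # bits / (bits/s) -> ms

    # A typical 1,500-byte packet on a 100 Mbps link:
    print(transmission_delay_ms(1500, 100e6))  # -> 0.12 ms
    # The same packet on a 1 Gbps link takes 0.012 ms: ten times the
    # bandwidth, one tenth the transmission delay.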
4. Propagation
The fourth factor which can cause delays is propagation. Propagation is probably the easiest aspect of latency to understand, but also the most difficult to control. Propagation delay is the time between when the last piece of data is transmitted at the source and when that last piece is received at the destination. It is dictated entirely by the physical distance between where the data is sent and where it ends up.
Consider an email sent from a salesperson in Los Angeles, California to a potential customer in Oslo, Norway. If the data could be sent on a direct route, that would be a distance of 5,324 miles. But computer networks are not direct. Since fiber optic lines must cross continents and oceans to get the data to its destination, the distance traveled is actually longer. Beginning in Los Angeles, the email may have to travel through Saint Louis, Missouri; Washington, D.C.; London, England; and Copenhagen, Denmark before it arrives at the office of the email's recipient in Oslo.
No advanced technology can reduce propagation latency below the limits set by physics. Physical distance is what it is. Los Angeles will always be 5,324 miles from Oslo.
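What you can do is calculate it. Signals in optical fiber travel at roughly two-thirds the speed of light, about 200,000 km per second, so a back-of-the-envelope Python estimate for the Los Angeles-to-Oslo example looks like this:

    def propagation_delay_ms(distance_km, speed_km_per_s=200_000):
        """Signal speed in fiber is roughly two-thirds of c, ~200,000 km/s."""
        return distance_km / speed_km_per_s * 1000

    # The 5,324-mile (~8,568 km) straight line from Los Angeles to Oslo:
    print(propagation_delay_ms(8568))  # -> ~42.8 ms one way

And that is the best case, before the detours through Saint Louis, Washington, London, and Copenhagen add distance of their own.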
Add together your latency for each of these four factors – processing, queueing, transmission and propagation – and you have the total amount of network latency.
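In code form, that summary is a one-line sum per hop; the sample numbers below are purely illustrative:

    def total_latency_ms(processing, queueing, transmission, propagation):
        """Per-hop latency is simply the sum of the four component delays."""
        return processing + queueing + transmission + propagation

    # One hop with 0.05 ms processing, 1.0 ms queueing, 0.12 ms transmission,
    # and 42.8 ms propagation:
    print(total_latency_ms(0.05, 1.0, 0.12, 42.8))  # -> ~43.97 ms
    # Repeat for every hop on the path and sum the results for end-to-end latency.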
How MULTACOM Reduces Latency
At MULTACOM, we are laser-focused on reducing latency to help deliver increased productivity, efficiency and speed of business to all our clients. Four primary ways in which we accomplish this are:
1. Network Peering:
Peering is defined as the connection of two separate Internet networks for the purpose of directly transferring customer traffic between them, so a third-party carrier does not need to be involved.
While all Tier 1 ISPs and many colocation and dedicated data center providers also peer, their primary focus is to reduce their own costs. At MULTACOM, we do this not to save money for ourselves but to deliver business benefits to our clients, including:
- Increased redundancy across multiple ISPs which decreases reliance on a single provider.
- Increased capacity enabling large quantities of data to be split and transported across multiple networks which improves speed of delivery.
- Enhanced performance by providing data with more direct paths to its destination.
We have over 200 peers and direct connections to large ISPs around the world including Singtel, NTT, China Telecom, China Unicom, Qwest, Microsoft, Google, Airtel, China Mobile, ChinaCache, EMIX, and more.
2. Route Optimization:
Route optimization software lets a provider choose whether the route a client's data travels is selected for cost or for performance. Many providers, in order to generate large cost savings for themselves, select inexpensive routes with little concern for performance.
At MULTACOM, our intelligent routing platform is optimized not for cost savings but for client performance. Our software measures latency and packet loss, enabling us to route network traffic around problem areas as needed. It selects the best path regardless of cost, improving performance and speed of delivery.
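As a heavily simplified sketch of the idea, not a description of MULTACOM's actual platform, the selection logic might look like this in Python: pick the lowest-latency route, discarding any path whose packet loss exceeds a threshold. The route names, measurements, and 1% loss cutoff are all hypothetical:

    def best_route(routes):
        """Pick the lowest-latency route among paths with acceptable packet loss.

        `routes` maps a route name to (latency_ms, packet_loss_fraction).
        """
        usable = {name: (lat, loss)
                  for name, (lat, loss) in routes.items()
                  if loss < 0.01}  # drop any path losing 1% or more of packets
        return min(usable, key=lambda name: usable[name][0])

    print(best_route({"via-carrier-A": (85.0, 0.00),
                      "via-carrier-B": (60.0, 0.05)}))
    # -> "via-carrier-A": the 60 ms path looks faster on paper,
    #    but its 5% packet loss disqualifies it.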
3. Network Utilization:
Network utilization is the current amount of network traffic compared to the maximum amount of traffic that the network can handle. It is usually written as a percentage and shows whether a network is busy, at normal utilization, or is idle.
MULTACOM manages network utilization on a per-line basis, with no line allowed to exceed 50% utilization. This headroom enables us to eliminate bottlenecks, increase network performance, and reduce latency across our data center.
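The calculation itself is straightforward. A quick Python illustration with hypothetical traffic figures:

    def utilization_pct(current_bps, capacity_bps):
        """Network utilization: current traffic as a percentage of link capacity."""
        return current_bps / capacity_bps * 100

    # A line carrying 400 Mbps of traffic on a 1 Gbps link:
    print(utilization_pct(400e6, 1e9))  # -> 40.0 (%), under a 50% ceiling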
4. DDoS Mitigation:
We utilize a unique scrubbing system that stops attempted attacks before they reach the network by studying the specific fingerprint of each attack. This enables us to filter only that specific type of attack without disturbing the service of other customers.
As an example, harmless video streams and voice communications that use UDP (User Datagram Protocol) are often blocked by a DDoS provider's scrubbing center because they are assumed to be a UDP flood. Because our proprietary software detects the type, port, destination, and fingerprint of an attack, we can pinpoint and mitigate it by filtering that specific signature and diverting only matching traffic to the scrubbing center, without disturbing our clients' voice and video streams and other UDP services.
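To illustrate the concept only, not our production system, here is a simplified Python sketch of signature-based filtering in which every field name and value is hypothetical. Traffic is diverted to the scrubbing center only when it matches a known attack fingerprint:

    def should_scrub(packet, attack_signatures):
        """Divert a packet only if it matches a known attack fingerprint
        (protocol, destination port, destination address)."""
        for sig in attack_signatures:
            if (packet["proto"] == sig["proto"]
                    and packet["dst_port"] == sig["dst_port"]
                    and packet["dst_ip"] == sig["dst_ip"]):
                return True   # matches the attack fingerprint: scrub it
        return False          # legitimate traffic (voice, video) passes untouched

    signatures = [{"proto": "UDP", "dst_port": 53, "dst_ip": "203.0.113.10"}]
    voice_packet = {"proto": "UDP", "dst_port": 5060, "dst_ip": "203.0.113.20"}
    print(should_scrub(voice_packet, signatures))  # -> False: the call goes through

The key design point is the default: unless a packet matches a specific attack signature, it is forwarded normally, instead of blanket-blocking an entire protocol like UDP.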
Conclusion
Since each individual component that we have discussed here – processing, queueing, transmission and propagation – can contribute to the overall latency which you experience, it is critical to take this into consideration when selecting a data center provider. Does the provider spend more time discussing how much money they can save you or how quickly and effectively you will be able to conduct business? Your data center’s focus on decreasing latency at every turn will keep your business moving, enable you to be more competitive and help drive a positive return to your company’s bottom line. Contact MULTACOM to learn more about our initiatives to reduce network latency.