TCP Congestion Handling

Congestion, by definition, means that intermediate devices – routers, in this case – are overloaded. TCP segments are then not delivered as fast as possible, or may even be dropped – which leads to retransmission.

Now, let’s suppose congestion dramatically increased on the internetwork, and there was no mechanism in place to handle congestion. Segments would be delayed or dropped, which would cause them to time out and be retransmitted. This would increase the amount of traffic on the internetwork between client/server. Furthermore, there might be thousands of TCP connections behaving similarly. Each would keep retransmitting more and more segments, increasing congestion further. Performance of the entire internetwork would decrease dramatically, resulting in a condition called congestion collapse.

TCP uses a number of mechanisms to achieve high performance and avoid congestion collapse. These mechanisms control the rate of data entering the network, keeping the data flow below a rate that would trigger the collapse. They also yield an approximately max-min fair allocation between flows.

Acknowledgments for data sent, or lack of acknowledgments, are used by senders to infer network conditions between TCP sender and receiver. Coupled with timers, TCP senders and receivers can alter the behavior of the flow of data. This is more generally referred to as congestion control and/or congestion avoidance.

Modern implementations of TCP contain four intertwined algorithms:

  • Slow Start
  • Congestion Avoidance
  • Fast Retransmit
  • Fast Recovery

(NOTE: The book attributes these mechanisms to RFC 2001, but the specification has evolved well beyond that; they are now defined in RFC 5681, which obsoletes RFC 2581, which in turn obsoleted RFC 2001.)

TCP Congestion Window

The congestion window is one of the factors that determine the number of bytes that can be outstanding at any given time. It is maintained by the sender.

The congestion window is a means of keeping the link between the sender and the receiver from being overloaded with too much traffic. It is calculated by estimating how much congestion there is between the two endpoints.

When a connection is set up, the congestion window – a value maintained independently at each host – is set to a small multiple of the maximum segment size (MSS) allowed on that connection. Further variation in the congestion window is dictated by an Additive Increase/Multiplicative Decrease (AIMD) approach (I believe this was the case before slow start, which follows a multiplicative increase).

This means that if all segments are received and the acknowledgments reach the sender on time, some constant is added to the window size. During slow start the window keeps growing exponentially until a timeout occurs or the threshold value “ssthresh” is reached. After this, the congestion window increases linearly, at a rate of 1/congestion-window per new acknowledgment received.

On timeout:

  1. The congestion window is reset to 1 MSS (not sure if it is a complete MSS or just the initial fraction mentioned before)
  2. “ssthresh” is set to half the congestion window size before the packet loss started
  3. “slow start” is initiated.
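The growth rules and the timeout reaction above can be sketched as a toy Python model (cwnd is measured in MSS units, the numbers are illustrative, and `rtt`/`on_timeout` are made-up helper names, not real TCP stack code):

```python
# Toy model of congestion-window growth (a sketch, not a real TCP stack;
# cwnd is measured in MSS units and all numbers are illustrative).
MSS = 1

def rtt(cwnd, ssthresh):
    """One round trip: one ACK arrives per outstanding segment."""
    for _ in range(int(cwnd)):
        if cwnd < ssthresh:
            cwnd += MSS                 # slow start: cwnd doubles per RTT
        else:
            cwnd += MSS * MSS / cwnd    # congestion avoidance: ~+1 MSS per RTT
    return cwnd

def on_timeout(cwnd):
    """The three steps above: cwnd back to 1 MSS, ssthresh = half of cwnd."""
    return 1 * MSS, max(cwnd // 2, 2 * MSS)

cwnd, ssthresh = 1, 8
for i in range(5):
    cwnd = rtt(cwnd, ssthresh)
    print(f"after RTT {i + 1}: cwnd ~ {cwnd:.2f}")
cwnd, ssthresh = on_timeout(int(cwnd))
print(f"after timeout: cwnd = {cwnd}, ssthresh = {ssthresh}")
```

Running this shows the exponential phase (1, 2, 4, 8) switching to slow linear growth once cwnd passes ssthresh, then collapsing back to 1 MSS on timeout.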

An administrator may adjust the maximum window size limit, or adjust the constant added during additive increase, as part of TCP tuning.

The flow of data over a TCP connection is also controlled by the use of the TCP receive window. By comparing its own congestion window with the receive window of the receiver, a sender can determine how much data it may send at any given time.

Slow Start (RFC 5681)

Slow start is part of the congestion control strategy used by TCP, in conjunction with the other algorithms. It is also known as the exponential growth phase.

Slow start begins with an initial congestion window (cwnd) of 1, 2 or 10 segments. The value of cwnd is increased with each acknowledgment (ACK) received, effectively doubling the window size each round-trip time (“although it is not exactly exponential, because the receiver may delay its ACKs, typically sending one ACK for every two segments that it receives”). The transmission rate is increased by the slow-start algorithm until either a loss is detected, the receiver’s advertised window (rwnd, or RCV.WND) becomes the limiting factor, or the slow-start threshold (ssthresh) is reached. If a loss event occurs, TCP assumes that it is due to network congestion and takes steps to reduce the offered load on the network; these steps depend on the TCP congestion avoidance algorithm in use. Once ssthresh is reached, TCP changes from the slow-start algorithm to the linear-growth (congestion avoidance) algorithm. At this point, the window is increased by 1 segment per RTT.

Although the strategy is referred to as “slow start”, its congestion-window growth is quite aggressive – more aggressive than the congestion avoidance phase. Before slow start was introduced in TCP, the initial pre-congestion-avoidance phase was even faster. (To do: more digging – I think initially it was a linear approach – what we used to call diente de sierra, a sawtooth – and it was a much slower way to fill the link capacity with data.)

The behavior upon packet loss depends on the TCP congestion avoidance algorithm that is used:

TCP Tahoe: Here, when a loss occurs, a fast retransmit is sent, half of the current CWND is saved as the slow start threshold (ssthresh), and slow start begins again from its initial CWND. Once the CWND reaches ssthresh, TCP changes to the congestion avoidance algorithm, where each new ACK increases the CWND by roughly MSS × (MSS / CWND).

This results in a linear increase of the CWND.

TCP Reno: This implements an algorithm called fast recovery. A fast retransmit is sent, half of the current CWND is saved as ssthresh and also as the new CWND, thus skipping slow start and going directly to the congestion avoidance algorithm.
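As a rough sketch, the different loss reactions of the two flavors could be modeled like this (cwnd in MSS units; the function names are invented for illustration):

```python
# Sketch of how Tahoe and Reno react to a loss (illustrative; cwnd in MSS units).

def tahoe_on_loss(cwnd):
    """Tahoe: save half of cwnd as ssthresh, restart slow start from 1 MSS."""
    ssthresh = max(cwnd // 2, 2)
    return 1, ssthresh                 # (new cwnd, ssthresh)

def reno_on_loss(cwnd):
    """Reno fast recovery: half of cwnd becomes both ssthresh and the new cwnd."""
    ssthresh = max(cwnd // 2, 2)
    return ssthresh, ssthresh          # skips slow start, straight to congestion avoidance

print(tahoe_on_loss(16))   # (1, 8)
print(reno_on_loss(16))    # (8, 8)
```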

Transmission Control Protocol (TCP) uses a network congestion-avoidance algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, with other schemes such as slow-start and congestion window to achieve congestion avoidance. The TCP congestion-avoidance algorithm is the primary basis for congestion control in the Internet. Per the end-to-end principle, congestion control is largely a function of internet hosts, not the network itself. There are several variations and versions of the algorithm implemented in protocol stacks of operating systems of computers that connect to the Internet.

From <https://en.wikipedia.org/wiki/TCP_congestion_control>

tcp1

 

signature


RESTful Services

Rest1

Your client makes a request to the server for data; the request can include data such as API keys. When the request arrives at the server, the server sends back a response indicating whether it was successful or not, along with the data (JSON, XML, etc.) – in its very simplest form, REST can be thought of as a way to get data.

JSON is a simple standard for data delivery in the RESTful model; it has been used to exchange data between applications written in many different programming languages.

This source, https://restfulapi.net/json-vs-xml/, provides some more insight.

XML is a data format, AND it is also a language. It has many powerful features that make it much more than a simple format for data interchange, e.g. XPath, attributes and namespaces, XML Schema, XSLT, etc. These features have been the main reasons behind XML's popularity.

JSON was not designed to have such features, even though some of them are now trying to find their place in the JSON world, e.g. JSONPath.

Simply put, XML's purpose is document markup. Prefer XML whenever document markup and metadata are an essential part of the data and cannot be taken away.

Example:

https://developers.messagebird.com/docs/introduction

curl -X GET 'https://rest.messagebird.com/reporting/sms?periodStart=2018-04-01T00:00:00Z&periodEnd=2018-04-30T00:00:00Z&periodGroup=month&filterBy[originator]=OmNomNom&filterBy[originator]=BeautyBird&groupBy=originator' -H 'Authorization: AccessKey test_euSTWsGvjp' -H 'Accept: application/json'
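For comparison, roughly the same request could be built with Python's standard library (the endpoint, parameters, and placeholder access key are taken from the curl example above):

```python
# Build the same MessageBird reporting request as the curl example
# using only the Python standard library.
from urllib.parse import urlencode
from urllib.request import Request

params = [
    ("periodStart", "2018-04-01T00:00:00Z"),
    ("periodEnd", "2018-04-30T00:00:00Z"),
    ("periodGroup", "month"),
    ("filterBy[originator]", "OmNomNom"),
    ("filterBy[originator]", "BeautyBird"),
    ("groupBy", "originator"),
]
url = "https://rest.messagebird.com/reporting/sms?" + urlencode(params)
req = Request(url, headers={
    "Authorization": "AccessKey test_euSTWsGvjp",   # placeholder key from the docs
    "Accept": "application/json",
})
# urllib.request.urlopen(req) would then return the JSON report body
print(req.full_url)
```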

 

Rest2

 

 

Things that hold numbers.

What happens if we start changing the hidden layers (and the neurons in them) for the initial Circle data set at http://playground.tensorflow.org – how many of those things do we need to learn the current data set?

D3

A simple perceptron with only 4 neurons does great (watch the epoch count):

D4

However, 2 neurons will just not cut it with the given data set and activation:

D5

Check out the epoch count using ReLU:

D6
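As a rough, non-playground approximation of what those experiments do, here is a minimal numpy sketch that trains a tiny one-hidden-layer ReLU network on a synthetic circle data set (the data generation, network size, and learning rate are all invented for illustration, not taken from the playground):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "circle" data: inner cluster = class 1, outer ring = class 0
n = 200
r = np.concatenate([rng.uniform(0.0, 1.0, n), rng.uniform(2.0, 3.0, n)])
theta = rng.uniform(0, 2 * np.pi, 2 * n)
X = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
y = np.concatenate([np.ones(n), np.zeros(n)])

# One hidden layer with 4 ReLU neurons, sigmoid output
W1 = rng.normal(0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1.0, (4, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)          # ReLU activation
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return h, p.ravel()

def loss(p):
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

lr = 0.1
_, p0 = forward(X)
for epoch in range(2000):
    h, p = forward(X)
    g = (p - y)[:, None] / len(y)             # gradient of loss w.r.t. logits
    dW2 = h.T @ g; db2 = g.sum(0)
    dh = g @ W2.T; dh[h <= 0] = 0.0           # back-propagate through ReLU
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, p1 = forward(X)
print("loss before:", round(loss(p0), 3), "after:", round(loss(p1), 3))
```

Dropping the hidden layer to 2 neurons in this sketch, like in the playground, makes it noticeably harder for the loss to come down.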

 

 

 

What is inside those ESP packets?

pic1

Say you have an IPsec tunnel between two Palo Alto devices (or at least one) and you want to know what is inside those ESP packets.

You could capture IP protocol 50 (ESP).

esp-pic2

To get the keys, you will need to raise the dump level: debug ike global on dump

Send some traffic over the VPN tunnel, and you will see in the ikemgr log the encryption key and the authentication key used for that SPI.

esp-pic3

See in the image above the first packet (1) shown as ESP.

Make sure the SPI value set in Wireshark is in LOWER case.
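Since Wireshark wants the SPI as a lowercase hex string, a quick way to format it (the SPI value here is made up, not from a real capture):

```python
# Format a (hypothetical) SPI value the way Wireshark's ESP SA entry expects:
# a "0x"-prefixed, lowercase hex string.
spi = 0xC0FFEE01                 # example SPI, not from a real capture
wireshark_spi = f"0x{spi:08x}"
print(wireshark_spi)             # → 0xc0ffee01
```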

After Wireshark decrypts it, we can see that the ICMP traffic was sent out via the tunnel interface encapsulated in ESP, and that the payload is an ICMP message timing out.

esp-pic4

signature

Get ready for multihoming!

Internet businesses, especially those which provide VoIP, e-commerce or cloud services, require IP redundancy. For them, network performance is crucial, as it is directly connected to their quality of service. Any routing anomaly causing downtime or outages results in financial loss and might severely affect the provider's reputation. Deploying redundant IP connectivity is one of the most frequent ways to minimize downtime, and this post will walk through the most important steps in setting up redundancy for an IP network.

A redundant network is one connected to multiple internet providers. Such networks are commonly called multihomed. The Border Gateway Protocol (BGP) is used to connect to transit providers via eBGP sessions. The protocol is able to assess all the available routes and find the shortest path to an end user. Traffic is then routed through the shortest available paths to achieve maximum performance.

Prepare your BGP Network:

BGP is quite similar to the Routing Information Protocol (RIP); however, instead of choosing the shortest path based on router hops, it relies on the shortest path among Autonomous Systems (AS). Each BGP routing domain is identified by an Autonomous System Number (ASN), provided by a Regional Internet Registry (RIR).

As you get to understand the BGP basics, configuring a multihomed network becomes simple. As soon as your network’s internet connections are up and running, you can follow these common steps to achieve BGP multihoming:

1. Get your own ASN. You can acquire one from your Regional Internet Registry, and identify your network on the internet, as a separate authority, running its own policies.
2. Purchase some IP address space from your RIR.
3. When using a static route to link with your provider, the network is single-homed (using one internet connection) and the internet provider is not sending any BGP routes to your network. In order to multihome, you must ask the internet provider to announce BGP routes towards your AS. Keep in mind, your ASN and the remote router’s neighbor address will be required by your internet provider. The static route can be removed as soon as you get the internet provider’s BGP routes in your routing table. As soon as you have all these in place, you can start advertising your network via BGP.
4. Once you are multihomed on a single route, add a link to an alternative internet provider, and ask it to advertise BGP routes towards your AS. The second internet provider will also require your ASN and the remote router’s neighbor address, so have them ready.

As soon as you have followed these steps, routes from each of your internet providers will appear within your edge router’s BGP table. According to BGP’s algorithm, routes having the shortest AS path towards a destination will be used to send the traffic through.

If one of your internet providers goes down, the BGP session to that provider will be reset, and all of the advertised routes originating from the offline provider will be withdrawn from your routing table. Better alternative routes will then be selected from the routes announced by the remaining provider.
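The best-path and failover behavior described above can be illustrated with a toy Python sketch that compares routes by AS-path length only (real BGP evaluates many other attributes first; the provider names, prefixes, and AS paths are invented):

```python
# Toy best-path selection on AS-path length only (real BGP compares many
# attributes before AS-path length; all values here are made up).
routes = {
    "providerA": {"10.20.0.0/16": ["64496", "64700"],           # 2 AS hops
                  "10.30.0.0/16": ["64496", "64701", "64702"]},
    "providerB": {"10.20.0.0/16": ["64499", "64703", "64700"],  # 3 AS hops
                  "10.30.0.0/16": ["64499", "64702"]},
}

def best_paths(routes):
    """Pick, per prefix, the route with the shortest AS path."""
    best = {}
    for provider, table in routes.items():
        for prefix, as_path in table.items():
            if prefix not in best or len(as_path) < len(best[prefix][1]):
                best[prefix] = (provider, as_path)
    return best

print(best_paths(routes))
# providerA wins 10.20.0.0/16, providerB wins 10.30.0.0/16

routes.pop("providerA")          # provider A goes down: its routes are withdrawn
print(best_paths(routes))        # everything fails over to provider B
```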

Due to BGP's algorithm, all of your traffic might be sent out towards a particular provider, since it offers the best routes. If the amount of traffic exceeds that provider's link capacity, you might need to perform some tuning to balance the traffic among your providers' links. This task can be quite hard to accomplish, since BGP alone does not provide load balancing. As an alternative, you could use specific hardware or route optimization solutions, such as Noction's Intelligent Routing Platform (IRP), to optimize BGP decision-making.

BGP Usage and considerations:

When using BGP, there are several things to keep in mind:
– Since BGP advertises network fluctuations to routers outside your AS, you must keep your network as stable as possible.
– Advertise only the specific set of prefixes you own. Other networks might suffer service loss if you advertise prefixes other than yours.
– Plan your architecture before engaging in BGP routing. Your network needs to be configured according to several BGP aspects to meet multihoming requirements.
– Choose your edge routers carefully. The Internet's BGP tables involve huge amounts of data, especially with multihoming in place, so your edge routers must have enough memory to store and process all those routing tables.

While BGP alone can empower your network to deliver fair performance, it is still not enough for performance-sensitive applications such as VoIP or e-commerce. Under some circumstances, the shortest path BGP selects could be congested or affected by other network anomalies. However, traffic gets re-routed from the shortest path only when the destination is completely unreachable. As a result, an end user might experience service delivery issues, since traffic is routed through a reachable, yet underperforming, internet path.

To avoid such scenarios, BGP tuning must be performed at the network's edge; this involves manipulating various BGP attributes to spot issues and re-route specific prefixes from underperforming paths to alternative routes with better performance metrics. Best practices recommend deployment of intelligent routing systems like Noction IRP, which can address most of your BGP challenges in a multihomed environment.

As soon as you have a redundant BGP network empowered by automation, you are ready to meet your customers' demand for 100% uptime and outstanding network performance.

A Networker Blog