In the Internet, congestion control is embedded in a protocol at the transport layer or above. TCP operates "end-to-end", where an "end" is the host on which the application runs. Performance Enhancing Proxies (PEPs) typically break an end-to-end connection into several closed-loop connections. However, in addition to reliability problems, this complicates the use of IPsec and SSL, since security is an end-to-end function in these cases. In terms of processing delay, running a separate TCP instance for every flow is also expensive.
Lowering latency in datacenters directly impacts the quality of the returned results and, as a result, revenue. However, most proposed improvements require changes to the datacenter fabric, which hinders their applicability and deployability over commodity hardware. Moreover, TCP-based protocols fail to attain fairness among flows with different RTTs.
Overbooking of resources is an economic necessity: it attracts enough users to amortise costs over. The same overbooking, however, creates the potential for performance hazards, triggered both by "normal" fluctuations in demand and by demand peaks due to external correlated events. The problem is therefore how to maintain service to critical/essential/premium customers during periods when demand exceeds supply.
Routing and forwarding solutions in datacentre networks (DCNs), typically based on TCP/IP, do not scale well, resulting in large forwarding tables, a high routing burden and significant communication cost (the information exchanged to populate routing tables and to re-converge upon failures).
In RINA, recursion arises from the ability to arbitrarily arrange structurally equivalent DIFs. This allows each DIF to detect and manage congestion for its own resources and to push back to higher-layer DIFs when those resources are overloaded. Moreover, improvements made to TCP on the Internet, such as Split-TCP, "naturally appear" in RINA without their side effects. RINA also aggregates flows at lower layers, leading to less competition among flows and increased performance.
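The layered push-back described above can be sketched as follows. This is a conceptual toy model, not RINA code: the class name, attributes and halving policy are illustrative assumptions.

```python
class DIF:
    """Toy model of one DIF layer; `lower` is the DIF this one runs over
    (None means the physical medium). Each DIF manages congestion for its
    own resources and pushes back to the layer above when overloaded."""

    def __init__(self, name, capacity, lower=None):
        self.name = name
        self.capacity = capacity      # packets this DIF can carry per interval
        self.lower = lower
        self.rate = capacity          # current allowed sending rate
        self.congested = False

    def write(self, pkts):
        if pkts > self.capacity:
            self.congested = True     # this DIF's resources are overloaded
        if self.lower:
            self.lower.write(pkts)
            if self.lower.congested:
                # Push-back: the lower DIF signals this one to reduce demand
                # (halving is an arbitrary illustrative policy).
                self.rate = max(1, self.rate // 2)
```

For example, a host DIF writing 50 packets over a network DIF with capacity 10 would see the lower layer flag congestion and would halve its own rate in response.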
LGC is an easily deployable mechanism. It operates purely at the transport layer of end systems and does not require changes to network equipment such as switches and routers. However, unlike DCTCP, where the marking threshold is a function of the Bandwidth-Delay Product (BDP), our threshold is always set to a single packet, which further lowers latency. LGC also attains fairness among flows irrespective of their RTTs.
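To illustrate the difference between the two thresholds (this is not the LGC algorithm itself; the link speed, RTT, packet size and BDP fraction are assumptions chosen for illustration):

```python
def marked(queue_len_pkts, threshold_pkts):
    """A packet is ECN-marked when the queue exceeds the threshold."""
    return queue_len_pkts > threshold_pkts

# DCTCP-style threshold: a fraction of the BDP.
# Assumed link: 10 Gbps, 100 us RTT, 1500-byte packets.
bdp_pkts = int(10e9 * 100e-6 / 8 / 1500)   # ~83 packets
dctcp_threshold = bdp_pkts // 6            # illustrative BDP fraction -> 13

# One-packet threshold: any standing queue beyond a single packet is
# marked, so senders back off before queues (and hence latency) build up.
one_pkt_threshold = 1

for q in (1, 2, 20):
    print(q, marked(q, dctcp_threshold), marked(q, one_pkt_threshold))
```

With these numbers, a 2-packet standing queue is already marked under the one-packet threshold but goes unmarked under the BDP-derived one, which is why the queue (and latency) can stay smaller.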
The combination of the QTAMux (which can ensure bounded delay and loss on short, ballistic timescales) and a congestion control system that can signal the need to decrease demand over elastic timescales can assure services despite overbooking.
Topological routing and forwarding policies make use of knowledge of the DCN topology to forward packets towards the neighbouring device closest to their destination. In the non-failure scenario, this approach only requires storing forwarding information per adjacent neighbour (compared to traditional forwarding tables, which may contain up to one entry per network node). Upon failures, storing merely a few exceptions is enough to override the primary rules invalidated by those failures.
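A minimal sketch of this idea, under the assumption that addresses encode topological position as coordinate tuples (e.g. pod/switch/host in a fat-tree-like DCN); the function and data layout are hypothetical illustrations, not a specific policy's implementation:

```python
def next_hop(dst_addr, neighbors, exceptions):
    """Pick the output port towards the neighbour topologically
    closest to dst_addr.

    neighbors:  dict mapping neighbour address (tuple) -> output port;
                one entry per adjacent device only.
    exceptions: dict mapping destination prefix (tuple) -> override port,
                populated only upon failures.
    """
    # A few failure exceptions override the primary topological rule.
    for prefix, port in exceptions.items():
        if dst_addr[:len(prefix)] == prefix:
            return port

    # Primary rule: the neighbour sharing the longest address prefix with
    # the destination is the topologically closest one.
    def shared_prefix(a, b):
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n

    best = max(neighbors, key=lambda nb: shared_prefix(nb, dst_addr))
    return neighbors[best]
```

Note that the state is per-neighbour (a handful of entries) plus the failure exceptions, rather than one entry per network node as in a conventional forwarding table.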
IP-based approaches have neither a coherent signalling infrastructure to convey the demand/supply requirements nor the appropriate recursion and spatial-isolation mechanisms to achieve suitable time constants for the control loops.
The rigidity of the TCP/IP protocol stack does not allow the deployment of such topological forwarding and routing solutions, whereas RINA's programmable environment supports them in an easy way.
Wireless/Satellite network service providers
Datacenter owners/administrators
Customer: service providers (e.g. telcos) wanting to offer premium services, e.g. emergency communications, entertainment without interruptions, reliable gaming, business services such as remote desktop, etc.
Target customer: datacentre owners/administrators.
Packet loss, End-to-end delay
Latency, Round-trip-time, Throughput
Resource utilisation, QoE, UX
Forwarding table size, communication cost, computational complexity of the routing re-convergence upon failures