Husker engineers developing tool to combat internet congestion

Computer Science

Tiffany Lee, November 2, 2021

As the internet continues its rapid expansion into every corner of people’s lives, its reliability and efficiency are increasingly important. Stable connections are needed to power home security systems, disaster response tools, smart cities, connected vehicles and more.

For these applications to thrive, the internet must be reliable in complex environments — on the road, in the ocean and underground, to name a few. A major obstacle to consistent performance is network congestion, which happens when a network is overloaded with more data than it can carry.

The University of Nebraska–Lincoln’s Lisong Xu and Hamid Bagheri are using a $750,000 grant from the National Science Foundation to develop a tool that will address one of the most significant drivers of network congestion: buggy congestion control algorithms, whose misfires can lead to the all-too-familiar internet freeze or interrupted video streaming during peak usage times. These glitchy algorithms are the result of how engineers test their products before deployment: Today, the standard approach is to test congestion control algorithms for speed, but not reliability. Bugs related to reliability are discovered and fixed down the road — sometimes years after deployment.

This approach poses serious threats to the internet’s stability and performance. Xu and Bagheri aim to change the process by producing a methodology that engineers can use to verify the reliability of these algorithms — formally known as Congestion Control Algorithm Implementations, part of the Transmission Control Protocol — before they go live.

“The goal of this project is to make sure the internet and the Transmission Control Protocol will work correctly in all types of new internet applications and new types of environments that are different from the traditional internet,” said Xu, professor of computer science and engineering. “We want to make sure that we not only have a fast internet, but also one that is reliable and that you can count on in all possible situations, especially in emergencies.”

The Transmission Control Protocol, or TCP, is one of the internet’s foundational protocols. It’s a communications standard that enables computers on separate networks to communicate by grouping data into packets for transport across networks. If the internet is the “information highway,” TCP is its overarching traffic law.
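
To make that concrete, here is a minimal sketch in Python (the loopback address, port and names are illustrative, not from the project): a toy client and server exchange a message over TCP, while the operating system’s TCP stack silently handles packetization, ordering, retransmission and congestion control.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # illustrative loopback address and port
ready = threading.Event()

def server():
    # SOCK_STREAM selects TCP: bytes arrive reliably and in order,
    # even though the network below delivers them as discrete packets.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                       # client may now connect
        conn, _ = srv.accept()
        with conn:
            print("server received:", conn.recv(1024).decode())

t = threading.Thread(target=server)
t.start()
ready.wait()

# The client writes a plain byte stream; packetization, retransmission
# and congestion control all happen inside the kernel's TCP stack.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello over TCP")

t.join()
```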

Congestion Control Algorithm Implementations, or CCAIs, are components of TCP. Though engineers have launched more than 20 CCAIs, even the most widely adopted ones pose problems. For example, in 2015, engineers discovered a decade-old error in CUBIC, the CCAI used by all major operating systems. Experts believe the bug significantly affected the internet’s efficiency during that period.
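
For a sense of what a CCAI actually computes, the sketch below implements CUBIC’s window-growth function as specified in RFC 8312. The constants come from the RFC; the function and variable names and the example values are mine.

```python
# A sketch of CUBIC's window growth (RFC 8312). After a loss event,
# the congestion window grows as a cubic function of time, plateauing
# near w_max (the window size at the last loss) before probing higher.
C = 0.4       # scaling constant from RFC 8312
BETA = 0.7    # multiplicative decrease factor from RFC 8312

def cubic_window(t: float, w_max: float) -> float:
    """Congestion window (in packets) t seconds after a loss."""
    k = ((w_max * (1 - BETA)) / C) ** (1 / 3)   # time to climb back to w_max
    return C * (t - k) ** 3 + w_max

# Example: with w_max = 100 packets, the window drops to 70 right
# after the loss, then curves back toward 100 and beyond.
for t in range(0, 11, 2):
    print(t, round(cubic_window(t, 100.0), 1))
```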

Myriad reasons underlie the problem. For one thing, CCAI developers often don’t have the expertise or training to verify their algorithms using mathematically based techniques. Scalability is also an issue, as verifying a CCAI involves tracking a large number of data packets, over many round trips, across a wide range of network environments. Until recently, expense has been a barrier as well.

“Before, computational mechanisms have not been able to use verification techniques, because it was expensive to complete an exhaustive analysis,” said Bagheri, assistant professor of computer science and engineering. “These days, we have stronger machines that enable us to leverage exhaustive analysis and verification techniques to be able to prove properties of the systems.”

Xu and Bagheri will merge their areas of expertise to produce a tool that developers at companies such as Google, Microsoft and Apple can easily use to check, analyze and verify their algorithms. Bagheri, an expert in software engineering, will help apply a technique that has been used successfully to verify software to the realm of TCP, Xu’s specialty.

The approach, called exhaustive symbolic execution, is a means of systematically testing all the possible execution paths resulting from certain inputs, which are represented symbolically. By contrast, concrete execution allows engineers to test only the single path triggered by a specific input. This shift from concrete inputs to symbolic inputs in the realm of TCP and CCAIs is what distinguishes the team’s work.
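
The contrast is easy to see on a toy example. The sketch below is my own illustration, not the team’s tool; it assumes the z3-solver Python package and symbolically explores both branches of a simplified window-update rule, where a concrete test with a single input would exercise only one.

```python
# A toy symbolic execution of a simplified congestion-window update,
# using the Z3 solver (pip install z3-solver). A concrete test picks
# one cwnd value and follows one path; here cwnd is a symbol, and we
# ask the solver which paths are feasible for *any* input.
from z3 import Int, Solver, sat

cwnd = Int("cwnd")   # symbolic congestion window (packets)
ssthresh = 64        # illustrative slow-start threshold

# Simplified update rule under test:
#   if cwnd < ssthresh: cwnd doubles (slow start)
#   else:               cwnd grows by 1 (congestion avoidance)
paths = [
    ("slow start",           cwnd < ssthresh,  cwnd * 2),
    ("congestion avoidance", cwnd >= ssthresh, cwnd + 1),
]

for name, path_condition, new_cwnd in paths:
    s = Solver()
    s.add(cwnd >= 1, path_condition)
    if s.check() == sat:
        m = s.model()
        print(f"{name}: feasible, e.g. cwnd = {m[cwnd]} -> {m.eval(new_cwnd)}")
    # A verifier would also assert safety properties on each path,
    # e.g. add the negation of "new_cwnd stays positive" and check
    # that the solver reports it unsatisfiable.
```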

“When you analyze a single point, you’re not covering the entire state space,” Bagheri said. “With the help of new verification techniques, we are going to exhaustively analyze the entire state space with respect to TCP and CCAI protocols, and especially custom-built CCAIs. That is the big difference between this work and much of the other work that has been done in this domain.”

The duo will recruit graduate and undergraduate students to participate in the research, with a particular focus on including women and students from underrepresented groups. They also plan to partner with the Young Nebraska Scientists program to provide high school students with summer research experiences.

The project is funded through NSF’s Formal Methods in the Field program, which unites researchers in formal methods with researchers in other areas of computer and information science and engineering to develop methodologies aimed at correct-by-construction systems and applications.

