Labeled Network Stack promises to improve the user experience of interactive network services while keeping resource utilization high

Network interaction has become ubiquitous in the information age, penetrating every area of our lives, from cloud gaming and web search to autonomous driving, driving human progress and bringing convenience to society. However, the growing number of clients has also created problems that degrade the user experience: an online service may fail to respond to some users within the expected time, a phenomenon known as high tail latency. Bursts of server traffic exacerbate the problem.

To solve this problem and improve computer performance, researchers must constantly optimize network stacks. At the same time, the low-entropy cloud (i.e., low workload interference and low system jitter) is becoming a new trend, and a server based on the Labeled Network Stack (LNS) is a good example, achieving orders-of-magnitude performance improvement over servers based on traditional network stacks. It is therefore essential to conduct a quantitative analysis of the LNS to reveal its benefits and potential improvements.

Wenli Zhang, a researcher at the State Key Laboratory of Processors, Institute of Computing Technology, and a co-author of the study, said: “Although previous experiments have demonstrated that LNS can support millions of clients with low queuing latency, compared to mTCP, a typical user-space network stack in academia, and the Linux network stack, the mainstream network stack in industry, an in-depth quantitative study is lacking to answer the following two questions:

(i) Where do the low tail latency and low entropy of LNS mainly come from, compared to mTCP and the Linux network stack?

(ii) How much more can LNS be optimized?”

To answer these questions, the authors propose an analytical method based on queuing theory to simplify the quantitative study of cloud server queuing latency. For a massive-client scenario, Zhang and co-authors establish models characterizing the change in processing speed at different stages for an LNS-based server, an mTCP-based server, and a Linux-based server, taking burst traffic as an example. The authors also derive formulas for the tail latency of the three servers.
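To make the flavor of this analysis concrete, here is a minimal Python sketch (not from the paper; the rates are hypothetical) of the kind of closed-form tail-latency expression queuing theory yields for the simplest single-stage M/M/1 queue; the paper's models chain several such stages into tandem queuing networks.

```python
import math

def mm1_tail_latency(lam, mu, p=0.99):
    """p-th percentile sojourn time of an M/M/1 queue.

    For arrival rate lam and service rate mu (lam < mu), the sojourn
    time W satisfies P(W > t) = exp(-(mu - lam) * t), so the p-th
    percentile is t_p = -ln(1 - p) / (mu - lam).
    """
    assert lam < mu, "queue must be stable (utilization < 1)"
    return -math.log(1.0 - p) / (mu - lam)

# Hypothetical numbers: 100k requests/s arriving at a stage that can
# serve 120k requests/s.
print(f"p99 latency: {mm1_tail_latency(1e5, 1.2e5) * 1e6:.1f} us")
```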

“Our models 1) reveal that two technologies in LNS, full-path prioritized data processing and full-path zero-copy, are the primary contributors to its high performance, bringing orders-of-magnitude improvement in queuing latency and reducing latency entropy by up to 5.5× compared with the mTCP-based server, and 2) suggest the optimal number of worker threads for querying a database, improving the LNS-based server's concurrency by 2.1×–3.5×,” Zhang said. “The analytical method can also be applied to modeling other servers characterized as tandem queuing networks.”
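As a rough illustration of the second result, the sketch below (again not from the paper; a textbook M/M/c model with hypothetical rates) sweeps the number of database worker threads and computes the 99th-percentile queuing delay from the Erlang C formula; the diminishing returns visible in such a sweep are what make it possible to read a good thread count off the model.

```python
import math

def erlang_c(c, lam, mu):
    """Probability that an arriving job must wait in an M/M/c queue."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # per-thread utilization
    assert rho < 1, "thread pool must be stable"
    s = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / (math.factorial(c) * (1 - rho))
    return top / (s + top)

def p99_wait(c, lam, mu):
    """99th-percentile queuing delay: P(Wq > t) = C * exp(-(c*mu - lam) * t)."""
    prob_wait = erlang_c(c, lam, mu)
    if prob_wait <= 0.01:             # 99% of jobs are served immediately
        return 0.0
    return math.log(prob_wait / 0.01) / (c * mu - lam)

lam, mu = 900.0, 100.0                # hypothetical: 900 queries/s, 100/s per thread
for c in range(10, 21):
    # Tail latency drops sharply at first, then flattens; the knee of
    # this curve is a natural choice for the worker-thread count.
    print(f"{c:2d} threads -> p99 wait {p99_wait(c, lam, mu) * 1e3:7.3f} ms")
```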

This work was supported in part by the National Key Research and Development Program of China (2016YFB1000200) and the Key Program of the National Natural Science Foundation of China (61532016).

Article reference: Hongrui Guo, Wenli Zhang, Zishu Yu, Mingyu Chen, “Analysis of Theoretical Queuing Performance of a Low-Entropy Labeled Network Stack,” Intelligent Computing, vol. 2022, Article ID 9863054, 16 pages, 2022. https://doi.org/10.34133/2022/9863054
