<?xml version="1.0" encoding="US-ASCII"?>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc2629 version 1.2.13 -->
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" [
<!ENTITY RFC4760 SYSTEM "https://xml2rfc.tools.ietf.org/public/rfc/bibxml/reference.RFC.4760.xml">
<!-- ENTITY USECASES PUBLIC ''
      'https://xml2rfc.tools.ietf.org/public/rfc/bibxml3/reference.I-D.draft-geng-rtgwg-cfn-dyncast-ps-usecase-00.xml'-->
]>
<?rfc compact="yes"?>
<?rfc text-list-symbols="o*+-"?>
<?rfc subcompact="no"?>
<?rfc sortrefs="no"?>
<?rfc symrefs="yes"?>
<?rfc strict="yes"?>
<?rfc toc="yes"?>
<rfc category="info" docName="draft-du-cats-computing-modeling-description-03"
     ipr="trust200902" submissionType="IETF">
  <front>
    <title abbrev="Computing Information Description in CATS">Computing
    Information Description in Computing-Aware Traffic Steering</title>

    <author fullname="Zongpeng Du" initials="Z." surname="Du">
      <organization>China Mobile</organization>

      <address>
        <postal>
          <street>No.32 XuanWuMen West Street</street>

          <city>Beijing</city>

          <code>100053</code>

          <country>China</country>
        </postal>

        <email>duzongpeng@foxmail.com</email>
      </address>
    </author>

    <author fullname="Kehan Yao" initials="K." surname="Yao">
      <organization>China Mobile</organization>

      <address>
        <postal>
          <street>No.32 XuanWuMen West Street</street>

          <city>Beijing</city>

          <code>100053</code>

          <country>China</country>
        </postal>

        <email>yaokehan@chinamobile.com</email>
      </address>
    </author>

    <author fullname="Cheng Li" initials="C." surname="Li">
      <organization>Huawei Technologies</organization>

      <address>
        <email>c.l@huawei.com</email>
      </address>
    </author>

    <author fullname="Guangping Huang" initials="G." surname="Huang">
      <organization>ZTE</organization>

      <address>
        <email>huang.guangping@zte.com.cn</email>
      </address>
    </author>

    <author fullname="Zhihua Fu" initials="Z." surname="Fu">
      <organization>New H3C Technologies</organization>

      <address>
        <email>fuzhihua@h3c.com</email>
      </address>
    </author>

    <date day="6" month="July" year="2024"/>

    <workgroup>CATS</workgroup>

    <abstract>
      <t>This document describes the considerations and requirements for the
      computing information that needs to be notified to the network in
      Computing-Aware Traffic Steering (CATS).</t>
    </abstract>
  </front>

  <middle>
    <section anchor="introduction" title="Introduction">
      <t>Computing-Aware Traffic Steering (CATS) is proposed to support
      steering traffic among different service sites according to both the
      real-time network status and the computing resource status, as described
      in <xref target="I-D.ietf-cats-usecases-requirements"/>. It requires the
      network to be aware of computing resource information and to select a
      service instance based on a joint metric of computing and
      networking.</t>

      <t>In order to generate steering strategies, the modeling of computing
      capability is required. Unlike network capability, computing capability
      is more complex to measure. For instance, it is hard to predict how long
      a specific computing task will take on a given computing resource,
      because the processing time is influenced by the whole internal
      environment of the computing node. Nevertheless, some indicators have
      been used to describe the computing capability of hardware and computing
      services, as summarized in Appendix A.</t>

      <t>Based on the related work and the demands of CATS traffic steering,
      this document analyzes the types of computing resources and tasks, and
      provides the factors to be considered when modeling and evaluating
      computing resource capability. The detailed modeling of computing
      resources is out of the scope of this document.</t>
    </section>

    <section anchor="definition-of-terms" title="Definition of Terms">
      <t>This document makes use of the following terms:</t>

      <t><list hangIndent="2" style="hanging">
          <t hangText="Computing-Aware Traffic Steering (CATS): ">A traffic
          engineering approach <xref target="I-D.ietf-teas-rfc3272bis"/> that
          takes into account the dynamic nature of computing resources and
          network state to optimize service-specific traffic forwarding
          towards a given service contact instance. Various relevant metrics
          may be used to enforce such computing-aware traffic steering
          policies.</t>

          <t hangText="Service:">An offering that is made available by a
          provider by orchestrating a set of resources (networking, compute,
          storage, etc.).</t>

          <t hangText="Service instance:">An instance of running resources
          according to a given service logic.</t>

          <t hangText="Service identifier:">Used to uniquely identify a
          service and, at the same time, the whole set of service instances
          that each represent the same service behavior, no matter where those
          service instances are running.</t>

          <t hangText="Computing Capability:">The ability of nodes with
          computing resources to achieve a specific output through data
          processing, including but not limited to computing, communication,
          memory, and storage capabilities.</t>
        </list></t>
    </section>

    <section anchor="problemstatement"
             title="Problem Statement in Computing Resource Modeling">
      <t>Modeling provides a general method to evaluate the capabilities of
      computing resources. For CATS, a modeling-based representation of
      computing resources is the basis for subsequent traffic steering. In
      addition, different applications may optimize the general modeling
      methods to establish models that conform to their own characteristics,
      and so generate corresponding representation methods. Moreover, in order
      to use the computing resource status more efficiently and to protect
      privacy, the modeling used for further representation of resource
      information needs to support the necessary simplification and
      obfuscation. However, there are difficulties in modeling computing
      resources, as described below.</t>

      <section title="Heterogeneity of Computing Resources">
        <t>Heterogeneous computing resources have different characteristics.
        For example, CPUs usually deal with serial processing and are the most
        widely used. GPUs usually handle parallel computing, such as the
        rendering of display tasks, and are widely used for artificial
        intelligence and neural networks. FPGAs and ASICs are usually used to
        handle domain-specific computing tasks. These basic computing chips
        are assembled into different device types, for example, standard
        servers, AI servers, all-in-one machines, etc. These computing devices
        have multi-dimensional and hierarchical resources, such as cache,
        storage, communication, etc.; these dimensions affect each other and
        further affect the overall level of computing capability. Moreover,
        these computing resources may be further virtualized to provide
        on-demand cloud services, which makes the modeling even harder.</t>
      </section>

      <section title="Diversity of Service Types Resulting in Modeling Complexity">
        <t>Modeling computing resources also depends on service types. For
        example, distributed transaction systems may need computing
        capabilities measured in queries per second (QPS), AI inference
        systems may need computing capabilities measured in tokens per second
        (TPS), and video or object recognition may need computing capabilities
        measured in frames per second (FPS). Computing capabilities thus have
        different meanings for different applications. Moreover, different
        computing tasks require different computing precision, such as integer
        calculation, floating-point calculation, hash calculation, etc.</t>
      </section>
    </section>

    <section title="Usage of Computing Resource Modeling of CATS">
      <t>Computing resource modeling is used in two procedures: service
      deployment and traffic steering, of which the latter is more closely
      related to the CATS work. However, service deployment is a precondition
      for CATS, since it enables the assumption that the same service can be
      accessed in multiple places.</t>

      <t>In the service deployment procedure, a control or management device,
      either in the CATS domain or in the computing domain, can collect the
      computing information and make the service deployment decisions. As this
      procedure is not real-time, it can collect more detailed information
      about the service points. Many existing mechanisms, such as those used
      in data centers, can be reused here.</t>

      <t>In the traffic steering procedure, a limited set of metrics can be
      used to trigger a change of the policy for the service on the path, so
      that a quick response to changes in the computing status can be
      ensured.</t>

      <t>With a modeling mechanism based on a CATS-defined format, the
      decision point can collect more information to support both service
      deployment and traffic steering. In contrast, a mechanism based on an
      application-defined method is more suitable for CATS itself, in which
      only the necessary metrics need to be notified to the network, also
      called the CATS domain. The detailed requirements on metric definition
      can be found in Section 5.</t>

      <section title="Modeling Based on CATS-defined Format">
        <t>Figure 1 shows modeling based on a CATS-defined format. CATS
        provides the modeling format to the computing domain, which evaluates
        its computing resource capability and returns the result over a
        unified interface; the format defines the properties that should be
        notified to CATS. CATS can then select a specific service instance
        based on the computing resource and network resource status.</t>

        <t>In this approach, the CATS domain and the computing domain have a
        relatively loose boundary, based on the situation that the CATS
        service and the computing resources belong to the same provider. CATS
        can be aware of the computing resources to a greater or lesser extent,
        depending on the privacy-preserving demands of the computing domain.
        The exposed computing capability includes both static information,
        such as the computing node category or level, and dynamic capability
        information of the computing node.</t>

        <t>Based on the static information, visualization functions can be
        implemented on the management plane to obtain a global view of the
        computing resources, which can also help the deployment of
        applications by considering the overall distributed status of
        computing and network resources. Based on the dynamic information,
        CATS can steer category-based application traffic using the unified
        modeling format and interface.</t>

        <figure anchor="fig-CAN-defined-modeling"
                title="Modeling Based on CATS-defined Format">
          <artwork>
       CATS Domain            |            Computing Domain
                              |
+--------+  -----------------&gt;|-----------&gt;  +-------------+
|visuali-|       Modeling Format             |  Computing  |
|zation  |                    |              |             |
+--------+  &lt;-----------------|&lt;-----------  |  Resource   |
|Traffic |  Static level/category            |             |
|Steering|  of computing node |              |  Modeling   |
+--------+  &lt;-----------------|&lt;-----------  +-------------+
        Dynamic capability of computing node</artwork>
        </figure>
      </section>

      <section title="Modeling Based on Application-defined Method">
        <t>Figure 2 shows modeling based on an application-defined method. The
        computing resources of a specific application evaluate their computing
        capability by themselves, and then notify the result, which might be
        an index of the real-time computing level, to CATS. CATS then selects
        a specific service instance based on the computing index.</t>

        <t>In this approach, the CATS domain and the computing domain have a
        strict boundary, based on the situation that the CATS service and the
        computing resources belong to different providers. CATS is only aware
        of the computing resource index defined by the application and does
        not know the real status of the computing domain; the traffic steering
        right is potentially controlled by the application itself. If CATS is
        authorized by the application, it can also steer traffic based on the
        network status.</t>

        <figure anchor="fig-APP-defined-modeling"
                title="Modeling Based on Application-defined Method">
          <artwork>
       CATS Domain        |        Computing Domain
                          |
                          |            +-------------+
+--------+                |            |  Computing  |
|Traffic |                |            |             |
|        |  &lt;-------------|----------  |  Resource   |
|Steering|   dynamic index of          |             |
+--------+   computing capability      |  Modeling   |
             level        |            +-------------+
                          |</artwork>
        </figure>
      </section>
    </section>

    <section title="Computing Resource Modeling">
      <t>To support a computing service in CATS, we need to evaluate the
      comprehensive service performance of a service instance, which is
      influenced by the coordination of chips, storage, network, platform
      software, etc. That is to say, the service support capability is
      influenced by multi-dimensional factors. After the capability values are
      generated, they are notified to the decision point in the network to
      influence the traffic steering. However, the decision point in the
      network, for example the Ingress node, only cares about how to use the
      capability values for traffic steering; it does not care about how the
      capability values are generated.</t>

      <t>From the aspect of the services, an evaluating system is needed to
      generate one or more capability values. To achieve the best
      load-balancing result, different services or service types may evaluate
      the capability in different ways. However, this is out of the scope of
      this document.</t>

      <t>From the aspect of the decision point in the network, it only needs
      to understand how to use the values and to implement the related policy.
      This document mainly discusses this aspect.</t>


      <section title="Requirements of Using in CATS">
        <t>It is assumed in CATS that the same service can be provided in
        multiple places. Different service instances commonly have different
        kinds of computing resources, and different utilization rates of those
        resources.</t>

        <t>In CATS, the decision point, which should be a node in the network,
        needs to be aware of both the network status and the computing status,
        and accordingly choose a proper service point for the client.</t>

        <t>A general process to steer CATS traffic is described below. CATS
        packets carry, as their destination address, a service ID that is
        announced by the different service points.</t>

        <t>Firstly, the service points need to collect the specific computing
        information to be sent into the network, following a uniform format so
        that the decision point can understand it. In this step, only the
        necessary computing information should be considered, so as to avoid
        exposing too much information about the service points.</t>

        <t>Secondly, the service instances send the computing information into
        the network by some means, and update it periodically or on
        demand.</t>

        <t>Thirdly, the decision point receives the computing information, and
        makes a decision for the specific service related to the service ID.
        Hence, the route for the service ID on the Ingress is established or
        updated.</t>

        <t>Fourthly, the traffic for the service ID reaching the Ingress node
        is identified and steered according to the policy established in the
        third step.</t>
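        <t>The four steps above can be sketched as a toy decision point. All
        names (routes, on_advertisement, steer) and the single-score policy
        are illustrative assumptions, not part of any CATS protocol
        definition.</t>

```python
# Toy decision point: instances advertise metrics (steps 1-2), the decision
# point installs one route per service ID (step 3), and arriving traffic is
# steered by looking that route up (step 4).
routes = {}  # maps a service ID to the selected Egress

def on_advertisement(service_id, metrics, policy):
    """Recompute the route when new computing information arrives (step 3).

    metrics maps each candidate Egress to its advertised value; policy turns
    that value into a score where lower is better.
    """
    egress, _ = min(metrics.items(), key=lambda item: policy(item[1]))
    routes[service_id] = egress

def steer(service_id):
    """Return the installed next hop for traffic to this service ID (step 4)."""
    return routes[service_id]

# Two Egresses advertise a single delay-like value; the identity policy
# simply prefers the smaller one.
on_advertisement("svc-1", {"egress-A": 25.0, "egress-B": 20.0}, policy=lambda m: m)
```

        <t>With these advertisements, traffic for "svc-1" is steered towards
        egress-B until a later advertisement changes the installed route.</t>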

        <t>In fact, what to send, how to send it, and the optimization
        objective of the policy are all related to the design of the computing
        resource modeling in CATS, and they influence each other. Some
        requirements are listed below.</t>

        <t><list style="numbers">
            <t>The optimization objective of the policy at the decision point
            may vary. For example, it may be the lowest sum of the network
            delay and the computing delay, or an overall better load-balancing
            result, in which the service points that can support more clients
            are preferred.</t>

            <t>The update frequency of the computing metrics may vary. Some
            metrics are more dynamic, while others are relatively static.</t>

            <t>The notification methods for the computing metrics may vary.
            Depending on the update frequency, different ways of updating a
            metric may be chosen.</t>

            <t>A metric merging process should be supported when multiple
            service instances are behind the same Egress.</t>
          </list></t>
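        <t>The metric merging requirement in item 4 can be illustrated with a
        small sketch. The merging policy here, reporting the best (minimum)
        computing delay and the aggregate (summed) session capability, is only
        one plausible assumption; CATS does not define it.</t>

```python
def merge_metrics(instances):
    """Merge per-instance metrics into one advertisement for a shared Egress.

    Each entry is a (computing_delay_ms, max_sessions) pair. As an assumed
    policy, the merged delay is the best instance's delay, and the merged
    capability is the total capability behind the Egress.
    """
    delays = [delay for delay, _ in instances]
    capabilities = [sessions for _, sessions in instances]
    return min(delays), sum(capabilities)

# Two instances behind the same Egress: a fast small one and a slower big one.
merged = merge_metrics([(20.0, 100), (35.0, 10000)])
```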

        <t>The work in CATS mainly concerns service point selection and
        traffic steering at Layer 3, for which we do not need all the
        computing information of the service points. Hence, the work on
        computing resource modeling in CATS can start with simple cases. Some
        design principles can be considered.</t>

        <t><list style="numbers">
            <t>Simplicity: The computing metrics in CATS SHOULD be few and
            simple, so as to avoid exposing too much information about the
            service points.</t>

            <t>Scalability: The computing metrics in CATS SHOULD be evolvable
            to allow future extensions.</t>

            <t>Interoperability: The computing metrics in CATS SHOULD be
            vendor-independent and OS-independent.</t>

            <t>Stability: The computing metrics SHOULD NOT incur too much
            overhead in the protocol design, and SHOULD be stable enough to be
            used.</t>

            <t>Accuracy: The computing metrics SHOULD be effective for path
            selection decision making, and their accuracy SHOULD be
            guaranteed.</t>
          </list></t>

      </section>

      <section title="Considerations of Using in CATS">
        <t>Various metrics can be considered in CATS, and perhaps different
        services would need different metrics. However, we can start with
        simple cases.</t>

        <t>In CATS, a straightforward intent is to minimize the total delay
        across the network domain and the computing domain. Thus, a starting
        point for metric design in CATS is to consider only delay information.
        In this case, the decision point can collect the network delay and the
        computing delay, and decide on the optimal service point accordingly.
        The advantage of this method is that it is simple and easy to start
        with; moreover, the network metric and the computing metric share the
        same unit of measure. The network delay can be the latency between the
        Ingress node and the Egress node in the network. The computing delay
        can be generated by the server, with the meaning of &ldquo;the
        estimate of the duration of my processing of a request&rdquo;. It is
        usually an average value over service requests. The optimization
        objective of traffic steering in this scenario is the minimal total
        delay for the client.</t>
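        <t>Because both terms share the same unit, the delay-only joint metric
        is a simple sum, as the following sketch shows. The site names and
        values are hypothetical.</t>

```python
def total_delay(network_delay_ms, computing_delay_ms):
    """Joint metric for the delay-only case; both terms are in milliseconds."""
    return network_delay_ms + computing_delay_ms

# Ingress-to-Egress latency plus the server's advertised processing estimate.
candidates = {
    "site-1": total_delay(5.0, 30.0),   # nearby, but slow to process
    "site-2": total_delay(15.0, 10.0),  # farther away, but fast to process
}
# The optimization objective: the minimal total delay for the client.
best_site = min(candidates, key=candidates.get)
```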

        <t>Another metric that can be considered is the server capability. For
        example, one server may support 100 simultaneous sessions while
        another supports 10,000. The value can be generated by the server when
        the service instance is deployed. This metric can work alone: the
        decision point can perform load balancing according to the server
        capability, for example after pruning the service points with poor
        network latency metrics. The metric can also work together with the
        computing delay metric; in that case, the service points with poor
        total latency can be pruned before the load balancing.</t>
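        <t>The prune-then-balance process described above can be sketched as
        follows, assuming capability-weighted random selection among the
        remaining points; the threshold, weights, and names are illustrative
        only.</t>

```python
import random

def prune_then_balance(points, latency_threshold_ms, rng):
    """Prune service points with poor network latency, then load-balance
    across the rest in proportion to their advertised session capability."""
    usable = [p for p in points if p["latency_ms"] <= latency_threshold_ms]
    weights = [p["capability"] for p in usable]
    return rng.choices(usable, weights=weights, k=1)[0]["name"]

points = [
    {"name": "small", "latency_ms": 5.0,  "capability": 100},
    {"name": "large", "latency_ms": 8.0,  "capability": 10000},
    {"name": "far",   "latency_ms": 80.0, "capability": 10000},
]
# "far" is pruned for latency; "large" then attracts most new sessions.
pick = prune_then_balance(points, latency_threshold_ms=20.0, rng=random.Random(0))
```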

        <t>In the future, other metrics can also be considered, which may be
        more dynamic. Besides, for other optimization objectives, further
        metrics can be considered, even metrics about energy consumption.
        However, in these cases, the decision point needs to consider more
        dimensions of metrics. A suggestion is to first make sure that the
        service point is available, meaning that it can still accept more
        sessions, and then to select an optimal target service point according
        to the optimization objective.</t>
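        <t>The suggested two-stage selection, availability first and then the
        optimization objective, might look like the following sketch. The
        energy objective and all values are hypothetical.</t>

```python
def select_service_point(points, objective):
    """Keep only available points (those still accepting sessions), then
    pick the one that best satisfies the objective (lower is better)."""
    available = [p for p in points if p["free_sessions"] > 0]
    if not available:
        return None  # no service point can accept new clients
    return min(available, key=objective)["name"]

points = [
    {"name": "full",   "free_sessions": 0,  "energy_cost": 1.0},
    {"name": "green",  "free_sessions": 40, "energy_cost": 2.0},
    {"name": "hungry", "free_sessions": 90, "energy_cost": 9.0},
]
# "full" is excluded despite its low energy cost; "green" wins among the rest.
choice = select_service_point(points, objective=lambda p: p["energy_cost"])
```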

      </section>

      <section title="Default Policy Discussion On Decision Point">
        <t>To enable basic cooperation in CATS, one or a set of default
        computing metrics needs to be notified into the network. All the CATS
        Ingresses need to understand the default metrics and trigger the same
        or similar operations, i.e., the default policies, inside the router.
        The detailed procedures inside the Ingresses are vendor-specific.</t>

        <t>By comparison, other metrics would be optional, although they may
        achieve a better or more preferred load-balancing result than the
        default ones. If an Ingress receives the additional metrics and can
        understand them, it can use these optional metrics to update the
        default forwarding policy for the routes of the anycast IP.</t>

        <t>There are two kinds of forwarding treatments on the Ingress.
        Although they are implementations inside the equipment, we give a
        general description of them here, because they are related to the
        default metric selection.</t>

        <t>The first one is that the Ingress deploys several routes for the
        anycast IP, but only one of them is active; the others are for backup
        and are set to inactive. The second one is that the Ingress can have
        multiple active routes for the anycast IP, each with a dedicated
        weight, so that load balancing can be done within the Ingress.</t>

        <t>The advantage of the first one is that it selects the best service
        instance for the client according to the network and computing status.
        However, its disadvantage is that the Ingress forwards all new clients
        to a single service point until the policy is updated, which may cause
        that service point to become busy. The second one may achieve a better
        load-balancing result.</t>

        <t>An initial proposal for the default metrics of the default policies
        is to always send the two metrics mentioned above, i.e., the computing
        delay and the server capability. At least one of them should be valid.
        If the bits of the computing delay or the server capability are set to
        all zeros, that metric is considered invalid; any other value is
        considered valid. Meanwhile, if the bits of the computing delay or the
        server capability are set to all ones, the service point is
        temporarily busy, and the Ingress should not send new clients to that
        service point. Alternatively, another simple metric could be added to
        indicate the busy status; however, such a metric would be more dynamic
        than the former two.</t>
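        <t>The all-zero/all-one convention can be sketched as below. The
        16-bit field width is an assumption made for illustration; the draft
        does not fix an encoding.</t>

```python
FIELD_BITS = 16                   # assumed field width, for illustration only
INVALID = 0                       # all bits zero: metric not provided
BUSY = (1 << FIELD_BITS) - 1      # all bits one: service point temporarily busy

def interpret(raw):
    """Classify a raw metric field according to the proposed convention."""
    if raw == INVALID:
        return "invalid"
    if raw == BUSY:
        return "busy"
    return "valid"

def accepts_new_clients(computing_delay_raw, capability_raw):
    """At least one metric must be valid, and neither may signal busy."""
    states = {interpret(computing_delay_raw), interpret(capability_raw)}
    return "busy" not in states and "valid" in states
```

        <t>For example, a service point advertising an all-one computing delay
        would receive no new clients from the Ingress, while one advertising a
        zero (invalid) delay but a valid capability would still be usable.</t>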

      </section>
    </section>

    <section title="Network Resource Modeling">
      <t>The modeling of network resources is optional; it depends on how the
      service instance and the network path are selected. For applications
      that care about both network and computing resources, the CATS service
      provider also needs to consider the modeling of network and computing
      together.</t>

      <t>The network structure can be represented as a graph, where the nodes
      represent the network devices and the edges represent the network paths.
      The modeling should evaluate single nodes, network links, and end-to-end
      (E2E) performance.</t>

      <section title="Consideration of Using in CATS">
        <t>When both the computing status and the network status are
        considered at the same time, a comprehensive modeling of computing and
        network might be used, for example, measuring all the resources in a
        unified dimension such as latency or reliability.</t>

        <t>If there is no strict demand to consider them at the same time, for
        instance, when the computing status is considered first and then the
        network status, CATS can select the service instance first and then
        mark an identifier for the network path selection performed by the
        network itself. In this situation, network modeling is less needed;
        existing mechanisms on the control plane or the management plane can
        be used to obtain the network metrics.</t>
      </section>
    </section>

    <section title="Application Demands Modeling">

      <t>An application always has its own demands for network and computing
      resources; for instance, HD video always requires high bandwidth, and PC
      games always require a better GPU and more memory. The application is
      identified in the network by its Service Identifier, which can indicate
      its demands to a certain degree.</t>

      <section title="Consideration of Using in CATS">
        <t>The modeling of the application demand is optional; it depends on
        whether the application can tell its demands to the network, and what
        it can tell. Once CATS knows the application's demand, there should be
        a mapping between the application demand and the modeling of the
        computing and/or network resources.</t>
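        <t>Such a mapping might be realized as a simple lookup from the
        Service Identifier to a demand profile, and from the profile to the
        metric the CATS domain should track. The identifiers and profiles
        below are purely hypothetical.</t>

```python
# Hypothetical demand profiles keyed by Service Identifier.
demand_profiles = {
    "svc:hd-video":   {"bandwidth_mbps": 50, "metric": "network_delay"},
    "svc:cloud-game": {"gpu": True, "memory_gb": 16, "metric": "total_delay"},
}

def metric_for(service_id):
    """Map an application's demand profile to the metric CATS should use;
    fall back to the joint delay metric when the demand is unknown."""
    profile = demand_profiles.get(service_id)
    return profile["metric"] if profile else "total_delay"
```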
      </section>
    </section>

    <section anchor="security-considerations" title="Security Considerations">
      <t>TBD.</t>
    </section>

    <section anchor="iana-considerations" title="IANA Considerations">
      <t>TBD.</t>
    </section>

    <section anchor="acknowledgements" title="Acknowledgements">
      <t>The authors would like to thank Adrian Farrel, Joel Halpern, Tony Li,
      Thomas Fossati, Dirk Trossen, and Linda Dunbar for their valuable
      suggestions on this document.</t>
    </section>

    <section title="Contributors">
      <t>The following people have substantially contributed to this
      document:</t>

      <t><figure>
          <artwork>	Yuexia Fu
	China Mobile
	fuyuexia@chinamobile.com

	Jing Wang
	China Mobile
	wangjingjc@chinamobile.com

	Peng Liu
	China Mobile
	liupengyjy@chinamobile.com

	Wenjing Li 
	Beijing University of Posts and Telecommunications
	wjli@bupt.edu.cn

	Lanlan Rui
	Beijing University of Posts and Telecommunications
	llrui@bupt.edu.cn
</artwork>
        </figure></t>
    </section>
  </middle>

  <back>
    <references title="Informative References">
      <?rfc include="reference.I-D.ietf-cats-usecases-requirements"?>

      <?rfc include="reference.I-D.ietf-teas-rfc3272bis"?>

      <reference anchor="One-api" target="http://www.oneapi.net.cn/">
        <front>
          <title>oneAPI</title>

          <author>
            <organization>oneAPI</organization>
          </author>

          <date year="2020"/>
        </front>
      </reference>

      <reference anchor="Amazon"
                 target="https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html#available-metrics">
        <front>
          <title>Target Tracking Scaling Policies for Amazon EC2 Auto
          Scaling</title>

          <author>
            <organization>Amazon Web Services</organization>
          </author>

          <date year="2022"/>
        </front>
      </reference>

      <reference anchor="Aliyun"
                 target="https://help.aliyun.com/?spm=a2c4g.11186623.6.538.34063af89EIb5v">
        <front>
          <title>Alibaba Cloud Documentation</title>

          <author>
            <organization>Alibaba Cloud</organization>
          </author>

          <date year="2022"/>
        </front>
      </reference>

      <reference anchor="Tencent-cloud"
                 target="https://buy.cloud.tencent.com/pricing">
        <front>
          <title>Tencent Cloud Pricing</title>

          <author>
            <organization>Tencent Cloud</organization>
          </author>

          <date year="2022"/>
        </front>
      </reference>

      <reference anchor="cloud-network-edge">
        <front>
          <title>A new edge computing scheme based on cloud, network and edge
          fusion</title>

          <author fullname="cloud-network-edge" surname="">
            <organization>Telecommunication Science</organization>
          </author>

          <date year="2020"/>
        </front>
      </reference>

      <reference anchor="heterogeneous-multicore-architectures">
        <front>
          <title>Towards energy-efficient heterogeneous multicore
          architectures for edge computing</title>

          <author fullname="IEEE access" surname="">
            <organization>IEEE access</organization>
          </author>

          <date year="2019"/>
        </front>
      </reference>

      <reference anchor="ARM-based">
        <front>
          <title>A heterogeneous CPU-GPU cluster scheduling model based on
          ARM</title>

          <author fullname="Software Guide" surname="">
            <organization>Software Guide</organization>
          </author>

          <date year="2017"/>
        </front>
      </reference>
    </references>

    <section title="Related Works on Computing Capability Modeling">
      <t>Some related work has been proposed to measure and evaluate computing
      capability, which could be the basis of computing capability
      modeling.</t>

      <t><xref target="cloud-network-edge"/> proposed allocating and adjusting
      the corresponding resources for users according to their demands for
      computing, storage, and network resources.</t>

      <t><xref target="heterogeneous-multicore-architectures"/> proposed
      designing heterogeneous multi-core architectures for different
      customizations, such as CPU microprocessors with ultra-low power
      consumption and high code density, a low-power microprocessor with an
      FPU, and a high-performance application processor with FPU and MMU
      support based on a completely out-of-order multi-issue
      architecture.</t>

      <t><xref target="ARM-based"/> proposed a cluster scheduling model
      combined with GPU virtualization and designed a hierarchical cluster
      resource management framework, which allows a heterogeneous CPU-GPU
      cluster to be used effectively.</t>

      <t>Hardware and cloud service providers have also disclosed parameter
      indicators for their computing services:</t>

      <t><xref target="One-api"/> provides a collection of programming
      languages and cross-architecture libraries, to be compatible with
      heterogeneous computing resources, including CPUs, GPUs, FPGAs, and
      others. <xref target="Amazon"/> uses computing resource parameters when
      evaluating performance, including the average CPU utilization, the
      average number of bytes received and sent, and the average Application
      Load Balancer request count per target. Alibaba Cloud <xref
      target="Aliyun"/> gives indicators including vCPU, memory, local
      storage, network base and burst bandwidth capacity, network packet
      sending and receiving capability, etc., when providing cloud server
      services. <xref target="Tencent-cloud"/> uses vCPU, memory (GB), network
      packet sending and receiving rate (PPS), number of queues, intranet
      bandwidth capacity (Gbps), clock frequency, etc.</t>

    </section>
  </back>
</rfc>
