RFC 9845 | Management for Green Networking | August 2025
Clemm, et al. | Informational
Reducing humankind's environmental footprint and making technology more environmentally sustainable are among the biggest challenges of our age. Networks play an important part in this challenge. On one hand, they enable applications that help to reduce this footprint. On the other hand, they contribute to this footprint themselves in no insignificant way. Therefore, methods to make networking technology itself "greener" and to manage and operate networks in ways that reduce their environmental footprint without impacting their utility need to be explored. This document outlines a corresponding set of opportunities, along with associated research challenges, for networking technology in general and management technology in particular to become "greener", i.e., more sustainable, with reduced greenhouse gas emissions and less negative impact on the environment.¶
This document is a product of the Network Management Research Group (NMRG) of the Internet Research Task Force (IRTF). This document reflects the consensus of the research group. It is not a candidate for any level of Internet Standard and is published for informational purposes.¶
This document is not an Internet Standards Track specification; it is published for informational purposes.¶
This document is a product of the Internet Research Task Force (IRTF). The IRTF publishes the results of Internet-related research and development activities. These results might not be suitable for deployment. This RFC represents the consensus of the Network Management Research Group of the Internet Research Task Force (IRTF). Documents approved for publication by the IRSG are not candidates for any level of Internet Standard; see Section 2 of RFC 7841.¶
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc9845.¶
Copyright (c) 2025 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.¶
Climate change and the need to curb greenhouse gas (GHG) emissions have been recognized by the United Nations and by most governments as one of the big challenges of our time. As a result, curbing those emissions is becoming increasingly important for society and for many industries. The networking industry is no exception.¶
The science behind greenhouse gas emissions and their relationship with climate change is complex. However, there is overwhelming scientific consensus pointing toward a clear correlation between climate change and a rising amount of greenhouse gases in the atmosphere. One greenhouse gas of particular concern, but by no means the only one, is carbon dioxide (CO2). Carbon dioxide is emitted in the process of burning fuels to generate energy that is used, for example, to power electrical devices such as networking equipment. Notable here is the use of fossil fuels (such as oil, which releases CO2 that has long been removed from the earth's atmosphere), as opposed to the use of renewable or sustainable fuels that do not "add" to the amount of CO2 in the atmosphere. There are additional gases associated with electricity generation, in particular methane (CH4) and nitrous oxide (N2O). Although they exist in smaller quantities, they have an even higher Global Warming Potential (GWP).¶
Greenhouse gas emissions are in turn correlated with the need to power technology, including networks. Reducing those emissions can be achieved by reducing the amount of fossil fuels needed to generate the energy that is needed to power those networks. This can be achieved by improving the energy mix to include increasing amounts of low-carbon and/or renewable (and hence sustainable) energy sources, such as wind or solar. It can also be achieved by increasing energy savings and improving energy efficiency so that the same outcomes are achieved while consuming less energy in the first place.¶
The amount of greenhouse gases that an activity adds to the atmosphere, such as CO2 that is emitted in burning fossil fuels to generate the required energy, is also referred to as the "greenhouse footprint" or the "carbon footprint" (accounting for greenhouse gases other than CO2 in terms of CO2 equivalents). Reducing this footprint to net zero is hence a major sustainability goal. However, sustainability encompasses other factors beyond carbon, such as the sustainable use of other natural resources, the preservation of natural habitats and biodiversity, and the avoidance of any form of pollution.¶
In the context of this document, we refer to networking technology that helps to improve networking's own sustainability as "green". Green, in that sense, includes technology that helps to lower networking's greenhouse gas emissions, including the carbon footprint, which in turn includes technology that helps increase efficiency and realize energy savings as well as technology that facilitates managing networks toward a stronger use of renewables.¶
Arguably, networks can already be considered a "green" technology in that networks enable many applications that allow users and whole industries to save energy and thus become environmentally more sustainable in a significant way. For example, they allow users (at least to an extent) to substitute teleconferencing for travel. They enable many employees to work from home and telecommute, thus reducing the need for actual commuting. IoT applications that facilitate automated monitoring and control from remote sites help make agriculture more sustainable by minimizing the usage of water, fertilizer, and land area. Networked smart buildings allow for greater energy optimization and sparser use of lighting and HVAC (heating, ventilation, air conditioning) than their non-networked, not-so-smart counterparts. That said, calculating precise benefits in terms of net sustainability contributions and savings is complex, as a holistic picture involves many effects, including substitution effects (perhaps saving on emissions caused by travel but incurring new costs associated with home office use) as well as behavioral changes (perhaps a higher number of meetings than if travel were involved).¶
The IETF has recently initiated a reflection on the energy cost of hosting meetings three times a year (see [IETF-Net0]). It conducted a study of the carbon emissions of a typical meeting and found that 99% of the emissions were due to air travel. In the same vein, [Framework] compared an in-person meeting with a virtual one and found that the virtual meeting reduced energy consumption by 66%. These findings confirm that networking technology can reduce emissions when acting as a virtual substitute for physical events.¶
That said, networks themselves consume significant amounts of energy. Therefore, the networking industry has an important role to play in meeting sustainability goals, not just by enabling others to reduce their reliance on energy but also by reducing its own. Future networking advances will increasingly need to focus on becoming more energy efficient and reducing the carbon footprint, for reasons of both corporate responsibility and economics. This shift has already begun, and sustainability is becoming an important concern for network providers. In some cases, such as in the context of networked data centers, the ability to procure enough energy becomes a bottleneck prohibiting further growth, and greater sustainability thus becomes a business necessity.¶
For example, in its annual report, Telefónica reports that in 2021, its network's energy consumption per petabyte (PB) of data amounted to 54 megawatt-hours (MWh) [Telefonica2021]. This rate has been dramatically decreasing (by a factor of seven over six years), although gains in efficiency are being offset by simultaneous growth in data volume. The same report states that an important corporate goal is continuing on that trajectory and aggressively reducing overall carbon emissions further.¶
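To give a sense of the scale that such figures imply, the following non-normative Python sketch converts the reported energy intensity of 54 MWh per petabyte into other units. The arithmetic is purely illustrative (it assumes decimal units, i.e., 1 PB = 10^15 bytes) and is not taken from [Telefonica2021] itself.¶

   # Illustrative conversion of the reported network energy intensity
   # (54 MWh per petabyte, [Telefonica2021]) into more familiar units.
   # The arithmetic is ours and serves only to give a sense of scale.

   MWH_PER_PB = 54

   joules_per_pb = MWH_PER_PB * 1e6 * 3600      # 1 MWh = 3.6e9 J
   bits_per_pb = 1e15 * 8                       # 1 PB = 1e15 bytes

   kwh_per_gb = MWH_PER_PB * 1e3 / 1e6          # MWh/PB -> kWh/GB
   joules_per_bit = joules_per_pb / bits_per_pb

   print(f"{kwh_per_gb:.3f} kWh per GB")                   # 0.054 kWh/GB
   print(f"{joules_per_bit*1e6:.1f} microjoules per bit")  # ~24.3 uJ/bit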
An often-considered gain in networking sustainability can be made with regards to improving the efficiency with which networks utilize power during their use phase, reducing the amount of energy that is required to provide communication services. However, for a holistic approach, other aspects need to be considered as well.¶
The environmental footprint is not determined by energy consumption alone. The sustainability of power sources needs to be considered as well. A deployment that includes devices that are less energy efficient but powered by a sustainable energy source can arguably be considered "greener" than a deployment that includes highly efficient devices that are powered by diesel generators. In fact, in the same Telefónica report mentioned earlier, extensive reliance on renewable energy sources is emphasized.¶
Similarly, deployments can take other environmental factors into account that affect the carbon footprint. For example, deployments where the need for cooling is reduced or where excess heat generated by equipment can be put to productive use will be considered greener than deployments where this is not the case. Examples include deployments in cooler natural surroundings (e.g., in colder climates) where that is an option. Likewise, manufacturing and recycling networking equipment are also part of the sustainability equation, as the production itself consumes energy and results in a carbon cost embedded as part of the device itself. Extending the lifetime of equipment may in many cases be preferable to replacing it earlier with equipment that is slightly more energy efficient, since early replacement requires the embedded carbon cost to be amortized over a much shorter period of time.¶
Management has an outsized role to play in approaching those problems. To reduce the amount of energy used, network providers need to maximize ways in which they use scarce resources and eliminate the use of unneeded resources. They need to optimize the way in which networks are deployed, which resources are placed where, and how equipment lifecycles and upgrades are being managed -- all of which constitute classic operational problems. As best practices, methods, and algorithms are developed, they need to be automated to the greatest extent possible, migrated over time into the network, and performed on increasingly short timescales, transcending management and control planes.¶
From a technical perspective, multiple vectors along which networks can be made "greener" should be considered:¶
Equipment level:¶
Perhaps the most promising vector for improving networking sustainability concerns the network equipment itself. At the most fundamental level, networks (even softwarized ones) involve appliances, i.e., equipment that relies on electrical power to perform its function. There are two distinct layers with different opportunities for improvement:¶
Hardware: Reducing embedded carbon during material extraction and manufacturing; improving energy efficiency and reducing energy consumption during operations; and supporting reuse, repurposing, and recycling at the end of life.¶
Software: Improving software energy efficiency, maximizing utilization of processing devices, and allowing for software to interact with hardware to improve sustainability.¶
Beyond making network appliances merely more energy efficient, there are other important ways in which equipment can help networks become greener. This includes aspects such as supporting port power-saving modes or down-speeding links to reduce power consumption for resources that are not fully utilized. Fully tapping into the potential of such features requires accompanying management functionality, for example, to determine when it is "safe" to down-speed a link or to enter a power-saving mode, and to maximize the conditions under which such actions are appropriate.¶
Most importantly, from a management perspective, improving sustainability at the equipment level involves providing management instrumentation that allows for precise monitoring and management of power usage, and doing so at different levels of granularity, for example, accounting separately for the contributions of CPU, memory, and different ports. This enables (for example) controller applications to optimize energy usage across the network and to leverage control loops to assess the effectiveness (e.g., in terms of reducing power use) of the measures that are taken.¶
As a side note, the terms "device" and "equipment", as used in the context of this document, are used to refer to networking equipment. We are not taking into consideration end-user devices and endpoints such as mobile phones or computing equipment.¶
Protocol level:¶
Energy efficiency and greenness are rarely considered when designing network protocols. This suggests that there may be plenty of untapped potential. Some aspects involve designing protocols in ways that reduce the need for redundant or wasteful transmission of data, allowing not only for better network utilization but also for greater goodput per unit of energy being consumed. Techniques might include approaches that reduce the "header tax" incurred by payloads as well as methods resulting in the reduction of wasteful retransmissions. Similarly, there may be cases where the chattiness of protocols prevents equipment from going into sleep mode. Designing protocols that reduce chattiness in such scenarios, for example, by reducing dependence on periodic updates or heartbeats, may result in greener outcomes. Likewise, aspects such as restructuring addresses in ways that minimize the size of lookup tables, associated memory sizes, and hence energy use can play a role as well.¶
Another role of protocols concerns the enabling of management functionality to improve energy efficiency at the network level, such as discovery protocols that allow for quick adaptation to network components being taken dynamically into and out of service depending on network conditions, as well as protocols that can assist with functions such as the collection of energy telemetry data from the network.¶
Network level:¶
Perhaps the greatest opportunities to realize power savings exist at the level of the network as a whole. Many of these opportunities are directly related to management functionality. For example, optimizing energy efficiency may involve directing traffic in such a way that it allows the isolation of equipment that might not be needed at certain moments so that it can be powered down or brought into power-saving mode. By the same token, traffic should be directed in a way that avoids bringing additional equipment online or out of power-saving mode when alternative traffic paths are available for which the incremental energy cost would amount to zero. Likewise, some networking devices may be rated less "green" and more power-intensive than others or may be powered by less-sustainable energy sources. Their use might be avoided except during periods of peak capacity demands. Generally, incremental carbon emissions can be viewed as a cost metric that networks should strive to minimize and consider as part of routing and network path optimization.¶
Architecture level:¶
The current network architecture supports a wide range of applications but does not consider energy efficiency as one of its design parameters. One can argue that the most energy efficient shift of the last two decades has been the deployment of Content Delivery Network overlays: while these were set up to reduce latency and minimize bandwidth consumption, from a network perspective, retrieving the content from a local cache is also much greener. What other architectural shifts can produce energy consumption reduction?¶
In this document, we will explore each of those vectors in further detail and attempt to articulate specific challenges that could make a difference when addressed. As our starting point, we borrow some material from "Challenges and Opportunities in Green Networking" [GreenNet22]. For this document, this material has been both expanded (for example, in terms of some of the opportunities) and pruned (for example, in terms of background on prior scholarly work).¶
This document is a product of the Network Management Research Group (NMRG) of the Internet Research Task Force (IRTF). This document reflects the consensus of the research group and was discussed and presented multiple times, each time receiving positive feedback and no objections. It is not a candidate for any level of Internet Standard and is published for informational purposes.¶
The following acronyms are used in this document:¶
The carbon footprint and, with it, greenhouse gas emissions are determined by several factors. A main factor is network energy consumption, as the energy consumed can be considered a proxy for the burning of fuels required for corresponding power generation. Network energy consumption by itself does not tell the whole story, as it does not take the sustainability of energy sources and the energy mix into account. Likewise, there are other factors such as the carbon cost expended in the manufacturing of networking hardware. Nonetheless, network energy consumption is an excellent predictor of a carbon footprint and its reduction, which is key to sustainable solutions. Hence, exploring possibilities to improve energy efficiency is a key factor for greener, more sustainable, less carbon-intensive networks.¶
It is important to understand some of the characteristics of power consumption by networks and which aspects contribute the most. This helps to identify where the greatest potential is, not just for power savings but also for sustainability improvements.¶
Power is ultimately drawn by devices. Devices are not monoliths but are composed of multiple components. The power consumption of a device can be divided into the consumption of the core device -- the backplane and CPU, if you will -- as well as additional consumption incurred per port and line card. In addition, GPUs and TPUs may be used in the network and may have different power consumption profiles. Furthermore, it is important to understand the difference between power consumption when a resource is idling versus when it is under load. This helps to understand the incremental cost of additional transmission versus the initial cost of transmission.¶
In typical networking devices, only roughly half of the energy consumption is associated with the data plane [Bolla2011energy]. An idle base system typically consumes more than half of the energy that the same system would consume when running at full load [Chabarek08] [Cervero15]. Generally, the cost of sending the first bit is very high, as it requires powering up a device, port, etc. The incremental energy cost of transmission of additional bits (beyond the first) is many orders of magnitude lower. Likewise, the incremental cost of the incremental CPU and memory needed to process additional packets becomes fairly negligible.¶
This means that a device's energy consumption does not increase linearly with the volume of forwarded traffic. Instead, it resembles a step function in which energy consumption stays roughly the same up to a certain volume of traffic, followed by a sudden jump when additional resources need to be procured to support a higher volume of traffic.¶
By the same token, it is generally more energy efficient to transmit a large volume of data in one burst (and subsequently turn off or down-speed the interface when idling) than to continuously transmit at a lower rate. In that sense, it can be the duration of the transmission that dominates the energy consumption -- not the actual data rate.¶
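The following non-normative Python sketch illustrates both observations, using hypothetical power figures (150 W active, 100 W idle, 10 W in a low-power state) for a device that must deliver 1 TB within an hour: trickling the data keeps the device active for the whole window, while bursting at 10 Gb/s and then sleeping consumes only a fraction of the energy, assuming the device can actually enter the low-power state while idle.¶

   # Minimal sketch comparing two ways of moving the same volume of data
   # through a device whose power draw is dominated by being "on".
   # The power figures are hypothetical and chosen only for illustration.

   P_ACTIVE = 150.0      # watts while transmitting (assumed)
   P_IDLE   = 100.0      # watts while on but idle (assumed, >50% of active)
   P_SLEEP  = 10.0       # watts in a low-power state (assumed)

   VOLUME_BITS = 8e12    # 1 TB of data to transfer
   WINDOW_S    = 3600.0  # one-hour window in which it must be delivered

   def energy_joules(rate_bps):
       """Energy over the window: active while sending, sleeping otherwise."""
       t_active = VOLUME_BITS / rate_bps
       t_sleep = WINDOW_S - t_active
       return P_ACTIVE * t_active + P_SLEEP * t_sleep

   trickle = energy_joules(VOLUME_BITS / WINDOW_S)   # spread over the hour
   burst   = energy_joules(10e9)                     # 10 Gb/s burst, then sleep

   # With these assumptions: trickle 0.15 kWh vs. burst 0.04 kWh
   print(f"trickle: {trickle/3.6e6:.2f} kWh, burst: {burst/3.6e6:.2f} kWh")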
The implications on green networking from an energy-savings standpoint are significant. Of utmost importance are schemes that allow for "peak shaving": networks are typically dimensioned for periods of peak demand and usage, yet any excess capacity during periods of non-peak usage does not result in corresponding energy savings. Peak shaving techniques that reduce peak traffic spikes and waste during non-peak periods may result in outsize sustainability gains. Peak shaving could be accomplished by techniques such as spreading spikes out over geographies (e.g., routing traffic across more costly but less utilized routes) or over time (e.g., postponing and buffering non-urgent traffic).¶
Likewise, large gains can be made whenever network resources can effectively be taken offline for at least some of the time, managing networks in a way that enables resources to be removed from service so they can be powered down (or put into a more energy-saving state, such as when down-speeding ports) while not needed. Of course, any such methods need to take into account the overhead of taking resources offline and bringing them back online. This typically takes some amount of time, requiring accurate predictive capabilities to avoid situations in which network resources are not available at times when they would be needed. In addition, there is additional overhead, such as synchronization of state, to be accounted for.¶
At the same time, any non-idle resources should be utilized to the greatest extent possible, as the incremental energy cost is negligible. Of course, this needs to occur while still taking other operational goals into consideration, such as protection against failures (allowing for readily available redundancy and spare capacity in case of failure) and load balancing (for increased operational robustness). As data transmission needs tend to fluctuate wildly and occur in bursts, any optimization schemes need to be highly adaptable and allow control loops at very fast time scales.¶
Similarly, for applications where this is possible, it may be desirable to replace continuous traffic at low data rates with traffic that is sent in bursts at high data rates in order to potentially maximize the time during which resources can be idled.¶
As a result, emphasis needs to be given to technology that, for example, allows (at the device level) very efficient and rapid discovery, monitoring, and control of networking resources so that they can be dynamically taken offline or brought back into service without (at the network level) requiring an extensive convergence of state across the network or a recalculation of routes and other optimization problems, and that supports (at the network equipment level) rapid power-cycle and initialization schemes. There may be some lessons that can be applied here from IoT, which has long had to contend with power-constrained end devices that need to spend much of their time in power-saving states to conserve battery.¶
We are categorizing challenges and opportunities to improve sustainability at the network equipment level along the following lines:¶
Hardware and manufacturing: Related opportunities are arguably among the most obvious and perhaps "largest". However, solutions here lie largely outside the scope of networking researchers.¶
Visibility and instrumentation: Instrumenting equipment to provide visibility into how they consume energy is key to management solutions and control loops to facilitate optimization schemes.¶
Perhaps the most obvious opportunities to make networking technology more energy efficient exist at the equipment level. After all, networking involves physical equipment to receive and transmit data. Making such equipment more power efficient, having it dissipate less heat to consume less energy and reduce the need for cooling, making it eco-friendly to deploy, sourcing sustainable materials, and facilitating the recycling of equipment at the end of its lifecycle -- all contribute to making networks greener. Reducing the energy usage of transmission technology, from wireless (antennas) to optical (lasers), is a strategy that is unique to networking.¶
One critical aspect of the energy cost of networking is the cost to manufacture and deploy the networking equipment. In addition, even the development process itself comes with its own carbon footprint. This is outside of the scope of this document: we only consider the energy cost of running the network during the operational part of the equipment's lifecycle. However, a holistic approach would include the embedded energy that is included in the networking equipment. As part of this, aspects such as the impact of deploying new protocols on the rate of obsolescence of existing equipment should be considered. For instance, incremental approaches that do not require replacing equipment right away -- or even that extend the lifetime of deployed equipment -- would have a lower energy footprint. This is one important benefit also of technologies such as Software-Defined Networking and network function virtualization, as they may allow support for new networking features through software updates without requiring hardware replacements.¶
[Emergy] describes an attempt to compute not only the energy of running a network but also the energy embedded into manufacturing the equipment. This is denoted by "emergy", a portmanteau for embedded energy. Likewise, [Junkyard] describes an approach to recycling equipment and a proof of concept using old mobile phones recycled into a "junkyard" data center.¶
One trade-off to consider at this level is the selection of a platform that can be hardware-optimized for energy efficiency versus a platform that is versatile and can run multiple functions. For instance, a switch could run on an efficient hardware platform or run as a software module (container) over some multipurpose platform. While the first option is operationally more energy efficient, it may have a higher embedded energy resulting from a smaller production scale and a less efficient production process, as well as a shorter shelf life once new functions need to be added to the platform.¶
Beyond "first-order" opportunities, as outlined in Section 4.1, network equipment just as importantly plays a role in enabling and supporting green networking at other levels. Of prime importance is the equipment's ability to provide visibility to the management and control planes into its current energy usage. Such visibility enables control loops for energy optimization schemes, allowing applications to obtain feedback regarding the energy implications of their actions, from setting up paths across the network that require the least incremental amount of energy to quantifying metrics related to energy cost to optimize forwarding decisions. Absent an actual measurement of energy usage (and until such measurement is put in place), the network equipment could advertise some proxy of its power consumption. For example, it could use a labeling scheme of silver, gold, or platinum similar to the LEED sustainability metric in building codes, or the Energy Star label in home appliances, or a description of the type of the device as using CPU vs. GPU vs. TPU processors with different power profiles.¶
One prerequisite to such schemes is to have proper instrumentation in place that allows for monitoring current power consumption at the level of networking devices as a whole, line cards, and individual ports. Such instrumentation should also allow for assessing the energy efficiency and carbon footprint of the device as a whole. In addition, it will be desirable to relate this power consumption to data rates and to current traffic, for example, to indicate current energy consumption relative to interface speeds, as well as for incremental energy consumption that is expected for incremental traffic (to aid control schemes that aim to "shave" power off current services or to minimize the incremental use of power for additional traffic). This is an area where the current state of the art is sorely lacking, and standardization lags behind. For example, as of today, standardized YANG data models [RFC7950] for network energy consumption that can be used in conjunction with management and control protocols have yet to be defined.¶
To remedy this situation, efforts to define sets of green networking metrics [GREEN_METRICS] as well as YANG data models that include objects that provide visibility into power measurements (e.g., [POWER_YANG]) were underway in 2024. Agreed sets of metrics and corresponding data models will provide the basis for further steps, beginning with their implementation as part of management and control instrumentation.¶
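Until such metrics and models are agreed upon, the following hypothetical Python sketch merely illustrates the kind of granularity (chassis, line card, port) and derived quantities (total power, energy per bit) that such instrumentation might expose. It is not a standardized data model, and all names are invented.¶

   # Hypothetical, non-normative sketch of the granularity that power
   # instrumentation might expose; it is not a standardized data model.
   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class PortPower:
       name: str
       speed_bps: float      # configured speed
       traffic_bps: float    # current traffic
       power_w: float        # measured power draw of the port

   @dataclass
   class LineCardPower:
       slot: int
       base_power_w: float   # card power excluding ports
       ports: List[PortPower] = field(default_factory=list)

   @dataclass
   class DevicePower:
       chassis_power_w: float   # backplane, CPU, fans, etc.
       line_cards: List[LineCardPower] = field(default_factory=list)

       def total_power_w(self) -> float:
           return self.chassis_power_w + sum(
               lc.base_power_w + sum(p.power_w for p in lc.ports)
               for lc in self.line_cards)

       def joules_per_bit(self) -> float:
           """Current energy intensity: total power over total traffic."""
           traffic = sum(p.traffic_bps
                         for lc in self.line_cards for p in lc.ports)
           return self.total_power_w() / traffic if traffic else float("inf")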
Instrumentation should also take into account the possibility of virtualization, introducing layers of indirection to assess the actual energy usage. For example, virtualized networking functions could be hosted on containers or virtual machines that are hosted on a CPU in a data center instead of a regular network appliance such as a router or a switch, leading to very different power consumption characteristics. For example, a data center CPU's power consumption could be more efficient and more proportional to actual CPU load. Instrumentation needs to reflect these facts and facilitate attributing power consumption in a correct manner.¶
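As an illustration of the kind of attribution policy that might be applied, the following sketch splits a host's measured idle power evenly across hosted virtualized network functions and attributes the load-dependent remainder in proportion to CPU share. This is only one of many possible (and debatable) accounting choices, and the figures are invented.¶

   # Sketch of one possible attribution policy for virtualized network
   # functions (VNFs): split the host's idle power evenly and attribute
   # the load-dependent remainder in proportion to CPU use. This is an
   # illustrative assumption, not a recommended accounting method.

   def attribute_power(host_power_w, host_idle_w, vnf_cpu_shares):
       """vnf_cpu_shares: dict of VNF name -> fraction of host CPU used."""
       dynamic_w = max(host_power_w - host_idle_w, 0.0)
       total_share = sum(vnf_cpu_shares.values()) or 1.0
       n = len(vnf_cpu_shares)
       return {
           vnf: host_idle_w / n + dynamic_w * share / total_share
           for vnf, share in vnf_cpu_shares.items()
       }

   print(attribute_power(400.0, 250.0, {"vFW": 0.30, "vRouter": 0.10}))
   # {'vFW': 237.5, 'vRouter': 162.5}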
Beyond monitoring and providing visibility into power consumption, control knobs are needed to configure energy-saving policies. For instance, power-saving modes are common in endpoints (such as mobile phones or notebook computers) but sorely lacking in networking equipment.¶
The following summarizes some challenges and opportunities in this space that can provide the basis for advances in greener networking:¶
Basic equipment categorization as "energy efficient" (or not) as a first step to identify immediate potential improvements, akin to the Energy Star program from the US's Environmental Protection Agency.¶
Equipment instrumentation advances for improved energy awareness; definition and standardization of granular management information.¶
Virtualized energy and carbon metrics and assessment of their effectiveness in solutions that optimize carbon footprints in virtualized environments (including SDN, network slicing, network function virtualization, etc.).¶
Certification and compliance assessment methods that ensure that green instrumentation cannot be manipulated to give false and misleading data.¶
Methods that account for equipment that is powered by an energy mix, to facilitate solutions that optimize carbon footprint and minimize pollution beyond mere energy efficiency [Hossain2019].¶
There are several opportunities to improve network sustainability at the protocol level, which can be categorized as follows. The first and arguably most impactful category concerns protocols that enable carbon footprint optimization schemes at the network level and management towards those goals. Other categories concern protocols designed to optimize data transmission rates under energy considerations, protocols designed to reduce the volume of data to be transmitted, and protocol aspects related to network addressing schemes. While those categories may be less impactful, even areas with smaller gains should be explored.¶
There is also substantial work in the area of IoT, which has had to contend with energy-constrained devices for a long time. Much of that work was motivated not by sustainability concerns but by practical concerns such as battery life. However, many aspects appear to also apply in the context of sustainability, such as reducing chattiness to allow IoT equipment to go into low-power mode. Accordingly, there is an opportunity to extend IoT work to more generalized scenarios. Technology originally designed for power-constrained environments regularly finds its way into the wider Internet. For instance, ARM-based chipsets initially designed for energy efficiency in battery-operated mobile devices have since been embraced in data centers, following a similar trajectory.¶
As discussed in Section 6, energy-aware and pollution-aware schemes can help improve network sustainability but require awareness of related data. To facilitate such schemes, protocols are needed that are able to discover what links are available along with their energy efficiency. For instance, links may be turned off in order to save energy and turned back on based upon the elasticity of the demand. Protocols should be devised to discover when this happens and to have a dynamic view of the topology that keeps up with frequent updates due to power cycling of the network resources.¶
Also, protocols are required to quickly converge onto an energy-efficient path once a new topology is created by turning links on/off. Current routing protocols may provide for fast recovery in the case of failure. However, failures are hopefully relatively rare events, while we expect an energy-efficient network to aggressively try to turn off links. There may be synergies with Time-Variant Routing [TVR_REQS] that can be explored, in which the topology varies over time with nodes and links turned on or off according to a schedule. There may be overlaps in use cases, for example, when regular changes in the energy mix (and hence carbon footprint) of some nodes occur that coincide with the time of day (such as switching from solar to fossil fuels at night).¶
Some mechanism is needed to present to the management layer a view of the network that identifies opportunities to turn off resources (e.g., routers or links) while still providing an acceptable level of Quality of Experience (QoE) to the users. This gets more complex as the level of QoE shifts from the current best-effort delivery model to more sophisticated mechanisms with, for instance, latency, bandwidth, or reliability guarantees.¶
Similarly, schemes might be devised in which links across paths with a favorable energy mix are preferred over other paths. This implies that topology discovery should be able to support corresponding parameters. More generally speaking, any mechanism that provides applications with network visibility is a candidate for scrutiny as to whether it should be extended to provide support for sustainability-related parameters.¶
The following summarizes some challenges and opportunities in this space that can provide the basis for advances in greener networking:¶
Protocol advances to enable rapidly taking down, bringing back online, and discovering availability and power-saving status of networking resources while minimizing the need for reconvergence and propagation of state.¶
An assessment of which protocols could be extended with energy- and sustainability-related parameters in ways that would enable "greener" networking solutions, and an exploration of those solutions.¶
The second category involves designing protocols in such a way that the rate of transmission is chosen to maximize energy efficiency. For example, Traffic Engineering (TE) can be manipulated to impact the rate adaptation mechanism [Ren2018jordan]. By choosing where to send the traffic, TE can artificially congest links so as to trigger rate adaptation and therefore reduce the total amount of traffic. Most TE systems attempt to minimize Maximum Link Utilization, but energy-saving mechanisms could decide to do the opposite (i.e., maximize Minimum Link Utilization) and attempt to turn off some resources to save power.¶
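The following sketch illustrates the consolidation idea in its simplest form: demands are packed onto as few links of a parallel bundle as possible (first-fit decreasing) so that the remaining links can be powered down. It is purely illustrative and ignores the latency, resilience, and reordering constraints that a real TE system would have to respect.¶

   # Greedy sketch of consolidating flows onto as few parallel links as
   # possible (first-fit decreasing) so that unused links in the bundle
   # can be powered down. Illustrative only: real TE must also respect
   # latency, resilience, and reordering constraints.

   def consolidate(flows_bps, link_capacity_bps, num_links):
       links = [0.0] * num_links                   # current load per link
       for f in sorted(flows_bps, reverse=True):   # largest flows first
           for i in range(num_links):
               if links[i] + f <= link_capacity_bps:
                   links[i] += f
                   break
           else:
               raise ValueError("demand exceeds bundle capacity")
       active = [load for load in links if load > 0]
       return active, num_links - len(active)      # loads, links to power off

   loads, off = consolidate([4e9, 3e9, 2e9, 1e9], 10e9, 4)
   print(loads, off)   # one fully loaded link; three links can be powered off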
Another example is to set up the proper rate of transmission to minimize the flow completion time (FCT) so as to enable opportunities to turn off links. In a wireless context, [TradeOff] studies how setting the proper initial value for the congestion window can reduce the FCT and therefore allow the equipment to enter a low-energy mode sooner. By sending the data faster, the energy cost can be significantly reduced. This is a simple proof of concept, but protocols that allow links to be put into a low-power mode by transmitting the data over shorter periods could be designed for other types of networks beyond Wi-Fi access. This should be done carefully: in an extreme case, a high rate of transmission over a short period of time may create bursts that the network would need to accommodate, with all the attendant complications of bursty traffic. We conjecture there is a sweet spot between trying to complete flows faster and controlling for burstiness in the network. It is probably advisable to attempt to send traffic paced yet in bulk rather than spread out over multiple round trips. This is an area of worthwhile exploration.¶
The following summarizes some challenges and opportunities in this space that can provide the basis for advances in greener networking:¶
Protocol advances that allow greater control over traffic pacing to account for fluctuations in carbon cost, i.e., control knobs to "bulk up" transmission over short periods or to smooth it out over longer periods.¶
Protocol advances for optimizing link utilization according to different goals and strategies (including maximizing Minimum Link Utilization vs. minimizing Maximum Link Utilization, etc.).¶
Assessments of the carbon impact of such strategies.¶
The third category involves designing protocols in such a way that they reduce the volume of data that needs to be transmitted for any given purpose. Loosely speaking, by reducing this volume, more traffic can be served by the same amount of networking infrastructure, hence reducing overall energy consumption. Possibilities here include protocols that avoid unnecessary retransmissions. At the application layer, protocols may also use coding mechanisms that encode information close to the Shannon limit. Currently, most of the traffic over the Internet consists of video streaming, and video encoders are already quite efficient and keep improving all the time. This results in energy savings as one of many advantages, although of course the savings are offset by increasingly higher resolution. It is not clear that the extra work to achieve higher compression ratios for the payloads results in a net energy gain: what is saved over the network may be offset by the compression/decompression effort. Further research on this aspect is necessary.¶
At the transport protocol layer, TCP and, to some extent, QUIC react to congestion by dropping packets. This is an extremely energy-inefficient method to signal congestion because (a) the network has to wait one RTT to become aware that congestion has occurred, and (b) the effort to transmit the packet from the source up until the point where it is dropped ends up being wasted. This calls for new transport protocols that react to congestion without dropping packets. ECN [RFC2481] is a possible solution; however, it is not widely deployed. DCTCP [Alizadeh2010DCTCP] is tuned for data centers; Low Latency, Low Loss, and Scalable Throughput (L4S) is an attempt to port similar functionality to the Internet [RFC9330]. Qualitative Communication [QUAL] [Westphal2021qualitative] allows the nodes to react to congestion by dropping only some of the data in the packet, thereby only partially wasting the resources consumed by transmitting the packet up to that point. Novel transport protocols for the WAN can ensure that no energy is wasted transmitting packets that will eventually be dropped.¶
Another solution to reduce the bandwidth consumed by network protocols is to reduce their header tax, for example, by applying header compression. An example in the IETF is RObust Header Compression (ROHC) [RFC3095]. Again, reducing protocol header size saves energy when forwarding packets, but at the cost of maintaining state for compression/decompression, plus the computation of these operations. The gain from such protocol optimization further depends on the application and whether it sends packets with (a) large payloads close to the MTU, thus limiting the header tax and any savings, or (b) very small payloads, thus increasing the header tax and the savings.¶
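The following arithmetic sketch illustrates this dependence on payload size, assuming an uncompressed IPv4/UDP/RTP header chain of roughly 40 bytes and a ROHC-style compressed header of 3 bytes once a compression context is established; the payload sizes are arbitrary examples.¶

   # Illustrative header-tax arithmetic. An IPv4/UDP/RTP header chain is
   # roughly 40 bytes; ROHC-style compression can reduce it to a few
   # bytes on a flow with established context (3 bytes assumed here).

   def header_tax(payload_bytes, header_bytes):
       return header_bytes / (header_bytes + payload_bytes)

   for payload in (32, 160, 1400):                 # VoIP-like vs. near-MTU
       plain = header_tax(payload, 40)
       rohc = header_tax(payload, 3)
       print(f"payload {payload:4d} B: "
             f"tax {plain:5.1%} uncompressed, {rohc:5.1%} compressed")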
An alternative to reducing the amount of protocol data is to design routing protocols that are more efficient to process at each node. For instance, path-based forwarding and labels, such as in MPLS [RFC3031], facilitate the next-hop lookup, thereby reducing energy consumption. It is unclear whether keeping state at the router to speed up the lookup is more energy efficient than the stateless alternative, which requires more computation per lookup. Other methods to speed up a next-hop lookup include geographic routing (e.g., [Herzen2011PIE]). Some network protocols could be designed to reduce the next-hop lookup computation at a router. It is unclear whether Longest Prefix Match (LPM) is energy efficient or a significant energy burden for router operation.¶
Beyond the volume of data itself, another consideration is the number of messages and chattiness of the protocol. Some protocols rely on frequent periodic updates or heartbeats, which may prevent equipment from going into sleep mode. In such cases, it makes sense to explore the use of feasible alternatives that rely on different communication patterns and fewer messages.¶
The following summarizes some challenges and opportunities in this space that can provide the basis for advances in greener networking:¶
Assessments of energy-related trade-offs in the protocol design space, such as maintaining state versus using more compact encodings, or extra computation for transcoding operations versus larger data volumes.¶
Protocol advances for improving the ratio of goodput to throughput and to reduce waste: reduction in header tax, in protocol verbosity, in need for retransmissions, improvements in coding, etc.¶
Protocols that allow for managing transmission patterns, such as burstiness and chattiness, in ways that facilitate periods of link inactivity.¶
Network addressing is another way to shave off energy usage from networks. Address tables can get very large, resulting in large forwarding tables that require a considerable amount of memory, in addition to large amounts of state that need to be maintained and synchronized. From an energy footprint perspective, both can be considered wasteful and offer opportunities for improvement. At the protocol level, rethinking how addresses are structured can allow for flexible addressing schemes that can be exploited in network deployments that are less energy-intensive by design. This can be complemented by supporting clever address allocation schemes that minimize the number of required forwarding entries as part of deployments.¶
Alternatively, addressing could be designed to allow for more efficient processing than LPM. For instance, a geographic type of addressing (where the next hop is computed as a simple distance calculation based on the respective positions of the current node, its neighbors, and the destination) [Herzen2011PIE] could potentially be more energy efficient.¶
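A minimal sketch of such greedy geographic forwarding is shown below: the next hop is simply the neighbor closest to the destination, replacing a longest-prefix-match lookup with a distance computation. It deliberately ignores the well-known "local minimum" problem that practical schemes such as [Herzen2011PIE] must address.¶

   # Minimal sketch of greedy geographic forwarding: the next hop is the
   # neighbor closest (in Euclidean distance) to the destination, replacing
   # a longest-prefix-match lookup with a simple distance computation.
   import math

   def dist(a, b):
       return math.hypot(a[0] - b[0], a[1] - b[1])

   def next_hop(current, neighbors, destination):
       """neighbors: dict of neighbor id -> coordinates."""
       best = min(neighbors, key=lambda n: dist(neighbors[n], destination))
       # Only forward if it makes progress toward the destination.
       if dist(neighbors[best], destination) < dist(current, destination):
           return best
       return None   # stuck in a local minimum; fall back to another scheme

   print(next_hop((0, 0), {"A": (1, 1), "B": (3, 0)}, (5, 0)))   # "B"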
The following summarizes some challenges and opportunities in this space that can provide the basis for advances in greener networking:¶
Networks have been optimized for many years under many criteria, for example, to optimize (maximize) network utilization and to optimize (minimize) cost. Hence, it is straightforward to add optimization for "greenness" (including energy efficiency, power consumption, carbon footprint) as important criteria.¶
This includes assessing the carbon footprints of paths and optimizing those paths so that the overall footprint is minimized, then applying techniques such as path-aware networking or segment routing [RFC8402] to steer traffic along those paths. (As mentioned earlier, other proxy measures could be used for carbon footprint, such as energy-efficiency ratings of traversed equipment.) It also includes aspects such as considering the incremental carbon footprint in routing decisions. Optimizing cost has a long tradition in networking; many of the existing mechanisms can be leveraged for greener networking simply by introducing the carbon footprint as a cost factor. Low-hanging fruit includes adding carbon-related parameters as cost factors in control planes, whether distributed (e.g., IGP) or conceptually centralized via SDN controllers. Likewise, there are opportunities in right-placing functionality in the network. An example is placement of virtualized network functions in carbon-optimized ways, i.e., cohosted on fewer servers in close proximity to each other in order to avoid unnecessary overhead in long-distance control traffic.¶
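As a minimal illustration of treating carbon as a routing metric, the following sketch runs an ordinary shortest-path computation over per-link weights expressed as an (invented) incremental carbon cost; any existing least-cost mechanism could be reused in the same way.¶

   # Sketch of treating incremental carbon cost as the routing metric:
   # a plain Dijkstra shortest-path computation over per-link weights
   # expressed in, e.g., gCO2e per gigabyte. Topology and weights are
   # invented for illustration.
   import heapq

   def min_carbon_path(graph, src, dst):
       """graph: dict node -> list of (neighbor, carbon_cost) pairs."""
       queue = [(0.0, src, [src])]
       seen = set()
       while queue:
           cost, node, path = heapq.heappop(queue)
           if node == dst:
               return cost, path
           if node in seen:
               continue
           seen.add(node)
           for nbr, w in graph.get(node, []):
               if nbr not in seen:
                   heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
       return float("inf"), []

   topo = {"A": [("B", 5.0), ("C", 1.0)],
           "B": [("D", 1.0)],
           "C": [("D", 2.0)],
           "D": []}
   print(min_carbon_path(topo, "A", "D"))   # (3.0, ['A', 'C', 'D'])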
Other opportunities concern adding carbon awareness to dynamic path selection schemes. This is sometimes referred to as "energy-aware networking" (or "pollution-aware networking" [Hossain2019] or "carbon-aware networking", when parameters beyond simply energy consumption are taken into account). Again, considerable energy savings can potentially be realized by taking resources offline (e.g., putting them into power-saving or hibernation mode) when they are not needed under current network demand and load conditions. Therefore, weaning such resources from traffic becomes an important consideration for energy-efficient traffic steering. This contrasts and indeed conflicts with existing schemes that typically aim to create redundancy and load-balance traffic across a network to achieve even resource utilization. This usually occurs for important reasons, such as making networks more resilient, optimizing service levels, and increasing fairness. Thus, a big challenge is how resource-weaning schemes to realize energy savings can be accommodated without cannibalizing other important goals, counteracting other established mechanisms, or destabilizing the network.¶
An opportunity may lie in making a distinction between "energy modes" of different domains. For instance, in a highly trafficked core, the energy challenge is to transmit the traffic efficiently. The amount of traffic is relatively fluid (due to the multiplexing of multiple sessions) and predictable. In this case, there is no need to optimize on a per-session basis or at a short timescale. In the access networks connecting to that core, though, there are opportunities for fast, short-timescale optimization: traffic is much more bursty and less predictable, and the network should be able to be more reactive. Other domains, such as DCs, may have more variable workloads and different traffic patterns.¶
The following summarizes some challenges and opportunities in this space that can provide the basis for advances in greener networking:¶
Devise methods for carbon-aware traffic steering and routing; treat carbon footprint as a traffic cost metric to optimize.¶
Apply Machine Learning (ML) and AI methods to optimize networks for carbon footprint; assess applicability of game theoretic approaches.¶
Articulate and, as applicable, moderate trade-offs between carbon awareness and other operational goals such as robustness and redundancy.¶
Extend control plane protocols with carbon-related parameters.¶
Consider security issues imposed by greater energy awareness, to minimize the new attack surfaces that would allow an adversary to turn off resources or to waste energy.¶
As an important prerequisite to capture many of the opportunities outlined in Section 6.1, good abstractions (and corresponding instrumentation) for easily assessing energy cost and carbon footprint will be required. These abstractions need to account for not only the energy cost associated with packet forwarding across a given path, but also the related cost for processing, for memory, and for maintaining of state, to result in a holistic picture.¶
In many cases, optimization of carbon footprint has trade-offs that involve not only packet forwarding but also aspects such as keeping state, caching data, or running computations at the edge instead of elsewhere. (Note: There may be a differential in running a computation at an edge server vs. at a hyperscale DC. The latter is often better optimized than the former.) Likewise, other aspects of carbon footprint beyond mere energy intensity should be considered. For instance, some network segments may be powered by more sustainable energy sources than others, and some network equipment may be more environmentally friendly to build, deploy, and recycle, all of which can be reflected in abstractions to consider.¶
Assessing carbon footprint at the network level requires instrumentation that associates that footprint not just with individual devices (as outlined in Section 4.2) but also with concepts that are meaningful at the network level, i.e., to flows and to paths. For example, it will be useful to provide visibility into the carbon intensity of a path: Can the carbon cost of traffic transmitted over the path be aggregated? Does the path include outliers, i.e., segments with equipment with a particularly poor carbon footprint?¶
Similarly, how can the carbon cost of a flow be assessed? That might serve many purposes beyond network optimization, e.g., introducing green billing and charging schemes, and raising carbon awareness by end users.¶
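As a purely illustrative sketch of such an aggregation, the following assumes that instrumentation exposes, for each hop, an energy intensity (joules per bit) and the carbon intensity of its energy supply (gCO2e per kWh); the path-level carbon intensity is then the sum over hops, and a flow's carbon cost follows from its volume. All figures are invented.¶

   # Sketch of aggregating per-hop figures into a path-level carbon
   # intensity and a per-flow carbon cost. Per-hop energy intensity
   # (J/bit) and grid carbon intensity (gCO2e/kWh) are assumed to be
   # exposed by instrumentation; the numbers below are invented.

   JOULES_PER_KWH = 3.6e6

   def path_carbon_per_bit(hops):
       """hops: list of (joules_per_bit, grid_gco2e_per_kwh)."""
       return sum(j_bit * g / JOULES_PER_KWH for j_bit, g in hops)

   def flow_carbon_g(flow_bytes, hops):
       return flow_bytes * 8 * path_carbon_per_bit(hops)

   hops = [(2e-5, 50.0),    # hop powered mostly by renewables
           (2e-5, 450.0),   # hop on a carbon-intensive grid (the "outlier")
           (1e-5, 100.0)]
   print(f"{flow_carbon_g(1e12, hops):.1f} gCO2e for 1 TB over this path")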
The following summarizes some challenges and opportunities in this space that can provide the basis for advances in greener networking:¶
As mentioned in Section 3, the overall energy usage of a network is in large part determined by how the network is dimensioned, specifically: which and how many pieces of network equipment are deployed and turned on. A significant portion of energy is drawn even when equipment is simply in an idle state. Hence, minimizing the amount of equipment that needs to be turned on in the first place presents one of the biggest energy-saving opportunities.¶
Network deployments are generally dimensioned for periods of peak traffic, resulting in excess capacity during periods of non-peak usage that nonetheless consumes power. Shaving peak usage may thus result in outsized sustainability gains, as it reduces energy usage during peak traffic but, more importantly, waste during non-peak periods.¶
While traffic volume is largely a function of demand that network providers have little influence over, some peak shaving can nevertheless be accomplished by techniques such as spreading spikes out over geographies (e.g., redirecting some traffic across more costly but less utilized routes, particularly in cases when traffic spikes are of a more local or regional nature) or over time (e.g., postponing non-urgent traffic and storing or buffering it using edge clouds or extra storage where feasible).¶
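The time dimension of peak shaving can be illustrated with the following sketch, in which deferrable traffic is pushed from each hour into the least-loaded hour within an assumed delivery deadline; the demand profile and the split between urgent and deferrable traffic are invented.¶

   # Sketch of the time dimension of peak shaving: deferrable traffic is
   # pushed from peak hours into the emptiest later hours within its
   # deadline. The demand profile and the split between urgent and
   # deferrable traffic are invented for illustration.

   def shave(urgent, deferrable, deadline_h):
       """urgent/deferrable: per-hour volumes; returns shaped totals."""
       shaped = list(urgent)
       for hour, volume in enumerate(deferrable):
           horizon = range(hour, min(hour + deadline_h, len(shaped)))
           target = min(horizon, key=lambda h: shaped[h])  # emptiest hour
           shaped[target] += volume
       return shaped

   urgent     = [80, 90, 100, 60, 40, 30]
   deferrable = [20, 30,  20, 10,  0,  0]
   print("peak before:", max(u + d for u, d in zip(urgent, deferrable)))
   print("peak after: ", max(shave(urgent, deferrable, deadline_h=3)))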
To make such techniques effective, accurate learning and prediction of traffic patterns are required. This includes the ability to perform forecasting to ensure that additional resources can be spun up in time should they be needed. Clearly, this presents interesting challenges, yet also opportunities for technical advances to make a difference.¶
The following summarizes some challenges and opportunities in this space that can provide the basis for advances in greener networking:¶
Support methods for monitoring and forecasting traffic demand, involving new mechanisms and/or performance improvements of existing mechanisms to support the collection of telemetry and generation of traffic matrices at very high velocity and scale.¶
Additional methods for distributing traffic load evenly across the network, i.e., load balancing on a network scale, and enablement of those methods through control protocol extensions as needed.¶
One set of challenges of carbon-aware networking concerns the fact that many schemes result in much greater dynamicity and continuous change in the network, as resources may be steered away from (when possible) and then leveraged again (when necessary) in rapid succession. This imposes significant stress on convergence schemes, which results in challenges to the scalability of solutions and their ability to perform in a fast-enough manner. Network-wide convergence imposes high cost and incurs significant delay and thus does not lend itself to such schemes. In order to mitigate this problem, mechanisms should be investigated that do not require convergence beyond the vicinity of the affected network device. The impact of churn needs to be minimized, especially in cases where central network controllers (responsible for the configuration of paths and the positioning of network functions and that aim for global optimization) are involved. This means that, for example, discovery, rediscovery, and update schemes need to be simplified, and extensive recalculation (e.g., of routes and paths based on the current energy state of the network) needs to be avoided.¶
The following summarizes some challenges and opportunities in this space that can provide the basis for advances in greener networking:¶
Protocols that facilitate rapid convergence (per Section 5.1).¶
Investigate methods that mitigate effects of churn, including methods that maintain memory or state as well as methods relying on prediction, inference, and interpolation.¶
One of the most important network management constructs is that of the network topology. A network topology can usually be represented as a database or as a mathematical graph, with vertices (or nodes) and edges (or links) representing networking nodes, the links connecting their interfaces, and all their characteristics. Examples of these network topology representations include routing protocols' link-state databases (LSDBs) and service function chaining graphs.¶
Adding carbon and energy awareness to these topology representations directly supports visibility into energy consumption and enables improvements via automation.¶
The following summarizes some challenges and opportunities in this space that can provide the basis for advances in greener networking:¶
Embedding carbon and energy awareness into the representation of topologies, whether considering IGP LSDBs and their advertisements, BGP-LS (BGP Link-State), or metadata for the rendering of service function paths in a service chain.¶
Use of those carbon-aware attributes to optimize topology as a whole under end-to-end energy and carbon considerations.¶
Another possibility to improve network energy efficiency is to organize networks in ways that allow important applications to reduce energy consumption. Examples include facilitating retrieval of content or performing computation in ways that minimize the amount of communication needed in the first place, even if energy savings within the network may be offset (at least in part) by additional energy consumption elsewhere. The following examples suggest that it may be worthwhile to reconsider the ways in which networks are architected to minimize their carbon footprint.¶
For example, Content Delivery Networks (CDNs) have reduced the energy expenditure of the Internet by bringing content close to the users. The content is sent only a few times over the WAN and then is served locally. This shifts the energy consumption from networking to storage. Further methods can reduce the energy usage even more [Bianco2016energy] [Mathew2011energy] [Islam2012evaluating]. Whether overall energy savings are net positive depends on the actual deployment, but from the network operator's perspective, it at least shifts the energy bill away from the network to the CDN operator.¶
While CDNs operate as an overlay, another architecture has been proposed to provide the CDN features directly in the network -- namely, Information-Centric Networks [Ahlgren2012survey], also studied in the ICNRG of the IRTF. However, this shifts the energy consumption back to the network operator and requires some power-hungry hardware, such as chips for larger name lookups and memory for the in-network cache. As a result, it is unclear if there is an actual energy gain from the dissemination and retrieval of content within in-network caches.¶
Fog computing and placing intelligence at the edge are other architectural directions for reducing the amount of energy that is spent on packet forwarding and in the network. There again, the trade-off is between performing computational tasks (a) in an energy-optimized data center at very large scale (but requiring transmission of significant volumes of data across many nodes and long distances) versus (b) at the edge where the energy may not be used as efficiently (less multiplexing of resources and inherently lower efficiency of smaller sites due to their smaller scale) but the amount of long-distance network traffic and energy required for the network is significantly reduced. Softwarization, containers, and microservices are direct enablers of such architectures. Their realization will be further aided by the deployment of programmable network infrastructure, such as Infrastructure Processing Units (IPUs) or SmartNICs that offload some computations from the CPU onto the NIC. However, the power consumption characteristics of CPUs are different from those of NPUs; this is another aspect to be considered in conjunction with virtualization.¶
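The nature of this trade-off can be sketched with a back-of-the-envelope comparison: the total energy of a placement is the compute energy plus the energy for moving the data over the network, and which placement wins depends entirely on the assumed figures. The numbers below are invented solely for illustration.¶

   # Back-of-the-envelope comparison of running a task at a hyperscale
   # data center (efficient compute, but data traverses the WAN) versus
   # at an edge node (less efficient compute, little network transfer).
   # All figures are invented solely to illustrate the trade-off.

   WAN_J_PER_BIT = 2e-5     # assumed network energy intensity, J/bit
   DATA_BITS     = 8e10     # 10 GB of input data

   def total_joules(compute_j, network_bits):
       return compute_j + network_bits * WAN_J_PER_BIT

   hyperscale = total_joules(compute_j=2e5, network_bits=DATA_BITS)
   edge       = total_joules(compute_j=5e5, network_bits=0)

   print(f"hyperscale: {hyperscale/1e6:.2f} MJ, edge: {edge/1e6:.2f} MJ")
   # With these numbers the WAN transfer dominates and the edge wins;
   # different assumptions can easily flip the outcome.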
Other possibilities take economic aspects into consideration, such as providing incentives to users of networking services in order to minimize energy consumption and emission impact. In [Wolf2014choicenet], an example is provided that could be expanded to include energy incentives.¶
Other approaches consider performing a late binding of the data and the functions to be performed on it [Krol2017NFaaS]. The COINRG of the IRTF focuses on similar issues. Jointly optimizing for the total energy cost that takes into account networking as well as computing (along with the different energy cost of computing in a hyperscale DC vs. at an edge node) is still an area of open research.¶
In summary, rethinking the overall network (and networked application) architecture can be an opportunity to significantly reduce the energy cost at the network layer, for example, by performing tasks that involve massive communications closer to the user. To what extent these shifts result in a net reduction of carbon footprint is an important question that requires further analysis on a case-by-case basis.¶
The following summarizes some challenges and opportunities in this space that can provide the basis for advances in greener networking:¶
Investigate organization of networking architecture for important classes of applications (e.g., content delivery, right-placing of computational intelligence, industrial operations and control, massively distributed ML and AI) to optimize green footprint and holistic approaches to trade-offs of carbon footprint with forwarding, storage, and computation.¶
Models to assess and compare alternatives in providing networked services, e.g., evaluate carbon impact relative to where to perform computation, what information to cache, and what communication exchanges to conduct.¶
How to make networks "greener" and reduce their carbon footprint is an important problem for the networking industry to address, both for societal and for economic reasons. This document has highlighted a number of the technical challenges and opportunities in that regard.¶
Of those, perhaps the key challenge to address right away is the ability to expose at a fine granularity the energy impact of any networking actions. Providing this visibility will enable many approaches toward a solution. It will be key to implementing optimization via control loops that can assess the energy impact of a decision taken. It will also help to answer questions such as:¶
Determining where the sweet spots are and optimizing networks along those lines will be a key towards making networks "greener". We expect to see significant advances across these areas and believe that researchers, developers, and operators of networking technology have an important role to play in this.¶
This document has no IANA actions.¶
Security considerations may appear to be orthogonal to green networking considerations. However, there are a number of important caveats.¶
Security vulnerabilities of networks may manifest themselves in compromised energy efficiency. For example, attackers could aim at increasing energy consumption to drive up attack victims' energy bills. Specific vulnerabilities will depend on the particular mechanisms. For example, in the case of monitoring energy consumption data, tampering with such data might result in compromised energy optimization control loops. Hence, any mechanisms to instrument and monitor the network for such data need to be properly secured to ensure authenticity.¶
In some cases, there are inherent trade-offs between security and the maximal energy efficiency that might otherwise be achieved. An example is encryption, which requires additional computation for encryption and decryption activities and security handshakes, in addition to the need to send more traffic than necessitated by the entropy of the actual data stream. Likewise, mechanisms that allow resources to be turned on or off could become a target for attackers.¶
Energy consumption can be used to create covert channels, which is a security risk for information leakage. For instance, the temperature of an element can be used to create a Thermal Covert Channel [TCC], or the reading/sharing of the measured energy consumption can be abused to create a covert channel (see, for instance, [DRAM] or [NewClass]). Power information may also be used to create side-channel attacks; for instance, [SideChannel] provides a review of 20 years of study on this topic. Any new parameters considered in protocol designs or in measurements may create such covert or side channels, and this should be taken into account while designing energy-efficient protocols.¶
The authors thank Dave Oran for providing the information regarding covert channels using energy measurements and Mike King for an exceptionally thorough review and useful comments.¶