Reaching for Net-Zero: Achieving Zero Carbon Data Centers by Decentralizing Consensus of Power Supply Amongst Utility and Microgrid Providers
White Paper 1
This paper is the first in a series of thought experiments, under the UN 2030 Sustainable Development Goals initiative, focused on using blockchain and decentralized consensus algorithms to overcome logistical barriers to a true zero-carbon-emissions data center and microgrid environment. The scope of this paper is the optimization of energy dispatch and supply from distributed resources down to a data center or microgrid-supported campus. Future papers will build upon the arguments set forth here and culminate in a fully integrated blockchain consortium, beginning at the manufacturing of parts for all entities in the network and reaching all the way down to the individual tenants in a co-location data center performing tenant-to-tenant server subletting during demand response events for the data center or macrogrid (i.e., a traditional wide-area synchronous grid or, colloquially, an electrical utility grid).
(Keywords: game theory, good actors, bad/malevolent actors, UN2030, centralized consensus, decentralized consensus, proof of work (POW), proof of stake (POS), private-by-design (PbD), microgrid, macrogrid, blockchain, Byzantine fault tolerance, embodied carbon, operating carbon, traveling carbon, zero greenhouse gas (ZGHG), distributed energy, block mining, data structures, peer to peer (P2P), server to server (S2S), client to client (C2C), internet of things (IoT), demand response (DR), tolerance for loss)
Introduction – Why do we need standby generators?
Decentralized Consensus and Byzantine Fault Tolerance
Distributed Energy Resources for Microgrids as a Database and Automated Transaction Network
Determining Loss-Tolerance, Threshold for Trust, and Proof of Stake
Optimizing the Supply of Zero-Greenhouse-Gas Energy via Decentralized Networks
Automation of Components in a Microgrid
While some understanding of the keywords listed in the Abstract section is assumed and required for this paper, the concepts of decentralized consensus, Byzantine fault tolerance, distributed energy resources forming a database, loss tolerance and trust, proof of stake, and embodied carbon will be described at a cursory level. The objective of this white paper is to conceptually portray an application in which blockchain removes barriers to the success of a zero-carbon-emissions operating and embodied environment. This is achieved via a thought experiment in which four distributed energy suppliers must cooperate to deliver synchronized, grid-forming power to a microgrid-supported data center and accurately respond to events of increased demand to maximize their own revenues. As competitors, they cannot trust each other, yet they must find some way to communicate information between one another to form a grid and optimize their economic dispatch, because the utility brings unnecessary overhead and the data center campus lacks the resources to continually log and verify upstream information on its own.
2. Decentralized Consensus and Byzantine Fault Tolerance
Before discussing decentralized consensus, or even blockchain, the idea of centralized consensus needs to be understood as the "conventional wisdom" for problem-solving. Decision making and problem solving are easy when all players in a game are on the same team, or when multiple businesses trust a banker to accurately record and maintain a ledger of transactions between one another. This trust-filled, naturally collaborative network of individuals forms a centralized consensus: a decision-making and record-keeping model in which all parties trust each other, or trust the same person, and have no reservations about exchanging information (Krawiec-Thayer). This is a highly effective method for getting work done with colleagues or for paying bills, but it is not a practical way to get competitors in a single industry to work together or exchange data. That is where decentralized consensus algorithms come into play.
Decentralized consensus, as Dr. Mitchell P. Krawiec-Thayer writes in the editorial blog post "What's the big deal about Decentralized Consensus?", "is the ability for many parties to safely store and share information, without having to rely on a central authority or trust any other participants in the network". Paying close attention to the ability for "many parties" to share and store information in a common database, both safely and without trusting each other or a central authority, it becomes immediately apparent why any technology that can decentralize decision making is vital to the success of modernized microgrid and data center technology. Dr. Krawiec-Thayer states the following:
Any effective decentralized consensus system must solve a fundamental challenge: how can a system arrive at universal agreement under adversarial conditions where messages may be unknowingly lost and participants may behave dishonestly for their own gain?
As Krawiec-Thayer later mentions, this problem was concisely posed to the greater technological community almost 40 years ago, in what is often referred to as the Byzantine Generals’ Problem:
Imagine a group of generals of the Byzantine army camped with troops around an enemy city. Communicating only by messenger, the generals must agree upon a common battle plan — whether to attack or retreat. Either way, they must arrive at agreement and act in unison since an attack with only a portion of the troops would be disastrous. However, one or more of the generals may be traitors who will try and confuse the others. The problem is to find an algorithm to ensure that the loyal generals will reach agreement. (Lamport, Shostak, and Pease; 1982)
Considering this context, a blockchain is a software and system-policy implementation of valid solutions to the Byzantine Generals' Problem. Blockchain, as a decentralized problem-solving system, creates an opportunity to empower mission critical facilities. Consider how data centers fall under the need for decentralized decision making. Data center facility operations teams and sub-groups barely trust each other; they tend to act more like tribes than a single unit, even though they roughly share a common goal. Decision making often requires lengthy chains of approval at various stakeholder levels for even simple changes to electrical equipment. Trust between any power facility and the electrical utility is illusory at best and antagonistic at worst. There is a high level of added overhead and delay due to this slow-moving and poorly fitting centralized consensus that is forced onto data center design and operation. On top of these logistical inefficiencies, consider that there are a large number of nodes measuring power and computational information in a data center. Server cabinets or racks all need some method of periodically or continually checking the load against capacity and upstream equipment ratings. The service transformer to a data center, conventionally, must be metered by a central utility. Multiple telecom and monitoring systems are often required to implement building fire protection, alarm, lighting, and emergency power response control policies. If only there were some way to remove the need for a data center to oversee all of this information, especially at such great financial cost to the facility owners and shareholding entities.
Instead of forcing conventional decision making that barely functions in microgrid and data center environments, it is possible to utilize a so-called "decentralized consensus algorithm" that removes dependence on a central authority and encourages success amongst the many parties involved in these larger operations efforts. Literature dating as far back as 1982 lays the foundation for arguing that a zero-greenhouse-gas (ZGHG) future requires shifting the current paradigm away from central entities (see the Byzantine Generals' Problem). Instead of forcing the data center facility, microgrid operator, or macrogrid power utility to manage and verify data, designers of truly renewable and sustainable data center infrastructure must create new systems that decentralize. These systems require no trust amongst players, but exhibit trust as an emergent property of continued success, and will be built upon in this paper for renewable internet-of-things (IoT) applications (such as zero-net-carbon peer-to-peer clouds, smart grid metering, electric vehicles, et cetera).
Relating the ideas of Byzantine faults, blockchain, and decentralized networking back to mission critical and renewable applications, the July 1982 paper "The Byzantine Generals Problem" (cited by approximately 7,377 scholarly articles, according to Google Scholar), coauthored by Leslie Lamport, Robert Shostak, and Marshall Pease in the fourth volume of the ACM journal Transactions on Programming Languages and Systems, directly ties mission critical applications to decentralized problem solving in terms of "reliable systems". When attempting to implement reliable computer systems, the only alternative to using materially reliable device components is to use redundant computers, systems, or facilities and cross-reference their results (via internal or external voting) to a single output. As the three authors explain:
“This is true whether one is implementing a reliable computer using redundant circuitry to protect against the failure of individual chips, or a ballistic missile defense system using redundant computing sites to protect against the destruction of individual sites by a nuclear attack. The only difference is in the size of the replicated "processor".
(Lamport, Shostak, and Pease 398)
As Lamport, Shostak, and Pease go on to explain, there are some flaws in this reasoning; however, the basic premise of critical systems requiring reliable outputs remains valid. After discussing the parameters of a reliable voting solution, and the issues with circumventing material or voting considerations via hardware, the authors arrive at a significant realization for mission critical systems: "redundant inputs cannot achieve reliability; it is still necessary to ensure that the nonfaulty processors use the redundant data to produce the same output". From the perspective of ZGHG microgrid-supported power distribution, this exposes a major flaw in the paradigms of mission critical design: redundant systems cannot achieve true reliability unless their nonfaulty components use the same redundant data (i.e., power, information) to produce the same output (Lamport, Shostak, and Pease; 1982, 387). This sounds paradoxical from the perspective of mechanical, electrical, and plumbing (MEP) design for redundancy, and may require further scrutiny to determine the truth of such a statement in the realm of power distribution design; however, Lamport, Shostak, and Pease might argue that the electrical equipment adjacent to MEP design in mission critical facilities must meet these requirements (redundant nonfaulty processing systems using the same sets of redundant data to produce the same output, at any order or tier of redundancy) to be considered truly reliable. This is not to decry formal requirements for redundancy, like the Uptime Institute's tier system, by any means. The two key takeaways are that mission critical design has been a consideration of decentralized consensus networks since the field's founding thesis in 1982, and that any system with a need for reliability must consider redundant computation, redundant data, and reliable parts. As data centers, being mission critical, intrinsically have a high need for reliability, they are no exception to this claim.
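The point that nonfaulty processors must apply the same rule to the same redundant data can be illustrated with a minimal majority-voting sketch. This is a toy vote over illustrative sensor values, not the full oral-message algorithm from the 1982 paper:

```python
from collections import Counter

def majority(values):
    """Return the strict-majority value among redundant inputs, or None."""
    value, count = Counter(values).most_common(1)[0]
    return value if count > len(values) / 2 else None

# Three redundant sensors report a bus voltage; one sensor is faulty.
shared_reports = [480, 480, 610]

# Every nonfaulty processor applies the same rule to the SAME redundant
# data, so every nonfaulty processor emits the same output.
processor_a = majority(shared_reports)
processor_b = majority(shared_reports)
assert processor_a == processor_b == 480
```

If the processors voted over different subsets of the reports, their outputs could diverge even with no faulty processor, which is precisely the failure mode the quoted passage warns against.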
Moving on to the main subject of this paper, achieving optimal dispatch of power to a microgrid via decentralized power, consider the various interests in a microgrid:
Authorities having jurisdiction
Macro-grid owning utilities
Micro-grid operating data center campuses
Distributed energy supplying and storing entities
All of the aforementioned parties, with the exception of the campus being served, are in fierce competition for survival. The question of how to operate a successful zero carbon impact data center then shifts to forming an effective system for decentralized consensus. This system, method, treaty, and algorithm must somehow rope multiple parties who do not trust each other into a common database that removes all red tape and is highly resistant to bad actors.
In the case of economic dispatch amongst competing renewable energy providers, the competitors, the data center, and all operating staff in between are analogous to the Byzantine generals. The generals issue a command, to attack or retreat, while the interested parties issue commands of how much power to send and when. The messenger is no longer a horseback rider, but a public (or semi-private) communications network along the blockchain.
The Byzantine Generals' Problem can be distilled further into several bullet points that identify any scenario that may be solved by a decentralized consensus technology like blockchain:
There is a need for a common exchange of information, i.e., database.
There are any number of resource-governing entities involved.
The parties in this game have conflicting incentives or reason to mistrust each other.
These parties are likely governed by different rules.
There is a need for a truly objective and unbiased, unchangeable log of records.
The rules behind decision making rarely change if-ever.
So, a system to solve the problem of Byzantine Generals operating distributed energy resources in a microgrid environment or colo tenants in a data center environment must include the following to be truly optimal:
Consensus on records and transactions must be decentralized.
The system functions whether or not the players trust each other.
The system is highly resistant to tampering with data, creating false records, and collusion against the common goal; this is known as a high degree of Byzantine fault tolerance.
Good actors are rewarded for exchanging valid information and acting trustworthy.
Bad or malevolent actors lose resources, and identify themselves as untrustworthy, whenever they attempt to falsify communications.
Resources, information, and decisions move quickly.
Decisions involving multiple parties may need to be bi-directionally automated, such that either party can deliver commands over both parties' resources, under a verification system.
The growing record of transactions, information exchanged, and consensus decisions is secure, permanent, and able to identify false information quickly.
Flowchart (above), “Does your enterprise need blockchain?” – (Source: Information Services Group)
This, in a nutshell, is a conceptual description of blockchain and of what it fixes in a data center and microgrid environment that conventional logistics cannot. As depicted in the flowchart created by the Information Services Group, a collection of energy resources supporting a zero-GHG microgrid fulfills the criteria of requiring a trust-independent database operated by many individuals and representatives. They do not trust their client blindly, they do not trust each other, and they do not trust the power utility, a competitor. For a true zero-impact microgrid to succeed at the level of distributed energy resources, the energy operators must utilize a blockchain.
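The requirement from the list above, a secure and permanent record that identifies false information quickly, can be sketched as a minimal hash-chained ledger. The supplier names and energy values below are purely illustrative:

```python
import hashlib
import json

def block_hash(contents):
    """Deterministic SHA-256 hash over a block's contents."""
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    """Append a record; each block commits to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev": prev, "record": record,
                  "hash": block_hash({"prev": prev, "record": record})})

def verify(chain):
    """Any participant can re-derive every hash; one altered record breaks the chain."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash({"prev": b["prev"], "record": b["record"]}):
            return False
        prev = b["hash"]
    return True

chain = []
append_block(chain, {"supplier": "solar", "kWh": 31})
append_block(chain, {"supplier": "wind", "kWh": 30})
assert verify(chain)

chain[0]["record"]["kWh"] = 99  # a bad actor falsifies a record...
assert not verify(chain)        # ...and every verifier detects it immediately
```

This sketch omits consensus, signatures, and mining entirely; it only demonstrates why a hash-linked log is tamper-evident, which is the property the flowchart criteria depend on.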
3. Distributed Energy Resources for Microgrids as a Database and Automated Transaction Network
Imagining that there are some number of third-party suppliers of renewable energy to a microgrid-supported data center, it is very easy to understand the desire for a database of transactions, electrical characteristics over time, and supply chain parameters. A solar plant wants to log its own reserves, and a client wants to have an idea of the rate at which the plant generates energy reserves. The same can be said for any sort of distributed energy resource of a renewable, zero- or low-embodied-carbon nature (be it wind, geothermal, hydro, et cetera). However, it is conventional wisdom that the user cannot get access to utility data, and the microgrid cannot get real-time or truly deep data about its third-party suppliers. This can make the process of verifying certified renewable and zero-embodied-carbon energy upstream of an entity almost impossible. This is where a blockchain is very handy. Because transaction records are permanent, require stake or proof (see proof of work, proof of stake) to generate, and are proofed against tampering and falsification, a supplier of power is assured that its data is secured by key. The recipient of power receives assurance that the claimed reserves were available, because the log of capacity witnessed by the solar plant is the same log witnessed by the data center as a client. A database is then created between the two parties containing transaction records, voltages, currents, power quality, ambient conditions, consistent values of energy entering and exiting transmission lines, and the available solar reserves for use by the data center in an emergency. This idea can be daisy-chained such that a database is formed between all distributed energy resources and resource users in the microgrid environment; each of them is able to form an economic dispatch without trusting the others or knowing the identity of other players in the database.
Furthermore, the blockchain allows any party to propose and dispatch resources across the entire network, so long as their dispatch request fulfills the conditions of the consensus algorithm. At this point, the consensus algorithm becomes a decentralized and automatic transaction machine or an automated transaction network; such that, each member of the network independently forms the transaction machine (it has no single point of failure, and it exists at all points in the network). Transactions between two or more parties may be initiated through the automated network by a single member. This brings unrivaled speed and efficiency to demand response, power optimization, and economic dispatches because the network solves for all parties’ internal decision-making algorithms against the network-wide consensus algorithm almost instantly. Relevant parties receive funds or power dispatch. Compensation is then distributed to the selected miners or proven good actors who were used to perform computations on behalf of the transaction machine. As another point for consideration, this blockchain machine increases the security of transactions against malevolent actors. By making dispatches transparent, even if anonymous, and by giving all users the ability to command the network under consensus, the network machine identifies bad actors as users who fail to satisfy the consensus algorithm.
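The idea that any member may propose a dispatch, and that every node independently checks it against the same consensus rule, can be sketched as follows. The rule here is hypothetical (supply must match the requested load and no supplier may exceed its asserted reserve), and all names and figures are illustrative:

```python
import math

# Hypothetical asserted reserves, in kWh (illustrative values).
RESERVES = {"solar": 31, "wind": 30, "geothermal": 16, "biodiesel": 15}

def valid_dispatch(proposal, load_kwh):
    """Consensus rule every node evaluates independently and identically."""
    supplied = sum(proposal.values())
    within_reserves = all(kwh <= RESERVES.get(name, 0)
                          for name, kwh in proposal.items())
    return math.isclose(supplied, load_kwh) and within_reserves

# Any member may propose a dispatch to the network.
proposal = {"solar": 31, "wind": 30, "geothermal": 15.1, "biodiesel": 14.9}

# Each of five nodes runs the same deterministic check on the same data,
# so agreement emerges without a broker or central authority.
votes = [valid_dispatch(proposal, 91) for _node in range(5)]
assert all(votes)
```

A real automated transaction network would wrap this check in signed messages and a Byzantine-fault-tolerant vote; the sketch only shows why identical rules applied to identical data yield identical, broker-free decisions.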
4. Determining Loss-Tolerance, Threshold for Trust, and Proof of Stake
One of the most crucial steps in creating a successful blockchain or automated decentralized transaction network is determining the method of verifying good intent. Thinking back to the problem of the Byzantine Generals, to ensure that a command or piece of information issued by one general to the whole army is truthful and in good faith, there needs to be some system in place to expose malevolent actions (Lamport, Shostak, and Pease; 1982). This way, by the system's design, bad actors and defectors expose themselves and, in modern systems, lose resources or trust when they issue a malevolent command. Another benefit is that costly commands deter repeated malevolent actions against the network unless many parties pool resources and collude. There are a few systems that can be used to ensure the honest, successful, and quick operation of a zero-greenhouse-gas system of energy distributors. Although not all methods of achieving reliability in a decentralized microgrid network will be examined in this paper, there are many valid means of achieving such a system; the key terms, works cited, and further reading should be consulted for the implementations not utilized here.
As an alternative to the proof-of-work paradigm for blockchains and decentralized consensus networks, consider "proof of knowledge" and "proof of stake". Regarding proof of knowledge, consider the zk-SNARKs technology behind the Zcash cryptocurrency. The creators of Zcash, the Electric Coin Company, explain in the article "What are zk-SNARKs?" that their technology "refers to a proof construction where one can prove possession of certain information, e.g., a secret key, without revealing that information, and without any interaction between the prover and verifier". Diagrammatically, this process is described as follows:
Figure adapted from “How zk-SNARKs are constructed in Zcash” by the Electric Coin Company
Imagine a system that could command many different competitors to cooperate without revealing sensitive information to any of them. The creators of zk-SNARKs and Zcash would refer to this as a zero-knowledge proof of knowledge. Zero-knowledge proof-of-knowledge blockchains are constructed by following the algorithm shown in the figure adapted from "How zk-SNARKs are constructed in Zcash". To elaborate further, in a zero-carbon, microgrid-supported data center context, this decentralized control network commands many competing energy suppliers to generate the optimal amount of electricity for any arbitrary load, given zero-knowledge assertions of each supplier's fixed or dynamic price and zero-knowledge assertions of each supplier's current or scheduled energy generation and storage capabilities. Competing suppliers are incentivized to implement the blockchain by the promise of maximized revenue, and microgrid-forming or microgrid-supported data center facilities are incentivized to implement it to minimize their total cost of achieving zero-carbon (or highly carbon-efficient) off-grid power across the system. Due to the nature of zero-knowledge proofs of knowledge, no member of this blockchain would need to know the other members' prices or how much energy they were allowed to supply.
As an application of the microgrid control policy network implemented with a zero-knowledge proof-of-knowledge blockchain, imagine this technology being used to achieve a zero-carbon-emissions microgrid data center environment and provide enrolled electricity suppliers with maximized revenue during any electrical event or extended contract of service. Take the following assumptions as true for the sake of this practical thought experiment:
There exists a microgrid either formed or electrically followed by a data center or collection of data halls on a data center campus.
The grid-forming and grid-following facilities may or may not have their own distributed energy resources and storage. This is irrelevant to the goal of supplementing energy during shortages, events, or extended periods of time. Cost optimization opportunities may appear at any time for the third-party suppliers and the data center campus.
The microgrid can go off-grid, entering what is known as island mode.
The primary goal of this microgrid shall be to support the continuous operation of mission critical facilities via zero carbon and green energy.
There exist an arbitrary solar company, an arbitrary wind company, an arbitrary geothermal company, and an arbitrary biodiesel company that can supply electricity to the microgrid. These companies are in direct competition for the revenue created by supplying electricity into the microgrid.
There exist 53 server tenants as an arbitrary number.
There must be more than 3 members of the blockchain.
To eliminate utility involvement in metering, a data center, building, or individual server tenant interfaces its equipment monitoring values to the decentralized network system. This interface allows any load user to inform blockchain members of how much energy was used per billing cycle, and to prove that they have maintained a record of equipment monitoring status verifying that tampering did not occur, without revealing information to other load users or suppliers. Because the exchange is zero-knowledge, the equipment monitoring channels inarguably provide true and anonymous readings of energy usage without utility verification. This also enables load users to schedule required demand in advance without revealing classified information to any of the competing suppliers. The same outcome can be achieved for any other metric, system, or information that the blockchain members wish to include in their anonymous exchanges and supply optimizations. All suppliers receive direct commands from the decentralized network process (that they collectively form) on how much energy to supply, for maximized revenue, after asserting how much energy they hold in reserve and how much they charge.
For the sake of argument, the solar supplier generates 31 kWh of zero-carbon-emissions energy. The solar supplier, as a prover, may inarguably assert that it produces 31 kWh of monthly energy for microgrid and data center use to either a server tenant (verifier) or any competing supplier (verifier) without revealing the number itself. At the load side of the blockchain and microgrid, a server enterprise user or data center can verify and select just how much green energy it is willing to pay for at the source, without the exchange of sensitive information. In the example given, the arbitrary 53 server tenants (provers) prove to a wind supplier, a solar supplier, a geothermal supplier, and a biodiesel supplier (all four as verifiers) that they used between 1 kWh and 3 kWh per tenant, totaling 91 kWh in a month, without revealing the total amount to any of the four suppliers. They first prove to all four entities that they hold some true value of total power, without revealing the number itself, via a hashed value that hides 91 kWh in randomness and secrecy. This hashed value is unique in that it must correspond to the value of 91 kWh that none of the suppliers know. As the old paradigm of C-language coding goes, one should not need to know how a routine's result will be used. Once all suppliers (or selected block miners) have performed their role in the verifying algorithm, confirming that the hashed value exists and corresponds to some secret number, the 53 tenants may prove to each supplier the individual amount used, or verify the individual amount supplied by each power entity (as a vice versa scenario, or perhaps both must occur). They use a zero-knowledge proof of knowledge to let each power supplier know that they have four true values, each hidden by a hash value.
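The hiding of 91 kWh "in randomness and secrecy" can be approximated with a salted hash commitment. Note that this commit-and-reveal sketch is far weaker than a true zk-SNARK: opening the commitment discloses the value to the chosen verifier, whereas a genuine zero-knowledge proof would not. It only illustrates the hiding and binding properties the thought experiment relies on:

```python
import hashlib
import secrets

def commit(value_kwh):
    """Hide a meter reading in a salted SHA-256 hash; publish only the digest."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256(f"{salt}:{value_kwh}".encode()).hexdigest()
    return digest, salt  # digest goes on-chain; salt stays with the prover

def verify_opening(digest, salt, claimed_kwh):
    """Later, the prover opens the commitment to a chosen verifier."""
    return hashlib.sha256(f"{salt}:{claimed_kwh}".encode()).hexdigest() == digest

digest, salt = commit(91)             # the tenants commit to 91 kWh
assert verify_opening(digest, salt, 91)       # opens to the true value
assert not verify_opening(digest, salt, 99)   # cannot be opened to any other value
```

The salt prevents a supplier from simply hashing every plausible kWh value and comparing digests; without it, small meter readings would be trivially guessable.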
In the thought experiment, these tenants prove to each supplier that they owe payment for 31 kWh from the solar supplier, 30 kWh from the wind supplier, 15.1 kWh from the geothermal supplier, and 14.9 kWh from the biodiesel supplier, without telling any of the other three suppliers how much power was supplied by their competitors. Because of this implementation, no electric utility metering is installed in the microgrid system, and data center facility personnel are no longer needed to act as the central authority between microgrid, power suppliers, and server tenants.
As Marco Conoscenti writes in his 2016 IEEE Computer Systems and Applications Conference review paper, “Blockchain for the Internet of Things: A Systematic Literature Review”,
A private-by-design IoT could be fostered by the combination of the blockchain and a P2P storage system. Sensitive data produced and exchanged among IoT devices are stored in such storage system, whose P2P nature could ensure privacy, robustness and absence of single points of failure (Conoscenti, Introduction; 2016)
In an internet-of-things environment, it becomes necessary to consider that not every IoT device is capable of storing an entire blockchain. A hybrid between blockchain and peer-to-peer storage that is hashed for security, in any manner described in this paper or in its consulted works, enables block chaining between lower-level I/O devices and higher-level entities like dispatchable generation and load-consuming facilities. For example, if a substation only needs to store the portion of a blockchain required for it to function as a node, then its owner is more likely to enroll it as an additional resource. Consulting Conoscenti's rather thorough review once more, he goes on to write:
Combined with this storage system, the blockchain has the fundamental role to register and authenticate all operations performed on IoT devices data. Each operation on data (creation, modification, deletion) is registered in the blockchain: this could ensure that any abuse on data can be detected. Moreover, access policies can be specified and enforced by the blockchain, preventing unauthorized operations on data.
The hybridization of blockchain to each local IoT device enables the tracking and verification of entire oceans of pure data; furthermore, granting the control policy for a modernized grid to the blockchain creates scrutiny against malevolent actors at an almost microscopic level by design-intent, without intruding on an individual entity’s privacy. Think back to the zero-knowledge proof of knowledge for clarity. It is possible, via emerging cryptographic algorithms like zkSNARKs, for blockchains to validate if a self-metering transmission pole registered 32 amperes of current at some arbitrary time without knowing the value was 32 amperes.
5. Automation of Components in a Microgrid
Taking things a step further: given the proper implementation, power systems, and computational resources, it is possible to fully automate the following components of a microgrid:
Creation of economic dispatch computation.
Assurance of data security.
Decentralized computation of economic dispatch problem for the microgrid.
Proposed dispatch of power based on optimized solution to the economic dispatch problem.
Likely via the standard application of the nonlinear distributed Newton-Raphson method to modernized grid modeling if the microgrid has a large number of buses; otherwise, simple calculus scripts should be able to solve it.
Proof of stake or zero knowledge proof of optimal solution to economic dispatch and of funds owed to all affected parties.
Proposal of financial compensation for economic dispatch to the network consensus algorithm without a 3rd party broker or metering entity.
Approval or denial of proposed economic dispatch, financial compensation, and decentralized computation of conformity to consensus algorithm by the blockchain network.
Dispatch of power from generating entities.
Scrutiny against malevolent actions after or during dispatch.
Storage and processing of data from all points between generation and loads.
Decentralized updates to a hashed ledger of transactions or ledger of power flow at set intervals.
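As a toy version of the optimized-dispatch step in the list above, a merit-order (cheapest-first) dispatch under asserted reserves can be sketched as follows. A real economic dispatch would solve the nonlinear optimization noted in the list; all prices and reserve figures here are illustrative:

```python
# Hypothetical zero-carbon suppliers with illustrative prices ($/kWh) and reserves.
suppliers = [
    {"name": "solar",      "price": 0.08, "reserve_kwh": 31},
    {"name": "wind",       "price": 0.07, "reserve_kwh": 30},
    {"name": "geothermal", "price": 0.10, "reserve_kwh": 16},
    {"name": "biodiesel",  "price": 0.12, "reserve_kwh": 15},
]

def merit_order_dispatch(suppliers, load_kwh):
    """Fill the load cheapest-first, capping each supplier at its reserve."""
    dispatch, remaining = {}, load_kwh
    for s in sorted(suppliers, key=lambda s: s["price"]):
        take = min(s["reserve_kwh"], remaining)
        if take > 0:
            dispatch[s["name"]] = take
        remaining -= take
    if remaining > 0:
        raise ValueError("insufficient reserves for requested load")
    return dispatch

dispatch = merit_order_dispatch(suppliers, 80)
# Wind (cheapest) fills first, then solar, then geothermal, then biodiesel.
assert sum(dispatch.values()) == 80
```

In the decentralized setting described above, every node would compute this same deterministic dispatch from the same asserted reserves and prices, so the proposal and its verification coincide.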
The full scope of autonomy, and of reduced costs for creating central facilities to manage the microgrid, generation dispatch, transmission stations, and data center oversight, is astounding. As Conoscenti writes, "In this framework, people are not required to entrust IoT data produced by their devices to centralized companies: data could be safely stored in different peers, and the blockchain could guarantee their authenticity and prevent unauthorized access" (Conoscenti; 2016). There is no need for an electrical utility or third party to meter, verify, or broker the exchange of power for payment, because the entire network of power systems connected to the microgrid and supporting the data center facility provides decentralized oversight. This system satisfies the mission critical application of the original Byzantine Generals Problem by virtue of high-quality, zero-embodied-carbon device components being used to provide distributed redundant computations, stored partially and fully across a much higher number of systems than a data center would typically be able to rely upon for logistical data. The microgrid itself is intrinsically designed without a single point of failure, and blockchain systems have a high resistance to Byzantine faults, providing high reliability in data security.
Now consider a scenario in which each node in the modernized microgrid (i.e., measurement and relaying, smart devices, control systems, electrical equipment, etc.), data center (service point, monitoring and control systems, electrical equipment, etc.), or transmission and distribution system (controls, transmission transformers, meters, substations, etc.) contains a partial or full record of the blockchain, addressing the issue of limited resources at remote equipment locations. Each node may be owned by a different entity, or many nodes may be owned by one entity (cautioning against centralization). The architecture of the blockchain network is layered to reduce computational strain on entities with less power or smaller nodes, and each node contains at least the portion of the blockchain it needs to operate and to connect to at least three neighboring nodes (see Lamport, Shostak, and Pease regarding “neighboring commanders”).
Figures 6 and 7 from Lamport, Shostak, and Pease, “The Byzantine Generals Problem”
Thus, the partial blockchains forming or supporting the network collectively form at least a “3-regular graph,” ensuring solvable distribution and decentralization of messages and communications. This is necessary in a smart grid environment because smaller components may not be able to store the entire blockchain. The Byzantine Generals Problem provides further background on the requirement for at least a 3-regular graph; essentially, the property guarantees unique paths between entities, so that neighboring nodes provide sufficiently many distinct routes for data to remain resilient against malevolent action.
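The 3-regular requirement is easy to check mechanically. The sketch below, with illustrative node names, verifies that every node in a communication graph has at least three distinct, bidirectional neighbors; the complete graph on four nodes is the smallest graph that passes.

```python
def is_at_least_3_regular(adjacency):
    """Return True if every node has >= 3 distinct neighbors and all
    links are bidirectional, so messages can be relayed both ways."""
    for node, neighbors in adjacency.items():
        if len(set(neighbors)) < 3:
            return False
        for n in neighbors:
            if node not in adjacency.get(n, []):
                return False
    return True

# Four nodes, each connected to the other three (the complete graph K4),
# is the smallest topology satisfying the 3-regular requirement.
k4 = {
    "G1": ["G2", "G3", "DC"],
    "G2": ["G1", "G3", "DC"],
    "G3": ["G1", "G2", "DC"],
    "DC": ["G1", "G2", "G3"],
}
assert is_at_least_3_regular(k4)
```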
Partial blockchain storage is especially valuable for microgrids. As Conoscenti describes, “we suggest to develop IoT applications on top of another secure but scalable blockchain […] Moreover, we suggest to adopt a layered architecture which supports thin clients to allow IoT devices with limited resources to store only a portion of the blockchain” (Conoscenti, 2016). Such a blockchain is ideal for an IoT and microgrid environment because it allows smaller measurement devices to form or follow the blockchain without an overwhelmingly large stock of data resources at the seemingly countless nodes that form a microgrid.
Regarding the significance of understanding proof of stake, consider that in April of 2020, Niklas Nikolajsen, founder of the Swiss crypto broker Bitcoin Suisse, claimed that Bitcoin will transition to a Proof of Stake algorithm once the Ethereum network demonstrates the algorithm’s success in market. To an avid follower of blockchain technologies, this is highly significant and disruptive. As author Marie Huillet recounts, an outtake from a German documentary uploaded on April 6, 2020, records Nikolajsen saying, “[Bitcoin’s move to Proof-of-Stake] is not planned, but the second-largest cryptocurrency, Ether, will move to a Proof-of-Stake concept that demands vastly less electricity, already in a few months. I’m sure, once the technology is proven, that Bitcoin will adapt to it as well” (Huillet, 2020). Nikolajsen goes on to claim that Proof of Stake (POS) is a superior system to Proof of Work (POW) once it is proven to work well.
To briefly describe Proof of Stake, imagine a blockchain whose “nodes in the network engage in validating blocks, rather than mining them, as in PoW”. In POS, these block validators are selected by algorithm; in the case of cryptocurrency, based on “the number of tokens a given node has staked in their wallet — i.e., deposited as collateral in order to compete to add the next block to the chain” (Huillet, 2020). In the case of a microgrid, or any modernized grid technology, POS can be applied as follows:
Block validators are selected from a public pool of miners or a private pool of microgrid involved entities (i.e., dispatchable generation, storage, transmission, tenants, data center, microgrid distribution and operations, smart devices, etcetera) based on deterministic algorithm.
The algorithm selects a subpopulation of the network to be block validators based upon how much tokenized “trust” they are willing to stake, and iteratively how much trust they have successfully demonstrated for past computations.
Any gamification method, direct financial compensation, dynamic incentive, or other method of providing entities a return for staking trust can be used to ensure continued involvement in proving stake.
Thus, when a supplier’s conditions for sending a power dispatch downstream are met, it submits a computation to the network that is turned into a block, with identifying data hashed to secret randomness associated with a unique value; members of the network are then autonomously selected and autonomously bid to add the next block to the chain.
Entities awarded the bid are granted an incentivizing return upon successful demonstration of continued stake, and their trust index is increased.
This modified proof of stake can be considered a threshold means of determining trustworthiness and incentivizing members of the microgrid to continue acting in the best interests of the system. Because only continued stake is required, rather than proof of work, the network can potentially complete its computations with much lower power requirements.
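The selection step described above can be sketched as stake-weighted random sampling. Everything here is an illustrative assumption rather than a standard protocol: entities are weighted by staked trust times their demonstrated trust index, and seeding the generator with a shared value (e.g., the previous block hash) lets every node reproduce the same selection deterministically.

```python
import random

def select_validators(candidates, k, seed):
    """Pick k block validators, weighting each entity by its staked
    trust multiplied by its demonstrated trust index. `candidates`
    maps an entity name to (staked_trust, trust_index)."""
    rng = random.Random(seed)
    pool = dict(candidates)
    chosen = []
    for _ in range(min(k, len(pool))):
        names = list(pool)
        weights = [pool[n][0] * pool[n][1] for n in names]
        pick = rng.choices(names, weights=weights, k=1)[0]
        chosen.append(pick)
        del pool[pick]  # sample without replacement
    return chosen

entities = {"G1": (10, 0.9), "G2": (4, 0.7), "G3": (8, 0.95), "G4": (2, 0.5)}
validators = select_validators(entities, 2, seed="prev-block-hash")
assert len(validators) == 2 and set(validators) <= set(entities)
```

Because the seed is shared, every honest node computes the same validator set without any central coordinator.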
Continuing the transition from theory and finance to microgrid operations, consider the purely hypothetical system of operations below for a blockchain-integrated electric dispatch network of entities serving a large client’s demand.
(Source: EYP Mission Critical Facilities, Part of Ramboll)
Any number of operations to encrypt, package, track, and record information and transactions may be introduced to the system and given a control policy dependent on the demand or required load and on measurements of the system. The blockchain algorithm can take the considerations and rulebooks of each entity as triggers for a dispatch request, hashing them into secret inputs that satisfy the consensus algorithm. Thus, dispatch requests can be triggered autonomously, and transactions authorized, so long as they satisfy the algorithm for consensus. So long as the miners of distributed computations, or the elected trustworthy members of the network, act in good faith and continue either to perform zero-knowledge proof and validation or to stake and gain indexed trust by achieving true results from the consensus algorithm, participants in the game of this network witness optimized reliability, operational efficiency, and profits.
Using distributed computations via block mining, proof of stake, and/or the index of trust, the algorithm can be checked against current conditions by distributing calculation across the entities least likely to feed bad information or send false results. Once this is performed, a block, or proposed dispatch, is produced and distributed to the network for the consensus algorithm as a final check. In the case of renewables, the consensus algorithm must allow a threshold or variance within which slight losses of profit or energy, or a minimal and momentary increase in embodied carbon, can occur. Anything that falls outside these bounds is rejected outright, as is anything that fails the consensus algorithm of the blockchain network. Anything that satisfies the UN2030 requirements for a true zero-greenhouse-gas power distribution scheme, falls within the bounds of losses, and satisfies the consensus algorithm is approved autonomously, and funds are distributed to all relevant parties enrolled in and affected by the economic dispatch and the use of computational resources to calculate the optimized dispatch.
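The final threshold check can be reduced to a simple predicate. The field and threshold names below are assumptions for illustration; the logic simply rejects any proposed dispatch whose measured variances fall outside the network's tolerance for loss, price increase, or added carbon.

```python
def approve_dispatch(proposal, thresholds):
    """Sketch of the final consensus check: a proposed dispatch is
    approved only if every measured variance falls inside the
    network's tolerance for loss. All field names are illustrative."""
    return (
        proposal["expected_loss_kwh"] <= thresholds["max_loss_kwh"]
        and proposal["price_increase_pct"] <= thresholds["max_price_increase_pct"]
        and proposal["added_carbon_kg"] <= thresholds["max_carbon_kg"]
    )

limits = {"max_loss_kwh": 5.0, "max_price_increase_pct": 2.0, "max_carbon_kg": 0.0}
ok = {"expected_loss_kwh": 1.2, "price_increase_pct": 0.5, "added_carbon_kg": 0.0}
bad = {"expected_loss_kwh": 9.0, "price_increase_pct": 0.5, "added_carbon_kg": 0.0}
assert approve_dispatch(ok, limits) and not approve_dispatch(bad, limits)
```

Note the zero-carbon threshold (`max_carbon_kg = 0.0`) directly encodes the ZGHG requirement discussed in the passage above.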
6. Optimizing the Supply of Zero-Greenhouse-Gas Energy via Decentralized Networks
As Helen Carruthers and Tracy Casavant of the Light House Sustainable Building Centre Society discuss in the 2013 Commission for Environmental Cooperation standards review, “What is a ‘Carbon Neutral’ Building?”, the definition of “carbon neutral” continues to evolve as it relates to measuring, reducing, and offsetting carbon energy (Carruthers & Casavant, 1). Though a scholar like Dr. Krawiec-Thayer might argue this is the result of political controversy and industry resistance to buzzwords, Carruthers, as both a Project Manager and LEED AP, would more likely attribute changes in the meaning of “carbon neutral” to the emergence of new technologies, new authorities, and an improved ability to identify the embodied and operating energy of carbon emissions. Take, for example, Carruthers and Casavant’s description of a popular approach to carbon neutral building design:
• Integrating passive design strategies
• Designing a high-performance building envelope
• Specifying energy efficient HVAC systems, lighting, and appliances
• Installing on-site renewable energy
Observe that this approach to carbon neutral design considers the integration of design strategies, material performance, renewables, and energy-efficient device specification. It appears that this process has rigor. By applying informal reasoning, designers may even argue that the mainstream approach adequately addresses cradle-to-grave needs for carbon neutrality; however, this approach to carbon neutral design is dangerously lacking in its bounds, verification, and certification. It appears to be more of a bookend to the carbon-emitting designs of old than an evolution in technique. There is no consideration of the carbon emissions due to employees, on-site clients, and the delivery of supplies to site; furthermore, this approach lacks any mention of the carbon emitted to create a high-performance building envelope. It does not consider how much carbon is emitted to create energy-efficient electrical equipment. It also fails to consider the production of renewable equipment or the emissions that may occur during the installation of equipment and the build phase of the facility itself (Carruthers and Casavant, 1-2). As Carruthers and Casavant state:
A carbon neutral definition should include specific information/requirements relating to the following:
• System boundary – includes within it all areas associated with the buildings where energy is used or produced, i.e. operational energy, embodied energy of the materials used, energy used for the construction process and travel for occupants.
• Renewable energy and carbon offset 3rd party certification.
• Verification or certification of the calculated carbon emissions.
(Carruthers and Casavant, 1).
Just as a physicist must consider the most relevant boundaries of a phenomenon under observation, entities and firms creating true ZGHG microgrids and data centers must take into consideration, at the earliest stages of design, the true boundaries of their project’s carbon impact, the validity of their carbon emissions calculations, and the objective certification of emissions offsetting. Thus, Carruthers and Casavant introduce three definitions for carbon neutral building design:
• Carbon Neutral – Operating Energy: […] Carbon neutral with respect to Operating Energy means using no fossil fuel GHG emitting energy to operate the building. Building operation includes heating, cooling and lighting.
• Carbon Neutral – Operating Energy + Embodied Energy: This definition for Carbon Neutrality builds upon the definition above and also adds the carbon emissions associated with energy embodied in the materials used to construct the building.
• Carbon Neutral – Operating Energy + Site Energy + Occupant Travel: This definition of carbon neutrality builds upon the inclusion of operating energy and embodied energy, and also reflects the carbon costs associated with a building's location. This requires a calculation of the personal carbon emissions associated with the means and distance of travel of all employees and visitors to the building.
(Carruthers & Casavant, 3).
The third definition considers the emitting energy of building operation, the production of the site and its parts, and the carbon costs induced by the build site’s location. In the face of so many additional contributions to carbon emissions, it becomes apparent just how little the popular approach to carbon neutral design achieves. Perhaps popular design satisfies investors and pundits, but it does not address every node of carbon emissions in such a large network of varying interests. As conjecture, this supply chain itself could be examined for decentralization by future authors.
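The widest boundary above amounts to simple arithmetic once each category is measured: operating emissions per year, embodied emissions amortized over the building's lifetime, and occupant travel per year. The figures in the example below are hypothetical placeholders, used only to show the calculation.

```python
def annual_emissions_kgco2(operating_kg, embodied_kg, lifetime_years, travel_kg):
    """Annualized footprint under the third (widest) carbon-neutral
    boundary: operating energy, embodied energy amortized over the
    building lifetime, and occupant travel. Inputs are kg CO2e."""
    return operating_kg + embodied_kg / lifetime_years + travel_kg

# Hypothetical campus: 100 t/yr operating, 600 t embodied over a
# 30-year lifetime, 25 t/yr occupant travel.
assert annual_emissions_kgco2(100_000, 600_000, 30, 25_000) == 145_000.0
```

Under the popular approach, only the first term would be counted; the other two terms are exactly what the wider definitions add.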
 “The base definition for Carbon Neutral Design is taken from https://www.architecture2030.org“ (Carruthers & Casavant)
(Source: Grid Evolution, Vintage 2019)
Consider the visual depiction of “Tomorrow’s Decarbonized and Decentralized Power Market,” as illustrated by Grid Evolution. In modernized grid technologies, a bidirectional network of data, power, and transactional nodes emerges between dispatchable generation and the end customers, loads, or consumers of power. The complexity of such a system is massive and exceeds the ability of a central authority to manage effectively, as argued previously in this white paper and corroborated by the implications of Lamport, Shostak, and Pease. Consider, on top of the smart grid itself, the inevitability of ZGHG smart grids, which bring the considerations of operating + site + travel energy to any power system and accompanying infrastructure. At this point such a system has a high order of data nodes and multiple rulebooks to consider. The authors of the Byzantine Generals Problem would likely argue that a system of many commanding entities that do not necessarily trust one another and cannot agree on a single entity to act as an unbiased verifier cannot succeed via centralized decision making (Lamport, Shostak, and Pease; 1982). So a microgrid, as modernized grid technology, is unlikely to ensure reliability when more than three parties are involved, let alone a microgrid designed to have a zero-carbon impact from end to end and from cradle to grave. Take into consideration any of the schemes or hypotheticals proposed so far, and this seemingly unsolvable problem finds clarity. Observe the kick-off to this series of papers on the future of zero carbon networks: the “Decentralized Dispatch Problem”.
Let there exist a microgrid formed by electrical infrastructure and multiple facilities on a data center campus, whose modernized grid receives dispatchable generation from four sources. These sources are parameterized with initial conditions, at the highest level, as follows:
G1[g=φ], A large solar plant at the null set condition “φ”
G2[g=φ], A small solar plant at the null set condition “φ”
G3[g=φ], A large wind plant at the null set condition “φ”
G4[g=φ], A small geothermal plant at the null set condition “φ”
The null set condition for these four sources is “not to dispatch power”, owing to the “retreat” command of the Byzantine Generals Problem. All four of these sources are competitors; by nature, if one were to fail, more power would be requested by the remaining three. Therefore, none of them can trust one another. This system shall be assumed trustless.
However, all four do share a common goal, that is, the dispatch of generated and stored power to a microgrid supporting a data center for profit. To achieve the optimal dispatch of power, it would not be unusual for an economic dispatch problem to be computed by a central entity as follows:
Here, c_u is the time-varying cost or revenue of utility power or injection back to the utility, and p_dc is the storage of power being loaded for reserves or unloaded for use or injection at time t. C(t) is the cost of the economic dispatch solution to power requirements, and P(t) is the net power of the economic solution from the perspective of the data center main bus at time t. e(t) is the solution to the economic dispatch problem at time t, and τ is some constant value of t, to keep this model simple. Note that this model could be expanded to include scattered reserves for storing power within the microgrid itself. Additionally, only the resources of a single data center shall be considered for this initial paper, though a data center microgrid could be modeled with parameters at each colocation facility and, at a deeper level, each colocation tenant. The bounds of a microgrid are dynamic and highly dependent on frame of reference, such that each colocation facility on the microgrid-forming data center campus might consider all the other facilities to be the microgrid from its own perspective.
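The dispatch formulation this paragraph describes appears not to have survived extraction (it was likely an embedded figure). One formulation consistent with the symbols defined here (c_u, p_dc, C(t), P(t), e(t), τ), offered strictly as an assumed reconstruction rather than the author's original equation, is:

```latex
e(t) = \arg\min_{p}\ C(t), \qquad
C(t) = c_u(t)\,p_u(t) + \sum_{i=1}^{4} c_{g_i}(t)\,p_{g_i}(t) + c_{dc}(t)\,p_{dc}(t),
```
```latex
\text{subject to}\quad
P(t) = \sum_{i=1}^{4} p_{g_i}(t) + p_u(t) - p_{dc}(t), \qquad 0 \le t \le \tau,
```

where p_u and p_{g_i} are the powers drawn from the utility and the four generating sources G1 through G4.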
The economic dispatch formula in this paper states that the economic dispatch solution e(t) models the calculus-optimized supply and injection of power, from the perspective of the microgrid-forming campus at time t, on the basis of cost savings. The goal of economic dispatch, as a refresher, is to obtain the lowest value of C (cost per kilowatt or kilowatt-hour) that solves the problem e for power P at time t. Thus, a linear equation is formed to solve e(t). If there are multiple cost coefficients available at time t, by whatever means, then a system of linear equations is formed that may be solvable as a matrix of coefficients or via differential equations; however, the mathematics to demonstrate such a dynamic system are beyond the scope of this paper and its intended audience.
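For a single time step with fixed cost coefficients, the dispatch above reduces to a classic merit-order calculation: serve demand from the cheapest source first. The sketch below uses the paper's four-source scenario with illustrative, assumed costs and capacities (relative cost scores and kW figures chosen for the example, not real data).

```python
def economic_dispatch(sources, demand_kw):
    """Merit-order sketch of single-interval economic dispatch: fill
    demand from the cheapest source first. `sources` maps a name to
    (cost, capacity_kw), where cost is a relative per-unit score."""
    dispatch, remaining = {}, demand_kw
    for name, (cost, cap) in sorted(sources.items(), key=lambda kv: kv[1][0]):
        take = min(cap, remaining)
        if take > 0:
            dispatch[name] = take
            remaining -= take
    total_cost = sum(sources[n][0] * kw for n, kw in dispatch.items())
    return dispatch, total_cost

sources = {
    "G1_solar": (20, 500),
    "G3_wind": (30, 800),
    "G4_geothermal": (50, 200),
    "utility": (120, float("inf")),
}
plan, cost = economic_dispatch(sources, demand_kw=1000)
assert plan == {"G1_solar": 500, "G3_wind": 500} and cost == 25000
```

The expensive utility tier is only drawn upon when the renewable sources cannot cover demand, which mirrors the cost-minimizing behavior the formula encodes.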
Assuming a decentralized decision-making consensus, a central entity no longer solves this problem, as the utility once did or as the system designer may be tempted to place upon the shoulders of the microgrid-forming data center campus itself (i.e., adding cost, footprint, carbon impact, and additional equipment). Instead, blockchain enables an elegant solution for economic dispatch using the resources of the generation and distribution network itself. While numerous methodologies are available to establish collaboration in blockchain, imagine a high-level adaptation from the proof of stake method to a proof of trust, for the sake of thresholding block miners, in which the network itself distributes computation. To achieve successful dispatch, the following pseudo-algorithm must be realized by design and operation.
Here, G_dc represents data center resources, demand-side management, reserve power, and control.
Here, the unity set condition implies that an entity defaulting to the state of unity is normally injecting power into its main bus. Imagine a second case in which there is a microgrid supporting a data center (i.e., a microgrid-forming campus of colocation facilities whose bus-to-bus infrastructure forms a path for the supply of power to some data center) capable of storing enough reserves from distributed energy resource dispatch, and whose reserves can supply enough energy to regularly exceed the demand of the data center campus or facility. So, it follows:
For whatever predicted or required time interval [a, b]; also, the initial computation assumes that the microgrid is operating in utility mode, so that the value of the utility cost is non-negative and the data center bus is not initially injecting power. If the initial computation finds that the right-hand-side conditional is satisfied, then the system allows the data center to operate as a power source, via either demand-side management for demand response (reducing strain on the macrogrid in exchange for financial compensation from the utility) or direct injection from the facility buses and/or Distributed Energy Resource (DER) buses into the utility bus.
The economic dispatch problem is initiated upon autonomous identification of a need for energy downstream or an offer to supply energy upstream. The data center may try to resell power it has received from the utility or from any distributed energy resource back to the alternative for a profit, and DERs may try to do the same to the data center, the utility, and one another. A proverbial energy trading market is formed by each microgrid. Metering technology is integrated with the blockchain via partial storage at local IoT devices, so that a need for power at any point in the network may be detected and a network-optimized energy trade initiated. Each entity has its own rulebook containing parameters for a safe investment of trust tokens and for energy dispatch for direct profit, and its rulebook is autonomously evaluated against an anonymous need for power somewhere in the network, alongside current conditions. If the entity’s rulebook greenlights the need for power, then that entity’s processors automatically wager trust to build the block of economic dispatch. If the entity builds a block whose partial economic dispatch solution is false, or is biased toward the block builder beyond the network’s threshold for loss, then the entity loses its wagered trust and has a reduced ability to be selected for future block-building power trading solutions. If the entity builds a truthful solution to the economic dispatch problem, it is rewarded with an increase in trust. The consensus algorithm has thresholds for loss, and the blockchain itself has algorithms for detecting false blocks or deliberate attempts at sabotage, which are much more difficult to accomplish in a proof of stake or proof of trust system.
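The rulebook greenlight step can be sketched as a local predicate that either declines to bid or returns a trust wager. All field names and thresholds here are hypothetical, chosen only to illustrate the shape of an entity's autonomous evaluation.

```python
def consider_bid(rulebook, request):
    """Sketch of an entity's autonomous rulebook check: wager trust
    tokens on building the dispatch block only when the anonymous
    request satisfies every local parameter. Field names are assumed."""
    if request["kw"] > rulebook["max_dispatch_kw"]:
        return None  # beyond safe capacity: do not bid
    if request["price_per_kwh"] < rulebook["min_price_per_kwh"]:
        return None  # not profitable enough to risk trust
    return rulebook["trust_wager"]  # tokens staked on a truthful solution

rules = {"max_dispatch_kw": 400, "min_price_per_kwh": 0.04, "trust_wager": 3}
assert consider_bid(rules, {"kw": 250, "price_per_kwh": 0.06}) == 3
assert consider_bid(rules, {"kw": 900, "price_per_kwh": 0.06}) is None
```

A real rulebook would carry many more parameters (reserve margins, carbon limits, time-of-day pricing), but each reduces to the same pattern: decline, or stake trust.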
Out of all entities wagering some amount of their trust tokens, the network consensus algorithm selects a random subpopulation from the bidding entities using one of three options: (1) fulfilling a minimum computed “net trust” requirement based on the importance of the economic dispatch being considered; (2) a set of summed trust, so that the total trust of block builders fulfills good-actor requirements even if an individual bad actor makes it into the bid; or (3) a computed minimum wager of trust to be selected. The third option has a potential issue: participant rulebooks either need to know the minimum wager of trust or must blindly guess how much to wager. The second option builds additional resilience against bad actors, because a single bad actor getting in and wagering trust on a solution they provide can be systematically overwhelmed by all the other solutions to the problem. Some weight may need to be given to individual solution submissions based on how trustworthy an entity is, or how much trust it wagered, to create an incentivizing market.
In proof of trust, the prover must wager some amount of their trust tokens, S_k, in order to bid on inclusion in the pool of potential block-building participants. Depending on the system, trust tokens could in theory form a secondary trading system on top of the direct energy trading floor created by bus-to-bus injections, or they could be used by entities who would like to sell energy injection anonymously to another entity calling for grid injection. Entities could wager their trust that a solution is cost efficient, energy efficient, or zero carbon, if that is of interest to the recipient of power. There are many different ways to get participants interested in wagering trust in order to build blocks, each with pros and cons; likewise, each method of gamifying this system may need to be tweaked for resilience and holes in logic.
where B is the initiation of some block algorithm to wager trust S_k in exchange for the ability to sell power to some anonymous demanding load p_j.
This notion of wagering trust suggests another potential implementation of ZGHG data-center-formed microgrids and energy trading, in which multiple entities may provide solutions that benefit them for profit while falling within some maximum threshold of loss, price increase, or temporary efficiency drop at any point in the system. So long as a solution falls within the consensus algorithm’s maximum loss thresholds, and the trust ranking of the party is high enough, the party can either be directly selected as an economic dispatch solution that optimizes cost from its facility’s perspective or be entered into a random pool of individuals with fair trust rankings. Once a solution is selected, it is sent to a random set of entities with trust rankings that match the significance of the proposed dispatch. It is hashed to protect individual identities, and either proof of stake, proof of trust, or proof of knowledge is used by these block builders to ensure that they are incentivized to build a correct block for the selected dispatch solution. The redundant blocks are based on redundant data taken from computations made across the network, ensuring a higher tier of redundancy in computational infrastructure, and the redundant blocks are checked against the consensus algorithm. Redundant blocks offer higher protection but, just like a redundant data center, they can increase cost to the network or individual entities. Assuming a single block is made and added to the blockchain, the blockchain update is examined, approved or rejected, and distributed to the network. Transactions, operations, and exchanges of funds and power are automatically dispatched for all relevant parties. Thus, the zero-greenhouse-gas microgrid is able to meet its own power needs and potentially obtain external profit via injection.
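The redundant-block check above can be sketched as hashing plus majority reconciliation. The function names are illustrative assumptions. The block hash covers only the solution and the previous hash, so independently computed blocks for the same solution agree, while the builder's identity is stored only as a digest to protect individual identities, as described.

```python
import hashlib
import json
from collections import Counter

def build_block(solution, prev_hash, builder_id):
    """Hash a dispatch solution into a candidate block; identical
    solutions yield identical block hashes regardless of builder."""
    body = json.dumps(solution, sort_keys=True)
    return {
        "solution": solution,
        "builder": hashlib.sha256(builder_id.encode()).hexdigest(),
        "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest(),
    }

def reconcile(candidates):
    """Accept the strict-majority block among redundantly computed
    candidates; with no strict majority, the round is rejected."""
    counts = Counter(b["hash"] for b in candidates)
    winner, votes = counts.most_common(1)[0]
    if votes <= len(candidates) // 2:
        return None
    return next(b for b in candidates if b["hash"] == winner)

solution = {"G1": 500, "G3": 500, "price": 25000}
blocks = [build_block(solution, "0" * 64, b) for b in ("G1", "G3", "DC")]
blocks.append(build_block({"G1": 0}, "0" * 64, "bad-actor"))
accepted = reconcile(blocks)
assert accepted is not None and accepted["solution"] == solution
```

A lone bad actor submitting a biased solution is outvoted by the redundant honest computations, which is precisely the resilience argument made for option (2) above.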
In conclusion and in brief, using the framework of decentralized networking, control, and computation enabled by blockchain technologies and hybrid peer to peer IoT storage, it is possible to model the operation and control of a simple yet fully renewable microgrid data center environment. Instead of designing green data centers living downstream of high emissions utility power supplies, it is finally possible for engineers and entrepreneurial interests to create a system designed for green energy, one that demands it.
7. Future Papers
Second Paper, First Series: “Server Subletting to Save the World: How Automated Server Resource Trading Works and Why Green Data Centers Need it”
Third Paper, First Series: “Taking Back the Grid: Integration between Zero-Emission Microgrids and Data Center Tenants”
Fourth Paper, First Series: “Microgrid 2.0: How the Decentralized Tomorrow will Create Microgrids of Data centers”
“Decentralized Energy As A Service: A Green Future Without Macrogrids”
Emerging Technology Round-Up: "A Who’s Who of Zero Carbon Data Center Innovators"
8. Further Reading
Y. Sang, U. Cali, M. Kuzlu, M. Pipattanasomporn, C. Lima and S. Chen, "IEEE SA Blockchain in Energy Standardization Framework: Grid and Prosumer Use Cases," 2020 IEEE Power & Energy Society General Meeting (PESGM), Montreal, QC, Canada, 2020, pp. 1-5, doi: 10.1109/PESGM41954.2020.9281709.
R. G.S. and M. Dakshayini, "Block-chain Implementation of Letter of Credit based Trading system in Supply Chain Domain," 2020 International Conference on Mainstreaming Block Chain Implementation (ICOMBI), Bengaluru, India, 2020, pp. 1-5, doi: 10.23919/ICOMBI48604.2020.9203485.
V. Naidu, K. Mudliar, A. Naik and P. Bhavathankar, "A Fully Observable Supply Chain Management System Using Block Chain and IOT," 2018 3rd International Conference for Convergence in Technology (I2CT), Pune, India, 2018, pp. 1-4, doi: 10.1109/I2CT.2018.8529725.
R. S. Kadadevaramth, D. Sharath, B. Ravishankar and P. Mohan Kumar, "A Review and development of research framework on Technological Adoption of Blockchain and IoT in Supply Chain Network Optimization," 2020 International Conference on Mainstreaming Block Chain Implementation (ICOMBI), Bengaluru, India, 2020, pp. 1-8, doi: 10.23919/ICOMBI48604.2020.9203339.
M. Nakasumi, "Information Sharing for Supply Chain Management Based on Block Chain Technology," 2017 IEEE 19th Conference on Business Informatics (CBI), Thessaloniki, Greece, 2017, pp. 140-149, doi: 10.1109/CBI.2017.56.
Z. Mahmood and J. Vacius, "Privacy-Preserving Block-chain Framework Based on Ring Signatures (RSs) and Zero-Knowledge Proofs(ZKPs)," 2020 International Conference on Innovation and Intelligence for Informatics, Computing and Technologies (3ICT), Sakheer, Bahrain, 2020, pp. 1-6, doi: 10.1109/3ICT51146.2020.9312014.
Aljosha Judmayer; Nicholas Stifter; Katharina Krombholz; Edgar Weippl; Elisa Bertino; Ravi Sandhu, Blocks and Chains: Introduction to Bitcoin, Cryptocurrencies, and Their Consensus Mechanisms, Morgan & Claypool, 2017, doi: 10.2200/S00773ED1V01Y201704SPT020.
S. E. Chang and Y. Chen, "When Blockchain Meets Supply Chain: A Systematic Literature Review on Current Development and Potential Applications," in IEEE Access, vol. 8, pp. 62478-62494, 2020, doi: 10.1109/ACCESS.2020.2983601.
9. Works Cited
H. Carruthers and T. Casavant, “What is a ‘Carbon Neutral’ Building?,” Commission for Environmental Cooperation, 2013, pp. 1–6.
ISG, Does your enterprise need blockchain? Information Services Group, 2021.
L. Lamport, R. Shostak, and M. Pease, “The Byzantine Generals Problem,” ACM Transactions on Programming Languages and Systems, vol. 4, no. 3, pp. 382–401, 1982.
M. Conoscenti, A. Vetrò, and J. C. De Martin, “Blockchain for the Internet of Things: A systematic literature review,” 2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA), 2016, pp. 1–6. https://ieeexplore.ieee.org/document/7945805
M. Huillet, “Bitcoin Will Follow Ethereum And Move to Proof-of-Stake, Says Bitcoin Suisse Founder,” 14-Apr-2020.
Tomorrow's Decarbonized and Decentralized Power Market. Grid Evolution.
“What are zk-SNARKs?,” Zcash, 09-Sep-2019. [Online]. Available: https://z.cash/technology/zksnarks/. [Accessed: 19-Apr-2021].
About the Author
Matthew J. Karashik, EIT, is an Electrical Engineer at EYP MCF, Part of Ramboll. Matthew’s experience includes engineering design, drafting, standards review, NFPA-70 (National Electrical Code) compliance, and the development of single-line diagrams, electrical floor plans, grounding plans, grounding diagrams, and electrical details. Matthew has also performed applicable local code review, site visits, surveys, and site assessments. His experience includes energy efficiency and cost savings analysis of emerging technologies for data centers and power utilities. Matthew has performed numerous site evaluations and demand-side management studies using energy modeling and monitoring software tools, and his experience includes design using Revit/BIM360. Matthew holds a Bachelor of Science in Electrical, Electronics and Communications Engineering from New York University.