Search Results

77 results found

Blog Posts (21)

  • A roadmap towards Net-Zero greenhouse gas emissions by 2030

    We are very pleased to issue our July newsletter! We can’t believe we are already into the second half of 2021! Time is flying as we emerge from the global pandemic. The past 14 months or so have been an incredibly challenging time for all of us, our clients and partners, and our families, and we all look forward to better days ahead. If there was still any question that our industry is critical to the function of everyday life, and to economic growth and progress in this country, let alone the world, that question has been put to rest. The importance of the mission-critical facility/data center to consumers, students, commercial businesses, governments, and organizations/institutions became clearer than ever as we all adjusted the way we work, learn, interact, share information, and entertain. Without the building type that we all focus on, managing our everyday lives would have been completely different, and the pandemic might have had even more devastating effects on the world around us. And while this was no doubt a quickly growing industry even before Covid-19, we believe its growth will be even stronger because of it. We want to thank our fantastic team here at EYP MCF for the amazing work they have done to keep our clients' growth plans intact and maintain their operations despite the many logistics, scheduling, health, and travel challenges faced. We also want to thank our clients for the continued faith they have shown in our firm during this time and for the many opportunities you have given us to partner. With this immense growth, our clients are more focused than ever on sustainability, energy efficiency, and the greenhouse gas (GHG) footprint of data centers. As some of you have seen, EYP MCF has partnered with i3 Solutions to develop a series of white papers on exactly this topic and has already released the first two papers (the third is coming soon!).
As a consultant devoted to this building type, we believe it is part of our mandate to research and innovate in this critical area. We are also working with some of our clients to study and implement changes based on this initiative. You will find links to the program below; you can download the papers there or on our new sustainability webpage, which also offers additional information, including interesting project case studies, articles, podcasts, and webinars we have recently been involved in. Lastly, we continue to see plenty of consolidation in the industry, as well as properties changing hands and newer players investing in the space. Please let us know if we can help with any due diligence work: our cross-country reach and global partners allow us to quickly review and assess facilities in the US and abroad for customers involved in transactions. Have a great summer; we look forward to connecting again later in the year! Please enjoy the newsletter, and let us know how we can help in any way. Rick and Brian. Click below to view our new sustainability webpage and to download the latest white papers:
  • Sustainability Webpage
  • Reaching for Net-Zero: Achieving Zero Carbon Data Centers by Decentralizing Consensus of Power Supply Amongst Utility and Microgrid Providers
  • The Case for Natural Gas Generators
  • West 7 Center: Using a Data Center Water Side Economizer on an Existing Facility to Reduce Water and Energy Usage
  • Infrastructure Sustainability Options and Revenue Opportunities for Data Centers

  • Sustainability in Data Center Lighting Design

    Sky's the Limit. When it comes to green features and sustainable design in data centers, lighting design does not get a lot of attention. After all, there is so much focus on energy savings from newer technologies in cooling, IT equipment, and power generation, which account for more significant energy savings in these critical facilities. Thanks to LED technology, lighting power consumption has decreased dramatically compared to the old days: lighting now accounts for approximately 4% of a data center's total energy consumption, and by using LED fixtures instead of fluorescent and HID sources we are able to cut that figure down significantly. So one might ask: what else is there to further reduce, eliminate, and save from lighting? In today's world, data centers are pushing the envelope to conserve energy, meet sustainability goals, and improve power usage effectiveness (PUE). Because all building systems are closely integrated to operate a more efficient building, effective and efficient lighting design not only reduces power usage, it also decreases cooling demand, since light fixtures dissipate heat. Effective lighting design will help improve PUE, and, more importantly, a good lighting design delivers significant cost savings through proper illumination, creating a better visual environment for all operators and users and helping attract potential clients, particularly for colocation providers. Lighting Fixture Selection. Too often we find a generic office lighting layout applied in a data hall, with lighting from 2x2 or 2x4 troffers in the aisles between the racks. This puts light where you want it but lacks the focus and efficiency to put it where you actually need it.
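To make the PUE framing above concrete, here is a minimal sketch of how a lighting retrofit propagates into PUE. The roughly 4% lighting share comes from the text; the load figures and the cooling relief factor are illustrative assumptions, not measured data.

```python
def pue(it_kw, cooling_kw, lighting_kw, other_kw):
    """Power Usage Effectiveness: total facility power / IT power."""
    return (it_kw + cooling_kw + lighting_kw + other_kw) / it_kw

# Illustrative 1 MW IT load. Lighting is set near the ~4% share cited above.
it_kw, cooling_kw, other_kw = 1000.0, 400.0, 100.0
fluorescent = pue(it_kw, cooling_kw, lighting_kw=60.0, other_kw=other_kw)

# Assume an LED retrofit halves lighting power, and each lighting watt
# removed also relieves roughly 0.3 W of cooling load (assumed ratio).
led = pue(it_kw, cooling_kw - 0.3 * 30.0, lighting_kw=30.0, other_kw=other_kw)

print(f"PUE before: {fluorescent:.3f}  after: {led:.3f}")
```

With these assumed numbers, PUE drops from 1.560 to about 1.521; the absolute values matter less than the mechanism, which is that every lighting watt saved is removed twice, once directly and once as avoided cooling.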
Data hall rack layouts are much like book stacks in a library: vertical-plane illumination matters more than horizontal-plane illumination, so that technicians can see and service the racks from the top to the very bottom server. An energy-efficient, asymmetric LED stack light is a good design solution in server rack aisles, providing proper vertical-plane illumination exactly where you need it. Lighting Sensor Control. A good, effective lighting design cannot be achieved with fixture selection alone; it must be combined with proper lighting control in order to achieve sustainability. TIA-942-A provides guidance on data center lighting design with three levels:
  • Level 1 is designed for unoccupied conditions, with enough illumination for video surveillance equipment.
  • Level 2 is designed for initial entry to the data hall, providing sufficient lighting for safe passage.
  • Level 3 is designed for occupied space, providing sufficient lighting for maintenance of equipment, with 500 lux in the horizontal plane and 200 lux in the vertical plane.
Grouped or zoned LED fixtures, controlled by multiple smart, dimmable occupancy sensors per zone, can be programmed to provide the proper light level when and where you need it. When the data hall is unoccupied, the lights can be dimmed to a minimal level, just enough to support video surveillance, without running the entire data hall at full brightness. As you enter the data hall, the individual occupancy sensors turn the lights up to a pre-programmed level and ‘follow’ you as you walk around, providing a sufficient and safe light level for navigation. Once you have reached the server rack, the sensor raises the lights to full brightness to support maintenance tasks. These smart, dimmable occupancy sensors provide light only where and when you need it, minimizing energy consumption.
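The three-level scheme described above amounts to a small state machine per lighting zone. In the sketch below, the Level 3 lux targets come from the text, while the dim percentages for Levels 1 and 2 are hypothetical assumptions (the standard describes illumination intent, not dimmer settings).

```python
# Hypothetical dimmer outputs per TIA-942-A-style lighting level. Only the
# Level 3 targets (500 lx horizontal / 200 lx vertical) come from the text;
# the percentages are illustrative assumptions.
DIM_OUTPUT_PCT = {1: 10, 2: 50, 3: 100}

def zone_level(motion_in_zone: bool, motion_at_rack: bool) -> int:
    """Pick the lighting level for one zone.

    Level 1: unoccupied, just enough light for video surveillance.
    Level 2: someone is moving through the zone (safe passage).
    Level 3: someone is working at a rack (full maintenance lighting).
    """
    if motion_at_rack:
        return 3
    if motion_in_zone:
        return 2
    return 1

# The lights "follow" an operator: dim everywhere, brighter along their
# path, full output only at the rack being serviced.
assert DIM_OUTPUT_PCT[zone_level(False, False)] == 10
assert DIM_OUTPUT_PCT[zone_level(True, False)] == 50
assert DIM_OUTPUT_PCT[zone_level(True, True)] == 100
```

A real deployment would layer timeouts and zone adjacacy onto this, but the core control decision per sensor event is this small.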
BMS Integration. Lighting should be viewed not as a standalone design but as part of the entire building system, since it contributes to overall efficiency. As technology advances, sensors are getting smarter every day. Since control, monitoring, and building management systems are all necessities in data centers, why not integrate lighting occupancy sensing into a combination sensor/control device that performs multiple functions in one? There are sensors on the market today that can detect motion to switch lights on and off while also sensing temperature and providing energy metering. These combination sensors can be tied into the building management system (BMS) to enhance overall building monitoring and control, and their motion detection can additionally support more robust security measures. Take the Next Step. LED is one of the most efficient light sources available today, but even LED technology has drawbacks. LEDs run on DC power, so operating an LED fixture requires a ‘driver’ to convert AC voltage to DC. The driver is usually built into the fixture housing, and it dissipates heat as it converts energy to drive the LED, which means more heat gain in the data hall and hence more load on the cooling units. One option for eliminating this heat gain is to take the driver out of the LED fixture. Some manufacturers now offer ‘driverless’ fixtures known as PoE (Power over Ethernet) lighting: the fixtures share a central driver located remotely, with power and control supplied to each fixture over low-voltage Cat5/6 cables. This takes the driver heat out of the data center aisles, reducing the load on HVAC systems even further, and provides a precise means of controlling each fixture.
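The cooling benefit of moving LED drivers out of the data hall can be estimated with simple arithmetic. Everything below (fixture count, wattage, driver efficiency) is an assumed, illustrative figure, not vendor data.

```python
# Rough estimate of heat removed from the data hall by relocating LED
# drivers to a remote/central enclosure (PoE lighting). All numbers are
# illustrative assumptions.
fixtures = 200
watts_per_fixture = 40.0      # input power drawn by each fixture
driver_efficiency = 0.88      # driver loses ~12% of input power as heat

# Heat the in-fixture drivers would have dissipated inside the hall:
driver_heat_w = fixtures * watts_per_fixture * (1 - driver_efficiency)
print(f"Driver heat moved out of the hall: {driver_heat_w:.0f} W")
```

Under these assumptions, roughly 1 kW of heat leaves the white space, which is load the cooling units no longer have to reject.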
Conclusion. As you can see, a significant amount of energy can be saved through proper lighting design via fixture selection, dimming controls, and efficiency measures, but there are always more options to evaluate as we head toward a sustainable future. This is where we as professionals think outside the box and find alternatives that further reduce the carbon footprint. Because light reflects off lighter, whiter surfaces and is absorbed by dark, black surfaces, a room with light-colored walls and ceilings reflects more light than a room with darker shades. There are studies on using white server racks instead of traditional black ones: not only does visibility inside the rack improve dramatically, but the number of fixtures required to light the data hall can be reduced by 25% or more, saving material and installation cost and electric power consumption while reducing cooling demand even further. As data center design professionals strive for a more responsible and sustainable future, the sky is the limit. Who knows what other innovations are around the corner? Please contact us at EYP Mission Critical Facilities with any questions or needs around data center lighting design. About the Author: Angelica K. Hermanto, PE, LC, LEED AP, is an energetic, results-driven senior electrical engineer with almost 20 years of experience providing power distribution, fire alarm, lighting, and low voltage system designs across various building sectors. A senior electrical engineer at EYP MCF, Angelica is an experienced project manager leading multi-discipline design teams, with a diverse background in project management, engineering design, and studies of electrical distribution, emergency power, lighting, and fire alarm systems. Angelica holds a Bachelor of Science degree in Architectural Engineering from The Pennsylvania State University.
Angelica also holds the Lighting Certified (LC) credential from the National Council on Qualifications for the Lighting Professions (NCQLP) and is a LEED Accredited Professional.

  • The Importance of Computational Fluid Dynamics for Data Center Equipment Yard Layout Design

    As the demand for more data center space, power, and cooling continues to increase, space management and equipment layout have become critical design requirements. Increased load densities in the data center white space have led to an increased amount of support equipment, both inside and outside the data center building. Inside the building, mechanical cooling equipment such as Computer Room Air Handling (CRAH) units shares space with electrical power equipment such as Power Distribution Units (PDUs) in the CRAH gallery. They share space for several reasons, including that PDU heat rejection is contained in the gallery: heat from the PDU is immediately rejected into the CRAH unit cooling system, and the layout does not require a separate electrical room for the PDUs. Figure 1 below is a typical data center layout with CRAH units and PDUs sharing space in the CRAH gallery. Figure 1: Typical data center layout with hot aisle containment and CRAH galleries on the opposite side of the data center. Outside the building, either in the yard, on the roof, or both, mechanical heat rejection systems such as cooling towers and chillers share space with electrical standby generators. A compact, high-density layout inside the building results in a compact outdoor equipment layout, which leads to significant airflow management challenges outside the building. Airflow patterns outside the building are difficult to predict because of variables that design engineers and architects cannot control, including wind speed and direction, air temperature and humidity, and other activities surrounding the building. All of these impact the performance of the outdoor data center equipment. Computational Fluid Dynamics (CFD) analysis has therefore become a critical tool in data center design for an optimum yard or roof equipment layout.
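A full CFD model resolves these airflow interactions in three dimensions, but a toy mixing model already shows why recirculation matters. The sketch below assumes a simple linear mix of ambient and discharge air; the temperatures, recirculation fractions, and the inlet limit are all illustrative assumptions, not a specific manufacturer's figures.

```python
def inlet_temperature(ambient_c, discharge_c, recirc_fraction):
    """Mixed air temperature at an air-cooled chiller intake when a
    fraction of its own hot discharge recirculates back to the inlet."""
    return (1 - recirc_fraction) * ambient_c + recirc_fraction * discharge_c

ambient, discharge = 35.0, 50.0   # illustrative design-day temperatures
max_rated_inlet = 37.0            # assumed equipment limit

for recirc in (0.0, 0.10, 0.25):
    t = inlet_temperature(ambient, discharge, recirc)
    flag = "OK" if t <= max_rated_inlet else "OVER LIMIT"
    print(f"recirculation {recirc:4.0%}: inlet {t:.2f} degC  {flag}")
```

Even a quarter of the discharge air finding its way back pushes the inlet past the assumed limit; this is exactly the kind of condition a yard-layout CFD run is meant to catch before construction.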
Several CFD software packages on the market are used for this analysis and simulation. The simulations provide results that help data center owners and designers make decisions about cost-effective layouts and performance. CFD analysis performed before the design is finalized and implemented helps mitigate the risk of design errors discovered at late stages, which would otherwise result in significant and expensive change orders, construction delays, or loss of compute capacity. Most manufacturers' equipment literature provides minimum placement clearance requirements for yard and roof equipment. This information is provided as a guideline, and designers are expected to use correct judgment when finalizing design layouts. A good example is a chiller manufacturer's recommended minimum clearance of 8 feet from any solid obstruction; the guideline does not factor in ambient air conditions, wind speed, or the height of the obstruction. Figure 2 below demonstrates how following a manufacturer's recommended minimum clearance can still result in undesired air-cooled chiller performance due to air recirculation. Figure 2: Air-cooled chiller recirculation demonstration. With CFD analysis and simulation, the condition demonstrated in Figure 2 is identified and corrected before the design is finalized and construction is completed. CFD simulation is also useful for understanding how multiple airflow patterns will interact with each other. Figure 3 below demonstrates the impact of generator exhaust air on the mechanical heat rejection equipment on the roof: hot exhaust air from the generator flue pipe and radiator, blown toward the building, causes a stagnant air condition next to the building.
As a result, the mechanical equipment's inlet ambient air temperature falls outside the manufacturer's recommended range, and the equipment does not perform as specified. Figure 3: Generator exhaust air impact on mechanical equipment on the roof. Conclusion. The Computational Fluid Dynamics analysis and simulation tools available on the market have become critical to the design of new and existing data centers. CFD analysis is one of EYP Mission Critical Facilities' most important and powerful tools, used at every stage of a project to vet and verify that the solutions proposed in our designs will yield the expected performance results. Those expected results are subject to the varying climatic and weather conditions at the data center location, as typically published by the DOE, NOAA, and ASHRAE. About the Author: Gardson Githu, PE, is a Senior Mechanical Engineer and Consultant at EYP Mission Critical Facilities. Gardson's experience focuses on the design and analysis of HVAC systems for commercial, industrial, and data center infrastructure facilities, including new facility design, retrofit design, and mechanical systems analysis. His project experience includes chilled water plants, thermal storage systems, fuel oil systems, and air handling systems. Gardson specializes in mechanical system energy optimization, data center site risk assessment, and data center thermal mapping (computational fluid dynamics analysis). He holds a Bachelor of Science degree in mechanical engineering from California State University, Los Angeles, and a Master of Science degree in mechanical engineering with a thermo-fluids option from California State University, Northridge. He is a team member of the recently launched EYP Mission Critical Facilities and i3 Solutions Group Sustainability Initiative to offer a practical roadmap toward a carbon net-zero data center by 2030.


Pages (56)

  • Achieving Zero Carbon Data Centers | EYP Mission Critical Facilities, Inc. | United States

    Reaching for Net-Zero: Achieving Zero Carbon Data Centers by Decentralizing Consensus of Power Supply Amongst Utility and Microgrid Providers
White Paper 1, July 2021
By: Matthew Karashik, EIT, EYP Mission Critical Facilities Inc (EYP MCF)

Abstract

This paper is the first in a series of UN2030 Sustainable Development Goals initiative thought experiments focused on using blockchain and decentralized consensus algorithms to overcome logistical barriers to a true zero carbon emissions data center and microgrid environment. The scope of this paper is the optimization of energy dispatch and supply from distributed resources down to a data center or microgrid-supported campus. Future papers will build upon the arguments laid out here and culminate in a fully integrated blockchain consortium, beginning at the manufacturing of parts for all entities in the network and reaching all the way down to the individual tenants in a colocation data center performing tenant-to-tenant server subletting during demand response events for the data center or macrogrid (i.e., a traditional wide-area synchronous grid or, colloquially, an electrical utility grid).

Keywords: game theory, good actors, bad/malevolent actors, UN2030, centralized consensus, decentralized consensus, proof of work (POW), proof of stake (POS), private-by-design (PbD), microgrid, macrogrid, blockchain, byzantine fault tolerance, embodied carbon, operating carbon, traveling carbon, zero greenhouse gas (ZGHG), distributed energy, block mining, data structures, peer to peer (P2P), server to server (S2S), client to client (C2C), internet of things (IoT), demand response (DR), tolerance for loss

Contents

Introduction – Why do we need standby generators?
Decentralized Consensus and Byzantine Fault Tolerance
Distributed Energy Resources for Microgrids as a Database and Automated Transaction Network
Determining Loss-Tolerance, Threshold for Trust, and Proof of Stake
Optimizing the Supply of Zero-Greenhouse-Gas Energy via Decentralized Networks
Automation of Components in a Microgrid
Future Papers
Further Reading
Works Cited

1. Introduction

While some understanding of the keywords listed in the Abstract is assumed and required for this paper, the concepts of decentralized consensus, byzantine fault tolerance, distributed energy resources forming a database, loss-tolerance and trust, proof of stake, and embodied carbon will be described at an introductory level. The objective of this white paper is to conceptually portray an application in which blockchain removes barriers to achieving a zero-carbon operating and embodied environment. This is done via a thought experiment in which four distributed energy suppliers must cooperate to deliver synchronized or grid-forming power to a microgrid-supported data center and accurately respond to events of increased demand in order to maximize their own revenues. As competitors they cannot trust each other, but they must find some way to exchange information to form a grid and optimize their economic dispatch, because the utility brings unnecessary overhead and the data center campus lacks the resources to continually log and verify upstream information on its own.

2. Decentralized Consensus and Byzantine Fault Tolerance

Before discussing decentralized consensus, or even blockchain, the idea of centralized consensus needs to be understood as the “conventional wisdom” for problem-solving. Decision making and problem solving are easy when all players in a game are on the same team, or when multiple businesses trust a banker to accurately record and maintain a ledger of transactions between them.
This trust-filled, naturally collaborative network of individuals forms a centralized consensus: a decision-making and record-keeping model in which all parties trust each other, or trust the same person, and have no reservations about exchanging information (Krawiec-Thayer). This is a highly effective method for getting work done with colleagues or for paying bills, but it is not a practical way to get competitors in a single industry to work together or exchange data. That is where decentralized consensus algorithms come into play. Decentralized consensus, as Dr. Mitchell P. Krawiec-Thayer writes in the editorial blog post “What’s the big deal about Decentralized Consensus”, “is the ability for many parties to safely store and share information, without having to rely on a central authority or trust any other participants in the network”. Paying close attention to the ability for “many parties” to share and store information in a common database, both safely and without trusting each other or a central authority, it becomes immediately apparent why any technology that can decentralize decision making is vital to the success of modern microgrid and data center technology. Dr. Krawiec-Thayer states the following: Any effective decentralized consensus system must solve a fundamental challenge: how can a system arrive at universal agreement under adversarial conditions where messages may be unknowingly lost and participants may behave dishonestly for their own gain? As Krawiec-Thayer later mentions, this problem was concisely posed to the greater technological community almost 40 years ago, in what is often referred to as the Byzantine Generals Problem: Imagine a group of generals of the Byzantine army camped with troops around an enemy city. Communicating only by messenger, the generals must agree upon a common battle plan — whether to attack or retreat.
Either way, they must arrive at agreement and act in unison since an attack with only a portion of the troops would be disastrous. However, one or more of the generals may be traitors who will try and confuse the others. The problem is to find an algorithm to ensure that the loyal generals will reach agreement. (Lamport, Shostak, and Pease; 1982) Considering this context, a blockchain is a software and system policy implementation of valid solutions to the Byzantine Generals Problem. Blockchain, as a decentralized problem-solving system, creates an opportunity to empower mission critical facilities. Consider how data centers fall under the need for decentralized decision making. Data center facility operations teams and sub-groups barely trust each other. They tend to act more like tribes than a single unit, even though they roughly share a common goal. Decision making often requires lengthy chains of approval at various stakeholder levels for even simple changes to electrical equipment. Trust between any power facility and the electrical utility is illusory at best and antagonistic at worst. There is a high level of added overhead and delay due to this slow-moving and poorly fitting centralized consensus that is forced onto data center design and operation. On top of these logistical inefficiencies, consider that there are a high number of nodes measuring power and computational information in a data center. Server cabinets or racks all need some method of periodically or continually checking the load against capacity and upstream equipment ratings. The service transformer to a data center, conventionally, must be metered by a central utility. Multiple telecom and monitoring systems are often required to implement building fire protection, alarm, lighting, and emergency power response control policies. 
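The Byzantine Generals setting quoted above has a classic constructive solution: the Oral Messages algorithm OM(m) of Lamport, Shostak, and Pease, which tolerates m traitors when there are more than 3m participants. The sketch below is a simplified simulation (traitors here merely send alternating orders rather than fully arbitrary ones), intended only to show the recursive relay-and-vote structure, not to be a production protocol.

```python
from collections import Counter

def om(m, commander, lieutenants, order, traitors):
    """Simplified OM(m): returns the order each lieutenant settles on.
    A traitorous sender is modeled as sending alternating orders."""
    received = {}
    for i, lt in enumerate(lieutenants):
        sent = order
        if commander in traitors:
            sent = "attack" if i % 2 == 0 else "retreat"
        received[lt] = sent
    if m == 0:
        return received
    decisions = {}
    for lt in lieutenants:
        # lt hears the commander directly, plus each peer relaying
        # (via OM(m-1)) what that peer claims the commander said.
        relayed = {lt: received[lt]}
        for peer in lieutenants:
            if peer == lt:
                continue
            sub = om(m - 1, peer,
                     [x for x in lieutenants if x != peer],
                     received[peer], traitors)
            relayed[peer] = sub[lt]
        decisions[lt] = Counter(relayed.values()).most_common(1)[0][0]
    return decisions

# Four generals, one traitorous lieutenant: loyal lieutenants still agree
# on the loyal commander's order.
d = om(1, "C", ["L1", "L2", "L3"], "attack", traitors={"L3"})
assert d["L1"] == d["L2"] == "attack"

# Four generals, traitorous commander: the lieutenants still reach
# unanimous agreement among themselves.
d = om(1, "C", ["L1", "L2", "L3"], "attack", traitors={"C"})
assert len(set(d.values())) == 1
```

The same relay-and-vote skeleton is what byzantine-fault-tolerant consensus protocols elaborate on, with cryptographic signatures and economics replacing the messengers.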
If only there were some way to remove the need for the data center to oversee all of this information, especially at such great financial cost to the facility owners and shareholding entities. Instead of forcing conventional decision making that barely functions in microgrid and data center environments, it is possible to use a so-called “decentralized consensus algorithm” that removes dependence on a central authority and encourages success amongst the many parties involved in these larger operational efforts. Literature dating as far back as 1982 lays the foundation for arguing that a zero-greenhouse-gas (ZGHG) future requires shifting the current paradigm of central entities (see the Byzantine Generals Problem). Instead of forcing the data center facility, microgrid operator, or macrogrid power utility to manage and verify data, designers of truly renewable and sustainable data center infrastructure must create new systems that decentralize. These systems require no trust amongst players, exhibit trust as an emergent property of continued success, and will be built upon in this paper for renewable internet-of-things (IoT) applications (such as zero-net-carbon peer-to-peer clouds, smart grid metering, electric vehicles, etcetera). Relating the ideas of byzantine faults, blockchain, and decentralized networking back to mission critical and renewable applications, the July 1982 paper “The Byzantine Generals Problem” (cited by approximately 7,377 scholarly articles, according to Google Scholar), coauthored by Leslie Lamport, Robert Shostak, and Marshall Pease in the fourth volume of the ACM journal Transactions on Programming Languages and Systems, directly ties mission critical applications to decentralized problem solving in terms of “reliable systems”.
When attempting to implement reliable computer systems, the only alternative to using materially reliable device components is to use redundant computers, systems, or facilities and cross-reference them (via internal or external voting) to a single result. As the three authors explain: “This is true whether one is implementing a reliable computer using redundant circuitry to protect against the failure of individual chips, or a ballistic missile defense system using redundant computing sites to protect against the destruction of individual sites by a nuclear attack. The only difference is in the size of the replicated ‘processor’.” (Lamport, Shostak, and Pease 398) As Lamport, Shostak, and Pease go on to explain, there are some flaws in this reasoning; however, the basic premise of critical systems requiring reliable outputs remains valid. After discussing the parameters of a reliable voting solution, and the issues with circumventing material or voting considerations via hardware, Lamport, Shostak, and Pease arrive at a significant realization for mission critical systems: “redundant inputs cannot achieve reliability; it is still necessary to ensure that the nonfaulty processors use the redundant data to produce the same output”. From the perspective of ZGHG microgrid-supported power distribution, this exposes a major flaw in the paradigms of mission critical design: redundant systems cannot achieve true reliability unless the nonfaulty processors use the same redundant data (i.e., power, information) to produce the same output (Lamport, Shostak, and Pease; 1982, 387).
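The “replicated processor” idea in the passage above can be sketched as a simple majority voter. The authors' caveat still applies: voting only helps if the nonfaulty replicas were fed the same redundant data, so this is a sketch of the voting step alone, not a complete reliability scheme, and the setpoint values are hypothetical.

```python
from collections import Counter

def majority_output(replica_outputs):
    """Return the value most replicas agree on, failing loudly when no
    strict majority exists (too many faulty or divergent replicas)."""
    value, count = Counter(replica_outputs).most_common(1)[0]
    if count <= len(replica_outputs) // 2:
        raise ValueError("no majority among replica outputs")
    return value

# Three redundant controllers compute a setpoint; one returns garbage.
assert majority_output([480.0, 480.0, 9999.0]) == 480.0
```

When all three replicas disagree, the voter raises instead of guessing, which is the honest failure mode for a mission critical control path.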
This sounds paradoxical from the perspective of mechanical, electrical, and plumbing (MEP) design for redundancy and may require further scrutiny in the realm of power distribution design; however, Lamport, Shostak, and Pease might argue that the electrical equipment adjacent to MEP design in mission critical facilities must meet these requirements, i.e., redundant nonfaulty processing systems using the same sets of redundant data to produce the same output at any order or tier of redundancy, to be considered truly reliable. This is not to decry formal requirements for redundancy, like the Uptime Institute's tier system, by any means. The two key takeaways are that mission critical design has been a consideration of decentralized consensus networks since the field's founding thesis in 1982, and that any system with a need for reliability must consider redundant computation, redundant data, and reliable parts. As data centers are mission critical, and thus intrinsically have a high need for reliability, they are no exception to this claim. Moving on to the main subject of this paper, achieving optimal dispatch of power to a microgrid via decentralized power, consider the various interests in a microgrid:
  • Authorities having jurisdiction
  • Macrogrid-owning utilities
  • Microgrid-operating data center campuses
  • Distributed energy supplying and storing entities
All of the aforementioned parties, with the exception of the campus being served, are in fierce competition for survival. The question of how to operate a successful zero-carbon-impact data center then shifts to forming an effective system for decentralized consensus. This system, method, treaty, and algorithm must somehow rope multiple parties who do not trust each other into a common database that removes all red tape and is highly resistant to bad actors.
In the case of economic dispatch amongst competing renewable energy providers, the competitors, the data center, and all operating staff in between are analogous to the byzantine generals. The generals issue a command, to attack or retreat, while the interested parties issue commands of how much power to send and when. The messenger is no longer a horseback rider, but a public (or semi-private) communications network along the blockchain. The Byzantine Generals Problem can be distilled further into several bullet points that identify any scenario that may be solved by a decentralized consensus technology like blockchain:
  • There is a need for a common exchange of information, i.e., a database.
  • There are any number of resource-governing entities involved.
  • The parties in this game have conflicting incentives or reason to mistrust each other.
  • These parties are likely governed by different rules.
  • There is a need for a truly objective, unbiased, unchangeable log of records.
  • The rules behind decision making rarely change, if ever.
So, a system to solve the problem of Byzantine Generals operating distributed energy resources in a microgrid environment, or colo tenants in a data center environment, must include the following to be truly optimal:
  • Consensus on records and transactions is decentralized.
  • The system functions whether or not the players trust each other.
  • The system is highly resistant to tampering with data, creating false records, and collusion against the common goal; this is known as a high degree of byzantine fault tolerance.
  • Good actors are rewarded for exchanging valid information and acting trustworthy.
  • Bad or malevolent actors lose resources, and identify themselves as untrustworthy, whenever they attempt to falsify communications.
  • Resources, information, and decision making move quickly.
  • Decisions involving multiple parties may need to be bi-directionally automated, such that either party can deliver commands over both parties' resources under a verification system.
  • The growing record of transactions, information exchanged, and consensus decisions is secure, permanent, and able to identify false information quickly.
Flowchart (above), “Does your enterprise need blockchain?” (Source: Information Services Group). This, in a nutshell, is a conceptual description of blockchain and of what it fixes in a data center and microgrid environment that conventional logistics cannot. As depicted in the flowchart created by the Information Services Group, a zero-GHG, microgrid-supporting collection of energy resources fulfills the criteria of requiring a trust-independent database operated by many individuals and representatives. They do not trust their client blindly, they do not trust each other, and they do not trust the power utility, a competitor. For a true zero-impact microgrid to succeed at the level of distributed energy resources, the energy operators must utilize a blockchain.

3. Distributed Energy Resources for Microgrids as a Database and Automated Transaction Network

Imagining some number of third-party suppliers of renewable energy to a microgrid-supported data center, it is easy to understand the desire for a database of transactions, electrical characteristics over time, and supply chain parameters. A solar plant wants to log its own reserves, and a client wants to know the rate at which the plant generates energy reserves. The same can be said for any distributed energy resource of a renewable, zero or low embodied carbon nature (be it wind, geothermal, hydro, etcetera). However, it is conventional wisdom that the user cannot get access to utility data, and the microgrid cannot get real-time or truly deep data about its third-party suppliers.
This can make the process of verifying certified renewable and zero embodied carbon energy, upstream of an entity, almost impossible. This is where a blockchain is very handy. Because transaction records are permanent, require stake or proof (see proof of work, proof of stake) to generate, and are proofed against tampering and falsification, a supplier of power is assured that their data is secured by key. The recipient of power receives assurance that the claimed reserves were available, because the log of capacity witnessed by the solar plant is the same log witnessed by the data center as a client. A database is then created between the two parties containing transaction records, voltages, currents, power quality, ambient conditions, consistent values of energy entering transmission lines and energy exiting them, as well as the available solar reserves for use by the data center in an emergency. This idea can be daisy-chained such that a database is formed between all distributed energy resources and resource users in the microgrid environment; each of them is able to form an economic dispatch without trusting the others or knowing the identity of other players in the database. Furthermore, the blockchain allows any party to propose and dispatch resources across the entire network, so long as their dispatch request fulfills the conditions of the consensus algorithm. At this point, the consensus algorithm becomes a decentralized and automatic transaction machine, or automated transaction network, such that each member of the network independently forms the transaction machine (it has no single point of failure, and it exists at all points in the network). Transactions between two or more parties may be initiated through the automated network by a single member.
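The permanence and tamper-evidence described above come from hash chaining: each record commits to the hash of its predecessor, so editing any historical entry invalidates every later hash on every honest copy of the log. A minimal sketch in Python (field names and the 31 kWh payload are illustrative, not a production ledger design):

```python
import hashlib
import json

def make_block(prev_hash: str, payload: dict) -> dict:
    """Create a block that commits to the previous block's hash."""
    body = {"prev": prev_hash, "payload": payload}
    serialized = json.dumps(body, sort_keys=True).encode()
    body["hash"] = hashlib.sha256(serialized).hexdigest()
    return body

def chain_is_valid(chain: list) -> bool:
    """Recompute every hash; any tampering breaks the chain."""
    for i, block in enumerate(chain):
        body = {"prev": block["prev"], "payload": block["payload"]}
        serialized = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(serialized).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# A solar supplier and the data center each hold the same log.
genesis = make_block("0" * 64, {"event": "genesis"})
reading = make_block(genesis["hash"], {"source": "solar", "kWh": 31})
ledger = [genesis, reading]
```

Both parties can run `chain_is_valid(ledger)` independently; if either side silently edits the 31 kWh reading, validation fails on every honest copy of the chain.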
This brings unrivaled speed and efficiency to demand response, power optimization, and economic dispatch, because the network solves all parties' internal decision-making algorithms against the network-wide consensus algorithm almost instantly. Relevant parties receive funds or a power dispatch. Compensation is then distributed to the selected miners, or proven good actors, who were used to perform computations on behalf of the transaction machine. As another point for consideration, this blockchain machine increases the security of transactions against malevolent actors. By making dispatches transparent, even if anonymous, and by giving all users the ability to command the network under consensus, the network machine identifies bad actors as users who fail to satisfy the consensus algorithm.

4. Determining Loss-Tolerance, Proof of Stake, and a Threshold for Trust

One of the most crucial steps in creating a successful blockchain or automated decentralized transaction network is determining the method of verifying good intent. Thinking back to the problem of the Byzantine generals, to ensure that a command or information being issued by one general to the whole army is truthful and in good faith, there needs to be some system in place to expose malevolent actions (Lamport, Shostak, and Pease; 1982). This way, bad actors and defectors expose themselves and, in modern systems, lose resources or trust when they issue a malevolent command, by the system's design. Another benefit is that costly commands prevent repeated malevolent actions against the network unless many parties are pooling resources and colluding. There are a few systems that can be used to ensure honest, successful, and quick operation of a zero-greenhouse-gas system of energy distributors. Although not every method of achieving reliability in a decentralized microgrid network will be examined in this paper, there are many valid means of achieving such a system.
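The tolerance for malevolent actors cited above has a sharp quantitative form in Lamport, Shostak, and Pease's result: agreement is reachable only when fewer than one third of the participants are faulty. A minimal sketch of that threshold (function names are illustrative):

```python
def max_faulty(n_nodes: int) -> int:
    """Maximum number of byzantine (arbitrarily faulty or malicious)
    participants a consensus network of n_nodes can tolerate:
    f = (n - 1) // 3."""
    return (n_nodes - 1) // 3

def agreement_possible(n_nodes: int, n_faulty: int) -> bool:
    """Lamport, Shostak, and Pease (1982): consensus requires n >= 3f + 1."""
    return n_nodes >= 3 * n_faulty + 1
```

For example, three generals cannot tolerate even one traitor, while four can, which is why a microgrid consensus layer benefits from enrolling many independent nodes.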
It is recommended that key terms, works cited, and further reading are consulted for the implementations not utilized here. As an alternative to the proof of work paradigm for blockchains and decentralized consensus networks, consider "proof of knowledge" and "proof of stake". Regarding proof of knowledge, consider the zk-SNARKs technology behind the Zcash cryptocurrency. The creators of Zcash, the Electric Coin Company, explain in the article "What are zk-SNARKs?" that their technology "refers to a proof construction where one can prove possession of certain information, e.g., a secret key, without revealing that information, and without any interaction between the prover and verifier". Diagrammatically, this process is described in the figure below.

Figure adapted from "How zk-SNARKs are constructed in Zcash" by the Electric Coin Company

Imagine a system that could command many different competitors to cooperate without revealing sensitive information to any of them. This would be referred to as a zero-knowledge proof of knowledge by the creators of zk-SNARKs and Zcash. Zero-knowledge proof of knowledge blockchains are constructed by following the algorithm shown in the figure adapted from "How zk-SNARKs are constructed in Zcash". To elaborate further, in a zero-carbon, microgrid-supported data center context, this decentralized control network commands many different competing energy suppliers to generate the optimal amount of electricity for any arbitrary load, given zero-knowledge assertions of each supplier's fixed or dynamic price and zero-knowledge assertions of each supplier's current or scheduled energy generation and storage capabilities.
Competing suppliers are incentivized to implement the blockchain by the promise of maximized revenue, and microgrid-forming or microgrid-supported data center facilities are incentivized to implement the blockchain to minimize their total cost of achieving zero-carbon (or highly carbon-efficient) off-grid power across the system. Due to the nature of zero-knowledge proofs of knowledge, no member of this blockchain would need to know the other members' prices or how much energy they were allowed to supply. As an application of the microgrid control policy network implemented with a zero-knowledge proof of knowledge blockchain, imagine this technology being used to achieve a zero-carbon-emissions microgrid data center environment and to provide enrolled electricity suppliers with maximized revenue during any electrical event or extended contract of service. Take the following assumptions as true for the sake of this practical thought experiment:

• There exists a microgrid either formed or electrically followed by a data center or collection of data halls on a data center campus.
• The grid-forming and grid-following facilities may or may not have their own distributed energy resources and storage. This is irrelevant to the goal of supplementing energy during shortages, events, or extended periods of time.
• Cost optimization opportunities may appear at any time for the third-party suppliers and the data center campus.
• The microgrid can go off-grid, entering what is known as island mode.
• The primary goal of this microgrid shall be to support the continuous operation of mission critical facilities via zero-carbon, green energy.
• There exist an arbitrary solar company, an arbitrary wind company, an arbitrary geothermal company, and an arbitrary biodiesel company that can supply electricity to the microgrid. These companies are in direct competition for revenue created by supplying electricity into the microgrid.
• There exist 53 server tenants, as an arbitrary number.
• There must be more than three members of the blockchain.

To eliminate utility involvement in metering, a data center, a building, or an individual server tenant interfaces its equipment monitoring values to the decentralized network system. This interface allows any of the load users to inform blockchain members of how much energy was used per billing cycle and to prove that they have maintained a record of equipment monitoring status verifying that tampering did not occur, without revealing information to other load users or suppliers. Because it is zero-knowledge, the equipment monitoring channels inarguably provide true and anonymous readings of energy usage without utility verification. This also enables the added capability for load users to schedule out required demand without revealing classified information to any of the competing suppliers. The same outcome can be achieved for any other metric, system, or information that the blockchain members wish to include in their anonymous exchanges and supply optimizations. All suppliers receive direct commands from the decentralized network process (which they collectively form) on how much energy to supply for maximized revenue, after asserting how much energy they hold in reserve and how much they charge. For the sake of argument, say the solar supplier generates 31 kWh of zero-carbon energy. The solar supplier, as a prover, may inarguably assert that it produces 31 kWh of monthly energy for microgrid and data center use to either a server tenant (verifier) or any competing supplier (verifier) without revealing the number itself. At the load side of the blockchain and microgrid, a server enterprise user or data center can verify and select just how much green energy they are willing to pay for at the source, without the exchange of sensitive information.
In the example given, the arbitrary 53 server tenants (provers) prove to a wind supplier, a solar supplier, a geothermal supplier, and a biodiesel supplier (all four as verifiers) that they used between 1 kWh and 3 kWh per tenant, totaling 91 kWh in a month, without revealing the total amount to any of the four suppliers. They first prove to all four entities that they have some true value of total power, without revealing the number itself, via a hashed value that hides 91 kWh in randomness and secrecy. This hashed value is unique such that it must correspond to the value 91 kWh, which none of the suppliers know. As the old paradigm of C language coding goes, one need not know how a script will be used. Once all suppliers (or selected block miners) have performed their role in the verifying algorithm to confirm that the hashed value does exist and corresponds to some secret number, the 53 tenants may prove to each supplier the individual amount used, or verify the individual amount supplied by each power entity (as a vice versa scenario, or perhaps both must occur). They use a zero-knowledge proof of knowledge to let each power supplier know that they have four true values, each hidden by a hash value. In the thought experiment, these tenants prove to each supplier that they owe payment for 31 kWh from the solar supplier, 30 kWh from the wind supplier, 15.1 kWh from the geothermal supplier, and 14.9 kWh from the biodiesel supplier, without telling any of the other three suppliers how much power was supplied by their competitors. Because of this implementation, no electric utility metering is installed in the microgrid system, and data center facility personnel are no longer needed to act as the central authority among microgrid, power suppliers, and server tenants.
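A full zk-SNARK construction is far beyond a short example, but the first step described above, hiding 91 kWh "in randomness and secrecy", can be sketched with a salted hash commitment. This illustrates only the hiding and binding step, not the zero-knowledge proof machinery itself; all names are illustrative:

```python
import hashlib
import secrets

def commit(value_kwh: str) -> tuple:
    """Publish a commitment to a meter total without revealing it.
    The random salt makes the digest unguessable even for small values."""
    salt = secrets.token_hex(32)
    digest = hashlib.sha256((salt + value_kwh).encode()).hexdigest()
    return digest, salt  # digest is public; salt stays with the prover

def open_commitment(digest: str, value_kwh: str, salt: str) -> bool:
    """A verifier checks an opened value against the earlier digest."""
    return hashlib.sha256((salt + value_kwh).encode()).hexdigest() == digest

# The tenants commit to the monthly total of 91 kWh.
digest, salt = commit("91")
```

A supplier can later confirm an opened value against the published digest, and the tenants cannot substitute a different total; what zk-SNARKs add on top is the ability to prove statements about the committed value without ever opening it.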
As Marco Conoscenti writes in his 2016 IEEE Computer Systems and Applications Conference review paper, "Blockchain for the Internet of Things: A Systematic Literature Review":

A private-by-design IoT could be fostered by the combination of the blockchain and a P2P storage system. Sensitive data produced and exchanged among IoT devices are stored in such storage system, whose P2P nature could ensure privacy, robustness and absence of single points of failure (Conoscenti, Introduction; 2016)

In an Internet of Things environment, it becomes necessary to consider that not every IoT device is capable of storing an entire blockchain. A hybrid between blockchain and peer-to-peer storage that is hashed for security, in any manner described in this paper or in its consulted works, enables block chaining between lower-level I/O devices and higher-level entities like dispatchable generation and load-consuming facilities. For example, if a substation only needs to store the portion of a blockchain required for it to function as a node, then its owner is more likely to enroll it as an additional resource. Consulting Conoscenti's rather thorough review once more, he goes on to write:

Combined with this storage system, the blockchain has the fundamental role to register and authenticate all operations performed on IoT devices data. Each operation on data (creation, modification, deletion) is registered in the blockchain: this could ensure that any abuse on data can be detected. Moreover, access policies can be specified and enforced by the blockchain, preventing unauthorized operations on data.

The hybridization of blockchain down to each local IoT device enables the tracking and verification of entire oceans of pure data; furthermore, granting the control policy for a modernized grid to the blockchain creates scrutiny against malevolent actors at an almost microscopic level by design intent, without intruding on an individual entity's privacy.
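Conoscenti's layered, thin-client suggestion can be sketched as a node that retains only a window of recent block hashes rather than the full chain. This is a deliberate simplification (real thin clients verify headers and Merkle proofs); the class and names are illustrative:

```python
from collections import deque

class ThinNode:
    """An IoT device (e.g., a substation meter) that stores only the
    window of recent block hashes it needs to operate as a node."""

    def __init__(self, window: int = 8):
        self.headers = deque(maxlen=window)  # oldest hashes drop off

    def observe(self, block_hash: str) -> None:
        """Record a newly announced block hash."""
        self.headers.append(block_hash)

    def recognizes(self, block_hash: str) -> bool:
        """Can this node vouch for a recent block without the full chain?"""
        return block_hash in self.headers

meter = ThinNode(window=3)
for h in ["h1", "h2", "h3", "h4"]:
    meter.observe(h)
```

After four observations with a window of three, the node recognizes h2 through h4 but has discarded h1, so its storage footprint stays constant no matter how long the chain grows.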
Think back to the zero-knowledge proof of knowledge for clarity. It is possible, via emerging cryptographic algorithms like zk-SNARKs, for blockchains to validate that a self-metering transmission pole registered 32 amperes of current at some arbitrary time without knowing that the value was 32 amperes.

5. Automation of Components in a Microgrid

Taking things a step further, given the proper implementation, power systems, and computational resources, it is possible to fully automate the following components of a microgrid:

• Creation of the economic dispatch computation.
• Assurance of data security.
• Decentralized computation of the economic dispatch problem for the microgrid. Likely via the standard application of the distributed nonlinear Newton-Raphson method to modernized grid modeling if the microgrid has a large quantity of buses; otherwise, simple calculus scripts should be able to solve it.
• Proposed dispatch of power based on the optimized solution to the economic dispatch problem.
• Proof of stake or zero-knowledge proof of the optimal solution to the economic dispatch and of funds owed to all affected parties.
• Proposal of financial compensation for the economic dispatch to the network consensus algorithm without a third-party broker or metering entity.
• Approval or denial of the proposed economic dispatch and financial compensation, with decentralized computation of conformity to the consensus algorithm by the blockchain network.
• Dispatch of power from generating entities.
• Scrutiny against malevolent actions during or after dispatch.
• Storage and processing of data from all points between generation and loads.
• Decentralized updates to a hashed ledger of transactions, or ledger of power flow, at set intervals.

The full scope of autonomy, and of reduced costs for creating central facilities to manage the microgrid, generation dispatch, transmission stations, and data center oversight, is astounding.
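For the small-bus case where "simple calculus scripts" suffice, the economic dispatch step in the list above reduces, for constant per-kWh prices and capacity limits, to a greedy merit-order allocation. A minimal sketch, with supplier names, prices, and capacities chosen to echo the earlier 91 kWh thought experiment (all values illustrative):

```python
def merit_order_dispatch(demand_kwh: float, suppliers: list) -> dict:
    """suppliers: list of (name, price_per_kwh, capacity_kwh).
    Fill demand from the cheapest source first, up to each capacity."""
    plan = {}
    remaining = demand_kwh
    for name, price, capacity in sorted(suppliers, key=lambda s: s[1]):
        take = min(capacity, remaining)
        if take > 0:
            plan[name] = take
            remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("insufficient capacity for demand")
    return plan

offers = [("solar", 0.04, 40), ("wind", 0.05, 30),
          ("geothermal", 0.07, 20), ("biodiesel", 0.09, 20)]
plan = merit_order_dispatch(91, offers)
```

With these illustrative offers, the 91 kWh demand resolves to 40 kWh solar, 30 kWh wind, 20 kWh geothermal, and 1 kWh biodiesel: the cheapest zero-carbon sources are exhausted before the marginal unit is touched.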
As Conoscenti writes, "In this framework, people are not required to entrust IoT data produced by their devices to centralized companies: data could be safely stored in different peers, and the blockchain could guarantee their authenticity and prevent unauthorized access" (Conoscenti; 2016). There is no need for an electrical utility or third party to meter, verify, or broker the exchange of power for payment, because the entire network of power systems connected to the microgrid and supporting the data center facility provides decentralized oversight. This system satisfies the mission-critical application of the original Byzantine Generals Problem by virtue of high-quality, zero-embodied-carbon device components being used to provide distributed, redundant computations stored partially and fully across a much higher number of systems than a data center would typically be able to rely upon for logistical data. The microgrid itself is intrinsically designed without a single point of failure. Blockchain systems have a high resistance to byzantine faults, providing high reliability in data security. Now consider a scenario in which each node in the modernized microgrid (i.e., measurement and relaying, smart devices, control systems, electrical equipment, etc.), data center (service point, monitoring and control systems, electrical equipment, etc.), or transmission and distribution system (controls, transmission transformers, meters, substations, etc.) contains a partial or full record of the blockchain, to address the issue of limited resources at remote equipment locations. Each node may be owned by a different entity, or many nodes may be owned by one entity (cautioning against centralization).
The architecture of the blockchain network is layered to reduce computational strain on entities with less power or smaller nodes, and each node contains at least the portion of the blockchain it needs to operate and connects to at least three neighboring nodes (see Lamport, Shostak, and Pease regarding "neighboring commanders").

Figures 6 and 7 from Lamport, Shostak, and Pease, "The Byzantine Generals Problem"

Thus, the partial blockchains forming or supporting the network collectively form at least a "3-regular graph" to ensure solvable distribution and decentralization of messages and communications. This is necessary in a smart grid environment because smaller components may not be able to store the entire blockchain. One can reference the Byzantine Generals Problem for further information on the requirement for at least a 3-regular graph, but the concept essentially ensures unique paths between entities and that all neighboring nodes provide sufficiently different routes for data, resilient against malevolent action. Partial blockchain storage is a great fit for microgrids. As described by Conoscenti, "we suggest to develop IoT applications on top of another secure but scalable blockchain […] Moreover, we suggest to adopt a layered architecture which supports thin clients to allow IoT devices with limited resources to store only a portion of the blockchain". Such a blockchain is ideal for an IoT and microgrid environment because it allows smaller measurement devices to form or follow the blockchain without an overwhelmingly large stock of data resources at the seemingly infinite count of nodes that form a microgrid. Regarding the significance of understanding proof of stake, consider that in April of 2020, the founder of Swiss crypto broker Bitcoin Suisse, Niklas Nikolajsen, claimed that Bitcoin will transition to a proof-of-stake algorithm once the Ethereum cryptocurrency network demonstrates the algorithm's success in market.
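The 3-regular connectivity requirement discussed above is easy to check mechanically: every node of the communications graph must have at least three neighbors, and links must be mutual. A small sketch over an adjacency map (node names are illustrative):

```python
def at_least_3_regular(adjacency: dict) -> bool:
    """True if every node has >= 3 mutual neighbors, the minimum
    routing redundancy Lamport, Shostak, and Pease call for."""
    for node, neighbors in adjacency.items():
        if len(neighbors) < 3:
            return False
        if any(node not in adjacency.get(n, set()) for n in neighbors):
            return False  # links must be bidirectional
    return True

# Four fully interconnected nodes form a 3-regular graph (K4).
k4 = {
    "solar": {"wind", "geo", "dc"},
    "wind": {"solar", "geo", "dc"},
    "geo": {"solar", "wind", "dc"},
    "dc": {"solar", "wind", "geo"},
}
```

A daisy-chained line of meters fails this check immediately, which is exactly the topology a byzantine-resilient microgrid must avoid: one cut or one liar isolates part of the network.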
To an avid follower of blockchain technologies, Nikolajsen's claim is highly significant and disruptive. As author Marie Huillet recounts, an outtake from a German documentary uploaded on April 6, 2020, records the founder, Nikolajsen, saying, "[Bitcoin's move to Proof-of-Stake] is not planned, but the second-largest cryptocurrency, Ether, will move to a Proof-of-Stake concept that demands vastly less electricity, already in a few months. I'm sure, once the technology is proven, that Bitcoin will adapt to it as well" (Huillet; 2020). Nikolajsen goes on to claim that Proof of Stake (PoS) is a superior system to Proof of Work (PoW) once it is proven to work well. To briefly describe proof of stake, imagine a blockchain whose "nodes in the network engage in validating blocks, rather than mining them, as in PoW". In PoS, these block validators are selected by algorithm, in the case of cryptocurrency, based on "the number of tokens a given node has staked in their wallet — i.e., deposited as collateral in order to compete to add the next block to the chain" (Huillet; 2020). In the case of a microgrid, or any modernized grid technology, PoS can be applied as follows: block validators are selected from a public pool of miners or a private pool of microgrid-involved entities (i.e., dispatchable generation, storage, transmission, tenants, data center, microgrid distribution and operations, smart devices, etcetera) by a deterministic algorithm. The algorithm selects a subpopulation of the network to be block validators based upon how much tokenized "trust" they are willing to stake and, iteratively, how much trust they have successfully demonstrated in past computations. Any gamification method, direct financial compensation, dynamic incentive, or other method of providing entities a return for staking trust can be used to ensure continued involvement in proving stake.
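The validator-selection rule just described, choosing block validators with probability proportional to staked trust, can be sketched as a seeded lottery. Entity names and stake amounts are illustrative assumptions:

```python
import random

def select_validator(stakes: dict, rng: random.Random) -> str:
    """Pick the next block validator, weighted by staked trust tokens."""
    entities = sorted(stakes)  # stable ordering for reproducibility
    weights = [stakes[e] for e in entities]
    return rng.choices(entities, weights=weights, k=1)[0]

stakes = {"solar": 50.0, "wind": 30.0, "geothermal": 15.0, "biodiesel": 5.0}
rng = random.Random(42)
draws = [select_validator(stakes, rng) for _ in range(1000)]
```

Over many draws, each entity is selected roughly in proportion to its stake, and an entity staking nothing is never selected; combining the stake weight with a demonstrated-trust index, as the text proposes, only changes how the weights are computed.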
Thus, when a supplier's conditions for sending a power dispatch downstream are met, they submit a computation to the network that is turned into a block, with identifying data hashed to secret randomness associated with a unique value, and members of the network are autonomously selected and autonomously bid to add the next block to the chain. Entities awarded the bid are granted an incentivizing return upon successful demonstration of continued stake, and their trust index is increased. This modified proof of stake can be considered a threshold means of determining trustworthiness and of incentivizing members of the microgrid to continue acting in the best interests of the system. Because only continued stake is required, not proof of work, there is potential for lower power requirements to complete computations in the network.

Continuing to examine the transition from theory and finance to microgrid operations, take the system of operations below, a purely hypothetical system, for a blockchain-integrated electric dispatch network of entities serving a large client's demand.

(Source: EYP Mission Critical Facilities)

Any number of operations to encrypt, package, track, and record information and transactions may be introduced to the system and given a control policy dependent on the demanded/required load and measurements of the system. The blockchain algorithm can take the considerations and rulebooks of each entity as triggers for a dispatch request and scramble them into hashed secrecy that operates to satisfy the consensus algorithm. Thus, dispatch requests can be triggered autonomously, and transactions authorized, so long as they satisfy the algorithm for consensus.
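Such an autonomous authorization step can be sketched as a single gate function: a proposed dispatch passes only if it stays within the agreed loss and embodied-carbon tolerances and wins the network's consensus vote. The specific tolerances and the two-thirds supermajority here are illustrative assumptions, not values from this paper:

```python
def approve_dispatch(loss_kwh: float, added_carbon_kg: float,
                     votes_for: int, total_validators: int,
                     loss_tolerance: float = 0.5,
                     carbon_tolerance: float = 0.0) -> bool:
    """Reject anything outside the loss/carbon bounds outright, then
    require a supermajority of validators to approve the proposal."""
    within_bounds = (loss_kwh <= loss_tolerance
                     and added_carbon_kg <= carbon_tolerance)
    supermajority = 3 * votes_for >= 2 * total_validators
    return within_bounds and supermajority
```

Either failure mode, out-of-bounds physics or a failed consensus vote, rejects the proposal before any power or funds move.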
So long as the miners of distributed computations, or elected trustworthy members of the network, act in good faith, and continue to perform zero-knowledge proof and validation or to stake and gain indexed trust by achieving true results from the consensus algorithm, participants in the game of this network witness optimized reliability, operational efficiency, and profits.

Using distributed computations via block mining, proof of stake, and/or the index of trust, the algorithm can be checked against current conditions by distributing calculation across the entities least likely to feed bad information or send false results. Once this is performed, a block, or proposed dispatch, is produced and distributed to the network for the consensus algorithm as a final check. The consensus algorithm, in the case of renewables, must consider a threshold or variance within which slight profit or energy losses, or a minimal and momentary increase in embodied carbon, can occur. Anything that falls outside these bounds is rejected outright. Anything that fails the consensus algorithm of the blockchain network is also rejected. Anything that satisfies the UN 2030 requirements for a true zero-greenhouse-gas power distribution scheme, falls within the bounds of losses, and satisfies the consensus algorithm is approved autonomously, and funds are distributed to all relevant parties enrolled and affected by the economic dispatch and by the use of computational resources to calculate the optimized dispatch.

6. Optimizing the Supply of Zero-Greenhouse-Gas Energy via Decentralized Networks

As Helen Carruthers and Tracy Casavant of the Light House Sustainable Building Centre Society discuss in the 2013 Commission for Environmental Cooperation standards review, "What is a 'Carbon Neutral' Building?", the definition of "carbon neutral" continues to evolve as it relates to the measurement, reduction, and offsetting of carbon energy (Carruthers & Casavant, 1). Though a scholar like Dr.
Krawiec-Thayer might argue it is the result of political controversy and industry resistance to buzzwords, Carruthers, as both a Project Manager and LEED AP, would more likely attribute changes in the meaning of "carbon neutral" to the emergence of new technologies, new authorities, and an improved ability to identify the embodied and operating energy of carbon emissions. Take, for example, Carruthers and Casavant's description of a popular approach to carbon neutral building design:

• Integrating passive design strategies
• Designing a high-performance building envelope
• Specifying energy efficient HVAC systems, lighting, and appliances
• Installing on-site renewable energy
• Offsetting

Observe that this approach to carbon neutral design considers the integration of design strategies, material performance, renewables, and energy-efficient device specification. It appears that this process has rigor. By applying informal reasoning, designers may even argue the mainstream approach adequately addresses cradle-to-grave needs for carbon neutrality; however, this approach to carbon neutral design is dangerously lacking on the basis of its bounds, verification, and certification. It appears to be more of a bookend to the carbon-emitting designs of old than an evolution in techniques. There is no consideration of the carbon emissions due to employees, on-site clients, and the delivery of supplies to site; furthermore, this approach lacks any mention of the carbon emitted to create a high-performance building envelope. It does not consider how much carbon is emitted to create energy-efficient electrical equipment. It also fails to consider the production of renewable equipment or the emissions that may occur during the installation of equipment and the build phase of the facility itself (Carruthers and Casavant, 1-2).
As Carruthers and Casavant state:

A carbon neutral definition should include specific information/requirements relating to the following:

• System boundary – includes within it all areas associated with the buildings where energy is used or produced, i.e. operational energy, embodied energy of the materials used, energy used for the construction process and travel for occupants.
• Renewable energy and carbon offset 3rd party certification.
• Verification or certification of the calculated carbon emissions. (Carruthers and Casavant, 1).

Just as a physicist must consider the most relevant boundaries of a phenomenon under observation, entities and firms creating true ZGHG microgrids and data centers must take into consideration, at the earliest stages of design, the true boundaries of their project's carbon impact, the validity of their carbon emissions calculations, and the objective certification of emissions offsetting. Thus, Carruthers and Casavant introduce three definitions for carbon neutral building design:

• Carbon Neutral – Operating Energy[1]: […] Carbon neutral with respect to Operating Energy means using no fossil fuel GHG emitting energy to operate the building. Building operation includes heating, cooling and lighting.
• Carbon Neutral – Operating Energy + Embodied Energy: This definition for Carbon Neutrality builds upon the definition above and also adds the carbon emissions associated with energy embodied in the materials used to construct the building.
• Carbon Neutral – Operating Energy + Site Energy + Occupant Travel: This definition of carbon neutrality builds upon the inclusion of operating energy and embodied energy, and also reflects the carbon costs associated with a building's location. This requires a calculation of the personal carbon emissions associated with the means and distance of travel of all employees and visitors to the building. (Carruthers & Casavant, 3).
The third definition considers the emitting energy of building operation, the production of the site and its parts, and the carbon costs induced by the build site's location. In the face of so many additional contributions to carbon emissions, it becomes apparent just how little the popular approach to carbon neutral design achieves. Perhaps popular design satisfies investors and pundits, but it does not address every node of carbon emissions in such a large network of varying interests. As conjecture, this supply chain itself could be examined for decentralization by future authors.

____________________
[1] "The base definition for Carbon Neutral Design is taken from https://www.architecture2030.org" (Carruthers & Casavant)

(Source: Grid Evolution, Vintage 2019)

Consider the visual depiction of "Tomorrow's Decarbonized and Decentralized Power Market", as illustrated by Grid Evolution. In modernized grid technologies, a bidirectional network of data, power, and transactional nodes emerges between dispatchable generation and the end customers, loads, or consumers of power. The complexity of such a system is massive and exceeds the ability of a central authority to effectively manage, as argued previously in this white paper and corroborated by the implications of Lamport, Shostak, and Pease. Consider, on top of the smart grid itself, the inevitability of ZGHG smart grids, which bring the considerations of operating, site, and travel energy to any power system and accompanying infrastructure. At this point such a system has a high order of data nodes and multiple rulebooks to consider. The authors of the Byzantine Generals Problem would likely argue that a system of many commanding entities that do not necessarily trust one another and cannot agree on a single entity to act as an unbiased verifier cannot succeed via centralized decision making (Lamport, Shostak, and Pease; 1982).
So a microgrid itself, as a modernized grid technology, is unlikely to ensure reliability under centralized management when more than three parties are involved, let alone a microgrid that is designed to have a zero-carbon impact from end to end and from cradle to grave. Take into consideration any of the schemes or hypotheticals proposed so far, and this seemingly unsolvable problem finds clarity. Observe the kick-off to this series of papers on the future of zero-carbon networks, the "Decentralized Dispatch Problem". Let there exist a microgrid formed by electrical infrastructure and multiple facilities on a data center campus, whose modernized grid receives dispatchable generation from four sources. These sources are parameterized with initial conditions, at the highest level, as follows:

• G1[g=φ], a large solar plant at the null set condition "φ"
• G2[g=φ], a small solar plant at the null set condition "φ"
• G3[g=φ], a large wind plant at the null set condition "φ"
• G4[g=φ], a small geothermal plant at the null set condition "φ"

The null set condition for these four sources is "do not dispatch power", owing to the "retreat" command of the Byzantine Generals Problem. All four of these sources are competitors; by nature, if one were to fail, more power would be requested of the remaining three. Therefore, none of them can trust one another. This system shall be assumed trustless. However, all four do share a common goal: the dispatch of generated and stored power to a microgrid supporting a data center, for profit. To achieve the optimal dispatch of power, it would not be unusual for an economic dispatch problem to be computed by a central entity, in which cu is the time-varying cost or revenue of utility power or of injection back to the utility, and pdc is the storage of power being loaded for reserves or being unloaded for use or injection at time t.
C(t) is the cost of the economic dispatch solution to the power requirements, and P(t) is the net power of the economic solution from the perspective of the data center main bus at time t. e(t) is the solution to the economic dispatch problem at time t. τ is some constant value of t, to keep this model simple. Note that this model could be expanded to include scattered reserves for storing power within the microgrid itself. Additionally, the resources of only a single data center shall be considered for this initial paper, though a data center microgrid could be modeled with parameters at each colocation facility and, at a deeper level, each colocation tenant. The bounds of a microgrid are dynamic and highly dependent on frame of reference, such that each of the various colocation facilities on a data center campus that forms the microgrid might consider all of the other facilities to be the microgrid from its own perspective. The economic dispatch formula in this paper states that the economic dispatch solution e(t) models the calculus-optimized supply and injection of power from the perspective of the microgrid-forming campus at time t on the basis of cost savings. The goal of economic dispatch, as a refresher, is to obtain the lowest value of C (cost per kilowatt or kilowatt-hour) that solves e (the problem) for P (power) at time t. Thus, a linear equation is formed to solve e(t). If there are multiple cost coefficients available at time t, by whatever means, in the system, then a system of linear equations is formed that may be solvable as a matrix of coefficients or via differential equations; however, the mathematics to demonstrate such a dynamic system are beyond the scope of this paper and its intended audience.
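When generator costs are modeled as quadratic, the "matrix of coefficients" mentioned above takes a standard form: the equal-incremental-cost conditions plus the power balance constraint give a linear system in the outputs and the marginal price λ. A sketch with illustrative coefficients (this is the textbook formulation without generator limits, not this paper's exact model):

```python
import numpy as np

def equal_lambda_dispatch(b, c, demand):
    """Economic dispatch for costs C_i(P) = a_i + b_i*P + c_i*P^2.
    Optimality requires the same marginal cost at every unit,
    b_i + 2*c_i*P_i = lambda, plus the balance sum(P_i) = demand,
    which together are linear in (P_1..P_n, lambda)."""
    n = len(b)
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        A[i, i] = 2.0 * c[i]   # 2*c_i*P_i ...
        A[i, n] = -1.0         # ... - lambda = -b_i
        rhs[i] = -b[i]
    A[n, :n] = 1.0             # balance: outputs sum to demand
    rhs[n] = demand
    solution = np.linalg.solve(A, rhs)
    return solution[:n], solution[n]

outputs, marginal_price = equal_lambda_dispatch(
    b=[1.0, 1.0], c=[0.5, 0.5], demand=10.0)
```

Two identical units split a 10-unit demand evenly, 5 each, at a marginal price of 6 per unit; adding more units or time-varying coefficients only grows the same matrix.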
Assuming decentralized decision-making consensus, a central entity no longer solves this problem, as the utility once did, nor is it placed upon the shoulders of the microgrid-forming data center campus itself (adding cost, footprint, carbon impact, and additional equipment), as the system designer may be tempted to do. Instead, blockchain enables an elegant solution to economic dispatch using the resources of the generation and distribution network itself. While numerous methodologies are available to establish collaboration on a blockchain, imagine a high-level adaptation of the proof-of-stake method into a proof of trust for the sake of thresholding block miners, in which the network itself distributes the computation. To achieve successful dispatch, the following pseudo-algorithm must be realized by design and operation, such that Gdc represents data center resources, demand side management, reserve power, and control, and such that the unity set condition implies that an entity defaulting to the state of unity is normally injecting power into its main bus. Imagine a second case in which there is a microgrid supporting a data center (i.e., a microgrid-forming campus of colocation facilities whose bus-to-bus infrastructure forms a path for the supply of power to some data center) capable of storing enough reserves from distributed energy resource dispatch that those reserves regularly exceed the demand of the data center campus or facility. So, it follows that, for whatever predicted or required time interval [a,b]:

∫[a,b] Gdc(t) dt > ∫[a,b] P(t) dt

in which the initial computation assumes that the microgrid is operating in utility mode, so that the value of “utility cost” is non-negative and the data center bus is not initially injecting power.
If the initial computation finds that this conditional is satisfied, then the system allows the data center to operate as a power source, either via demand side management and demand response (reducing strain on the macrogrid in exchange for financial compensation from the utility) or via direct injection from the facility buses and/or Distributed Energy Resource (DER) buses into the utility bus. The economic dispatch problem is initiated upon autonomous identification of a need for energy downstream or an offer to supply energy upstream. The data center may try to resell power it has received from the utility or from any distributed energy resource back to the other for a profit, and DERs may try to do the same to the data center, the utility, and one another. Each microgrid thus forms a proverbial energy trading market. Metering technology is integrated with the blockchain via partial storage on local IoT devices, so that a need for power at any point in the network can be detected and a network-optimized energy supply trade initiated. Each entity has its own rulebook containing parameters for the safe investment of trust tokens and for energy dispatch for direct profit, and this rulebook is autonomously evaluated against an anonymous need for power somewhere in the network, alongside current conditions. If the entity's rulebook greenlights the need for power, then that entity's processors automatically wager trust for the right to build the block of economic dispatch. If the entity builds a block containing a partial economic dispatch solution that is false, or that is biased toward the block builder beyond the network's threshold for loss, then the entity loses its wagered trust and has a reduced ability to be selected for future block building on power trading solutions. If the entity builds a truthful solution to the economic dispatch problem, it is rewarded with an increase in trust.
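The rulebook-and-wager loop described above can be sketched at a high level. The class name, rulebook parameters, and the 10% wager cap below are all hypothetical, chosen only to make the reward/slash mechanic concrete:

```python
class Entity:
    """Sketch of a participant: a rulebook plus a trust-token balance."""

    def __init__(self, name, trust_tokens, min_price, max_wager_fraction=0.1):
        self.name = name
        self.trust = trust_tokens
        self.min_price = min_price           # rulebook parameter: floor price
        self.max_wager = max_wager_fraction  # cap on trust put at risk per bid

    def evaluate(self, offered_price_kwh, requested_kw):
        """Rulebook check: greenlight only profitable, well-formed requests."""
        return offered_price_kwh >= self.min_price and requested_kw > 0

    def wager(self):
        """Stake a bounded share of trust tokens on building the block."""
        return self.trust * self.max_wager

    def settle(self, truthful):
        """Reward a truthful block builder; slash one who cheated."""
        stake = self.wager()
        self.trust += stake if truthful else -stake

e = Entity("G1", trust_tokens=100.0, min_price=0.05)
assert e.evaluate(offered_price_kwh=0.08, requested_kw=500)
e.settle(truthful=True)   # trust grows from 100.0 to 110.0
```

A dishonest settlement (`settle(truthful=False)`) burns the same stake instead, which is the deterrent the paper relies on.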
The consensus algorithm has thresholds for loss, and the blockchain itself has algorithms for detecting false blocks or deliberate attempts at sabotage, which are much more difficult to accomplish in a proof-of-stake or proof-of-trust system. Out of all entities wagering some amount of their trust tokens, the network consensus algorithm selects block builders from the bidding entities in one of three ways: a random subpopulation fulfilling a minimum computed “net trust” requirement based on the importance of the economic dispatch being considered; a set whose summed trust fulfills good-actor requirements even if an individual bad actor makes it into the bid; or any bidder meeting a computed minimum wager of trust. The third option has an issue: participant rulebooks either need to know the minimum wager of trust, or they are left blindly guessing how much to wager. The second option builds additional resilience against bad actors, because a single bad actor who gets in and wagers trust on a solution it provides can be systematically overwhelmed by all the other solutions to the problem. Some weight may need to be given to individual solution submissions, based on how trustworthy an entity is or how much trust it wagered, to create an incentivizing market. In proof of trust, the prover must wager some amount of its trust tokens, Sk, in order to bid on inclusion in the pool of potential block-building participants. Depending on the system, trust tokens could in theory form a secondary trading system on top of the direct energy trading floor created by bus-to-bus injections, or they could be used by entities who would like to sell energy injection anonymously to another entity calling for grid injection. They could wager their trust that a solution is cost efficient, energy efficient, or zero carbon, if that is of interest to the recipient of power.
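The second selection option, picking a random subpopulation whose summed wagered trust meets the dispatch's requirement, might look like this in outline. The function name, bidder values, and the fixed random seed are illustrative assumptions:

```python
import random

def select_builders(bidders, required_net_trust, rng=random.Random(0)):
    """Pick a random subpopulation of bidders whose summed wagered trust
    meets the dispatch's "net trust" requirement (the second option above).

    bidders: list of (name, wagered_trust) tuples. Illustrative only; a
    real network would also weight by trust ranking and wager size.
    """
    pool = list(bidders)
    rng.shuffle(pool)  # random subpopulation, not merit order
    chosen, total = [], 0.0
    for name, wagered in pool:
        chosen.append(name)
        total += wagered
        if total >= required_net_trust:
            return chosen, total
    raise ValueError("bidders cannot meet the net trust requirement")

bidders = [("G1", 40.0), ("G2", 10.0), ("G3", 30.0), ("G4", 25.0)]
builders, net = select_builders(bidders, required_net_trust=60.0)
```

Because the pool is summed rather than screened one by one, a single bad actor's stake is diluted by the honest trust around it, which is exactly the resilience argument made above.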
There are many different ways to get participants interested in wagering trust in order to build blocks, each with pros and cons; likewise, each method of gamifying this system may need to be tweaked for resilience and for holes in its logic. Let B(Sk, pj) denote the initiation of some block algorithm that wagers trust Sk in exchange for the ability to sell power to some anonymous demanding load pj. This notion of wagering trust suggests another potential implementation of ZGHG data-center-formed microgrids and energy trading, in which multiple entities may provide solutions that benefit them for profit while falling within some maximum threshold of loss, price increase, or temporary efficiency drop at any point in the system. So long as a solution falls within the consensus algorithm's maximum loss thresholds, and the trust ranking of the party is high enough, that party can either be directly selected as an economic dispatch solution that optimizes cost from its facility's perspective, or be entered into a random pool of individuals with fair trust rankings. Once a solution is selected, it is sent to a random set of entities whose trust rankings match the significance of the proposed dispatch. It is hashed to protect individual identities, and proof of stake, proof of trust, or proof of knowledge is used by these block builders to ensure they are incentivized to build a correct block for the selected dispatch solution. The redundant blocks are based on redundant data taken from computations made across the network, ensuring a higher tier of redundancy in the computational infrastructure, and the redundant blocks are checked against the consensus algorithm. Redundant blocks offer higher protection but, just like a redundant data center, they can increase cost to the network or to individual entities. Assuming a single block is made and added to the blockchain, the blockchain update is examined, approved or rejected, and distributed to the network.
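A toy version of the redundant-block check might hash each independently built block and accept the update only when enough of them agree. The SHA-256 hashing scheme and the quorum rule here are illustrative assumptions, not the paper's specification:

```python
import hashlib
import json
from collections import Counter

def block_hash(solution):
    """Hash a dispatch solution so builder identities stay out of the block."""
    payload = json.dumps(solution, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def consensus_check(redundant_blocks, quorum):
    """Accept the update only if at least `quorum` independently built
    blocks agree (identical hashes). Sketch of the redundancy check; a
    real chain would also validate contents, signatures, and thresholds."""
    counts = Counter(block_hash(b) for b in redundant_blocks)
    digest, votes = counts.most_common(1)[0]
    return digest, votes >= quorum

solution = {"G1": 5000, "G3": 1000}
blocks = [solution, solution, {"G1": 6000}]  # one faulty builder disagrees
digest, accepted = consensus_check(blocks, quorum=2)
```

As the prose notes, raising the quorum (more redundant blocks) buys protection at the cost of more computation across the network.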
Transactions, operations, and exchanges of funds and power are automatically dispatched for all relevant parties. Thus, the zero-greenhouse-gas microgrid is able to meet its own power needs and potentially obtain external profit via injection. In conclusion, and in brief: using the framework of decentralized networking, control, and computation enabled by blockchain technologies and hybrid peer-to-peer IoT storage, it is possible to model the operation and control of a simple yet fully renewable microgrid data center environment. Instead of designing green data centers living downstream of high-emissions utility power supplies, it is finally possible for engineers and entrepreneurial interests to create a system designed for green energy, one that demands it.

7. Future Papers

Second Paper, First Series: “Server Subletting to Save the World: How Automated Server Resource Trading Works and Why Green Data Centers Need it”
Third Paper, First Series: “Taking Back the Grid: Integration between Zero-Emission Microgrids and Data Center Tenants”
Fourth Paper, First Series: “Microgrid 2.0: How the Decentralized Tomorrow will Create Microgrids of Data centers”
“Decentralized Energy As A Service: A Green Future Without Macrogrids”
Emerging Technology Round-Up: “A Who's Who of Zero Carbon Data Center Innovators”

8. Further Reading

Y. Sang, U. Cali, M. Kuzlu, M. Pipattanasomporn, C. Lima and S. Chen, "IEEE SA Blockchain in Energy Standardization Framework: Grid and Prosumer Use Cases," 2020 IEEE Power & Energy Society General Meeting (PESGM), Montreal, QC, Canada, 2020, pp. 1-5, doi: 10.1109/PESGM41954.2020.9281709.
R. G.S. and M. Dakshayini, "Block-chain Implementation of Letter of Credit based Trading system in Supply Chain Domain," 2020 International Conference on Mainstreaming Block Chain Implementation (ICOMBI), Bengaluru, India, 2020, pp. 1-5, doi: 10.23919/ICOMBI48604.2020.9203485.
V. Naidu, K. Mudliar, A. Naik and P.
Bhavathankar, "A Fully Observable Supply Chain Management System Using Block Chain and IOT," 2018 3rd International Conference for Convergence in Technology (I2CT), Pune, India, 2018, pp. 1-4, doi: 10.1109/I2CT.2018.8529725.
R. S. Kadadevaramth, D. Sharath, B. Ravishankar and P. Mohan Kumar, "A Review and development of research framework on Technological Adoption of Blockchain and IoT in Supply Chain Network Optimization," 2020 International Conference on Mainstreaming Block Chain Implementation (ICOMBI), Bengaluru, India, 2020, pp. 1-8, doi: 10.23919/ICOMBI48604.2020.9203339.
M. Nakasumi, "Information Sharing for Supply Chain Management Based on Block Chain Technology," 2017 IEEE 19th Conference on Business Informatics (CBI), Thessaloniki, Greece, 2017, pp. 140-149, doi: 10.1109/CBI.2017.56.
Z. Mahmood and J. Vacius, "Privacy-Preserving Block-chain Framework Based on Ring Signatures (RSs) and Zero-Knowledge Proofs (ZKPs)," 2020 International Conference on Innovation and Intelligence for Informatics, Computing and Technologies (3ICT), Sakheer, Bahrain, 2020, pp. 1-6, doi: 10.1109/3ICT51146.2020.9312014.
A. Judmayer, N. Stifter, K. Krombholz, E. Weippl, E. Bertino, and R. Sandhu, Blocks and Chains: Introduction to Bitcoin, Cryptocurrencies, and Their Consensus Mechanisms, Morgan & Claypool, 2017, doi: 10.2200/S00773ED1V01Y201704SPT020.
S. E. Chang and Y. Chen, "When Blockchain Meets Supply Chain: A Systematic Literature Review on Current Development and Potential Applications," in IEEE Access, vol. 8, pp. 62478-62494, 2020, doi: 10.1109/ACCESS.2020.2983601.

9. Works Cited

H. Carruthers and T. Casavant, "Commission for Environmental Cooperation," in What is a "Carbon Neutral Building", 2013, pp. 1-6. http://www3.cec.org/islandora-gb/islandora/object/islandora:1112/datastream/OBJ-EN/view
ISG, Does your enterprise need blockchain? Information Services Group, 2021. https://isg-one.com/consulting/blockchain
L. Lamport, R.
Shostak, and M. Pease, "The Byzantine Generals Problem," ACM Transactions on Programming Languages and Systems, vol. 4, no. 3, pp. 382-401, 1982. https://lamport.azurewebsites.net/pubs/byz.pdf
M. Conoscenti, A. Vetrò, and J. C. De Martin, "Blockchain for the Internet of Things: A systematic literature review," 2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA), pp. 1-6. https://ieeexplore.ieee.org/document/7945805
M. Huillet, "Bitcoin Will Follow Ethereum And Move to Proof-of-Stake, Says Bitcoin Suisse Founder," 14-Apr-2020. https://cointelegraph.com/news/bitcoin-will-follow-ethereum-and-move-to-proof-of-stake-says-bitcoin-suisse-founder
Tomorrow's Decarbonized and Decentralized Power Market. Grid Evolution.
"What are zk-SNARKs?," Zcash, 09-Sep-2019. https://z.cash/technology/zksnarks/ [Accessed: 19-Apr-2021].

About the Author

Matthew J. Karashik, EIT, is an Electrical Engineer at EYP MCF. Matthew's experience includes engineering design, drafting, standards review, and NFPA 70 (National Electrical Code) compliance, as well as the development of single-line diagrams, electrical floor plans, grounding plans, grounding diagrams, and electrical details. He has performed reviews of applicable local codes, site visits, surveys, and site assessments, along with energy efficiency and cost savings analyses of emerging technologies for data centers and power utilities. Matthew has completed numerous site evaluations and demand side management studies using energy modeling and monitoring software tools, and has design experience with Revit/BIM 360. He holds a Bachelor of Science in Electrical, Electronics and Communications Engineering from New York University.

  • West 7 Center Economizer | EYP Mission Critical Facilities, Inc. | United States

April 27, 2021

Using a Data Center Water Side Economizer on an existing facility to reduce water and energy usage
By: Gardson Githu, PE

EYP Mission Critical Facilities Inc (EYP MCF) worked with the client to reduce the power and water consumption of their fully operational data center. Weighing variables including cost, code compliance, maintaining existing reliability, space, and constructability while the data center remained fully operational, the client can obtain up to 25% energy savings and 7% water usage savings.

Rising Realty Partners, owner of West 7 Center, formerly known as the Garland Center, a premier purpose-built mission-critical, colocation, and data center facility located at 1200 W. 7th Street in downtown Los Angeles, is one of the most energy-efficient data centers following its recent central plant retrofit project. At a full load of 12 megawatts of data center load, the projected energy saving is approximately 6,900,000 kWh per year after the retrofit project. With an annualized mechanical partial PUE of 1.37, this data center at full load is among the top tier of its kind in the middle of the downtown Los Angeles business district. The data center occupies the three subterranean levels of a nine-story office building. The bottom floor, on lower level 3, is reserved for the critical mechanical and electrical infrastructure, while the top two levels, lower levels 1 and 2, are the data center white space. EYP MCF performed an energy audit and made recommendations on improving energy usage in the data center and lowering its PUE (Power Usage Effectiveness). Upon completion of the audit, EYP MCF recommended adding a water side economizer to the existing 4400-ton water-cooled chiller plant, both for energy savings and to make the plant compliant with the California Energy Code (Title 24).
The existing central plant is an N+1 chiller plant consisting of four 1100-ton Trane centrifugal water-cooled chillers, four 3300 GPM condenser water pumps, four 2600 GPM primary chilled water pumps, cooling towers, and multiple secondary chilled water pumps.

Energy Analysis and Benchmarking

The West 7 Center facility operations team provided central plant energy usage trends from the Building Management System (BMS), water usage trending, and utility bills. During the study, the total building cooling load demand was approximately 761 tons, so at the time only one of the four chillers was required to meet the cooling demand for the building. Upon completion of the benchmarking process, the data was analyzed; the table and chart below present the breakdown of the energy usage (Figure 1).

Figure 1 Central Plant Energy and Water Usage Summary

Full Build Data Center Projected Energy Savings

The energy base model was created using the Trane TRACE energy modeling tool. The base model was calibrated and tested using actual field measurements and power consumption of the mechanical equipment at the site. The calibrated base model was then used to model the future data center load profile and its power and energy consumption. Figure 1 above represents the data center's full-load energy usage without an economizer system and with a chilled water supply setpoint of 43 degrees Fahrenheit.

The economizer system allows the chilled water system to operate at chilled water temperatures above the current design setpoint. The chilled water setpoint for the economizer base model was set at 65 degrees Fahrenheit, driven by current Energy Code requirements and projected industry trends. Figure 2 below represents the data center's full-load energy usage with an economizer system and a chilled water supply setpoint of 65 degrees Fahrenheit.

Figure 2 Full Build Data Center Energy Consumption with Economizer
Projected annual energy savings of 1,563,145 kWh and water savings of 1.8 million gallons at the current load condition were computed and compiled, as summarized in Figure 3. Full implementation of the economizer is projected to yield 6,944,371 kWh of energy savings and approximately 7 million gallons of water savings per year at full load.

Figure 3 Energy Savings Summary

Economizer Design Considerations

EYP Mission Critical Facilities Inc. (EYP MCF), in collaboration with Rising Realty Partners, considered several factors in the design of the economizer retrofit project: cost, code compliance, maintaining the reliability of the existing system, space availability, and constructability while the data center was still fully operational. Of all these factors, constructability and reliability were the two most important to Rising Realty Partners and EYP MCF. The original design intent was for the chillers to operate in a line-up formation of primary pump, condenser pump, and cooling tower connected to a common header; however, none of the pumps or towers are dedicated to any chiller. EYP MCF recommended and designed an economizer system consistent with this original design intent. The economizer heat exchanger was to be connected in series with the chiller on both the chilled water and condenser water sides. Figure 4 below presents a one-line economizer heat exchanger connection diagram. As indicated, both chilled water and condenser water are connected in series, eliminating the cost of additional pumps. The system is designed for the economizer to provide partial cooling of the chilled water, with the chiller providing trim cooling to meet the setpoint. On the condenser side, the system is designed to maintain the condenser water temperature through the chiller within the manufacturer's recommended range.
This is accomplished by passing warm water from the heat exchanger exit through the chiller and by modulating a 3-way valve controlled by the chiller control system.

Figure 4 Heat Exchanger High-Level Connection Diagram

Successful Project Completed During Shutdown

In the fall of 2019, the economizer retrofit construction commenced, followed by rigorous balancing and testing, a control system upgrade, and system functional testing. The project was completed in the fall of 2020. Figure 5 is a photo of one of the heat exchanger installations at the site, while Figure 6 is a photo of the piping modification.

Figure 5 Heat Exchanger Installation Photo
Figure 6 Economizer Piping Modified Installation Photo

Conclusion

The West 7 Center economizer retrofit is a case study of a successful retrofit project. It is projected to yield energy savings while maintaining the operational reliability of the system. The projected annual energy and water savings and a partial mechanical PUE of 1.37 make this data center one of the most efficient in downtown Los Angeles. This EYP Mission Critical Facilities Inc. (EYP MCF) design provides operational flexibility as the information technology industry continues to evolve.
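As a back-of-the-envelope check, the article's headline numbers tie together: 12 MW of IT load running year-round against the projected 6,944,371 kWh of savings. Note that the baseline partial PUE below is inferred from those figures, not stated in the article:

```python
# Figures from the article: 12 MW full IT load, 6,944,371 kWh/yr savings,
# partial mechanical PUE of 1.37 with the economizer.
it_load_kw = 12_000
hours_per_year = 8760
it_energy_kwh = it_load_kw * hours_per_year  # 105,120,000 kWh/yr of IT energy

savings_kwh = 6_944_371
pue_with_economizer = 1.37

# Express the savings as a reduction in partial mechanical PUE:
pue_reduction = savings_kwh / it_energy_kwh               # about 0.066
implied_baseline_pue = pue_with_economizer + pue_reduction  # about 1.44
```

In other words, the economizer trims roughly 6.6% of the IT energy off the mechanical overhead, consistent with a pre-retrofit partial PUE in the mid-1.4 range under these assumptions.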

  • EYP Mission Critical Facilities, Inc. | Home | United States

New White Paper: "Reaching for Net-Zero: Achieving Zero Carbon Data Centers by Decentralizing Consensus of Power Supply Amongst Utility and Microgrid Providers" by Matthew Karashik, EIT.

Our Focus: critical facilities solutions aligned to the digital revolution. Secure. Reliable. Flexible. Efficient.
Strategy | Design | Commissioning | Sustainability | Due Diligence | Assurance

Accomplishments & Credentials: Sample Case Studies

High Efficiency Off The Grid Datacenter
Beacon Falls, CT
160,000 SF (Raised Floor) Data Center Master Plan, 28 MW of IT Load
EYP MCF's Master Plan for this Tier III data center developer is premised on the capture of CO2 from the fuel cells for use in beverage production. The site will utilize a 32 MW fuel cell facility with gas-fired generators as the primary power source, using the utility as backup capacity. The facility will use a chilled water central plant with centrifugal chillers to provide an absorption mechanical system.

300 Acre Data Center Campus with a 500+MW Utility-Scale Solar Facility
Laramie, WY
300-acre Data Center Campus Design, 25-200 MW
EYP MCF is developing the conceptual design, building budget, and cost-of-power analysis for a new data center expected to have 25 MW of capacity on Day 1, with the ability to scale to 200 MW over time. Set in a data center business park on 300 acres of privately-owned land designated as a Federal Opportunity Zone, the project also calls for a 500+MW utility-scale solar facility on up to 12,000 acres of the client's ranch in southeast Wyoming.

Gas Turbines Low Emission Data Center
Martinsburg, WV
Greenfield Data Center Design, 2 x 350,000 SF (Raised Floor)
EYP MCF provided master planning and detailed design for a site capable of housing two 350,000 SF, 78 MW data centers with a critical design load for the site of 104 MW.
The project intends to use GE gas turbines as the primary power source and will include a unique and proprietary Linde/BASF carbon capture technology to eliminate all emissions from the turbines. The facility is planned to utilize fully modular IT space, with mechanical and electrical infrastructure using PVD Modular solutions.

Immersion Cooling for Bitcoin Mining
Coshocton, Ohio
Data Center Campus Feasibility Study, Master Planning, Design, 34 Units
EYP MCF is providing a data center feasibility study, master planning, and detailed design to convert an existing manufacturing site into a Bitcoin mining campus. The project design specifications include 34 modular data center units and will utilize liquid immersion cooling to run the heat-intensive mining technologies.

38-acre Data Center Campus
Ashburn, VA
Tier III+, 5 MW, Commissioning, Level 2 to 5
EYP MCF was selected to provide commissioning services for this 38-acre data center campus residing in the heart of the nation's densest connectivity corridor. EYP MCF will perform the commissioning of multiple quadrants, with a total of 5 MW of IT load, in 3 different phases. EYP MCF's scope included the development and maintenance of an issues log (commissioning deficiency list) tracking field observations related to construction; the goal of this log is to monitor and prioritize identified items. EYP provided Level 2 through Level 5 commissioning; services included complying with the customer's standards, reviewing the MEP documents, detailed review reports of submittals, shop drawings, control sequences of operation, and equipment monitoring and alarms, and performing infrared testing of electrical equipment per NETA ATS requirements.

Four Story Data Center
Santa Clara, CA
160,450 SF, 16 MW, Commissioning, Level 1 to 5 IST
EYP MCF's role was to act as commissioning authority (CxA) for the phased buildout for this colocation provider.
The project was programmed to be designed as a four-story, 16 MW (IT load) facility of approximately 160,450 SF. EYP MCF provided data center commissioning services for the project throughout the design, construction, start-up, and initial period of operation. The primary role of the EYP MCF Commissioning Authority (CxA) was to act as the owner's advocate to ensure that all parties adhered to the design intent and the contract documents. To achieve this objective, the CxA assisted with defining and documenting the Owner's criteria for system function, performance, and maintainability, in addition to developing and coordinating the execution of a testing plan and observing and documenting the performance of installed systems.

Global Pharmaceutical Master Planning
Multiple Locations
Data Center Sourcing Options, TCO Analysis, Colocation RFI Development & Selection
This global pharmaceutical company was looking for a partner to support facility assessments of the data centers and local server rooms associated with its manufacturing and research sites. These assessments included: a review of all major MEP systems, identification of single points of failure, and remediation recommendations; an IT deployment assessment covering IT inventory, layout, and rack elevations for existing inventory as well as planned deployments of new systems and structured cabling; development of a data center sourcing strategy to define in-house/colo/cloud environments; and support in identifying and selecting colocation candidates in multiple countries around the globe.

Hospital Group Hybrid IT & Data Center Strategy
Boston, MA
Multi-Cloud Strategy & Co-Location Selection
One of the country's largest healthcare consortiums was in the process of merging. The merger included multiple regional hospitals, whose IT and real estate organizations were seeking a strategy to combine and consolidate data centers across all hospitals.
The goal was to modernize, increase resiliency, reduce cost, exploit new architectures, and reduce the reliance upon leased space. They needed help for these two large organizations to come together and develop options and costs for this complex planning effort. The options included a data center sourcing strategy evaluating how to host their applications in different multi-cloud environments (in-house, colocation providers, and private cloud providers).

Data Center Due Diligence
Multiple Locations Across the United States
8 Data Centers
EYP MCF was selected to provide due diligence site evaluation services for more than 8 data centers in U.S. territory relative to their ongoing use and possible expansion. The scope included reviewing the existing building infrastructure as well as the data center operations, including maintenance documentation (SOPs, EOPs, MOPs), facility manpower, and capital operating expenditures and budgets for the last 5 years. The study provided information on redundancy, drawbacks and limitations, single points of failure, commissioning reports, geotechnical studies, BMS systems, security, branch circuit monitoring, and software solutions. EYP MCF also provided analysis and commentary on the refurbishment, maintenance, and upgrade capital required, and reviewed expansion plans and budgets.

Data Center Due Diligence
North America and Europe
16 Data Centers
EYP MCF was selected to provide Mechanical, Electrical and Plumbing/Fire Protection (MEP/FP) services culminating in a due diligence review of each site. The purpose of the review was to give a professional opinion on the condition of the existing facility infrastructure relative to its ongoing use and possible expansion as a data center facility. EYP MCF followed a plan that started with a review of all the available documentation for each site, conducted site visits at 13 of the 16 data centers, and performed a desktop review for the remaining 3.
The evaluation included a review of the building infrastructure relative to the data center operations and maintenance documentation.
