
Search Results


  • Contact Us | EYP Mission Critical Facilities, Part of Ramboll | United States

    CONTACT We would love to hear from you! Tel: +1-315-956-6100 E-mail: info@eypmcfinc.com Corporate Headquarters: 500 Summit Lake Drive, Suite 180, Valhalla, NY 10595

  • Newsletter | EYP Mission Critical Facilities, Part of Ramboll | United States

    NEWSLETTER EYP Mission Critical Facilities, Part of Ramboll. Join our mailing list and never miss an update. Corporate Headquarters: 500 Summit Lake Drive, Suite 180, Valhalla, NY 10595. info@eypmcfinc.com Tel: +1-315-956-6100

  • Careers | Join us | EYP Mission Critical Facilities, Part of Ramboll

    CAREERS CAD/BIM Design Operator (Mechanical) - HVAC/Plumbing/Fire Protection - New York Metro Area
    Valhalla, NY, United States
    EYP MCF, Part of Ramboll is a pioneer and leader in data center strategy, planning, design, integration, commissioning, and testing, with experience working in thousands of data centers in the U.S. and across the globe. We provide a broad set of services for enterprise, institutional, webscale, service provider, and colocation companies. Our team of consultants assists clients in understanding how to bring data closer to their own customers, bringing all components of IT and the facility together, and enabling rapid deployment of a solution that achieves critical objectives. We believe we are strongly positioned to create flexible environments that can easily adapt to changes and disruptions -- while eliminating risks and creating efficiencies. This career-growth minded opportunity offers exciting projects with leading-edge technology and innovation.
    RESPONSIBILITIES
    The CAD/BIM operator will work under the supervision of senior design engineers and will be responsible for setting up drawings, developing the drawings in AutoCAD and/or Revit, and executing daily tasks in a timely manner to meet project deadlines. Roles and responsibilities include:
    • Conforming to company standards (AutoCAD/Revit, network)
    • Updating the project drawing database with AutoCAD/Revit backgrounds received from other consultants
    • Executing redlines from the engineering team
    • Preparing final project drawings, collated by discipline, in PDF format if required
    • Excellent computer (Microsoft Word, Excel, etc.), mathematical, and communication skills
    • Ability to work with others and accept direction from various senior personnel
    • Self-starter, able to work independently and responsible for meeting deadlines, including working the hours necessary to meet deliverable deadlines
    • Able to understand floor plans, with general knowledge of building systems
    • Working knowledge of the English language and the ability to speak and write English using technical terms
    • Travel to project sites via car/plane/train as required to complete tasks
    ACCOUNTABILITIES
    Accountable for the accuracy and completeness of work assigned. Works under close supervision. Work is regularly reviewed for accuracy, adequacy, and conformance with prescribed procedures.
    QUALIFICATIONS
    • Technical school diploma or college diploma
    • Minimum of 5-7 years of experience in CAD/Revit
    EYP MCF, Part of Ramboll is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race; color; religion; genetic information; national origin; sex; pregnancy, childbirth, or related medical conditions; age; disability; citizenship status; uniform service member status; or any other protected class under federal, state, or local law.
    Apply here. Experiencing any difficulties? Please submit your resume to info@eypmcfinc.com

  • Careers | Join us | EYP Mission Critical Facilities, Part of Ramboll

    CAREERS Mid-Level Engineer Mechanical - (HVAC) - California
    Los Angeles, CA, United States
    EYP MCF, Part of Ramboll is a pioneer and leader in data center strategy, planning, design, integration, commissioning, and testing, with experience working in thousands of data centers in the U.S. and across the globe. We provide a broad set of services for enterprise, institutional, webscale, service provider, and colocation companies. Our team of consultants assists clients in understanding how to bring data closer to their own customers, bringing all components of IT and the facility together, and enabling rapid deployment of a solution that achieves critical objectives. We believe we are strongly positioned to create flexible environments that can easily adapt to changes and disruptions -- while eliminating risks and creating efficiencies. This career-growth minded opportunity offers exciting projects with leading-edge technology and innovation.
    RESPONSIBILITIES
    The engineering candidate shall work under the supervision of a senior engineer and will be responsible for HVAC design. Works on problems of diverse scope where analysis of data requires evaluation of identifiable factors. Exercises judgment within generally defined practices and policies in selecting methods and techniques for obtaining solutions. May be the primary contact with clients. The candidate shall have a thorough understanding of HVAC systems and the ability to execute projects with minimal oversight. Roles and responsibilities include:
    • Travel to project sites via car/plane/train as required to complete tasks
    • Ability to execute standard HVAC calculations
    • Equipment selections and equipment applications
    • A general understanding of HVAC systems, building codes, etc.
    • Able to execute their own drafting through AutoCAD/Revit
    • Conforming to company standards (AutoCAD/Revit, network)
    • Good writing and oral communication skills
    • Excellent computer (Microsoft Word, Excel, etc.), mathematical, and communication skills
    • Ability to work with others and accept direction from various senior personnel
    • Self-starter, able to work independently and responsible for meeting deadlines, including working the hours necessary to meet deliverable deadlines
    • Working knowledge of the English language and the ability to speak and write English using technical terms
    ACCOUNTABILITIES
    Accountable for the accuracy and completeness of work assigned. Works without close supervision. Exercises independent judgment in selecting and interpreting information.
    QUALIFICATIONS
    • Bachelor's degree in mechanical engineering
    • Minimum of 5-7 years of experience in HVAC design and AutoCAD/Revit
    • Professional Engineering license a plus
    • Valid driver's license
    EYP MCF, Part of Ramboll is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race; color; religion; genetic information; national origin; sex; pregnancy, childbirth, or related medical conditions; age; disability; citizenship status; uniform service member status; or any other protected class under federal, state, or local law.
    Apply here. Experiencing any difficulties? Please submit your resume to info@eypmcfinc.com

  • About | EYP Mission Critical Facilities, Part of Ramboll | USA

    ABOUT EYP Mission Critical Facilities, Part of Ramboll
    A complete portfolio of consulting and engineering services to lead you through the strategy, design, integration, and operational efficiency of your data center facilities.
    Defining and implementing a holistic data center strategy that meets our client's business needs and mission is our goal. Being able to advise those clients on the right hybrid IT or adaptive stack plan takes proven methodologies and hands-on experience. This expertise simply cannot be learned "on the job", as the wrong decisions can ultimately affect customer adoption, revenue generation, market share, P&L performance, or mission success. Thus, we have assembled a team of industry professional strategists, engineers, IT consultants, and facility operations experts who have on average 25 years of expertise in their space. These experts work collaboratively to understand the client's ultimate objectives and develop strategies, roadmaps, designs, and integration plans to achieve them.
    We are aware the data center industry faces constant change and disruption, yet is inherently conservative when implementing new solutions. Given our deep history of both developing new and innovative planning tools, design concepts, and operations methodologies, and understanding what others have developed through working in thousands of data centers across the globe, we believe we are strongly positioned to create flexible environments that can easily adapt to those changes and disruptions -- while eliminating risks and creating efficiencies.
    From our founding in 1996, EYP Mission Critical Facilities, Part of Ramboll has been a pioneer in data center strategy, planning, design, and testing. We've never become complacent with our position in the industry, choosing instead to double down on our mission during challenging global economic times and internet and financial "bubbles." When competitors chose to become generalists, we never lost focus on the data center, creating the first truly global critical facilities consulting practice. From cloud service providers, to colocation, to enterprises, our leadership in critical facilities has shaped the way public, private, and institutional entities store, secure, and move their digital assets.
    Throughout our history, we've led the industry in "firsts." From certifications to design topologies, testing techniques, and space, power, and cooling tools, we set the standard for successfully melding the real-world needs of the IT industry with their physical and environmental realities. Most recently, this has taken the form of modular and micro data centers focused on growth of the cloud, digital edge, and IoT environments.
    We often hear from customers and partners that we "wrote the book" in this industry, and in our newest chapter, we continue to think years ahead in terms of technology trends, developing the solutions that enable our clients to deliver services to their customers and constituents in a complex, dynamic world. In fact, the experience EYP MCF, Part of Ramboll has garnered by being part of a global IT leader these last 10 years, and by truly understanding computing infrastructure, high performance environments, and converged and software defined infrastructure, gives us an unmatched understanding of how IT innovations and trends affect the data centers we plan, design, and assure for our clients.
    Our Accomplishments & Credentials

  • Locations | EYP Mission Critical Facilities, Part of Ramboll | USA

    OUR LOCATIONS Mexico City, Los Angeles, New York, Singapore, London, Copenhagen, Dubai, Buenos Aires
    New York (HQ): 500 Summit Lake Drive, Suite 180, Valhalla, NY 10595
    Tel: +1-315-956-6100

  • Staff Augmentation | EYP Mission Critical Facilities, Part of Ramboll | United States

    Add skilled professionals to your existing workforce, providing engineering and subject matter services on demand to support one or multiple data center projects.
    Service Overview
    Creating a next-generation data center involves multiple challenges, including maintenance-friendly, scalable, energy-efficient, and sustainable design. Data center owners, operators, and developers are rolling out new facilities at a furious pace. Often, simply keeping up with the review of design and construction submittals while keeping projects moving is not possible under just-in-time delivery demands. This puts internal staff under significant pressure. EYP MCF, Part of Ramboll brings this level of skilled support to the entire data center planning, design, and build process. Working side-by-side with your development, planning/design/construction, and operations teams, our EYP MCF professionals help assure your business objectives and design standards are incorporated and ultimately delivered throughout your project portfolio's execution.
    How can we help?
    Working on a T&M basis, our engineers are there to support the owner/operator throughout the entire planning, design, and build process, relieving the pressure of meeting challenging timelines, design requirements, equipment procurements, building permits, vendor relationships, and any other challenges that can affect the project timeline or budget. Common to all our work is our ability to customize every project to meet specific technologies and data center objectives. Even for a prototype or duplicate design, each project is different, and multiple circumstances can affect its design, including the location of the project and its climate constraints (e.g., water scarcity), procurement challenges in sourcing the specified MEP equipment to meet the project timeline, or the client's technology demands (e.g., liquid cooling, HPC, or immersion cooling).
    Key Differentiators
    • Our engineers have the experience of designing over 70 million sq. ft. of data center raised-floor space.
    • We lead the way in energy-efficient, cost-efficient, and performance-efficient data center design.
    • EYP MCF, Part of Ramboll is a leader in the development of LEED-certified data centers, and that skill can be applied to optimize any data center.
    • You will work with consultants who advise governments on data center energy policy to reduce costs, energy consumption, water usage, and greenhouse gas emissions.
    • Extensive worldwide data center project experience: we leverage our proven high-density cooling, critical power, and energy-reduction design strategies for each new business case.
    • Our proposed resources are engineering professionals with significant experience completing complex projects, including data center designs with over 100 megawatts (MW) of power capacity and over 700,000 square feet of raised floor.
    • Experience in design using liquid cooling technologies that support extreme high-density servers in a High-Performance Computing (HPC) or supercomputer environment.
    • Design capabilities for renewable energy and off-grid solutions, including hydroelectric power sources.
    • Extensive experience across market sectors, using Computational Fluid Dynamics (CFD) analysis of the raised floor area to predict and optimize design calculations of air volume and setpoint temperatures, enabling savings in capital investment and system operating costs.
    Case Studies
    15+ Data Center Campus Project (Recurrent) | Multiple Locations | 1.25+ Million SF (Raised Floor) | ~750 MW
    EYP MCF, Part of Ramboll has been providing ongoing Staff Augmentation & Technical Support Services to supplement the owner's design team for multiple build-outs simultaneously. Our subject matter experts' role includes reviewing, validating, and commenting on the multiple data center designs and deliverables to establish general acceptability of services content and general compliance with the contract documents, local codes, and the Owner Project Requirements (OPR). Providing these services relieves the client of the need to hire and train new talent while allowing them to focus on timely and cost-effective delivery. Our scope also includes working with the Engineer of Record (EoR) and local construction teams to evaluate any cost optimization opportunities that are identified and ensure their implementation does not conflict with the OPR.
    Near-Zero Planned Water Utilization Efficiency (WUE, liters/kW/hr) | Phoenix, AZ | 800,000 SF | 160 MW Campus
    EYP MCF, Part of Ramboll provided Staff Augmentation & Technical Support Services throughout the planning, design, contractor selection, construction, and commissioning of the client's data center project. The design features a highly efficient closed-loop chilled water system served by air-cooled chillers. The cooling system also includes an integrated economizer capability that reduces compressor energy based on outside ambient temperature, with a planned Water Utilization Efficiency (WUE) near zero (liters/kW/hr), and multiple power density options with an average of up to 250 W/SF.
    Highly Scalable, Flexible and Efficient Data Center | Ashburn, VA | 720,000 SF (Raised Floor) | 142 MW
    EYP MCF, Part of Ramboll provided Staff Augmentation & Technical Support Services throughout the planning, design, construction, and commissioning of the client's data center project. Our subject matter experts provided a peer review of the design and guidance during the basis of design, schematic design, design development, and construction documents phases. The project included overall data center program master schedule/time assistance with RFP development and/or the contractor selection processes. This opportunity led EYP MCF, Part of Ramboll to provide similar services at the client's future data center projects.
    Data Center Located in One of the Nation's Lowest Power Rate Areas | Quincy, WA | Tier III | 16 MW
    The client selected EYP MCF, Part of Ramboll to provide Subject Matter Expert (SME) services for this project in Quincy, WA. The scope included acting as representative for this colocation provider, being responsible for ensuring the provider's design standards and business objectives are respected and reflected in the projects, validating compliance of the project BOD with local code requirements and local conditions, and working with local construction teams to evaluate any cost optimization opportunities. The project consisted of a Tier III data center with a 16 MW One Design shell and 8 MW stacked DM spaces. EYP MCF, Part of Ramboll provided Staff Augmentation & Technical Support Services throughout the planning, design, construction, and commissioning of the client's data center project.
    Data Center Located in the Silicon Valley Area | Santa Clara, CA | 64 MW Shell with 16 MW Build-Out
    The client selected EYP MCF, Part of Ramboll to provide Staff Augmentation & Technical Support Services for its largest project in California. The project consisted of a Tier III data center with 64 MW of critical IT load. EYP MCF, Part of Ramboll provided subject matter engineering support, acting as a representative for this client and being responsible for ensuring the company's design standards and business objectives are respected and reflected in the projects. We provided Trusted Advisor Services throughout the planning, design, construction, and commissioning of the client's data center project. Our subject matter experts provided a peer review of the design and guidance during the basis of design, schematic design, design development, and construction documents phases.
    Services
    The following details our general services:
    • Attend meetings as outlined.
    • Review and comment on the design and deliverables to establish general acceptability of services content and general compliance with the contract documents.
    • Validate the compliance of the design against local code requirements and local conditions.
    • Validate the compliance of the project Basis of Design (BOD) and project design against the Owner Project Requirements (OPR) and review any deviations from the OPR with the client's engineering team.
    • Work with the Engineer of Record (EoR) and local construction teams to evaluate any cost optimization opportunities that are identified and ensure their implementation does not conflict with the OPR.
    • Review project RFIs and provide responses upon review of the response from the EoR.
    • Review strategic project submittals and provide responses upon review of the response from the EoR.
    • Support Sales and Solutions Engineering with customer RFP responses.
    • On-site and office engineering support as required.
    • Any additional SME service(s) related to Mechanical, Electrical, Plumbing, Fire Alarm, Fire Protection, and Fuel Oil upon request.
    White Papers & Blogs
    • The Case for Natural Gas Generators: Standby power generation considerations for reducing data center carbon emissions. By: Yigit Bulut, PE, ATD Partner, Lead Electrical Engineer, Chief Technology Officer at EYP MCF, Part of Ramboll.
    • West 7 Center: Using a Data Center Water Side Economizer on an existing facility to reduce water and energy usage. By: Gardson Githu, PE, Senior Mechanical Engineer and Consultant at EYP MCF, Part of Ramboll.
    • Sustainability in Data Center Lighting Design. By: Angelica K. Hermanto, PE, LC, LEED AP, Senior Electrical Engineer at EYP MCF, Part of Ramboll.
    Podcasts
    • Net-Zero Carbon Data Centers: Decentralizing Consensus Among Utility and Microgrid Power Supply. Special Guest: Matthew J. Karashik, Electrical Engineer at EYP MCF, Part of Ramboll.
    • Data Center 2030: Sustainability and Greenhouse Gas (GHG) Abatement. Special Guest: Yigit Bulut, PE, ATD Partner, Lead Electrical Engineer, Chief Technology Officer at EYP MCF, Part of Ramboll.

  • Data Center Design | EYP Mission Critical Facilities, Part of Ramboll | United States

    Create next-generation designs for both data centers and existing critical facilities, based on specific business objectives. Customize every project to meet specific technologies and data center objectives with a maintenance-friendly, scalable, and energy-efficient design.
    Service Overview
    Working side-by-side with IT and real estate teams, EYP MCF, Part of Ramboll can provide all engineering services from the conceptual data center facility design through all the phases that lead to stamped/sealed construction documents, suitable for bid or a CM-generated GMP. At the conclusion of design, the EYP MCF, Part of Ramboll team continues into the construction administration phase, keeping our designers and engineers engaged for oversight of design intent and value engineering options. This end-to-end approach ultimately assures our clients that the mission-critical project is built in accordance with the design.
    How can we help?
    EYP MCF, Part of Ramboll has designed over 70 million square feet of critically powered space around the world, including numerous Tier 3/4 and LEED Certified data centers. Additionally, specialized tools, software programs, and a database of industry-specific applications, along with the lessons learned from our commissioning practice, contribute to the expertise and innovation of our design team. EYP MCF, Part of Ramboll's history and focus on serving ONLY the mission critical industry demonstrate our leadership and the insight we bring to our clients' needs, whether retrofits, efficient designs, forensic evaluations, or other critical needs. Common to all our work is our ability to customize every project to meet specific technologies and data center objectives with a maintenance-friendly, scalable, and energy-efficient design.
    Key Differentiators
    • We provide everything from the conceptual physical data center design to schematic design, detailed design construction documents, and other tools. This ultimately helps you determine availability, reliability, and topology needs, as well as overall data center costs.
    • Our engineers have the experience of designing over 70 million sq. ft. of data center raised-floor space.
    • We lead the way in energy-efficient, cost-efficient, and performance-efficient data center design.
    • EYP MCF, Part of Ramboll is a leader in the development of LEED-certified data centers, and that skill can be applied to optimize any data center.
    • You will work with consultants who advise governments on data center energy policy to reduce costs, energy consumption, water usage, and greenhouse gas emissions.
    • Extensive worldwide data center project experience: we leverage our proven high-density cooling, critical power, and energy-reduction design strategies for each new business case.
    • Our proposed resources are engineering professionals with significant experience completing complex projects, including data center designs with over 100 megawatts (MW) of power capacity and over 700,000 square feet of raised floor.
    • Experience in design using liquid cooling technologies that support extreme high-density servers in a High-Performance Computing (HPC) or supercomputer environment.
    • Design capabilities for renewable energy and off-grid solutions, including hydroelectric power sources.
    • Extensive experience across market sectors, using Computational Fluid Dynamics (CFD) analysis of the raised floor area to predict and optimize design calculations of air volume and setpoint temperatures, enabling savings in capital investment and system operating costs.
    Case Studies
    Confidential Colocation | Carrollton, TX | 700,000 SF | 60 MW
    This colocation provider wanted a "massively modular" design for its new data centers. It was super-sizing its approach to colocation facilities with the development of a 700,000 sq. ft. building in the Dallas market, seeking to position modularity as an approach to a phased deployment of space using highly standardized elements. EYP MCF, Part of Ramboll was selected as the design and commissioning partner who could work through the engineering and design issues associated with this new concept. EYP MCF, Part of Ramboll developed the standard MEP designs, as well as the commissioning plan and test scripts to be used for future projects based on this data center design. In addition, EYP MCF performed multi-level commissioning for additional projects from this client.
    Confidential Hyperscale | Quincy, Washington - 2 Sites | Tier III | 460,000 SF
    EYP MCF, Part of Ramboll provided MEP design services for the Quincy data centers. Built on 75+ acres, these facilities were the first of three 460,000 gross sq. ft. buildings on the site. They were built out in two phases, each 230,000 gross sq. ft. with 60,000 sq. ft. of raised floor.
    Citigroup | Roanoke, TX | Tier IV | 243,000 SF
    EYP MCF, Part of Ramboll was first selected by this client as the design engineer for a greenfield data center in Roanoke, TX. The facility was a ground-up, single-story, 243,000 sq. ft. Tier IV facility with 100,000 sq. ft. of raised-floor computer environment. The commissioning for this project included factory witness testing and developing scripts for startup, commissioning, and integrated system testing. The commissioning team provided oversight of construction and startup and performed functional and integrated system testing. Testing progressed from components of the systems and sub-systems to the systems that made up the electrical and mechanical infrastructure for these projects. Testing demonstrated to the owner that the data center operated as designed.
    Orange Telecom | Normandy, France | 80,000 SF | 20 MW of IT
    EYP MCF, Part of Ramboll was selected to design and provide commissioning services (MEP, FP, BMS, CCTV) for multiple data centers, including 2 campuses of 2 buildings each. Each building supports 10 MW of IT and hosts 4 data halls of 10,000 SF at 250 W/SF. The project was designed as a modern data center standard for future construction: 1 building to be cloned 4 times, using the latest technologies including direct air free cooling, achieving 30% energy savings versus a standard design, with the flexibility to adapt to the technologies of the next 10-20 years.
    Spectrum Health | Grand Rapids, MI | 9,000 SF
    EYP MCF, Part of Ramboll was selected to assist Spectrum Health in improving its patient care and outcomes through higher reliability and service availability. The Flexible Data Center is a completely modular approach to designing and building data centers that helps keep the continuously growing business as healthy as the patients it serves. The project includes a 25% reduction of OPEX, forecasted savings of over $10M over 10 years, and a 27% reduction in power usage.
    Confidential Hyperscale
    EYP MCF, Part of Ramboll performed startup and commissioning of approximately 100 units
    (400 sq. ft. at 800-1000 W per sq. ft.). The prefabricated and prepopulated containerized solutions were modular and included a mix of adiabatic and chilled water cooling. The electrical solution included a mix of central UPS and in-rack critical power battery backup. EYP MCF, Part of Ramboll installed 45 units (30 EcoPODs and 15 water-cooled 40' PODs) in six months to support the deployment of 130,000 servers in the largest of four similar waves of deployment. The EYP MCF, Part of Ramboll solution consisted of design, project management oversight of sub-contractors, and commissioning. The deployment met or exceeded very aggressive installation schedules to recover losses to the overall timeline after site construction delays, allowing the client to meet its financial obligations.
    Confidential Colocation | Ashburn, VA | 720,000 SF | 142 MW
    EYP MCF, Part of Ramboll provided Trusted Advisor Services throughout the planning, design, construction, and commissioning of the client's data center project. Our subject matter experts provided a peer review of the design and guidance during the basis of design, schematic design, design development, and construction documents phases. The project included overall data center program master schedule/time assistance with RFP development and/or the contractor selection processes. This opportunity led EYP MCF, Part of Ramboll to provide similar services at the client's future data center projects.
    Confidential Hyperscale | Maiden, NC | Tier IV - LEED Platinum | 509,000 SF
    EYP MCF, Part of Ramboll was selected to assist a major technology company in the design of a new data center located in the eastern United States. The project encompasses a total of 509,000 square feet with 120,000 square feet of data halls, and included a 200-acre array of photovoltaic solar panels that serves as a supplement to the utility power feed. The building is comprised of a precast structure and shell. The facility, which includes data center, support, and administrative spaces, was to be constructed in phases for growth.
    BBVA | Madrid, Spain | Tier IV | 200,000 SF
    A new, highly available Tier IV data center, certified by the Uptime Institute for Design (a first in Spain) and Construction (a first in Europe). BBVA's Tres Cantos facility is the first DPC in Europe and the fourth anywhere in the world to receive this double certification, which the Uptime Institute assigns to data processing centers that feature optimal levels of reliability and safety. The Group's targets for expansion in coming decades and BBVA's firm commitment to information technologies were the two main drivers convincing the bank of the need to build a new DPC in Tres Cantos, adjacent to the lender's original facility. BBVA's new DPC, which spans 20,000 square meters, doubles and has room to triple the density of equipment the previous facility was able to house. Indeed, the new DPC hosts equipment consisting of up to 10,000 high-end processors, vs. 4,800 at the old center.
    Confidential Technology Company | Colorado Springs, CO | Tier III - LEED GOLD | 100,000 SF | 16 MW of IT
    EYP MCF, Part of Ramboll was selected to design and commission two Tier III data centers. The mechanical cooling design utilized indirect evaporative cooling units on the roof for the technology building and a VRF (variable refrigerant flow) system for the administrative areas. Electrically, the design had rotary-style UPS units along with outdoor generators. The team was very interactive with design and heavily engaged in construction administration.
    BIM360 was used as the commissioning software tool. Commissioning services included factory witness testing, developing scripts for startup, and integrated system testing.
    Confidential Broadcasting Company | Phoenix, AZ | Tier III - LEED GOLD | 113,000 SF | 6 MW
    EYP MCF, Part of Ramboll provided MEP engineering services for a new state-of-the-art digital media center providing content for streaming, live television, and pre/post-production as well as local television affiliates. Total facility load will be 6 MW with a concurrently maintainable MEP infrastructure.
    National Institutes of Health (NIH)
    A portion of an existing office building (approx. 5,400 SF) was repurposed into a new data center facility, the Consolidated Computational Research Facility (CCRF). Work included extensive coordination with mechanical, electrical, plumbing, fire protection, landscaping, vibration, and technology consultants. Mechanical systems work includes construction of a new bidirectional fault-tolerant chilled water distribution system teamed with in-row type chilled-water cooling units situated on the raised data floor. New heat exchangers, redundant chilled water pumps, chilled water storage tanks and controls, along with a local low-ambient emergency air-cooled chiller, will be installed to support the data center cooling needs. Electrical systems include new interior and exterior distribution gear, automatic transfer switches, uninterruptible power supplies, transformers, power transformers, and remote power supplies. Special systems include fire alarm systems, access control, video surveillance, and paging.
    Services
    • Engineering Infrastructure Design (MEP & Fire Protection): Comprehensive services encompassing electrical, mechanical, fire protection, fuel, control, and security systems for standalone data center buildings.
    • MEP Retrofit and Refresh: New server rooms within existing buildings, and upgrade/expansion of existing facilities.
    • Peer Review: Extensive quality review of MEP designs.
    • Facility Programming, Design and Cost Modeling: Translates IT, space, power, and cooling requirements into a conceptual-level basis of design and develops a construction cost estimate and cost/benefit strategies for infrastructure investments.
    • Space Planning: Analyzes and creates a complete technology space plan for critical infrastructure.
    • Optimal Site Selection: Based on EYP MCF, Part of Ramboll innovation and industry-standard selection criteria, custom-tailored to customer requirements.
    • Asset Due Diligence: Leasebacks, acquisitions.
    • Energy Efficiency Design: Reduces energy costs and carbon footprint. We are industry leaders in designing energy-efficient, low-PUE, and LEED Certified facilities.
    • DCIM Design: Comprehensive services encompassing Data Center Infrastructure Management systems.
    • Controls & Cybersecurity (BMS/EMCS/SCADA): Controls specialty services available during data center planning to provide deeper discussion of cybersecurity and how the control systems will be designed to mitigate risk.
    • Outside Plant (OSP): Design of a resilient telecommunications infrastructure to support next-generation communications networks.
    • Structured Cable Systems (SCS): Technology infrastructure for cable distribution methods and cable management to meet the latest communications transport protocols.
    • Telecommunications Systems: Voice systems, corporate wireless LAN, and enhanced in-building cellular networks.
    • Power over Ethernet (PoE): Enables electrical power to be transmitted over standard Ethernet network cables.
    Webinars
    • Operate your data center effectively
    • The Future of UPS Systems
    • Understanding the economic advantages of a hybrid cloud environment
    Podcasts
    • Technology Strategy. Special Guest: Kevin Sanders, Managing Principal, Data Center Consulting & Strategy, EYP Mission Critical Facilities, Part of Ramboll

  • Diversity and Inclusion | EYP MCF

    Diversity and Inclusion
    EYP Mission Critical Facilities, Part of Ramboll is committed to providing equal access and meaningful opportunities to all employees. Our diversity efforts are designed to maximize inclusion in both our employee and subcontracting communities. We strive to hire, matriculate, and mentor women, minorities, veterans, people with disabilities, members of the LGBTQIA+ community, and other diversely designated people into our workforce. Additionally, we support, subcontract, and likewise mentor businesses that are majority-owned by these same designated constituencies. A diverse and inclusive workplace is a strong workplace. Our unique experiences, backgrounds, and perspectives inspire a culture of innovation and empower our ability to achieve success. As companies seek to reflect their employees, their communities, and their customers, they must create a working environment that not only accommodates but celebrates the rich diversity of all constituents. At EYP MCF, Part of Ramboll, we consistently seek to build our own workforce to represent that constituency and are incorporating initiatives to support diversity across all our service lines. In addition, we encourage our clients and select suppliers and partners who reflect this commitment to diversity and inclusion in the way they develop, manage, and use the facilities that we plan, design, and assure for them. Diversity brings many beneficial attributes to our firm, building vital partnerships, growing our vendor and sub-consultant talent, and providing a competitive advantage in securing and retaining new business.

  • Towards more sustainable data center design using a CHP case study

    White Paper: Towards more sustainable data center design using a CHP case study
    White Paper 3 | December 2021
    By: Gardson Githu, PE, EYP Mission Critical Facilities, Part of Ramboll (EYP MCF, Part of Ramboll)
    Executive Summary
    The primary focus of this white paper is to provide an example study that employs an on-site power and heating system to deliver GHG abatement and emissions reduction. The objective is to demonstrate how the data center carbon footprint and GHG emissions can be reduced by situating a data center next to a location where there is an immediate need for campus cooling and heating. The need is met by onsite power production, achieved through the provision of a microgrid with utility power back-up, together with power efficiency improvement, achieved through heat harvesting and re-use. Utilizing CHP would result in GHG emission reductions of approximately 102,529 tons of carbon dioxide, 112.17 tons of sulfur dioxide, and 49.74 tons of nitrogen oxides per annum. Heat recovery from energy production will also lead to an improved power usage effectiveness (PUE) of approximately 1.1.
    Contents
    1. Overview
    2. Combined Power and Heating Systems for Data Centers
    3. Heat Recovery and Re-use
    4. Co-Generation Case Study
    5. Data Center Power Usage Effectiveness (PUE)
    6. Economics
    7. Conclusion
    1. Overview
    The use of combined heat and power (CHP) systems is a serious and practical consideration for greenhouse gas (GHG) reduction in the data center sector, taking its place amongst other initiatives which include a practical roadmap of progressive GHG abatement for existing data centers and net-zero GHG new data centers by 2030. The use of CHP systems can provide energy efficiency and reliability improvements as well as economic and environmental benefits. While natural gas is not a sustainable resource, its use to fuel gas turbines for a CHP system providing both power and cooling for a 10MW data center could potentially result in a reduction of approximately 50% in GHG emissions compared to the use of a fuel-oil fired system and utility power.
    2. Combined Power and Heating Systems for Data Centers
    Though data center loads have always presented an attractive profile for the use of CHP plants, few have been utilized. The reasons for the data center industry's reluctance to install combined heat and power plants include the emphasis on short payback periods and a lack of awareness of CHP systems. Typical CHP system payback periods are in the region of 6+ years, whereas the desired payback period in the data center industry is often 3-5 years. With this perspective, CHP plants are usually not considered for further analysis. Unpredictable IT loads and a tendency to under-utilize power systems have also contributed to the uncertainty. Facility operators are often left second-guessing the future and are therefore unable to adopt an informed approach to energy usage. Data center designers and operators are also often faced with the challenge of low day-one data center demand, which contributes to making CHP systems less attractive. The current and future demand for data center space, power, and cooling presents the industry with new challenges, including the need for increased power generation to meet this growing demand and for sustainable design with a smaller carbon footprint. Onsite microgrid power generation is likely to be a significant contributor to GHG abatement in the near future.
    Such generation will include renewable sources such as solar, wind power, and fuel cell technologies. Utility power will typically be provided as a back-up as well as a source of power to support and energize systems in case a cold start for onsite generation is needed. The ability to connect to the electric utility grid could also allow the CHP plant to provide grid support services back to the utility, which can further improve the return on investment of the project. Careful consideration of plant capacity versus data center demand, combined with energy storage systems, could allow the site to provide a range of grid support benefits to the electrical utility, from regulation, spinning reserve, and voltage support to transmission, distribution, and generation upgrade deferral. In developing countries, smart city and technology center designs ought to be made sustainable through methods and means of utilizing all available energy at the site. In some countries, onsite generation is sometimes the only viable solution for reliable energy, but it can also present a lower GHG profile than incumbent energy from the grid. A CHP plant utilizing a cleaner primary source of energy, such as natural gas, versus the oil or coal burning generation plants typical of national grid systems in many countries, will provide a means for reducing GHG. Successful implementation of any CHP plant requires a careful analysis of the energy needs of the site. Fundamentally, a load that can utilize the waste heat from the first-pass energy conversion process is needed to achieve the conversion efficiency required to make most projects financially viable.
    3. Heat Recovery and Re-use
    Heat harvesting can be broken into two categories for data center application: high-grade heat recovery (such as heat from a CHP plant) and low-grade heat recovery (such as from data center process and liquid cooling). Low-grade heat recovery can be conveniently and suitably applied to offset domestic water heating demand by pre-heating the water before the boiler or heating element. Other applications that utilize low-grade heat recovery include pre-heating the air intake of air handling units for administrative buildings, external frost protection, use in conjunction with heat pumps to provide higher-grade heat, and free cooling for the data center system. Data center operations should consider using a data center process cooling system with processing fluid return temperatures at 98 degrees Fahrenheit (36.7°C) to raise domestic water temperature from 55 degrees Fahrenheit (12.8°C) to 90 degrees Fahrenheit (32.2°C). This can result in significant energy savings for a campus application which requires a constant flow of domestic hot water.
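    To make the domestic-water example concrete, here is a minimal sketch of the recoverable heat. The 55°F inlet and 90°F outlet temperatures come from the text above; the 50 GPM flow rate is a hypothetical value chosen purely for illustration.
```python
# Low-grade heat recovery sketch: pre-heating domestic water with data
# center process-cooling return fluid. Q = m_dot * c_p * dT.
# Assumption: a 50 GPM domestic water flow (illustrative, not from the paper).

GPM = 50                    # assumed domestic water flow, gallons per minute
LB_PER_GAL = 8.34           # weight of water, lb per gallon
CP_WATER = 1.0              # Btu/(lb*F), specific heat of water

t_in, t_out = 55.0, 90.0    # deg F, temperatures from the white paper
m_dot = GPM * LB_PER_GAL * 60                   # mass flow, lb/hr
q_btu_hr = m_dot * CP_WATER * (t_out - t_in)    # heat recovered, Btu/hr
q_kw = q_btu_hr / 3412.14                       # Btu/hr -> kW

print(f"Heat recovered: {q_btu_hr:,.0f} Btu/hr (~{q_kw:.0f} kW)")
# ~875,700 Btu/hr (~257 kW) of boiler load offset at the assumed 50 GPM.
```
    At a campus with constant domestic hot water demand, that offset accrues around the clock, which is why the paper singles this application out.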
    4. Co-generation Case Study
    The efficiency of coal or fuel-oil based power plants is usually less than 35%. Typical natural gas generating plants have an efficiency from around 35% up to 47%. The rest of the energy from the power generation process is rejected into the atmosphere during the power production phase. However, up to 65% of this waste energy can be recovered and utilized, leading to an improved overall efficiency of the power production process. An additional benefit of locating power generation at the point of consumption is the reduction in losses associated with the transmission of power.
    This paper presents a proposal for an improved power production process for data centers, utilizing heat recovery which can then be used for other applications such as heating water to high-grade level, not limited to just low-grade systems. The design involves the use of CHP at the end-user site, minimizing power transmission losses and capturing the heat from the exhaust of a gas turbine, thereby improving the overall efficiency of the power production process. Natural gas is used as the primary fuel source for the turbine. The installation of the heat recovery system boosts the efficiency of the plant from 35% to 80%.
    Figure 1 - Schematic flow diagram of a modified and improved power production plant.
    The following examples are based on an assumed IT power capacity of 10MW (with an overall electrical capacity of 11.48MW) and an associated cooling demand of 3000 Tons (10.5 MW). A typical installation would include three turbine engines in an N+1 redundant configuration. All mechanical cooling equipment is also in an N+1 configuration. As indicated in Figure 1, the exhaust gas temperatures range between 650 degrees Fahrenheit (343°C) and 1000 degrees Fahrenheit (537°C). Exhaust gases are diverted through a heat exchanger to produce steam, which can be used in an absorption chiller to produce chilled water. To control the pressure and the quality of the steam, a boiler is provided downstream of the heat exchanger. The boiler will only be used if the steam demand is greater than that being produced by the heat exchanger. In this design the estimated electric demand is 10MW; at full load operation, using data published by the Solar Turbines company, two 5MW gas turbines have a cumulative exhaust gas flow rate of approximately 150,000 lb/hr, sufficient to produce over 7000 Tons of cooling (24.6 MW). Steam output can be estimated as:
    Q_steam = m_exhaust × c_p,exhaust × (T_exhaust,turbine − T_exhaust,heat exchanger)
    where the exhaust mass flow rate, m_exhaust, and turbine exhaust temperature, T_exhaust,turbine, are typically given as turbine performance specifications. The specific heat of natural gas exhaust, c_p,exhaust, is approximately 0.26 Btu/lb-F. The published gas turbine specification referenced here is for the Solar Turbines model Taurus 60.
    Figure 2 - Example of the Intermittent and Uncertain Characteristics of VRE. Source: Ela and others (2013).
    The estimated 10MW computer equipment load will require 3000 Tons of heat rejection. Traditional cooling systems frequently include a centrifugal chiller and cooling towers to reject the heat utilizing the reversed Carnot cycle process. Typical cooling systems utilizing a water-cooled chilled water plant with centrifugal compressors, cooling towers, and pumps have an energy usage range of 0.8 to 1.0 kW per Ton of cooling. Using 1.0 kW per Ton, the centrifugal chiller plant energy usage is approximately 3MW, leading to a site total energy usage of 13MW (IT load plus mechanical load). Using absorption chillers, the additional 3 MW for the cooling system is removed from the proposed design, thereby reducing demand and therefore the overall energy consumption of the facility. Installing the co-generation plant at the site will provide all required power, minimize power transmission losses, provide cooling and heating for both the data center and the campus and, depending on the energy fuel mix, reduce the carbon footprint of the site. Also, depending on the reliability of the grid, power reliability can be improved by onsite generation with utility power backup.
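    As a rough worked application of the steam-output formula above, the sketch below plugs in the quoted exhaust flow rate and specific heat. Both temperatures are assumed points chosen for illustration within the stated 650-1000°F exhaust range; they are not published turbine design data.
```python
# Applying the paper's steam-output formula:
#   Q_steam = m_exhaust * cp_exhaust * (T_exhaust_turbine - T_exhaust_hx)
# The 150,000 lb/hr flow and 0.26 Btu/(lb*F) specific heat come from the
# paper; both temperatures below are assumed for illustration only.

M_EXHAUST_LB_HR = 150_000      # lb/hr, two 5 MW turbines (from the paper)
CP_EXHAUST = 0.26              # Btu/(lb*F), natural gas exhaust (from the paper)
T_TURBINE_F = 950.0            # deg F, assumed turbine exhaust temperature
T_HX_LEAVING_F = 350.0         # deg F, assumed heat-exchanger leaving temperature

q_steam_btu_hr = M_EXHAUST_LB_HR * CP_EXHAUST * (T_TURBINE_F - T_HX_LEAVING_F)
q_steam_mw = q_steam_btu_hr / 3_412_142        # Btu/hr per MW thermal

print(f"Recoverable sensible heat: {q_steam_btu_hr/1e6:.1f} MMBtu/hr "
      f"(~{q_steam_mw:.1f} MW thermal)")       # ~23.4 MMBtu/hr, ~6.9 MW
```
    Under these assumed temperatures the single-pass sensible-heat recovery is on the order of a few MW thermal; the paper's larger cooling figure additionally reflects turbine performance data and absorption-chiller characteristics not reproduced here.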
    Figure 2 provides a summarized analysis using a theoretical calculator tool provided by the EPA Combined Heat and Power Partnership.
    Figure 2 - Overall CHP thermal performance.
    Based on the above example, the co-generation plant's projected displaced electricity production is 100,565 MWh/year, which includes the data center electric power demand, the cooling load demand, and reduced power transmission losses, as summarized in Figure 3.
    Figure 3 - Displaced electricity profile.
    The estimated energy savings for this model is 29%, equivalent to the annual energy consumption of 7,664 cars or 3,759 homes, as shown in Figure 4.
    Figure 4 - Projected energy savings and equivalent energy consumption.
    Figure 5 provides a comparison schematic diagram of conventional power flow from a coal power plant versus onsite power generation with heat recovery to meet the cooling and heating demand load.
    Figure 5 - Overall benefits of a co-generation power and cooling plant.
    In this example, the design reduces the carbon footprint of an 11.48MW connected load data center by 50%, and fuel consumption is reduced by 553,431 MMBtu. The reduced carbon emissions are equivalent to the total annual greenhouse gas emissions generated by 20,258 cars or 10,818 homes. See Figure 6 below.
    Figure 6 - Emissions reduction summary table.
    5. Data Center Power Usage Effectiveness (PUE)
    Most of the power consumed by data center cooling systems is used by the mechanical refrigeration plant's compressors. In a centrifugal chiller plant like the one analyzed, the chiller power usage is approximately 0.546 kW/Ton. A co-generation plant will also lead to a lower data center PUE, since the chiller compressor power loads are removed from the PUE calculation. In this case study there is a reduction of 1638 kW of chiller compressor power for a 10MW data center with a total cooling demand of 3000 Tons. This leads to a reduced PUE of approximately 1.05.
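    The PUE arithmetic in section 5 can be laid out in a few lines. The IT load, cooling demand, and both kW/Ton intensities below are the paper's quoted figures; the simple compressor-displacement ratio shown here lands above the paper's ~1.05 estimate, which additionally credits heat-recovery offsets beyond the compressor displacement.
```python
# PUE arithmetic sketch using the case-study figures. All inputs are
# from the paper; the final ratio ignores further heat-recovery credits,
# so it brackets rather than reproduces the paper's ~1.05 figure.

IT_KW = 10_000                 # 10 MW IT load
COOLING_TONS = 3_000           # total cooling demand
PLANT_KW_PER_TON = 1.0         # conventional water-cooled plant intensity
COMPRESSOR_KW_PER_TON = 0.546  # centrifugal chiller compressor share

plant_kw = PLANT_KW_PER_TON * COOLING_TONS            # 3000 kW -> 13 MW site
compressor_kw = COMPRESSOR_KW_PER_TON * COOLING_TONS  # 1638 kW displaced

# PUE = total facility power / IT power
pue_conventional = (IT_KW + plant_kw) / IT_KW                  # = 1.30
pue_with_chp = (IT_KW + plant_kw - compressor_kw) / IT_KW      # = 1.14

print(f"Conventional site power: {(IT_KW + plant_kw)/1000:.0f} MW, "
      f"PUE {pue_conventional:.2f}")
print(f"Compressor power displaced by absorption cooling: {compressor_kw:.0f} kW")
print(f"PUE with CHP (compressor displacement only): {pue_with_chp:.2f}")
```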
    6. Economics
    Since the (US) natural gas industry was deregulated by the government, gas prices had, until recently, been falling. According to the Edison Power Company, deregulation of electrical utilities may lead to slightly decreased electricity prices, making this onsite power generation a workable energy saving solution. The reluctance of private companies to adopt this type of solution is partly driven by unstable natural gas prices together with the initial capital outlay for the system. Government intervention and rebate programs will be required to encourage the participation of private companies. However, for government data centers, such as those in the projected modernization of the health care industry, this energy saving solution is a potential candidate. In Europe, 20% of total electricity generation comes from wind and solar, double the world average share (9%). Although this is an impressive number, the region is still dependent on natural gas (24%) to reduce the consumption of coal and to meet increased energy demand and storage needs, especially during the cold winter months.
    7. Conclusion
    As the research continues for alternative energy sources, the efficiency of existing power plants cannot be ignored. Improving existing power plants and designing systems with heat recovery will not only result in energy savings but can also provide a reduction in carbon footprint and reduced reliance on the utility supply.
    References
    • Tester, J. W., Drake, E. M., Driscoll, M. J., Golay, M. W., and Peters, W. A. (2005). Sustainable Energy: Choosing Among Options.
    • Take Action for the Sustainable Development Goals - United Nations Sustainable Development.
    • Stoecker, W. F. (1989). Design of Thermal Systems, Third Edition.
    • CHP Energy and Emissions Savings Calculator | US EPA.
    • Gas Turbines - Products | Solar Turbines.
    • Design Brief: Chiller Efficiency (lbl.gov).
    • https://ember-climate.org/wp-content/uploads/2021/03/Global-Electricity-Review-2021-EU.pdf
    About the Author
    Gardson Githu, PE is a Senior Mechanical Engineer and Consultant at EYP Mission Critical Facilities, Part of Ramboll. Gardson's experience focuses on the design and analysis of HVAC systems for commercial, industrial, and data center infrastructure facilities. His experience includes new facilities design, retrofit design, and mechanical systems analysis. His project experience includes chilled water plants, thermal storage systems, fuel oil systems, and air handling systems. Gardson specializes in mechanical system energy optimization, data center site risk assessment, and data center thermal mapping (computational fluid dynamics analysis). He holds a Bachelor of Science degree in mechanical engineering from California State University, Los Angeles, and a Master of Science degree in mechanical engineering, with a thermo-fluids option, from California State University, Northridge. He is a team member of the recently launched EYP Mission Critical Facilities, Part of Ramboll and i3 Solutions Group Sustainability Initiative to offer a practical roadmap towards a carbon net-zero data center by 2030.
    Contact Us
    For further information about the EYP MCF, Part of Ramboll and i3 GHG Abatement Group, please email David Eisenband at deisenband@eypmcfinc.com or Kerry Neville at kerry.neville@i3.solutions

  • Achieving Zero Carbon Data Centers | EYP MCF | USA

    White Paper: Reaching for Net-Zero: Achieving Zero Carbon Data Centers by Decentralizing Consensus of Power Supply Amongst Utility and Microgrid Providers
    White Paper 1 | July 2021
    By: Matthew Karashik, EIT, EYP Mission Critical Facilities, Part of Ramboll (EYP MCF, Part of Ramboll)
    Abstract
    This paper is the first in a series of UN2030 Sustainable Development Goals initiative thought experiments focused on using blockchain and decentralized consensus algorithms to overcome logistical barriers to a true zero carbon emissions data center and microgrid environment. The scope of this paper is the optimization of energy dispatch and supply from distributed resources down to a data center or microgrid-supported campus. Future papers will build upon the arguments laid forth and culminate in a fully integrated blockchain consortium beginning at the manufacturing of parts for all entities in the network and reaching all the way down to the individual tenants in a co-location data center performing tenant-to-tenant server subletting during demand response events for the data center or macrogrid (i.e., a traditional wide area synchronous grid or, colloquially, an electrical utility grid).
    (Keywords: game theory, good actors, bad/malevolent actors, UN2030, centralized consensus, decentralized consensus, proof of work (POW), proof of stake (POS), private-by-design (PbD), microgrid, macrogrid, blockchain, byzantine fault tolerance, embodied carbon, operating carbon, traveling carbon, zero greenhouse gas (ZGHG), distributed energy, block mining, data structures, peer to peer (P2P), server to server (S2S), client to client (C2C), internet of things (IOT), demand response (DR), tolerance for loss)
    Contents
    1. Introduction - Why do we need standby generators?
    2. Decentralized Consensus and Byzantine Fault Tolerance
    3. Distributed Energy Resources for Microgrids as a Database and Automated Transaction Network
    4. Determining Loss-Tolerance, Threshold for Trust, and Proof of Stake
    5. Optimizing the Supply of Zero-Greenhouse-Gas Energy via Decentralized Networks
    6. Automation of Components in a Microgrid
    7. Future Papers
    8. Further Reading
    9. Works Cited
    1. Introduction
    While some understanding of the keywords mentioned in the Abstract is assumed and required for this paper, the concepts of decentralized consensus, byzantine fault tolerance, distributed energy resources forming a database, loss-tolerance and trust, proof of stake, and embodied carbon will be described at an introductory level. The objective of this white paper is to conceptually portray an application in which blockchain removes barriers to the success of a zero carbon emissions operating and embodied environment. This is achieved via a thought experiment in which four distributed energy suppliers need to cooperate to deliver synchronized or grid-forming power to a microgrid-supported data center and accurately respond to events of increased demand to maximize their own revenues. They cannot trust each other as competitors, but they must find some way to communicate information between one another to form a grid and optimize their economic dispatch, because the utility brings unnecessary overhead and the data center campus lacks the resources to continually log and verify upstream information on its own.
    2. Decentralized Consensus and Byzantine Fault Tolerance
    Before discussing decentralized consensus, or even blockchain, the idea of centralized consensus needs to be understood as the "conventional wisdom" for problem-solving.
    Decision making and problem solving are easy when all players in a game are on the same team, or when multiple businesses trust a banker to accurately record and maintain a ledger of transactions between one another. This trust-filled, naturally collaborative network of individuals forms a centralized consensus: a decision making and record keeping model in which all parties trust each other or trust the same person and have no reservations about exchanging information (Krawiec-Thayer). This is a highly effective method for getting work done with colleagues or for paying bills, but it is not a practical way to get competitors in a single industry to work together or exchange data. That is where decentralized consensus algorithms come into play. Decentralized consensus, as Dr. Mitchell P. Krawiec-Thayer writes in the editorial blog post "What's the big deal about Decentralized Consensus", "is the ability for many parties to safely store and share information, without having to rely on a central authority or trust any other participants in the network". Paying close attention to the ability for "many parties" to share and store information in a common database, both safely and without trusting each other or a central authority, it becomes immediately apparent why any technology that can achieve decentralization of decision making is vital to the success of modernized microgrid and data center technology. Dr. Krawiec-Thayer states the following: Any effective decentralized consensus system must solve a fundamental challenge: how can a system arrive at universal agreement under adversarial conditions where messages may be unknowingly lost and participants may behave dishonestly for their own gain? As Krawiec-Thayer later mentions, this problem was concisely posed to the greater technological community almost 40 years ago, in what is often referred to as the Byzantine Generals' Problem: Imagine a group of generals of the Byzantine army camped with troops around an enemy city. Communicating only by messenger, the generals must agree upon a common battle plan -- whether to attack or retreat. Either way, they must arrive at agreement and act in unison, since an attack with only a portion of the troops would be disastrous. However, one or more of the generals may be traitors who will try to confuse the others. The problem is to find an algorithm to ensure that the loyal generals will reach agreement. (Lamport, Shostak, and Pease; 1982) Considering this context, a blockchain is a software and system policy implementation of valid solutions to the Byzantine Generals Problem. Blockchain, as a decentralized problem-solving system, creates an opportunity to empower mission critical facilities. Consider how data centers fall under the need for decentralized decision making. Data center facility operations teams and sub-groups barely trust each other. They tend to act more like tribes than a single unit, even though they roughly share a common goal. Decision making often requires lengthy chains of approval at various stakeholder levels for even simple changes to electrical equipment. Trust between any power facility and the electrical utility is illusory at best and antagonistic at worst. There is a high level of added overhead and delay due to this slow-moving and poorly fitting centralized consensus that is forced onto data center design and operation. On top of these logistical inefficiencies, consider that there are a high number of nodes measuring power and computational information in a data center.
Server cabinets or racks all need some method of periodically or continually checking load against capacity and upstream equipment ratings. The service transformer to a data center, conventionally, must be metered by a central utility. Multiple telecom and monitoring systems are often required to implement building fire protection, alarm, lighting, and emergency power response control policies. If only there were some way to remove the need for a data center to oversee all of this information, especially at such a great financial cost to the facility owners and shareholding entities.

Instead of forcing conventional decision making that barely functions in microgrid and data center environments, it is possible to utilize a so-called "decentralized consensus algorithm" that removes dependence on a central authority and encourages success amongst the many parties involved in these larger operations efforts. Literature dating as far back as 1982 lays the foundation for arguing that a zero-greenhouse-gas (ZGHG) future requires shifting the current paradigm of central entities (see the Byzantine Generals' Problem). Instead of forcing the data center facility, microgrid operator, or macrogrid power utility to manage and verify data, designers of truly renewable and sustainable data center infrastructure must create new systems that decentralize. These systems require no trust amongst players but witness trust as an emergent property of continued success, and they will be built upon in this paper for renewable internet-of-things (IoT) applications (such as zero-net-carbon peer-to-peer clouds, smart grid metering, electric vehicles, etcetera).

Relating the ideas of Byzantine faults, blockchain, and decentralized networking back to mission critical and renewable applications, the July 1982 paper "The Byzantine Generals Problem" (cited by approximately 7,377 scholarly articles, according to Google Scholar), coauthored by Leslie Lamport, Robert Shostak, and Marshall Pease in the fourth volume of the ACM journal Transactions on Programming Languages and Systems, directly ties mission critical applications to decentralized problem solving in terms of "reliable systems". When attempting to implement reliable computer systems, the only alternative to using intrinsically reliable device components is to use redundant computers, systems, or facilities and cross-reference their results (via internal or external voting) into a single output. As the three authors explain:

"This is true whether one is implementing a reliable computer using redundant circuitry to protect against the failure of individual chips, or a ballistic missile defense system using redundant computing sites to protect against the destruction of individual sites by a nuclear attack. The only difference is in the size of the replicated "processor"." (Lamport, Shostak, Pease 398)

As Lamport, Shostak, and Pease go on to explain, there are some flaws in this reasoning; however, the basic premise of critical systems requiring reliable outputs remains valid. After discussing the parameters of a reliable voting solution, and the issues with circumventing material or voting considerations via hardware, Lamport, Shostak, and Pease arrive at a significant realization for mission critical systems: "redundant inputs cannot achieve reliability; it is still necessary to ensure that the nonfaulty processors use the redundant data to produce the same output".
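To make that voting requirement concrete, consider the minimal sketch below (a hypothetical three-controller example; the function name and breaker commands are illustrative assumptions, not drawn from Lamport, Shostak, and Pease):

```python
from collections import Counter

def majority_vote(outputs):
    """Return the value reported by a strict majority of redundant
    processors, or None when no majority exists (an unresolved fault)."""
    if not outputs:
        return None
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) // 2 else None

# Three redundant controllers compute a breaker command from the SAME data;
# one faulty unit disagrees, yet the voted output remains reliable.
print(majority_vote(["close", "close", "trip"]))  # -> "close"

# If each controller received DIFFERENT data, voting no longer helps:
print(majority_vote(["close", "trip", "hold"]))   # -> None (no majority)
```

The second call is exactly the failure mode the authors warn about: redundancy without agreement on the input data produces no reliable output.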
From the perspective of ZGHG microgrid-supported power distribution, there is a major flaw in the paradigms of mission critical design: redundant systems cannot achieve true reliability without using the same redundant data (i.e., power, information) to produce the same output (Lamport, Shostak, Pease; 1982, 387). This sounds paradoxical from the perspective of mechanical, electrical, and plumbing (MEP) design for redundancy, and it may require further scrutiny to determine the truth of such a statement in the realm of power distribution design; however, Lamport, Shostak, and Pease might argue that the electrical equipment adjacent to MEP design in mission critical facilities must achieve these requirements (redundant nonfaulty processing systems using the same sets of redundant data to produce the same output at any order or tier of redundancy) to be considered truly reliable. This is not to decry formal requirements for redundancy, like the Uptime Institute's tier system, by any means. The two key takeaways are that mission critical design has been a consideration of decentralized consensus networks since the field's founding thesis in 1982, and that any system with a need for reliability must consider redundant computation, redundant data, and reliable parts. As data centers, being mission critical, intrinsically have a high need for reliability, they are no exception to this claim.

Moving on to the main subject of this paper, achieving optimal dispatch of power to a microgrid via decentralized power, consider the various interests in a microgrid:

Authorities having jurisdiction
Macro-grid owning utilities
Micro-grid operating data center campuses
Distributed energy supplying and storing entities

All of the aforementioned parties, with the exception of the campus being served, are in fierce competition for survival. The question of how to operate a successful zero-carbon-impact data center then shifts to forming an effective system for decentralized consensus. This system, method, treaty, and algorithm must somehow rope multiple parties who do not trust each other into a common database that removes all red tape and is highly resistant to bad actors. In the case of economic dispatch amongst competing renewable energy providers, the competitors, the data center, and all operating staff in between are analogous to the Byzantine generals. The generals issue a command, to attack or retreat, while the interested parties issue commands of how much power to send and when. The messenger is no longer a horseback rider, but a public (or semi-private) communications network along the blockchain. The Byzantine Generals' Problem can be distilled further into several bullet points which identify any scenario that may be solved by a decentralized consensus technology like blockchain:

There is a need for a common exchange of information, i.e., a database.
There are any number of resource-governing entities involved.
The parties in this game have conflicting incentives or reason to mistrust each other.
These parties are likely governed by different rules.
There is a need for a truly objective, unbiased, unchangeable log of records.
The rules behind decision making rarely change, if ever.

So, a system to solve the problem of Byzantine generals operating distributed energy resources in a microgrid environment, or colocation tenants in a data center environment, must include the following to be truly optimal:

Consensus on records and transactions must be decentralized.
The system functions whether or not the players trust each other.
The system is highly resistant to tampering with data, creating false records, and collusion against the common goal. This is known as a high degree of Byzantine fault tolerance.
Good actors are rewarded for exchanging valid information and acting trustworthy.
Bad or malevolent actors lose resources, and identify themselves as untrustworthy, whenever they attempt to falsify communications.
Resources, information, and decisions move quickly.
Decisions involving multiple parties may need to be bi-directionally automated, such that either party can deliver commands over both parties' resources, under a verification system.
The growing record of transactions, information exchanged, and consensus decisions is secure, permanent, and able to identify false information quickly.

Flowchart (above), "Does your enterprise need blockchain?" (Source: Information Services Group)

This, in a nutshell, is a conceptual description of blockchain and of what it fixes in a data center and microgrid environment that conventional logistics cannot. As depicted in the flowchart created by the Information Services Group, a collection of energy resources supporting a zero-GHG microgrid fulfills the criteria of requiring a trust-independent database operated by many individuals and representatives. The resource operators do not trust their client blindly, they do not trust each other, and they do not trust the power utility, a competitor. For a true zero-impact microgrid to succeed at the level of distributed energy resources, the energy operators must utilize a blockchain.

3. Distributed Energy Resources for Microgrids as a Database and Automated Transaction Network

Imagining that there are some number of third-party suppliers of renewable energy to a microgrid-supported data center, it is very easy to understand the desire for a database of transactions, electrical characteristics over time, and supply chain parameters. A solar plant wants to log its own reserves, and a client wants to have an idea of the rate at which the plant generates energy reserves. The same can be said for any sort of distributed energy resource of a renewable, zero or low embodied carbon nature (be it wind, geothermal, hydro, etcetera). However, it is conventional wisdom that the user cannot get access to utility data and that the microgrid cannot get real-time or truly deep data about its third-party suppliers. This can make the process of verifying certified renewable, zero-embodied-carbon energy upstream of an entity almost impossible. This is where a blockchain is very handy. Because transaction records are permanent, require stake or proof to generate (see proof of work, proof of stake), and are tamper- and falsification-proofed, a supplier of power is assured that its data is secured by key. The recipient of power receives assurance that the claimed reserves were available, because the log of capacity witnessed by the solar plant is the same log witnessed by the data center as a client. A database is then created between the two parties containing transaction records, voltages, currents, power quality, ambient conditions, consistent values of energy entering transmission lines and energy exiting them, as well as the available solar reserves for use by the data center in an emergency.
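As a minimal sketch of such a tamper-evident shared log (the field names and quantities are hypothetical; a production ledger would add digital signatures and a consensus protocol on top):

```python
import hashlib
import json
import time

def make_block(records, prev_hash):
    """Bundle records into a block whose hash covers both the records and
    the previous block's hash, chaining the history together."""
    body = {"timestamp": time.time(), "records": records, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def chain_is_valid(chain):
    """Recompute every hash; any edit to a historical record breaks the chain."""
    prev = "0" * 64
    for blk in chain:
        body = {k: blk[k] for k in ("timestamp", "records", "prev_hash")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if blk["prev_hash"] != prev or blk["hash"] != expected:
            return False
        prev = blk["hash"]
    return True

genesis = make_block([{"asset": "solar-plant-A", "reserve_kwh": 310}], prev_hash="0" * 64)
block1 = make_block(
    [{"sale_kwh": 31, "buyer": "dc-campus-1", "voltage": 480, "thd_pct": 2.1}],
    prev_hash=genesis["hash"],
)
print(chain_is_valid([genesis, block1]))  # True, until any field is edited
```

Because each block's hash covers its predecessor's hash, silently editing any historical voltage, reserve, or transaction record invalidates every later block, which is what lets two mutually distrustful parties treat the log as a single source of truth.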
This idea can be daisy-chained such that a database is formed between all distributed energy resources and resource users in the microgrid environment; each of them is able to form an economic dispatch without trusting each other or knowing the identity of other players in the database. Furthermore, the blockchain allows any party to propose and dispatch resources across the entire network, so long as its dispatch request fulfills the conditions of the consensus algorithm. At this point, the consensus algorithm becomes a decentralized and automatic transaction machine, an automated transaction network, such that each member of the network independently forms the transaction machine (it has no single point of failure, and it exists at all points in the network). Transactions between two or more parties may be initiated through the automated network by a single member. This brings unrivaled speed and efficiency to demand response, power optimization, and economic dispatch, because the network solves all parties' internal decision-making algorithms against the network-wide consensus algorithm almost instantly. Relevant parties receive funds or power dispatch. Compensation is then distributed to the selected miners or proven good actors who performed computations on behalf of the transaction machine. As another point for consideration, this blockchain machine increases the security of transactions against malevolent actors. By making dispatches transparent, even if anonymous, and by giving all users the ability to command the network under consensus, the network machine identifies bad actors as users who fail to satisfy the consensus algorithm.

4. Determining Loss-Tolerance, Proof of Stake, and a Threshold for Trust

One of the most crucial steps in creating a successful blockchain or automated decentralized transaction network is determining the method of verifying good intent. Thinking back to the problem of the Byzantine generals, to ensure that a command or information issued by one general to the whole army is truthful and in good faith, there needs to be some system in place to expose malevolent actions (Lamport, Shostak, and Pease; 1982). This way, by the system's design, bad actors and defectors expose themselves and, in modern systems, lose resources or trust when they issue a malevolent command. Another benefit is that costly commands prevent repeated malevolent actions against the network unless many parties are pooling resources and colluding. There are a few systems that can be used to ensure the honest, successful, and quick operation of a zero-greenhouse-gas system of energy distributors. Not all methods of achieving reliability in a decentralized microgrid network shall be examined in this paper; it is recommended that the key terms, works cited, and further reading be consulted for the implementations not utilized here. As an alternative to the proof-of-work paradigm for blockchains and decentralized consensus networks, consider "proof of knowledge" and "proof of stake". Regarding proof of knowledge, consider the zk-SNARKs technology behind the Zcash cryptocurrency.
The Electric Coin Company, the creators of Zcash, explain in the article "What are zk-SNARKs?" that their technology "refers to a proof construction where one can prove possession of certain information, e.g., a secret key, without revealing that information, and without any interaction between the prover and verifier". Diagrammatically, this process is described as follows:

Figure adapted from "How zk-SNARKs are constructed in Zcash" by the Electric Coin Company

Imagine a system that could command many different competitors to cooperate without revealing sensitive information to any of them. This would be referred to as a zero-knowledge proof of knowledge by the creators of zk-SNARKs and Zcash. Zero-knowledge proof-of-knowledge blockchains are constructed by following the algorithm shown in the figure adapted from "How zk-SNARKs are constructed in Zcash". To elaborate further, in a zero-carbon, microgrid-supported data center context, this decentralized control network commands many different competing energy suppliers to generate the optimal amount of electricity for any arbitrary load, given zero-knowledge assertions of each supplier's fixed or dynamic price and zero-knowledge assertions of each supplier's current or scheduled energy generation and storage capabilities. Competing suppliers are incentivized to implement the blockchain by the promise of maximized revenue capabilities, and microgrid-forming or microgrid-supported data center facilities are incentivized to implement the blockchain to minimize their total cost of achieving zero-carbon (or highly carbon-efficient) off-grid power across the system. Due to the nature of zero-knowledge proofs of knowledge, no member of this blockchain would need to know the other members' prices or how much energy they were allowed to supply.

As an application of the microgrid control policy network implemented with a zero-knowledge proof-of-knowledge blockchain, imagine this technology being used to achieve a zero-carbon-emissions microgrid data center environment and to provide enrolled electricity suppliers with maximized revenue during any electrical event or extended contract of service. Take the following assumptions as true for the sake of this practical thought experiment:

There exists a microgrid either formed or electrically followed by a data center or a collection of data halls on a data center campus.
The grid-forming and grid-following facilities may or may not have their own distributed energy resources and storage. This is irrelevant to the goal of supplementing energy during shortages, events, or extended periods of time.
Cost optimization opportunities may appear at any time for the third-party suppliers and the data center campus.
The microgrid can go off-grid, entering what is known as island mode.
The primary goal of this microgrid shall be to support the continuous operation of mission critical facilities via zero-carbon, green energy.
There exist an arbitrary solar company, an arbitrary wind company, an arbitrary geothermal company, and an arbitrary biodiesel company that can supply electricity to the microgrid. These companies are in direct competition for the revenue created by supplying electricity into the microgrid.
There exist 53 server tenants (an arbitrary number; there must be more than 3 members of the blockchain).
To eliminate utility involvement in metering, a data center, building, or individual server tenant interfaces its equipment monitoring values to the decentralized network system.
This interface allows any of the load users to inform blockchain members of how much energy was used per billing cycle and to prove that they have maintained a record of equipment monitoring status verifying that tampering did not occur, without revealing information to other load users or suppliers. Because it is zero-knowledge, the equipment monitoring channels inarguably provide true and anonymous readings of energy usage without utility verification. This also enables the added capability for load users to schedule out required demand without revealing classified information to any of the competing suppliers. The same outcome can be achieved for any other metric, system, or information that the blockchain members wish to include in their anonymous exchanges and supply optimizations. All suppliers receive direct commands from the decentralized network process (which they collectively form) on how much energy to supply, for maximized revenue, after asserting how much energy they hold in reserve and how much they charge.

For the sake of argument, the solar supplier creates 31kWh of zero-carbon-emissions energy. The solar supplier, as a prover, may inarguably assert that it produces 31kWh of monthly energy for microgrid and data center use to either a server tenant (verifier) or any competing supplier (verifier) without revealing the number itself. At the load side of the blockchain and microgrid, a server enterprise user or data center can verify and select just how much green energy it is willing to pay for at the source, without the exchange of sensitive information. In the example given, the arbitrary 53 server tenants (provers) prove to a wind supplier, a solar supplier, a geothermal supplier, and a biodiesel supplier (all four as verifiers) that they used between 1kWh and 3kWh per tenant, totaling 91kWh in a month, without revealing the total amount to any of the four suppliers. They first prove that they have some true value of total power to all four entities without revealing the number itself, via a hashed value that hides 91kWh in randomness and secrecy. This hashed value is unique in that it must correspond to the value of 91kWh that none of the suppliers know. As the old paradigm of C language coding goes, one should not need to know how a script will be used. Once all suppliers (or selected block miners) have performed their role in the verifying algorithm, confirming that the hashed value does exist and corresponds to some secret number, the 53 tenants may prove to each supplier the individual amount used, or verify the individual amount supplied by each power entity (as a vice versa scenario, or perhaps both must occur). They use a zero-knowledge proof of knowledge to let each power supplier know that they have four true values, each hidden by a hash value. In the thought experiment, these tenants prove to each supplier that they owe payment for 31kWh from the solar supplier, 30kWh from the wind supplier, 15.1kWh from the geothermal supplier, and 14.9kWh from the biodiesel supplier, without telling any of the other three suppliers how much power was supplied by their competitors. Because of this implementation, no electric utility metering is installed in the microgrid system, and data center facility personnel are no longer needed to act as the central authority between the microgrid, power suppliers, and server tenants.
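The "hashed value that hides 91kWh in randomness" described above behaves like a cryptographic commitment. Below is a minimal salted-hash commitment sketch (illustrative only; a true zk-SNARK goes much further, proving statements about the hidden value without ever opening the commitment):

```python
import hashlib
import secrets

def commit(value_kwh: int) -> tuple[str, bytes]:
    """Commit to a meter reading: publish the digest, keep the salt secret.
    The digest reveals nothing useful about the value, yet binds the prover
    to it, since no other (value, salt) pair will reproduce the digest."""
    salt = secrets.token_bytes(32)
    digest = hashlib.sha256(salt + str(value_kwh).encode()).hexdigest()
    return digest, salt

def verify(digest: str, value_kwh: int, salt: bytes) -> bool:
    return hashlib.sha256(salt + str(value_kwh).encode()).hexdigest() == digest

# The tenants publish a commitment to their 91kWh total up front...
digest, salt = commit(91)
# ...and can later open it to a chosen counterparty, without having been
# able to change the number in the meantime.
print(verify(digest, 91, salt))   # True
print(verify(digest, 90, salt))   # False: the committed total is binding
```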
As Marco Conoscenti and his coauthors write in their 2016 IEEE Computer Systems and Applications Conference review paper, "Blockchain for the Internet of Things: A Systematic Literature Review":

A private-by-design IoT could be fostered by the combination of the blockchain and a P2P storage system. Sensitive data produced and exchanged among IoT devices are stored in such storage system, whose P2P nature could ensure privacy, robustness and absence of single points of failure (Conoscenti, Introduction; 2016)

In an internet-of-things environment, it becomes necessary to consider that not every IoT device is capable of storing an entire blockchain. A hybrid of blockchain and peer-to-peer storage, hashed for security in any manner described in this paper or in its consulted works, enables block chaining between lower-level I/O devices and higher-level entities like dispatchable generation and load-consuming facilities. For example, if a substation only needs to store the portion of a blockchain required for it to function as a node, then its owner is more likely to enroll it as an additional resource. Consulting Conoscenti's rather thorough review once more, the authors go on to write:

Combined with this storage system, the blockchain has the fundamental role to register and authenticate all operations performed on IoT devices data. Each operation on data (creation, modification, deletion) is registered in the blockchain: this could ensure that any abuse on data can be detected. Moreover, access policies can be specified and enforced by the blockchain, preventing unauthorized operations on data.

The hybridization of blockchain down to each local IoT device enables the tracking and verification of entire oceans of raw data; furthermore, granting the control policy for a modernized grid to the blockchain creates scrutiny against malevolent actors at an almost microscopic level by design intent, without intruding on an individual entity's privacy. Think back to the zero-knowledge proof of knowledge for clarity: it is possible, via emerging cryptographic algorithms like zk-SNARKs, for blockchains to validate that a self-metering transmission pole registered 32 amperes of current at some arbitrary time without knowing the value was 32 amperes.

5. Automation of Components in a Microgrid

Taking things a step further, given the proper implementation, power systems, and computational resources, it is possible to fully automate the following components of a microgrid:

Creation of the economic dispatch computation.
Assurance of data security.
Decentralized computation of the economic dispatch problem for the microgrid.
Proposed dispatch of power based on the optimized solution to the economic dispatch problem (likely via the standard application of the nonlinear distributed Newton-Raphson method to modernized grid models if the microgrid has a large number of buses; otherwise, simple calculus scripts should suffice).
Proof of stake or zero-knowledge proof of the optimal solution to the economic dispatch and of the funds owed to all affected parties.
Proposal of financial compensation for the economic dispatch to the network consensus algorithm without a third-party broker or metering entity.
Approval or denial of the proposed economic dispatch, financial compensation, and decentralized computation of conformity to the consensus algorithm by the blockchain network.
Dispatch of power from generating entities.
Scrutiny against malevolent actions during or after dispatch.
Storage and processing of data from all points between generation and loads.
Decentralized updates to a hashed ledger of transactions, or a ledger of power flow, at set intervals.

The full scope of autonomy, and of reduced costs for creating central facilities to manage the microgrid, generation dispatch, transmission stations, and data center oversight, is astounding. As Conoscenti writes, "In this framework, people are not required to entrust IoT data produced by their devices to centralized companies: data could be safely stored in different peers, and the blockchain could guarantee their authenticity and prevent unauthorized access" (Conoscenti; 2016). There is no need for an electrical utility or third party to meter, verify, or broker the exchange of power for payment, because the entire network of power systems connected to the microgrid and supporting the data center facility provides decentralized oversight. This system satisfies the mission critical application of the original Byzantine Generals Problem by virtue of high-quality, zero-embodied-carbon device components being used to provide distributed redundant computations, stored partially and fully across a much higher number of systems than a data center would typically be able to rely upon for logistical data. The microgrid itself is intrinsically designed without a single point of failure. Blockchain systems have a high resistance to Byzantine faults, providing high reliability in data security.

Now consider a scenario in which each node in the modernized microgrid (i.e., measurement and relaying, smart devices, control systems, electrical equipment, etc.), data center (service point, monitoring and control systems, electrical equipment, etc.), or transmission and distribution system (controls, transmission transformers, meters, substations, etc.) contains a partial or full record of the blockchain, to address the issue of limited resources at remote equipment locations. Each node may be owned by a different entity, or many nodes may be owned by one entity (cautioning against centralization). The architecture of the blockchain network is layered to reduce computational strain on entities with less power or smaller nodes, and each node contains at least the portion of the blockchain it needs to operate and to connect to at least three neighboring nodes (see Lamport, Shostak, and Pease regarding "neighboring commanders").

Figures 6 and 7 from Lamport, Shostak, and Pease, "The Byzantine Generals Problem"

Thus, the partial blockchains forming or supporting the network collectively form at least a "3-regular graph" to ensure solvable distribution and decentralization of messages and communications. This is necessary in a smart grid environment because smaller components may not be able to store the entire blockchain. One can reference the Byzantine Generals Problem for further information on the requirement for at least a 3-regular graph, but the concept essentially ensures unique paths between entities and that all neighboring nodes provide sufficiently different routes for data, resilient against malevolent action. Partial blockchain storage is a great fit for microgrids. As described by Conoscenti, "we suggest to develop IoT applications on top of another secure but scalable blockchain […] Moreover, we suggest to adopt a layered architecture which supports thin clients to allow IoT devices with limited resources to store only a portion of the blockchain".
Such a blockchain is ideal for an IoT and microgrid environment because it allows smaller measurement devices to form or follow the blockchain without an overwhelmingly large stock of data resources at the seemingly infinite count of nodes that form a microgrid.

Regarding the significance of understanding proof of stake, consider that in April of 2020, the founder of Swiss crypto broker Bitcoin Suisse, Niklas Nikolajsen, claimed that Bitcoin will transition to a proof-of-stake algorithm once the Ethereum cryptocurrency network demonstrates the algorithm's success in the market. To an avid follower of blockchain technologies, this is highly significant and disruptive. As author Marie Huillet recounts, an outtake from a German documentary uploaded on April 6, 2020, records the founder, Nikolajsen, saying, "[Bitcoin's move to Proof-of-Stake] is not planned, but the second-largest cryptocurrency, Ether, will move to a Proof-of-Stake concept that demands vastly less electricity, already in a few months. I'm sure, once the technology is proven, that Bitcoin will adapt to it as well" (Huillet; 2020). Nikolajsen goes on to claim that proof of stake (PoS) is a superior system to proof of work (PoW) once it is proven to work well. To briefly describe proof of stake, imagine a blockchain whose "nodes in the network engage in validating blocks, rather than mining them, as in PoW". In PoS, these block validators are selected by algorithm, in the case of cryptocurrency based on "the number of tokens a given node has staked in their wallet — i.e., deposited as collateral in order to compete to add the next block to the chain" (Huillet; 2020).

In the case of a microgrid, or any modernized grid technology, PoS can be applied as such:

Block validators are selected from a public pool of miners or a private pool of microgrid-involved entities (i.e., dispatchable generation, storage, transmission, tenants, data center, microgrid distribution and operations, smart devices, etcetera) by a deterministic algorithm.
The algorithm selects a subpopulation of the network to be block validators based upon how much tokenized "trust" they are willing to stake and, iteratively, how much trust they have successfully demonstrated through past computations.
Any gamification method, direct financial compensation, dynamic incentive, or other method of providing entities a return for staking trust can be used to ensure continued involvement in proving stake.

Thus, when a supplier's conditions for sending a power dispatch downstream are met, the supplier submits a computation to the network that is turned into a block, with identifying data hashed to secret randomness associated with a unique value, and members of the network are autonomously selected and bid to add the next block to the chain. Entities awarded the bid are granted an incentivizing return upon successful demonstration of continued stake, and their trust index is increased. This modified proof of stake can be considered a threshold means of determining trustworthiness and of incentivizing members of the microgrid to continue acting in the best interests of the system. Because only continued stake, and not proof of work, is required, there is potential for lower power requirements to complete computations in the network.
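A minimal sketch of this modified proof of stake follows (the class names, the 10% wager fraction, and the reward and penalty rules are invented assumptions for illustration, not a published protocol):

```python
import random

class Participant:
    def __init__(self, name, trust):
        self.name, self.trust = name, trust

    def wager(self, fraction=0.1):
        """Stake a fraction of accumulated trust to enter the validator bid."""
        return self.trust * fraction

def select_validators(pool, k, seed=None):
    """Choose k distinct block validators, weighted by wagered trust."""
    rng = random.Random(seed)
    chosen, candidates = [], list(pool)
    for _ in range(min(k, len(candidates))):
        weights = [p.wager() for p in candidates]
        pick = rng.choices(candidates, weights=weights, k=1)[0]
        chosen.append(pick)
        candidates.remove(pick)
    return chosen

def settle(participant, wagered, block_was_valid):
    """Reward honest validation; burn the wager (and reputation) otherwise."""
    participant.trust += wagered if block_was_valid else -2 * wagered

pool = [Participant("solar", 100), Participant("wind", 80),
        Participant("geothermal", 40), Participant("dc-campus", 60)]
for validator in select_validators(pool, k=2, seed=7):
    settle(validator, validator.wager(), block_was_valid=True)
```

Weighting selection by wagered trust makes repeated sabotage expensive: each false block burns the wager and shrinks the bad actor's future chance of selection.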
Continuing to examine the transition from theory and finance to microgrid operations, take the below system of operations, a purely hypothetical system, for a blockchain-integrated electric dispatch network of entities serving a large client's demand.

(Source: EYP Mission Critical Facilities, Part of Ramboll)

Any number of operations to encrypt, package, track, and record information and transactions may be introduced to the system and given a control policy dependent on the demand or required load and on measurements of the system. The blockchain algorithm can take the considerations and rulebooks of each entity as triggers for a dispatch request and scramble them into hashed secrecy that operates to satisfy the consensus algorithm. Thus, dispatch requests can be triggered autonomously, and transactions authorized, so long as they satisfy the algorithm for consensus. So long as the miners of distributed computations, or the elected trustworthy members of the network, act in good faith, and continue either to perform zero-knowledge proof and validation or to stake and gain indexed trust by achieving true results from the consensus algorithm, participants in the game of this network witness optimized reliability, operational efficiency, and profits.

Using distributed computations via block mining, proof of stake, and/or the index of trust, the algorithm can be checked against current conditions by distributing calculation across the entities least likely to feed bad information or send false results. Once this is performed, a block, or proposed dispatch, is produced and distributed to the network against the consensus algorithm as a final check. The consensus algorithm, in the case of renewables, must consider a threshold or variance within which a slight loss of profit or energy, or a minimal and momentary increase in embodied carbon, can occur. Anything that falls outside these bounds is rejected outright. Anything that fails the consensus algorithm of the blockchain network is also rejected. Anything that satisfies the UN 2030 requirements for a true zero-greenhouse-gas power distribution scheme, falls within the bounds of losses, and satisfies the consensus algorithm is approved autonomously, and funds are distributed to all relevant parties enrolled in and affected by the economic dispatch and by the use of computational resources to calculate the optimized dispatch.

6. Optimizing the Supply of Zero-Greenhouse-Gas Energy via Decentralized Networks

As Helen Carruthers and Tracy Casavant of the Light House Sustainable Building Centre Society discuss in the 2013 Commission for Environmental Cooperation standards review, "What is a 'Carbon Neutral' Building?", the definition of "carbon neutral" continues to evolve as it relates to measuring, reducing, and offsetting carbon energy (Carruthers & Casavant, 1). Though a scholar like Dr. Krawiec-Thayer might argue this is the result of political controversy and industry resistance to buzzwords, Carruthers, as both a Project Manager and a LEED AP, would more likely attribute changes in the meaning of "carbon neutral" to the emergence of new technologies, new authorities, and an improved ability to identify the embodied and operating energy of carbon emissions.
Take, for example, Carruthers and Casavant's description of a popular approach to carbon neutral building design:

• Integrating passive design strategies
• Designing a high-performance building envelope
• Specifying energy efficient HVAC systems, lighting, and appliances
• Installing on-site renewable energy
• Offsetting

Observe that this approach to carbon neutral design considers the integration of design strategies, material performance, renewables, and energy-efficient device specification. It appears that this process has rigor. By applying informal reasoning, designers may even argue that the mainstream approach adequately addresses cradle-to-grave needs for carbon neutrality; however, this approach to carbon neutral design is dangerously lacking in its bounds, verification, and certification. It appears to be more of a bookend to the carbon-emitting designs of old than an evolution in technique. There is no consideration of the carbon emissions due to employees, on-site clients, and the delivery of supplies to the site; furthermore, this approach lacks any mention of the carbon emitted to create a high-performance building envelope. It does not consider how much carbon is emitted to create energy-efficient electrical equipment. It also fails to consider the production of renewable equipment or the emissions that may occur during the installation of equipment and the build phase of the facility itself (Carruthers and Casavant, 1-2). As Carruthers and Casavant state:

A carbon neutral definition should include specific information/requirements relating to the following:

• System boundary – includes within it all areas associated with the buildings where energy is used or produced, i.e. operational energy, embodied energy of the materials used, energy used for the construction process and travel for occupants.
• Renewable energy and carbon offset 3rd party certification.
• Verification or certification of the calculated carbon emissions. (Carruthers and Casavant, 1).

Just as a physicist must consider the most relevant boundaries of a phenomenon under observation, true ZGHG microgrid and data center creating entities and firms must take into consideration, at the earliest stages of design, the true boundaries of their project's carbon impact, the validity of their carbon emissions calculations, and the objective certification of emissions offsetting. Thus, Carruthers and Casavant introduce three definitions for carbon neutral building design:

• Carbon Neutral – Operating Energy[1]: […] Carbon neutral with respect to Operating Energy means using no fossil fuel GHG emitting energy to operate the building. Building operation includes heating, cooling and lighting.
• Carbon Neutral – Operating Energy + Embodied Energy: This definition for Carbon Neutrality builds upon the definition above and also adds the carbon emissions associated with energy embodied in the materials used to construct the building.
• Carbon Neutral – Operating Energy + Site Energy + Occupant Travel: This definition of carbon neutrality builds upon the inclusion of operating energy and embodied energy, and also reflects the carbon costs associated with a building's location. This requires a calculation of the personal carbon emissions associated with the means and distance of travel of all employees and visitors to the building. (Carruthers & Casavant, 3)
The third definition considers the emitting energy of building operation, the production of the site and its parts, and the carbon costs induced by the build site's location. In the face of so many additional contributions to carbon emissions, it becomes apparent just how little the popular approach to carbon neutral design achieves. Perhaps popular design satisfies investors and pundits, but it does not address every node of carbon emissions in such a large network of varying interests. As conjecture, this supply chain could itself be examined for decentralization by future authors.

____________________
[1] "The base definition for Carbon Neutral Design is taken from https://www.architecture2030.org" (Carruthers & Casavant)

(Source: Grid Evolution, Vintage 2019)

Consider the visual depiction of "Tomorrow's Decarbonized and Decentralized Power Market", as illustrated by Grid Evolution. In modernized grid technologies, a bidirectional network of data, power, and transactional nodes emerges between dispatchable generation and the end customers, loads, or consumers of power. The complexity of such a system is massive and exceeds the ability of a central authority to manage effectively, as argued previously in this white paper and corroborated by the implications of Lamport, Shostak, and Pease. Consider, on top of the smart grid itself, the inevitability of ZGHG smart grids, which bring the considerations of operating + site + travel energy to any power system and accompanying infrastructure. At this point, such a system has a high order of data nodes and multiple rulebooks to consider. The authors of the Byzantine Generals Problem would likely argue that a system of many commanding entities that do not necessarily trust one another, and that cannot agree on a single entity to act as an unbiased verifier, cannot succeed via centralized decision making (Lamport, Shostak, and Pease; 1982). So, a microgrid itself, as a modernized grid technology, is unlikely to ensure reliability when more than three parties are involved, let alone a microgrid that is designed to have zero carbon impact from end to end and from cradle to grave. Take into consideration any of the schemes or hypotheticals proposed so far, and this seemingly unsolvable problem finds clarity.

Observe the kick-off to this series of papers on the future of zero-carbon networks, the "Decentralized Dispatch Problem". Let there exist a microgrid formed by electrical infrastructure and multiple facilities on a data center campus, whose modernized grid receives dispatchable generation from four sources. These sources are parameterized with initial conditions, at the highest level, as follows:

G1[g=φ], a large solar plant at the null set condition "φ"
G2[g=φ], a small solar plant at the null set condition "φ"
G3[g=φ], a large wind plant at the null set condition "φ"
G4[g=φ], a small geothermal plant at the null set condition "φ"

The null set condition for these four sources is "do not dispatch power", owing to the "retreat" command of the Byzantine Generals Problem. All four of these sources are competitors; by nature, if one were to fail, more power would be requested from the remaining three. Therefore, none of them can trust one another, and this system shall be assumed trustless. However, all four do share a common goal: the dispatch of generated and stored power to a microgrid supporting a data center, for profit.
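Before formalizing the dispatch, the four sources and their default condition can be sketched as a simple data model (the capacity figures are invented for illustration; the unity condition "1" anticipates the definition given further below, i.e., actively injecting power):

```python
from dataclasses import dataclass
from enum import Enum

class DispatchState(Enum):
    NULL = "phi"   # the null set condition: "retreat", do not dispatch power
    UNITY = "1"    # the unity condition: inject power onto the main bus

@dataclass
class Source:
    name: str
    capacity_kw: float
    state: DispatchState = DispatchState.NULL  # all sources start at g = phi

sources = [
    Source("G1 large solar", 500.0),
    Source("G2 small solar", 150.0),
    Source("G3 large wind", 400.0),
    Source("G4 small geothermal", 200.0),
]
```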
To achieve the optimal dispatch of power, it would not be unusual for an economic dispatch problem to be computed by a central entity in a form such as the following:

$$ e(t) = \arg\min_{g_i,\; p_u,\; p_{dc}} C(t), \qquad C(t) = \sum_{i=1}^{4} c_i(t)\, g_i(t) + c_u(t)\, p_u(t) $$

$$ \text{subject to} \quad \sum_{i=1}^{4} g_i(t) + p_u(t) + p_{dc}(t) = P(t), \qquad t \in [0, \tau] $$

Here, g_i(t) is the power dispatched by source G_i at cost coefficient c_i(t); c_u is the time-varying cost (or revenue) of utility power or of injection back to the utility, with p_u(t) the corresponding utility power; p_dc is the storage power being loaded for reserves or being unloaded for use or injection at time t; C(t) is the cost of the economic dispatch solution to the power requirements; P(t) is the net power of the economic solution from the perspective of the data center main bus at time t; e(t) is the solution to the economic dispatch problem at time t; and τ is some constant value bounding t, to keep this model simple.

Note that this model could be expanded to include scattered reserves for storing power within the microgrid itself. Additionally, only the resources of a single data center shall be considered for this initial paper, though a data center microgrid could be modeled with parameters at each colocation facility and, at a deeper level, at each colocation tenant. The bounds of a microgrid are dynamic and highly dependent on frame of reference, such that each of the various colocation facilities on a data center campus that forms the microgrid might consider all the other facilities to be the microgrid from its own perspective. The economic dispatch formula in this paper states that the economic dispatch solution e(t) models the calculus-optimized supply and injection of power from the perspective of the microgrid-forming campus at time t, on the basis of cost savings. The goal of economic dispatch, as a refresher, is to obtain the lowest value of C (cost per kilowatt or kilowatt-hour) that solves the problem e for power P at time t. Thus, a linear equation is formed to solve e(t). If there are multiple cost coefficients available at time t, by whatever means in the system, then a system of linear equations is formed that may be solvable as a matrix of coefficients or via differential equations; however, the mathematics to demonstrate such a dynamic system are beyond the scope of this paper and its intended audience.

Assuming a decentralized decision-making consensus, a central entity no longer solves this problem, as the utility once did, or as the system designer may be tempted to place it upon the shoulders of the microgrid-forming data center campus itself (i.e., adding cost, footprint, carbon impact, and additional equipment). Instead, blockchain enables the solution of economic dispatch in an elegant fashion, using the resources of the generation and distribution network itself. While numerous methodologies are available to establish collaboration in a blockchain, imagine a high-level adaptation of the proof-of-stake method into a proof of trust, for the sake of thresholding block miners, in which the network itself distributes the computation. To achieve successful dispatch, such a pseudo-algorithm must be realized by design and operation, with Gdc representing data center resources, demand-side management, reserve power, and control, and with "1" as the unity set condition, implying that an entity defaulting to the state of unity is normally injecting power into its main bus.
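As a minimal numerical instance of the centralized form of this dispatch problem (the cost coefficients, capacities, and demand figure are invented for illustration), a single linear-programming pass recovers the least-cost dispatch:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical linear costs c_i ($/kWh) and capacities g_max (kW) for
# G1 (large solar), G2 (small solar), G3 (large wind), G4 (small geothermal).
costs = np.array([0.04, 0.06, 0.05, 0.07])
g_max = np.array([500.0, 150.0, 400.0, 200.0])
demand = 900.0  # P(t): net power required at the data center main bus (kW)

# Minimize C = sum_i c_i * g_i  subject to  sum_i g_i = P  and  0 <= g_i <= g_max.
result = linprog(
    c=costs,
    A_eq=np.ones((1, 4)), b_eq=[demand],
    bounds=list(zip(np.zeros(4), g_max)),
    method="highs",
)
print(result.x)    # optimal dispatch e(t): [500, 0, 400, 0], cheapest units first
print(result.fun)  # C(t): the minimized cost of the dispatch
```

In the decentralized scheme described above, the same optimization would instead be computed redundantly by staked block builders and checked against the consensus algorithm, rather than by one trusted solver.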
Imagine a second case in which there is a microgrid supporting a data center (i.e., a microgrid-forming campus of colocation facilities whose bus-to-bus infrastructure forms a path for the supply of power to some data center) capable of storing enough reserves from distributed energy resource dispatch, and whose reserves are capable of supplying enough energy to regularly exceed the demand of the data center campus or facility. So, it follows that a conditional of the following form is evaluated:

$$ \int_{a}^{b} p_{res}(t)\, dt \;\ge\; \int_{a}^{b} P(t)\, dt $$

for whatever predicted or required time interval [a,b], where p_res(t) denotes the deliverable reserve power; the initial computation assumes that the microgrid is operating in utility mode, so that the value of "utility cost" is non-negative and the data center bus is not initially injecting power. If the initial computation finds that the right-hand-side conditional is satisfied, then the system allows for the operation of the data center as a power source, via either demand-side management and demand response (reducing the strain on the macrogrid in exchange for financial compensation from the utility) or direct injection from the facility buses and/or Distributed Energy Resource (DER) buses into the utility bus.

The economic dispatch problem is initiated upon autonomous identification of a need for energy downstream or an offer to supply energy upstream. The data center may try to resell power it has received from the utility, or from any distributed energy resource, back to the alternative for a profit, and DERs may try to do the same to the data center, the utility, and one another. A proverbial energy trading market is formed by each microgrid. Metering technology is integrated with the blockchain, via partial storage at local IoT devices, so that a need for power at any point in the network may be detected and a network-optimized trade of energy supply initiated. Each entity has its own rulebook containing parameters for a safe investment of trust tokens and for energy dispatch at a direct profit, and this rulebook is autonomously evaluated against an anonymous need for power somewhere in the network, alongside current conditions. If the entity's rulebook greenlights the need for power, then that entity's processors will automatically wager trust to build the block of the economic dispatch. If the entity builds the block of a partial economic dispatch solution that is false, or that is somehow biased toward the block builder outside of the network's threshold for loss, then the entity loses its wagered trust and has a reduced ability to be selected for block building in future power trading solutions. If the entity builds a truthful solution to the economic dispatch problem, then it is rewarded with an increase in trust. The consensus algorithm has thresholds for loss, and the blockchain itself has algorithms for determining false blocks or deliberate attempts at sabotaging the blockchain, which are much more difficult to accomplish in a proof-of-stake or proof-of-trust system.

Out of all the entities wagering some amount of their trust tokens, the network consensus algorithm selects from the bidding entities either a random subpopulation that fulfills a minimum computed "net trust" requirement based on the importance of the economic dispatch being considered, a set of summed trust such that the total trust of the block builders fulfills good-actor requirements even if an individual bad actor makes it into the bid, or a computed minimum wager of trust to be selected.
The third option may have an issue in that participant rulebooks either need to know the minimum wager of trust or are left blindly guessing how much to wager. The second option builds additional resilience against bad actors, because a single bad actor getting in and wagering trust on a solution it provides can be systematically overwhelmed by all the other solutions to the problem. Some weight may need to be given to individual solution submissions based on how trustworthy an entity is, or on how much trust it wagered, to create an incentivizing market. In proof of trust, the prover must wager some amount of its trust tokens, Sk, in order to bid on being included in the pool of potential block-building participants. Depending on the system, in theory, trust tokens could form a secondary trading system on top of the direct energy trading floor created by bus-to-bus injections, or they could be used by entities who would like to sell energy injection anonymously to another entity calling for grid injection. They could wager their trust that the solution is cost efficient, energy efficient, or zero carbon, if that is of interest to the recipient of power. There are many different ways to get participants interested in wagering trust in order to build blocks, each with pros and cons; as well, each method of gamifying this system may need to be tweaked for resilience and for holes in logic. Such a bid might be denoted B(Sk, pj), where B is the initiation of some block algorithm to wager trust Sk in exchange for the ability to sell power to some anonymous demanding load pj.

This notion of wagering trust brings up another potential implementation of ZGHG data-center-formed microgrids and energy trading, in which multiple entities may provide solutions that benefit them for profit and fall within some maximum threshold of loss, price increase, or temporary efficiency drop at any point in the system. So long as a solution falls within the consensus algorithm's maximum loss thresholds, and the trust ranking of the party is high enough, the party can either be directly selected as an economic dispatch solution that optimizes cost from its facility's perspective, or be entered into a random pool of individuals with fair trust rankings. Once a solution is selected, it is sent to a random set of entities with trust rankings that match the significance of the proposed dispatch. It is hashed to protect individual identities, and either proof of stake, proof of trust, or proof of knowledge is used by these block builders to ensure that they are incentivized to build a correct block for the selected dispatch solution. The redundant blocks are based on redundant data taken from computations made across the network, ensuring a higher tier of redundancy in the computational infrastructure, and the redundant blocks are checked against the consensus algorithm. Redundant blocks offer higher protection but, just like a redundant data center, they can increase cost to the network or to individual entities. Assuming a single block is made and added to the blockchain, the blockchain update is examined, approved or rejected, and distributed to the network. Transactions, operations, and exchanges of funds and power are automatically dispatched for all relevant parties. Thus, the zero-greenhouse-gas microgrid is able to meet its own power needs and potentially obtain external profit via injection.
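A minimal sketch of that final consensus check appears below (the loss threshold, zero-carbon bound, quorum value, and field names are hypothetical placeholders for whatever a real network would negotiate):

```python
def consensus_check(proposal, *, max_loss_pct=2.0, max_carbon_g_per_kwh=0.0,
                    min_total_trust=150.0):
    """Accept a proposed dispatch block only if it stays inside the network's
    loss threshold, meets the zero-carbon bound, and was built by validators
    whose summed trust clears the quorum."""
    loss_pct = 100.0 * (proposal["kwh_sent"] - proposal["kwh_delivered"]) / proposal["kwh_sent"]
    return (loss_pct <= max_loss_pct
            and proposal["carbon_g_per_kwh"] <= max_carbon_g_per_kwh
            and sum(proposal["validator_trust"]) >= min_total_trust)

proposal = {
    "kwh_sent": 910.0, "kwh_delivered": 896.0,  # roughly 1.5% line loss
    "carbon_g_per_kwh": 0.0,                    # ZGHG sources only
    "validator_trust": [100.0, 60.0],           # summed-trust quorum
}
print(consensus_check(proposal))  # True: dispatch approved and settled
```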
In conclusion and in brief, using the framework of decentralized networking, control, and computation enabled by blockchain technologies and hybrid peer-to-peer IoT storage, it is possible to model the operation and control of a simple yet fully renewable microgrid data center environment. Instead of designing green data centers living downstream of high-emissions utility power supplies, it is finally possible for engineers and entrepreneurial interests to create a system designed for green energy, one that demands it.

7. Future Papers

Second Paper, First Series: "Server Subletting to Save the World: How Automated Server Resource Trading Works and Why Green Data Centers Need It"
Third Paper, First Series: "Taking Back the Grid: Integration between Zero-Emission Microgrids and Data Center Tenants"
Fourth Paper, First Series: "Microgrid 2.0: How the Decentralized Tomorrow Will Create Microgrids of Data Centers"
"Decentralized Energy as a Service: A Green Future Without Macrogrids"
Emerging Technology Round-Up: "A Who's Who of Zero Carbon Data Center Innovators"

8. Further Reading

Y. Sang, U. Cali, M. Kuzlu, M. Pipattanasomporn, C. Lima and S. Chen, "IEEE SA Blockchain in Energy Standardization Framework: Grid and Prosumer Use Cases," 2020 IEEE Power & Energy Society General Meeting (PESGM), Montreal, QC, Canada, 2020, pp. 1-5, doi: 10.1109/PESGM41954.2020.9281709.

R. G.S. and M. Dakshayini, "Block-chain Implementation of Letter of Credit based Trading system in Supply Chain Domain," 2020 International Conference on Mainstreaming Block Chain Implementation (ICOMBI), Bengaluru, India, 2020, pp. 1-5, doi: 10.23919/ICOMBI48604.2020.9203485.

V. Naidu, K. Mudliar, A. Naik and P. Bhavathankar, "A Fully Observable Supply Chain Management System Using Block Chain and IOT," 2018 3rd International Conference for Convergence in Technology (I2CT), Pune, India, 2018, pp. 1-4, doi: 10.1109/I2CT.2018.8529725.

R. S. Kadadevaramth, D. Sharath, B. Ravishankar and P. Mohan Kumar, "A Review and development of research framework on Technological Adoption of Blockchain and IoT in Supply Chain Network Optimization," 2020 International Conference on Mainstreaming Block Chain Implementation (ICOMBI), Bengaluru, India, 2020, pp. 1-8, doi: 10.23919/ICOMBI48604.2020.9203339.

M. Nakasumi, "Information Sharing for Supply Chain Management Based on Block Chain Technology," 2017 IEEE 19th Conference on Business Informatics (CBI), Thessaloniki, Greece, 2017, pp. 140-149, doi: 10.1109/CBI.2017.56.

Z. Mahmood and J. Vacius, "Privacy-Preserving Block-chain Framework Based on Ring Signatures (RSs) and Zero-Knowledge Proofs (ZKPs)," 2020 International Conference on Innovation and Intelligence for Informatics, Computing and Technologies (3ICT), Sakheer, Bahrain, 2020, pp. 1-6, doi: 10.1109/3ICT51146.2020.9312014.

A. Judmayer, N. Stifter, K. Krombholz, E. Weippl, E. Bertino, and R. Sandhu, Blocks and Chains: Introduction to Bitcoin, Cryptocurrencies, and Their Consensus Mechanisms, Morgan & Claypool, 2017, doi: 10.2200/S00773ED1V01Y201704SPT020.

S. E. Chang and Y. Chen, "When Blockchain Meets Supply Chain: A Systematic Literature Review on Current Development and Potential Applications," in IEEE Access, vol. 8, pp. 62478-62494, 2020, doi: 10.1109/ACCESS.2020.2983601.

9. Works Cited

H. Carruthers and T. Casavant, "What is a 'Carbon Neutral' Building?", Commission for Environmental Cooperation, 2013, pp. 1–6.
http://www3.cec.org/islandora-gb/islandora/object/islandora:1112/datastream/OBJ-EN/view

ISG, "Does your enterprise need blockchain?", Information Services Group, 2021. https://isg-one.com/consulting/blockchain

L. Lamport, R. Shostak, and M. Pease, "The Byzantine Generals Problem," ACM Transactions on Programming Languages and Systems, vol. 4, no. 3, pp. 382–401, 1982. https://lamport.azurewebsites.net/pubs/byz.pdf

M. Conoscenti, A. Vetrò, and J. C. De Martin, "Blockchain for the Internet of Things: A systematic literature review," 2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA), pp. 1–6. https://ieeexplore.ieee.org/document/7945805

M. Huillet, "Bitcoin Will Follow Ethereum And Move to Proof-of-Stake, Says Bitcoin Suisse Founder," Cointelegraph, 14-Apr-2020. https://cointelegraph.com/news/bitcoin-will-follow-ethereum-and-move-to-proof-of-stake-says-bitcoin-suisse-founder

"Tomorrow's Decarbonized and Decentralized Power Market," Grid Evolution.

"What are zk-SNARKs?," Zcash, 09-Sep-2019. [Online]. Available: https://z.cash/technology/zksnarks/. [Accessed: 19-Apr-2021].

About the Author

Matthew J. Karashik, EIT, is an Electrical Engineer at EYP MCF, Part of Ramboll. Matthew's experience includes engineering design, drafting, standards review, NFPA-70 (National Electrical Code) compliance, and the development of single line diagrams, electrical floor plans, grounding plans, grounding diagrams, and electrical details. Matthew has also performed applicable local code reviews, site visits, surveys, and site assessments. His experience includes energy efficiency and cost savings analysis of emerging technologies for data centers and power utilities. Matthew has performed numerous site evaluations and demand-side management studies using energy modeling and monitoring software tools, and his experience includes design using Revit/BIM360. Matthew holds a Bachelor of Science in Electrical, Electronics and Communications Engineering from New York University.

Download PDF

  • | eypmcf

Join Steve Shapiro at 7x24 on Tuesday, October 23rd at the JW Marriott Desert Ridge, Phoenix, Arizona, from 11:30 AM to 12:30 PM. Panel: Battery Technology. This panel discussion will present available UPS chemical battery technologies, including the latest innovations in lead-acid valve-regulated and flooded cells, as well as lithium-ion and future battery technologies. Reuse of electric vehicle batteries, battery recycling technology, and the future of battery ecology will be discussed. We will review the pros and cons of each technology and its applications, as well as total cost of ownership case studies for the various technologies. Panelists have extensive experience in the design, application, and ongoing use of batteries for UPS support. Each panelist will provide a short presentation on the technology; case studies and TCOs will be reviewed, and then the floor will be open to questions. Register Today
