In the past several years, biotechnology in the food industry has been the central theme of numerous scientific reviews, national and international symposia, and several major reference works (Earle, 1984; Harlander and Labuza, 1986; Jarvis and Holmes, 1982; Kirsop, 1985; Knorr, 1987; Knorr and Sinskey, 1985; Moo-Young et al., 1985; Rehm and Reed, 1983). Reports of significant advances have come from the full spectrum of biotechnology research and development resources: universities and institutes as well as genetic “biotiques” and large food corporations. Important business alliances continue to be formed on a worldwide scale, linking advanced biotechnology research skills with large producers and marketers of food products, principally in the United States, Japan, the United Kingdom, and Europe. These alliances include Amgen/Kodak, CalBio/American Home Products, Genentech/Lilly, Genentech/Corning (Genencor), Interferon/Anheuser-Busch, Molecular Genetics/Upjohn, Synergen/Procter & Gamble, American Cyanamid/Pioneer Hi-Bred, DuPont/Advanced Genetic Sciences, W. R. Grace/Cetus (Agricetus), Hoechst/Harvard, Monsanto/Genentech, Monsanto/Washington University/Rockefeller University, Roche/Agrigenetics, Beatrice/Ingene, Campbell/DNA Plant Technology (DNAP), Campbell/Calgene, CPC/Enzyme Biosystems, Kraft/DNAP, General Foods/DNAP, Kellogg/Agrigenetics, Heinz/ARCO, McCormick/Native Plants, Inc., Molson/Allelix, RJR-Nabisco/Escagen, and Seagram/Biotechnica. Corporate boards and strategic planning groups of major food companies now understand the language of biotechnology and can perceive its utility and value; this has been the case with their corporate research departments for years.
One thing is clear: The excitement and enthusiasm for biotechnology so characteristic of the pharmaceutical and medical areas in the early 1980s have now begun to hit the food industry with increasing force, and this momentum will likely establish this industry as the largest commercial arena for biotechnology. Companies involved include Archer Daniels Midland, American Home Products, Beatrice, Campbell, Cargill, Corn Products Company, Coors, Chr. Hansen’s Laboratory, Firmenich, General Foods, Heinz, Hunt-Wesson, Kraft, Labatt, McCormick, Nestlé, Pillsbury, Perdue, Procter & Gamble, Ralston, RJR-Nabisco, Staley, Unilever, and Universal Foods.
At least three important factors are responsible for this. First, in pharmaceuticals, the feasibility of the biotechnology promise has been established and the commercial reduction to practice (i.e., commercial application) is in place in the marketplace. This was achieved by using many of the same technical concepts and strategies currently envisioned for food industry applications. Second, key advancements in technology continue to be made, principally in molecular genetics, cell technologies, computer-aided protein engineering, bioreactor design, and biosensor/diagnostic technology. These advancements have substantially redefined the technical skills base and broadened the potential applications of biotechnology to foods. Third, within the food industry, reports of successful new applications of biotechnology (e.g., those reported here) add confidence to the prediction that biotechnology may well be the next key source of competitive leverage at the corporate and international levels, and may be the most important single technical consideration in consolidation strategies.
The following paragraphs are a review of new applications of biotechnology in each of the following food-related areas: enzymes, including the processing of cheese; fermentation, including brewing and wine making; agricultural raw materials (e.g., crop plants, meat, poultry, fish) with improved functionality; and plant cell bioreactors for food ingredient production.
Beverage production is among the oldest and quantitatively most significant applications of biotechnology methods, based on the use of microorganisms and enzymes. Manufacturing processes employed in beverage production, originally largely empirical, have become a sector of growing economic importance in the food industry. Pasteur’s work represented the starting point for technological evolution in this field, and over the last hundred years progress in scientifically based research has been intense. This scientific and technological evolution is the direct result of the encounter between various disciplines (chemistry, biology, engineering, etc.). Beverage production now exploits all the various features of first- and second-generation biotechnology: screening and selective improvement of microorganisms; their mutation; their use in genetic engineering methods; fermentation control; control of enzymatic processes, including in industrial plants; use of soluble enzymes and immobilized-enzyme reactors; development of waste treatment processes; and so on. Research developments involving the use of biotechnology to improve yields, solve quality-related problems, and stimulate innovation are of particular and growing interest as far as production is concerned. Indeed, quality is the final result of the regulation of microbiological and enzymatic processes, and innovation is a consequence of improved knowledge of useful fermentations and the availability of new ingredients.
National University of Singapore researchers have pioneered a new water-based air-conditioning system that cools air to as low as 18°C (64.4°F) without using energy-intensive compressors and environmentally harmful chemical refrigerants.
This disruptive type of technology could potentially replace the century-old air-cooling principle that is still being used in our modern-day air-conditioners. Suitable for both indoor and outdoor use, the novel system is portable and it can also be customized for all types of weather conditions.
NUS Engineering researchers developed a novel air-cooling technology that could redefine the future of air-conditioning.
Led by Associate Professor Ernest Chua from the Department of Mechanical Engineering at NUS Faculty of Engineering, the team’s novel air-conditioning system is cost-effective to produce, and it is also more eco-friendly and sustainable. The system consumes about 40 percent less electricity than current compressor-based air-conditioners used in homes and commercial buildings. This translates into more than a 40 percent reduction in carbon emissions. In addition, it adopts a water-based cooling technology instead of chemical refrigerants such as chlorofluorocarbons and hydrochlorofluorocarbons, thus making it safer and more environmentally friendly.
Adding another feather to its eco-friendliness cap, the novel system generates potable drinking water while it cools the ambient air.
Associate Prof Chua said, “For buildings located in the tropics, more than 40 percent of the building’s energy consumption is attributed to air-conditioning. We expect this rate to increase dramatically, adding an extra punch to global warming. First invented by Willis Carrier in 1902, vapor compression air-conditioning is the most widely used air-conditioning technology today. This approach is very energy-intensive and environmentally harmful. In contrast, our novel membrane and water-based cooling technology is very eco-friendly – it can provide cool and dry air without using a compressor and chemical refrigerants. This is a new starting point for the next generation of air-conditioners, and our technology has immense potential to disrupt how air-conditioning has traditionally been provided.”
Current air-conditioning systems require a large amount of energy to remove moisture and to cool the dehumidified air. By developing two systems to perform these two processes separately, the NUS Engineering team can better control each process and hence achieve greater energy efficiency.
The novel air-conditioning system first uses an innovative membrane technology – a paper-like material – to remove moisture from humid outdoor air. The dehumidified air is then cooled via a dew-point cooling system that uses water as the cooling medium instead of harmful chemical refrigerants. Unlike vapor compression air-conditioners, the novel system does not release hot air to the environment. Instead, it discharges a cool air stream that is less humid than the ambient air, avoiding the micro-climate effect. About 12 to 15 liters of potable drinking water can also be harvested after operating the air-conditioning system for a day.
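The quoted water yield is plausible from basic psychrometrics. As a rough sketch, the condensate harvested per day is the airflow times the moisture removed per kilogram of air; the operating point below (airflow rate and humidity ratios) is an illustrative assumption, not a figure from the NUS team:

```python
# Rough psychrometric estimate of daily condensate from dehumidification.
# All operating-point numbers are illustrative assumptions.

SECONDS_PER_DAY = 24 * 3600

def daily_water_harvest_liters(airflow_kg_s, humidity_in, humidity_out):
    """Water removed per day when moist air is dried from humidity_in to
    humidity_out (both in kg of water per kg of dry air)."""
    water_kg_per_day = airflow_kg_s * (humidity_in - humidity_out) * SECONDS_PER_DAY
    return water_kg_per_day  # 1 kg of water is about 1 liter

# Assumed operating point: 0.05 kg/s of process air dried by 3 g/kg
liters = daily_water_harvest_liters(0.05, 0.018, 0.015)
print(f"{liters:.1f} L/day")  # ~13 L/day, within the 12-15 L range reported
```

With these assumed values the estimate lands inside the 12 to 15 liter range the article cites.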
Associate Prof Chua explained, “Our cooling technology can be easily tailored for all types of weather conditions, from humid climate in the tropics to arid climate in the deserts. While it can be used for indoor living and commercial spaces, it can also be easily scaled up to provide air-conditioning for clusters of buildings in an energy-efficient manner. This novel technology is also highly suitable for confined spaces such as bomb shelters or bunkers, where removing moisture from the air is critical for human comfort, as well as for sustainable operation of delicate equipment in areas such as field hospitals, armored personnel carriers, and operation decks of navy ships as well as aircraft.”
The research team is currently refining the design of the air-conditioning system to further improve its user-friendliness. The NUS researchers are also working to incorporate smart features such as pre-programmed thermal settings based on human occupancy and real-time tracking of its energy efficiency. The team hopes to work with industry partners to commercialize the technology.
Looks and sounds good. But that discharge temperature looks barely adequate for the developed world’s jaded consumers. Drying the air will make a huge difference; cooling the circulating air far enough to condense out the water vapor is a big part of the A/C energy cost. So it looks like a solid halfway first step.
The approach also takes a sophisticated look at A/C as it’s done now. Going to two steps with such impressive results is sure to cause an engineering rethink for current designs marketed in the developed world. Is it a revolution or disruptive technology? Almost.
Where new installations with capital cost decisions are tight and running expense is a concern, this technology should find a warm reception.
When artificial intelligence is brought up in conversation, the classic image of a robot versus a human emerges, somewhat of an us-versus-them mentality. But artificial intelligence works at its best when machine learning, natural language processing, and robotics are viewed as partners to the human workforce. Enter augmented intelligence, which sits at the nexus between artificial intelligence and humans, and revolves around technology helping people complete their work more efficiently, allowing them to focus more on high-value “human-only” activities.
Today’s utilities are faced with multiple market disruptions including the proliferation of distributed energy sources, evolving regulatory and policy changes, the increased adoption of energy efficiency products and programs, changing consumer behaviors, and an imperative to modernize their technologies and processes. Faced with these disruptions, utility executives can leverage innovative approaches such as augmented intelligence to position themselves for success.
Exploring the ‘art of the possible’ with machine learning and natural language processing
Capital Budget Planning
Utilities make investments in new equipment by upgrading existing assets, such as transformers and substations, and performing preventative maintenance, all with the goal of improving reliability of service. Current approaches to budget planning require utility engineers to analyze hundreds of different parameters from dozens of different data sources to identify the capital investments that will deliver the most improvement to reliability.
Utilizing outage and other operational data, maintenance history, asset types, and load patterns, as well as new nontraditional variables such as DER adoption and energy efficiency program efficacy, augmented intelligence can more systematically and consistently define the most effective ways to deploy capital in modernizing or upgrading the electric network. By analyzing patterns of prior investments that have improved grid reliability in the past, and factoring in diminishing returns, unsupervised machine learning could reveal the most impactful near-term capital investments in the electric grid to the human workforce.
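As a minimal sketch of the unsupervised-learning idea above, a tiny k-means clustering can group assets by reliability-related features so that the high-outage, heavily loaded cluster surfaces as the natural investment shortlist. The asset names and feature values here are invented for illustration; a real pipeline would draw on actual outage, maintenance, load, and DER data:

```python
# Toy k-means over hypothetical grid assets (all values invented).
import random

# Each asset: (outages in last 5 yrs, age in years, peak load as % of rating)
assets = {
    "substation_A":  (1, 12, 60),
    "substation_B":  (9, 41, 95),
    "feeder_C":      (8, 38, 90),
    "feeder_D":      (2, 10, 55),
    "transformer_E": (7, 44, 88),
    "transformer_F": (0, 6, 50),
}

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = [tuple(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        # Assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep empty clusters put)
        centers = [tuple(sum(d) / len(c) for d in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def nearest_center(p, centers):
    return min(range(len(centers)),
               key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))

centers = kmeans(list(assets.values()), k=2)
for name, features in assets.items():
    print(name, "-> cluster", nearest_center(features, centers))
```

With these made-up features, the old, overloaded, outage-prone assets separate cleanly from the healthy ones, which is the kind of pattern a planner would then weigh against diminishing returns.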
Damage Assessment and Restoration Activity
After a storm or fire event, utilities typically send crews out to assess the damage to their assets (poles, wires, transformers) and then they prioritize restoration and repair activities. The crews visually review the storm damage and phone in their reports to the storm base, and then they drive to the next area and repeat this activity manually/visually over many days.
Based on a combination of aerial, drone and satellite imagery, augmented intelligence could be used to analyze images to more quickly and reliably assess damage to the grid after a storm (downed wires, damaged poles and transformers), ultimately helping the utility to determine the priority of repairs that would restore power in the fastest, most effective manner.
Electrification of Transport
Electric vehicles (EVs) create a challenge for utilities in planning for load growth. Right now, engineers are trying to predict with disparate datasets and manual analysis which customers on their network will likely purchase an EV, as well as the timeframe. This information is used to predict the network load and help determine if upgrades are needed. For example, if 20 customers are all on the same circuit and they all purchase a Tesla at around the same time, the load requirements on that circuit would increase significantly, potentially causing outages.
Based on consumer behavior (e.g., social media feeds), car dealership sales, and local government incentives, augmented intelligence could be used to identify adoption patterns for consumer and fleet (e.g., UPS, FedEx) electric vehicles. By predicting when a utility customer might purchase an EV, a utility could plan its grid investments ahead of the increased load requirements, thus ensuring no overloads (outages) on a given circuit.
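The circuit-level arithmetic behind this concern is simple to sketch. The capacity, base load, charger rating, and coincidence factor below are illustrative assumptions, not figures from any utility:

```python
# Flag circuits where predicted EV adoption would exceed capacity.
# All numbers are made-up illustrative values.

CHARGER_KW = 7.2  # assumed Level 2 home charger draw

def circuit_overloaded(capacity_kw, base_peak_kw, predicted_evs, coincidence=0.6):
    """coincidence: assumed fraction of EVs charging at the circuit peak."""
    ev_load = predicted_evs * CHARGER_KW * coincidence
    return base_peak_kw + ev_load > capacity_kw

# 20 customers on one circuit all predicted to buy EVs around the same time
print(circuit_overloaded(capacity_kw=500, base_peak_kw=430, predicted_evs=20))  # True
```

Even at a modest 60% coincidence, twenty 7.2 kW chargers add roughly 86 kW, enough to tip a circuit that was running near its rating.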
When a utility’s call center receives calls from customers, a Customer Service Representative (CSR) may not know the customer’s identity, their past interactions with the utility, or what they might be calling about at that moment. This leads to a poor customer service interaction, and a CSR who is unable to be proactive about how they serve the customer or offer them additional or new services.
Utilizing predictive data analytics, the CSR would be able to identify the customer when they call, and predict why they are calling based on past interactions or specific customer characteristics. This in turn would provide the CSR with the best information to provide a more positive customer experience. The CSR would also be well-positioned to offer additional targeted services, such as energy efficiency services.
Some utility customers (e.g., millennials) prefer not to make a phone call at all. Recent advances in natural language processing (NLP) provide customers with the opportunity to use a chatbot instead of talking to a human being. NLP technology is used to evaluate the text entered in the chat field to automatically answer simple questions (e.g., “How much is my bill this month?”) or direct the customer to a human being for more complex questions (e.g., “Why is my bill high this month?”). In an outage, NLP can also be used to analyze social media content from Twitter, Facebook, and Instagram to improve utility operations situational awareness, which will ultimately allow the utility to provide informed updates to customers.
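The triage the article describes, where simple questions are answered automatically and complex ones are escalated, can be sketched with a toy router. The patterns and stub answers are invented for illustration; production chatbots use trained NLP models rather than keyword rules:

```python
# Toy chatbot triage: answer simple account questions, escalate the rest.
# Patterns and responses are hypothetical stubs.

SIMPLE_INTENTS = {
    "how much is my bill": "Your balance this month is $84.20.",  # stub answer
    "when is my bill due": "Your payment is due on the 15th.",    # stub answer
}

def route(message):
    text = message.lower().strip("?! .")
    for pattern, answer in SIMPLE_INTENTS.items():
        if pattern in text:
            return ("bot", answer)
    return ("human", "Transferring you to a representative.")

print(route("How much is my bill this month?"))  # handled by the bot
print(route("Why is my bill high this month?"))  # escalated to a human
```

The two example questions mirror the ones in the text: the first matches a known simple intent, the second does not and is handed to a human.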
Virtual assistants, such as Amazon’s Alexa, also stand to transform the customer experience, allowing the customer to be able to ask questions or take actions based on connections with smart home devices or utility systems. For example, “What’s my usage so far this month?”, “What’s my solar contribution to my bill so far this month?”, “Pay my bill” or “Which appliance uses the most electricity in my home?” Taking it a step further, virtual assistants could also develop new skills, providing them with the ability to assist in certain actions (e.g. running the washing machine at the least expensive time of day).
The use of robotics to streamline organizational processes
There are several applications of robotics that utilities could adopt, all of them providing a combination of cost savings, reduced operational expenditures, and better customer service.
Hazardous Vegetation Detection
Vegetation near power lines can cause power outages during severe weather events. Identifying and managing vegetation is a costly and on-going battle for many utilities – when to trim, where to trim and how often.
Autonomous drones could sense and follow power line routes and automatically identify and mark overgrown points along the route (live video feeds to the cloud, coupled with machine learning). The drone would sense overgrowth and automatically fly a 360-degree pattern around the area at different altitudes to provide a full view of the extent and type of obstruction. This would provide vegetation crews with specific GPS locations of problem areas so they can systematically plan and manage their vegetation removal actions.
Further, these drones can identify diseased or damaged trees outside of the utility’s typical trim zones for targeted removal, helping ensure that they do not fall onto the power lines and cause outages during a storm.
Utilities are required to inspect their transmission and distribution power assets on a regular basis for damage, potential issues and environmental impacts.
Autonomous drones could be used to perform automated visual inspections of power assets including substations, transformers, poles, towers, etc. After setting an initial starting point, the drone would sense the asset type (e.g., pole-mounted transformer) and then fly a predetermined flight sequence to fully capture a real-time multi-angle view of the asset. In addition to high-definition video, drones can be fitted with thermal, inductive, X-ray, or magnetic inspection devices for other types of non-destructive analysis. This can be particularly helpful in the future at night or in less-than-ideal conditions (e.g., high wind) too hazardous for humans.
Right now, there are significant hurdles, particularly in the United States, as Federal Aviation Administration (FAA) regulations limit drone operations to “visual line of sight,” meaning that the pilot or an observer must always be able to see the drone while in operation. However, progress is being made to modify the FAA regulations.
Construction & Repairs
When power poles and lines are damaged by storms or accidents, repair crews are dispatched to assess and repair damage by fixing or replacing poles, transformers, and insulators. Autonomous vehicles could be used to deliver tools and replacement parts to specific job sites more cost-effectively and quickly, and to have construction trucks loaded before the start of a shift so utility crews can minimize loading times and maximize construction time. The automated loading drones can draw on information generated by the autonomous damage assessment drones to determine which tools and spare parts are required to restore service, further speeding restoration activities.
The road to augmented intelligence
The utility landscape has changed dramatically over the past several years, and as such, utilities have shifted their focus to digitizing existing business processes, meaning that they are layering new technology onto existing processes. However, to become a successful Next Generation Energy company, the path forward must include the integration of augmented intelligence into business processes, enabling them to truly innovate with technology.
Residential energy storage deployments hit a record in the first quarter, according to the latest U.S. Energy Storage Monitor report from GTM Research and the Energy Storage Association.
There was as much grid-connected residential storage deployed in the first quarter, 36 MWh, as was deployed in the previous three quarters, according to the report.
Much of the increase in residential energy storage deployments can be attributed to changing policies in California and Hawaii, which together accounted for 74% of the residential deployments in the quarter, according to the report.
Both California and Hawaii have made changes to their solar programs over the past few years that resulted in reduced net metering compensation, which consequently increases incentives for energy storage.
Hawaii, for example, capped its consumer grid supply program and placed an export moratorium on its consumer self-supply program. Both programs were put in place as alternatives to the state’s net metering program, which was canceled in 2015.
California is transitioning to time of use (TOU) rates, causing greater customer demand for storage because it will give them greater control over their electricity bills, a senior analyst on energy storage at GTM, Brett Simon, told Utility Dive via email.
The combination of California’s TOU rates and the state’s Self Generation Incentive Program (SGIP) makes solar-plus-storage almost competitive with solar-only, based on 2018 assumptions, Simon said, adding that solar-plus-storage will be the superior choice for homeowners on TOU rates within a few years.
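The economics behind that prediction come down to the peak/off-peak price spread under TOU rates: storage lets a solar home move midday surplus into the expensive evening window. The rates, daily shifted energy, and round-trip efficiency below are illustrative assumptions, not SGIP or California tariff values:

```python
# Back-of-the-envelope value of shifting solar energy to the TOU peak.
# All rates and quantities are illustrative assumptions.

OFF_PEAK = 0.18  # $/kWh, midday (when solar is exporting)
ON_PEAK  = 0.40  # $/kWh, evening peak

def annual_storage_value(kwh_shifted_per_day, round_trip_eff=0.9):
    """Value of moving daily solar surplus from off-peak to on-peak pricing."""
    daily = kwh_shifted_per_day * (ON_PEAK * round_trip_eff - OFF_PEAK)
    return daily * 365

print(f"${annual_storage_value(10):.0f}/year")  # ~$657/year for 10 kWh/day
```

A few hundred dollars a year against a falling battery price is the gap that, per GTM's analysis, closes within a few years.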
Solar-plus-storage is, in fact, emerging as a key driver in the growth of the energy storage market. “More than 95% of all residential storage is solar-paired,” Simon said. It is also an important factor in the commercial and industrial and the front-of-the-meter markets (FTM), he said.
Residential storage systems accounted for 28% of the megawatt hours deployed in the first quarter, but the residential segment was second behind the FTM segment, which accounted for 51% of deployments. The non-residential segment, meanwhile, accounted for only 21% of the deployed MWh.
Overall, 126 MWh of energy storage were deployed in the first quarter, a 26% increase from fourth quarter 2017, but a 46% decline year-over-year. But GTM analysts say fourth quarter 2017 was an anomaly because that is when many of the large energy storage projects needed to offset the gas leaks at Aliso Canyon in Southern California came online.
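The stated percentages imply the underlying quarterly figures, which can be backed out as a quick sanity check. These are derived values, not numbers GTM reported directly:

```python
# Back out the quarterly deployments implied by the article's percentages.

q1_2018 = 126  # MWh deployed, stated in the article

# "a 26% increase from fourth quarter 2017" implies:
q4_2017 = q1_2018 / 1.26
# "a 46% decline year-over-year" implies:
q1_2017 = q1_2018 / (1 - 0.46)

print(f"Q4 2017 ~ {q4_2017:.0f} MWh, Q1 2017 ~ {q1_2017:.0f} MWh")
```

The implied ~233 MWh in Q1 2017 is consistent with the article's note that the Aliso Canyon-driven quarters were anomalously large.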
GTM Research sees the energy storage market approaching the 1 GW mark in 2019 and crossing it in 2020. By 2023, behind-the-meter (BTM) energy storage deployments will account for 47% of the annual market, GTM estimates.
Experts say wind-plus-storage could become viable with longer-duration batteries.
Figure 1. Wind energy: wind turbines.
Energy storage is storming the U.S. power industry, driving changes from the bulk system level to the customer level.
At the system level, February’s Federal Energy Regulatory Commission (FERC) Order 841 required bulk system operators to design new rules to integrate storage. April’s Order 845 rewrote the rules on interconnection, opening new opportunities for storage.
At the customer level, state lawmakers and regulators in 32 states considered 57 policy actions on deployment, targets, studies and rebates for energy storage in Q1 of this year.
Until about 2015, utility executives and renewable energy skeptics regarded cost-competitive battery energy storage as unachievable. Today, it is a central focus of the power sector.
Figure 2. Wind energy: wind turbines.
“We always called it the ‘holy grail’ because we knew too much wind and solar would break the grid without energy storage, but we thought it would always be too expensive,” former Southern California Edison VP Jim Kelly said at a 2015 conference.
As the stack of services storage can offer, including capacity and resilience, became understood, it went from a holy grail to the hottest topic in energy. Lithium-ion batteries have captured the most attention, but there are several other fast-advancing battery chemistries and storage technologies, according to the November 2017 Levelized Cost of Storage Analysis from Lazard.
Battery storage’s cost is highly variable because of the range of technologies and applications, but the much-discussed cost plummet is real. The overall estimated cost fell 32% in 2015 and 2016, according to the 2017 GTM Research utility-scale storage report. That will slow over the next five years, GTM reported. But battery storage is — in certain places and applications — on its way to cost-competitiveness.
Industry insiders expect a cumulative drop in the levelized cost, depending on location and application, as much as 36% between 2018 and 2022, according to Lazard.
Figure 3. Wind energy: wind turbines.
Those prices are leading many renewable energy developers to pair their solar projects with energy storage. In California, the nation’s dominant solar energy state, the Solar Energy Industries Association chapter formally revised its name to the California Solar and Storage Association this February.
The early success of solar-plus-storage is leading some developers to consider combining batteries with large wind projects, but researchers and industry officials say storage technologies will need to develop further before the paired resource is competitive.
The current capital cost for storage, which was $1,000/kWh in 2012, is estimated as low as $200/kWh, according to a study from the National Renewable Energy Laboratory (NREL) previewed May 10 at the American Wind Energy Association’s (AWEA) annual national wind energy conference. The study foresees the capital cost for battery storage falling to $100/kWh — but does not conclude it will be cost competitive for wind.
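The quoted cost figures imply a steep compound rate of decline. The endpoint year below is an assumption for illustration (taking "current" to mean roughly six years after 2012):

```python
# Implied compound annual decline in storage capital cost,
# from $1,000/kWh in 2012 to ~$200/kWh today (endpoint year assumed).

def annual_decline(start_cost, end_cost, years):
    """Compound annual rate of change (negative means falling cost)."""
    return (end_cost / start_cost) ** (1 / years) - 1

rate = annual_decline(1000, 200, years=6)  # 2012 -> 2018
print(f"{rate:.1%} per year")  # about -23.5% per year
```

Sustaining anything close to that rate is what would carry costs toward the $100/kWh level the NREL study foresees.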
Minimizing spikes in energy demand is one of the top priorities for many utilities around the globe. With market and regulatory pressure to deliver enough power for the smallest window of peak demand, utilities have prioritized customer demand reduction to mitigate costly investments in infrastructure expansion. To encourage reduced energy consumption from commercial and industrial customers during these peak times, utilities offer special pricing packages, create tiered time-of-use pricing, implement demand response programs, and incentivize energy-shifting and efficiency-enabling technologies.
Alternative energy generation, particularly wind and solar, has been growing at an increasing rate and has helped alleviate some of the demand challenges. An obvious benefit of solar is that hours of solar collection often overlap with high-demand periods. But one limitation of these alternative energy sources is the intermittent nature of the power supply, which then requires a back-up power source. Secondly, when enough of these renewables are online alleviating the typical peak demand periods, they inadvertently cause a new peak demand period on the grid when the wind stops blowing or the sun goes down (see California’s growing “Duck Curve”).
Figure 1, Wind Power Stations
The Achilles’ Heel of renewable energy sources and potentially the solution to the broader problem of demand spikes is energy storage. By reliably storing enough energy during low consumption periods and deploying that energy during high consumption periods, power generation levels could be flattened and predictable. Enter the massive influx of investment in storage technologies, particularly batteries.
Even as battery technology advances, there are a handful of energy-intensive industries that batteries are unable to economically power for extended periods of time. One of these is the cold storage industry comprised of frozen food warehouses that range from 10,000 to 200,000+ square feet and grocery and restaurant walk-in freezers from 100 to 1,000+ square feet. There are over 2,200 cold storage frozen warehouses plus almost 40,000 supermarkets and over 620,000 restaurants with walk-in freezers in the US alone.
Maintaining stable sub-freezing temperatures around the clock requires massive amounts of energy to run these facilities’ refrigeration equipment. Because of this, cold storage operations have the highest energy demand per cubic foot of any industrial category and are the third-highest commercial energy-consuming category, consuming over $30 billion USD of power every year.
Despite the huge amount of potential demand reduction and load shift the industry represents, and the millions of square feet of rooftop space, the industry has been relatively slow to adopt solar, partly because these facilities must run refrigeration equipment 24/7 to protect their inventory of frozen food. Also, despite the massive amount of roof space, most of these energy-intensive facilities cannot be fully powered by solar without additional real estate and solar panels.
Figure 2, Solar Panel Stations
Now there is an alternative storage technology, Thermal Energy Storage (TES) from Viking Cold Solutions, that can hold enough energy to provide up to 12 hours of load shed for typical freezer scenarios, a four times longer discharge duration than lithium-ion storage. TES systems alone save cold storage operators 20 to 35% on energy costs. When paired with solar generation, TES can save even more and address alternative energy’s shifts in demand (Duck Curve) and intermittent nature (Case Study: Pairing PV & TES in a CA Cold Storage Facility – Saved 39% on annual energy costs).
This alternative storage technology makes investment in solar or other renewables much more attractive.
The behind-the-meter TES systems consist of phase change material (PCM), intelligent controls, and 24/7 remote monitoring and reporting software that installs easily and runs in tandem with the cold storage facility’s existing refrigeration, control, and racking systems.
Figure 3, Thermal Storage Unit
During solar-generating hours, the facility’s existing refrigeration equipment runs and freezes the non-toxic, environmentally safe PCM. During non-solar hours, operators can shut down refrigeration systems for extended periods of time. During these prolonged periods, the PCM absorbs and stores 85% of all heat infiltration in the freezer, maintains temperature stability to ensure food quality and safety, and reduces energy consumption by up to 90%.
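The load-shift arithmetic behind this scheme is straightforward to sketch: run refrigeration (and freeze the PCM) during cheap or solar hours, then coast through the utility peak. The rates, refrigeration load, shed window, and re-freeze loss below are made-up illustrative numbers, not Viking Cold Solutions specifications:

```python
# Illustrative daily savings from shifting refrigeration load off-peak.
# All rates and loads are hypothetical.

PEAK_RATE = 0.30      # $/kWh during the utility's peak window
OFF_PEAK_RATE = 0.12  # $/kWh when solar is generating

def daily_savings(refrig_kw, shed_hours, extra_melt_loss=0.15):
    """Savings from shifting shed_hours of refrigeration load off-peak.
    extra_melt_loss: assumed extra off-peak energy to re-freeze the PCM."""
    shifted_kwh = refrig_kw * shed_hours
    peak_cost_avoided = shifted_kwh * PEAK_RATE
    off_peak_cost = shifted_kwh * (1 + extra_melt_loss) * OFF_PEAK_RATE
    return peak_cost_avoided - off_peak_cost

# 200 kW of refrigeration shed through a 6-hour peak window
print(f"${daily_savings(200, 6):.2f}/day")
```

Compounded over a year, a spread of this size is the kind of 20 to 35% energy-cost reduction the article attributes to TES.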
Globally there is an opportunity to shift thousands of megawatts of cold storage demand with thermal energy storage, and many progressive utilities have already tested, approved, and included thermal energy storage systems in their incentive programs. Pairing the alternative storage technology TES (installed price under $1,000 per kilowatt) with alternative energy generation can eliminate millions more dollars of utility infrastructure costs across the globe.
Explore chilled beam systems, geothermal, night-sky cooling, and thermal energy storage as some new HVAC possibilities.
HVAC is a necessity in every building. Whether you’re heating, cooling, or ventilating, you need to have systems in place that will do the job efficiently, effectively, and comfortably. But, there’s not one system that will do the job for every facility. As technology develops, green becomes status quo, and people demand healthier, more comfortable places to live, work, and play, you may want to investigate other HVAC options.
If your HVAC systems haven’t caused you big problems or complaints, why should you give them a second thought? The 2-20-200 rule is a good reason. Consider this: In a typical U.S. commercial building …
Roughly $2 per square foot is spent each year on energy.
The cost of construction, amortized over 25 years, equals about $20 per square foot per year.
Overhead costs, salaries, etc. to keep occupants in the building total around $200 per square foot per year.
When you’re able to increase occupant productivity by just 1 percent (the equivalent of about 5 minutes per day per person) via better indoor air quality or better temperature control, that increase pays for your building’s energy use for an entire year. And if you’re able to increase occupant productivity by 10 percent, you could pay for your building’s amortized construction costs.
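The 2-20-200 arithmetic above can be checked in a few lines. The per-square-foot figures are the article’s rough annual numbers for a typical U.S. commercial building; the building size is an illustrative assumption.

```python
# Sketch of the 2-20-200 rule: annual $/ft² for energy, amortized
# construction, and occupant costs in a typical U.S. commercial building.

ENERGY = 2.0          # $/ft² per year on energy
CONSTRUCTION = 20.0   # $/ft² per year, construction amortized over 25 years
PEOPLE = 200.0        # $/ft² per year in salaries and overhead

area_ft2 = 50_000     # hypothetical building size

# A 1% productivity gain is worth 1% of the people cost...
gain_1pct = 0.01 * PEOPLE * area_ft2
energy_bill = ENERGY * area_ft2
print(gain_1pct >= energy_bill)   # → True: covers the annual energy bill

# ...and a 10% gain matches the amortized construction cost.
gain_10pct = 0.10 * PEOPLE * area_ft2
construction = CONSTRUCTION * area_ft2
print(gain_10pct >= construction)  # → True
```

Because occupant costs dwarf energy costs by roughly 100 to 1, even small comfort-driven productivity gains outweigh large percentage savings on the energy bill.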
If your HVAC system isn’t cutting it anymore – or even if it is – check out some of these alternatives: chilled beam systems, geothermal, night-sky cooling, and thermal energy storage systems.
An example of night-sky cooling in use:
Located on Stanford University’s campus in Stanford, CA, The Carnegie Institute for Global Ecology, built in 2004, makes use of a night-sky cooling system. Chilled water is supplied at between 55 and 60 degrees F. using only 0.04 kW/ton, and using approximately half as much water as a traditional water-cooled chiller.
An example of thermal energy storage in use:
The Ronald Reagan Washington National Airport in Arlington, VA, uses thermal energy storage; its system has been in operation since 1996. It utilizes a 2.2 million-gallon, above-ground chilled water storage tank that stores 40-degree F. chilled water to supply cooling to various airport locations. The stored water is also available for fire suppression.
An example of chilled beam systems in use:
In late 2009, the Constitution Center in Washington, D.C., was the first large-scale building in the United States to use chilled beam technology. Chilled beams were chosen to overcome duct-distribution issues and offer comfort for tenants. The chilled beams serve the primary office area (floors 2 through 10). Conventional systems serve the entrance, lobbies, conference areas, etc.
An example of geothermal in use:
For the Killbear Provincial Park Visitor Centre, which opened in June 2006 and is located in Nobel, ON, the nearby Georgian Bay waters provide a cost-effective, energy-saving source for heating and cooling. A closed loop of condenser water using food-grade glycol sits 15 feet below the water’s surface. The loop feeds 11 high-efficiency heat pumps inside the building and eliminates the need for a supplementary boiler or cooling tower.
Plug loads are an important contributor to a building’s peak air-conditioning load and energy consumption, and over time they have become a larger percentage of a building’s overall heat gain. Two factors are responsible for this increased significance. First, computer use has continued to increase, resulting in a much larger number of personal computers in buildings. Second, advances in building techniques have improved envelopes and reduced that portion of the load/energy use.
As building envelope and system technology have improved, computer technology has advanced. Lower-energy notebook computers and LCD monitors are more widespread, while at the same time computing power, peripheral use, and the use of enhanced or multiple monitors have increased.
The industry is moving toward a much greater focus on low energy and even net zero energy buildings. Part of this industry movement results in a need to design based on the lowest possible plug load assumptions. Every project or application is different, and engineers are often asked to apply their judgment for plug load assumptions without the benefit of all the needed or available information. This article is intended to provide data and recommendations that will allow engineers to make these important decisions on just how low they can go in terms of plug load assumptions for a specific project or application.
Computer use in buildings started to become prevalent and began to be a consideration in building air-conditioning loads in the 1980s. At that time, loads were generally calculated based on the nameplate data on the computers and other electronic equipment. In the late 1980s, computer use began to become more widespread. In this era, the authors observed that it was not uncommon for air-conditioning systems to be sized for plug loads of 3 to 5 W/ft² (32 to 54 W/m²).
A 1991 ASHRAE Journal article (1) reported on research done in Finland where the actual load from computers and other equipment was measured and compared to nameplate data. This relatively modest effort revealed that the measured load of this equipment was typically only 20% to 30% of the nameplate data. This revelation provided the first hard evidence of this issue and changed the way that plug loads were considered in load and energy calculations.
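The practical consequence of the Finnish finding is a simple derating: multiply nameplate wattage by roughly 0.2 to 0.3 before summing equipment loads. A small sketch, using hypothetical equipment wattages:

```python
# Illustrative derating of nameplate data, per the finding that measured
# equipment load was typically only 20% to 30% of nameplate.
# The equipment wattages below are hypothetical examples.

NAMEPLATE_FACTOR = 0.25  # mid-range of the reported 20%-30%

nameplate_watts = {"desktop_pc": 300, "monitor": 80, "printer": 500}

estimated_load = {item: w * NAMEPLATE_FACTOR
                  for item, w in nameplate_watts.items()}
print(estimated_load)
# → {'desktop_pc': 75.0, 'monitor': 20.0, 'printer': 125.0}
```

Sizing to raw nameplate would have over-predicted this hypothetical cluster’s heat gain by a factor of four, which is exactly the over-design the later W/ft² guidance was meant to correct.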
Next, Wilkins and McGaffin in 1994 (2) reported measurements in five U.S. General Services Administration (GSA) office buildings in the Washington, D.C. area. Their work included informal measurement of a large sample of individual equipment items, as well as measurements at panels that served computer equipment within a given area of the building. The results provided further verification of the nameplate discrepancy of individual equipment, provided measured data for the determination of the load factor of an area and, for the first time, allowed the load diversity factor to be derived based on measured data.
ASHRAE followed up this informal research with the execution of two research projects: RP-822 (1996), “Test Method for Measuring the Heat Gain and Radiant/Convective Split from Equipment in Buildings” and RP-1055 (1999), “Measurement of Heat Gain and Radiant/Convective Split from Equipment in Buildings.” (3,4) The experimental results corroborated the earlier findings but did so in a more formal and traceable manner. All of this work led to a widely referenced ASHRAE Journal article in 2000. (5) This data was incorporated into the ASHRAE Handbook–Fundamentals starting in 1997 and then significantly expanded in the 2001 edition.
Current ASHRAE Handbook Data
Data presented in the 2009 ASHRAE Handbook–Fundamentals, Chapter 18, Nonresidential Cooling and Heating Load Calculations, relative to office equipment loads (or plug loads) is based largely on the research and publications cited previously. Data is presented in a number of formats and breakdowns but can be best summarized by considering Table 11 in Chapter 18, which states that a “medium density” office building will have a plug load of 1 W/ft² (10.8 W/m²). It is believed that this value of 1 W/ft² (10.8 W/m²) has been widely used in the industry since the mid-1990s. The authors believe this value is, and always has been, somewhat conservative when used in office environments. However, its use has proven to provide an appropriate balance to cover potential future loads while not introducing significant over-design in building systems.
Trends to Date
This approach and recommended load factor have remained roughly the same since the mid-1990s. Computer technology has certainly changed since that time, but until recently there was no need to change the use of 1 W/ft². In fact, a comprehensive study conducted by Koomey et al. (6) and reported in December 1995 predicted that plug loads in office buildings would decrease modestly through at least 2010 (Figure 1).
This decrease was expected to be due to technical advances that would result from ENERGY STAR and other related programs. Their predictions were based on energy use, not peak load values, but it is believed that these trends would be similar and, in fact, history has proven this to be the case. Office equipment has become more efficient, and overall plug load intensity has decreased.
Current State of Plug Loads
Predicting the future of the information technology (IT) world is not attempted here, but recent studies, as described later, have provided new data that gives a clearer picture of the current state of plug loads. It is important to understand the current state of the equipment that contributes to plug loads and how this equipment now in use differs from equipment in use at the time 1 W/ft² (10.8 W/m²) was found to be an appropriate load factor. Hosni and Beck have recently completed the latest ASHRAE-sponsored research project RP-1482, “Update to Measurements of Office Equipment Heat Gain Data,” (7) where measurements were obtained from an up-to-date sample of office equipment including notebook computers (laptops) and flat screen (LCD) monitors.
Table 1 shows how this most recent data compares to previously referenced work, as well as some other data from Kawamoto (8) and Moorefield (9), for some of the most common office equipment. Desktop computers show a trend toward increasing peak energy, but the sleep mode has become much more effective over time. This increase in desktop computer peak wattage has been offset by the lower power consumption of LCD monitors. Using a notebook computer instead of a desktop computer and an LCD monitor results in a fairly significant reduction in peak wattage. It is clear that notebook computers’ popularity, flexibility, cost, and computational power have expanded their use, and this is expected to result in a meaningful reduction in plug load power levels.
In the work by Moorefield, four modes of operation for computers and monitors were considered that included active, idle, sleep, and standby. These categories were determined by statistical grouping of the measured data and not based on internal operation of the equipment. Power consumption during what was referred to as sleep and standby was generally low and corresponded to the findings for what was called either idle or sleep mode by Hosni in RP-1482.
For the purposes of load calculation discussions, consideration of only two modes, active and sleep, seems appropriate. Moorefield also reported periods of notebook computer operation with power levels as high as 75 W, but no explanation of what contributed to this was provided.
Notebook computers may introduce a secondary peak condition that could occur when the internal battery is charging while at the same time the notebook is in full use. This condition may increase the power consumption by as much as 10 W during the charging period according to informal measurements by Hosni. The data shown in Table 1 represent the peak for fully charged battery condition.
Recognizing that computers and monitors represent the largest share of plug loads in most conventional office buildings, the power reduction during idle operation will certainly have a significant impact on energy consumption and may be having an impact on the peak cooling load as well. The question to be answered in terms of peak air-conditioning load is how much of the equipment is in sleep mode at the time of the peak air-conditioning load. To answer this, the diversity factor must be considered.
Diversity factors were not presented in the work by Moorefield, but the data that were collected did allow for an approximation of diversity factor to be calculated. Energy use data were collected from groups of individual items of equipment and then these groups of data were averaged. Diversity is then the average measured energy divided by the peak measured energy. In this case, the peak measured represents the average of the peaks for all equipment of the given type that was in the study.
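The diversity-factor definition just given (average measured energy divided by peak measured energy for a group of equipment) reduces to a few lines of code. The hourly readings below are hypothetical, chosen only to mimic a workday profile; they are not data from the Moorefield study.

```python
# Minimal sketch of the diversity-factor calculation described above:
# average measured power of an equipment group divided by its peak.
# The sample readings are hypothetical, not measured data.

def diversity_factor(readings):
    """readings: per-interval average power draws (W) for an equipment group."""
    avg = sum(readings) / len(readings)
    peak = max(readings)
    return avg / peak

# Hypothetical 24 hourly readings for a bank of desktop computers (W):
# low draw overnight (sleep), high draw during the workday.
hourly_w = [20] * 8 + [90] * 9 + [40] * 2 + [20] * 5
print(round(diversity_factor(hourly_w), 2))  # → 0.53
```

A diversity factor near 0.5 means that, averaged over the period, the group draws only about half its peak, which is why applying diversity pulls design load factors well below the sum of individual peaks.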
Figures 3 and 4 present detailed diversity curves for desktop computers and laptop docking stations. A single week of data representing the higher end of usage was chosen and presented. For the purposes of the table and the development of load factors discussed later, the diversity factor for laptop docking stations was assumed to be the same as for desktop computers.
Impact on Load Factors
The most useful form of this data for engineers performing load calculations is a load factor such as watts per square foot (W/ft²). This new equipment and diversity factor data were coupled with some general assumptions and used to generate the updated load factor data presented in Table 3. It can be seen that if 100% notebook use is assumed and typical diversity factors are applied, plug loads could realistically be as low as 0.25 W/ft² (2.7 W/m²). Even light and medium use of desktop computers results in plug loads below the traditional 1 W/ft² (10.8 W/m²). More extreme scenarios can be considered, such as the case where all workstations use two full-sized monitors, which can result in a plug load of 1 W/ft² or more. The most extreme scenario considered assumes very dense equipment use with no diversity at all and results in a plug load factor of 2 W/ft² (21.5 W/m²).
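The conversion from equipment wattage and diversity to a W/ft² load factor can be sketched as below. The per-item wattages, diversity factors, and workstation density are assumptions chosen to reproduce the bounding values quoted above; they are not the actual Table 3 inputs.

```python
# Hedged sketch of turning equipment wattage and diversity into a load
# factor (W/ft²), in the spirit of the Table 3 scenarios. The wattages,
# diversity factors, and workstation density are assumed, not from Table 3.

def load_factor_w_per_ft2(watts_per_workstation, diversity, ft2_per_workstation):
    return watts_per_workstation * diversity / ft2_per_workstation

# 100% notebook scenario: ~35 W docked notebook plus a ~15 W peripheral
# allowance, 0.5 diversity, one workstation per 100 ft².
notebook = load_factor_w_per_ft2(35 + 15, 0.5, 100)
print(notebook)  # → 0.25 W/ft²

# Extreme scenario: desktop (~120 W) with two full-size monitors (~60 W)
# and peripherals (~20 W), no diversity, same density.
extreme = load_factor_w_per_ft2(120 + 60 + 20, 1.0, 100)
print(extreme)  # → 2.0 W/ft²
```

The two results bracket the range discussed in the text, 0.25 W/ft² for an all-notebook office with typical diversity up to 2 W/ft² for dense desktop use with no diversity.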
The load factors presented are based on hypothetical conditions with the best available data applied to them. Each includes a factor to account for some level of peripheral equipment, such as speakers. This analysis suggests that there will be many cases where the design plug load can be assumed to be below the traditional value of 1 W/ft² (10.8 W/m²) without risk of under-designing the system. There are many factors that could impact the actual plug load for a specific space or building, and careful consideration must be given to the assumptions used for any given condition.