Plug loads are an important contributor to a building’s peak air-conditioning load and energy consumption, and over time they have become a larger percentage of a building’s overall heat gain. Two factors are responsible for this increased significance. First, computer use has continued to increase, resulting in a much larger number of personal computers in buildings. Second, advances in building techniques have improved envelopes and reduced that portion of the load and energy use.
As building envelope and system technology have improved, computer technology has advanced. Lower-energy notebook computers and LCD monitors are more widespread, while at the same time computing power, peripheral use, and the use of enhanced or multiple monitors have increased.
The industry is moving toward a much greater focus on low energy and even net zero energy buildings. Part of this industry movement results in a need to design based on the lowest possible plug load assumptions. Every project or application is different, and engineers are often asked to apply their judgment for plug load assumptions without the benefit of all the needed or available information. This article is intended to provide data and recommendations that will allow engineers to make these important decisions on just how low they can go in terms of plug load assumptions for a specific project or application.



Historical Perspective

Computer use in buildings started to become prevalent and began to be a consideration in building air-conditioning loads in the 1980s. At that time, loads were generally calculated based on the nameplate data on the computers and other electronic equipment. In the late 1980s, computer use began to become more widespread. In this era, the authors observed that it was not uncommon for air-conditioning systems to be sized for plug loads of 3 to 5 W/ft² (32 to 54 W/m²).
A 1991 ASHRAE Journal article (1) reported on research done in Finland where the actual load from computers and other equipment was measured and compared to nameplate data. This relatively modest effort revealed that the measured load of this equipment was typically only 20% to 30% of the nameplate data. This revelation provided the first hard evidence of this issue and changed the way that plug loads were considered in load and energy calculations.
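To illustrate the practical effect of this finding, here is a quick sketch of how nameplate data might be derated; the 0.25 ratio below is simply the midpoint of the reported 20% to 30% range, an assumption for illustration rather than a value from the study:

```python
def estimated_heat_gain(nameplate_watts, ratio=0.25):
    """Estimate actual equipment heat gain from nameplate power.

    The default ratio of 0.25 is an assumption: the midpoint of the
    20%-30% measured-to-nameplate range reported in the Finnish study.
    """
    return nameplate_watts * ratio

# Example: a computer with a 300 W nameplate rating
print(estimated_heat_gain(300))  # 75.0
```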
Next, Wilkins and McGaffin in 1994 (2) reported measurements in five U.S. General Services Administration (GSA) office buildings in the Washington, D.C. area. Their work included informal measurement of a large sample of individual equipment items, as well as measurements at panels that served computer equipment within a given area of the building. The results provided further verification of the nameplate discrepancy of individual equipment, provided measured data for the determination of the load factor of an area and, for the first time, allowed the load diversity factor to be derived based on measured data.
ASHRAE followed up this informal research with the execution of two research projects: RP-822 (1996), “Test Method for Measuring the Heat Gain and Radiant/Convective Split from Equipment in Buildings” and RP-1055 (1999), “Measurement of Heat Gain and Radiant/Convective Split from Equipment in Buildings.” (3,4) The experimental results corroborated the earlier findings but did so in a more formal and traceable manner. All of this work led to a widely referenced ASHRAE Journal article in 2000. (5) This data was incorporated into the ASHRAE Handbook–Fundamentals starting in 1997 and then significantly expanded in the 2001 edition.


Current ASHRAE Handbook Data

Data presented in the 2009 ASHRAE Handbook–Fundamentals, Chapter 18, Nonresidential Cooling and Heating Load Calculations, relative to office equipment loads (or plug loads) is based largely on the research and publications cited previously. Data is presented in a number of formats and breakdowns but can be best summarized by considering Table 11 in Chapter 18, which states that a “medium density” office building will have a plug load of 1 W/ft² (10.8 W/m²). It is believed that this value has been widely used in the industry since the mid-1990s. The authors believe this value is, and always has been, somewhat conservative when used in office environments. However, its use has proven to provide an appropriate balance, covering potential future loads while not introducing significant over-design in building systems.


Trends to Date

This approach and recommended load factor have remained roughly the same since the mid-1990s. Computer technology has certainly changed since that time but until recently, there was no need to change the use of 1 W/ft². In fact, a comprehensive study was conducted by Koomey et al. (6) and reported in December 1995, which predicted that plug loads in office buildings would decrease modestly through at least 2010 (Figure 1).
This decrease was expected to be due to technical advances that would result from ENERGY STAR and other related programs. Their predictions were based on energy use, not peak load values, but it is believed that these trends would be similar and, in fact, history has proven this to be the case. Office equipment has become more efficient, and overall plug load intensity has decreased.



Current State of Plug Loads

Predicting the future of the information technology (IT) world is not attempted here, but recent studies, as described later, have provided new data that gives a clearer picture of the current state of plug loads. It is important to understand the current state of the equipment that contributes to plug loads and how this equipment now in use differs from equipment in use at the time 1 W/ft² (10.8 W/m²) was found to be an appropriate load factor. Hosni and Beck have recently completed the latest ASHRAE-sponsored research project RP-1482, “Update to Measurements of Office Equipment Heat Gain Data,” (7) where measurements were obtained from an up-to-date sample of office equipment including notebook computers (laptops) and flat screen (LCD) monitors.
Table 1 shows how this most recent data compares to the previously referenced work, as well as other data from Kawamoto (8) and Moorefield (9), for some of the most common office equipment. Desktop computers show a trend toward increasing peak energy, but sleep mode has become much more effective over time. The increase in desktop computer peak wattage has been offset by the lower power consumption of LCD monitors. Using a notebook computer instead of a desktop computer and an LCD monitor results in a fairly significant reduction in peak wattage. It is clear that notebook computers’ popularity, flexibility, cost, and computational power have expanded their use, which is expected to result in a meaningful reduction in plug load power levels.
In the work by Moorefield, four modes of operation for computers and monitors were considered that included active, idle, sleep, and standby. These categories were determined by statistical grouping of the measured data and not based on internal operation of the equipment. Power consumption during what was referred to as sleep and standby was generally low and corresponded to the findings for what was called either idle or sleep mode by Hosni in RP-1482.
For the purposes of load calculation discussions, consideration of only two modes, active and sleep, seems appropriate. Moorefield also reported periods of notebook computer operation with power levels as high as 75 W, but no explanation of what contributed to this was provided.
Notebook computers may introduce a secondary peak condition that could occur when the internal battery is charging while at the same time the notebook is in full use. This condition may increase the power consumption by as much as 10 W during the charging period according to informal measurements by Hosni. The data shown in Table 1 represent the peak for fully charged battery condition.
Recognizing that computers and monitors represent the largest share of plug loads in most conventional office buildings, the power reduction during idle operation will certainly have a significant impact on energy consumption and may be affecting the peak cooling load as well. The question in terms of peak air-conditioning load is how much of the equipment is in sleep mode at the time of the peak. To answer this, the diversity factor must be considered.



Diversity Factors

Diversity factors were not presented in the work by Moorefield, but the data that were collected did allow for an approximation of diversity factor to be calculated. Energy use data were collected from groups of individual items of equipment and then these groups of data were averaged. Diversity is then the average measured energy divided by the peak measured energy. In this case, the peak measured represents the average of the peaks for all equipment of the given type that was in the study.
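The calculation described above can be sketched in a few lines; the readings below are hypothetical, purely for illustration:

```python
def diversity_factor(power_samples):
    """Diversity factor = average measured power / peak measured power."""
    average = sum(power_samples) / len(power_samples)
    return average / max(power_samples)

# Hypothetical wattage readings for one group of desktop computers over a day
readings = [60, 65, 70, 68, 40, 10, 5, 5]
print(round(diversity_factor(readings), 2))  # 0.58
```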
Figures 3 and 4 present detailed diversity curves for desktop computers and laptop docking stations. A single week of data representing the higher end of usage was chosen and presented. For the purposes of the table and the development of load factors discussed later, the diversity factor for laptop docking stations was assumed to be the same as for desktop computers.



Impact on Load Factors

The most useful form of this data for engineers performing load calculations is a load factor such as watts per square foot (W/ft²). The new equipment and diversity factor data were coupled with some general assumptions and used to generate the updated load factor data presented in Table 3. If 100% notebook use is assumed and typical diversity factors are applied, plug loads could realistically be as low as 0.25 W/ft² (2.7 W/m²). Even light and medium use of desktop computers results in plug loads below the traditional 1 W/ft² (10.8 W/m²). More extreme scenarios can be considered, such as the case where all workstations use two full-sized monitors, which can result in a plug load of 1 W/ft² or more. The most extreme scenario considered assumes very dense equipment use with no diversity at all and results in a plug load factor of 2 W/ft² (21.5 W/m²).
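A minimal sketch of how such a load factor might be assembled from equipment wattages and diversity factors follows; the workstation count, wattages, and diversity values below are illustrative assumptions, not the values behind Table 3:

```python
def plug_load_factor(equipment_watts, diversity_factors, area_ft2):
    """Load factor in W/ft2: diversified equipment wattage over floor area."""
    diversified = sum(w * d for w, d in zip(equipment_watts, diversity_factors))
    return diversified / area_ft2

# Hypothetical: ten 40 W notebook workstations at 0.6 diversity in 1,000 ft2
watts = [40] * 10
diversity = [0.6] * 10
print(plug_load_factor(watts, diversity, 1000))  # 0.24
```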
The load factors presented are based on hypothetical conditions with the best available data applied to them. Each includes a factor to account for some level of peripheral equipment such as speakers. This analysis suggests that there will be many cases where the design plug load can be assumed to be below the traditional value of 1 W/ft² (10.8 W/m²) without risk of under-designing the system. Many factors could affect the actual plug load for a specific space or building, and careful consideration must be given to the assumptions used for any given condition.




Nearly all building projects today have a goal of using the minimum energy possible and having a small overall carbon footprint. Computer equipment used in offices has been a part of the overall trend toward energy use reduction. It is now possible to realistically conceive of an office space that could have a peak plug load as low as 0.25 W/ft² (2.7 W/m²). When this lower plug load level is coupled with lower lighting power density targets, the result is that building internal loads are reduced to very low levels.
Using a very low plug load assumption in an attempt to design ultra-low energy buildings comes with some risk. The occupant at the time of design may have fully embraced a low-energy office mentality, but in the future, there may be new occupants with less dedication or equipment with different energy consumption. However, the new data suggests that the time has come to reexamine the use of 1 W/ft² (10.8 W/m²) as the default industry norm.


As revealed by the Urban Green Council in its yearly energy and water use reports, NYC buildings use the largest share of their energy consumption for space heating and domestic hot water, where natural gas is the most common heat source. Building cooling also ranks among the largest loads, with lighting being the only electrical system with a higher consumption. Thus, HVAC engineering services can provide high value for property management companies, making heating and cooling systems more reliable and energy efficient. Consider that upgrading these systems not only saves energy; it also improves indoor conditions for occupants.
To get an idea of the benefits you can get from HVAC consulting services, consider all the positive attributes of a well-designed and well-maintained HVAC installation:
  • It keeps indoor temperature and humidity within a range that is healthy for humans, making building interiors suitable for long-term occupancy.
  • It maintains indoor air quality (IAQ), ensuring a constant supply of fresh air and preventing the buildup of pollutants such as volatile organic compounds (VOCs).
  • It achieves the two benefits described above at an optimal energy cost. Although HVAC expenses can be expected to be high in a large building, there is no need for them to be excessively high.
Keep in mind that NYC also has a very demanding Energy Conservation Code, and compliance is mandatory for projects above certain size thresholds outlined in the code. Working with qualified HVAC engineers is the best way to ensure your property is code-compliant.
If you are considering a major renovation, it represents a great chance to improve your HVAC installations. Under normal conditions, a deep HVAC retrofit can be highly disruptive for building operation. However, the building interior is taken apart anyway during a major renovation, so why not use the chance to improve key systems like HVAC?

HVAC Engineering Guarantees the Right Temperature and Humidity

We don’t think about temperature and humidity when they are adequate, but when they fall outside the range considered suitable for humans, we quickly feel discomfort. Poor temperature and humidity control can even lead to health issues, such as respiratory system diseases and skin irritation. Harmful organisms such as mold, dust mites and bacteria thrive in humid environments, adding to the health risk.
In many cases, especially older buildings, heating and cooling systems are sized based on “rules of thumb” instead of detailed HVAC engineering. There is a common misconception that oversizing equipment is good practice, but actually it leads to poor humidity control and fluctuating temperature. Oversized equipment also tends to run in shorter cycles, accelerating component wear and increasing maintenance expenses.
If HVAC equipment is properly installed, temperature and humidity stay within a range suitable for humans, and without drastic fluctuation. This improves health and comfort, and in business settings it also leads to increased productivity.

HVAC Engineering Improves Indoor Air Quality

Outdoor air is generally believed to be more polluted than indoor air, but research by the US Environmental Protection Agency indicates otherwise. On average, indoor air is 2 to 5 times more polluted than outdoor air, and this applies to urban and rural settings alike.
HVAC engineering not only guarantees adequate temperature and humidity; it also ensures that the building is properly ventilated. Consider that the NYC Mechanical Code establishes minimum airflow requirements depending on the type of building and number of occupants, and the HVAC system must make sure that the specified airflow is delivered.


When dealing with HVAC, ventilation cannot be addressed separately from heating and cooling equipment, since system components are constantly interacting with each other. In HVAC engineering, a whole-system approach yields much better performance than addressing different building systems in isolation. It is also important to note that ventilation efficiency measures deliver significant heating and cooling savings: if there is less air to heat or cool, energy requirements are reduced.


HVAC Engineering Improves Energy Efficiency

NYC has some of the highest electricity rates in the USA, and many HVAC system components run with electricity, including fans and air-conditioning units. Therefore, it is in your best interest to ensure this equipment consumes as little energy as possible.
Space heating and domestic hot water systems normally rely on natural gas or heating oil, which are a less expensive heat source than electricity, but are also a source of emissions. Since NYC has an ambitious emissions reduction goal of 80% by 2050, buildings will eventually have to cut down their fossil fuel consumption.
HVAC consulting services can help you find trouble spots in your building systems, allowing you to detect and prioritize the most promising building upgrades. If you want to reduce the energy expenses of your building, a targeted approach normally yields a much higher return on each dollar spent, compared with prescriptive measures.

Final Recommendations

HVAC engineering services can help you detect opportunities to achieve significant savings, especially considering the high cost of energy in NYC. If your lighting installations have not been upgraded for a long time, also consider a lighting upgrade – you can achieve additional building cooling savings by reducing the heat output of lighting installations. In case you are considering a major renovation, it is also a good chance to improve the building envelope and achieve even higher HVAC savings.


It is important to identify when the NYC Energy Conservation Code is mandatory. It was first created through Local Law 85 of 2009, taking the NY State Energy Conservation Construction Code as a starting point, and introducing amendments that made the code more demanding for NYC. The code is subject to constant revision, and updated editions have been published in 2011, 2014 and 2016.
If you own a building or are planning a real estate project in New York City, you cannot overlook the Energy Conservation Code. The first and most important step is to determine whether your project is covered by the code, and the best recommendation is to ask a qualified engineering consulting firm. However, keep in mind that energy efficiency measures are beneficial even when they are optional, so you should consider them even if the energy code does not impose building upgrades in your case.
New constructions are always covered by the NYC energy code and compliance is mandatory. However, existing buildings must only be upgraded to meet the energy code under certain conditions, which are described in this article.

NYC Energy Code in Existing Buildings: Local Law 88

Normally, existing buildings are only subject to the NYC Energy Conservation Code when they undergo changes such as additions or renovations. However, there is one case where the code imposes upgrades regardless of planned modifications: when existing buildings are covered by Local Law 88 of 2009.
Like the NYC energy code, Local Law 88 is part of the Greener, Greater Buildings Plan, and it can be summarized as follows:
  • Individual buildings with at least 50,000 ft2 of floor space are covered.
  • Groups of 2 or more buildings with at least 100,000 ft2 are covered, if they are under the same tax lot or condominium ownership.
  • Lighting systems in covered buildings must be upgraded to meet the NYC Energy Conservation Code, and the deadline is January 1, 2025.
  • Building owners must also deploy sub-metering to track the electricity consumption of tenant spaces above certain size thresholds (explained in Local Law 88).
  • The following occupancy groups are exempt from the lighting upgrade even if they meet the conditions above: Residential Groups R-2 and R-3, and houses of worship under Assembly Group A-3.
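The coverage conditions above can be summarized in a simple check. This is a simplified sketch of the rules as stated here, not a substitute for reading Local Law 88 itself:

```python
def covered_by_ll88(floor_area_ft2, building_count=1, same_ownership=True,
                    exempt_occupancy=False):
    """Simplified Local Law 88 coverage check, per the summary above.

    - A single building is covered at 50,000 ft2 or more of floor space.
    - Groups of 2+ buildings under one tax lot or condominium ownership
      are covered at 100,000 ft2 or more combined.
    - Exempt occupancies (R-2, R-3, A-3 houses of worship) are not
      subject to the lighting upgrade.
    """
    if exempt_occupancy:
        return False
    if building_count == 1:
        return floor_area_ft2 >= 50_000
    return same_ownership and floor_area_ft2 >= 100_000

print(covered_by_ll88(60_000))                    # True
print(covered_by_ll88(80_000, building_count=2))  # False
```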
This is the only case where the NYC energy code imposes an upgrade for an existing building where no changes are planned. Otherwise, only buildings that undergo changes are affected.
Keep in mind that the NYC energy code has different requirements for residential and commercial buildings. If you must upgrade the lighting system to meet Local Law 88, make sure you are following the guidelines that apply for your type of building.
The residential version is less demanding, since it only imposes a minimum of 75% high-efficacy lamps.
On the other hand, the commercial version imposes maximum lighting power densities (watts per square foot) depending on the activities carried out in each space, while introducing automatic control requirements.
If a single building is split into residential and commercial areas, they must be addressed separately following the corresponding energy code requirements.
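A lighting power density comparison of the kind the commercial code requires can be sketched as follows; the 1.0 W/ft² allowance in the example is a hypothetical figure, since actual allowances depend on the space type and code edition:

```python
def lighting_power_density(total_lighting_watts, area_ft2):
    """Installed lighting power density (LPD) in watts per square foot."""
    return total_lighting_watts / area_ft2

def complies(total_lighting_watts, area_ft2, allowed_lpd):
    """True if the installed LPD is at or below the code allowance."""
    return lighting_power_density(total_lighting_watts, area_ft2) <= allowed_lpd

# Hypothetical office: 7,200 W of installed lighting in 8,000 ft2,
# checked against an assumed 1.0 W/ft2 allowance for that space type
print(lighting_power_density(7200, 8000))  # 0.9
print(complies(7200, 8000, 1.0))           # True
```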

NYC Energy Code Compliance for Building Modifications

If an existing building is not subject to a mandatory lighting upgrade by Local Law 88, there are four scenarios where the NYC Energy Conservation Code requires upgrades:
  • Additions: Projects that increase the conditioned floor space or height of a building.
  • Alterations: Defined by the energy code as constructions, retrofits or renovations that require a permit from the NYC Department of Buildings, excluding additions and repairs. The term also applies for mechanical, electrical and plumbing (MEP) projects that expand or modify the existing arrangement and require a permit.
  • Repairs: Reconstructions or renewals that are part of maintenance processes or are carried out to fix damage.
  • Change of occupancy: Any change in building usage that leads to its reclassification under a different occupancy group.
This list of conditions applies to both residential and commercial buildings. The main difference is that repairs subject to the energy code differ slightly between the two building types.
Repairs in Residential Buildings
1) Glass-only replacements in existing windows.
2) Roof repairs.
3) Lighting repairs when only the ballast or bulb is replaced in existing fixtures, without increasing lighting power.
Repairs in Commercial Buildings
Same as residential, plus two more conditions:
4) Air barriers are not considered roof repairs if the rest of the building envelope is unaltered.
5) Door replacements between conditioned spaces and the exterior count as repairs, but excluding vestibules and revolving doors.


If you ask engineering consultants, they will always recommend energy efficiency, especially considering the high electricity prices in NYC. However, there may be cases where the NYC Energy Conservation Code makes these upgrades mandatory. Like in any construction or renovation project, working with a qualified engineering firm from the start ensures code compliance and provides long-term benefits.
In the specific case of lighting upgrades to meet Local Law 88, consider that Local Law 26 demands sprinkler system installation for all office buildings at least 100 feet tall. If your property is subject to both laws, consider merging both projects to minimize disruption – both lighting upgrades and fire sprinkler installation involve removing portions of the ceiling.


Forecast Highlights


Global liquid fuels

North Sea Brent crude oil spot prices averaged $58 per barrel (b) in October, an increase of $1/b from the average in September. EIA forecasts Brent spot prices to average $53/b in 2017 and $56/b in 2018.
West Texas Intermediate (WTI) crude oil prices are forecast to average almost $5/b lower than Brent prices in 2018. After averaging $2/b lower than Brent prices through the first eight months of 2017, WTI prices averaged $6/b lower than Brent prices in September and October. The spread between Brent and WTI prices is expected to remain at this level through the first quarter of 2018 before narrowing to $4/b during the second half of 2018.
NYMEX contract values for February 2018 delivery that traded during the five-day period ending November 2 suggest that a range of $45/b to $67/b encompasses the market expectation for February WTI prices at the 95% confidence level.
EIA estimates U.S. crude oil production averaged 9.3 million barrels per day (b/d) in October, down 90,000 b/d from the September level. Crude oil production in the Gulf of Mexico averaged 1.4 million b/d in October, which was 260,000 b/d lower than the September level. The lower production reflected the effects of Hurricane Nate. At the time of publication, most oil production platforms in the Gulf of Mexico had returned to operation following the hurricane, and EIA forecasts overall U.S. crude oil production will continue to grow in the coming months. EIA forecasts total U.S. crude oil production to average 9.2 million b/d for all of 2017 and 9.9 million b/d in 2018, which would mark the highest annual average production, surpassing the previous record of 9.6 million b/d set in 1970.
U.S. regular gasoline retail prices averaged $2.51 per gallon (gal) in October, a decrease of 14 cents/gal from the average in September, which was the highest monthly average since July 2015. The September prices reflected the effects of market disruptions following hurricanes Harvey and Irma. EIA forecasts the average U.S. regular gasoline retail price will average $2.47/gal in November and $2.39/gal in December. EIA forecasts that U.S. regular gasoline retail prices will average $2.40/gal in 2017 and $2.45/gal in 2018.

Natural gas

U.S. dry natural gas production is forecast to average 73.4 billion cubic feet per day (Bcf/d) in 2017, a 0.6 Bcf/d increase from the 2016 level. Natural gas production in 2018 is forecast to be 5.5 Bcf/d higher than the 2017 level.
In October, the average Henry Hub natural gas spot price was $2.88 per million British thermal units (MMBtu), down 10 cents/MMBtu from the September level. Expected growth in natural gas exports and domestic natural gas consumption in 2018 contribute to the forecast Henry Hub natural gas spot price rising from an annual average of $3.01/MMBtu in 2017 to $3.10/MMBtu in 2018. NYMEX contract values for February 2018 delivery that traded during the five-day period ending November 2 suggest that a range of $2.08/MMBtu to $4.52/MMBtu encompasses the market expectation for February Henry Hub natural gas prices at the 95% confidence level.

Electricity, coal, renewables, and emissions

EIA expects the share of U.S. total utility-scale electricity generation from natural gas will fall from an average of 34% in 2016 to about 31% in 2017 as a result of higher natural gas prices and increased generation from renewables and coal. Coal’s forecast generation share rises from 30% last year to 31% in 2017. The projected annual generation shares for natural gas and coal in 2018 are 32% and 31%, respectively. Generation from renewable energy sources other than hydropower grows from 8% in 2016 to a forecast share of about 9% in 2017 and 10% in 2018. Generation from nuclear energy accounts for almost 20% of total generation in each year from 2016 through 2018.
Coal production for the first 10 months of 2017 is estimated to have been 656 million short tons (MMst), 59 MMst (10%) higher than production for the same period in 2016. Annual production is expected to be about 790 MMst in both 2017 and 2018.
Wind electricity generating capacity at the end of 2016 was 82 gigawatts (GW). EIA expects wind capacity additions in the forecast to bring total wind capacity to 88 GW by the end of 2017 and to 96 GW by the end of 2018.
Total utility-scale solar electricity generating capacity at the end of 2016 was 22 GW. EIA expects solar capacity additions in the forecast will bring total utility-scale solar capacity to 27 GW by the end of 2017 and to 31 GW by the end of 2018.
After declining by 1.6% in 2016, energy-related carbon dioxide (CO2) emissions are projected to decrease by 0.8% in 2017 and then to increase by 2.1% in 2018. Energy-related CO2 emissions are sensitive to changes in weather, economic growth, and energy prices.



Price Summary

[Table: Price Summary — WTI crude oil and Brent crude oil (dollars per barrel); regular gasoline, average pump price (dollars per gallon); diesel fuel, on-highway retail (dollars per gallon); heating oil (dollars per gallon); natural gas, U.S. residential average (dollars per thousand cubic feet); electricity, U.S. residential average (cents per kilowatthour).]











U.S. Electricity Summary

[Table: U.S. Electricity Summary — retail prices (cents per kilowatthour) for the residential, commercial, and industrial sectors; power generation fuel costs (dollars per million Btu) for natural gas, residual fuel oil, and distillate fuel oil; generation (billion kWh per day) from natural gas, conventional hydroelectric, and non-hydroelectric renewables, plus total generation; retail sales (billion kWh per day) by residential, commercial, and industrial sectors, plus total retail sales; primary assumptions (percent change from previous year) for real disposable personal income, manufacturing production index, cooling degree days, heating degree days, and number of households.]


Proper selection of centrifugal pumps is more important than ever. Getting it wrong can have drastic consequences for maintenance, reliability and efficiency. However, the selection process remains difficult for many users.
Pumping System Optimization, published by the Hydraulic Institute, reports an evaluation of 1,690 pumps at 20 process plants. The study discovered some alarming results. It found average pumping efficiency to be below 40%, and more than 10% of the pumps were less than 10% efficient. A major reason behind such poor numbers was improper pump selection.
These findings should be appreciated in the context of pump economics. The general rule is that a pump and motor combination will cost about $1 per day per horsepower of the motor. While energy costs vary by location, this is a good starting point for understanding the potential costs. For larger horsepower pumps running inefficiently, the wasted capital can be staggering.
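The rule of thumb above translates directly into a quick estimate; the $1 per horsepower per day figure is the article's approximation, and real costs vary with local rates and duty cycle:

```python
def annual_pump_cost(motor_hp, dollars_per_hp_per_day=1.0):
    """Apply the '$1 per day per horsepower' rule of thumb from the article."""
    return motor_hp * dollars_per_hp_per_day * 365

# A 50 hp pump-motor combination
print(annual_pump_cost(50))  # 18250.0
```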


But energy costs alone are seldom a reason for change, much less the transformation of an industry. Once pumps are installed and running, the energy costs can sometimes be out of sight and out of mind.
On top of that, there are many other costs in industrial facilities. Discovering the true cost of a pump is difficult when it is buried in an industrial energy bill alongside the high cost of heating, cooling, and running other equipment.
Even contracting out the selection process to a reputable engineering company brings no guarantee of success. Success takes a clear understanding of centrifugal pump design, the common pitfalls in the selection process, and the consequences of improper selection.
Centrifugal pump technology has been around for about 100 years without any revolutionary changes. Certainly, there are new alloys and coatings for casings and impellers, and efficiencies have increased. If anything, older designs are more robust than many seen today.
The pump application plays a critical role. A high-quality pump from a reputable manufacturer may perform poorly in certain systems. Even an expensive model made from titanium and designed to NASA specifications for a 30-year life cycle could be inadequate for certain applications. To fit the pump to the right application, it is necessary to dig into the basic operating points of centrifugal pumps.
Figure 1: The Components of a Centrifugal Pump
As the pump shaft spins, it turns the impeller inside the casing, which adds energy to the process fluid. The impeller acts as a cantilever, with a wear ring, seals, and bearings that keep everything in place and keep fluid from leaking out. The spinning impeller changes the incoming fluid’s direction, which can cause intense radial loads on the pump. The bearings not only reduce rolling friction but also support the pump shaft and absorb these radial loads.
All pumps have a design point where efficiency is maximized, known as the best efficiency point (BEP). This is where the pump runs the smoothest and radial forces are minimized. The farther away from the BEP, the higher the radial loads on the pump (Figure 2).


The pump generally will have a critical speed around 25% of BEP flow, where its natural frequency is reached and excessive vibration may occur. The pump could shake itself apart, wearing first through the wear ring, then the seals, and finally the bearings. This is easy to spot: the pump will vibrate and may begin leaking fluid well before the next scheduled maintenance period.
The BEP should be a factor in the selection of centrifugal pumps. Pump curves demonstrate the strong relationship between pump life, pump reliability, and where the pump operates on its curve.
The performance of individual pumps is a combination of design and operating conditions.
The pump’s performance data is provided in the form of pump curves, whose primary function is to communicate or define the relationship between the flow rate and total head for a pump.
Pump curves are provided by the manufacturer and show the operating characteristics of a specific pump type, size, and speed based on the results from standardized tests and test conditions. A healthy pump always maintains the defined relationship between head and flow.
The pump curve is required for proper pump selection, monitoring pump health and troubleshooting the entire piping system. It will ensure the pump is matched to the system requirements, indicate if the pump is not operating on the published curve, and pinpoint any problems and how to resolve them.
Every point on the pump curve matters. The BEP on the pump curve indicates the peak, or maximum, efficiency. To operate at the BEP, the system must control either the pressure at the outlet of the pump or the flow through the system to hold the pump at that operating point (indicated by the red arrow in Figure 2).
For example, if the system causes the discharge pressure to rise, the operating point will move up the curve to the left and flow will be reduced. If the system causes pressure to drop at the pump’s discharge, the operating point will move down and to the right. Moving to the left or right of the BEP increases forces on the impeller. These forces cause stresses that have a negative effect on the life and reliability of the pump.
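The operating point described above can be sketched numerically as the intersection of a pump curve and a system curve. Both curves below are assumed quadratics with invented coefficients, purely for illustration:

```python
# Minimal sketch of finding the operating point: the intersection of an
# assumed quadratic pump curve and system curve. All coefficients are
# illustrative, not from any manufacturer's data.
import math

H0, k = 60.0, 0.002      # pump: 60 m shutoff head, droop coefficient
HS, c = 20.0, 0.003      # system: 20 m static head, friction coefficient

def pump_head(q):
    """Head (m) the pump produces at flow q (m^3/h)."""
    return H0 - k * q**2

def system_head(q):
    """Head (m) the system demands at flow q (m^3/h)."""
    return HS + c * q**2

# Setting pump_head(q) == system_head(q) and solving gives the operating point:
q_op = math.sqrt((H0 - HS) / (k + c))   # flow where the two curves intersect
h_op = pump_head(q_op)
```

If the system's friction coefficient rises (a throttled valve, fouled piping), the intersection slides left up the pump curve, exactly the movement away from BEP the article warns about.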
If we overlay the expected life of the pump as a function of where it operates on its curve, we get a “Barringer Curve,” which shows the Mean Time Between Failures (MTBF) as a function of BEP flow rate. This curve was created by Barringer & Associates in a study of seal failures in centrifugal pumps.
Using the curve on page 2, the closer the pump operates to its BEP, the greater the MTBF. As the operating flow rate moves further to the left or right of the BEP, failures occur more frequently.


Similar Steam Turbine Design
DongFang has successfully developed a new generation of steam turbine whose parameters reach 35MPa/615℃/630℃/630℃. Heat consumption is lower than 6,800 kJ/kWh, and the generating efficiency of the power plant exceeds 50%. The goal is to reach 650℃ and, eventually, 700℃.
The article contains excerpts from the paper, “The technology development of high efficiency steam turbine” by Dong Fang engineers presented at the 2017 ASME Turbo Expo conference.
The 13th Five Year Plan has targeted improving the efficiency of existing coal power plants. Under the plan, the coal consumption of active coal-fired generating units to be upgraded must average lower than 310 g/kWh; the coal consumption of units of 600MW or more in active service (except air-cooled units) to be transformed must average lower than 300 g/kWh after 2020. For newly built power plants, the power supply coal consumption of a 1000MW turbine may not be higher than 282 g/kWh (wet cooling) or 299 g/kWh (air cooling), and for a 600MW unit not higher than 285 g/kWh (wet cooling) or 302 g/kWh (air cooling).
In the next ten years, the following units are expected to set the direction of steam turbine technology development: 620℃-level high-power double-reheat units; 620℃-level low-back-pressure units; 630℃-level steam turbine technology; 650℃-level steam turbine technology; and 700℃-level steam turbine technology.
By strengthening reheat, the efficiency of coal-fired units can be improved. The efficiency of a double-reheat unit is about 2% higher than that of a single-reheat unit.
After the 600℃ supercritical power generation technology matured, many countries started advanced ultra-supercritical research projects targeting 700℃. Ni-based materials are envisaged for 650℃; the 700℃ level still requires further material development.
The development of steam turbine technology has been closely related to materials, and the steam parameters of a unit largely depend on material development. DongFang has the capacity to cast its own cylinders and valves from the material ZG12Cr9Mo1Co1NiVNbNB. Up to now, DongFang has produced more than 200 valves and cylinders with a total weight of up to 1,140 tons. The heavy forgings use 9Cr-3Co-3W-B material, developed by JSW (Japan Steel Works) for 630℃.
For given parameters and boundary conditions, a better thermal system can give the unit higher efficiency. This unit adopts the T-turbine scheme proposed for the 700℃ power plant. The scheme greatly reduces the temperature of all the heat-recovery steam extractions as well as the manufacturing and piping costs, which also improves the safety and reliability of the unit.


The system brings the following advantages: the volume flow of reheat steam is reduced by 35%; the cost of the first/second reheat pipelines declines markedly, while safety and reliability increase; the high-pressure module of the steam turbine has no steam extraction, which increases efficiency and markedly decreases the stress on the rotor; and the system reduces the initial investment because no steam cooler is needed, the inlet temperature of the heater can be reduced significantly and, after the reduction of the flow rate, the geometric size of the high-pressure module of the steam turbine is decreased. Although the system has many advantages, it also becomes more complex. Control and adjustment under variable working conditions need further research and actual running tests.
Because of the high pressure, DongFang has developed a new type of cylinder, named the double-barrel cylinder. The upper and lower halves of the inner casing are held together by seven rings. With the rings hooping the cylinder, the inner casing has better air tightness. This structure has been successfully put into operation in many DongFang projects, such as Anyuan (31MPa) and Wanzhou (28MPa). Because the main steam pressure is increased to 35MPa, the geometric size of the cylinder is smaller and more secure. The inner cylinder bears most of the pressure, about 22MPa; the outer cylinder has to withstand only the exhaust pressure of 13MPa. The studs of the outer cylinder work at a lower temperature and lower stress level, which improves safety after long-term operation.
DongFang has been using radial-flow stator design to reduce the rotor temperature. With this design, the actual working temperature of the rotor is lower than 620℃.
The losses in the steam turbine flow path derive mainly from profile loss, end loss, steam leakage, and other effects, especially in the VHP (HP) module, where secondary loss and steam leakage loss account for the main part. DongFang has developed a new, highly aft-loaded stator profile. For stages with a large blade aspect ratio, DongFang has developed and applied a highly front-loaded stator profile. In such stages the secondary loss no longer dominates and the flow pattern is closer to a 2D flow, so it is reasonable to use a profile that has lower profile loss while carrying larger blade loading, which decreases the blade count.
In order to further improve the efficiency of the unit, the VHP and HP modules use a single-flow-path design. The single path has longer blades than the double path, and the leakage loss is smaller. The total internal efficiency of the steam turbine is expected to exceed 92%.
The LP module uses last-stage blades with a height of 1200mm. The blade has been applied in many 1000MW units, such as the Zhou Shan, Wan Zhou, and Liu Heng projects.
DongFang has also designed a new steel last-stage blade with a height of 1400mm; the LP module design has been completed. The exhaust area of the 1400mm blade is 14.5 m², making it suitable for 1000MW units with low back pressure or for higher-power units. The 1400mm steel blades will be used in some 1000MW low-back-pressure units, such as the Yun Cheng project.
The 700℃ unit still needs a great deal of research and development, so the 630℃ unit will become the main trend in the next five years. In particular, once the parameters are improved, the economic performance of the unit can be improved further by combining it with double-reheat technology. The DongFang 1000MW ultra-supercritical steam turbine has parameters of 35MPa/615℃/630℃/630℃. With double reheat, the heat rate of the steam turbine is lower than 6,800 kJ/kWh, and the generating efficiency is expected to exceed 50%. Being more efficient than the 620℃ unit, this unit, once proved safe and reliable, will become a good option in the future.
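The relationship between the quoted heat rate and the efficiency claim can be checked with one line of arithmetic. Note this gives the turbine-cycle efficiency implied by the heat rate; the plant's net generating efficiency will sit somewhat lower after boiler and auxiliary losses, consistent with the "exceeds 50%" figure:

```python
# Quick illustrative check: a heat rate below 6,800 kJ/kWh implies a
# turbine-cycle efficiency above 50%, since one kWh is 3,600 kJ.

KWH_KJ = 3600.0   # electrical output of one kWh, in kJ

def efficiency_from_heat_rate(heat_rate_kj_per_kwh):
    """Thermal efficiency implied by a turbine heat rate (kJ/kWh)."""
    return KWH_KJ / heat_rate_kj_per_kwh

eta = efficiency_from_heat_rate(6800)   # ≈ 0.529, i.e. about 53%
```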


Precision flow testing can play an important role in optimizing gas turbine performance. It can determine the performance and expected lifespan of individual components vital to maintaining a reliable and profitable gas turbine.
By combining this information with performance and emissions data, it is possible to suggest improvements to individual components that will benefit overall turbine operation.
Due to the complex nature of individual components and the need for strict tolerances, a small anomaly can develop into a more serious issue that can affect the service life of combustion and hot-gas-path parts. This can result in increased repairs to components, de-rating of the machine, or even taking the gas turbine offline.
Data for each component’s flow test is collected, recorded and reviewed. It is then used to assess the component’s condition and useful life.
Figure 1: Inspection of 501 FD2 dual fuel support housing in Sulzer’s 450 kV, 5-axis digital imaging X-ray booth

Optimizing combustion flows

Vacuum flow testing replicates the direction of flow that occurs on combustion components while in operation. It helps to verify and adjust the flow rate through combustion liners to ensure temperature uniformity (Figure 1).
Fuel enters and mixes with air in the primary mixing zone of the combustion liner. Supplied air is directed through mixing, dilution and the louver features of the liners. The position, size and effective flow area of these features affect the fuel to-air ratio, flame temperature, flame profile, and ultimately the performance and emissions of the turbine.
The effective flow area of combustion liners may change following a repair. This may result from the removal of thermal barrier coatings and base material, for example.
Therefore, the effective flow area needs to be carefully tested after new barrier coatings are reapplied.
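As an illustration of what "effective flow area" means in a bench test, the incompressible orifice relation can back an area out of a measured flow and pressure drop. This is a simplified sketch with invented numbers; real test benches apply calibrated compressible-flow corrections:

```python
# Hedged sketch: inferring an effective flow area from a bench flow test
# using the incompressible orifice relation  m_dot = A_e * sqrt(2 * rho * dP).
# Numbers are illustrative; actual benches use calibrated corrections.
import math

def effective_area(m_dot, rho, delta_p):
    """Effective flow area (m^2) from measured mass flow (kg/s),
    air density (kg/m^3), and pressure drop (Pa)."""
    return m_dot / math.sqrt(2.0 * rho * delta_p)

# Example: 0.5 kg/s of air (1.2 kg/m^3) across a 5 kPa drop:
a_eff = effective_area(0.5, 1.2, 5000.0)   # ≈ 0.0046 m^2
```

A repair that removes coating and base material enlarges the cooling and dilution features, so the same pressure drop passes more flow; the measured effective area shifts, which is exactly what the post-repair test is meant to catch.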
Flow testing of first-stage nozzle vane segments should also be considered (Figure 2).
Figure 2: Flow test of 501 DF42 fuel nozzles with steam injection on high-flow test bench
The first-stage nozzle consumes a significant portion of air supplied to the compressor discharge case. Turbines that underperform can be affected by the oversupply of cooling air to the first-stage nozzle vane segments. This simultaneously reduces the required air flow for combustion within the liners, limiting the output performance of a turbine. Gas turbines operate in environments where small particles can be ingested and deposited on components. The fouling or blockage of fuel nozzles, for example, can significantly affect turbine performance.
It can also lead to physical damage of the machine. Therefore, blades and buckets should be tested to check that cooling passages are not blocked (Figure 3).
Evaluating uniformity in sets of component flow data can help pinpoint parts that have issues. By comparing data from tests performed when parts are received to those performed after refurbishment, it is possible to identify problems.
Trending this information over time can help develop more informed maintenance schedules and estimates of useful component life.
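The uniformity evaluation described above amounts to flagging parts whose measured flow strays from the set mean. A minimal sketch, with flow values and the 5% tolerance chosen purely for illustration:

```python
# Illustrative uniformity check: flag components whose measured flow deviates
# from the set mean by more than a tolerance. Flow values and the 5% tolerance
# are assumptions, not from the article.

def flag_outliers(flows, tolerance=0.05):
    """Return indices of parts whose flow deviates from the set mean
    by more than `tolerance` (as a fraction of the mean)."""
    mean = sum(flows) / len(flows)
    return [i for i, f in enumerate(flows)
            if abs(f - mean) / mean > tolerance]

# Five nozzles from a received set; nozzle 3 reads noticeably low:
received = [10.1, 10.0, 9.9, 8.9, 10.2]
suspect = flag_outliers(received)   # → [3]
```

Running the same check on as-received and post-refurbishment data, and trending it across outages, is the comparison the article describes.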
Liquid flow testing monitors the flow rate of liquid-fuel and water-injection circuits of the fuel nozzles. Liquid flow testing also allows for the visual monitoring of spray patterns, which can indicate component wear or internal blockages. Liquid circuits need to maintain their spray patterns in order to achieve the correct flame profile and temperature distribution. Wear or internal debris may cause combustor burnout. If this occurs, the flame can impinge upon the combustion liner or basket wall and cause component damage.
Figure 3: Flow test of 7001 1st stage bucket on high-flow test bench

Assessing liquid flows

Fuel nozzle flow rates need to match those required by the manufacturer’s design specifications in order to achieve the expected output. These criteria may vary from one turbine to another, even if the turbines are of the same model type. An evaluation of flow data should be performed and made available for inventory spares or sets of fuel nozzles. Large operators minimize downtime during outages by using inventory spares.
Spare fuel nozzles, for example, should be clearly identified with a reference to the turbine they originated from and their specific flow data. Large organizations that operate multiple turbines in different locations with an inventory pool of spare parts may see performance issues after swapping in an inventory set.
For example, fuel nozzles designed for a turbine at sea level will have different specifications than the same model at another elevation. Without a comparison of flow rates, unknowingly swapping these fuel nozzles can reduce performance and may even affect start-up.
It is wise, then, to carefully review flow data from parts being removed from the turbine as well as those scheduled for installation to ensure there are no major differences.
Even when minimal differences are present, this analysis can determine the need for other actions, including changes to fuel valve settings, operational control adjustments or scheduling the turbine for tuning. Ultimately, flow testing aims to remove anomalies from a system, minimizing variation and delivering a more efficient machine. Minimizing temperature spread and vibration reduces wear to components, lowering repair and maintenance costs.

Automobile Software Can Now be Updated While Driving

With the amount of software in today’s cars measured in millions of lines of code, updating vehicle software is a cumbersome business. Now Continental has created the necessary technology and infrastructure to enable secure software updates over the air, doing away with the need to visit the garage for every update.
With the significance of software for the user experience of car buyers having increased dramatically over the past decade or so, automotive manufacturers are feverishly working on solutions to establish similar update mechanisms for their vehicles. So far, only Tesla dares to update the software of its cars automatically. All others look jealously over the fence, frightened by the prospect of a terrible glitch or, even worse, a cyber attack against the transmission path. Also, updating a vehicle’s software is somewhat more complex than updating a smartphone’s operating system: up to 100 computers are involved, and since they are all connected, the activities of most of them can have side effects on others. Plus, the number of possible variants and options in a car is much bigger than in a smartphone. And last but not least, no one can afford a failed software update – in a car, such a situation would have far more serious consequences than with a smartphone.
Therefore, despite intensive R&D activities by companies like RedBend Software and Harman, the roll-out of over-the-air software update solutions for cars seemed to be stuck for quite a while. Now it appears Continental has made the grade: the automotive supplier has developed the necessary solutions at the hardware level in the car and established a reseller relationship with the Texas-based company Carnegie Technology, which offers a software update platform. Continental will integrate this software into its automotive telematics solutions. The software will run on the next generation of Continental’s telematics module along with a supporting cloud-based component for analysis and diagnosis functions.
During the ride, this technology aggregates the bandwidths of available transmission paths and manages seamless handover between mobile radio cells as well as between different wireless technologies such as WiFi, LTE, 3G, and satellite connections. As a supplement to terrestrial networks, Continental, together with satellite communications provider Inmarsat, is currently developing techniques for satellite-based software updates. This will enable worldwide updates for vehicles and make car vendors largely independent of mobile radio operators. For the update process, the Continental platform establishes a two-way satellite data connection.
Carnegie’s software solution is constantly monitoring and assessing the quality of the available network connections options. It then selects the connection that promises the fastest, most reliable and most cost-effective connection. Through its VehicleLink function, it can also use smartphones, laptops or similar devices integrated into the car and the bandwidth they can contribute. To reduce communications cost, the system also has access to external WiFi cells. The Carnegie solution also makes sure the downloads are resumed after interruptions, for instance if a car has passed through a tunnel. Likewise, this solution also manages voice calls to and from the vehicle and allows a broad range of priority options.
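The kind of connection selection described above can be sketched as a weighted scoring over the available links. The weights, link data, and scoring function here are invented for illustration; Carnegie's actual algorithm is not public:

```python
# Illustrative sketch of multi-criteria link selection: score each available
# connection on speed, reliability, and cost, then pick the highest scorer.
# Weights and link figures are assumptions, not Carnegie's real algorithm.

def pick_link(links, w_speed=0.5, w_rel=0.3, w_cost=0.2):
    """links: dict name -> (bandwidth_mbps, reliability_0_to_1, cost_per_mb)."""
    def score(link):
        bw, rel, cost = link
        # Normalize bandwidth against a nominal 100 Mbps; penalize cost.
        return w_speed * bw / 100.0 + w_rel * rel - w_cost * cost
    return max(links, key=lambda name: score(links[name]))

available = {
    "wifi":      (54.0, 0.70, 0.00),   # fast and free, but spotty coverage
    "lte":       (30.0, 0.90, 0.02),   # reliable, metered
    "satellite": (5.0,  0.99, 0.10),   # near-universal coverage, expensive
}
best = pick_link(available)   # → "wifi"
```

Re-running the selection as link conditions change, and resuming interrupted downloads, would give the tunnel-recovery behavior the article mentions.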
At the device level, Continental’s in-car networking modules can be either integrated into a smart antenna module or be used as an independent telematics unit. They are complemented by gateway units that in turn are connected to the vehicle’s internal data buses, providing the infrastructure for the OTA updates.
The solutions will be introduced to the public at the international automotive exhibition in Frankfurt, Germany, in September.

Voith launches variable speed drive for compressors and pumps