Right now, the mindset across the IT industry is to reduce data center power consumption in every way possible. Traditionally, data centers have been some of the most egregious environmental and financial offenders in the enterprise, as a typical data center can sometimes suck up as much energy as a small city.
As we dive deeper into 2014, some expected data center power trends will rock the industry, particularly as companies look for ways to slash their budgets and be kinder to the environment without affecting performance. Here’s a look at some of the leading trends:
Low power server alternatives: Over the past two decades, most servers have come in the form of a 19-inch rack-mounted package with an x86-compatible chip inside. Just about every server maintained a standard architecture that fit this model. In 2014, however, we will start to see major server vendors offer x86 models with low-power designs for the first time, as well as low-power ARM chipsets. This will give data center managers cost-effective new options for processing data.
Natural solutions: Many enterprises are now choosing to build data centers in areas whose environments can assist with power and cooling. Facebook, for instance, recently constructed a new data center in Sweden—a facility that is being hailed as the most energy-efficient computing facility ever constructed. An average data center needs three watts of energy for power and cooling to deliver one watt of computing. Facebook's Lulea facility, however, is roughly three times more efficient. Look for more companies to follow Facebook's clean carbon footsteps over the course of the next year.
Power monitoring: It’s hard to improve energy efficiency if you can’t benchmark progress. A major trend for 2014 will be data center power monitoring. IT executives will use smart power meters and switching solutions to better understand how power is distributed in the data center, and how it can be better allocated or reduced. Smart power distribution units (PDUs) and switching units will play an integral part in creating more energy-efficient data centers this year.
For more information about how Server Technology can help your data center better manage its power supply so you can save money and reduce its consumption, please click here.
This is the second in a series of blog posts providing tips and tricks for answering common data center questions with the use of SPM. Last month, I gave some guidance on answering, “How do I identify the best places to install new IT equipment?” Once you understand the ways SPM can help you with that question, the next question tends to naturally follow.
Question: How do I predict when I will run out of power?
The importance of the answer to this question is obvious. It is a primary input for growth and capacity planning in the near-term and long-term future of any organization with a data center.
1) Understanding today’s power usage is the first step. SPM offers several levels at which to monitor power usage, from the granular outlet-level measurements of POPS CDUs up to overall building-level Locations.
   a) Cabinets are a useful monitoring point for the obvious reason that the IT equipment being installed must fit within the capacity of the circuits brought into the cabinet. Setting up Cabinets in SPM also provides a redundancy check for A/B power feeds in addition to comparing power draw to capacity.
   b) Circuits are a means to aggregate the current and power levels of multiple cabinet CDUs into hierarchical levels that are directly comparable to the inputs of upstream power devices such as RPPs, PDUs, and UPSes. Setting up Circuits in SPM provides Smart access to dumb upstream power devices.
   c) The optional, key-activated Custom Devices feature of SPM provides the ability to define SNMP-capable upstream power devices. By entering the OIDs for input and output current, voltage, and power, SPM can monitor the usage reported by Smart RPP, PDU, and UPS devices.
   d) Locations are the top level of monitoring when trying to understand today’s usage in the organization. By completely setting up a meaningful hierarchy of CDU to Cabinet to Location(s), with Circuits and Custom Devices added, the data center is defined so that growth and capacity planning can move forward.
2) While setting capacities for the items in #1 above, alert thresholds for the total power of those Cabinets, Circuits, and Locations can be set as well. See the previous post for more tips.
   a) Alerts on current measurements are needed to prevent tripping circuit breakers or fuses. The safety-rated current levels on the CDU and Circuit phases are critical for this purpose on a moment-to-moment basis.
   b) Alerts on power measurements are needed to understand proximity to general capacity limits when comparing equipment power usage and cooling requirements. This is typically most useful at the Cabinet and Location levels.
3) Ultimately, the data center and facilities teams want to predict when power at each level will reach its limits. Knowing when growth will trigger the need for more resources is important for any rapidly growing organization.
   a) Predicting the future is a challenge, no doubt. The first thing to understand is past experience. By setting up Cabinets, Circuits, and Locations, and then watching for a sufficient period of time, one can identify growing power usage at any particular level in the data center. The monitoring period depends on the individual situation; there will typically be daily, weekly, and monthly cycles of power usage, but the usage over several months will usually show the overall trend.
   b) The predictive trends SPM provides for power give two linear fits over differing time frames. For example, you may look for increasing usage based on 3-month and 12-month historical data. Additionally, like the standard trends, the predictive trends state the minimum, maximum, and average measurements over the given period.
   c) By setting up the alert levels in #2 above, predictive trending analysis can extend to alerts on future conditions. On a per-item (Cabinet, Circuit, Location, etc.) basis, this alert can be activated to give notification of a potential breach of the threshold within a specified period of time in the future.
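To make the predictive-trend idea concrete, here is a minimal sketch (not SPM's actual algorithm) of fitting a least-squares line to historical power readings and estimating when the trend crosses a capacity threshold; the sample data is invented:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def days_until_breach(daily_watts, capacity_watts):
    """Days from the last sample until the fitted trend crosses capacity.
    Returns None if usage is flat or declining."""
    xs = list(range(len(daily_watts)))
    slope, intercept = linear_fit(xs, daily_watts)
    if slope <= 0:
        return None          # no upward trend, no predicted breach
    crossing = (capacity_watts - intercept) / slope
    return max(0.0, crossing - xs[-1])

# Invented example: cabinet power growing ~50 W per day against a 5 kW limit.
history = [4000, 4050, 4100, 4150, 4200, 4250]
print(days_until_breach(history, 5000))   # 15.0
```

Running two such fits over different windows (say, 3 months and 12 months), as SPM's predictive trends do, helps distinguish a recent spurt from the long-term trend.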
Understanding growth, past, present, and future, is an important aspect of running the data center. For more information on Growth Planning, see the Application Note on the subject.
Every year, Data Center Solutions hosts the annual DCS Awards, designed to reward those in the data center industry who design, manufacture and supply data center products and solutions. Once again, Server Technology has been selected as a finalist, in the "Power and Cooling Product of the Year" category for HDOT and the "Data Center Management Product of the Year" category for SPM. Voting is open until May 1st. Winners will be announced at the DCS gala in London on May 15th.
Please vote for Server Technology this year: http://dcsawards.com/voting.php
Right now data centers everywhere are in the process of virtualizing servers to save money and space. In fact, virtualization is being heralded as one of the best ways to reduce costs in the data center and enhance efficiencies. However, there is a hidden problem to virtualization that if left unchecked could put a serious damper on your cost saving plans: heat generation.
When systems become virtualized, the density of the hardware increases. As a result, power utilization rates shoot up and more heat is created in the process. In fact, the data center cooling market is expected to reach $8 billion by 2018, up from $4.9 billion in 2013.
As more and more equipment becomes virtualized in the data center in the coming years, IT managers are going to have to start monitoring both power consumption levels and environmental metrics. If left unchecked, high-density servers could cause power bills to skyrocket. Overheated equipment will also drive up cooling costs and could even lead to network crashes. Once a network experiences downtime, the financial loss can be devastating; many companies that suffer prolonged data center outages never fully recover, and some go out of business shortly thereafter.
If you are planning on virtualizing your data center, consider investing in a robust power distribution and monitoring solution so that you can watch for hidden fees and prevent network outages before they occur. Today’s leading data center power management software gives you the ability to keep track of both power consumption and environmental factors from any location over a Web browser.
For more information about how Server Technology can help you keep track of the power consumption in your enterprise data center, click here.
These days, it is more important than ever to promote safe and responsible power consumption across your enterprise, and one of the most pressing areas to start is your data center. That’s because data centers suck up a lot of energy and consume a ton of resources. In fact, recent research shows that data centers consume about five percent of the total electrical use in the U.S. As resources become increasingly scarce and expensive, it is imperative to have a plan for ensuring that you properly disperse each and every watt of energy across your network.
But where do you start? Once you make the decision to monitor your energy consumption, there are a few key things to pay attention to, as outlined in a recent Server Technology white paper titled “The Practical Science of Data Center Capacity Planning.”
Here are some of the key takeaways from the white paper that you should take heed of as you begin your initial planning process:
Establish a benchmark: It’s easy to make changes, but it’s hard to track your progress and actually improve without first understanding your facility’s resources: power, cooling, infrastructure, and space. Analyze your current situation so that you know exactly what you need to improve and can properly measure progress as you move forward.
Consolidate for success: Power is one of the leading drivers of cost in the enterprise data center. However, consolidating servers can help reduce power and, subsequently, costs. Recent interviews with leading executives indicate that they were able to reduce power by doing just that. As ING’s Ton Roberts explains, it’s all about planning ahead and consolidating where possible.
“When the data center is 60 percent full, you have to work out where the best place to install the new servers will be,” he said.
Best practices call for identifying where you can potentially slash power consumption. Additionally, it means planning ahead before you reach full capacity—or in other words, before it becomes an emergency situation.
Invest in a power distribution unit (PDU): One trend IT executives everywhere agree on is the use of automated PDUs to stay on top of monthly power consumption. With a PDU, you will understand the granular details of your power consumption and be alerted in the event that anything goes wrong. You can also control your equipment from a single dashboard, so if a server or switch goes down, you can cut power and address the problem from a remote location. One of the biggest benefits is that instead of searching all day, or all week, for the source of a problem, you gain instant insight into its location. Sequencing how equipment is turned on and off in the process also reduces the likelihood of a power spike.
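The monitoring half of that workflow boils down to comparing readings against thresholds. A toy sketch, with invented outlet names and thresholds rather than any real PDU API:

```python
# Hypothetical, simplified model of what intelligent-PDU software does:
# compare per-outlet current readings against a threshold and flag
# anything drawing too much. Names and values are illustrative only.

def check_outlets(readings, threshold_amps):
    """readings: {outlet_name: amps}. Returns names of outlets over threshold."""
    return [name for name, amps in readings.items() if amps > threshold_amps]

readings = {"web-01": 3.2, "db-01": 9.8, "switch-02": 1.1}
over = check_outlets(readings, 8.0)
print(over)   # ['db-01']
```

A real PDU would raise this alert itself (via SNMP trap or email), but the comparison it performs is exactly this simple, which is why granular per-outlet measurement is what makes the alerts useful.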
A robust monitoring solution will also come with the ability to track environmental changes in your data center, so you can prevent problems before they ever have the chance to occur.
Are you interested in adding responsible capacity planning solutions to your data center? This Server Technology white paper was compiled from the knowledge of 15 industry executives who oversee a total of 400 data centers, representing a combined revenue of $350 billion and one million staff members. To access your free copy of the white paper, please click here: http://powerstrategyexperts.servertech.com/.
Since 1984, Server Technology has been a leading source for companies seeking data center power saving solutions. Its products and services help reduce downtime and improve energy efficiency through an extensive line of offerings that help IT managers monitor power consumption and make sure networks stay up and running.
Now, Server Technology is unveiling a new solutions-based micro-site to add to its arsenal of power consumption expertise. This site will act as a virtual library where users will find valuable content designed to offer information, tips and strategies for tracking and managing data center power.
Aside from serving as a resource where customers can find a wealth of value-driving white papers and information about Server Technology, the site will also serve as a solution for maximizing retrofit projects in the following areas:
- SPM systems
- HDOT hardware
- 48 VDC solutions
- wireless power monitoring
This way, customers can browse through the site while retrofitting data centers and find a multitude of tips, money saving options and best practices for maximizing data center power and space.
Server Technology hopes that its new micro-site will be a content-rich environment capable of catering to customers in need of advice for projects both big and small. It is a new endeavor designed to offer useful information without the tag of a sales pitch at the bottom. The site will help IT managers stay on budget while achieving the best possible power usage effectiveness (PUE).
So, are you ready to discover some great new power saving strategies? Come check out the new Server Technology micro-site and tighten up your power consumption practices at powerstrategyexperts.com.
Server Technology Solution Partner Providers (Part 1 of at least 2 Parts)
I always find it interesting that the power chain is measured and monitored at multiple points throughout most data centers, yet is often forgotten once the power enters the data center cabinet. This is especially surprising considering that roughly half of the power used (or more) can be traced directly to the cabinet (see Figure 1), and power is typically one of the greatest single costs of operating a data center. Monitoring at the cabinet is often the invisible line between the IT and Facilities groups, even though monitoring at the in-feed of the Cabinet Power Distribution Unit (CDU) is really the same as monitoring at the Remote Power Panel (RPP): the branch circuits coming out of the RPP are the in-feeds to the CDU.
Intelligent Cabinet Power Distribution Units (CDUs) provide power, environmental monitoring and control at several different levels that reach well beyond the cabinet itself. Not only can you better understand your power infrastructure, there are also multiple opportunities to increase efficiency and ensure uptime. It is our ability to provide this critical information to customers and partners that has driven our solution partner strategy.
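The point about CDU in-feeds mirroring RPP branch circuits can be sketched in a few lines: summing the in-feed currents of the CDUs on each branch reproduces the upstream view. Branch names and readings below are invented for illustration:

```python
# Illustrative sketch: because each RPP branch circuit feeds a CDU in-feed,
# aggregating CDU in-feed currents per branch gives the same picture you
# would get by metering at the RPP itself.

def rpp_view(cdu_infeeds):
    """cdu_infeeds: {branch_id: [amps for each CDU in-feed on that branch]}.
    Returns total amps per RPP branch circuit."""
    return {branch: sum(amps) for branch, amps in cdu_infeeds.items()}

infeeds = {"RPP1-BR4": [11.0, 9.5], "RPP1-BR5": [14.5]}
print(rpp_view(infeeds))   # {'RPP1-BR4': 20.5, 'RPP1-BR5': 14.5}
```

This is why cabinet-level monitoring closes the IT/Facilities gap: the same numbers serve both sides of the invisible line.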
Server Technology has a number of key solution provider partners within the data center space. These are other organizations offering complementary products, with which we have integrated and tested our solutions. These integrated solutions are running in mission critical installations around the world. Partner products and solutions include, but are not limited to:
1) DCIM (Data Center Infrastructure Management) Solutions
2) BMS (Building Management Systems)
3) KVM Solutions
4) Console Solutions
5) Wire Free Power Monitoring Tags
6) Smart Cabinet Providers
If you are interested in integrating with our HW or SW CDU solutions please contact me at email@example.com or click on servertechsolutionpartners.
In part 2 of this Solution Partner blog, I will talk more about our power solution products and ways we integrate with other vendors within the data center space.
Pick up the latest issue of Processor
Server Technology’s 30 years of business was highlighted in the latest issue of Processor, a highly regarded data center publication. For 30 years Server Technology has led the field in data center power management, delivering high-quality devices to customers globally. This year, not only will Server Technology turn 30, it will completely flip the power management market on its head with the recent launch of High Density Outlet Technology (HDOT). We expected that our competitors would quickly look to us as leaders in the industry and adopt our lingo, and sure enough one of them did. But we promise: there is no imitating true HDOT.
Read the article on processor.com
Last month I raised the specter of robots in the datacenter, and left things hanging regarding how rack level power fits in. So this month, I hope to bring that into perspective.
If a datacenter asset has failed and rebooting it does not bring it back online, the safest thing to do is to power off the asset until it can be repaired or replaced. You have several options to fully power off the gear: unplug the unit from the rack PDU; flip the device's power switch to the off position; or throw the breaker upstream of the rack where the failure occurred, possibly taking more gear down with it. All of that takes manual labor. The nature of the failure will determine the magnitude of the response. Most datacenters just leave the failed unit plugged in and relocate the work that was running on it. But what if you could just turn off the power to the outlet remotely? Wouldn't that make the most sense in most instances? Whether your power strip manufacturer calls it "managed" or "switched" or something else, the ability to send a command either through an Ethernet interface or an OOB channel (modem + serial port, anyone?) is cheap insurance against a cascading equipment failure.
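The remote "outlet off" idea can be sketched independently of any particular protocol. The transport function below is a stand-in for whatever your switched PDU actually speaks (SNMP set, serial CLI, or a vendor API); the command string and device names are invented for illustration:

```python
# Hedged sketch of remote outlet control. Nothing here is a real vendor
# API; the transport is pluggable so the same logic works over Ethernet
# or an out-of-band serial/modem channel.

def power_off_outlet(send, pdu, outlet):
    """Issue an 'off' command for one outlet through the given transport."""
    return send(pdu, f"set outlet {outlet} off")

sent = []
def fake_transport(pdu, command):
    sent.append((pdu, command))   # record instead of touching hardware
    return True

ok = power_off_outlet(fake_transport, "pdu-a12", "AA7")
print(ok, sent)   # True [('pdu-a12', 'set outlet AA7 off')]
```

The payoff is scoping: one outlet goes dark instead of an entire branch circuit, and nobody has to walk to the hot aisle to pull a plug.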
When it comes time to pull the offending gear out of service, wouldn't it be nice to send another machine in to do the work? Presently most hardware requires access in both the hot aisle and the cold aisle in order to remove or service it. Power cords, networking cables both copper and optical, and the occasional USB cable snake out of the back side of many servers, storage enclosures, load balancers, and switches. These are generally in the hot aisle. And then the retention screws, faceplates, and other knickknacks are on the front of the unit, sitting in the cold aisle. The difficulty of teaching a robot to master all of the fine motor skills of cabling and screw management, then pull gear of various shapes and configurations without mangling anything, makes robots impractical for this today.
So what would happen if all of the servers in a rack had access to one or more optical interconnects built into the U-slot of the rack? The ability to precisely align an optical transceiver to an optical fiber could eliminate manual data cabling. And what if the servers and storage all had a way to make a connection to power from the rack without requiring a jumper cord from a PDU? Wouldn’t that mean servicing the gear would only need to take place from the cold aisle? Once you only need to access one side of the gear, the hot aisle can get narrower, and robots could access the gear from the front. If we were to eliminate screws from the front panel so that everything could be pulled or inserted from the front of the rack using standard handles, wouldn’t we be able to use robots to service the gear?
What are the options for powering without jumper cords? So far, using “blind mating connectors” is the most prevalent option. Wireless power (a la PMA or A4WP) is another long term potential, particularly as we adopt low power servers (Atom, ARM, and the like). Or I am sure that some gear head can figure out a way to use DC power to only have to connect a single pin/wire for power to the server, with the chassis being “common”, just like a car.
If your datacenter robot knew where every piece of gear was located, and had access to the front panel of a standardized chassis and rack system, how much easier would that make it for massive datacenters to take another manual labor step out of the expense of running the datacenter?
Maybe it is time for the OCP folks to begin considering their next moves to support further automation. Rethinking how we deliver power that last foot, from the PDU into the server, is a key part of the datacenter of the future.
1) Robotics Invade the Datacenter - http://www.informationweek.com/infrastructure/data-center/robotics-invade-the-datacenter/v/d-id/1112866 - Bill Kleyman video
2) Robots may run data centers, someday - By Joe McKendrick for Service Oriented | June 28, 2013 - http://www.zdnet.com/robots-may-run-data-centers-someday-7000017472/
“Kleyman does caution that "server hardware isn't quite ready to be handled by robotics," and the hardware has to be customized. Plus, many data centers still rely on cables, and it's hard to imagine a robot being able to thread and connect cables beneath raised floors or in ceilings. The rise of wireless connectivity could alleviate some of that.”
Last month, I introduced a series of posts with the intent of outlining some tips and tricks that I have shared with customers over the years for using SPM in the data center and beyond. These posts are not intended to be comprehensive instructions, but rather a starting point for discussion with one of our fine Sales Engineers or Technical Support staff.
Question: How do I identify the best places to install new IT equipment?
On the surface, this seems like a simple question. One might simply answer, “Walk out to the data center and find a slot.” But, of course, the data center might not be in the next room, nor might you be installing just one low-power device. SPM provides a number of tools to help answer this capacity-related question dependent upon the individual data center conditions.
1) Monitoring of power usage is the first tool in the bag. This has always been the primary function of SPM in the data center. To fully utilize the power of SPM:
   a) Set capacities for CDU, Cabinet, Zone, and Location in volt-amps based on your infrastructure. These can be set on a per-item basis, or en masse by right-clicking in the Setup Items menu and selecting "Configure Thresholds". Or, with a left-click selection of the particular type of item (e.g., Cabinets) in the Setup Items menu, a list of those items is brought up in the main window; from this list, you can filter, sort, and multi-select to configure the thresholds of a specific group of items.
   b) While setting capacities, alert thresholds for the total power of those CDUs, Cabinets, Zones, and Locations can be set as well. Follow that by ensuring the most critical alert, infeed current, is at the desired threshold. Again, use the selection methods identified in "a" above for setting multiple thresholds at once.
   c) Create System Total Power, Cabinet Redundancy, Energy Consumed, and Low Energy Utilization reports. Create various Total Power trends, such as for Cabinets and Locations. Also, share these reports for access by other decision makers. Finally, schedule the reports and trends to run and email those involved on a regular basis.
   d) Create Views that include location displays based on CDU Capacity % Used, lists of cabinets, and various reports and trends (see "c" above). And don't forget to share your Views so that other decision makers don't have to re-invent the wheel.
2) Checking for open U-space and outlets goes along with the simplistic answer of putting the equipment where it fits.
   a) Within each Cabinet, you can create Cabinet Devices to define the specific locations your devices occupy and the specific outlets from which they derive their power.
   b) Run the Cabinet U-space report to gain information about available locations for your new installs. Share and schedule reports as desired.
   c) Run the Cabinet Device Inventory report to provide asset information to applicable personnel. Share and schedule reports as desired.
3) Monitoring of temperature allows for finer analysis of the better locations based on cooling performance.
   a) Create reports and trends to monitor temperature and humidity at various locations in the data center, including any expected hot spots based on the equipment in use. Share and schedule these as desired.
   b) Set thresholds at reasonable levels. Start with an understanding of expected temperature and humidity based on the reports and trends from "a" above. As with the power thresholds, environmental thresholds can be set en masse.
   c) Create Views that include location displays based on Temperature and Humidity, alarm history, and various reports and trends (see "a" above). Finally, share your Views as you did with the power monitoring above.
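The reports above feed a simple selection decision: which cabinets have enough contiguous free U-space, and of those, which run coolest. A minimal sketch with invented cabinet data (not an SPM feature, just the logic the reports support):

```python
# Illustrative placement helper: filter cabinets by available contiguous
# U-space, then rank the survivors by inlet temperature, coolest first.
# All cabinet names and readings are made up for the example.

def best_cabinets(cabinets, needed_u):
    """cabinets: list of dicts with 'name', 'free_u' (largest contiguous
    free U-space), and 'inlet_c' (inlet temperature, Celsius)."""
    fits = [c for c in cabinets if c["free_u"] >= needed_u]
    return sorted(fits, key=lambda c: c["inlet_c"])

cabinets = [
    {"name": "A1", "free_u": 6,  "inlet_c": 24.0},
    {"name": "A2", "free_u": 2,  "inlet_c": 21.0},   # coolest, but too full
    {"name": "B4", "free_u": 10, "inlet_c": 22.5},
]
ranked = best_cabinets(cabinets, needed_u=4)
print([c["name"] for c in ranked])   # ['B4', 'A1']
```

In practice you would also weigh power headroom and redundancy (from the power monitoring in #1 above) before committing to a slot.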
Understanding growth, past, present, and future, is an important aspect of running the data center. For more information on Growth Planning, see the Application Note on the subject.