Posted By Zen Kishimoto,
Sunday, September 16, 2012
Updated: Sunday, September 16, 2012
Recently, I had a chance to hear the
keynote speech delivered by Justin Rattner, CTO of Intel, at Intel Developer
Forum (IDF) 2012. He heads Intel Labs, and a large room was packed with media,
developers, technical managers, and business people.
Justin Rattner started his speech wearing the latest
communications gear with moving ears.
The theme of his speech was very clear, as the
following slide shows.
It has been said that connected devices
will be everywhere, comprising the Internet of things. But when the Intel CTO
spoke, that statement carried weight. I attend many conferences and take pictures
for my blog, but usually only a few people take pictures of speakers and their
slides. In this case, a bunch of people
were taking pictures every time he advanced his slides.
Intel's main business is in semiconductor
chips, but it has a wide variety of technologies to push its vision for
connected devices. I am sure there are
many more technologies in development at Intel, but Rattner demonstrated six. He reminded us of something that was described as a
dream at IDF 2002, 10 years ago. At that conference, someone from Intel said that
someday devices would be networked together wirelessly at a reasonable cost.
With WiMax (Intel was a big proponent, but I do not hear much about it from
Intel these days), Wi-Fi, LTE, and other wireless technologies, this is no
longer a dream at all.
In any event, Rattner presented these six technology areas:
1. Scaling down of communications chips. In the communications
area, analog technologies have been in the mainstream with converters
between them and digital technologies (computers are already digital).
Because analog technologies are hard to scale down in size, a greater
percentage of the communications parts are becoming digital. A chip once (circa
2002) manufactured with 90 nanometer (nm) technology is now (since 2010)
done with 32 nm technology. With the scale down, size and power
consumption are down from 1.2 mm² and 50 mW to 0.3 mm² and 21 mW. In two
years or so (circa 2014), with 14 nm technology, its size is expected to
go down to 0.03 mm². So scaling and power conservation are
constantly being improved.
2. Wireless Gigabit (WiGig). The WiGig Alliance's vision is given as
follows on its site: "The WiGig Alliance envisions a global wireless ecosystem of interoperable, high
performance devices that work together seamlessly to connect people in the
digital age. Our technology enables multi-gigabit-speed wireless communications
among these devices and drives industry convergence to a single radio using the
readily available, unlicensed 60 GHz spectrum." The Alliance's
president is also an employee of Intel. Multi-gigabit-speed wireless technology
would enhance existing applications and make it possible to develop new ones.
3. Battery life is a concern for everyone. Batteries for mobile devices
do not last very long, and until the technology advances to the point of being
able to store a lot more power, the best thing to do is to conserve power as
much as possible. With its Smart Connect Technology, Intel has a NIC that allows
only absolutely necessary packets to reach the main computing engine, saving
unnecessary processing and power (see the sketch after this list).
4. Video Aware Wireless Networks are necessary, as video occupies a
large portion of Internet traffic. From 2011 to 2016, its growth is expected to
be 32% CAGR, and in 2016 video will be 55% of the total traffic, as one of his slides showed.
5. Security is necessary but usually at odds with ease of use. Intel is
advocating biometric technologies; Rattner’s demo used a palm for authentication.
The technology, from Fujitsu, is used in several applications. Several years
ago, Mitsubishi Tokyo UFJ Bank adopted palm
vein authentication for verifying each account holder. The Vantage data center uses it as well.
6. The wireless infrastructure needs to be improved. In the current
implementation, each cell tower is self-serving and, because it is provisioned
to accommodate the peak load, which may not occur often, it is also wasteful of
ICT equipment/energy. By networking the cell towers together and controlling them
from a central data center, load balancing can be applied and unnecessary
equipment can be turned off without affecting the overall operation.
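To make the power-saving idea in area 3 a little more concrete, here is a minimal, purely illustrative sketch of the kind of filtering a NIC could do before waking the host processor. The packet fields, the port whitelist, and the should_wake_host function are my own hypothetical names; this is not Intel's Smart Connect implementation.

```python
from collections import namedtuple

# Hypothetical, simplified view of an incoming packet as the NIC might see it.
Packet = namedtuple("Packet", ["protocol", "dst_port", "is_broadcast"])

# Traffic that genuinely needs the host CPU (illustrative whitelist).
WAKE_PORTS = {22, 443, 5223}          # e.g., SSH, HTTPS, push notifications

def should_wake_host(pkt: Packet) -> bool:
    """Return True only for packets worth waking the main computing engine for."""
    if pkt.is_broadcast:               # background chatter stays on the NIC
        return False
    if pkt.protocol == "ARP":          # the NIC can answer ARP on its own
        return False
    return pkt.dst_port in WAKE_PORTS

# Example: only the HTTPS packet reaches the (power-hungry) host.
traffic = [
    Packet("ARP", 0, True),
    Packet("TCP", 443, False),
    Packet("UDP", 1900, True),         # SSDP broadcast
]
print([should_wake_host(p) for p in traffic])   # [False, True, False]
```

The point of the sketch is simply that most of the traffic a sleeping device receives never needs to touch the main CPU, so filtering it at the NIC saves power.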
If you want to know more about Rattner’s presentation,
I am sure you can find articles and blogs detailing each point he made. Instead
I would like to ponder the role of ICT technologies in general, regarding their
contributions to our society. Up until now, technical progress has been made to
provide more convenience to us, as if we had an infinite amount of resources.
It is only recently that we started to realize that we cannot continue to assume
an infinite amount of resources.
In his presentation, Rattner did not state
that his motivation was to conserve more power or energy. But in some cases, in
order to keep up with demand, we must conserve power and energy and, on top of that,
use what we have more effectively.
So looking at each technology area in his
presentation from that perspective, area 1 concerns the need to make each
component fit in a smaller area and consume less power. Areas 2 and 4 indicate
that spectrum is a limited resource and should be used effectively to support
loads. To do that, the technologies
need to be improved. Area 3 indicates that, given limited battery capacity, the
best thing to do is to conserve power. Area 6 may be the closest to the energy
view. I think we can do a lot to make ICT technologies more effective in terms
of energy consumption. And moreover, ICT can make other things more efficient
and convenient. That is why I am interested in the field of green ICT.
Those who do not follow my blog may be
confused without some background. Teladata is a consulting firm
focusing on data center technologies. They saw a huge gap between IT
and facilities that is making data center operation less efficient.
That is well illustrated in my previous blog.
I had the opportunity to moderate a
panel session at this conference to investigate the current status of
data center infrastructure management (DCIM). For details of this
session, see here.
These were the panelists:
Chuck Rego, Chief Architect, High Density Data Centers at Intel Corporation
Pam Brigham, Director, Global Technology at Equinix
Phil Reese, Research Computing Strategist at Stanford University
This blog is a summary of that session.
(It is almost impossible to moderate a panel and take notes at the
same time.) There are many ways you can structure a panel discussion.
One extreme is for the moderator and panelists to share a common
scenario, even down to the details of Q&A. Of course, the other
extreme is to set a big theme and a direction for the discussion, and
let the conversation take its own course.
I took the second approach, mainly
because the panelists’ three data centers were drastically
different, making it extremely difficult to ask each person the same
question. On one end of the spectrum, Phil Reese has data centers for
researchers at Stanford University and is starting to use a
commercially available DCIM tool. On the other end, Pam Brigham's
company, Equinix, is in the colocation business worldwide, and she
uses homegrown tools. Chuck Rego produces a set of DCIM tools at
Intel and uses other commercially available tools.
Technical difficulties prevented my
monologue slides from being included in the presentation. But I said
the following in them:
DCIM tools are software and hardware tools used to design and operate data centers effectively.
This definition may qualify almost any tool as a DCIM tool.
In general, a tool has only one function.
DCIM tools came out of the different needs and categories of data center operations. Therefore,
there is no standard for sharing data and no common communications mechanism.
Very little information about use in actual operations is available.
There were no clear disagreements about
this explanation. However, Chuck was a little skeptical about any
tool being a DCIM one. I am not 100% sure, but I think I heard that
energy management tools were not DCIM tools. I take a very liberal
stance on the definition of DCIM. If we take the meaning of DCIM
literally, any DCIM tool should directly touch the infrastructure.
Energy management tools may not deal with the infrastructure directly
but they do indirectly. If we draw a line to define what is a DCIM
tool and what is not, it would be too cumbersome. I suggest putting
everything into this category.
There were a few more topics discussed,
although I am sure I forgot others:
1. Homegrown tools were developed when no tools were commercially available.
2. A dashboard display that integrates several tools' results would be desirable.
3. Some kind of standards are needed.
As for item #1, both Pam and Chuck said
why they developed their own. Pam needed to provide some kind of
automated way to let sales guys know what colocation space is
available at which data center, with some detailed specifications.
One such tool is web based and provides information instantly. When
there is no tool commercially available, you need to develop your
own. Pam said she had been looking into commercially available tools,
but none of them satisfies her needs yet. A tool needs to be flexible
and customizable because no two data centers are alike. A tool
without any flexibility may apply to one data center but not to
another, even though you own them both.
Chuck's case is interesting. He
developed several tools as a suite to meet his needs but ended up
making them commercially available. So Intel eats its own dog food.
I think both Phil and Chuck brought up
item #2. Phil is using SynapSense to monitor his data center. He also
has some CFD tools. Down the line, he will need more tools. It would
be very desirable if these tools were integrated with one display
window, rather than multiple windows, to make it easier to grasp
what's happening at your data center.
Item #2 brings up item #3. To integrate
tools together, we need a common platform for sharing data and a
communications mechanism. But because each tool was developed to
perform one function and one function only, this need was not taken
into consideration. However, there is some movement in this
direction. Future Facilities now teams up with other companies,
including Intel, to integrate their tools together.
In summary, the DCIM segment is in its
infancy. Its definition is not even agreed upon. There is going to be
debate over whether a tool belongs to DCIM. That would confuse the
market, but it is a process we need to go through to mature this
segment. But one thing is clear. Someone with a lot of weight behind
him should take the initiative to set the standards in this segment.
Chuck, how about you?
As most people in the data center market know, both facilities and
IT folks consider monitoring one of the most important elements in operating
data centers. Smaller companies were the first to provide monitoring and
reporting functions. Although this is not an exhaustive list, I had a chance to
talk to some of these vendors and write about the meetings.
I understand their services and their
usefulness. Some provide sensor hardware and software, but others provide only
software. They all monitor, aggregate, and report several parameters relevant
to data center operations, such as temperature, humidity, and power
consumption. Some deal only with facilities equipment, and others handle data
coming from both facilities and IT equipment. There are no standards by which to
measure the data—no standard for frequency of measurement, data formats, or
protocols. Each vendor has their set of customers, and they seem to be happy
with the solutions they purchased.
Then there are Power Assure,
Romonet, and Future Facilities. Power Assure does monitor, but that is not all.
It also optimizes the use of power at your data center. Romonet is for capacity
planning. Future Facilities provides an electronic version of a data center
that you can play with before implementing your design physically. These three
cannot be classified as monitoring and reporting vendors. But their functions
are important to operating data centers, in addition to monitoring and
reporting, so a new term, DCIM, has been introduced to describe this new segment.
Clearly, DCIM should contain several
categories of tools, including those for monitoring and reporting, capacity
planning, and simulation. As I said before, this segment is in its infancy;
there are no standards or actual-use information. Those who combat day-to-day
operation problems would be confused about which tools to select. Do they want
to buy one tool at a time or buy a suite of tools? But wait. There is no suite
of tools yet, although Future Facilities (for example) has begun to
partner with other DCIM vendors to integrate their tools.
If we were to develop a suite of tools or
a framework or platform for DCIM tools, what would the requirements be? It
would help if there were some information from actual use by someone other than
the vendors. Because DCIM tools are at a very early stage, there is very little
information about them.
Because the needs of operators can be quite different from one data
center to another, we will have a good assortment of panelists from different data centers:
Chuck Rego, Chief Architect, High Density Data Centers at Intel Corporation
Pam Brigham, Director, Global Technology at Equinix
Phil Reese, Research Computing Strategist at Stanford University
Chuck develops Intel’s DCIM tools for their
own and partner use and uses commercial ones as well, while Pam at Equinix has
homegrown tools. Phil at Stanford is starting to use a commercial tool. I will
ask them what problems they perceive as the most important to solve at their data
centers and why they chose their solutions, whether their own or commercial
tools. Are they quite happy with the tools they are using? If not, what is
missing? What additional work is needed to make them work? Conversely, were
there any extra benefits they did not expect in applying their DCIM tools?
If you are interested in the answers to these
questions, join me and the panelists at the panel and other sessions at the conference.
At that time, Future Facilities’ (FF) main focus was computational fluid dynamics (CFD),
which was important then and still is today. But it was not
interesting enough for me to write about it (sorry, Sherman). In 2011
FF came out with new positioning and a new set of functions, a
virtual facility (with a suite of tools called 6SigmaDC), a digital
replica of a real data center. The virtual facility can put together
information on power, IT loads, and space, in addition to air flow,
and create a mathematical model and run simulations on it without
actually altering a data center.
I had an opportunity to listen to
people who are using this product at the recent FF conference. As I
listened to their talks and had a frank chat with Sherman, I began to
think that this replica has good potential to solve a big problem of
IT and facilities: disarray in managing high-power-density data centers.
This blog is a summary of my chat with
Sherman and my thoughts triggered by it. FF did start its business
with a focus on air flow (the term DCIM did not exist then anyway,
although CFD is one of the DCIM categories). He said that earlier
they were brought in by data center facilities folks to clean up the
damage done by IT. The use of the word "damage" was interesting
because as a former long-time IT guy, I never thought facilities
people felt that way. Facilities people tailor air flow to IT needs
at the beginning of IT deployment. But because the IT way is
notoriously to change everything—including equipment, rack
configurations, and rack layouts—often and on-the-fly, air flow
customized before the changes no longer applies after the changes,
and thus IT does damage to operations in the entire data center.
After seeing this repeated again and
again, Sherman and his folks realized it would be better to let IT
and facilities folks work together to share air flow and other
information to avoid the problem early on rather than fight with it
later. Earlier in the conference, Hassan Moezzi, director of FF, said
that air flow is the single most important factor in managing a data
center because most data centers are cooled by air rather than liquid
(such as water). By controlling air flow and optimizing its effect on
cooling, most problems could be solved.
I think I knew this, but until it was
put that way I did not fully appreciate it. Another thing I
re-realized concerns IT and facilities integration. Since the
beginning of my writing about the data center segment, many people
have said that the difficulty of managing data centers is primarily
IT and facilities’ differences in culture and lack of close
collaboration. Some remedies were suggested, such as making both IT
and facilities report to the same boss and/or letting IT be
responsible for the power bill. Those are fine, but they are at too
high a level. What can we actually do? Sherman and FF are advocating
to create a digital replica (mathematical model) of a physical data
center. The model is used to test multiple data center configurations
to find the best before putting the real IT infrastructure in place.
This makes sense. I have toured many newly constructed data centers.
Standing on an empty floor, I often wondered how they would lay out
IT equipment to manage the entire data center in an energy efficient
way. They do not know in advance how the IT equipment will be laid
out and how electric and mechanical systems can support it. Come to
think of it, it is a scary thing.
Now for my next questions. Developing a
mathematical model is fine if we are talking about new construction.
Granted, many new data centers are popping up everywhere,
including in Silicon Valley, but there is a far greater number of existing
data centers. If the model cannot apply to existing ones, FF’s
solution is very limited. But if it can, that means a great business
opportunity. FF is often called in to find a solution for an existing
data center that has extra capacity (in theory) to host more IT
equipment but cannot expand further for some reason—maybe there are
hot spots. This is called stranded capacity. By diagnosing the root
cause, they can fix the problem by constructing
a virtual facility and analyzing it.
This is great, but there is no
mathematical model for existing data centers, which consist of
hundreds or thousands of pieces of IT and facilities equipment. How
do you collect a list of equipment and logical connections to
construct a model for that? Initially, FF collected and entered
information by hand, a time-consuming and error-prone process. Later,
they created an interface to bring in data automatically from
multiple sources, such as IT configuration databases that might be
produced by someone like Asset Point with their autoscanning of IT
equipment. With this interface, FF could work with a company like Asset Point.
A natural question is whether there
exists a standard for a communications protocol and data format to
share the data created by each DCIM tool. Unfortunately, at this
point there is none, although FF uses XML as a base. Even with XML,
you could still have your own data formats, although it might be
easier for conversion because XML is ASCII based. In any event, FF
developed their own interface and data formats, which they share with
their partners, like Intel, Nlyte, Aperture, RF Code, and SynapSense. This allows asset and monitoring
information to flow into the virtual facility model.
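FF's actual schema is not public, so the record below is a made-up example, built with Python's standard xml.etree.ElementTree, just to illustrate what an XML-based exchange of asset and monitoring data between DCIM tools might look like. The element and attribute names are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical asset/monitoring record; not FF's or any partner's real schema.
asset = ET.Element("asset", id="rack-42-server-07", type="server")
ET.SubElement(asset, "location", room="DC1", row="42", slot="07")
ET.SubElement(asset, "power", unit="W").text = "312"
ET.SubElement(asset, "inlet_temperature", unit="C").text = "24.5"
ET.SubElement(asset, "source", tool="monitoring-vendor-X",
              timestamp="2012-09-16T10:00:00Z")

# Any tool that agrees on the format can parse this and feed a virtual facility model.
xml_bytes = ET.tostring(asset, encoding="utf-8")
print(xml_bytes.decode())

parsed = ET.fromstring(xml_bytes)
print(parsed.find("power").text, parsed.find("inlet_temperature").text)
```

Because XML is plain text, even tools with different schemas can at least convert between them, which is the point the paragraph above makes.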
Well, this is interesting. It would be
great if FF, or whoever leads the standardization of data formats,
could integrate many more DCIM tools into their virtual facility
platform and accelerate the adoption of DCIM. I explored this in my chat with Sherman.
FF is working with Intel as a
development partner, and their solution interacts with Intel’s data
center manager (DCM). Intel has established an interface with
data coming from servers and is working with FF to merge their
interface with it. Since the DCIM market is in its infancy, there are
no standards. Cooling and electrical solution providers like
Schneider and Liebert-Emerson and others have their own interface and
data formats. I know Intel is big and that more than 80% of all the
servers in data centers run Intel chips. Is Intel powerful enough to
force a standard to unite DCIM tools? After all, we need to convince
facilities types to agree on a standard, and they are not used to that.
Sherman thinks that the most important
thing for really optimizing the efficiency of data centers is to
understand data from servers, which is the real culprit, not cooling
or electrical systems. "If Intel controls such data, why not?” he
continued. It would be IT, not facilities, that would set the
standard, he said.
This argument is convincing, but my
skeptical nature forces me to wonder if the facilities types would go
for a standard. In the BMS market, vendors were forced to support an
interface with the Web because the Web revolution was so powerful
that they needed to support the Web/IP protocol. We need a similar
magnitude of scale to force the standardization of data formats so
that each DCIM tool can share information on a single platform like
FF’s. I do not have any idea what that would be. Would it
be a power
crunch, I wonder?
How about adoption? FF has roughly two
types of customers: Web/Internet and mission critical. The former
includes Intel, Facebook, Google, and Microsoft. The latter includes
Bank of America, which will soon announce its adoption of FF’s
solution, and JPMorgan Chase. FF is also targeting medium-size data
centers, as they expect them to get the same benefits as large data
center players. The company originally came from Europe, and their
presence there is fine. But they have yet to penetrate the Asian
market, although they have customers there for designing server boxes
with their tools.
As for channels and reselling their
products and services, EYP/HP might be the closest to being
certified, as FF is in discussions with them.
As Chuck Rego of Intel mentioned to me,
we need to cover both the monitoring and the capacity planning sides
of DCIM. If somehow FF can standardize the data for DCIM and unite
both sides, DCIM will become mainstream, and much of the "damage
caused by IT" may be avoided.
At the recent Future Facilities (FF)
6SigmaDC conference, Chuck Rego, chief architect at Intel, delivered
a keynote speech. Intel manages their data center with their
homegrown DCIM tools and others, including the one from FF.
Stranded capacity is capacity that IT
cannot use because of a data center’s configurations and layouts.
When we design a data center, we prepare enough power and cooling to
meet IT needs. But depending on how we deploy facilities and IT
equipment, we may not be able to fully utilize the capacity
allocated. Chuck started his talk by saying that he wanted to get a
handle on stranded capacity by measuring and quantifying it.
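As a back-of-the-envelope sketch of what measuring and quantifying stranded capacity could look like, here is a tiny calculation with purely hypothetical per-rack figures of my own; the idea is simply the gap between what was provisioned and what the layout actually lets IT draw.

```python
racks = 100
provisioned_kw_per_rack = 12.0   # power/cooling built out for the peak design point
usable_kw_per_rack = 8.0         # what air flow and layout actually allow IT to draw (hypothetical)

stranded_kw = racks * (provisioned_kw_per_rack - usable_kw_per_rack)
print(stranded_kw)               # 400 kW of capacity paid for but unusable by IT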
The two most important things in data
center operations are reliability and utilization. When we discuss
data center energy efficiency, the most-used metric is PUE. Chuck was
one of the original five people who discussed the definition of PUE,
even before The Green Grid (TGG) defined it officially. What is missing from the current PUE is the
incorporation of load information. TGG has been working to
incorporate IT utilization and other information to improve PUE.
Another metric, CADE, which was suggested jointly by Uptime Institute
and McKinsey, also considers the utilization of IT and facilities
equipment in its definition. However, I am afraid it has not caught
on with the majority of data center operators. PUE is still the
dominant metric for energy efficiency for data centers.
Chuck wanted to find out how
utilization information might have an impact on PUE. He set up a
model that assumes an average of 8 kW/rack and a peak of 12 kW/rack.
With this assumption, we can obtain fairly low PUE. Does this level
of PUE hold when the pattern of operations changes? What if we
calculate PUE for an environment where IT utilization is low? With
these average and peak power assumptions for a data
center, PUE is 2.0. But under a utilization factor of only 20%, the
actual operating PUE goes up to 5.7. This is because other supporting
elements (both mechanical and electrical) were set up to support much
higher loads. He calls this type of PUE actual operating PUE. The
point is that the way you operate your data center could make a big
impact on the actual efficiency of your data center, even though it
was designed to be energy efficient for average utilization.
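The jump from 2.0 to 5.7 is easy to see with a toy model. The sketch below is my own simplification, not Chuck's exact calculation: it assumes the mechanical and electrical overhead sized for the design point is largely fixed while IT power scales with load. With fully fixed overhead, a design PUE of 2.0 lands near 6 at 20% utilization, in the same ballpark as the 5.7 he reported.

```python
def actual_operating_pue(design_pue: float, utilization: float,
                         fixed_overhead_fraction: float = 1.0) -> float:
    """Toy model of PUE at partial load.

    design_pue: PUE at the design (100% utilization) point.
    utilization: actual IT load as a fraction of the design IT load.
    fixed_overhead_fraction: share of the facility overhead that does not
        scale down with load (chillers, UPS losses, etc.).
    """
    overhead_at_design = design_pue - 1.0           # per unit of design IT power
    fixed = overhead_at_design * fixed_overhead_fraction
    variable = overhead_at_design * (1.0 - fixed_overhead_fraction) * utilization
    it_power = utilization                           # assume IT power tracks load
    return (it_power + fixed + variable) / it_power

print(actual_operating_pue(2.0, 1.00))  # 2.0 at the design load
print(actual_operating_pue(2.0, 0.20))  # ~6.0 when the overhead is fully fixed
```

The takeaway matches Chuck's point: the overhead was built for a load that is not there, so the same facility looks far less efficient when lightly utilized.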
Hassan Moezzi, director of FF, said
that there is a disconnect between the operations of the entire data
center on the one hand and server design and rack configuration and
layouts on the other. Most IT folks, including me, do not know or
care how each server is built; we’re not going to open up a chassis
and carefully review the components. According to Chuck, factors like
the following may make a 10% difference in energy efficiency:
Shadowed or unshadowed processors
(relative positions of multiple CPUs have impacts on the cooling
efficiency of each CPU)
Processor efficiency based on
different levels of workloads
Fan speed control
Even at 21°C, these factors affect efficiency,
and under ASHRAE’s increased temperature and humidity setting of
27°C, the difference would be much greater.
Chuck conducted experiments to find out what impact air flow has on data
center operations. He learned two things from his experiments. One is
the importance of finding the optimal location to measure
temperature. Traditionally, it is measured at the return points of
each CRAC unit. His experimentation indicated that temperature
control should be done at the supply points (inlets to servers)
rather than at the air return points at CRACs. At the return points,
there could be some complex air flow, so they may not accurately
reflect necessary cooling requirements for server loads. In his
experiments, the temperature oscillated widely at the return points,
while the supply temperature stayed pretty much constant. As the
temperature is increased from 21°C to 27°C, this trend would be even more pronounced.
Another finding was the need to set
cooling at a higher temperature. At higher temperature, cooling needs
are relaxed, while the IT side may increase power consumption with
higher fan speed and silicon leakage (at a higher temperature, CPUs
tend to consume more power). So the difference between the gain by
facilities and the loss by IT should be carefully weighed. In raising
the temperature, reliability and performance should not be
compromised. The experiment involved 900 servers for 10 months and
tried several temperatures, ranging between 21°C and 35°C. But he
did not observe any performance degradation or visible failures at
all. This is quite impressive, with real data to back up the result.
Chuck then talked about the placement
of sensors. If we want to obtain useful data from each server, we
need to attach a sensor to each server. In a big data center, the
number of servers can be in the tens of thousands, and it is not
reasonable to assume we can attach one sensor to each server. He then
talked about smart servers, which come with an embedded sensor. The
measurement of relevant information, such as temperature, can be done
underneath the OS (so that it is applicable to either Linux or Windows).
Moreover, cooling traditionally has
been static and unchanging, even with different loads. But loads
change dynamically, and cooling needs should change accordingly.
Otherwise, some cooling capacity is wasted. When IT decides to move
virtual machines (VMs) from one server to another, the loading factor
of each server changes, and the cooling requirements change with it. Power
and cooling requirements also could be adjusted, if more accurate
loading and operating data are available. Intel has a prototype to
give feedback to the dynamically changing server environment and let
some servers sleep to optimize the energy efficiency of the whole
data center. The last time I talked to PowerAssure, their product had
such a feature and worked with Intel.
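As a minimal sketch of the consolidation idea described above, here is a greedy plan that packs VM loads onto fewer hosts so the emptied ones can sleep. The data structures and the policy are my own placeholders, not Intel's prototype or Power Assure's product.

```python
def plan_consolidation(hosts, target_utilization=0.7):
    """Greedy sketch: pack VM loads onto as few hosts as possible.

    hosts: dict of host name -> list of VM loads (fractions of one host's capacity).
    Returns (placement, hosts_to_sleep).
    """
    vms = sorted((load, host) for host, loads in hosts.items() for load in loads)
    vms.reverse()                                    # place the biggest VMs first
    placement = {host: [] for host in hosts}
    for load, _ in vms:
        # pick the busiest host that still has room under the target utilization
        candidates = [h for h in placement
                      if sum(placement[h]) + load <= target_utilization]
        candidates.sort(key=lambda h: sum(placement[h]), reverse=True)
        placement[candidates[0]].append(load)        # assume capacity always suffices
    to_sleep = [h for h, loads in placement.items() if not loads]
    return placement, to_sleep

hosts = {"srv1": [0.10, 0.15], "srv2": [0.20], "srv3": [0.05], "srv4": []}
placement, to_sleep = plan_consolidation(hosts)
print(placement)     # in this example all of the load fits on one host
print(to_sleep)      # the rest can be put to sleep, and their cooling relaxed
```

The feedback loop Chuck described would then tell the cooling side that the sleeping hosts no longer need full air flow.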
Sherman Ikemoto of FF said that what
ultimately decides energy efficiency for a data center is data from
servers, not from facilities equipment. I was somewhat skeptical
about that. But after Chuck’s presentation, I am more convinced of
his opinion. Maybe we have been tackling the symptoms of the problem
rather than its root cause. The problem is, in Sherman’s phrase,
"damage done by IT." But we were not dealing with the real
problem of controlling IT equipment. Some time ago, Emerson issued a
white paper on Energy Logic and claimed this about the power saving
at the server level:
1 watt savings at
the server-component level creates a reduction in facility energy
consumption of approximately 2.84 watts
Although it was saying the same thing,
it was not positioned to emphasize the point that both Sherman and Chuck made.
By changing our mindset, we may make progress in improving data
center energy efficiency.
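A rough way to see how a 1 watt saving at the component level can cascade to something like Emerson's 2.84 watts at the facility level: every watt not drawn by the silicon also avoids losses in the power path and avoids heat that the cooling plant would otherwise have to remove. The efficiency figures below are my own illustrative guesses, not Emerson's published numbers, so the result only lands in the same neighborhood.

```python
# Illustrative (not Emerson's) efficiencies of each stage in the power path.
stage_efficiency = {
    "dc_dc_conversion": 0.90,
    "ac_dc_power_supply": 0.85,
    "power_distribution": 0.96,
    "ups": 0.91,
    "transformer_switchgear": 0.98,
}
cooling_watts_per_it_watt = 0.7   # illustrative cooling burden per watt of IT heat

facility_saving = 1.0             # start with 1 W saved at the server component
for efficiency in stage_efficiency.values():
    facility_saving /= efficiency # upstream stages no longer lose power on it
facility_saving *= (1.0 + cooling_watts_per_it_watt)

print(round(facility_saving, 2))  # roughly 2.6 W with these made-up numbers
```

Whatever the exact stage efficiencies, the compounding is the mechanism behind the Energy Logic figure: savings at the server multiply on their way back up through the facility.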
Innovation Center Denmark–Silicon Valley (in partnership with Trinity Ventures and Squire Sanders) presented this symposium June 3–4. See my previous blog for details about the program and speakers.
There were a number of interesting and informative sessions and presentations. Unfortunately, space is limited and I cannot cover all of them. Instead, I will discuss some of the noteworthy points of some of the presentations and panel discussions.
Overall, technology synergy between the Internet/telecom and the power grid was discussed over and over again. For example, in a panel entitled "Smart Grid Solutions Marketplace," moderated by our Jon Guice (see picture below), David Pejcha, director of marketing for Silver Spring Networks, discussed the close synergy between the Internet and the power grid. He came from Cisco and said his background in networking was easily transferable to his new job. Silver Spring Networks provides infrastructure technologies and products for smart grid. See my previous blog on it.
Jon Guice of AltaTerra opened a panel discussion.
Even though the final communication protocol has not been picked yet, Silver Spring Networks is betting on IP, specifically IPv6. If IP becomes the protocol of choice, each device and piece of equipment will need a unique IP address, and IPv6 will have to be in place to accommodate so many IP addresses and to provide security.
David Pejcha of Silver Spring Networks presents his company’s business.
As IP is becoming the de facto standard for the transmission/distribution protocol, and as home area networks need to work as part of the smart grid, Intel is interested in smart grid as well. Lorie Wigle of Intel (see a picture below), who is also president of Climate Savers Computing, informed us that Intel hosted a meeting at its headquarters in which IEEE discussed smart grid. The video of that meeting is available here.
Lorie Wigle of Intel talks about some of the challenges of smart grid.
Another discussion emphasizing the synergy between the Internet/telecom and the power grid was given by Prof. Randy Katz of UC Berkeley. His research attempts to overlay an information structure on the power grid that is similar to the signaling system controlling dumb telephone networks.
Finally, Denmark appears to be pretty advanced in the area of clean tech and smart grid. This, however, should be looked at from the following perspective. Denmark is small and its population is concentrated almost exclusively in Copenhagen. The United States is 200 times as big and its major cities are spread out. What is possible in Denmark may not be readily applied to the United States. Scalability of technologies and practices is very important.
Recently, Cisco announced its intention to enter the smart grid market. As the smart grid needs to link many meters and sensors to reflect power usage and consumption information in real time, networking gear and technologies are a must to support it.
Monitoring gear is placed in data centers to measure temperature, humidity, loads, and the power usage of each piece of equipment. Vendors like SynapSense, Sentilla, and Sensicast provide products for this space.
Katie Fehrenbacher of GigaOM reported the re-emergence of WiMAX in the smart grid segment.
Now Alvarion, a WiMAX gear vendor, is experimenting with using its WiMAX gear as an aggregation point for data collected from smart meters. As is known, WiMAX is not being given much attention these days. However, it might find an application area in the smart grid space. Other companies are applying WiMAX as well:
GE and Intel have developed a WiMAX-based smart meter using startup Grid Net’s software, and other startups like Full Spectrum are selling gear that uses WiMAX where power is distributed from generation to substation.
Some technologies can be applied to areas other than the one they were developed for. WiMAX may be such a technology.
Intel:
· Energy efficiency in manufacturing and automation at data centers
· #1 purchaser of carbon credits (according to EPA)
HP:
· Merger with EDS
· IT consolidation (virtualization, data center consolidation, 85 → 6)
· Merger with EYP
· Renewable energy (1.2 MW solar campus)
IBM:
· Green IT (data center efficiency, server consolidation, etc.)
· 110k of 400k employees telecommute
· Water conservation
SAP:
· Monitoring footprints
· Use of renewable energy
· Reduction of business trips
· Data center consolidation
Sun:
· 20% carbon reduction from 2002 by 2012 was accomplished in 2008
· Data center consolidation
· Efficient data center (in-row cooling with no raised floors and an air economizer, Denver, CO)
· Cubicles for only 50% of the total number of employees

IBM:
· Not slowed. Green IT payback time is less than 2 years (measured in months). Can possibly save $6–9M/month
· CFOs are more sensitive and pressure CIOs
HP:
· Payback; the economy is a big driver

What things have you not done yet?
Intel:
· Culture of sustainability (4% of bonus allocated to sensitivity to environmental issues)
HP:
· Grassroots & community efforts for sustainability
IBM:
· Role of technology not well defined
SAP:
· Understand long- vs. short-term impacts
· Change thinking patterns
Sun:
· Education of the world
· Internet use is carbon negative

Intel:
· Linking each action with carbon & energy consumption via a dashboard terminal
· Power consumption by PCs and the manufacturing process is bigger than that of data centers
SAP:
· A crisis is a good time for innovation
· Application architecture should be standardized for further energy efficiency
IBM:
· Data center consolidation reduces software licensing, software/hardware maintenance, labor, and real estate construction fees
HP:
· Automation is a key for saving
· Saving is good for the environment
Sun:
· DC power distribution within data centers is no longer better than AC
· Need to assess software architecture for energy efficiency
A few articles have been written on this subject. One such article is by Chris Preimesberger of eWeek. Intel is developing WISP (Wireless Identification and Sensing Platform) and is involved with product concepts...
“ranging from low-power, self-sustaining sensors that can gather and record data on weather and other environmental conditions all the way up to larger sensors with transmitting devices that can help monitor and run data centers.”
Some of the uses at a data center could be:
"If one corner [of the data center] is running too hot, then the automation -- in concert with server virtualization -- can redirect the workload to where the air is cooler, smoothing out the load and conserving power."
I have written about dynamic power management before. We can move virtual machines (VMs) to specific servers, making several other servers idle and eventually turning them off. Then, cooling for those turned-off servers can be turned off. If this is complemented with this type of sensor, we could further control cooling effectiveness.
To do this, a smart software system needs to be put in place to work with the IT and infrastructure equipment.
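As a sketch of what such a software system might do with the sensor data: read a thermal map of the room, find the hot corner, and shift work toward the coolest zone. The zone map, the threshold, and the migration policy below are hypothetical placeholders for whatever the orchestration and facilities layers actually expose.

```python
def rebalance_for_hot_spot(zone_temps, workloads, threshold_c=30.0):
    """If any zone runs too hot, plan to move work to the coolest zone.

    zone_temps: dict of zone name -> inlet temperature (°C), e.g. from WISP-style sensors.
    workloads:  dict of zone name -> number of movable VMs in that zone.
    Returns a list of (vm_count, from_zone, to_zone) migration suggestions.
    """
    coolest = min(zone_temps, key=zone_temps.get)
    moves = []
    for zone, temp in zone_temps.items():
        if temp > threshold_c and zone != coolest and workloads.get(zone, 0) > 0:
            # Move half the movable VMs out of the hot corner (arbitrary illustrative policy).
            moves.append((max(1, workloads[zone] // 2), zone, coolest))
    return moves

zone_temps = {"NE-corner": 33.5, "NW-corner": 24.0, "SE-corner": 26.5, "SW-corner": 25.0}
workloads = {"NE-corner": 40, "NW-corner": 10, "SE-corner": 22, "SW-corner": 15}
print(rebalance_for_hot_spot(zone_temps, workloads))
# [(20, 'NE-corner', 'NW-corner')] -> cooling for the emptied corner can then be relaxed
```

Once the workload has moved, the same loop could tell the cooling system to back off in the zone that is no longer hot, which is exactly the kind of IT-and-infrastructure coordination the paragraph above calls for.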
When one innovation is implemented and applied, more ideas are invented to make data centers even more energy efficient and green. Research should go on!!