Posted By Zen Kishimoto,
Wednesday, May 15, 2013
This is a continuation from Part 1.
Interfaces required for multiple market segments
I think their decision to keep
themselves a software infrastructure company is smart. In this way,
they can apply their systems to many
market segments where operations are involved.
When operations are performed, some kinds of data are generated, and
oftentimes those data should be collected, stored, and analyzed to
tune and improve operations and business processes. In order to dive
into new domains, they need to keep adding new interfaces as well as
adding and revising in areas they already cover. Dave Roberts told me
that they now have close to 500 interfaces.
Coming from the IT segment, I see
people tending to converge to a handful of well-defined standards
and, therefore, interfaces. When I first set foot in the data
center market, I was very, very surprised to find out that there were
many interfaces on the facilities side. Although BACnet
is becoming the protocol of choice for data center facilities,
several other protocols, such as Modbus, are still
being used. An IT guy like me tends to think we can force facilities
to adopt a single standard to consolidate all the protocols into one,
which is IP. I now know it does not work that way. I got involved in
the Smart Grid Interoperability Panel, which was
organized to come up with a set of standards to allow the smart grid to
function without conflicting technologies and protocols. The power
industry has been around longer than IT, and it has many standards
of its own. The power industry has been keeping
the lights on for more than 100 years, and it will not listen to IT
about consolidating everything onto IT technologies and protocols, for good reason.
How to translate domain-specific
requirements for software developers
OSIsoft maintains that their core PI
system is generic and does not change when they apply PI to different
vertical markets. When they pick a new domain, they add new
interfaces specifically required for that domain. So every time they
step into a new domain, they need to worry about yet more interfaces
to maintain. This seems daunting, but it is the only practical way to
have a generic system to apply to many areas, such as the power
industry, oil and gas, and building management segments.
For each vertical domain there is a
dedicated industry management team that includes experts in that field
who can communicate natively with customers. The experts get
agreement on requirements, then translate those requirements into a
specification for software development teams and ecosystem partners to implement.
How to enter a conservative industry
like the power industry
IT's pace of change is very fast. New
technologies come and go quickly, sometimes within months, if not
days. In contrast, utility companies are very conservative and do not
replace their technologies and equipment for many years until new
technologies or equipment are proven to work solidly. I was curious
to find out how a software company like OSIsoft could penetrate into
the conservative power industry. In the 1990s, OSIsoft partnered with
vendors such as ABB.
Through those partners' introductions to utilities, they started to work with
utility players and expanded their presence in the utilities
market. Although there are a lot of similarities, each utility has
specific needs, which triggers customization. But OSIsoft does not
provide customization services. Customization is done by utilities
themselves or system integrators. Nearly all—97%—of their revenue
comes from software maintenance; the remaining 3% comes from basic
services such as installation. So high configurability is
important for their product.
Sharing data among multiple entities
In general, if two entities work
together, it would be most beneficial to share data between the two.
For example, let me refer to the power grid in California. California
ISO (CAISO), which reliably balances power
supply and demand on the transmission grid, does not maintain the
transmission lines. The lines are maintained by PG&E,
a local utility in my region that also is responsible for the
distribution grid. Power imbalances can be caused by operational or
equipment problems. Therefore, it is very useful if CAISO shares data
with PG&E so that they can work together to solve the problem.
For this, OSIsoft has released a new feature called PI
Cloud Connect, which allows highly granular
data to be shared with specific accessibility control in a cloud
setting. In this way, any number of organizations can share
time-series data with a specific access privilege. Yes, this is a
good application of ICT.
Once data are captured and stored, they
are analyzed to derive useful information to improve operations and
business processes. Analytics can be done at many levels. They can be
as simple as out-of-bounds values analysis all the way up to
prediction. Here OSIsoft does not develop its own analytics packages but
makes sure others' packages plug seamlessly into the PI system. I
am currently looking into analytics in more detail. Because analytics
is a very broad term and it contains so many angles, most
presentations or white papers on products do not mention it in
detail. That is frustrating, to say the least.
What is an example of analytics in the power industry?
Analytics example 1: equipment health
Do you see boxes of different colors
and shapes on utility poles around you? One of those boxes is called
a transformer and is used to step down high voltage to lower voltage
before power gets to your home. Most transformers are long-lived
electromagnetic devices that degrade physically as time goes by. If a transformer
malfunctions or fails, power to your home will be interrupted. It
would be nice to know when to repair or replace it before it fails.
One of the analytics packages can monitor its health, compare it against
the historical trend, and provide an early warning.
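To make this concrete, here is a minimal sketch in Python of that kind of early-warning check; it is my own illustration, not OSIsoft's analytics, and the readings, threshold, and function names are all assumptions.

    # Hypothetical early-warning check: flag a transformer whose latest reading
    # drifts well beyond its own historical behavior.
    from statistics import mean, stdev

    def early_warning(history, latest, n_sigmas=3.0):
        # history: past hot-spot temperatures (deg C); latest: newest reading
        mu, sigma = mean(history), stdev(history)
        return latest > mu + n_sigmas * sigma  # True -> schedule inspection

    readings = [62.1, 63.4, 61.8, 64.0, 62.7, 63.1]   # made-up historical data
    if early_warning(readings, latest=71.5):
        print("Transformer trending out of range; schedule maintenance")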
Analytics example 2: wind power
Another example is in wind power
generation. Wind is hard to predict. It is blowing one moment but not
the next. It is vital to balance the demand and supply of power every
second. If we cannot predict power generated by wind, it makes it
more difficult to balance power. So it is very important to predict
when wind blows and when it stops. Predictive analytics is used
widely in weather forecasting, and wind prediction is part of it.
First, a prediction model is developed from the historical data, and
the model is fine-tuned and modified as more data are collected.
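As a rough illustration of that fit-and-refine loop (my own sketch with made-up numbers, not a production forecasting model), one could fit a simple curve from wind speed to power output and refit it as new observations arrive:

    import numpy as np

    def fit_power_curve(wind_speed, power_out, degree=3):
        # Fit a simple polynomial wind-speed -> power model from historical data
        return np.polyfit(wind_speed, power_out, degree)

    speeds = np.array([3.0, 5.0, 7.0, 9.0, 11.0, 13.0])   # m/s, made up
    power = np.array([0.1, 0.6, 1.8, 3.6, 5.0, 5.8])      # MW, made up
    model = fit_power_curve(speeds, power)

    # Predict output for a forecast wind speed; refit the model as actuals arrive
    predicted_mw = np.polyval(model, 8.5)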
Analytics example 3: smart charging
Currently, in California, power demand
increases as the day goes on and hits a peak in the early afternoon.
It goes down to its lowest point during the night. An electric
vehicle (EV) like the Nissan Leaf or Chevy Volt is known to draw
about the same amount of power as a typical household. If they are
charged when power demand is at peak, we run out of power to satisfy
demand. But during the night, we usually have plenty of power
available, and it is suitable to charge EVs at night at home. This is
what a typical EV owner does now. As more public charging stations
pop up, and faster yet power-hungry new charging technologies
proliferate, charging may be done during peak time. That would
disturb the power balance and lead to outages. For this reason, smart
charging needs to be developed and deployed. The result of this type
of analytics would dynamically allow charging to start only when supply can accommodate the additional demand.
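A minimal sketch of the decision such a system might make (purely illustrative; the headroom and price signals are assumptions, not an actual utility interface):

    def allow_charging(grid_headroom_kw, station_demand_kw, price_per_kwh, price_cap=0.20):
        # Permit charging only when the grid has spare capacity and power is cheap
        return grid_headroom_kw >= station_demand_kw and price_per_kwh <= price_cap

    # Example: defer a 50 kW fast charge during an expensive afternoon peak
    print(allow_charging(grid_headroom_kw=30, station_demand_kw=50, price_per_kwh=0.35))  # False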
Different utilities could use an
analytics package developed by one utility, but OSIsoft does not
share a particular user's analytics algorithms with others. OSIsoft has
user communities, and those who belong to them might
share such an algorithm through the community. The T&D
User Group community has existed for 20
years, and its members tend to share information when there is no
competition among them.
Analytics example 4: more renewable
energy sources for power generation in California
California has adopted a renewables
portfolio standard, known as RPS. This specifies
the minimum percentage of renewable energy sources, like solar and
wind, in power generation. California plans to attain 33% of all the
power from renewable energy sources by 2020. Although not all
renewable energy sources are as highly variable as wind power, a lot
of unknowns will be thrown into the power grid. Constant power-supply
predictions based on ever-changing weather (the wind may or may not
blow at any given minute, and solar power goes down when clouds set
in) will be vital to keep the power grid stable all the time.
Applying PI to more demanding applications
The smart grid is an effort to make the power grid
smarter. Our physical infrastructure consists of more than just
the power grid; we also need, for example, gas, water, waste,
transportation, government, street lights and traffic systems. Dave
is working on the next topic beyond the power grid, which is the
city. According to Dave, a smart city is
defined differently by different people. But currently, US cities
like Austin, Seattle, New York, and Chicago have their smart city
projects. OSIsoft is involved in some of them, and a public
announcement is coming shortly.
Collecting, aggregating, storing, and
linking all sorts of data from its different sources would provide
tremendous intelligence to a city. A utility at the conference
reported that they collect 100,000 data points per second. If we implement a
system for a smart city, the number of data points would explode by
two to three orders of magnitude. That means tens of millions of data points per
second would bombard the PI system. Even though the PI system is
created to cope with a large amount of data of many kinds, at some
point, they may have to alter their architecture and technologies to
process such a massive amount of data. That makes me interested in
talking to their technology visionary. Stay tuned for that in a future blog.
Posted By Zen Kishimoto,
Tuesday, May 14, 2013
Smart grid is where power, IT, and
communications meet. In this blog, IT and communications technologies are
grouped as ICT. These days, most industry areas have become so complex that we
cannot cope with problems without applying ICT.
When smart grid was first introduced, Cisco
declared that the power grid would be much bigger than the Internet. From the
data point of view alone, the amount of data produced and processed on the
power grid is on a scale that none of us has experienced before. And with
more-sophisticated monitoring technologies, the volume of data will
increase even further. The data collected may include equipment health, power flow, and
quantity of power consumption. Simply collecting data does not do much good. We
need to process what we collect—make heads and tails of it—to produce useful
information for better operation and maintenance. This is the Big Data
problem that is getting a lot of attention these days in ICT and other industries.
Usually, Big Data problems are attributed to the
proliferation of social networking services (SNSs) such as Facebook, Twitter, and LinkedIn. But with the
advent of low-power and low-priced, yet very sophisticated, end devices and sensors, different kinds of Big Data problems are
emerging, such as the one I just mentioned.
There are several companies that apply
their software systems and tools to solve Big Data problems in a particular
vertical market, such as the power industry. When I was covering data centers
and their energy efficiency, I visited OSIsoft
at its San Leandro, CA, headquarters
in 2009. They collect data sent by end devices like sensors and their equivalents and store, analyze, and
visualize the collected data to take appropriate actions for improving
operations. Since that visit, my focus has expanded to include the power
industry, which is only one of the markets OSIsoft addresses (see the other markets here).
Recently, I had an opportunity to attend
their users conference in San Francisco.
I listened to several representatives of
utilities and others in the power industry talk about their use of OSIsoft's
PI system. I also talked to Dave
Roberts, Fellow and market Principal – Smart Cities, who is an expert in
the power industry.
The following is my summary of our
discussion, with my comments.
Some power grid basics
I am targeting this blog at IT
people, not power people. So I think very simple, basic information is
useful. The power grid is a big connected network of power lines. The power
grid consists of two types of grids: transmission and distribution. Generated
power is transmitted at a very high voltage via transmission lines to
neighborhoods of consumers. Then the high voltage is transformed to much lower
voltage, and power is delivered to consumers like you and me via the
distribution grid. Because power must be consumed as it is produced, demand and
supply need to be balanced all the time. Power on transmission lines is managed
by each utility or by organizations called ISOs/RTOs
(independent of utility companies) to make sure the balance of demand and
supply is maintained—to keep the lights on. Also, as with computer networks, it
is important to know the health and status of each device and all the equipment
hanging from the grid. As in computer networks, such information is collected
from multiple places in the grid. The number of collection points grows as more
technologies are developed.
What OSIsoft does
Although from my conversations with other
OSIsoft people, I knew what business they were in, I just wanted to make sure
who they are and what they do. They provide a software infrastructure system
to connect remote devices, gather/collect/aggregate data from them, and store
and retrieve the collected data for further analysis, such as data analytics
and visualization. They do not provide end devices like sensors or analytics
engines. In other words, PI is one of the important components of the Internet
of Things, M2M, or intelligent systems. Different people define the
Internet of Things, M2M, and intelligent systems slightly differently, and
the terms are often used interchangeably.
Here's an oversimplified view of PI (figure: my conceptual view of PI).
PI is not an operating system but there is
some analogy between PI and Windows. Windows provides a base operating
environment for applications to run in. Microsoft in general does not provide
any application packages but provides this base plus some tools/utilities and
libraries via APIs. Third parties exploit this platform to write applications.
PI is similar and does not provide applications, including data analytics
packages. So PI can be said to be a general platform, with applications left to third parties.
This will continue in Part 2.
Posted By Zen Kishimoto,
Wednesday, May 08, 2013
A previous blog explained how the
connectivity of end devices leads to intelligence. Simply connecting the
devices does not by itself produce intelligence, but connecting them to a
bigger system that aggregates, stores, and analyzes their data does. Many
details still need to be worked out.
An ecosystem for intelligent systems
consists of several players, such as chip, OS, middleware, end device, cloud
service, back office processing and analytics providers, and system
integrators. Ayla Networks, which is
still in stealth mode, claims that they provide secure connectivity for an
end-to-end solution for an intelligent system. They currently focus on the
consumer market but do not rule out expansion into other areas.
I sat down with David Friedman, CEO of Ayla Networks, during the recent Design West to find out what they
are up to.
Who they are
David was previously VP of business development at a wireless chip company. After
selling it in 2010, he saw a business opportunity. At that time, end devices
were beginning to be connected to form the Internet of Things. But the ugly
reality was that those thousands of end devices were very different from each
other, with microcontrollers in a variety of architectures and operating
systems, as compared with the nonembedded world dominated by Windows and Linux.
All those differences sure were a hindrance to accelerating and proliferating
the Internet of Things. David and his cofounders saw the need for a generic
solution that could absorb these differences. That led to the formation of Ayla Networks.
David and his team started to work on their
solutions. Using his background as a chip guy, he teamed up with STMicro because ST is a major
player in the microcontroller market. Ayla Networks is a software company and
does not deal with hardware, so this is a good combination. Chip vendors focus
on how to design and develop new and better chips but are not experts in
networking technologies such as Berkeley
sockets and SSL. In other
words, companies should focus on their core competency and outsource the rest.
In the same vein, application vendors are not experts in the lower layers of
software that support applications. During Design West, I heard from several
players that application vendors should outsource the lower layers and
concentrate on their core business; that is, design and develop applications.
So David is saying "Come to us. We will
absorb any protocol differences and security needs to support your
applications. You do not need to worry about the lower layers and other details."
What they do
Ayla provides end-to-end connectivity
software; for example, to remotely control your AC from outside your home with
your smartphone. If you implement something like that on your own, you need to
develop lower-layer software for the smartphone, including secure interfaces
with its OS and networking stack. Then you need to develop an application to
work with that infrastructure. Then you need to worry about how to connect it
to your target AC. Communication can be via cellular, WAN, LAN, or PAN. You
need to choose the right one. Finally, on the target AC, some mechanism needs
to be incorporated to receive data and control from your smartphone. For that,
a small board with a communications chip on it must be inserted along with the
lower-layer software. And as with your smartphone, you need to interface with
that chip's OS and networking stack, on top of developing applications.
What Ayla provides:
1. Client-side lower-layer software for applications
2. Networking solutions with security
3. Cloud services
4. Lower-layer software on target appliances
The client-side software can be integrated
with your applications and downloaded from Apple Store and Google Play like
other applications. Ayla provides whatever networking protocols are required by
the applications. In addition, they provide cloud services to connect your
client devices to target appliances. David did not elaborate on how they
provide such services. Cloud services consist of cloud infrastructures and
applications in the form of virtual machines. Because of the proliferation of
inexpensive cloud infrastructures services, a startup like Ayla can afford to
provide the cloud services. The lower-layer software on target appliances is
the same as #1. Application developers can focus on their core business of
developing applications without getting bogged down in lower-layer stuff.
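David did not show any code, but conceptually the pitch is that the application developer writes only something like the following, while the connectivity, security, and device protocols are handled by the layers underneath; every name and URL here is hypothetical, not Ayla's actual API.

    import json
    import urllib.request

    # Hypothetical cloud endpoint that relays commands to a registered appliance
    CLOUD_URL = "https://cloud.example.com/devices/ac-1234/commands"

    def set_ac_temperature(target_c, token):
        # The transport, security, and device-side protocol live below this call
        body = json.dumps({"command": "set_temperature", "value": target_c}).encode()
        req = urllib.request.Request(
            CLOUD_URL, data=body, method="POST",
            headers={"Authorization": "Bearer " + token,
                     "Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200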
Now this seems to require a lot of
technical expertise in several areas, such as embedded systems, networking, and
cloud. Although these areas are closely related, no one person could address
all of them. Although David did not reveal details about his team, he did say
that he gathered technical people who had worked together for several years.
People who like to innovate and have a passion to create something new are
attracted to his team.
Devil is in the details
Many people have discussed controlling an
AC from outside with a smartphone or a tablet, and that by itself is nothing
new. David told me that now is the perfect time to bring their solutions to the
market. Technologies have advanced and the market is opening up. An article
by Reuters reports that by 2022 a typical household will own 50
Internet-connected devices, compared with 10 now. David said that we do not
want 50 solutions for 50 devices but only a single solution so that any new
device can easily belong to the existing network. He also emphasized that
creating a supportable product is really, really difficult.
Their infrastructure pieces must be easy to:
- configure with a lot of latitude
- implement with secure delivery
They claim that they have met all four requirements.
They are in a perfect position to collect
and aggregate data, but David did not reveal any future plan for business
exploiting such a position. But he did not rule out the possibility, either. If
I were an AC OEM, I would be very interested in analyzing control data sent by
smartphones, to reflect on how to tune my AC features. David told me that the
key to the use of Big Data is anonymization with the ability to opt in or out.
What about power consumption? Smartphones
eat a lot of power, and additional features like these would consume even more.
David told me that his developers pay a lot of attention to curbing power
consumption. Power-use optimization like that implemented in the iPhone would attain similar results.
We chatted a bit about power in general
when everything is connected. My view was as follows:
- Advantages: There are many advantages to deriving useful
information from generated data that might be otherwise discarded. Some
information can be used to save power.
- Disadvantages: Unless we can intelligently select which data to
collect, or keep, or discard, we will end up with a pile of useless data
occupying a lot of storage and server equipment, wasting energy.
I think what David said about the
disadvantages was interesting. He said that analyzing a vast amount of data,
transforming them to a small number of useful data, and discarding the rest
might do the trick. I do not know the feasibility of such a thing, but it is an interesting idea.
David did not give me any concrete future
plans, but this system can expand beyond the consumer segment to the commercial
and industrial markets. I think there is a reasonable level of traction
in the consumer market at this point, and
there will be greater demand later. In addition to a clear application for
turning an AC on and off, I can think of a few more examples. Sprinklers for
lawns are usually on timers, and occasionally they start to work even in the
rain while you are not at home. Your remote device can override this. Or better
yet, you can program sprinklers in conjunction with moisture-detecting sensors
buried in the ground and with sensors for other local weather.
But I think the really big applications are
in the commercial and industrial segments. I think it is very smart of Ayla to
choose the consumer market first. There are two reasons. The first is that the
commercial and industrial segments are known to be late adopters. The second is
that if you target very specialized and sophisticated industry-grade equipment,
how many people will know? But familiar appliances like ACs show up on many
people’s radar screens; after success in the consumer market, Ayla can enter those segments with name recognition.
The conversation stayed at a high level
because they are still in a stealth mode, but a public announcement is
forthcoming. Meanwhile, you can register to purchase their design kit.
Posted By Zen Kishimoto,
Monday, May 06, 2013
I hear more and more about intelligent
systems. Are they different from M2M and the Internet of Things? At the recent Design
West, one of the themes was intelligent
systems. What are they, and how are they different from M2M and the
Internet of Things? I attended one informative session by speakers
from a research firm, a professional organization, and vendors.
I may cover that session later but
offer a few takeaways now:
- Many people who were surveyed
expect the market for intelligent systems to grow, but they are not
using them yet.
- Application vendors would like to
concentrate on applications rather than the lower support layers. In
other words, concentrate on your core value and outsource the rest.
So I wanted to talk to someone who does
the lower layers, a.k.a. infrastructure. I was lucky enough to talk
to Wind River.
Although I never used their VxWorks
before, I knew the company before it became a part of the Intel
family. They were a specialist in embedded systems with the VxWorks
operating system and platforms, including middleware. Then they
expanded their product lines to include their versions of Linux and
Android. They are now providing infrastructures for intelligent
systems for end devices, although they do cover other areas, like
gateways and networking, as well.
Slide to show smart
end devices like vending machines (Source: Wind River)
End devices come in many sizes and
functions, as shown in the figure above, such as smart vending
machines and digital signage. Smart vending machines can collect data
about which products sell well, and signal when stock gets low. The
general trend is that end devices and equipment are getting smarter
or more intelligent, which is the driving force behind intelligent
systems. Wind River is addressing the need for those end devices to
be more intelligent.
To prepare for a meeting with Wind
River, I read several articles by the company, published here,
and listened to a Wind River presentation.
A figure made by Wind River (Source: Wind River)
The following is a summary of my
conversation with Wind River. I hope it helps my readers understand
intelligent systems, still a very young market.
What are the differences among M2M,
intelligent systems, and the Internet of Things?
I started to hear about M2M
a few years ago, and a little after that the Internet
of Things became widely known. Finally,
intelligent systems appeared on my radar screen. I just want to note
that there is no entry for intelligent systems in Wikipedia.
The current entry refers to a company with that name. In that entry,
there is a link saying, "For the computer science phenomenon, see
intelligence.” But I do not think that the
intelligent systems discussed here are the same as artificial
intelligence. This may not mean a lot, but it indicates that the term
is so new that there is no independent entry for it in Wikipedia yet.
Are they different? Based on my quick
search and Wind River's definition, M2M refers to technologies to
connect end devices via networks, whether wired or wireless (PAN,
LAN, or WAN). But the difference or the similarity of the Internet of
Things and intelligent systems is not clear. Actually, those three
things are being used loosely in the marketplace, as this field is so
new and is still evolving. My own take is as follows. M2M focuses on
device connectivity. Connecting end devices became possible, and that
in itself was a big deal. And then it led to a new phenomenon called
the Internet of Things. Then people realized that connecting devices
made the entire system more intelligent because the injection of
intelligence became possible, hence the term intelligent systems.
As we have a more connected society, whether the connection is human
to human, human to machine, or machine to machine, more intelligence
will be injected at many places.
So it seems that connectivity led to
the formation of intelligence. Then what is intelligence?
What is intelligence?
Does connectivity alone generate intelligence?
Take my favorite example, a smart
meter. The power meter you and I have at home used not to be very
smart. It simply measured our power consumption and recorded it. A
human meter reader showed up once a month to read it to find out how
much we consumed, regardless of when we consumed it. A smart meter
collects power consumption information every hour and sends it to
your regional utility. I challenged Wind River by saying that adding
connectivity to a meter alone does not generate intelligence. A meter
with connectivity does not seem any smarter than a traditional meter.
Both measure power consumption. I have a tendency to nitpick some
details that many people take for granted. Their answer was very good
and really articulated what intelligent systems are all about. They
said connectivity makes it possible to inject intelligence because it
is not possible to do so without connectivity.
Let's take an example they used. By
adding connectivity to a meter, it is now possible to implement new
things, including the following:
- Time-of-use pricing, which can distinguish when you use power.
The price during the night is the cheapest, and the price at peak
time (usually early afternoon) is the highest.
- Smarter (optimized) operations of
the power grid that may eliminate the need to build more power plants.
- A service delivery platform to the consumer.
A smart meter alone could not realize
these functions. But its connectivity injects a new set of functions
into the entire power grid, and that is the injection of intelligence
that makes the power grid system more intelligent and brings it
closer to being an intelligent system.
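For instance (my own illustration, with made-up rates), time-of-use pricing only works because the meter can report when the energy was consumed:

    # Hypothetical time-of-use tariff: the price depends on the hour of consumption
    RATES = {"off_peak": 0.10, "mid_peak": 0.18, "on_peak": 0.30}  # $/kWh, made up

    def rate_for_hour(hour):
        if hour >= 22 or hour < 7:
            return RATES["off_peak"]
        if 12 <= hour < 18:            # early-afternoon peak
            return RATES["on_peak"]
        return RATES["mid_peak"]

    def bill(hourly_kwh):
        # hourly_kwh: list of (hour, kWh) readings sent by the smart meter
        return sum(kwh * rate_for_hour(hour) for hour, kwh in hourly_kwh)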
Is there any other intelligence? In
addition to the above, data collected from smart meters and other
parts of the power grid will be analyzed with analytics packages and
useful information may be derived. That information can be used to
improve the operations and maintenance of the power grid. That is
intelligence. But Wind River is not currently addressing this type of intelligence.
What's wrong with the roll-your-own approach?
As I said before, unlike the IT market,
each vertical market needs a specific infrastructure, including many
different kinds of networking protocols. Wind River recommends that
you not implement your own solution but use the infrastructure of
someone whose job it is to develop and maintain such infrastructures.
Those in a specific vertical market should concentrate on their
applications (core value) rather than infrastructures (nonessential
value). This is the same message I heard from one of the vendors
during the early intelligent systems market analysis session. This
sounds very reasonable, and I think their message is a good one.
But I have two reservations. One is
specificity. If you are in a vertical market and require a specific
infrastructure but no one can provide it, what can you do? Like
OSIsoft, Wind River monitors the market and adds necessary
infrastructure support as needed. No single company could support
each and every interface. What reasonably can be done is to select
major ones and support them. The next figure shows a partial list of
standards for different markets supported by Wind River.
A partial list of
standards for different markets supported
by Wind River (Source: Wind River)
My other reservation is that Wind River
and the Intel family are not the only companies that provide such a
solution. What if, in the future, you need to interact with
organizations that use other vendors' solutions? There is no
guarantee that the systems will be interoperable.
I know it is not a fair question when
the market is still in its infancy. From my past involvement with the
Smart Grid Interoperability Panel (SGIP),
which defines standards for smart grid technology interoperability, I
know that standardization efforts will be required. Towards that end,
Intel has announced its Intelligent
Systems Framework (ISF). I hope that over time,
through input from others, such a framework will grow up to be a
standard that allows any end device and equipment to be freely
interoperable in intelligent systems.
Energy efficiency by intelligent systems
Do intelligent systems contribute to
energy efficiency or energy conservation?
Wind River stated here:
Considerations: Machines can perform power management tasks with
finer precision and faster response times than manual,
human-dependent systems – saving energy, prioritizing usage,
setting policies for response to outages, and the like.
That says it all.
Can we be optimistic?
I was somewhat pessimistic about
interoperability being settled in the near future, as I have
experienced SGIP's work and have seen many existing standards across
different vertical markets. Wind River was bullish about such a
settlement. I knew it was not a fair question but asked Wind River
about a timeframe. They said they were not certain but thought that
in five to ten years some kind of movement would occur for
interoperability. I certainly hope they are right. I, for one, want
to have such interoperability to exploit real intelligent systems
that not only make our society more convenient but also promote energy efficiency.
Posted By Zen Kishimoto,
Wednesday, March 27, 2013
A data center is a complex building. It
houses IT and facilities equipment along with office and amenity accommodations. Because of this, its
operations touch many different areas, and different automated
tools and operation techniques are required to manage and run it
effectively. It would be great if we had a handful of standards that
applied to most components for their management. In reality, it is
not so. In general, there are IT and facilities views of data centers
and having one single view has been hard. Because of this, IT and
facilities have been managed separately, although some attempts have
been made to manage them together.
It is necessary to understand the
infrastructure of a data center before you can manage it effectively.
A few classifications have been proposed for how data center
infrastructure is managed. One
informal categorization might be inventory,
change, capacity, simulation, and efficiency modeling, although some
analysts use more comprehensive categories.
One basic aspect
of data center infrastructure management (DCIM) is to measure and
monitor what's happening in a data center. To run a data center
effectively, we need to know what's in the data center (asset
management) and how each component is functioning, including its
status and consumption of energy (measuring and monitoring). It
sounds trivial to add a sensor to each component and poll its
condition regularly. We can interrogate and manage each component as well as
the aggregate to grasp the entire status of a data center from a single view. Describing
it at a high level is straightforward and simple, but the devil is in
the details. There are several
vendors in this space, and I have talked to some of them.
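A toy version of that measure-and-monitor loop might look like the sketch below; it is purely illustrative, since real DCIM products speak protocols such as SNMP or Modbus rather than this made-up reader.

    import time

    def read_sensor(asset_id):
        # Placeholder for a real protocol call (SNMP, Modbus, vendor API)
        return {"asset": asset_id, "temp_c": 24.8, "power_w": 310}

    def poll(assets, interval_s=60):
        while True:
            readings = [read_sensor(a) for a in assets]         # per-component view
            total_power = sum(r["power_w"] for r in readings)   # aggregate view
            print("total IT load:", total_power, "W")
            time.sleep(interval_s)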
I had a chance to speak to Richard Jenkins, VP Marketing of RF
Code. RF Code manufactures RF tags and sensors
as well as the software to process the data they collect, and markets
them as an integrated system. Their solution tracks assets, and
monitors the environment around the assets in a data center.
Asset management is one of the basic
functions of a data center. Not confined to a data center, corporate
assets need to be tracked from time to time because they may be lost
physically or be hard to locate without close tracking. In the past,
asset management was done manually on a spreadsheet. When there are
many pieces to track in a data center, manual tracking requires an
enormous effort and is not practical. On top of that, equipment,
especially in IT, gets moved and replaced constantly. Without some
automated means, it is next to impossible to track each piece's location.
Also, it is essential to monitor each
piece of equipment and measure data relevant to its operation,
because each element in the data center must function flawlessly to
ensure reliable operation. Without automated means, it is almost
impossible to do so for the large number of components residing in a data center.
It is relatively easy to claim that you
have a solution in asset management and monitoring at a data center
by deploying sensors and the software to manage them. So I asked
Richard about RF Code’s differentiation. Every company claims that
its solutions and products are unique and stand out from the
competition. His answer was twofold.
Product layering: One part is a
common infrastructure to track and monitor elements in a data center.
In a single infrastructure, different tags and sensors hang together,
ranging from sensors for humidity and temperature to those for
motion. This is well described in the following figure.
On this common hardware infrastructure,
they put a higher layer of software for more sophisticated functions.
Open API: Expanding this common
product infrastructure philosophy, I would like to classify two major
trends for integration. The DCIM market is growing but still rather
confusing. There are many aspects to managing a data center. A few
analysts have defined a model and areas of coverage. In addition, an
increasing number of vendors have rebranded their tools and utilities
as solutions for DCIM. Because no single vendor could provide a
complete and comprehensive solution for the entire data center
operation, large and small vendors alike have started to partner
together to provide comprehensive solutions.
Some may provide nonstandard APIs only
common to their partners to integrate their tools and utilities to
work seamlessly together. The advantage of that scheme is that you
have good coverage of tightly integrated DCIM functions as long as
you choose that particular group of vendors. Also, if
that group's APIs become standard, it would be great. The downside of
that is that you may become locked in to that group of vendors. They
may not have some functions that may become necessary for you later.
If other ways of integrating functions and APIs become standard, you
may need to adjust your solutions accordingly.
Another way is to make your solutions
interoperable with established standards, such as network/serial data
formats, and web-based standards like XML and JSON. Those
well-established standards tend to be at a higher level, and your
level of integration may be looser than in the former approach.
However, the upside of this approach is that your
solution can be integrated with any other tools or utilities that
conform to the established standards, which makes it more future proof.
Both approaches are valid because we
cannot tell how the DCIM market will shape up in the future. RF Code
selected the latter approach.
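As an illustration of that looser, standards-based style of integration (the field names here are hypothetical, not RF Code's actual schema), a sensor reading published as JSON can be consumed by any tool that understands the format:

    import json

    # Hypothetical reading published by a wireless tag/sensor gateway
    payload = '{"tag_id": "00123456", "type": "temperature", "value_c": 23.6, "ts": "2013-03-27T10:15:00Z"}'

    reading = json.loads(payload)
    if reading["type"] == "temperature" and reading["value_c"] > 27.0:
        print("Hot spot near tag", reading["tag_id"])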
Wireless communications: The
second point Richard raised was wireless communications. When you see
racks of IT gear and their surroundings, you probably see many cables
for networking and power alike all over the place. If we placed wired
sensors in all the strategic locations, we would be inundated with
many more cables for communications and power. Managing
those cables alone would add a burden for data center operators. RF
Code, as its name indicates, manufactures and markets only wireless
sensors with no cables. A battery-powered wireless node sounds like a good solution.
But it creates its own problems in several areas:
1. Scalability with wireless communications
2. Security of wireless communications
3. Battery life and replacement
4. A large effort in tagging
As for #1, their radio complies with
ISO/IEC 18000-7 (air interface at 433.92 MHz). When
you deploy a large number of nodes for communications, interference
tends to happen and accurate communications may not be guaranteed. RF
Code has developed a set of technologies to pack wireless nodes in a
way that prevents interference among them and has obtained seven
patents in this area to increase the number of nodes without
interference. Their patents are listed here.
The details are beyond the scope of this blog.
As for #2, some security-sensitive
organizations, such as financial institutions, do not want to use
wireless communications because of the potential security risk.
Unlike in wired communications, packets can be easily grabbed in
wireless communications and the information in them stolen. Wireless
communications can be protected via encryption (by something like
SSL), but encryption eats bandwidth. RF Code has an option not to
encrypt the communication but to send data in a proprietary format. It would
still be possible for someone to make a reader that could read their
tags’ beacons if hackers really wanted to "sniff” the data.
However, aside from the tag-identifier data (basically, a serial
number) and current sensor reading (for sensors, or for asset tags
that feature tamper detection/motion detection/IR receivers), there
is no other information about the assets included in the beacon. So
even if an asset tag is affixed to a very important piece of
equipment, its beacon data doesn't look any different from the data of a tag
affixed to anything else—it's just a number. All tag-ID-to-asset
correlation is done in the backend system, which would clearly be
secured behind a firewall. RF Code’s customers include a number of
banks, such as Lloyds
Bank in the UK.
As for #3, each of RF Code's wireless
nodes is powered by an internal battery with a life of somewhere
around five to seven years. That means that every five to seven years
a battery must be replaced. In a data center, a server's life is
about that long or even shorter. When a battery requires replacement,
the server is typically replaced as well. That is why I think this may not be a big issue.
Also note that some of their asset tags
and all of their sensors are user serviceable, enabling the
user to replace the battery. They also include a "low-battery
warning” feature that alerts the admin when the battery
gets down to about 20% charge.
As for #4, as the amount of equipment
grows, the effort to tag it all grows as well. It would be a major
effort to tag an existing data center. When equipment has been
deployed already, reaching each piece may be cumbersome and
difficult. IT equipment like servers may be stored inside a rack, and
extra effort may be required to attach a tag at the right location.
Some servers may have an outlet exhaust on the side rather than on
the back. Without proper placement, a tag may be influenced by the
sideways exhaust. Also, without detailed documentation, tagging
information may be only on the equipment's label and nameplate, which
may not be very precise or adequate. For example, it may not be easy
to find out which department particular equipment belongs to and what
it is for.
In any event, it would be much easier
to place a tag on each piece of equipment and device before its
deployment. RF Code gets involved in an early stage of data center
construction and avoids this problem. But if people need to tag
assets that are already in place, they recommend that they do so as
part of a typical annual inventorying process that requires staff to
physically account for each individual asset. Since they have to do
this for accounting purposes anyway, tagging assets as part of that
process presents the least additional effort.
Expanding the idea of tagging as early
as possible before the data center is in operation, let's consider a
container-based data center. A container-based solution has several
advantages. Those include ready-made modular additions to data center
capacity and less garbage as a result of not having to unwrap each
component. When a container-based solution is put together, a tag can
be applied to each piece of equipment as it is assembled into the
container. At the time of the assembly, it is easy to reach any
location for tagging. Also, the information on each piece of
equipment is readily available, and each tag can contain precise information.
RF Code works with IBM and HP to
integrate the data they collect with their software system. Although
large companies like IBM and HP have many divisions and each division
sometimes behaves like an independent company, both companies have a
container-based data center solution. Incorporating RF Code’s
solutions into their container-based data centers would improve their manageability.
Finally, I asked Richard two questions:
about the DCIM market and their future plans.
DCIM market present and future
Richard and I were in agreement that
the DCIM market is poorly defined and is very confusing. He felt that
DCIM started to receive recognition only in the past 12 months. He
also mentioned that DCIM is a poorly formed acronym. He thought it
was more like data center management infrastructure (DCMI) than DCIM,
which typically tracks assets and monitors power consumption and
environmental conditions. It also needs to integrate with building
management systems. But that is not enough. On top of what DCIM
provides, DCMI needs to have functions for software loads,
networking, and hardware management. With those additional functions,
DCMI could provide real-time proactive management of a data center.
I agree with his idea. Many of the
currently available solutions mostly touch the facilities side and
have very little impact on IT equipment. Even if they touch the IT
side, it is only to look at each piece of IT equipment as a black box
and not to deal with what's happening inside, such as loading factor,
software status, virtualization, and software execution efficiency.
The reason for the omission is simply the difficulty of measuring and
monitoring such data and analyzing and incorporating it into the
dashboard for visualization and good integration of IT and
facilities. I am glad that a vendor like RF Code has a good vision of
DCIM similar to mine.
As for the near future of the DCIM
market, I asked Richard if he thought some companies will dominate
the market. It’s crowded right now. Many vendors market their
solutions as DCIM, and they come in many sizes and functions, ranging from
established companies like Intel and Schneider to smaller companies
like RF Code that provide niche, pure-play DCIM solutions. Richard thought that
for the time being, the DCIM market will be dominated by a
combination of large established companies and startups. I agree with
him. Using the business intelligence market as an example, he
predicted how the market might shape up. Smaller companies will be
merged or acquired by larger ones, and consolidation will take place
as it did in the BI market. At some point, DCIM will be one of the
functions in the larger infrastructure management solution market.
RF Code’s future directions
RF Code applies their technologies to the data center segment as well as
to the oil and gas market. I thought their technologies could be
applied to the power industry. The power grid consists of a large
assortment of devices and equipment. Smart grid is an attempt to
merge power, communications, and IT technologies into a cohesive
system to increase the effectiveness of power generation,
transmission, distribution, and consumption. Along the power grid,
there are many assets deployed to support each function, and it is
important to track them accurately. For example, after a power outage
is confirmed, it is necessary to locate which device or equipment is
at fault and identify its location so that a service crew can be sent
to repair it.
The power grid must be maintained to guarantee reliable operations to
keep the lights on. Each device and piece of equipment needs to be
monitored for its proper operation in real time. As components shift from an
electromagnetic base to an electronics base, each component will
be controlled in a more precise manner, and tracking its location and
status will be more important. I think RF Code’s products could be
adjusted to be applied to the power industry, but each vertical has
its own idiosyncrasies in vocabulary and operations. It may not be so
easy to do. They do not have a plan to expand into this market at this point;
instead, they would like to move up the stack to add
functions with software.
Although people have started to realize
the importance of managing the data center infrastructure in the US,
it is not clear whether, or what kind of, comprehensive solutions
should be employed. Because the market is still young and no clear
standards are set, people are hesitant to invest in a comprehensive
solution. But they want to deploy a piecemeal solution that brings a
visible result. The DCIM market outside of the US, like APAC, is
still being formed. When I talked to a large data center provider, it
was not clear to them what DCIM is or what DCIM covers. DCIM is an
important market and will grow, but more education on its merits will
promote it further.
Posted By Zen Kishimoto,
Monday, March 11, 2013
This is a continuation of the previous
blog on hybrid clouds. In part 1 and part 2, I discussed CloudVelocity
and its technologies for implementing a hybrid cloud. Now that we
know a hybrid cloud can be successfully implemented, what does that
mean to us? How does it change the IT world? By the way, the
following discussion assumes that a perfect hybrid cloud can be
implemented. The following rant is not based solely on the current or
future technologies of CloudVelocity.
What does it mean?
How does the IT scene change with the
implementation of hybrid cloud computing? First let's consider
private clouds only. In the following, I will use an enterprise data
center and its private cloud interchangeably for the ease of
discussion, although not all data centers have been converted to
private clouds yet. Some companies may have several data centers (and
therefore private clouds) in the US, or even worldwide, across
multiple time zones. So even before talking about hybrid, using this
technology we can combine those physical data centers into one single
logical private cloud. A logical cloud consists of physical private
clouds (data centers) and may be recognized as one entity.
Logical private cloud
With a logical private cloud, using
some technologies from CloudVelocity, we can move applications that
may consist of physical machines (PMs: not virtualized) and virtual
machines (VMs) anywhere and anytime we choose. In the following
figure, we can pass PMs and VMs back and forth seamlessly between our
home cloud and any other private clouds of our company. Although it
shows only a subset of interactions below, we can potentially move
PMs and VMs in any way that makes sense by some predetermined
criteria. It may be that one PM or VM is passed to another cloud and
then to a third one, and so on. It would become pretty complex to
manage your PMs and VMs under such a new paradigm.
PMs and VMs move around only among
private clouds owned by the same organization. A set of such private
clouds may be considered as one logical private cloud.
This means we can finally implement
several things discussed in part 1, including:
Follow the sun
In a given workday, access to software
applications and utilities running on servers and other IT
equipment—and therefore clouds—fluctuates. Access starts to grow
as people start their day’s activities in the morning, it hits a
peak, then subsides towards the evening. Access is lowest during the
night. So you might want to move your PMs and VMs to other time zones
where the sun still shines and more loads need to be processed. We
can expect a better response time when loads and processing units are
close to each other.
Follow the moon
In many countries, power is cheaper
during off-hours (normally nights, hence follow the moon).
Sending your PMs and VMs to such time zones may reduce your operation
cost. Additionally, even within the US, power cost can fluctuate
hourly if a variable power pricing model is implemented and applied
to data centers. By shifting your VMs to a data center whose region
gets the lowest power cost, you may save on running costs.
Just as we load-balance among
servers at a data center, we may want to send loads to several
different private clouds. In this way, when one data center gets very
busy, such loads can be passed to other data centers to share the
burden. How you move PMs and VMs should be determined by predefined
metrics to optimize your operations for a few factors, such as
operating cost, response time, and throughputs. Each organization has
its own goal for its operation, and the metrics should be tailored to that goal.
Cloud bursting may be related to
load-balancing, although it is not the same. When a load increases in
a private cloud, we may want to move all or part of it to a public
cloud for on-demand processing; this is known as cloud bursting. PMs
and VMs that are processing the load can be moved to a public cloud
for continuous processing. When the load subsides, PMs and VMs on the
public cloud can be disabled. There has been a lot of talk about
cloud bursting, but now it can become a reality. We need a good
automated system to move PMs and VMs, and to enable and disable them
as needed. A good policy is a must-have for this as well.
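A crude sketch of such a policy (entirely my own illustration; a real placement engine would weigh many more metrics) might pick where to run a workload based on current load and power price:

    def pick_cloud(clouds, max_utilization=0.8):
        # clouds: list of dicts like {"name": ..., "utilization": 0-1, "power_price": $/kWh}
        # Prefer the cheapest cloud with headroom; burst to a public cloud if none qualifies
        candidates = [c for c in clouds if c["utilization"] < max_utilization]
        if not candidates:
            return None  # signal: burst to a public cloud
        return min(candidates, key=lambda c: c["power_price"])

    clouds = [
        {"name": "us-west-dc", "utilization": 0.92, "power_price": 0.07},
        {"name": "us-east-dc", "utilization": 0.55, "power_price": 0.09},
    ]
    target = pick_cloud(clouds)   # -> the us-east-dc entry; None would mean cloud bursting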
The San Francisco Bay Area will have a
major earthquake some day for sure, and when it happens, much of the
existing infrastructure, including data centers, will be unusable. If
we have a way of duplicating what we are running in our primary data
centers at a secondary site far enough away (such as the Sacramento
area, a little more than 80 miles from the Bay Area) and transferring
execution state information intact to the distant site, processing
could proceed without interruption.
Super logical private cloud
With this technology, we do not have to
consider the boundary between private and public clouds either. So
the logical private cloud can include public clouds, becoming a super
logical private cloud, or what I call a supercloud.
A green oval depicts a private cloud,
and a light-blue one represents a public cloud.
This configuration would make managing
PMs, VMs and clouds much more complex. We can move our PMs and VMs
between private clouds, between private and public clouds, and among
public clouds. We will no longer be restricted to a move between one
cloud and another cloud (a one-to-one move) but can implement
one-to-many and many-to-many as well. Then it will become necessary
to develop a system that allows automation. As we involve many
private and public clouds of various implementations, we will not be
able to easily track how to optimize such moves. For that, we will
probably need a policy based on predefined metrics. Cost may be the
number one factor. But at the same time, we want to minimize response
time and maximize the productivity of developers scattered around the globe.
Also, note that many superclouds may
share the same private and public clouds. This means that loads at
each private and public cloud could fluctuate over time. So depending
upon how busy each cloud is, we may want to dynamically alter how we
form a super logical private cloud for optimization.
By the way, when a supercloud is
developed and deployed, will we call it a supercloud or simply a
cloud? Those IT folks who will follow us in the future may take it
for granted and consider it a normal IT deployment and execution
environment. Throughout IT history, when some technology or method
becomes transparent as part of an overall system, that is when we say
that that technology really has matured.
Who uses hybrid clouds and benefits from them?
I can think of three parties, although
there may be more.
Enterprises that have their own private
clouds can extend them to public clouds to produce hybrid clouds to
exploit the things I mentioned above.
Data center providers
If you are a colo provider, you can
sell extra services at your center to realize hybrid clouds for your
clients. There are different levels of providers. Some may simply
rent a space, while others provide both equipment and services. Some
may provide both private and public clouds at the same data center.
For them, this is a perfect tool to increase their revenue.
If a colo provider does not want to
provide any service other than space, vendors with the hybrid
cloud technology can help end users implement hybrid clouds themselves.
Finally, my blog always ends with a
question about what the subject means to energy efficiency. Although
inconclusive, there has been some discussion about whether cloud
computing is more energy efficient than its predecessors. I think it
depends upon whose view you take. If you are a user, you pass some or
all of your computing needs, along with support staff, software,
hardware, power, cooling, water, and other things, to your cloud
provider on an on-demand basis. Since you can reduce your investment
on these, it is certainly energy efficient for you. It may or may not
be for your provider. If the provider has very little utilization of
their facilities, they may not be profitable or energy efficient at
all. You may still have to have a large staff, a large space,
dedicated IT and facilities equipment, facilities support such as
cooling, and so on. That cannot be very energy efficient.
When a hybrid cloud becomes a
supercloud and our energy becomes more scarce, we may need to look at
energy consumption and energy efficiency at the supercloud level
without distinguishing private or public clouds, which may sound
silly at this point. It is because the US seems to be doing fine for
the foreseeable future with shale gas and oil, but who knows what may happen.
Posted By Zen Kishimoto,
Friday, March 08, 2013
This blog continues the discussion of CloudVelocity’s
hybrid cloud technology. In this blog, I would like to talk about
what’s under the hood.
Some technical details
As a former technologist, I wanted to open the hood and find out more
about the underlying technologies. For this, Anand Iyengar,
CloudVelocity’s founder and CTO, gave me a chalk talk.
Since this is not a white paper detailing the technology, I only describe
it at my layman’s level. However, it is such an intriguing
technology that I’m accepting Anand’s offer for further
discussion and will write more about it in the future.
Anand elaborated on the details, but I made a simpler diagram to fit the
space. It is not that much different from the picture above.
Virtual machines (VMs) move between a typical enterprise private cloud (mostly VMware-based) and a public cloud (typically Amazon AWS). Let's take a quick look at the architecture:
Let's first look at your own data center or colocation facility (private cloud). In a modern software application system, an application
does not run on a single server. Instead, the running of an
application spans multiple physical and virtual machines. So we call
it a multisystem application. The configuration may differ according
to usage and design. Typically, it consists of load balancers, web
servers, application servers, and sometimes a cluster of other servers. This is illustrated in the figure above. To save space, I drew only two
machines, S1 and S2. The multisystem application typically uses a
database, file systems mounted from a closed-box NFS server system
(NFS1), and services from an LDAP server (LDAP). Everything in the
public cloud is a copy of what is in the private cloud, including
NFS1. Note that NFS provides files locally but not over the cloud
boundary. Moreover, in the private cloud there is a server, such as
an LDAP, that one may not want copied to the public cloud but kept
in the private cloud for security reasons.
There are virtual appliances (CloudVelocity Nexus Site Manager for the
private cloud and CloudVelocity Cloud Manager for the public cloud)
that together keep the cloud site images synchronized with the most
recent changes to systems in the private cloud. CloudVelocity uses
the term appliance to emphasize its dedicated function.
CloudVelocity Nexus may run on a physical server, while
CloudVelocity Cloud Manager runs as a virtual machine.
Let's further assume that S1 (in the VMDK file format) is virtualized, but
neither S2 nor DB1 is virtualized.
A. System S1, which is virtualized, needs to be copied to a public cloud. S1 is copied via the link to the public cloud, unless there is a copy left over from a previous need, in which case only the differences are copied. It is converted to an AMI automatically. In the case of S2, it must be copied via the link to the public cloud. Like S1, if there is not a copy left over from a previous need, it is copied in full; it then gets virtualized to run as an AMI.
B. Systems DB1 and NFS1, which are physical servers, go through the same process. They are also automatically virtualized to run on AWS as AMIs.
The two clouds are linked by the Internet or a dedicated connection. When any of the systems are no longer necessary, they can be disabled and deleted, or retained to minimize copying time in the future.
The high-level description continues regarding how those components work
together. The actual workings are much more complex, but I have
simplified them for this presentation.
The Nexus Site Manager inventories all the pertinent information regarding computing
power in the private cloud, including applications and supporting
servers, such as file systems and databases. The configuration
information is stored in a proprietary file format.
This configuration information is passed to the CloudVelocity Cloud Manager in the
public cloud. This appliance is virtualized to run on AWS (in the
AMI file format) all the time. Storage and computing time for this
appliance are charged per AWS pricing. The size of the appliance is
negligible at several hundred kilobytes, and it does not cost much.
Once Cloud Manager receives the configuration information, storage volumes are allocated and populated for each system, without running them. This reduces activation time for the
public cloud counterparts. EC2 charges are heavier for computing
than for storage. The design is a good compromise for reducing
copying time and saving on computing charges on EC2.
Starting the systems in the public cloud typically takes three to five
minutes, which is the time required to boot up a VM in the AWS
cloud. They are started in parallel.
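As a rough sketch of that design (provision and populate the storage ahead of time, then boot the compute in parallel only when needed), this is roughly what it could look like with the standard boto3 AWS SDK; the AMI IDs, sizes, and instance type are placeholders, and this is in no way CloudVelocity's actual implementation.

from concurrent.futures import ThreadPoolExecutor
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

SYSTEMS = {
    "S1":  {"ami": "ami-11111111", "volume_gb": 50},
    "S2":  {"ami": "ami-22222222", "volume_gb": 50},
    "DB1": {"ami": "ami-33333333", "volume_gb": 200},
}

def preallocate_volume(name, size_gb):
    # Create (and later populate) the volume now; storage is cheap relative
    # to compute, so it can sit ready without running anything.
    vol = ec2.create_volume(AvailabilityZone="us-west-2a", Size=size_gb,
                            VolumeType="gp2")
    return vol["VolumeId"]

def start_instance(ami):
    # Booting the public-cloud copy is what dominates the 3-5 minute activation.
    resp = ec2.run_instances(ImageId=ami, InstanceType="t3.medium",
                             MinCount=1, MaxCount=1)
    return resp["Instances"][0]["InstanceId"]

volumes = {n: preallocate_volume(n, s["volume_gb"]) for n, s in SYSTEMS.items()}
# Later, on activation, start all systems in parallel.
with ThreadPoolExecutor() as pool:
    instance_ids = list(pool.map(start_instance, [s["ami"] for s in SYSTEMS.values()]))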
The systems may be disabled when not needed in the public cloud. The user may expect another need for the systems sometime soon and keep a copy around, or delete it to save the storage charge from AWS.
If the private cloud goes down for any reason but the operation
cannot be halted, a full, earlier copy of the application systems
may be started in the public cloud to take over the operation.
This is called cloud fail-over and can be used for disaster
recovery and for implementing features like follow-the-sun.
Development and testing sandboxes:
More than one full copy of the application can be started
simultaneously in the public cloud, while the application is still
running in the private cloud. These copies are fully sandboxed and
can be used for development or testing.
Migration:
For data center space constraints and other reasons, the systems in the private cloud may be cloned to the public cloud, and the systems in the private cloud disabled.
Cloudbursting:
This allows extending computing power in the private cloud by
enabling and cloning computing power in the public cloud, if a
load surge takes place. This can be accomplished without losing
data integrity in the private cloud, because two appliances can
tunnel update requests back to the local site. Any changes made on
the public cloud are constantly sent back to the private cloud for
data consolidation, so when the load surge subsides and the copies
in the public cloud are taken down, data integrity is maintained.
Anand said that there are two key technologies in One Hybrid Cloud Platform (OHCP), and CloudVelocity is applying for a patent for each.
The first has to do with synchronizing two data stores via two appliances
that contain the inventory of computing equipment in both clouds. I
will not go into detail, but according to Anand, replicating and
maintaining synchronization between the two requires some work.
During switchover time between the primary and the secondary copy of
a VM by vMotion, pages dirtied on the primary copy are constantly
sent to the secondary copy for synchronization. This requires fast
(about 5 to 10 ms) communication between the primary and the
secondary, but it allows a game running on one server to run
continuously on another server after the move. OHCP instead sends all the changes at once in the form of a file, which makes it possible to send them over a slower connection like the Internet with encryption (SSL). As for moving a running game, OHCP does not support such a move.
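To make the contrast concrete, here is a toy Python illustration of the idea of shipping all the changes at once as a single file over an encrypted connection; the paths and URL are hypothetical, and this is my own sketch rather than OHCP's actual mechanism.

import ssl
import tarfile
import time
import urllib.request
from pathlib import Path

def bundle_changes(root, since_epoch, out="changes.tar.gz"):
    # Collect everything modified since the last sync into one archive.
    with tarfile.open(out, "w:gz") as tar:
        for p in Path(root).rglob("*"):
            if p.is_file() and p.stat().st_mtime > since_epoch:
                tar.add(p, arcname=str(p.relative_to(root)))
    return out

def ship(archive, url):
    # TLS/SSL protects the transfer, so an ordinary Internet link is enough.
    ctx = ssl.create_default_context()
    with open(archive, "rb") as f:
        req = urllib.request.Request(url, data=f.read(), method="PUT")
        urllib.request.urlopen(req, context=ctx)

# For example, ship everything that changed in the last hour:
# ship(bundle_changes("/var/lib/app", time.time() - 3600),
#      "https://cloud-manager.example/sync")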
The second is concerned with letting the duplicated copies of VMs in the
public cloud have access over the connection to databases like LDAP
in the private cloud. As noted before, because of security concerns,
some servers and databases may not be duplicated in the public cloud.
So VMs in the public cloud need to have access to them in the private cloud.
Through my discussion with Anand, I came to understand that vMotion and OHCP
address different problems, but may overlap in some functionality.
Both technologies move systems in execution from one cloud to
another. But there is more to it. I have summarized the differences below.
OHCP: moves applications between heterogeneous physical or virtual systems and clouds; sends all the changes at once as a file; switchover takes minutes (the VM booting time on AWS); the connection does not need to be particularly fast (it can be the Internet), with SSL; typical use cases are fail-over, development/testing, migration, and cloudbursting.
vMotion: both clouds need to run VMware; sends memory pages and block storage; requires latency under 5 ms, or a distance under 200 km, over a fast, dedicated connection; the typical use case is an application keen on quick switchover, within the same data center or a relatively short distance.
Looking at the comparison above, it appears that the two technologies are not
competing but can be complementary to each other. I will dig into
them more in my future blogs.
By the way, I can try out their system free of charge.
But wait! I am not ready. I do not have a reasonable-size private cloud of my own, nor do I use AWS. I probably need to consult with some
of my friends who are involved in Silicon
Valley Cloud Center.
(On to part 3, which will discuss energy efficiency in cloud computing and what it means to have a hybrid cloud.)
Posted By Zen Kishimoto,
Saturday, March 02, 2013
| Comments (0)
I go to Japan three to five times a year. Every time I go, I take a set
of my computing gear, including a PC with a charger and a mouse, two
cell phones (one for Japan and the other for the US) with chargers, a
digital recorder, a digital camera, and a bunch of USB memory sticks, which
I carry just in case. It is very unlikely that I’d interview
someone with a digital recorder. With the iPhone, I do not need a
digital camera. I always plan to write a bunch of blogs and articles
on the road, and usually end up with no results at all. But it is a
habit to take all this stuff. It gets very heavy and gives me a stiff shoulder. So this time I tossed my digital camera and PC out of my bag and
experimented with my iPad. With the iPad, I do not need a mouse or
USB memory. Its charger is smaller and much lighter. Here’s what I found.
My shoulder bag got much lighter (1.8 pounds vs. 4.2 pounds).
This is helpful. If you are in the US, you can carry a heavy bag
from your car for the short distance to your place of meeting. In
Japan, you walk a lot, and the heavy bag is not very convenient,
especially in crowded trains at rush hours.
The always-on feature helps me to use it without booting up. If I don’t
use audio or video applications, it lasts long enough to get to
Japan (some ten hours).
Even without an Internet connection, I can check calendars and documents
via DropBox and Documents.
Input for English and Japanese works flawlessly.
Power consumption of the iPad is less: it consumes 12 W while my HP PC uses 65 W.
The iPad is basically a read-only device and is not really suitable for
writing, except for short messages in email and other documents.
The iPad does not support USB memory unless it is in a special format, which
does not help me very much. I do not want to upload sensitive files
to the Internet via DropBox.
So how did I cope with the two cons? This may not work for you unless
you have a situation similar to mine. I travel to Tokyo and Osaka. I
have close relatives living in these cities with a PC and an Internet
connection. So if I need to use a PC, I borrow theirs and both
problems above are solved.
From an energy efficiency point of view, the airplane can be lighter, consuming less fuel, and the power consumption of an iPad is lower. With the two problems fixed, I would be happier. Incidentally, on this
trip, I converted one of my relatives from a PC to an iPad. In Japan,
Apple iPads are still very popular. She does not carry her PC around
or write long documents, and she has minimal exposure to computing
technologies. So far she is happy.
Posted By Zen Kishimoto,
Sunday, February 24, 2013
| Comments (0)
When cloud computing was first
introduced, I did not expect that it would develop to such a degree
that the IT world would be greatly changed. First public
cloud and then private
cloud were introduced. Then hybrid
cloud became the center of discussion.
Some people project 2013 will be the
year of the cloud, and hybrid clouds are talked of as one of the
trends for the year to come. See here,
and many other places.
As I said before, much of hybrid cloud
is just talk and not reality, and there have been several
showstoppers before now.
Some of the many factors making it hard
to implement hybrid clouds are mainly technical:
Virtual machine (VM) file format
Amazon Web Services
was the first to implement a public cloud, and AWS is now the de
facto standard for public cloud. It uses its own proprietary file format (Amazon Machine Image, a.k.a. AMI) for running virtual machines on the Xen hypervisor. Their file format is not the same as the original Xen VM
format. So even if you are running Xen hypervisor for your cloud,
you cannot enjoy interoperability with AWS without converting your
VM's file format. For example, Citrix's virtualization environment is
based on Xen, but its file format is virtual
hard disk (VHD), which is also the file format
for Microsoft's virtual machine.
Private cloud: In the enterprise
market (private cloud), VMware's
VM file format (VMDK) is the de facto standard.
Hybrid cloud is an attempt to use
both private and public clouds to process IT demands by optimizing
suitable in-house and outsourced IT infrastructures as needed. So
when we want to move VMs back and forth between public and private
clouds, we need translations each time we move them across the cloud
boundary. It may not be very hard to do so, because there are some
translation tools readily available from vendors like Amazon
and VMware (vmkfstools).
It may be straightforward to move VMs that are not in execution, but
VMs in execution are generally hard to move with their execution
state intact. See the next item.
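On the translation point above: for a powered-off VMware VM, one concrete route is AWS's VM Import service, sketched here with boto3; the S3 bucket and key are placeholders, and this is just one possible tool chain, not necessarily the one the vendors' own utilities use.

import time
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Assume the VMDK disk has already been uploaded to S3.
task = ec2.import_image(
    Description="S2 converted from VMDK",
    DiskContainers=[{
        "Format": "VMDK",
        "UserBucket": {"S3Bucket": "my-vm-exports", "S3Key": "s2/disk1.vmdk"},
    }],
)

# Poll until the conversion finishes; the resulting AMI ID appears in the task.
task_id = task["ImportTaskId"]
while True:
    t = ec2.describe_import_image_tasks(ImportTaskIds=[task_id])["ImportImageTasks"][0]
    if t["Status"] in ("completed", "deleted"):
        break
    time.sleep(30)
print(t.get("ImageId"))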
Physical movement of VMs
If we want to exploit public and
private clouds for an application in execution, that execution
instance may be transported between two or more clouds to find the
most suitable execution environment. One big issue is the distance
between clouds. VMware's vMotion
allows you to transport your VM up to a certain distance (something like 100 miles) but no farther. With this physical
restriction, what you can do with hybrid cloud may be limited by the
distance between clouds.
Various support environments
Cloud is not just virtualization
but needs a comprehensive environment, such as management and
support, including tools and security considerations. Each cloud
tends to come with its own environment and idiosyncrasies, so what
you can do easily in one cloud may not be as easy in another cloud.
This would make managing a hybrid cloud cumbersome.
To date, most discussions on hybrid cloud
have been at a very abstract level and not at all concrete. People
have talked about what we could do with hybrid cloud without
referring to its concrete implementation. Recently, I came across
yet another brand-new cloud company that claims to have solved the
aforementioned problems. Greg Ness recently sent me email with a
press release and wanted to show me what CloudVelocity,
his new company, is doing in the area of hybrid cloud.
I am by no means an expert in hybrid
cloud computing or any kind of cloud computing, for that matter, but
let me try to review how hybrid cloud computing is implemented with their
technologies. To support hybrid cloud, VMs need to move back and
forth between private and public clouds. How can we implement such a
move? Because an execution space is not shared between a public and
a private cloud, we cannot literally move a VM across the clouds.
What we do is to make a copy of a VM executing at one cloud and
transport its execution status to a cloned VM at another cloud. Then
we can disable the original VM and enable the cloned one. If a VM is
not in execution, it is not that hard. But if it is in execution, it
is much harder.
If both private and public clouds are
implemented with the same technologies and the distance is less than,
say, 100 km,
the same VM could be transported with a utility like vMotion. But in
most cases, two cloud environments are not the same (see the
technical problems described above), and the distance could be
greater. Also, you can move only virtualized applications but not
traditionally maintained applications, because you cannot assume all
the applications have been virtualized into a VM format.
We need to have carbon copies of VMs
and non-VM versions of applications (that need to be virtualized) on
the other side. That means you need to have carbon copies of your
applications running on a public cloud. This sounds like a disaster
recovery (DR) system.
Disaster recovery/fail-over system
In such a system, you duplicate the
applications that are running at the primary location and operate them at the secondary location. The options include
active-active and active-passive configurations. Active-active means
that the machines (and thus applications) are live at both the
primary and the secondary locations at the same time, with data being
copied from the primary to the secondary sites. In this scenario,
when the primary location cannot operate any longer for any reason,
the secondary location can take over seamlessly. The active-passive
configuration may not guarantee complete synchronization, because the
passive one in the secondary location does not run until the primary
location can no longer support applications.
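A minimal sketch of the two options might look like the following; every helper here is a hypothetical stub of my own, and a real DR system is of course far more involved.

import time

def primary_healthy():
    return True      # placeholder health check, e.g. an HTTP ping to the primary site

def replicate_to_secondary():
    pass             # placeholder: continuously copy data from primary to secondary

def start_secondary():
    pass             # placeholder: boot the passive copy at the secondary site

def monitor(mode, interval_s=30):
    secondary_running = (mode == "active-active")
    while True:
        if mode == "active-active":
            replicate_to_secondary()      # both sites live, data shipped continuously
        if not primary_healthy():
            if not secondary_running:     # active-passive: boot on demand, so the
                start_secondary()         # most recent data may not have made it over
                secondary_running = True
            print("failing over to the secondary site")
            return
        time.sleep(interval_s)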
In any event, if we duplicate the whole
thing for the secondary site, as in the case of DR in an
active-active fashion, the duplicated copies are always in the
secondary site with dedicated servers. This situation is the
farthest from cloud computing in spirit, especially for public clouds.
What we need is a solution like this:
Make copies on the other side only when needed (on demand).
Resolve VM file format and other
incompatibilities among major cloud systems, such as AWS, Rackspace,
Microsoft, and OpenShift.
Handle physical vs. virtual
applications in an IaaS cloud environment.
Now back to CloudVelocity. I visited
Greg Ness and Rajeev Chawla, CEO, at their headquarters in Santa
Clara. They claim to have implemented a solution to solve the
problems discussed above.
From left: Rajeev Chawla (CEO) and Greg
Ness (VP Marketing). See here
for their bios.
They have developed a comprehensive
system for implementing hybrid cloud that they call One
Hybrid Cloud Platform (OHCP), which is depicted
in the following picture. Applications move across the cloud boundary
in five steps:
Host discovery—Inventory your
private cloud (data center), which consists of all the pertinent IT
hardware and software.
Blueprinting—Create a database
of how the discovered components are put together.
Cloud provisioning—Duplicate and
create VMs on the target cloud (translating VMs and virtualizing
physical applications if necessary).
Data synchronization between the two clouds.
Service initiation—Let the
duplicated VMs take over and disable the original VMs.
CloudVelocity's comprehensive One
Hybrid Cloud Platform.
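Stringing the five steps above together, a skeleton might look like this; all of the function names are my own stubs, not CloudVelocity's API.

def host_discovery():
    # Inventory the private cloud: hosts, applications, supporting servers.
    return []

def blueprint(inventory):
    # Record how the discovered components are put together.
    return {"systems": inventory}

def provision(bp):
    # Duplicate/create VMs on the target cloud, translating formats and
    # virtualizing physical applications where necessary.
    return {"clones": bp["systems"]}

def synchronize(clones):
    # Keep the cloned images up to date with changes in the private cloud.
    pass

def initiate_service(clones):
    # Enable the duplicated VMs and disable the originals.
    pass

clones = provision(blueprint(host_discovery()))
synchronize(clones)
initiate_service(clones)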
This sounds easy. How do they do this?
That will be covered in Part 2.
Posted By Zen Kishimoto,
Wednesday, January 30, 2013
| Comments (0)
Fujitsu has been one of the most successful
companies in Japan, and it also has operations worldwide, including North
America. The revenue distribution is Japan, $36.1B (61.8%); EMEA, $9.9B (17.9%);
APAC/China, $5B (15.6%); and the Americas, $3.5B (4.6%), for a total of $54.5B
worldwide. Because I cover the intersection between ICT and energy, I wanted to
find out what they are thinking of in terms of applying ICT to sustainability. Incidentally,
Fujitsu recently held its sixth annual conference, Fujitsu
North America Technology Forum 2013, at the Computer History Museum.
Fujitsu hosted the conference, and attendance
was free and included breakfast, lunch, and cocktails. Over the years, the
number of attendees has grown, and ICT analysts, such as Gartner and IDC, were
in the crowd. On top of that, they got very prominent speakers, like Dr. John
Hennessy, president of Stanford University, and Nicholas Negroponte,
founder of MIT's Media Lab. It is a
good deal, to say the least.
The following is my take on what was presented.
Fujitsu develops products over a wide range of areas,
including hardware and software, as well as services.
Fujitsu covers many areas, as shown in the picture
above. Many Japanese companies are considered good at hardware but not
software. But Fujitsu actually does pretty well in the areas of software and
services as well as hardware. Because they have a foot in many areas and
their base is ICT, it is very interesting to see how they view the current
state of ICT. ICT used to stand by itself without much regard to other areas,
like energy. In the past, ICT technologies alone could generate revenues. But
things have changed a lot recently, and ICT needs to find other application areas.
O.K., the following is how Fujitsu sees the world in conjunction with ICT.
This picture shows that ICT could be
applied to areas like food/water, economy, energy, population/aging, health,
natural disasters, and transportation. Cloud computing would tie them all together.
If they are right about this, there are still a lot of application
opportunities for ICT to generate revenues, which is good news to many people
in the ICT field, including myself.
Following the first keynote on Fujitsu's
business, Stanford’s Hennessy gave a talk and there were three presentations by
Fujitsu people. I covered Hennessy's presentation in a previous blog. The
Fujitsu presentations were categorized as future solutions for smart energy
deployments, which is very relevant to what I look at these days. Along with
these three technologies, a total of 23 technologies were demoed. First, let me
touch on the three presentations.
Energy management system (EMS)
The first was about Fujitsu's energy
management system (EMS). In 2011, electricity saving by visualization was
implemented to cope with the power shortage after the big quake and the shutdown
of the 50 nuclear plants in Japan. From 2012 to 2013, Fujitsu developed a cloud-based
building energy management system (BEMS). From 2013 to 2015, it plans to
move its focus to smart cities.
An example of that is to cut power consumption by observing peak times and by
controlling battery charging and discharging of laptops at offices. Their
experiment showed that they could reduce total office power consumption by 2–3%
by doing this. In most building energy management systems, attention is usually
given to high-power consumers like HVACs. It is very Japanese to even pay
attention to laptop battery charging. But who knows? Small savings may add up
to a big saving.
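A toy version of that control loop might look like this; the peak window and the power-source switching function are hypothetical placeholders, not a real Fujitsu API.

from datetime import datetime, time as dtime

PEAK_START, PEAK_END = dtime(13, 0), dtime(16, 0)   # example peak window

def set_laptop_power(laptop_id, source):
    print(f"{laptop_id}: power source -> {source}")  # placeholder actuator

def control(laptop_ids, now=None):
    now = now or datetime.now()
    in_peak = PEAK_START <= now.time() <= PEAK_END
    for lid in laptop_ids:
        # Run from batteries during the peak, recharge from mains off-peak.
        set_laptop_power(lid, "battery" if in_peak else "mains")

control(["laptop-001", "laptop-002"])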
The second was Fujitsu's implementation of OpenADR.
Fujitsu was the first Japanese company to participate
in the OpenADR
2.0a interoperability test. Demand response (DR) is one of the easiest
ways—by shaving off power at peak time—to generate logical power. Usually, when
demand increases, supply must be increased to cope with the higher demand.
Instead, demand is curtailed to fit the supply at a given time. In a way,
logical power was generated to solve the demand-and-supply imbalance. OpenADR
is a protocol specified by the OpenADR
Alliance to dictate in what form the DR signal is transmitted. The member
companies are listed here. I
chatted with a person who was manning the booth. Their product is a demand
response automation server (DRAS) paired with a client. Because OpenADR is a
protocol specification, each vendor can build its own products on top of the standard communications protocol and compete on implementation. Fujitsu's involvement is
basically in the US; Japanese utilities are not ready to consider DR.
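For illustration, the curtailment side of DR can be sketched as below; this shows only the shape of the logic, not the actual OpenADR 2.0a XML payloads that a DRAS and its client exchange.

def active_dr_event(signal):
    return signal.get("status") == "active"

def curtail(loads_kw, target_kw):
    # Shed the smallest loads first until consumption fits under the target;
    # a real system would shed by priority, not by size.
    shed = {}
    total = sum(loads_kw.values())
    for name, kw in sorted(loads_kw.items(), key=lambda kv: kv[1]):
        if total <= target_kw:
            break
        shed[name] = kw
        total -= kw
    return shed

signal = {"status": "active", "target_kw": 80.0}          # simplified DR signal
loads = {"hvac_zone_2": 30.0, "chiller_b": 45.0,
         "lighting_annex": 10.0, "servers": 25.0}
if active_dr_event(signal):
    print("shedding:", curtail(loads, signal["target_kw"]))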
Efficient power supply
The third Fujitsu presentation was about a power supply with a conversion rate of 94.8%. An organization called 80 Plus promotes a high conversion rate for power supplies. Even with 80 Plus Platinum, however, the
conversion rate is only 90%. Fujitsu developed a few technologies to increase
efficiency. Fujitsu plans to release a product with this technology in 2014.
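A quick back-of-the-envelope calculation shows what that difference means in practice; the 500 W load is my own example figure.

dc_load_w = 500  # DC power actually delivered to the server components
for name, eff in [("80 Plus Platinum (90%)", 0.90), ("Fujitsu (94.8%)", 0.948)]:
    ac_in = dc_load_w / eff
    print(f"{name}: draws {ac_in:.1f} W from the wall, loses {ac_in - dc_load_w:.1f} W as heat")

For that example load, the higher conversion rate roughly halves the power lost in the supply (about 27 W instead of 56 W), and there is a further saving because that heat no longer has to be cooled.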
I had to be elsewhere, so I missed the subsequent
sessions. I think ICT has a lot of potential in many areas. Energy is one, and
Fujitsu seems to understand it very well.