
Archive for the ‘Trends’ Category

“Cloud computing is not just one more way to deploy information systems. It represents a total shift in how IT resources are delivered and ultimately will replace most if not all internally-maintained IT infrastructure.” This is how Frank Scavo starts his latest blog post on “the inexorable dominance of cloud computing” and the rise of utility computing – inspired by a speech Nicholas Carr gave at a Cloud Computing conference organized last week in London by Google.

This is, by the way, the same Nicholas Carr who shook the world of many IT managers and CIOs when he published his article “IT Doesn’t Matter” in the Harvard Business Review in May 2003 – making the case that, from a strategic standpoint, infrastructural technologies would commoditize and become more and more invisible.

Some years later, we are over this initially offensive perspective. We are able to see that the value propositions of SaaS and cloud computing strategies are significantly better than those of on-premise software – and that both SaaS and the cloud rely on integrated technologies from an automated backbone. The more invisible, the more mature. The more mature, the better.

It was Nicholas Carr who coined the matching Cloud koan in his blog: “Not everything will move into the cloud, but the cloud will move into everything.”

You can view his 30-minute talk here:

Read Full Post »

It wasn’t really a surprise that one of the big topics at the VMworld virtualization fest in San Francisco early this month was virtual machine sprawl and the management answers IT departments have long been waiting for.

This is not just the case in VMware environments, as Jamie Erbes, Chief Technology Officer for HP, points out in a ZDNet interview: “On the need to manage virtualization efforts, the problem of “virtualization sprawl” is becoming real. Developers have adopted virtualization and IT managers don’t use the rigor needed for its use. For instance, if some department wants a new server a virtual machine can be created in minutes. Multiply those departments in a multinational corporation and you have a lot of virtual machines” – and a big threat to end-to-end management.

[Image: Hilton Chicago Elevator monitoring by cote, on Flickr]

In daily IT business the problem is not the quantity of the virtual machines, as the title borrowed from Timothy Prickett Morgan’s blog suggests. The problem is that these VMs are automated and orchestrated only inside the VMware virtual environment, isolated from the IT processes which are still running separately across real and virtual environments. From this process viewpoint, managing the virtual environment is a far cry from managing the entire IT environment – as long as the dependencies between the virtual and the real world remain unresolved. That’s the catch. With this in mind, serious virtualization efforts only make sense together with a serious automation strategy. And a serious automation strategy is cross-platform, or it is no strategy at all – as I stated in this blog months ago.

This is also in tune with Jamie Erbes’s point of view. She emphasizes in the interview mentioned above that “companies need to focus on their policies and how rules for things like storage, applications, cloud computing and storage all work together. If these policies aren’t well thought out the effort to automate infrastructure can be wasted and wind up costing you more.”

Don’t forget: you can do great things with automation. But first you have to make sure that the automation train is headed in the right direction. This train is fueled by integration – integration without restraints.

For more information, see the whitepaper on Intelligent Service Automation for Real and Virtual Environments.

Read Full Post »

To begin with, I really love to read feature stories – the more meandering they are, the better. But the closer I am to making a decision, the more I search for facts and figures, laid out as in a spreadsheet. No wonder that during these periods checklists like the one I just hit upon hold a special appeal for me.

It’s a checklist for UC4 Insight which consists mainly of questions. I picked out my favorites and, of course, would love to know about yours.

□ Would you like to have a tool that detects similarities and patterns that have resulted in errors?
□ Do you sometimes have incidents whose cause you cannot explain?
□ Do you sometimes have trouble proving that the origin of an incident is not in your department?
□ Would you like to have a tool that shows at which times and on which days processing delays occur (jobs taking longer to finish than normal)?
□ Would you like to have a tool that shows on which servers and operating systems most processing incidents and delays occur?
□ Would you like to have a tool that gives you a quick overview of user behavior (access violations, cancellation behavior, etc.)?

You think these questions are suggestive and don’t allow a NO? You are right. They are! But did you also ask yourself whether this may have something to do with the way businesses are run today? And whether you can be successful in a dynamic market without these valuable insights into your production environment?

Read Full Post »

August started with good news. Mike Gualtieri and John Rymer from Forrester Research evaluated nine complex event processing (CEP) platforms and named UC4 a “Strong Performer”. Weighing 114 criteria, they proved in passing that the evaluation was no less complex than its subject.

Talking with UC4 CTO Vincent Stueger about the strategic value that CEP technology can bring to customers as they “strive to automate jobs, processes, applications and services that span hybrid computing environments”, you no longer wonder that analyzing the streams of data swirling around inside a business environment is becoming a crucial task for agile enterprises. “We are building the industry’s first Intelligent Service Automation solution that will support companies in an on-line on-time world, allowing them to sense and respond in real time to complex, interdependent events to optimize their IT resources.”

That CEP is a hot new enterprise middleware category is underlined by its position as “Technology Trigger” in the brand-new Gartner 2009 Application Architecture Hype Cycle – aptly commented on (and shown!) by Opher Etzion in one of his recent blog posts.

Gartner states for CEP that the “market penetration is 1% to 5% of target audience” with an expected growth “at approximately 25% per year from 2009 to 2014, but the use of COTS* CEP products is expected to grow more than 40% per year in this time frame.” That sounds great, for we all know that a low market penetration indicates that there is still a substantial growth potential, given that we “can overcome the adoption challenges”, as Opher Etzion adds.

I wouldn’t have become suspicious had I not come across the following Gartner statement: “Most business analysts do not know how to identify business situations that could be addressed through CEP, and that is limiting the rate at which CEP use can expand.”

Is this judgment further proof that the much-lauded business–IT alignment is far from being real? Or could it be that these analysts confuse the complexity of the patterns with the complexity of the business situation? The latter is far from complex if we look at it from a customer’s perspective. This also fits the opinion of Charles Brett from Forrester Research, who persistently follows the user to find the events: “CEP is business-driven because that is where the events are.”

Therefore the CEP system must be the central processing system for events – spanning all the applications and systems in your IT environment. Bringing intelligence to business processes is key to unfolding event patterns, maximizing service availability and boosting your enterprise’s success.
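To make the idea of such a central, cross-application event processor concrete, here is a minimal sketch in Python. The event fields, thresholds, and function name are invented for illustration and are not taken from any vendor’s product; it flags one simple pattern – a burst of errors from the same host inside a short time window, regardless of which application emitted them:

```python
from collections import defaultdict, deque

def detect_burst(events, threshold=3, window=60):
    """Flag hosts that emit `threshold` or more error events
    within a sliding `window` of seconds, across all applications.
    `events` is assumed to be ordered by timestamp."""
    recent = defaultdict(deque)  # host -> timestamps of recent errors
    alerts = []
    for ev in events:
        if ev["severity"] != "error":
            continue
        q = recent[ev["host"]]
        q.append(ev["time"])
        # drop error timestamps that have fallen out of the window
        while q and ev["time"] - q[0] > window:
            q.popleft()
        if len(q) >= threshold:
            alerts.append((ev["host"], ev["time"]))
            q.clear()  # reset so one burst raises one alert
    return alerts
```

Because the window slides with the stream, the check fires in the moment the triggering event arrives – which is exactly the “act in the present” quality the CEP discussion is about.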

Read Full Post »

As long as process execution runs smoothly, nobody cares what happens in the event tunnel. But the moment an accident happens, a narrow and murky tunnel becomes a big threat to people and businesses. Providing analysis tools for subsequent insights and predictive simulations brings light into this tunnel, of course, but the moment traffic resumes, the tunnel will be dark again.

With hundreds of thousands, maybe millions of events a day, this picture is really threatening. That’s why the connection between the automation engine and the user is so essential nowadays, as we pointed out some weeks ago. On the other hand, ‘complex event processing’ (CEP) is “still a term that scares people away”, as Joe McKendrick points out in a ZDNet article, searching for “a softer way to describe what this thing is” about.

But what if CEP – or EP to soothe your nerves – is no less than key to the situational awareness of your business? A concept which is summarized by David E. Olson in a very interesting Event Processing Roundtable at ebizQ: “When you talk about situational awareness, you have three aspects of that continuum. We’ve got the past, the future and the present. What CEP does is add a significant amount of intelligence in the present, so that the business can act in the moment, and improve decision making in the future …”

It’s all about making your enterprise smart and agile. Complex event processing is designed to handle, normalize, aggregate, and analyze a wide range of information in real time. You think that’s impossible considering hundreds of thousands of events per day? Good point! But that’s part of the homework any event processing effort has to do: to discern whether a thing that happened is notable. Setting up this filter is a bit like searching for a needle in a haystack. But at the end of the day you will find out that only a handful of events remain that actually impact your business.
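The normalize-then-filter homework described above can be sketched in a few lines of Python. The field names and rule shapes here are made up for the example; a real CEP engine would do this declaratively and at far higher volume:

```python
def notable_events(raw_events, rules):
    """Normalize a raw event stream and keep only events matching
    a notability rule; count the rest per source for reporting."""
    kept, dropped = [], {}
    for raw in raw_events:
        ev = {  # normalize differing field names into one schema
            "source": raw.get("src") or raw.get("source", "unknown"),
            "kind": (raw.get("type") or "info").lower(),
            "msg": raw.get("msg", ""),
        }
        if any(rule(ev) for rule in rules):
            kept.append(ev)          # the needle: worth a human's attention
        else:
            dropped[ev["source"]] = dropped.get(ev["source"], 0) + 1
    return kept, dropped
```

For instance, `rules = [lambda ev: ev["kind"] == "error", lambda ev: "timeout" in ev["msg"]]` would keep failures and timeouts while merely counting the routine rest – the handful of remaining events is what actually reaches the user.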

Read Full Post »

“You manage a mainframe environment that runs one or more of your business’ mission critical applications. Things are good: security, performance and reliability are just where they should be. But when you think about your long term staffing strategy, you cringe, because most of the people on your mainframe staff have long term plans that include travelling, fishing and gardening … in other words, retirement.”

What starts like a mainframe’s farewell turns out to be a mainframe’s hymn – in the Acxiom whitepaper about The Reality Facing the Mainframe World. The truth behind it is that the predictions about the death of the mainframe, which we have been hearing for 20 years (since the beginnings of networking), are no longer taken seriously; even if mainframe staffing is a tricky task nowadays.

Especially against the background of re-centralization it’s more than obvious that the mainframe is not only confronted with a reality. It is a reality – with “200 billion lines of COBOL code in existence and 5 billion lines of COBOL code added yearly” (according to eWeek).

Mainframe is here to stay. “It is alive and processing”, as the whitepaper points out. And it’s even more alive, I would like to add, if the processing of the mainframe AND the client-server world is done under the unifying roof of one comprehensive scheduler. Because it will not survive as an island. That’s for sure. The future is in the “cloud”. Maybe. And that would not be such a bad deal.

The necessary integration homework is described in this UC4 whitepaper.

Read Full Post »

When people are talking about process automation it often happens that they disregard the power of change. They talk about dynamic markets and simultaneously think about processes as engraved in stone. But this would imply that they are perfect and timeless.

In fact, any automation effort must be accompanied by an ongoing optimization effort to ensure that the processes are up-to-date and the engine is not automating errors and detours. For this you will need some tools between the automation engine and the user, helping you to see what’s going on inside the process, to recognize and visualize patterns, to correlate events, to simulate workflows, to follow critical paths, and to report key performance indicators (KPIs) to the management.
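As a tiny illustration of the KPI-reporting side of that toolset, the following Python sketch derives two figures from a list of job-run records: the average duration per job and the share of runs that exceeded a baseline by a chosen factor. The record fields and the choice of baseline (the fastest observed run) are assumptions made purely for the example:

```python
from statistics import mean

def delay_kpis(runs, baseline_factor=1.5):
    """Compute simple KPIs from job-run records: average duration
    per job and the share of runs slower than baseline * factor."""
    by_job = {}
    for r in runs:
        by_job.setdefault(r["job"], []).append(r["duration"])
    report = {}
    for job, durations in by_job.items():
        baseline = min(durations)  # illustrative baseline: the best run
        delayed = sum(d > baseline * baseline_factor for d in durations)
        report[job] = {
            "avg": mean(durations),
            "delay_rate": delayed / len(durations),
        }
    return report
```

A rising `delay_rate` for a workflow is exactly the kind of pattern that tells you the engine is busily automating a detour rather than a process.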

In dynamic business environments, monitoring is just half the battle. That’s why complex event processing (CEP) (see Wikipedia) is the technology of choice in the highly competitive automation market – “extracting business value from event data”, as Gartner analyst Roy Schulte emphasizes.

A Bloor Research whitepaper puts it in a nutshell: “Reacting to events to minimize risk, predicting events to capitalize on opportunities and analyzing events to improve future performance; and doing all of this now rather than next week when it may be too late, is what event processing is all about. It is increasingly being recognized as a crucial capability for optimizing the business.”

Read Full Post »

Is free really dead?

It was in February 2008 that Wired magazine postulated ‘freeconomics’ as the future of business and the ‘vaporization’ of the traditional value chain. ‘Free’ was the buzzword of the following months. But now, a year later, Kate Bradley considers it even possible that we have already killed it.

What happened? And what’s the cause of death? And could it be that it is a murder without a corpse? That we just got tired – of the lack of quality? “The first time a previously expensive good or service is made free, we’re drawn to it precisely because of the freeness. The fifth time or tenth time, not so much,” notes Seth Godin in his blog, and I agree with him.


Maybe some people were also fooled by the ambiguity of the term, as Chris Anderson from Wired suggested recently at the SXSW in Texas: “One of the biggest advantages of ‘free’ as a marketing tool is the fact that the word has a double meaning in English – free as in freedom, and free as in price. In English we take all the good connotations of ‘free’ and use them to sell something.”

The point is: if you don’t value anything, everything is too expensive. Think about your IT environment and the processes you run. Think about how you can weigh expenses against results and assign value to them. If you do so, you will find out that pure cost cutting is unsatisfying. But it gets interesting once you think about what to cut and what is worth growing. And if there is a use for something, a value is guaranteed.

When I went back to the basics of the freeconomics as it was introduced to us a year ago, I got stuck at the following phrase: “Free is not quite as simple — or as stupid — as it sounds. Just because products are free doesn’t mean that someone, somewhere, isn’t making huge gobs of money.”

In other words, the question of how to make money around free content remains open – especially when the market for free gets crowded. Offering a free product doesn’t guarantee much of anything; soon you may have to pay people just to try it.

Read Full Post »
