
My last post dealt with monitoring and insight, reacting and optimizing, as the two sides of the automation coin. Monitoring and reacting are not enough when you are dealing with events; you also have to analyze and predict them as far as possible.

Especially if the event takes the shape of an error. Thinking about application assurance is thinking about how to handle change, and not necessarily about how to deal with alerts or trouble tickets that pop up in your IT monitoring or business service management solution. By the time the problem occurs, you are already on the reaction side of the automation coin, trying to reduce the time it takes to fix it. The better and more sustainable approach to change is to think about how we can turn this coin over and prevent errors before they occur.

Of course, there is no perfect situation, and unforeseeable events happen all the time, so you will never get rid of the monitoring and reaction side. But if you are serious about application assurance, you should at least keep an eye on both: what is currently going on and what is upcoming.

Proper alert reaction needs insight
Take, for example, a job which is scheduled to start in five minutes. Then, suddenly, your monitoring tool raises the alert that the database load is currently too high and the service aligned with the job will fail, or at least slow down. Starting a manual investigation at this point is a kamikaze mission. But if you have pattern-based rules, you can define options which can be run through automatically. Note that you need a lot of insight into the whole system to answer the question of whether to reschedule the job until the database load is under 50% or to immediately allocate additional resources on a virtual basis: 1) you have to know the latest possible time to start the job without causing subsequent errors, and 2) you have to evaluate the job and know all the job-related SLAs (Service Level Agreements) to decide whether it's even worth the effort to allocate additional resources.

Don’t forget: this insight must be available and must automatically lead to a decision when the alert happens. And even then you may be running out of time. Take the same job, scheduled not in five minutes but in two seconds, which in daily operations is often the remaining time between reaching the threshold (e.g. 80% CPU usage) and the service going down.
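The decision logic described above can be sketched in a few lines. This is only an illustration of the idea, not UC4's actual rule engine; all names, thresholds, and cost figures are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class JobContext:
    db_load_pct: float          # current database load in percent
    minutes_to_deadline: float  # slack before the latest possible start time
    sla_penalty: float          # cost of missing the job-related SLA
    extra_resource_cost: float  # cost of allocating additional virtual resources

def decide(ctx: JobContext, reschedule_threshold: float = 50.0) -> str:
    """Pattern-based rule: run if load is low, wait while there is slack,
    otherwise weigh the SLA penalty against the cost of extra capacity."""
    if ctx.db_load_pct < reschedule_threshold:
        return "run"  # load is already low enough: start the job
    if ctx.minutes_to_deadline > 0:
        return "reschedule"  # wait for the load to drop below the threshold
    # Out of slack: only pay for extra capacity if the SLA is worth it
    if ctx.sla_penalty > ctx.extra_resource_cost:
        return "allocate-resources"
    return "accept-delay"
```

The point of the example is that every branch depends on insight that must already be available when the alert fires: the deadline, the SLA value, and the cost of the virtual resources.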

That’s why UC4’s Application Assurance solution incorporates real-time data, insight into the complete end-to-end business or IT processes, and intelligent decision making. And that’s why real-time monitoring encompasses business indicators AND the infrastructure heartbeat, so that resources can be allocated predictively.

When talking about automation, people easily ignore the power of change and consider the contemplated processes as engraved in stone. This despite the fact that “change is not new and change is natural”, as Thomas L. Friedman pointed out in his thought-provoking book The World is Flat: “Change is hard. Change is hardest on those caught by surprise. Change is hardest on those who have difficulty changing too.”

Talking about change means talking about events, the secret currency of change, registering every single change of state. This is worth emphasizing because events are not only the drivers of today’s businesses and operations; they can occur everywhere, crossing platform, departmental and even enterprise borders.

Today you’re managing dynamic IT environments which are complex blends of physical, virtual, and cloud-based resources. In such environments transparency is key to staying agile and responsive. But even being reactive is not enough to keep your business situationally aware. To ensure that the processes are up to date and the engine is not automating errors and detours, any automation effort must be accompanied by an ongoing optimization effort.

The crux is that reaction and analysis mesh. Take lunch break at school as a real-world example: the bell rings and 10 seconds later everyone is standing in line at the cafeteria waiting to be served. Following the classical monitoring approach, cooking would start when the bell rings. Knowing more about the processes in the kitchen, the guys from UC4 start cooking two hours earlier, so everything is ready when the children come.

This kind of processing intelligence is key to avoiding overhead and running automated environments in a cost- and SLA-conscious way. Knowing the processes in the school, the ringing bell is a foreseeable event, so you had better not waste time and money on reducing the reaction time. Instead, it makes a lot of sense to monitor the cooking process as close to real time as possible. That ensures you have all the processing options available before the bell rings!
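The school-bell idea boils down to simple arithmetic: for a foreseeable event, start processing the known lead time in advance instead of reacting when the event fires. A minimal sketch, with times chosen purely for illustration:

```python
from datetime import datetime, timedelta

def predictive_start(event_time: datetime, lead_time: timedelta) -> datetime:
    """For a foreseeable event, schedule work lead_time in advance
    rather than reacting once the event has already happened."""
    return event_time - lead_time

# The cafeteria example: lunch bell at 12:00, cooking takes two hours
bell = datetime(2010, 1, 15, 12, 0)
start = predictive_start(bell, timedelta(hours=2))
# start == datetime(2010, 1, 15, 10, 0): cooking begins at 10:00
```

The hard part in real operations is not the subtraction, of course, but knowing the lead time of the process accurately enough to trust it.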

Knowing that change is a constant, not a variable, and that automation can only be effective if it is combined with intelligence, UC4’s Application Assurance solution incorporates real-time data, insight into the complete end-to-end business or IT processes, and intelligent decision making.

Have a look. It’s worth it!

Of course, December is always the time for predictions, especially when we are about to enter a new decade. No wonder that December also marks a time when new buzzwords are created. One of these is “UC4”. Don’t laugh! Silicon Republic predicts that “UC4 is set to dominate the CIO’s agenda 2010”! You can imagine how I stumbled at first when I read this headline. 😉 But seriously, what does the new year hold, besides “Unified Communication, Collaboration and Contact Centre” (UC4)?

I will not contribute to this discussion with another buzzword. I just want to predict that it will, above all, probably be the big year of the user. This aligns with Brian Duckering, who predicts for 2010 that “management methods shift from system-based to user-based: Managing systems has always worked just fine. But it has gotten a lot more complicated and costly as users become more mobile and less predictable, demanding that their workspaces follow them from one device to another, seamlessly. For many this has caused a re-evaluation of what the purpose of IT actually is. The systems don’t create value for companies – the users do. Yet, the tools and methods predominantly deployed target devices, not people.”

Take a look at the still-maturing virtualization market. Forrester predicts that server virtualization will grow from 10% in 2007 to 31% in 2008 to 54% in 2011. That’s a pretty impressive growth rate, of course, but actually nothing compared to the explosion Gartner expects for the number of virtualized PCs in the same period: it will increase more than a hundredfold, from 5 million in 2007 to 660 million in 2011.

2010 will possibly be the year when the hyped technologies around virtualization hit the front end, where the user is waiting. This can also cause trouble, especially without transparent process management. Because “virtualization without good management is more dangerous than not using virtualization in the first place”, as Gartner analyst Tom Bittman put it in a nutshell a year ago.

Hope you are ready for 2010!

Have you ever heard of the Global Information Industry Center (GIIC)? It’s part of the University of California, San Diego, situated close to the place where UC4 customers gathered for the annual user conference some weeks ago. They just published a new 2009 report on American consumers (entitled “How Much Information?”), trying to create a census of all forms of information an average American consumes in a single day.

Want to guess how much? It’s 34 gigabytes of content and 100,000 words of information in a single day.

The New York Times twists the knife in the wound, pointing out that this “doesn’t mean we read 100,000 words a day — it means that 100,000 words cross our eyes and ears in a single 24-hour period. That information comes through various channels, including the television, radio, the Web, text messages and video games.”

But why do we have this voracious appetite for information? The answer is maybe a whole lot simpler than you would think: because what we mainly eat is instant data, not nutritious information! It seems time for a diet, even on the business side. Business processes nowadays are accompanied by myriads of event-driven data, while at the same time we have to govern them almost in real time. In a situation like this, data is not enough. What we need are digestible pieces of information combined with pattern recognition capabilities.
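Turning a raw event stream into digestible information is, at its simplest, an aggregation step with a crude pattern check on top. A toy sketch, with event names and the threshold invented for illustration:

```python
from collections import Counter

def summarize(events: list, alert_threshold: int = 3) -> dict:
    """Condense a raw stream of events into a small digest:
    totals, counts per type, and any type that repeats often
    enough to count as a pattern worth looking at."""
    counts = Counter(events)
    return {
        "total": len(events),
        "by_type": dict(counts),
        "alerts": [e for e, n in counts.items() if n >= alert_threshold],
    }

stream = ["login", "db-timeout", "db-timeout", "login", "db-timeout"]
info = summarize(stream)
# info["alerts"] == ["db-timeout"]: three timeouts form a pattern,
# while the individual events on their own are just noise
```

Real pattern recognition over business events is obviously far richer than counting, but the shape is the same: fewer, denser pieces of information instead of the raw feed.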

Our diet plan is simple: less junk data and more information bites. If you want to know what we use in the kitchen, get some UC4 Insight on our website. You will like the taste.

It was Thomas Samuel Kuhn (1922-1996), one of the most influential philosophers of science of the twentieth century, who revolutionized our picture of science in The Structure of Scientific Revolutions by claiming that, according to his paradigm concept, a mature science experiences alternating phases of normal science and revolutions: “In normal science the key theories, instruments, values and metaphysical assumptions that comprise the disciplinary matrix are kept fixed, permitting the cumulative generation of puzzle-solutions, whereas in a scientific revolution the disciplinary matrix undergoes revision, in order to permit the solution of the more serious anomalous puzzles that disturbed the preceding period of normal science.”

What this quotation from the Stanford Encyclopedia of Philosophy conceals is that in normal science the questions are also kind of fixed and prefabricated, and that during these periods scientists normally just raise questions to which they already know the answers.

Our situation is not normal at all. It is a situation of fundamental change – and this not just because of the crisis. Mark McDonald from Gartner knows this. And he knows about the importance of raising the right questions. Questions which can move us forward: “Great CIOs ask good questions pretty much all the time. A good question is one that creates knowledge and shares understanding. A good question makes both parties smarter. Most questions are not great questions. Helpful yes, but they simply exchange information from one side to the other.”

I don’t want to withhold from you the following rough typology of great questions that Mark McDonald gives …

• Logic checking questions – If that is true, then these other things must be false?
• Implications based questions – So given this issue, we are also seeing these other things happening?
• Proof of fact questions – So how do you know the issue is happening, and what are the consequences?
• Forward looking questions – So given all of that, what are the next steps or actions you suggest we take?

… also because most process optimization efforts follow the same steps.

By the way, Mark McDonald recently did a whole series of posts about what makes a great CIO.

“Over the next five years, IT automation will overtake offshoring as the next major efficiency trend in IT.” This is how Ken Jackson, President of Americas at UC4, starts his article about The Dawning of the IT Automation Era. This is surprising just for those who either consider offshoring as the universal answer to all cost reduction challenges or think that cost reduction is the only target of IT automation.

But in a world where IT environments are becoming more and more complex, “squeezing every bit of extra cost out of your IT budget” and thereby leaving IT professionals with a “bare bones operating plan” is a tactic which is not sustainable at all. It’s like engaging in a rulebook slowdown while neglecting the fact that IT can really boost your business and ensure accurate service delivery.

The truth behind this is simple: you need money to invest in cost-saving technologies, because keeping IT systems just up and running is not enough. If you don’t want to throw the baby out with the bath water, you have to develop business innovation abilities and cost avoidance strategies jointly.

The answer to complexity is process visibility, combined with real-time intelligence and just-in-time execution. This will help you squeeze every bit of value, instead of cost, out of your IT budget.

And this is what it’s all about.

For more information on the “several factors contributing to the coming age of IT automation”, read Ken Jackson’s inspiring article.

Less than three weeks ago, the SOA thought-leader community announced The SOA Manifesto as part of the closing keynote of the 2nd International SOA Symposium in Rotterdam.

These seem to be good times for manifestos, granted, but do we need to wrap the SOA approach into a manifesto too? Now that the technological benefits of SOA applications over existing legacy applications have been well documented? Now that it is well known that SOA offers business benefits across applications and platforms: location-independent services, a loosely coupled approach that provides agility, dynamic discovery of other services, and reusable services?

According to Joe McKendrick, “SOA is not a thing that you do, and it definitely isn’t a thing that you buy … but something intangible, a style if you will, just as Roman, Greek or Modern are styles of architecture.” This perfectly paraphrases the very first statement of the SOA Manifesto, which philosophically states that service orientation is more an attitude, “a paradigm that frames what you do.”

The good thing about the Manifesto is that it reminds us of the cultural implications of a SOA approach, and that it plays nicely with the principle that “SOA is not made out of products”. But on the other hand, we should not neglect the technical challenges, of course. After all, over 50% of the processing in the current application landscape is background processing.

That is why we need interfaces that guarantee the inclusion of background processes in the SOA business process. That is why we need intelligent Workload Automation tools based on Web Services to initiate, monitor, and manage background processes. And that is why UC4 customers prefer an Automation Broker (here is the Whitepaper for Download) to bridge between SOA vision and batch reality, rather than being part of a platonic style symposium.
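To make the initiate/monitor/manage triad concrete, here is a minimal in-memory sketch of a broker front end. In practice such a component would be exposed as a Web Service so SOA processes can call it; this is not UC4's actual Automation Broker API, and all class and method names are invented for illustration.

```python
import uuid

class AutomationBroker:
    """Toy broker that initiates, monitors, and manages background
    processes, the three operations a SOA process needs in order to
    include batch work in a business process."""

    def __init__(self):
        self._jobs = {}  # job_id -> {"name": ..., "state": ...}

    def initiate(self, job_name: str) -> str:
        """Start a background process and return a handle for it."""
        job_id = str(uuid.uuid4())
        self._jobs[job_id] = {"name": job_name, "state": "running"}
        return job_id

    def monitor(self, job_id: str) -> str:
        """Report the current state of a background process."""
        return self._jobs[job_id]["state"]

    def manage(self, job_id: str, action: str) -> None:
        """Apply a lifecycle action ("cancel" or "complete") to a job."""
        if action == "cancel":
            self._jobs[job_id]["state"] = "cancelled"
        elif action == "complete":
            self._jobs[job_id]["state"] = "finished"

broker = AutomationBroker()
job = broker.initiate("nightly-batch")
# broker.monitor(job) reports "running" until the job is managed to completion
```

The interesting design point is the handle: because `initiate` returns an identifier, the calling business process stays loosely coupled to the batch work and can check on it or intervene later, which is exactly the bridge between SOA vision and batch reality the post argues for.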

I am looking forward to your comments!

Here is the video of the announcement:

You can find a complete list of the authors at the bottom of the manifesto, or view a picture of everybody on stage at the signing of the SOA Manifesto.