Archive for the ‘SOA’ Category

Everybody talks about virtualization. The technology is hyped and the doubters are deliberately ignored. But let us be honest: virtualization necessarily introduces new abstraction levels, which in turn impose new constraints on how systems are handled. That’s why Gartner analyst Thomas Bittman noted some time ago that “virtualization without good management is more dangerous than not using virtualization in the first place.”

This is not about inventing a new discipline, as Forrester Research points out in a brand new report entitled “Managing the Virtual World is an Evolution, not a Revolution”: “Process discipline and streamlined management automation were already operational mandates, but the advent of virtualization on industry-standard servers exacerbates these requirements. Invest wisely in these technologies to avoid getting stranded with limited point solutions that offer little hope of integration into the broader initiatives for operational excellence.”

The doubters might become even more suspicious and ask why this report stresses the common-sense point that systems management cannot succeed with fragmented tools and without a holistic approach at the process level. And what does the distinction between EVOLUTION and REVOLUTION bring to the customer, or to the CIO dealing with the backlash of virtualization?
Reading the review of the Forrester report by Denise Dubie, former senior editor at Network World, the four listed key product categories for IT managers who want to control a virtual environment seem artificially separated.

Of course, there are 1) the provisioning part, 2) the capacity management part, 3) the performance part, and 4) the automation part. But the fact is that the essential question in virtual environments is not so much a complete list of all the disciplines as how these are interconnected. Because in enterprise reality, the provisioning issue is part of the performance issue, with capacity management as a prerequisite. Seen this way, automation is not just another category. It is what glues all these parts together – not just in the virtual environment but across physical and virtual borders.

Between the lines, the report proves again that automation is not just the answer to real-time challenges in dynamic markets. It is the only way to deal with the complex interdependencies of hybrid and service-oriented environments: “Many IT services, like virtualization management, are reaching a level of complexity where sophisticated mathematical algorithms and object models of the servers are more precise and efficient than even your most talented engineers.”

To learn more about Virtualization Automation, view the UC4 Tour on this topic.


Read Full Post »

People like to think in “either-or” solutions, trying to make their lives easier, and maybe unconsciously trusting the paradox that more choices may lead to a poorer decision (the Paradox of Choice). This is in spite of a reality which often follows a fuzzier, more compromising “both-and” logic.

Take the hype about cloud computing. Although the world is full of cloud apologists nowadays, one should bear in mind that the cloud market is still nascent: so far, only 4 percent of small businesses and enterprises in the United States and Europe are taking advantage of it. From a management point of view, this means that in the near future we will have to deal with the reality of hybrid environments and ever more processes connecting physical, virtual and/or cloud-based platforms.

Bob Muglia, president of Microsoft’s Server and Tools Business, proves that serious cloud providers like Microsoft share this view: “There’s more than one flavor of cloud computing, including private clouds that run on a business’s on-site servers. And it needn’t be an all-or-nothing proposition; we expect customers to want to integrate on-premises datacenters with an external cloud”.

One needs to keep this reality in mind when evaluating the new “Agent for Web Services” UC4 unveiled some weeks ago. In a conversation I had last week with Vincent Stueger, Chief Technology Officer of UC4 Software, he told me why this Agent is “a really big and promising step. Because it’s much more than offering a simple web interface to remotely start a job or change a parameter. With this agent you can seamlessly monitor and control a process from the ground to the cloud and back.”

If Milind Govekar, Research Vice President at Gartner, is to be believed, this bridging capability will decide not just the future of automation, but also the future of the cloud: “The ability of the technology platform to manage a hybrid environment consisting of legacy and cloud applications with a single automation platform will be needed for clearing the way for greater adoption of cloud-based models.”

The cloud is not our destiny, but it brings a big choice – if we are able to provide the bridges.

Read Full Post »

Don’t worry, I don’t want to open a new chapter in the “chicken or the egg” causality dilemma. But when I stumbled upon an argument by Bernard Golden – the author of the famous book Virtualization for Dummies – I was briefly reminded of a dead-end street called “Business/IT alignment” we walked down some months ago.

Forget the wall separating IT and business. There is no such thing. It is no longer true that business pre-exists and IT is just a representation of what is happening on the business side. Therefore any “paving the cow paths” approach to computing will fall short, as Golden emphasizes in his brilliant article:

“In the past, IT was used to automate repeatable business processes — taking something that already exists and computerizing it. The archetype for this kind of transformation is ERP — the automation of ordering, billing, and inventory tracking. That “paving the cow paths” approach to computing is changing. Today, businesses are delivering new services infused and made possible by IT — in other words, creating new offerings that could not exist without IT capabilities.”

What Golden describes here is the end of the reactive model of IT as we know it – where someone acts and IT reacts. It is not a pull approach anymore but a push approach – where location-sensitive devices and mashed-up applications interact with each other as part of data-driven processes.

Dynamization changes not only the application architecture, but also the requirements service-aware automation technologies must meet. In these highly variable environments, applications will – according to Golden – need to “dynamically scale,” “to gracefully and dynamically add new data streams as inputs,” and “to rapidly shift context and data sets.”

Does this new variability sound familiar to you? No wonder: it’s the linchpin of UC4’s Intelligent Service Automation. Workloads need to be distributed dynamically, triggered out of the process itself, because events are the heartbeats of modern applications. That’s why workloads adhere neither to system borders nor to business hours. They just don’t care.
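This event-driven, rather than clock-driven, model can be illustrated with a minimal dispatcher sketch in Python. The class, event names, and handlers below are invented for illustration; they do not represent any UC4 API:

```python
from collections import defaultdict

class EventDrivenDispatcher:
    """Minimal sketch of event-driven (rather than clock-driven)
    workload distribution: workloads subscribe to event types, and
    any change of state can trigger work. Names are illustrative."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event_type, handler):
        """Subscribe a workload to a type of event."""
        self._handlers[event_type].append(handler)

    def emit(self, event_type, payload):
        # Any change of state -- a file arrival, a threshold breach,
        # an order placed -- fans out to every subscribed workload.
        return [handler(payload) for handler in self._handlers[event_type]]

# Two workloads react to the same event, regardless of the clock.
dispatcher = EventDrivenDispatcher()
dispatcher.on("file_arrived", lambda p: f"load {p['path']}")
dispatcher.on("file_arrived", lambda p: f"archive {p['path']}")
results = dispatcher.emit("file_arrived", {"path": "orders.csv"})
```

The point of the sketch is that no schedule appears anywhere: work starts when the event fires, whether that is during business hours or not.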

Read Full Post »

My last post dealt with monitoring and insight, reacting and optimizing as the two sides of the automation coin. Monitoring and reacting are not enough when you are dealing with events; you also have to analyze and predict them as far as possible.

Especially when the event takes the shape of an error. Thinking about application assurance means thinking about how to handle change – and not necessarily about how to deal with the alerts or trouble tickets that pop up in your IT monitoring or business service management solution. Because when the problem occurs, you are already on the reaction side of the automation coin, trying to reduce the time it takes to fix the problem. The better and more sustainable approach to change is to think about how we can turn this coin and prevent errors before they occur.

Of course, there is no perfect situation, and unforeseeable events happen all the time. Therefore, you will never get rid of the monitoring and reaction side. But to talk seriously about application assurance, you should at least be able to keep an eye on both – what is currently going on and what is coming up.

Proper alert reaction needs insight
Take, for example, a job which is scheduled to start in 5 minutes. Then, suddenly, the alert comes from your monitoring tool that the database load is too high at the moment and the service aligned with the job will fail, or at least slow down. Starting a manual investigation of the case is a kamikaze mission. But if you have pattern-based rules, you can define options which can be run through automatically. Note that you need a lot of insight into the whole system to answer the question of whether to reschedule the job once the database load is under 50% or to immediately allocate additional resources on a virtual basis. 1) You have to know the latest possible time to start the job without causing subsequent errors. And 2) you have to evaluate the job and know all the job-related SLAs (Service Level Agreements) to decide whether it is even worth the effort to allocate additional resources.

Don’t forget: this insight must be available and must automatically lead to a decision when the alert happens. And even then you may be running out of time. Take the same job, scheduled not in 5 minutes but in two seconds – which in daily operations is often the time remaining after you have reached the threshold (e.g. 80% CPU usage) and the service is down.
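The decision logic in the example above can be made concrete in a few lines of Python. Everything here – the field names, the 50% load threshold, the cost figures – is an illustrative assumption for the sake of the sketch, not UC4 functionality:

```python
from datetime import datetime, timedelta

def handle_db_load_alert(job, db_load, now):
    """Decide how to react when a high-database-load alert arrives
    shortly before a scheduled job. Purely illustrative logic."""
    # 1) Latest possible start without causing subsequent errors.
    latest_start = job["deadline"] - job["expected_runtime"]

    if db_load < 0.5:
        return "run_as_scheduled"            # load already acceptable

    if now < latest_start:
        # Still some slack: wait for the load to drop below 50%.
        return "reschedule_until_load_below_50"

    # 2) No slack left: is the job's SLA worth extra virtual resources?
    if job["sla_penalty"] > job["extra_resource_cost"]:
        return "allocate_additional_resources"
    return "accept_delay"                    # cheaper to miss the window

# Example: a job due at 02:00 with 10 minutes of expected runtime.
job = {
    "deadline": datetime(2025, 1, 1, 2, 0),
    "expected_runtime": timedelta(minutes=10),
    "sla_penalty": 500.0,         # assumed cost of a missed SLA
    "extra_resource_cost": 50.0,  # assumed cost of extra capacity
}
decision = handle_db_load_alert(job, db_load=0.8,
                                now=datetime(2025, 1, 1, 1, 40))
```

Note how both pieces of insight from the post – the latest possible start time and the SLA evaluation – have to be machine-readable before the alert arrives, otherwise no automatic decision is possible in time.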

That’s why UC4’s Application Assurance solution incorporates real-time data, insight into the complete end-to-end business or IT processes, and intelligent decision making. And that’s why real-time monitoring encompasses business indicators AND the infrastructure heartbeat, in order to allocate resources predictively.

Read Full Post »

When talking about automation, people easily ignore the power of change and consider the processes under consideration as engraved in stone – in spite of the fact that “change is not new and change is natural,” as Thomas L. Friedman pointed out in his thought-provoking book The World is Flat: “Change is hard. Change is hardest on those caught by surprise. Change is hardest on those who have difficulty changing too.”

Talking about change means talking about events – the secret currency of change, registering every single change of state. This is worth emphasizing because events are not only the drivers of today’s businesses and operations; they can occur everywhere – crossing platform, departmental, and even enterprise borders.

Today you’re managing dynamic IT environments which are complex blends of physical, virtual, and cloud-based resources. In such environments, transparency is key to staying agile and responsive. But even being reactive is not enough to keep your business situationally aware. To ensure that the processes are up to date and the engine is not automating errors and detours, any automation effort must be accompanied by an ongoing optimization effort.

The crux is that reaction and analysis mesh. Take lunch break at school as a real-world example: the bell rings, and 10 seconds later everyone is standing in line at the cafeteria waiting to be served. Following the classical monitoring approach, cooking would start when the bell rings. Knowing more about the processes in the kitchen, the people at UC4 start cooking two hours earlier – so everything is ready when the children arrive.

This kind of processing intelligence is key to avoiding overhead and to running automated environments in a cost- and SLA-conscious way. Knowing the processes in the school, the ringing bell is a foreseeable event, so you had better not focus on reducing the reaction time, wasting time and money. On the other hand, it makes a lot of sense to monitor the cooking process as close to real time as possible. It ensures that you have all the processing options available – before the bell rings!
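The cafeteria lesson reduces to simple arithmetic: for a foreseeable event, subtract the known lead time of the process from the event time, instead of reacting when the event fires. A minimal sketch, with invented names:

```python
from datetime import datetime, timedelta

def planned_start(event_time, lead_time, foreseeable):
    """Foreseeable events get predictive scheduling: start early enough
    that everything is ready when the event fires. Surprises can only
    be handled reactively, when they occur."""
    if foreseeable:
        return event_time - lead_time   # ready exactly at event time
    return event_time                   # react as fast as possible

bell = datetime(2025, 1, 1, 12, 0)   # the lunch bell rings at noon
cooking = timedelta(hours=2)         # the kitchen needs two hours

start = planned_start(bell, cooking, foreseeable=True)
```

The design point is that the money goes into knowing the process (the two-hour lead time), not into shaving seconds off the reaction to the bell.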

Knowing that change is a constant, not a variable, and that automation can only be effective if it is combined with intelligence, UC4’s Application Assurance solution incorporates real-time data, insight into the complete end-to-end business or IT processes, and intelligent decision making.

Have a look. It’s worth it!

Read Full Post »

“Over the next five years, IT automation will overtake offshoring as the next major efficiency trend in IT.” This is how Ken Jackson, President of Americas at UC4, starts his article about The Dawning of the IT Automation Era. This is surprising only to those who either consider offshoring the universal answer to all cost reduction challenges or think that cost reduction is the only target of IT automation.

But in a world where IT environments are becoming more and more complex, “squeezing every bit of extra cost out of your IT budget” and thereby leaving IT professionals with a “bare bones operating plan” is a tactic that is not sustainable at all. It’s like engaging in a rulebook slowdown while neglecting the fact that IT can really boost your business and ensure accurate service delivery.

The truth behind this is simple: you need money to invest in cost-saving technologies, because keeping IT systems just up and running is not enough. If you don’t want to throw the baby out with the bathwater, you have to develop business innovation abilities and cost avoidance strategies jointly.

The answer to complexity is process visibility, combined with real-time intelligence and just-in-time execution. This will help you squeeze every bit of value – instead of cost – out of your IT budget.

And this is what it’s all about.

For more information on the “several factors contributing to the coming age of IT automation,” read Ken Jackson’s inspiring article.

Read Full Post »

It’s not even three weeks since the SOA thought-leader community announced The SOA Manifesto as part of the closing keynote of the 2nd International SOA Symposium in Rotterdam.

It seems to be a good time for manifestos, granted, but do we need to wrap the SOA approach into a manifesto too? Now that the technological benefits of SOA applications over existing legacy applications have been well documented? Now that it is well known that SOA offers business benefits across applications and platforms – including location-independent services, a loosely coupled approach providing agility, the dynamic discovery of other services, and reusable services?

According to Joe McKendrick, “SOA is not a thing that you do, and it definitely isn’t a thing that you buy … but something tangible, a style if you will, just as Roman, Greek or Modern are styles of architecture.” This perfectly paraphrases the very first statement of the SOA Manifesto, which philosophically states that service orientation is more an attitude, “a paradigm that frames what you do.”

The good thing about the Manifesto is that it reminds us of the cultural implications of a SOA approach, and that it plays nicely with the principle that “SOA is not made out of products.” But on the other hand, we should not neglect the technical challenges. Because the situation we are focusing on is the fact that over 50% of the processing in the current application landscape is background processing.

That is why we need interfaces that guarantee the inclusion of background processes into the SOA business process. That is why we need intelligent Workload Automation tools based on Web Services to initiate, monitor, and manage background processes. And that is why UC4 customers prefer an Automation Broker (here is the Whitepaper for download) to bridge between SOA vision and batch reality, rather than being part of a platonic style symposium.
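To illustrate what such a Web-Services interface to background processing might look like, here is a minimal in-memory stand-in for an automation broker. The class, method names, and job states are invented for the sake of the example and are not the actual UC4 Automation Broker API:

```python
import uuid

class AutomationBrokerStub:
    """In-memory stand-in for a Web-Services automation broker that
    lets a SOA business process initiate and monitor background
    (batch) jobs. Names and states are illustrative only."""

    def __init__(self):
        self._jobs = {}

    def start_job(self, name, parameters=None):
        """What a SOAP/REST 'start job' operation would return: a
        handle the calling process can use to track the batch run."""
        job_id = str(uuid.uuid4())
        self._jobs[job_id] = {"name": name,
                              "parameters": parameters or {},
                              "state": "RUNNING"}
        return job_id

    def get_status(self, job_id):
        """Lets the orchestration layer poll the background job."""
        return self._jobs[job_id]["state"]

    def complete(self, job_id, state="ENDED_OK"):
        # In a real broker the execution engine would set this;
        # here we simulate the job finishing.
        self._jobs[job_id]["state"] = state

# A SOA orchestration step: kick off a nightly batch run, then poll it.
broker = AutomationBrokerStub()
job_id = broker.start_job("NIGHTLY_BILLING", {"date": "2025-01-01"})
status_before = broker.get_status(job_id)
broker.complete(job_id)
status_after = broker.get_status(job_id)
```

The design point is the handle: once a background job is addressable and observable through a service interface, it stops being invisible batch work and becomes a first-class step in the SOA business process.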

I am looking forward to your comments!

Here is the video of the announcement:

You can find a complete list of the authors at the bottom of the manifesto, or view a picture of everybody on stage at the signing of the SOA Manifesto.

Read Full Post »
