Archive for the ‘Complex Event Processing’ Category

Nowadays, enterprise IT landscapes are more and more often a mix of physical, virtual and cloud environments. A single point of management is therefore a prerequisite for meeting SLAs and for ensuring that business processes crossing platform, application and even physical borders are completed on time.
The funny thing is: as long as we lack visibility, we keep thinking in terms of hurdles and obstacles. But the moment we can manage physical, virtual and even cloud resources and applications from a single pane of glass, we can outpace disruptions and unify multiple jobs into one coherent process flow.

But even then we are not at the finish line, because at the very moment we achieve this coherence another effect appears – a boost in performance, built from end-to-end visibility, seamless workload distribution and unprecedented processing power. Looking more closely at these pillars of intelligent service automation, one might be reminded of another concept – a connection I came across in a post by Theo Priestley, an independent analyst and BPM visionary:

“Who remembers SETI@home, the project run by SETI to harness internet-connected PCs across the globe to help analyse signals from space? It was an early and successful attempt at mass distributed (or grid) computing using a small piece of software to use latent CPU cycles on client machines when the screensaver was engaged.

Now jump forward and the question is why hasn’t anyone taken this concept into the enterprise and into the BPM world itself? If you can imagine the many desktops that exist in an organisation sitting fairly idle when they could act as a BPM grid project to:

  • analyse, predict and act upon real-time data,
  • alter business rules on the fly,
  • create intelligent workflows,
  • perform background simulation and CEP

Why bother with expensive server hardware (and future upgrades etc) when there’s potentially far more power sitting across the organisation not being fully utilised? Are there any examples of this in the BPM industry currently, if so would be good to hear about it.”

Yes, Theo, there are examples – potential case studies are queuing up at our door. It seems to me that we have, almost without noticing, adapted this grid concept to the enterprise. In any case, technologically we are ready.
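For illustration only, here is a minimal sketch of the grid idea in Python – a coordinator queue and a few “desktop agents” that pull analysis tasks only when they report themselves idle. The agent names, the idle check and the over-threshold analysis are all invented for the sketch, not part of any real BPM grid product:

```python
import queue
import threading

# Central work queue plus a shared result list, protected by a lock.
tasks = queue.Queue()
results = []
results_lock = threading.Lock()

def cpu_is_idle():
    # Stand-in for a real idle check (e.g. sampling CPU load);
    # here every agent always reports idle so the demo completes.
    return True

def desktop_agent(agent_id):
    # Each idle desktop pulls events and "analyses" them:
    # flag readings above an (invented) threshold of 50.
    while True:
        try:
            event = tasks.get(timeout=0.1)
        except queue.Empty:
            return
        if cpu_is_idle():
            with results_lock:
                results.append((agent_id, event, event > 50))
        else:
            tasks.put(event)  # hand it back for a less busy agent
        tasks.task_done()

for reading in [12, 87, 45, 99, 3]:
    tasks.put(reading)

agents = [threading.Thread(target=desktop_agent, args=(i,)) for i in range(3)]
for a in agents:
    a.start()
for a in agents:
    a.join()

print(len(results))  # all five readings analysed -> 5
```

The point of the sketch is only the shape of the idea: work is pushed to a queue once, and whichever machine happens to have spare cycles picks it up.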



My last post dealt with monitoring and insight, reacting and optimizing – the two sides of the automation coin. Monitoring and reacting alone are not enough when you are dealing with events; you also have to analyze and predict them as far as possible.

Especially if the event occurs in the shape of an error. Thinking about application assurance is thinking about how to handle change – and not necessarily about how to deal with the alerts or trouble tickets which pop up in your IT monitoring or business service management solution. Because once the problem occurs, you are already on the reaction side of the automation coin, trying to reduce the time it takes to fix it. The better and more sustainable approach to change is to think about how we can flip this coin and prevent errors before they occur.

Of course, there is no perfect situation, and unforeseeable events happen all the time. Therefore, you will never get rid of the monitoring and reaction side. But to talk seriously about application assurance, you should at least keep an eye on both what is currently going on and what is coming up.

Proper alert reaction needs insight
Take for example a job which is scheduled to start in 5 minutes. Then, suddenly, your monitoring tool raises an alert: the database load is too high at the moment, and the service aligned with the job will fail or at least slow down. Starting a manual investigation at this point is a kamikaze mission. But if you have pattern-based rules, you can define options which can be run through automatically. Note that you need a lot of insight into the whole system to decide whether to reschedule the job until the database load is under 50% or to immediately allocate additional resources on a virtual basis: 1) you have to know the latest possible time to start the job without causing subsequent errors, and 2) you have to evaluate the job and know all the job-related SLAs (Service Level Agreements) to decide whether it is even worth the effort to allocate additional resources.

Don’t forget: this insight must be available and must automatically lead to a decision the moment the alert happens. And even then you may be running out of time. Take the same job scheduled not in 5 minutes but in two seconds – which in daily operations is often all the time that remains between reaching a threshold (e.g. 80% CPU usage) and the service going down.
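Such a pattern-based rule can be sketched in a few lines of Python. The function name, the 50% threshold and the SLA penalty figure are invented for illustration; the point is only that the decision combines the load alert, the job’s latest possible start time and the SLA value:

```python
from datetime import datetime, timedelta

def decide(now, scheduled_start, latest_start, db_load_pct, sla_penalty):
    """Pick an action for a job whose database is under heavy load."""
    if db_load_pct < 50:
        return "start as scheduled"       # load is fine, no intervention
    if now < latest_start:
        # There is still slack before subsequent errors would occur:
        # wait for the load to drop instead of spending money.
        return "reschedule within slack window"
    if sla_penalty > 0:
        # Breaching the SLA costs more than extra capacity:
        # spin up additional (virtual) resources instead.
        return "allocate additional resources"
    return "let the job wait"             # no SLA at stake

now = datetime(2010, 1, 1, 12, 0)
print(decide(now, now + timedelta(minutes=5),
             now + timedelta(minutes=30), 82, sla_penalty=1000))
# high load, but slack remains -> "reschedule within slack window"
```

The same rule reaches a different verdict when the slack is gone: with no time left and a real SLA penalty, it allocates resources instead of waiting.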

That’s why UC4’s Application Assurance solution incorporates real-time data, insight into the complete end-to-end business or IT processes, and intelligent decision making. And that’s why real-time monitoring encompasses business indicators AND the infrastructure heartbeat, so resources can be allocated predictively.


When talking about automation, people easily ignore the power of change and treat the processes under consideration as if they were engraved in stone – despite the fact that “change is not new and change is natural”, as Thomas L. Friedman pointed out in his thought-provoking book The World Is Flat: “Change is hard. Change is hardest on those caught by surprise. Change is hardest on those who have difficulty changing too.”

Talking about change means talking about events – the secret currency of change, counting every single change of state. This is worth emphasizing because events are not only the drivers of today’s businesses and operations; they can occur everywhere, crossing platform, departmental and even enterprise borders.

Today you’re managing dynamic IT environments which are complex blends of physical, virtual and cloud-based resources. In such environments, transparency is key to staying agile and responsive. But being reactive alone is not enough to keep your business situationally aware. To ensure that the processes stay up to date and the engine is not automating errors and detours, any automation effort must be accompanied by an ongoing optimization effort.

The crux is that reaction and analysis mesh. Take the lunch break at school as a real-world example: the bell rings, and 10 seconds later everyone is standing in line at the cafeteria waiting to be served. Following the classical monitoring approach, cooking would start when the bell rings. Knowing more about the processes in the kitchen, the folks from UC4 start cooking two hours earlier – so everything is ready when the children arrive.

This kind of processing intelligence is key to avoiding overhead and running automated environments in a cost- and SLA-conscious way. Knowing the processes in the school, the ringing bell is a foreseeable event, so you had better not waste time and money on reducing the reaction time. Instead, it makes a lot of sense to monitor the cooking process as close to real time as possible. That ensures you have all the processing options available – before the bell rings!
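The lunch-bell scheduling can be reduced to one subtraction: for a foreseeable event, start the process its known duration ahead of the event instead of reacting when the event fires. A tiny sketch, with all times invented:

```python
from datetime import datetime, timedelta

def reactive_start(event_time):
    # Classical monitoring: act only when the bell rings.
    return event_time

def predictive_start(event_time, process_duration):
    # Predictive scheduling: start early enough to be ready on time.
    return event_time - process_duration

bell = datetime(2010, 1, 1, 12, 0)   # lunch bell at noon
cooking = timedelta(hours=2)         # known duration of the kitchen process

print(predictive_start(bell, cooking))  # prints 2010-01-01 10:00:00
```

Trivial as the arithmetic is, it only works if the process duration is known – which is exactly the insight into the kitchen that the post is arguing for.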

Knowing that change is a constant, not a variable, and that automation can only be effective if it is combined with intelligence, UC4’s Application Assurance solution incorporates real-time data, insight into the complete end-to-end business or IT processes, and intelligent decision making.

Have a look. It’s worth it!


Have you ever heard of the Global Information Industry Center (GIIC)? It is part of the University of California, San Diego – situated close to the place where UC4 customers gathered for their annual user conference a few weeks ago. The center just published its 2009 report on American consumers (entitled “How Much Information?”), an attempt to create a census of all the forms of information an average American consumes in a single day.

Want to guess how much? It’s 34 gigabytes of content and 100,000 words of information in a single day.

The New York Times twists the knife in the wound, pointing out that this “doesn’t mean we read 100,000 words a day — it means that 100,000 words cross our eyes and ears in a single 24-hour period. That information comes through various channels, including the television, radio, the Web, text messages and video games.”

But why do we have this voracious appetite for information? The answer may be a whole lot simpler than you think: because what we mainly eat is instant data, not nutritious information! It seems to be time for a diet – even on the business side. Business processes nowadays are accompanied by myriads of event-driven data, while at the same time we have to govern them almost in real time. In a situation like this, data alone is not enough. What we need are digestible pieces of information combined with pattern recognition capabilities.

Our diet plan is simple: less junk data and more information bites. If you want to know what we use in the kitchen, get some UC4 Insight on our website. You will like the taste.


The Gartner Symposium/ITxpo 2009 we attended in Orlando not only endorsed the big hype around virtualization and cloud computing, but also our ongoing investments in service-aware process automation – offering real-time intelligence for just-in-time execution. It was a perfect match that Gartner analyst Roy Schulte and K. Mani Chandy, professor at the California Institute of Technology in Pasadena, used this event to introduce their brand-new book, “Event Processing: Designing IT Systems for Agile Companies”, on the business drivers, costs and benefits of event-processing applications.

According to Mr. Schulte and Mr. Chandy, the new aspirations in situation awareness and reaction accuracy can’t be achieved by simply speeding up traditional business processes or exhorting people to work harder and smarter with conventional applications. Instead, they urge companies to make fundamental changes in the architecture of business processes and the application systems that support them by making more use of the event-processing discipline. “While a typical business process has time-driven, request-driven and event-driven aspects, event-driven architecture (EDA) is underutilized in system design, resulting in slow and inflexible systems,” said Mr. Chandy. “Event-driven systems are intrinsically smart because they are context-aware and run when they detect changes in the business world rather than occurring on a simple schedule or requiring someone to tell them when to run.”

“Event-driven CEP is a kind of near-real-time business intelligence (BI), a way of ‘connecting the dots’ to detect threats and opportunities,” explained Mr. Schulte. “By contrast, conventional BI is time-driven or request-driven. Complex events may be reactive, summarizing past events, or predictive, identifying things that are likely to happen based on what has happened recently compared with historical patterns.”
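The contrast between running on a schedule and running when a change is detected can be sketched as a minimal publish/subscribe dispatcher. The event name and handler below are invented for illustration; real EDA platforms are of course far richer:

```python
# Minimal event dispatcher: handlers registered per event type.
handlers = {}

def on(event_type, fn):
    # Subscribe a handler to an event type.
    handlers.setdefault(event_type, []).append(fn)

def emit(event_type, payload):
    # Event-driven: handlers run the moment a state change is
    # detected, not on a polling schedule and not on request.
    for fn in handlers.get(event_type, []):
        fn(payload)

alerts = []
on("stock_low", lambda item: alerts.append(f"reorder {item}"))

emit("stock_low", "widgets")   # the state change drives the reaction
print(alerts)                  # prints ['reorder widgets']
```

Nothing polls here: until `emit` fires, no handler consumes any cycles – which is exactly the “run when they detect changes” property Mr. Chandy describes.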

Nothing to add. UC4 can deliver!


Sometimes industry hypes reciprocally reinforce each other, and sometimes they coexist so closely that the question “Who was first?” verges on the chicken-or-egg causality dilemma. With virtualization and cloud computing it’s different. Of course they do “hype” each other, but the concept of cloud computing is not even thinkable without virtualization technologies in mind – a concept defined by the U.S. National Institute of Standards and Technology (NIST) as “a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources … that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

I found this definition while going through a brand-new EMA white paper titled “Achieving Virtualization Control with Intelligent Service Automation”. In this study, EMA researcher Andi Mann develops the argument that efficient use and dynamic provisioning of resources depend on service-aware automation technologies.

Workloads don’t care whether they run on physical or virtual systems. That’s why – according to Andi Mann – automated virtual service management is the basis of any serious cloud approach. “It sets the stage for well-managed cloud computing services, with two essential components. Firstly, the virtual infrastructure provides a turnkey approach to flexibility, agility, and scalability, and the essential convenient, on-demand configuration of shared computing resources. This is difficult, and perhaps impossible, with a tightly-coupled physical environment. Secondly, intelligent and sophisticated automation is essential to ensuring minimal management effort or service provider interaction, which would be impossible with a manual management approach.”

And that’s why the UC4 Intelligent Service Automation platform is an important building block for cloud computing: 1) it initiates the dynamic distribution of workloads from within the process; 2) it immediately integrates newly provisioned systems into your daily recurring housekeeping routines for backup and maintenance; 3) it brings real-time intelligence to cloud computing by acting on, not reacting to, events inside the applications; and finally 4) it provides an end-to-end view for predictive process management, integrating the physical and virtual worlds.


“We recently had to replace the server in my office. It was seven years old and one of the hard drives failed. It was not an expenditure I expected to have this year. My IT guy said that my desktop is seven years old. He also informed me that half the machines in the office are between five and eight years old and that I should budget to replace all of them next year. If we did not have a weak economy, I would normally have replaced these machines after five years of service. I think my business is typical of many businesses around the world.” This is how Ronald Roge, chairman of R. W. Roge & Company, a highly regarded wealth management firm, describes the situation.

The related article on forbes.com is about the IT market and the pent-up demand in many firms after the crisis of the last year or two. The good news: according to 2010 forecasts, industry experts expect the cork to pop this year.

What strikes me is that the article talks mainly about IT lifecycles and replacement procedures, and not about how technology itself has changed in the last two years. Take virtualization technologies and the role they can play – even in a tense economic situation – as the key to doing more with less: to reducing hardware, space and energy demands and to making your business more available, more agile and more productive.

Of course, that only works once you have done your management homework on integrating physical and virtual environments in a consistent way. With VMware it becomes clear that dynamically provisioning new systems is not enough: unless they become part of your automation strategy, they remain outside your business processes, waiting for costly manual integration.

Talking about the cork that is supposed to pop next year, we should also talk about the deadlocks threatening virtualization efforts. We should talk about virtual machine sprawl and costly process interruptions at the intersections between virtual instances and physically deployed systems.

And we should underline that it is not enough to check the status of your server hardware the moment you want to dynamically provision workloads; you have to go deeper – into the application layer that correlates and acts on events – to bring real-time intelligence and real-time dynamics to virtual and cloud computing environments. This is obviously about much more than fulfilling IT lifecycles. It is about treating IT as a strategic asset, not as a cost center.

