Archive for the ‘Business Intelligence’ Category

After five years as the top CIO priority, Business Intelligence has dropped to fifth place in Gartner’s 2010 Executive Program survey. Reason enough for Gartner analyst Mark McDonald to take a closer look and find out what has happened behind the scenes. His conclusion is surprising and reassuring at the same time.

The good news is that BI is neither sick nor dead. The bad news for classical IT admins who still refuse to rack their brains over business-related topics is that BI as a technology is quietly being replaced by “Business Intelligence as a management capability” – which precisely reflects the shifting role of IT in general, and the fact that businesses nowadays need more than just technology to create value in complex, business-driven environments.

“Creating a business intelligence capability demands a broader set of people, processes and tools that work together to raise intelligence, analytics and business performance … The move from technology to capability is about time as most organizations have already progressed through the peak of their CAPEX curve and moved from buying solutions to applying solutions.”

Nowadays it seems more appropriate to talk about the “intelligent business” than about “business intelligence”. Forget about the cube specialists in their ivory towers. Business Intelligence has left pure theory behind, and intelligent businesses have already arrived at the front end, fostering seamless interaction between people, tools and processes.

That’s also why Josh Gingold and Scott Lowe from ZDNet can title their live webcast on the 20th of April “The new ROI: Return on Intelligence” – showing that while BI may have lost its position as a top technology priority in the Gartner survey, it is growing in importance to enterprise performance: “However, what many are now beginning to realize is that the return on an effective BI solution can actually exceed the cost of their entire IT budgets.”

UC4’s BI tools are tightly coupled to process performance. That’s our way of measuring this new kind of ROI.
We invite you to have a look!

Read Full Post »

My last post dealt with monitoring and insight, reacting and optimizing, as the two sides of the automation coin. Monitoring and reacting are not enough when you are dealing with events; you also have to analyze and predict them as far as possible.

Especially if the event takes the shape of an error. Thinking about application assurance is thinking about how to handle change – and not necessarily about how to deal with the alerts or trouble tickets that pop up in your IT monitoring or business service management solution. By the time the problem occurs, you are already on the reaction side of the automation coin, trying to reduce the time it takes to fix it. The better and more sustainable approach to change is to think about how to flip this coin and prevent errors before they occur.

Of course, there is no perfect situation, and unforeseeable events happen all the time. Therefore, you will never get rid of the monitoring and reaction side. But if you are serious about application assurance, you should at least keep an eye on both – what is currently going on and what is coming up.

Proper alert reaction needs insight
Take, for example, a job scheduled to start in five minutes. Suddenly your monitoring tool raises an alert: the database load is too high at the moment, and the service aligned with the job will fail or at least slow down. Starting a manual investigation of the case is a kamikaze mission. But if you have pattern-based rules, you can define options that can be run through automatically. Note that you need a lot of insight into the whole system to decide whether to reschedule the job until the database load is below 50% or to immediately allocate additional resources on a virtual basis: 1) you have to know the latest possible time to start the job without causing subsequent errors, and 2) you have to evaluate the job and know all of its SLAs (Service Level Agreements) to decide whether it is even worth the effort to allocate additional resources.

Don’t forget: this insight must be available and must automatically lead to a decision the moment the alert happens. And even then you may be running out of time. Take the same job scheduled to start not in five minutes but in two seconds – which in daily operations is often all the time that remains between hitting a threshold (e.g. 80% CPU usage) and the service going down.
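To make this concrete, here is a minimal sketch in Python of what such a pattern-based rule could look like. The alert fields, thresholds and option names are illustrative assumptions, not UC4 functionality.

from datetime import datetime, timedelta

# Illustrative thresholds -- not actual UC4 configuration.
DB_LOAD_LIMIT = 0.5                     # reschedule only while database load is above 50%
MIN_LEAD_TIME = timedelta(seconds=30)   # below this, rescheduling is no longer an option


def decide(alert, job):
    """Pick an automated reaction to a 'database load too high' alert.

    `alert` carries the current database load; `job` carries its latest
    possible start (to avoid follow-up errors), the business value of its
    SLA and the cost of scaling out.
    """
    time_left = job["latest_start"] - datetime.now()

    if time_left < MIN_LEAD_TIME:
        # Too late to wait for the load to drop -- either scale out or accept a miss.
        return "allocate_virtual_resources" if job["sla_value"] >= job["scaling_cost"] else "raise_incident"

    if alert["db_load"] > DB_LOAD_LIMIT:
        # Enough room in the schedule: wait until the load falls below 50%.
        return "reschedule_until_load_below_50"

    return "start_as_planned"


# Example: a job whose latest safe start is five minutes away, with a valuable SLA.
job = {
    "latest_start": datetime.now() + timedelta(minutes=5),
    "sla_value": 10_000,
    "scaling_cost": 200,
}
print(decide({"db_load": 0.85}, job))   # -> reschedule_until_load_below_50

The point is that every input to the decision – latest start time, SLA value, scaling cost – has to be known before the alert arrives, not looked up afterwards.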

That’s why UC4’s Application Assurance solution incorporates real-time data, insight into the complete end-to-end business or IT processes, and intelligent decision making. And that’s why its real-time monitoring encompasses business indicators AND the infrastructure heartbeat, so that resources can be allocated predictively.

Read Full Post »

When talking about automation, people easily ignore the power of change and treat the processes under consideration as carved in stone – in spite of the fact that “change is not new and change is natural”, as Thomas L. Friedman pointed out in his thought-provoking book The World Is Flat: “Change is hard. Change is hardest on those caught by surprise. Change is hardest on those who have difficulty changing too.”

Talking about change means talking about events – the secret currency of change, counting every single change of state. This is worth emphasizing because events are not only the drivers of today’s businesses and operations; they can occur everywhere, crossing platform, departmental and even enterprise borders.

Today you’re managing dynamic IT environments that are complex blends of physical, virtual and cloud-based resources. In such environments, transparency is key to staying agile and responsive. But reacting alone is not enough to keep your business situationally aware. To ensure that processes stay up to date and the engine is not automating errors and detours, any automation effort must be accompanied by an ongoing optimization effort.

The crux is that reaction and analysis mesh. Take the lunch break at school as a real-world example: the bell rings, and ten seconds later everyone is standing in line at the cafeteria waiting to be served. Following the classical monitoring approach, cooking would start when the bell rings. Knowing more about the processes in the kitchen, the people at UC4 start cooking two hours earlier – so everything is ready when the children arrive.

This kind of processing intelligence is key to avoiding overhead and running automated environments in a cost- and SLA-conscious way. Knowing the school’s schedule, the ringing bell is a foreseeable event, so you had better not waste time and money on shaving down the reaction time. It does, however, make a lot of sense to monitor the cooking process as close to real time as possible. That ensures you have all processing options available – before the bell rings!
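In code, the difference between the two approaches is a single subtraction – scheduling backwards from the foreseeable event instead of waiting for it. A minimal sketch, with the bell time and the two-hour lead time as stand-ins for whatever your own process dictates:

from datetime import datetime, timedelta

BELL_TIME = datetime(2010, 4, 20, 12, 0)   # the foreseeable event: lunch bell rings at noon
COOKING_LEAD_TIME = timedelta(hours=2)     # known duration of the kitchen process


def reactive_start(event_time: datetime) -> datetime:
    # Classical monitoring: act only once the event has already happened.
    return event_time                       # cooking starts at 12:00 -- two hours too late


def predictive_start(event_time: datetime, lead_time: timedelta) -> datetime:
    # Processing intelligence: schedule backwards from the foreseeable event.
    return event_time - lead_time           # cooking starts at 10:00, food is ready at the bell


print(reactive_start(BELL_TIME))                        # 2010-04-20 12:00:00
print(predictive_start(BELL_TIME, COOKING_LEAD_TIME))   # 2010-04-20 10:00:00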

Knowing that change is a constant, not a variable, and that automation can only be effective when it is combined with intelligence, UC4’s Application Assurance solution incorporates real-time data, insight into the complete end-to-end business or IT processes, and intelligent decision making.

Have a look. It’s worth it!

Read Full Post »

Have you ever heard of the Global Information Industry Center (GIIC)? It is part of the University of California, San Diego – situated close to where UC4 customers gathered for the annual user conference a few weeks ago. It has just published a 2009 report on American consumers (entitled “How Much Information?”) that tries to take a census of all the forms of information an average American consumes in a single day.

Want to guess how much??? It’s 34 gigabytes of content and 100,000 words of information in a single day.

The New York Times twists the knife, pointing out that this “doesn’t mean we read 100,000 words a day — it means that 100,000 words cross our eyes and ears in a single 24-hour period. That information comes through various channels, including the television, radio, the Web, text messages and video games.”

But why do we have this voracious appetite for information? The answer may be a whole lot simpler than you would think: because what we mainly eat is instant data, not nutritious information! It seems to be time for a diet – even on the business side. Business processes nowadays are accompanied by a flood of event-driven data, while at the same time we have to govern them almost in real time. In a situation like this, data alone is not enough. What we need are digestible pieces of information combined with pattern recognition capabilities.

Our diet plan is simple: less junk data and more information bites. If you want to know what we use in the kitchen, get some UC4 Insight on our website. You will like the taste.

Read Full Post »

“Over the next five years, IT automation will overtake offshoring as the next major efficiency trend in IT.” This is how Ken Jackson, President of the Americas at UC4, starts his article about The Dawning of the IT Automation Era. It is surprising only to those who either consider offshoring the universal answer to all cost reduction challenges or think that cost reduction is the only goal of IT automation.

But in a world where IT environments are becoming more and more complex, “squeezing every bit of extra cost out of your IT budget” and thereby leaving IT professionals with a “bare bones operating plan” is not a sustainable tactic at all. It’s like engaging in a rulebook slowdown while ignoring the fact that IT can really boost your business and ensure accurate service delivery.

The truth behind this is simple: you need money to invest in cost-saving technologies, because keeping IT systems merely up and running is not enough. If you don’t want to throw the baby out with the bathwater, you have to develop business innovation capabilities and cost avoidance strategies hand in hand.

The answer to complexity is process visibility combined with real-time intelligence and just-in-time execution. This will help you squeeze every bit of value – rather than cost – out of your IT budget.

And this is what it’s all about.

For more information on the “several factors contributing to the coming age of IT automation”, read Ken Jackson’s inspiring article.

Read Full Post »

The Gartner Symposium/ITxpo 2009 we attended in Orlando not only endorsed the big hype around virtualization and cloud computing, but also validated our ongoing investments in service-aware process automation – offering real-time intelligence for just-in-time execution. It fit perfectly that Gartner analyst Roy Schulte and K. Mani Chandy, professor at the California Institute of Technology in Pasadena, used the event to introduce their brand-new book, “Event Processing: Designing IT Systems for Agile Companies”, about the business drivers, costs and benefits of event-processing applications.

According to Mr. Schulte and Mr. Chandy, the new aspirations in situation awareness and reaction accuracy can’t be achieved by simply speeding up traditional business processes or exhorting people to work harder and smarter with conventional applications. Instead, they urge companies to make fundamental changes in the architecture of business processes and the application systems that support them by making more use of the event-processing discipline. “While a typical business process has time-driven, request-driven and event-driven aspects, event-driven architecture (EDA) is underutilized in system design resulting in slow and inflexible systems,” said Mr. Chandy. “Event-driven systems are intrinsically smart because they are context-aware and run when they detect changes in the business world rather than occurring on a simple schedule or requiring someone to tell them when to run.”

“Event-driven CEP is a kind of near-real-time business intelligence (BI), a way of ‘connecting the dots’ to detect threats and opportunities,” explained Mr. Schulte. “By contrast, conventional BI is time-driven or request-driven. Complex events may be reactive, summarizing past events, or predictive, identifying things that are likely to happen based on what has happened recently compared with historical patterns.”
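As a rough illustration of the contrast the authors describe, the sketch below puts a time-driven loop next to an event-driven dispatcher. The event names and the dispatcher itself are hypothetical examples, not taken from the book or from any product.

import time
from collections import defaultdict

# --- time-driven: run on a fixed schedule, whether or not anything changed ---
def time_driven_report(interval_seconds: int, build_report):
    while True:
        build_report()                 # runs even when nothing has happened
        time.sleep(interval_seconds)


# --- event-driven: run only when a relevant change of state is detected ---
class EventDispatcher:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type: str, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type: str, payload):
        # Handlers fire the moment the change occurs -- no polling, no waiting.
        for handler in self._handlers[event_type]:
            handler(payload)


dispatcher = EventDispatcher()
dispatcher.subscribe("order_delayed", lambda e: print("re-plan shipment for order", e["order_id"]))
dispatcher.publish("order_delayed", {"order_id": 4711})   # reacts as soon as the event arrives

The event-driven path does no work until a change of state actually arrives – which is exactly what makes it context-aware rather than schedule-bound.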

Nothing to add. UC4 can deliver!

Read Full Post »

“We recently had to replace the server in my office. It was seven years old and one of the hard drives failed. It was not an expenditure I expected to have this year. My IT guy said that my desktop is seven years old. He also informed me that half the machines in the office are between five and eight years old and that I should budget to replace all of them next year. If we did not have a weak economy, I would normally have replaced these machines after five years of service. I think my business is typical of many businesses around the world.” This is how Ronald Roge, chairman of R. W. Roge & Company, a highly regarded wealth management firm, describes the situation.

The related article on forbes.com is about the IT market and the pent-up demand in many firms resulting from the crisis of the last year or two. The good news: in their 2010 forecasts, the industry’s experts expect the cork to pop this year.

What strikes me is that the article talks mainly about IT lifecycles and replacement procedures, not about how technology itself has changed in the last two years. Take virtualization technologies and the role they can play – even in a tense economic situation – as the key to doing more with less: to reducing hardware, space and energy demands and to making your business more available, more agile and more productive.

Of course, not before you have done your management homework on integrating physical and virtual environments in a consistent way. Using VMware, it quickly becomes clear that dynamically provisioning new systems is not enough. Unless those systems become part of your automation strategy, they remain outside your business processes, waiting for costly manual integration.

Talking about the cork that is supposed to pop, we should also talk about the deadlocks threatening virtualization efforts. We should talk about virtual machine sprawl and about costly process interruptions at the intersections between virtual instances and physically deployed systems.

And we should underline that it is not enough to check the status of your server hardware at the moment you want to dynamically provision workloads; you have to go deeper – into the application layer – and correlate and act on events there to bring real-time intelligence and real-time dynamics to virtual and cloud computing environments. That is obviously much more than managing IT lifecycles. It is about treating IT as a strategic asset, not as a cost center.
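A hedged sketch of what “going deeper” might mean in practice: correlating application-level events with infrastructure metrics before deciding to provision. The event fields, the thresholds and the provision_vm() placeholder are assumptions for illustration only, not a real VMware or UC4 API.

def should_provision(app_events, infra_metrics):
    """Correlate application-layer events with infrastructure status.

    Hardware headroom alone says nothing about the business process;
    only the combination of both justifies spinning up a new instance.
    """
    backlog = sum(e["queued_jobs"] for e in app_events if e["type"] == "queue_growing")
    cpu_free = 1.0 - infra_metrics["cpu_usage"]

    # Provision only if the application really needs capacity AND the host cannot absorb it.
    return backlog > 100 and cpu_free < 0.2


def provision_vm(template: str) -> None:
    # Placeholder for the actual provisioning call of your virtualization platform.
    print(f"provisioning new instance from template {template!r}")


app_events = [{"type": "queue_growing", "queued_jobs": 140}]
infra_metrics = {"cpu_usage": 0.9}

if should_provision(app_events, infra_metrics):
    provision_vm("batch-worker")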

Read Full Post »

Older Posts »