Forecast & Trends in Business Intelligence for 2013


A BIG (Business Intelligence Growth) Year :)



Defining Business Intelligence in 2013 as A Collaborative Experience and a Shared Exercise of Asking and Answering Insightful Questions About a Business
 
 

What a year 2012 was for business intelligence! The staid old world of databases is developing faster and faster, with startups addressing new data problems and established companies innovating on their platforms. Web-based analytics tools are connecting to web-based data. And everything’s mobile.
 


With all the attention organizations are placing on innovating around data, the rate of change will only increase. So what should you expect to see?

 


Proliferation of Data Stores.

Once upon a time, an organization had different types of data: CRM, point of sale, email, and more. The rulers of that organization worked very hard and eventually got all their data into one fast data warehouse… 2013 is the year we will recognize this story as a fairy tale. The organization that has all its data in one place does not exist. Moreover, why would you want to do it? Big data could be in places like Teradata and Hadoop. Transactional data might be in Oracle or SQL Server. The right data stores for the right data and workload will be seen as one of the hallmarks of a great IT organization, not a problem to be fixed.





Hadoop is Real.

Back in 2008 and 2009, Hadoop was a science project. By 2010 and 2011, some forward-thinking organizations started doing proof-of-concepts with Hadoop. In 2012, we saw the emergence of many production-scale Hadoop implementations, as well as a crop of companies trying to address pain points in working with Hadoop. In 2013, Hadoop will finally break into the mainstream for working with large or unstructured data. It is also becoming more “right-time” for a faster analytics experience.
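For readers new to it, the canonical Hadoop example is a word count split into a map step and a reduce step. The sketch below simulates that flow in plain Python, with no cluster required; the function names and sample log lines are illustrative only, not part of any real deployment.

```python
from collections import defaultdict

def mapper(lines):
    """Emit (word, 1) pairs, as a Hadoop Streaming mapper would write to stdout."""
    for line in lines:
        for word in line.strip().lower().split():
            yield word, 1

def reducer(pairs):
    """Sum counts per word; Hadoop delivers the pairs grouped and sorted by key."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

if __name__ == "__main__":
    logs = ["error timeout", "error disk full", "timeout retry"]
    counts = reducer(mapper(logs))
    print(counts["error"])  # 2
```

In a real Hadoop Streaming job, the mapper and reducer would be separate scripts reading stdin and writing stdout, with the framework handling the shuffle and sort between them.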



Self-reliance Is the New Self-Service.


Self-service BI is the idea that any business user can analyze the data they need to make a better decision. Self-reliance is the coming of age of that concept: it means business users have access to the right data, that the data is in a place and format that they can use, and that they have the solutions that enable self-service analytics. When all this happens, people become self-reliant with their business questions, and IT can focus on providing the secure data and solutions to get them there.




The Value of Text and Other Unstructured Data is (Finally!) Recognized.

One of the subplots of the rise of Hadoop has been the rise of unstructured data. Emails, documents, web analytics and customer feedback have existed for years, but most organizations struggled enough with their structured data that unstructured data was left alone. In 2011 and 2012 we saw more techniques emerge to help people deal with unstructured data, not least of which is a place to put it (Hadoop). With the explosion of social data like Twitter and Facebook posts, text analysis becomes even more important. Expect to see a lot of it in 2013.
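As a toy illustration of what text analysis over unstructured feedback can look like, the sketch below surfaces the most frequent terms in a handful of customer comments. The stopword list and the sample posts are invented for the example; real text analytics goes much further (sentiment, entities, topics).

```python
import re
from collections import Counter

# Minimal stopword list for the example; real systems use much larger ones.
STOPWORDS = {"the", "a", "is", "was", "and", "to", "it", "of"}

def top_terms(feedback, n=3):
    """Tokenize free-text feedback and return the n most frequent terms."""
    words = re.findall(r"[a-z']+", " ".join(feedback).lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

posts = [
    "The checkout was slow and the checkout page crashed",
    "Slow shipping, but support was helpful",
]
print(top_terms(posts, 2))  # [('checkout', 2), ('slow', 2)]
```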





Cloud BI Grows up.


Cloud business intelligence as your primary BI? No way! Not in 2012, at least. There are cloud BI services, but with important limitations that have made it difficult to use the cloud as your primary analytics solution. In 2013 we expect to see the maturation of cloud BI, so that people can collaborate with data in the cloud, just as they collaborate in Salesforce.




Visual Analytics wins Best Picture.


For years visual analytics has been the Best Documentary of business intelligence: impressive, but for the intellectuals and not the mass audience. But people are finally beginning to realize that visual analytics helps anyone explore, understand and communicate with data. It’s the star of business analytics, not a handy tool for scientists.




Forecasting and Predictive Analytics become common.


Much like visual analytics, forecasting used to be seen as the domain of the scientist. But everyone wants to know the future. Forecasting tools are maturing to help businesses identify emerging trends and make better plans. We expect forecasting and predictive analyses to become much more common as people use them to get more value from their data.
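To make the idea concrete, here is a deliberately naive forecasting sketch: a moving average projected forward. Real predictive tools use far richer models (seasonality, regression, exponential smoothing); the sales figures below are made up.

```python
def moving_average_forecast(series, window=3, steps=2):
    """Naively forecast future points as the mean of the last `window` values."""
    history = list(series)
    forecasts = []
    for _ in range(steps):
        avg = sum(history[-window:]) / window
        forecasts.append(round(avg, 2))
        history.append(avg)  # feed the forecast back in for the next step
    return forecasts

monthly_sales = [100, 110, 120, 130, 140, 150]
print(moving_average_forecast(monthly_sales))  # [140.0, 143.33]
```

Even this crude approach shows the shape of the exercise: summarize the recent past, extrapolate, and refine as new data arrives.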




Mobile BI Moves up a Weight Class.


Last year we predicted that Mobile BI would go mainstream, and it did. Now everyone from salespeople to insurance adjusters to shop floor managers use tablets to get data about their work right in the moment. To date, mobile BI has been lightweight, involving the consumption of reports with a bit of interactivity. But the tremendous value that people have seen in mobile BI is driving a trend for more ability to ask and answer questions.





Collaboration is not a Feature, it’s a Reality!

Business intelligence solutions have often talked up their collaboration features. In 2013, that’ll no longer be good enough. Collaboration must be at the root of any business intelligence implementation, because what is business intelligence but a shared experience of asking and answering questions about a business? In 2013, businesses will look for ways to involve people all around their organization in working together to understand and solve problems.



Pervasive Analytics: Finally…Pervasive.

As an industry, we’ve talked for years about terms like “pervasive BI” or “BI for the masses”. There’s a whole market for data that is outside of the market for “business intelligence.” When we talk more about data, and less about software categories like BI, we get to the crux of maximizing business value—and fast, easy-to-use visual analytics is the key that opens the door to organization-wide analytics adoption and collaboration. 


These are the trends we see in talking with customers about what they’re doing today and where we are investing for the future. The good news is that investment is most often being driven by a desire to take good initiatives farther, not a sense of frustration with failed initiatives. Perhaps the new technology and investment of the last few years is finally starting to pay off. No matter what, you can expect lots of change in business intelligence in 2013.





Internet Governance In a Changing World | Ensuring an Open, Fair and Neutral Architecture

This is a comment in response to a post I read on TAP: Technology | Academics | Policy.

The original post read:


"The Internet's unique characteristics have made it remarkably resistant to traditional tools of state governance. This is both good and bad. Phil Weiser of Silicon Flatirons explains in the paper below. What do you think? "



Here's a link to the paper, from the Silicon Flatirons Roundtable Series on Entrepreneurship, Innovation, and Public Policy: "Internet Governance: The Role of Multistakeholder Organizations" by Joe Waz and Phil Weiser.

Ref. http://siliconflatirons.com/documents/publications/report/InternetGovernanceRoleofMSHOrgs.pdf




My response:


"As an observation among individuals and groups within the broader global community, this is not merely a case of inherent resistance. It is critically essential that organizations which are presently integral to the net, and which actively participate and engage in diplomacy, society, commerce and the like, have a say in building consensus, scoping out the cultural aspects of the Internet and its evolving architecture, and keeping pace with its exponential development. Such a sentiment among the broader community is what has, despite all opponents and obstacles, helped keep the net open, neutral and truly global to date.
Skepticism is expected and natural among those in governments across the world, some individuals in civil society, and elements of industry, because there are no clear and obvious means by which they can 1) avoid isolation, 2) ensure inclusion/participation and 3) contribute to making such a flexible governing body truly representative, accountable, and responsible.

P.S. ~ Thank you so much for posting this. I am definitely going to dig deeper, research, and keep track of their findings.

Best Regards, Jai Krishna Ponnappan :)
"


Suggested Reading (2012 Publication):



"Internet Architecture And Innovation" 
by Barbara van Schewick




Overview

Today, following housing bubbles, bank collapses, and high unemployment, the Internet remains the most reliable mechanism for fostering innovation and creating new wealth. The Internet’s remarkable growth has been fueled by innovation. In this pathbreaking book, Barbara van Schewick argues that this explosion of innovation is not an accident, but a consequence of the Internet’s architecture, a consequence of technical choices regarding the Internet’s inner structure that were made early in its history. The Internet’s original architecture was based on four design principles: modularity, layering, and two versions of the celebrated but often misunderstood end-to-end arguments. But today, the Internet’s architecture is changing in ways that deviate from the Internet’s original design principles, removing the features that have fostered innovation and threatening the Internet’s ability to spur economic growth, to improve democratic discourse, and to provide a decentralized environment for social and cultural interaction in which anyone can participate. If no one intervenes, network providers’ interests will drive networks further away from the original design principles. If the Internet’s value for society is to be preserved, van Schewick argues, policymakers will have to intervene and protect the features that were at the core of the Internet’s success.

About the Author

      Barbara van Schewick is Associate Professor of Law and Helen L. Crocker Faculty Scholar at Stanford Law School, Director of Stanford Law School’s Center for Internet and Society, and Associate Professor (by courtesy) of Electrical Engineering in Stanford University's Department of Electrical Engineering.


Reviews

       “...Internet Architecture and Innovation is an important work: it supplies a key piece of the broadband puzzle in its consideration of broadband transport as a necessary input for other businesses…van Schewick’s fundamental premise rings true: only neutral networks promote competition and innovation.” ars technica


Endorsements

"This is a tour de force on the topic of the end-to-end principle in the design of the Internet." Daniel E. Atkins, W.K. Kellogg Professor of Community Information, Professor of Information and EECS, and Associate Vice-President for Research Cyberinfrastructure, University of Michigan


"This is an important book, one which for the first time ties together the many emerging threads that link the economic, technical, architectural, legal, and social frameworks of the birth and evolution of the Internet." David P. Reed, MIT Media Laboratory


"This isn't a flash-in-the-pan piece. This book will be an evergreen in a wide range of academic and policy contexts. More than an introduction to how technology and policy should be analyzed, it is, in my view, the very best example of that analysis." Lawrence Lessig, author of Code and Other Laws of Cyberspace






Catering to Your Niche Crowd ~ Boutique Publishing

           
This is in response to an interesting article on Forbes:
https://www.facebook.com/forbes



"..Some of the most interesting experiments in publishing are the ones that use an entirely different math. NSFW Corp.(http://www.nsfwcorp.com/) is one of them. Launched by former TechCrunch writer Paul Carr over the summer, it’s a general-interest news and humor site that doesn’t need, doesn’t expect and doesn’t particularly want a ton of readers.

Since its launch, it has amassed — or a-niched — more than 3,000 subscribers who pay $3 a month for access. If it can attract 30,000 subscribers, it will be at or near the breakeven point. “If we can get 50,000, none of us will ever have to do anything else again,” says Carr.

NSFW’s mix of gonzo politics, guerilla journalism and open-mic comedy isn’t for everyone. But that’s the point. It’s because it doesn’t try to be for everyone that it can be what it is. After all, Carr notes, if there’s one thing the Web has proven, it’s that there’s money to be made in catering to niche tastes. “It’s nearer to paying for porn than it is to paying for news,” he says of his site’s appeal.


Smallness also makes it easier to innovate, which NSFW does on a variety of levels."


My response,


            
There are quite a few select groups, underground networks and circles that offer membership by invite only. Of course, once you identify and demarcate your niche crowd/select audience, things such as scalability and monetization by numbers go out the window, because the primary emphasis is often purely on delivering quality and value that is relevant, unique and hard to find. P.S. ~ Not an easy model to follow, but worthy of consideration once you have matured in the business and tested the waters over the years.

~ Jai Krishna Ponnappan

Ref. https://www.facebook.com/photo.php?fbid=10151170951552509&set=a.10151047526737509.456571.30911162508&type=1&ref=nf

Content is King and So is Leveraging Big Data




" We Offer Content Management "




This is a statement being offered by many, but is it really content management, or merely storing documents that are linked to a database record?

Questions that immediately arise in my mind when I hear this statement are:

Do you provide audit logs reflecting what happens to the content?
What levels of security do you support? (Repository, folder, document, element)
How do you capture information? (Imaging, Office applications, Outlook)
What capture devices are supported? (Scanners, multi-function peripherals, cameras)
Is there an ability to use workflow to automate some processes?
What is the approach to search? (Keyword, Boolean, full-text, parametric)
Does it support versioning?
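To make questions like audit logging and versioning concrete, here is a toy sketch of a repository that keeps every version of a document and logs who did what and when. The class and method names are invented for illustration; this is a sketch of the concepts, not a real content management system.

```python
import datetime

class DocumentStore:
    """Toy content repository illustrating audit logging and versioning."""

    def __init__(self):
        self.versions = {}   # doc_id -> list of content versions, oldest first
        self.audit_log = []  # (timestamp, user, action, doc_id) tuples

    def _log(self, user, action, doc_id):
        stamp = datetime.datetime.utcnow().isoformat()
        self.audit_log.append((stamp, user, action, doc_id))

    def save(self, user, doc_id, content):
        """Store a new version of the document and record the action."""
        self.versions.setdefault(doc_id, []).append(content)
        self._log(user, "save", doc_id)

    def read(self, user, doc_id, version=-1):
        """Fetch a version (latest by default) and record who read it."""
        self._log(user, "read", doc_id)
        return self.versions[doc_id][version]

store = DocumentStore()
store.save("alice", "contract-42", "draft 1")
store.save("alice", "contract-42", "draft 2")
print(store.read("bob", "contract-42"))  # draft 2
print(len(store.audit_log))              # 3 (two saves, one read)
```

A production system layers repository-, folder- and document-level security, capture, workflow and search on top of exactly these primitives, which is why the questions above matter.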

The term content management is being used more and more by suppliers of all types and sizes, and while there is some truth in their claims, the question is: what does content management mean to you in relation to addressing your business requirements? Peel this back again and the real question becomes: what are your business requirements in relation to content and records management, and does the application you are considering deliver what you need now and provide a way to grow in the future?

In my view, there are many good applications on the market today. Some are focused on content management, some on a business application with bolt-on content management capabilities, and some combine a good degree of both. This does not mean they are bad or will not work for you; it means that you have to become more knowledgeable in order to make the right decisions and choices. The last thing you want to do is make an investment only to find out it does not provide what you thought it would.

Imagine that you make an investment to manage risk and, when an audit comes, you cannot defend your information due to a lack of audit logs that would have presented a historical view of the information in question. When you are asked to show who accessed it, when they accessed it and what they did with it, could you answer these questions confidently? If you are brought to court and asked to provide all of the information related to the case, including electronic files, emails and even the metadata associated with the case, could you do so with confidence that you have found everything, or would you have a high level of doubt and uncertainty? In order to make the right choices, you need to identify and define your requirements, understand the technologies and assess what is available to you. It could be that the functionality being offered is suitable to meet your needs, but wouldn’t you feel better knowing it for sure?
If you are ready to move forward and are finding yourself stuck or unfocused and are not sure where to begin or what to do next, seek professional assistance and/or training to get you started. Be sure to investigate AIIM's Enterprise Content Management training program.


And be sure to read the AIIM Training Briefing on ECM (authored by yours truly).
What say you? Do you have a story to tell? What are your thoughts on this topic? Do you have a topic of interest you would like discussed in this forum? Let me know.

Bob Larrivee, Director and Industry Advisor – AIIM
Ref. Posted originally on AIIM at http://www.aiim.org/community/blogs/expert/We-Offer-Content-Management

An Afterthought and Response:

"Content is King 
and So is Leveraging Big Data"
       
Going a step further from this: "The question is what does content management mean to you in relation to addressing your business requirements? Peel this back again and the real question becomes, what are your business requirements in relation to content and records management and does the application you are considering deliver what you need now and provide a way to grow in the future?"

...I believe it is important to ask:

"Does your organization recognize and value the power of data/content and the systems/solutions that can leverage it to your advantage?"

Many organizations across several industries know that "content can be king" within their organizational setting, as well as potentially serve as a key facilitator of advantage, efficiency and growth in their wider industry and marketplace. As a result, the understandable trend is a shift towards making this a vital, high-level strategic decision that can clearly separate them from organizations with less adaptable, less compatible, and poorly tailored and managed data/content management systems. Many of the latter are experiencing measurable and significant setbacks as a result.

Thank you for posting on this topic. I am quite sure many of us have come across critical real-life business scenarios and systems that serve as great case studies and examples that teach us valuable lessons worth sharing. Have a great day. Best Regards, Jai Krishna Ponnappan :)



Disaster Recovery Virtualization Using the Cloud


Disaster recovery is a necessary component in any organization’s plans. Business data must be backed up, and key processes like billing, payroll and procurement need to continue even if an organization’s data center is disabled by a disaster. Over time, two distinct approaches to disaster recovery have emerged: dedicated and shared models. While effective, these approaches often force organizations to choose between cost and speed.

We live in a global economy that is balanced around and driven by a 24x7 culture. Nobody likes to think about it, but in order to thrive and survive, even with a flat IT budget, you need seamless failover and failback of critical business applications. The flow of information never stops, and commerce in our global business environment never sleeps. With the demands of an around-the-clock world, organizations need to start thinking in terms of application continuity rather than infrequent disasters, and disaster recovery service providers need to enable more seamless, nearly instantaneous failover and failback of critical business applications. Yet given the reality that most IT budgets are flat or even reduced, these services must be provided without incurring significant upfront or ongoing expenditures.

Cloud-based business resilience can provide an attractive alternative to traditional disaster recovery, offering both the more-rapid recovery time associated with a dedicated infrastructure and the reduced costs that are consistent with a shared recovery model. With pay-as-you-go pricing and the ability to scale up as conditions change, cloud computing can help organizations meet the expectations of today’s frenetic, fast-paced environment, where IT demands continue to increase but budgets do not.

This white paper discusses traditional approaches to disaster recovery and describes how organizations can use cloud computing to help plan for both the mundane interruptions to service (cut power lines, server hardware failures and security breaches) as well as more-infrequent disasters. The paper provides key considerations when planning for the transition to cloud-based business resilience and in selecting your cloud partner.



A Qualitative Trade-off Between Cost & Speed


When choosing a disaster recovery approach, organizations have traditionally relied on the level of service required, as measured by two recovery objectives:

●● Recovery time objective (RTO): the amount of time between an outage and the restoration of operations.
●● Recovery point objective (RPO): the point in time to which data is restored, reflecting the amount of data that will ultimately be lost during the recovery process.
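These two objectives can be illustrated with a small calculation. Assuming a hypothetical nightly tape backup and a shared-model rebuild of roughly 48 hours, the sketch below derives the RPO exposure and the RTO from three timestamps; the dates and durations are invented for the example.

```python
from datetime import datetime

def recovery_metrics(last_backup, outage_start, service_restored):
    """RPO exposure = data written since the last backup; RTO = outage duration."""
    rpo_hours = (outage_start - last_backup).total_seconds() / 3600
    rto_hours = (service_restored - outage_start).total_seconds() / 3600
    return rpo_hours, rto_hours

rpo, rto = recovery_metrics(
    last_backup=datetime(2013, 1, 7, 0, 0),        # nightly tape backup
    outage_start=datetime(2013, 1, 7, 9, 30),      # data center goes down
    service_restored=datetime(2013, 1, 9, 9, 30),  # shared-model rebuild, ~48 h
)
print(rpo, rto)  # 9.5 48.0
```

In this scenario the organization loses up to nine and a half hours of data and waits two days for service, which is exactly the trade-off the following sections examine.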

In both traditional disaster recovery models, dedicated and shared, organizations are forced to make a tradeoff between cost and speed of recovery.


In a dedicated model, the infrastructure is dedicated to a single organization. This type of disaster recovery can offer a faster time to recovery than other traditional models because the IT infrastructure is mirrored at the disaster recovery site and is ready to be called upon in the event of a disaster. While this model can reduce RTO because the hardware and software are pre-configured, it does not eliminate all delays. The process is still dependent on receiving a current data image, which involves transporting physical tapes and a data restoration process. This approach is also costly, because the hardware sits idle when not being used for disaster recovery. Some organizations use the backup infrastructure for development and test to mitigate the cost, but that introduces additional risk. Finally, the data restoration process adds variability.

In a shared disaster recovery model, the infrastructure is shared among multiple organizations. Shared disaster recovery is designed to be more cost effective, since the off-site backup infrastructure is shared between multiple organizations. After a disaster is declared, the hardware, operating system and application software at the disaster site must be configured from the ground up to match the IT site that has declared a disaster, and this process can take hours or even days.


Figure: Measuring the level of service required by RPO and RTO.

Figure: Traditional disaster recovery approaches include shared and dedicated models.


The pressure for continuous availability 


According to a CIO study, organizations are being challenged to keep up with the growing demands on their IT departments while keeping their operations up and running and making them as efficient as possible. Their users and customers are becoming more sophisticated users of technology. Research shows that usage of Internet-connected devices is growing about 42 percent annually, giving clients and employees the ability to quickly access huge amounts of storage. In spite of the pressure to do more, IT departments are spending a large percentage of their funds to maintain the infrastructure they have today, and they are not getting significant budget increases; budgets are essentially flat.

With dedicated and shared disaster recovery models, organizations have traditionally been forced to make tradeoffs between cost and speed. As the pressure to achieve continuous availability and reduce costs continues to increase, organizations can no longer accept tradeoffs. While disaster recovery was originally intended for critical batch “back-office” processes, many organizations are now dependent on real-time applications and their online presence as the primary interface to their customers. Any downtime reflects directly on their brand image, and interruption of key applications such as e-commerce, online banking and customer self-service is viewed as unacceptable by customers. The cost of a minute of downtime may be thousands of dollars.


Thinking in terms of interruptions and not disasters 


Traditional disaster recovery methods also rely on “declaring a disaster” in order to leverage the backup infrastructure during events such as hurricanes, tsunamis, floods or fires. However, most application availability interruptions are due to more mundane everyday occurrences. While organizations need to plan for the worst, they must also plan for the more likely: cut power lines, server hardware failures and security breaches. While weather is the root cause of just over half of declared disasters, almost 50 percent of declarations are due to other causes. And these statistics come from clients who actually declared a disaster; think about all of the interruptions where a disaster was not declared. In an around-the-clock world, organizations must move beyond disaster recovery and think in terms of application continuity. You must plan for the recovery of critical business applications rather than for infrequent, momentous disasters, and build resiliency plans accordingly.



Figure: Time to recovery using a dedicated infrastructure.

Figure: Time to recovery using a shared infrastructure. The data restoration process must be completed as shown, resulting in an average of 48 to 72 hours to recovery.

Figure: Types of potential business interruptions.



Cloud-based Business Resilience is a Welcome New Approach 


Cloud computing offers an attractive alternative to traditional disaster recovery. “The cloud” is inherently a shared infrastructure: a pooled set of resources, with the infrastructure cost distributed across everyone who contracts for the cloud service. This shared nature makes cloud an ideal model for disaster recovery. Even when we broaden the definition of disaster recovery to include more mundane service interruptions, the need for disaster recovery resources is sporadic. Since all of the organizations relying on the cloud for backup and recovery are very unlikely to need the infrastructure at the same time, costs can be reduced and the cloud can speed recovery time.


Cloud-based business resilience managed services are designed to provide a balance of economical shared physical recovery with the speed of dedicated infrastructure. Because the server images and data are continuously replicated, recovery time can be reduced dramatically to less than an hour, and, in many cases, to minutes—or even seconds. However, the costs are more consistent with shared recovery.




Cloud-based business resilience offers several other benefits over traditional disaster recovery models:

• More predictable monthly operating expenses can help you avoid the unexpected and hidden costs of do-it-yourself approaches.
• Reduced up-front capital expenditure requirements, because the disaster recovery infrastructure exists in the cloud.
• Cloud-based business resilience managed services can more easily scale up based on changing conditions.
• Portal access reduces the need to travel to the recovery site, which can help save time and money.

Figure: Speed to recovery using cloud computing.

Figure: A cloud-based approach to business resilience.

Virtualizing disaster recovery using cloud computing

While the cloud offers multiple benefits as a disaster recovery platform, there are several key considerations when planning for the transition to cloud-based business resilience and in selecting your cloud partner. These include:

●● Portal access with failover and failback capability
●● Support for disaster recovery testing
●● Tiered service levels
●● Support for mixed and virtualized server environments
●● Global reach and local presence
●● Migration from and coexistence with traditional disaster recovery

The next few sections describe these considerations in greater detail.

Facilitating improved control with portal access

Disaster recovery has traditionally been an insurance policy that organizations hope not to use. In contrast, cloud-based business resilience can actually increase IT’s ability to provide service continuity for key business applications. Since the cloud-based business resilience service can be accessed through a web portal, IT management and administrators gain a dashboard view of their organization’s infrastructure.

Without the need for a formal declaration, and with the ability to fail over from the portal, IT can be much more responsive to the more mundane outages and interruptions.

Building confidence and refining disaster recovery plans with more frequent testing

One traditional challenge of disaster recovery is the lack of certainty that the planned solution will work when the time comes. Typically, organizations only test their failover and recovery once or twice per year on average, which is hardly sufficient given the pace of change experienced by most IT departments. This lost sense of control has caused some organizations to bring disaster recovery “in house,” diverting critical IT focus from mainline application development. Cloud-based business resilience provides the opportunity for more control and for more frequent and granular testing of disaster recovery plans, even at the server or application level.

Supporting optimized application recovery times with tiered service levels

Cloud-based business resilience offers the opportunity for tiered service levels that enable you to differentiate applications based on their importance to the organization and the associated tolerance for downtime.

The notion of a “server image” is an important part of traditional disaster recovery. As the complexity of IT departments has increased, including multiple server farms with possibly different operating systems and operating system (OS) levels, the ability to respond to a disaster or outage has become more complex. Organizations are often forced to recover on different hardware, which can take longer and increase the possibility of errors and data loss. Organizations are implementing virtualization technologies in their data centers to help remove some of the underlying complexity and optimize infrastructure utilization. The number of virtual machines installed has been growing exponentially over the past several years.

According to a recent survey of Chief Information Officers, 98 percent of respondents either had already implemented virtualization or had plans to implement it within the next 12 months. Cloud-based business resilience solutions must offer both physical-to-virtual (P2V) and virtual-to-virtual (V2V) recovery in order to support these types of environments. Cloud-based business resilience requires ongoing server replication, making network bandwidth an important consideration when adopting this approach. A global provider should offer the opportunity for a local presence, thereby reducing the distance that data must travel across the network.

While cloud-based business resilience offers many advantages for mission-critical and customer-facing applications, an efficient enterprise-wide disaster recovery plan will likely include a blend of traditional and cloud-based approaches. In a recent study, respondents indicated that minimizing data loss was the most important objective of a successful disaster recovery solution. With coordinated disaster recovery and data backup, data loss can be reduced and the reliability of data integrity improved.


Cloud computing offers a compelling opportunity to realize the recovery time of dedicated disaster recovery with the cost structure of shared disaster recovery. However, disaster recovery planning should not be taken lightly; the security and resiliency of the cloud are critical considerations.



Posted by Jai Krishna Ponnappan

Keys to Business Intelligence by Jai Krishna Ponnappan


                 

               Historically, business intelligence has promised a lot of “Yes,” but the reality has been filled with “Nos.” The promises are enormously compelling. Companies collect vast amounts of information about markets, customers, operations, and financial performance. Harnessing this information to drive better business results can have tremendous impact. Some corporations have achieved impressive gains after investing millions of dollars and multiple years of effort into building traditional analytical systems.



                   However, these success stories are frustratingly few and far between. Traditional BI, long the only option, can be prohibitively costly and complex. For companies without millions of dollars to invest, the options have been few and unattractive. Further, even when these investments of time and resources can be made, they don’t guarantee success. For too many companies, BI doesn’t deliver on its promises -- it is too costly, too complicated, too difficult to scale and extend. The end result is a reality in which only a small minority of employees have access to BI. According to Gartner, only 20% of employees use BI today. This falls far short of the potential transformative capabilities of BI throughout a company.

                   It’s time for BI that says Yes. Yes to the requirements of your budget, business, and business users. Yes to fewer compromises. This whitepaper first looks at the fundamental requirements that a BI solution should deliver to your company. Next, it covers the 11 Key Questions that you should be asking of a future BI technology partner. When the BI provider can answer Yes to all of these questions, you have BI that is capable of fulfilling your analytical and reporting needs both today and over time – it is flexible, powerful, and efficient. It is BI that says Yes.


Business Insight—Four Foundational Requirements

Any organization investing in business intelligence needs to define the capabilities
that will help them to win against the strongest competitors in their market.
Here are the four bedrock requirements that should define the core capabilities
of your solution:



Historical analysis and reporting.

Fundamentally, BI should give you insight into both business performance and
the drivers of that performance. An understanding of business influencers and
results is the foundation for successful, proactive decision making. Technically,
this capability requires the mapping and analysis of data over multiple years.
This can also often mean the modeling and manipulation of hundreds of millions
of database rows.


Forecasting and future projection.

While understanding historical data is a first step, it is also vital to project those
findings into the future. For example, once you know how different types of
sales deals have progressed in the past, you can examine current opportunities
from that perspective and make future forecasts. The ability to forecast and
align your business resources accordingly is key to success.
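As a toy illustration of projecting historical findings forward, the sketch below fits a least-squares trend line to past periods and extrapolates it. Real forecasting models are far richer, and the revenue figures here are invented:

```python
def linear_forecast(history, periods_ahead=1):
    """Fit a least-squares line to past values and project it forward.
    history: list of numbers, one per period, oldest first."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

# Quarterly revenue trending upward; project the next quarter.
print(linear_forecast([100, 110, 120, 130], 1))  # 140.0
```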

Ability to integrate information from multiple business functions.

Strategic insight often requires data from multiple systems. For example, operational
results require a financial perspective to show the full picture. Sales management
benefits from a comprehensive view of the demand funnel. Targeted, customized
marketing efforts require analysis compiled from customer, marketing, and
purchasing data. Your solution needs to be able to easily integrate information
from multiple sources in order to get answers to broad business questions.
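As a minimal illustration of multi-source integration, the sketch below merges records from two hypothetical systems keyed on a shared customer id; a real BI tool performs this join behind the scenes:

```python
# Two hypothetical sources: a sales system and a finance system,
# both keyed by customer id. The data is invented for illustration.
sales = {"C1": {"deals": 3}, "C2": {"deals": 1}}
finance = {"C1": {"revenue": 50000}, "C2": {"revenue": 12000}}

# Merge the two views per customer, covering ids present in either source.
merged = {
    cid: {**sales.get(cid, {}), **finance.get(cid, {})}
    for cid in set(sales) | set(finance)
}
print(merged["C1"])  # {'deals': 3, 'revenue': 50000}
```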

Easily explored reporting and analysis.

Decision makers need to understand overarching business views and trends. 

They also need to examine increasing levels of detail to understand what actions can
be taken to achieve further success. It’s not enough to simply have a report; if
that report is not explorable, it might raise critical issues but not satisfy the need
to know more detail in order to make a decision. A full range of drill-down and
drill-across capabilities makes it possible for decision makers to fully understand
an issue at hand and make critical decisions.
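Conceptually, drilling down is just re-aggregating a filtered subset along a finer dimension. A toy sketch with invented sales rows:

```python
from collections import defaultdict

rows = [
    {"region": "East", "product": "A", "sales": 100},
    {"region": "East", "product": "B", "sales": 60},
    {"region": "West", "product": "A", "sales": 40},
]

def rollup(rows, dimension):
    """Aggregate sales along one dimension. Drilling down means
    re-running the rollup on a filtered subset at finer grain."""
    totals = defaultdict(int)
    for r in rows:
        totals[r[dimension]] += r["sales"]
    return dict(totals)

print(rollup(rows, "region"))           # top-level view by region
east = [r for r in rows if r["region"] == "East"]
print(rollup(east, "product"))          # drill into East, by product
```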

These four capabilities form the foundation of a powerful business intelligence
solution that can answer the critical questions facing your business. If a solution
cannot meet one of these requirements, your solution will not have the full
range of analytical capability that you will need to be competitive.



If the solution that you are considering meets the Four Foundational Requirements,
it is time to delve more deeply. The following eleven questions will help you to
assess your options and ensure that you are getting a robust, powerful solution
that meets your business requirements.

Can I get a comprehensive view of my business?


Even seemingly basic questions can require data from a variety
of operational systems, 3rd party data sources, and spreadsheets.



Even basic business questions such as “Which marketing campaigns generated
the most revenue this year?” or “Did the product redesign have the desired
effect on part inventory levels?” could require data from different operational
systems, 3rd party or partner sources, databases, and individual spreadsheets. As
a result, a core BI requirement is the ability to access, acquire, and integrate data
from multiple sources.
Traditional BI solutions provide this capability, but it can be arduous to implement
and maintain. Traditional BI accesses multiple data sources with complex and
expensive ETL systems that bring data together into one physical database.
Unfortunately, this database is totally disconnected from the world of the
business user. This requires another round of programming to connect the
physical data with the business user model.
A more modern solution enables you to:
• Experience a powerful, usable solution. Traditional solutions build from
the bottom up. A more modern approach starts instead from the top - the
logical business model. It then works downwards to manage the physical
data that is required to deliver these business views. This “top down”
approach manages the complexity that results from integrating multiple
data sources – so that the solution is both powerful and easy to use.
• Analyze information from all types of data assets. Data are provided to the
business in a variety of ways. Your BI solution needs to extract information
from corporate systems, stand-alone databases, flat files, XML files, and
even spreadsheets.
• Access remote or secured databases. Traditional BI uses an ETL process to
extract data out of a source database and place it into a data warehouse,
while some SaaS providers can only access data that is uploaded to their
servers. The more sophisticated SaaS BI providers can both upload data
and access local databases in place; that is, they allow you to access and
analyze data without actually uploading it, via real-time queries against
the source database.
• Manage the required metadata. In addition to the management of data
sources, multi-source BI requires the management of all accompanying
metadata, the information about the data itself.
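The “access without uploading” approach amounts to evaluating reports as live queries against the source. The sketch below illustrates the idea using an in-memory SQLite database as a stand-in for a local source; it is not any vendor’s actual connector:

```python
import sqlite3

# Stand-in for a live query against a local database: the BI layer sends
# SQL to the source at request time instead of uploading a copy of the data.
# (SQLite is used here purely as an illustrative local source.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("East", 100.0), ("East", 60.0), ("West", 40.0)])

# The "report" is just a query evaluated against the live source.
result = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(result)  # [('East', 160.0), ('West', 40.0)]
conn.close()
```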



Does it provide full features at an affordable price?


Modern BI solutions make it possible even for smaller
organizations to afford a comprehensive BI solution.



Traditional BI solutions were often affordable only to the largest companies,
which had the large budget, IT staff, and resources required for initial deployment
and ongoing maintenance. Departments of enterprises and SMBs were
effectively priced out of the market.
Recently, the attractiveness of the midmarket has resulted in new “midsize”
solutions from traditional players and from new vendors. The catch, however, is
that the lower price often only purchases a “crippled” or partial solution.

So how can a smaller organization get a true BI solution? A fully deployed BI
solution must include the following: ETL and scheduling, database, metadata
management, banded/pixel-perfect reporting, dashboards, e-mail alerts, and
OLAP slice-and-dice functionality.
Look for a solution that:

• Delivers a full BI solution, not parts of one. The license should include
everything that you need for a true solution: ETL, database management,
metadata management, OLAP slice-and-dice query generation, banded
reporting, ad hoc reporting, and visual dashboards. A solution that has all
of the necessary components, already integrated for you, will deliver the
fastest, greatest value to your organization.
• Has easy-to-understand, affordable pricing. Traditional solutions have
many cost components – hardware, software, consultants, in-house IT,
support, and ongoing maintenance. As a result, the pricing is both high
and difficult to track fully. Modern solutions, such as ones delivered
software-as-a-service (SaaS), have more transparent and affordable pricing.
SaaS pricing is more comprehensive – the cost of hardware, software, and
support is in one monthly number. SaaS pricing is also more affordable,
since it leverages a shared cost structure, and these lower costs are spread
over time. This makes it easier to deploy and maintain a BI solution.


Can I start seeing value within 90 days?


In order for BI to be effective, it has to be deployed quickly
enough to address the critical issues that you are currently
facing.


Time to value is a prime determinant of the ROI of a business intelligence
deployment. Traditional BI solutions have struggled to deliver value to
stakeholders within a desirable timeframe. Due to challenges such as high
upfront capital expenditures, extensive IT resource requirements, and lengthy
development schedules, many traditional BI projects have taken 12 to 18
months or more to complete.

Modern BI solutions can dramatically reduce the time to value by making use of
the following:

• Fully integrated solutions, from ETL to analytical engine to reporting engine
• Automation of standard processes
• Use of templates for typical reporting requirements, such as sales reporting,
financial reporting, etc.
• Software-as-a-service (SaaS) or on-demand delivery models
• Leveraging of existing data warehousing investments
Modern solutions can also enable processes and approaches for BI deployment
that increase the likelihood of success. These include:
• Proving success incrementally and iteratively – avoiding the “Big
Bang.” In the earlier days of BI, customers were tempted to create a “big
bang” solution, since the cost and effort of creating the initial solution
and updating it over time were so high. Today, a BI solution offering a
fully integrated architecture – one with all of the components already
provided, working together -- allows companies to focus on initial high-need
projects, prove success, and expand or adapt over time. This ability
to iterate over time provides value more quickly, lowers ongoing cost, and
increases the likelihood of success.
• Deploying to the existing infrastructure; avoiding major infrastructure
upgrades. The second major reason that traditional solutions are slow to
deploy is that they often require an additional investment in new hardware or
software. This lengthens timeframes, since a major capital purchase requires
a financial approval process that can take up to a full year of review and
approval. If a solution can leverage the existing infrastructure, this process
step is avoided. Also, if the solution itself is more affordable, or, like SaaS
solutions, offered as a subscription (which can be charged to operating
expenses, not capital budgets), this budgeting process step can be bypassed
or shortened.
• Deploying with the IT team you have. The construction of a traditional
BI solution requires many specialized resources like data modelers and
ETL specialists. Any plan that requires these professionals will confront
resource bottlenecks.


Can I be assured that my data is secure
and available?


Data security and availability are key requirements of a BI
project.


Data security and availability are key requirements for any IT system. You need
a BI solution that matches the same high levels of performance, reliability, and
security that you expect of the other systems in your portfolio.
Security is fundamental, since the data your business uses is critical to competitive
advantage, effective operations, and consumer or patient privacy. Availability
is also critical, since you need to be able to make decisions in a timely manner,
addressing issues as they emerge. Your system needs to be ready to respond
when you need it.
Your BI system should:
• Provide high availability. If the solution that you are considering is a
traditional, on-premise one, how often is it down for maintenance or
updates? How reliably is it available, given your configuration? If the
solution that you are considering is a SaaS solution, what is the uptime
guaranteed in subscription contracts? You will want to be sure that your
solution will be available at least 99% of the time, whether it is on-premise or SaaS.
• Be built on high performance hardware. If you are selecting a SaaS
vendor, make sure that their solution is operating on high performance
hardware that will provide the necessary reliability and availability that you
seek. If you are selecting an on-premise vendor, make sure that you are
making the appropriate investments into the type and quantity of hardware
that will provide high reliability and will also scale over time.
• Provide flexible security models. Most deployments have varying levels
of feature access, depending on the user’s role. Some users may only be
able to view a subset of reports, such as sales reports, while others will have
full access to all data, reports, and administration features. The solution
needs to ensure that users have access appropriate to their role. This will
require features such as defining row and column filters to limit data to
those individuals and groups who require it.
• Have SAS 70 data center certification (SaaS providers only). If you
are reviewing SaaS vendors, be sure that the data center where the
information will be stored has SAS 70 Certification. This certification indicates
that the service organization has undergone an in-depth audit of its control
objectives and control activities, including controls over information
technology and related processes.
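Row and column filtering of the kind described above can be sketched as follows; the roles, row filter, and column lists are hypothetical examples, not a real product’s security model:

```python
# Hypothetical role definitions: each role carries a row filter
# (which records a user may see) and a column allow-list.
ROLES = {
    "sales_rep": {"row_filter": lambda r: r["region"] == "East",
                  "columns": {"region", "revenue"}},
    "executive": {"row_filter": lambda r: True,
                  "columns": {"region", "revenue", "margin"}},
}

def apply_security(rows, role):
    """Return only the rows and columns the role is allowed to see."""
    policy = ROLES[role]
    return [{k: v for k, v in r.items() if k in policy["columns"]}
            for r in rows if policy["row_filter"](r)]

data = [
    {"region": "East", "revenue": 100, "margin": 0.3},
    {"region": "West", "revenue": 80, "margin": 0.2},
]
print(apply_security(data, "sales_rep"))
# [{'region': 'East', 'revenue': 100}]
```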


Can I proceed with limited IT resources?


Traditional IT solutions can monopolize IT resources. Modern solutions have a
lighter IT footprint, so that IT can focus on higher priorities.



Traditional BI solutions require significant IT resources up front for deployment,
as well as a high level of ongoing resources for maintenance, support, and
report creation and updating.
These intense IT requirements often limited the use of BI by smaller and midsize
organizations, which didn’t have a deep IT bench, or departments of enterprises,
which didn’t get enough allocation of IT resources.
Worse, IT resources were often required for report creation or updating. This led
to long queues of business managers outside the IT department, waiting for new
or better reporting. IT was swamped, and unable to focus on other priorities.

Modern solutions have a lighter IT footprint, which allows IT to focus on high
priority projects, and also ensures that business users get their questions answered
quickly and independently of IT. Look for a solution that:

• Minimizes IT resource requirements. Reducing the upfront and ongoing
IT resource requirements both saves money and increases the speed of
deployment. SaaS-based solutions, for example, require fewer IT resources
since the solution is provided as a service – there is no hardware to buy, no
software components to cobble together. Updates happen automatically,
so IT maintenance burdens are dramatically reduced.
• Respects IT standards and expertise. An organization’s IT team is
fundamental to the company’s ongoing operational success. The solution
should meet IT requirements for security, availability, and compatibility with
other systems.
• Empowers the end users. When end users are more self sufficient, the
demands on IT are lighter, and IT can better prioritize their activities. Ideally,
trained users can define reports, dashboards, and alerts on their own,
without any Java programming or scripting. IT can oversee critical data
management functions without getting bogged down in time consuming
user-facing report definitions.




Does it avoid risky integrations?



A solution that is already integrated has far lower deployment and ongoing risk
than a solution that starts as standalone components.




Another major contributor to the high risk in traditional BI solution development
is the large number of products and technologies that must be bolted together
to get a full solution. To start with, an ETL product is used to manage the
task of extracting data, transforming it for analysis and inserting it into the
warehouse. These tools are very technical and require expensive programmers
with specialized training.
But vendors have menacing gaps within their own product suites. Most BI suites
have been created from acquired technologies with only loose integration
between the capabilities. Most enterprise BI vendors require you to use separate
technologies for OLAP, reporting, dashboards, and even on-line access to data.
These separate products each require configuration and support.
Modern vendors take an entirely different approach to solving the technology
problem. They:
• Deliver all key functionality in one solution. A fully integrated BI
platform means you have one solution to master and all of your metadata
is encapsulated in one place.
• Require your staff to learn one technology and toolset. A single
solution has one set of commands, syntax, and data structures throughout.
Once your users have been quickly trained to develop applications on
the underlying platform, they will be fully equipped to create all types of
customer facing functionality.
• Avoid custom coding. Because Birst connects everything within one
application, you eliminate the situations where you would be required
to use custom Java code to script data or custom reports.
• Ease vendor management. Birst reduces the number of responsible
parties to one. You won’t have to live with finger pointing and
cross-vendor diagnostics when you have a problem; a single vendor
provides a single point of accountability.




Can business users easily create and explore
their own dashboards and reports?


A BI solution should make it easy for business users, not just IT, to create,
analyze, and explore information on their own.



Knowledge and speed are critical to solving business challenges. While BI
provides the information, it is the business manager who provides the timely
response to the new information. When insight is in the hands of business
professionals who can make a difference, organizations can achieve great success.
For this reason, it’s vital for a BI solution to make it easy for business users, not just
IT users, to analyze and explore information. The more BI becomes “pervasive” in
an organization, the more agile and proactive a business can become.
Achieving a solution that lets business users ”self serve” is challenging,
however: a solution has to be powerful enough to manage complexity yet
simple for the end user.

To ensure that you have a solution from which your business users can ”self
serve,” look for one that:

• Is easy to learn and use. Users should be able to come up to speed
on the system within days, not months. The solution itself should take
advantage of user interface standards – dragging and dropping, dropdown
boxes, highlighting – that are already familiar to a web savvy audience. The
vendor should also provide adequate online, webinar, or in-person training
to ensure that your user base can take best advantage of the solution.
• Makes it easy to explore data and new information. A report is of
limited use if you can’t easily dig for more details or find the drivers of
why a result happened as it did. Dashboards and reports that allow you to
“drill” into deeper details, filter information to the exact data set that you
need, or reset information to desired parameters make it possible for you
to truly explore your data.
• Delivers quick responses; allows users to home in on interesting data.
Even if the solution is analyzing gigabytes of information from across
multiple tables and data sources, answers need to be delivered quickly to
the user. Responsiveness, when combined with easy data exploration,
allows users to continue asking questions, refining them with each answer,
to home in on the exact issue of interest.
• Makes the complex easy. In order to make BI approachable for business
users, the solution needs to manage complexity to make analysis easier to
conduct. For example, one of the most complicated aspects of BI is dealing
with time variables. Every company has its own approach, and many business
questions include complex time nuances. Modern solutions can simplify
this complexity, allowing users to simply select options from a menu. Rather
than figuring out how to create formulas on their own, time-based reports
can be created with ease.


Can the solution scale to a large, diverse
user base?


The demands of your business can change rapidly and dramatically. Your BI solution
needs to keep pace.



Even BI projects with modest initial goals can eventually become huge deployments,
and you want to make sure that your solution can handle whatever the future
holds. If you are a midsize business with ambitions to grow significantly larger,
or a department of a large organization that realizes that your solution may
become a standard for the entire company – you want to make sure that your
solution can handle large, diverse groups of users, even if that’s not where
you’re starting.
A modern, SaaS architecture is highly flexible and scalable. It allows organizations
to start small, but add users quickly and at large scale. To be future proof, you
want your solution to:
• Quickly and easily scale to thousands of users. If your user base grows
from ten people to thousands in a short period of time, you want to be sure
that you can handle that growth in stride, without a major re-architecting
of the solution or the use of the full efforts of your IT team. The cost benefit
is that you pay only for what you need today and add capacity only when you
need it; you don’t have to pay upfront for “shelfware” that may or may not get
used. The solution should be able to add users quickly, without serious
degradation in performance, and without major resource and time requirements.
• Support multiple roles. As deployments get larger, users tend to fall into
different categories – super users, average users, occasional users. They
may have different demands on data, or have different security levels. Your
solution has to accommodate these different types of users, their access
patterns, and their feature needs, and make it easy to administer them all.
• Grow without resetting. Scale should be organic and evolutionary, not
disruptive. You should be able to expand easily, without having to make
significant new investments in infrastructure or supporting headcount.
It should be a natural expansion, not a complete reconstruction of the
existing implementation.



Can the solution keep up with my business
as its needs change?


A changing business landscape challenges every key system; a BI solution must
accommodate the next release of the business as well as every prior iteration.


A changing business landscape can challenge every company’s key systems,
but BI solutions confront even bigger obstacles than most. First, because of
their historical perspective, they must rationalize data across every version of
the business over a period of several years. BI cannot just move on to the next
release—it must accommodate the next release, as well as every prior iteration.
Second, much of the value of BI is to make sense of changing measures of
business effectiveness. Changes in customers, competitors, product offerings,
suppliers, and business units are all targets of your BI effort. A successful
solution must easily accommodate a dynamic business environment, rather than
requiring major reconstruction of data and functionality with each new major
product update.

A successful solution must:

• Add new data sources without requiring a major reset of the
solution. As your BI solution demonstrates its value with initial projects,
demand will increase to analyze more data sources. Your system should
be architected in such a way that it can accommodate this data easily and
seamlessly, without significant IT intervention or recoding of the solution.
• Be able to evaluate changes over time. To be effective, a BI solution
must model the many changes that happen over time. Looking at data
from an historical perspective requires a technology that can provide
meaningful views across data that is constantly changing.
• Offer business users self-service, so that they can answer their own
questions quickly and easily. Successful BI solutions become popular
solutions. If IT intervention is required for every new report request or
report update request, organizations end up with angry business users and
choked up IT request queues. When business users are empowered to
build and update their own reports and dashboards, the business is agile
and the IT agenda is focused on priorities. SaaS BI solutions, which have
the lightest requirements of IT teams, are particularly helpful on this point.
Heavy IT footprint solutions, such as open source software, can create
substantial IT backlogs over time.



Can the solution easily serve my
entire ecosystem?

Increasingly, organizations function by working with a network of suppliers, retailers,
partners, and channel resellers. Empowering these participants in your ecosystem with
timely information and analysis is a key to making this network function smoothly.
Achieving this extended view of information brings additional challenges to
your BI system, however. It requires a solution that can be easily and securely
accessed anywhere in the world. It also requires that information be tailored to
the level of access required – suppliers may have different views from logistics
partners. It may also require the effective delivery of information to a broad
array of devices - not just desktops and laptops, but mobile phones or tablet
computers as well.

A solution that serves your entire ecosystem should:

• Deliver a solution globally. While your direct employees may be
concentrated in one locale, your extended network is probably national or
global. Because of this, your solution must be accessible from any point
in the world where it is needed. While delivering a system like this in the
traditional method is complicated and prohibitively expensive, it can be
achieved fairly easily with SaaS solutions, which are available anywhere
there is an internet connection.
• Provide for multiple levels of access, with high security. The solution
should be able to control which type and amount of data is seen,
as well as which partners have the ability to add data or create their own
reports. Users may range from people who only receive alerts, to people who
can view reports, to people who have full access to the solution. All should be
protected with the highest level of information security.
• Integrate partner data. Your strategic partners demand higher levels of
data integration. In the same way that your sales and marketing teams
want a unified view of the demand generation funnel, your partners
will want to see how, for example, your finished good inventory level
expectations match with their production capacity or parts inventories.
• Deliver to all types of devices. Supply chain users may be on the factory
floor. Sales users may be in transit, and executive users could be anywhere.
Keeping your ecosystem in sync requires that information be consumable on
the most convenient device, whether that is a desktop, laptop, mobile
phone, or tablet computer. SaaS solutions have another advantage here:
since they are accessed through a modern browser, they can be easily
adapted for small-format devices.


Is the solution provider dedicated to my
ongoing success in BI?


A solution provider that is dedicated to providing best in class BI for its customers, 
both now and in the future, is a better bet.



Are the BI provider’s technology, incentives and motivations aligned with your
ongoing needs as a BI customer? Many traditional BI solutions have core
technology developed over two decades ago. These products were architected
in the age of thick clients, mainframe applications, and Unix database servers.
While these products have been updated with veneers of modern technology,
they still retain their older technology foundations.
Also, many traditional BI solutions have a business model that focuses on the
initial sale, not ongoing success. In the traditional software model, the initial
implementation is the largest payment to the software vendor. So completing
the initial sale is paramount, instead of ensuring satisfaction over the full
customer lifetime.

Companies deserve better than this. They deserve a company that is dedicated
to long term customer success. A modern vendor:

• Starts with a modern, standards-based architecture. Unlike traditional vendors
that continue to market what are essentially legacy products, modern vendors have
technology that is fully aligned with the cloud-based realities of today.
• Supports seamless, regular upgrades. Once a traditional BI solution
is deployed, it can be complicated, time consuming, and disruptive to
upgrade the solution, even when the new features are very desirable.
With a SaaS solution, new features and functionality are added regularly
and seamlessly, so that you can quickly experience the benefits of new
development while avoiding downtime and disruptions.
• Lives and dies by BI. The BI product category has come to be dominated
by technology giants that generate the majority of their revenues by doing
other things besides BI. As a result, the focus on business intelligence
innovation and customer satisfaction has declined. A vendor solely focused on
business intelligence is more dedicated to innovation and customer success in BI.
• Is successful when the customer is successful – now and in the future.
SaaS vendors have a subscription model. They make their money over
the lifetime of a customer relationship, so their incentive is to ensure that
companies are up and running quickly, and satisfied with the ongoing
solution today, tomorrow, and five years from now. This is a significant
departure from the traditional model, where customers paid a significant
amount up front, but were left to manage deployment and maintenance
themselves; customer satisfaction concerns were left to the customer
themselves, and satisfaction was often low.

~Jai Krishna Ponnappan

Sustainable Success Starts with Agile: Best Practices to Create Agility in Your Organization.

             While it might frustrate us when we can’t control it, change really should be seen as opportunity in disguise. Enterprises everywhere recognize that to turn opportunity into advantage, business and IT agility is more important than ever. Fifty-six percent of IT executives in a recent survey put agility as the top factor in creating sustainable competitive advantage—ahead of innovation.

But creating true agility is hard. In the same survey, just one in four respondents said their enterprise's IT was successful at improving and sustaining high levels of agility. Of the four crucial IT goals (optimization, risk management, agility, and innovation), agility had the lowest rate of realization among enterprises.

True agility in today’s Instant-On Enterprise involves reducing complexity on an application and infrastructure level, aggressively pursuing standardization and automation, and getting control of IT data to create a performance-driven culture. While the challenges are great, getting it right enables enterprises to capitalize on change by reducing time to market and lowering costs.

Four drivers for agility

At its simplest, agility refers to an organization’s ability to manage change. But there isn’t always agreement on what agility means to the enterprise. Keith Macbeath, senior principal consultant in HP Software Professional Services, meets frequently with customers seeking to increase IT performance. Typically, he says, drivers for agility include the following:

IT financial management:
Understanding the levers that affect cost allows your enterprise to be more nimble. For instance, if your business is cyclical, moving to a variable-cost model in IT can help sustain profitability even through a down cycle.

Improved time to market:
This gets to the heart of agility: delivering products and services faster.

Ensuring availability:
"Availability isn't a problem ... until it is. Then it's at the top of the CIO's agenda," says Macbeath. Being able to rapidly triage, troubleshoot and restore service is as important to agility as the ability to get the service deployed in the first place.

Responding to a significant business event:
In the merger of HP customer United Airlines with Continental Airlines, the faster the IT integration, the more significant the savings.


[Chart: Important factors for creating sustainable competitive advantage]

The importance of measurement and benchmarking

Business agility starts with a holistic view of IT data, says Myles Suer, senior manager in HP Software’s Planning and Governance product team. "Getting timely access to data that allows you to drive to performance goals and then make adjustments when you don't meet those goals represents the biggest transformation possibility for IT," he says.

For CIOs focused on transforming their business, the key is gaining access to accurate, up-to-date metrics that are based on best-practice KPIs. Trustworthy metrics let CIOs control IT through exception-based management and understand what it will take to improve performance.
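Exception-based management means that only the metrics missing their targets reach the CIO's attention. A minimal sketch of that idea is below; the KPI names, thresholds, and values are invented for illustration and are not from the article.

```python
# Hypothetical sketch of exception-based management over a set of IT KPIs.
# All KPI names, targets, and actuals here are illustrative examples.

def kpi_exceptions(actuals, targets):
    """Return the KPIs whose actual value misses its target.

    `targets` maps a KPI name to (threshold, direction), where 'min'
    means the actual must be >= threshold and 'max' means <= threshold.
    """
    exceptions = {}
    for name, (threshold, direction) in targets.items():
        value = actuals.get(name)
        if value is None:
            continue  # no data yet: nothing to escalate
        if direction == "min" and value < threshold:
            exceptions[name] = value
        elif direction == "max" and value > threshold:
            exceptions[name] = value
    return exceptions

targets = {
    "availability_pct": (99.9, "min"),
    "mean_time_to_repair_hrs": (4.0, "max"),
    "standard_infra_pct": (80.0, "min"),
}
actuals = {
    "availability_pct": 99.95,
    "mean_time_to_repair_hrs": 6.5,
    "standard_infra_pct": 72.0,
}

# Only the KPIs that miss target surface for management attention.
print(kpi_exceptions(actuals, targets))
```

Everything on target stays off the dashboard; only the misses (here, repair time and standardization coverage) are escalated.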
  
What to monitor and measure

To achieve greater agility, look to three broad areas of improvement and put in place monitoring programs to track against performance goals.

Standardization:

"That's the first step," Suer says. Ask yourself: What percentage of your applications run on standard infrastructure? How many sets of standard infrastructure do you have? "You want a standard technology foundation to meet 80 percent of your business requirements," says Macbeath. It may seem counterintuitive, but standardization makes a company more efficient and more agile than competitors. Standardization is also a prerequisite for automation. Public cloud providers aggressively pursue both. Taking the same approach for enterprise services means you can begin to realize cloud efficiency gains.

Simplicity:

"Agility is the inverse of complexity," says Macbeath. Your goal is to measure and reduce the number of integration points and interfaces in your architecture. The key is decreasing the number of platforms managed. Application rationalization and infrastructure standardization programs reach for low-hanging fruit, such as retiring little-used and duplicative applications and unusual, hard-to-support infrastructure. Longer term, your enterprise needs to tackle the more difficult task of reducing the number of interfaces between applications. Macbeath recommends driving to a common data model between applications. This reduces support costs and makes it faster and cheaper to integrate new functionality into an existing environment.

Service responsiveness:
This can be as simple as tracking help-desk response time, mean time to repair, escalations and so on. "Once you're systematically tracking responsiveness, you can move to a more sophisticated level of cost-to-quality tradeoffs," Macbeath says.
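Mean time to repair, one of the responsiveness metrics mentioned, can be tracked from incident open/close timestamps. A minimal sketch with fabricated incident data:

```python
# Minimal sketch of a service-responsiveness metric: mean time to repair
# (MTTR) computed from incident open/close timestamps. The data is invented.

from datetime import datetime

incidents = [
    ("2013-01-04 09:10", "2013-01-04 11:40"),
    ("2013-01-09 14:00", "2013-01-09 15:30"),
    ("2013-01-15 08:20", "2013-01-15 13:20"),
]

fmt = "%Y-%m-%d %H:%M"
durations_hrs = [
    (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600
    for start, end in incidents
]
mttr_hours = sum(durations_hrs) / len(durations_hrs)
print(f"MTTR: {mttr_hours:.1f} hours")
```

Once a number like this is tracked systematically, the cost-to-quality trade-offs Macbeath mentions become possible to quantify.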

Translating agility into business success

In working with HP customers, Macbeath sees numerous examples of organizations that are successfully increasing their agility.

For example, a European bank working with HP has established an online system to show internal customers exactly how much it costs for one "unit" of Linux computing to run an application at multiple service levels: silver or gold. "Retail banking is a very cost-competitive business," Macbeath says. "Allowing internal customers to see the cost-performance tradeoffs for themselves and have the numbers right there makes it possible for business and IT to work together to make decisions to benefit the business."
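The bank's showback system amounts to publishing a rate card and letting internal customers price their own workloads. A toy sketch of that idea; the per-unit rates are invented, not the bank's actual figures:

```python
# Hypothetical showback calculator in the spirit of the bank example:
# price one "unit" of Linux computing at different service levels.
# The rates below are purely illustrative.

unit_rates = {"silver": 120.0, "gold": 200.0}  # cost per unit per month

def monthly_cost(units, level):
    """Cost of running `units` of Linux computing at a service level."""
    return units * unit_rates[level]

# An internal customer compares the tiers for a 10-unit application.
for level in ("silver", "gold"):
    print(f"{level}: {monthly_cost(10, level):.2f} per month")
```

Putting the numbers in front of internal customers is what lets business and IT make the cost-performance trade-off together.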

Pursuing agility by way of automation let another HP customer, a global telecom company, reach new sources of revenue. Instantly deploying wireless hotspots provides a key capability for the company; by automating this process, IT was able to push out more than 40,000 new hotspots. The company turned on a new source of revenue almost immediately.

Other organizations are finding the same path to greater agility: standardization and automation combined with measuring results to drive performance. Through this process, their IT departments are demonstrating greater value for the business while delivering greater speed and transparency.