Key Big Data Challenges

Sixty-one percent of IT leaders expect spending on big data initiatives to increase, while only 5% expect decreases. The challenge: Finding the right big data talent to fulfill those initiatives, according to a recent survey.

Nearly 60% of respondents are confident that their IT department can satisfy the big data demands of the business, while 14% are not confident.
"The data indicates current expectations of big data are still somewhat unrealistic due to market hype," the report states. "Despite IT leaders expecting spending to increase, the confidence level in their department's ability to meet big data demands in comparison to broader IT initiatives is lower."


  
About two-thirds of the IT executives rank big data architect as the most difficult role to fill. Data scientists (48%) and data modelers (43%) round out the top three most difficult positions to fill. More technical big data positions are ranked as less difficult to fill.

Not by coincidence, big data companies are introducing online and face-to-face training programs and certifications for Hadoop and other related software platforms.
Still, other big data challenges remain. Variety—the dimension of big data dealing with the different forms of data—hinders organizations from deriving value from big data the most, according to 45% of those surveyed. Speed of data is next in terms of challenges, at 31%, followed by the amount of data, at 24%.

The application of big data is happening in a number of business areas, according to the study, with 81% of organizations viewing operations and fulfillment as priority areas within the next 12 months. This was followed by customer satisfaction (53%), business strategy (52%), governance/risk/compliance (51%) and sales/marketing (49%).
More than 200 IT leaders participated in the February 2015 survey.

ADDRESSING FIVE EMERGING CHALLENGES OF BIG DATA


Introduction - Big Data Challenges 
Challenge #1: Uncertainty of the Data Management Landscape 
Challenge #2: The Big Data Talent Gap 
Challenge #3: Getting Data into the Big Data Platform 
Challenge #4: Synchronization across the Data Sources 
Challenge #5: Getting Useful Information out of the Big Data Platform 
Considerations: What Risks Do These Challenges Really Pose? 
Conclusion: Addressing the Challenge with a Big Data Integration Strategy 


INTRODUCTION - BIG DATA CHALLENGES
Big data technologies are maturing to the point at which more organizations are prepared to pilot and adopt big data as a core component of their information management and analytics infrastructure. Big data, as a compendium of emerging disruptive tools and technologies, is positioned as the next great step in enabling integrated analytics in many common business scenarios.
As big data wends its inexorable way into the enterprise, information technology (IT) practitioners and business sponsors alike will bump up against a number of challenges that must be addressed before any big data program can be successful. Five of those challenges are:
1. Uncertainty of the Data Management Landscape – There are many competing technologies, and within each technical area there are numerous rivals. Our first challenge is making the best choices while not introducing additional unknowns and risk to big data adoption.
2. The Big Data Talent Gap – The excitement around big data applications seems to imply that there is a broad community of experts available to help with implementation. However, this is not yet the case, and the talent gap poses our second challenge.
3. Getting Data into the Big Data Platform – The scale and variety of data to be absorbed into a big data environment can overwhelm the unprepared data practitioner, making data accessibility and integration our third challenge.
4. Synchronization Across the Data Sources – As more data sets from diverse sources are incorporated into an analytical platform, the potential for time lags to impact data currency and consistency becomes our fourth challenge.
5. Getting Useful Information out of the Big Data Platform – Lastly, using big data for purposes ranging from storage augmentation to enabling high-performance analytics is impeded if the information cannot be adequately provisioned back to the other components of the enterprise information architecture, making big data syndication our fifth challenge.

In this paper, we examine these challenges and consider the requirements for tools to help address them. First, we discuss each of the challenges in greater detail, and then we look at understanding and then quantifying the risks of not addressing these issues. Finally, we explore how a strategy for data integration can be crafted to manage those risks.

CHALLENGE #1: UNCERTAINTY OF THE DATA MANAGEMENT LANDSCAPE
One disruptive facet of big data is the use of a variety of innovative data management frameworks whose designs are intended to support both operational and, to a greater extent, analytical processing. These approaches are generally lumped into a category referred to as NoSQL (that is, "not only SQL") frameworks, which differ from the conventional relational database management system paradigm in their storage models and data access methods, and which are largely designed to meet the performance demands of big data applications (such as managing massive amounts of data with rapid response times).
There are a number of different NoSQL approaches. Some employ the paradigm of a document store that maintains a hierarchical object representation (using standard encodings such as XML, JSON, or BSON) for each managed data object or entity. Others are based on the concept of a key-value store, which allows applications to attach values for arbitrary named attributes ("keys") to each managed object in the data set, essentially enabling a schema-less model. Graph databases maintain the interconnected relationships among different objects, simplifying social network analyses. And other paradigms continue to evolve.
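To make the distinction concrete, here is a minimal sketch, using plain Python structures, of how the same customer record might look under each of these paradigms; the entity and field names are invented for illustration and do not correspond to any particular NoSQL product.

```python
# Illustrative only: the same "customer" entity expressed in the three NoSQL
# styles described above, using plain Python structures. Field names
# (customer_id, orders, FRIEND_OF, ...) are invented for this example.

import json

# Document store: one hierarchical object per entity (often serialized as JSON/BSON).
customer_document = {
    "customer_id": "C-1001",
    "name": "Alice Example",
    "orders": [
        {"order_id": "O-1", "total": 42.50},
        {"order_id": "O-2", "total": 17.25},
    ],
}
print(json.dumps(customer_document, indent=2))

# Key-value store: arbitrary named attributes attached to a key, schema-less,
# so different entities can carry different keys.
key_value_rows = {
    ("C-1001", "name"): "Alice Example",
    ("C-1001", "segment"): "retail",
    ("C-1002", "name"): "Bob Example",
    ("C-1002", "loyalty_tier"): "gold",
}

# Graph model: entities as nodes, relationships as edges, which is convenient
# for social-network-style analysis.
graph_edges = [
    ("C-1001", "FRIEND_OF", "C-1002"),
    ("C-1001", "PURCHASED", "O-1"),
]
```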
We are still in the relatively early stages of this evolution, with many competing approaches and companies. In fact, within each of these NoSQL categories, there are dozens of models being developed by a wide contingent of organizations, both commercial and non-commercial. Each approach is suited differently to key performance dimensions—some models provide great flexibility, others are eminently scalable in terms of performance, while others support a wider range of functionality.

   
In other words, the wide variety of NoSQL tools and developers and the state of the market lend a great degree of uncertainty to the data management landscape. Choosing a NoSQL tool can be difficult, but committing to the wrong core data management technology can prove to be a costly error if the selected vendor's tool does not live up to expectations, if the vendor company fails, or if third-party application development tends to adopt different data management schemes. For any organization seeking to institute big data, the challenge is to propose a means of selecting among NoSQL alternatives while mitigating the technology risk.


CHALLENGE #2: THE BIG DATA TALENT GAP

It is difficult to peruse the analyst and high-tech media without being bombarded with content touting the value of big data analytics and the corresponding reliance on a wide variety of disruptive technologies. These new tools range from traditional relational database tools with alternative data layouts designed to increase access speed while decreasing the storage footprint, to in-memory analytics, NoSQL data management frameworks, and the broad Hadoop ecosystem.

There is a growing community of application developers who are increasing their knowledge of tools like those comprising the Hadoop ecosystem. That said, despite the promotion of these big data technologies, the reality is that there is not a wealth of skills in the market. The typical expert has gained experience through tool implementation and its use as a programming model, rather than through the data management aspects. That suggests that many big data tools experts remain somewhat naïve when it comes to the practical aspects of data modeling, data architecture, and data integration. In turn, this can lead to less-than-successful implementations whose performance is negatively impacted by issues related to data accessibility.

And the talent gap is real—consider these statistics: According to analyst firm McKinsey & Company, "By 2018, the United States alone could face a shortage of 140,000 to 190,000 people with deep analytical skills as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions." And in a report from 2012, "Gartner analysts predicted that by 2015, 4.4 million IT jobs globally will be created to support big data with 1.9 million of those jobs in the United States. … However, while the jobs will be created, there is no assurance that there will be employees to fill those positions."


NoSQL and other innovative data management options are predicted to grow in 2015 and beyond.

There is no doubt that as more data practitioners become engaged, the talent gap will eventually close. But when developers are not adept at addressing these fundamental data architecture and data management challenges, the ability to achieve and maintain a competitive edge through technology adoption will be severely impaired. In essence, for an organization seeking to deploy a big data framework, the challenge lies in ensuring a level of usability for the big data ecosystem as the proper expertise is brought on board.


CHALLENGE #3: GETTING DATA INTO THE BIG DATA PLATFORM



It might seem obvious that the intent of a big data program involves processing or analyzing massive amounts of data. Yet while many people have raised expectations regarding analyzing massive data sets sitting in a big data platform, they may not be aware of the complexity of facilitating the access, transmission, and delivery of data from the numerous sources and then loading those various data sets into the big data platform.
The impulse toward establishing the ability to manage and analyze data sets of potentially gargantuan size can overshadow the practical steps needed to seamlessly provision data to the big data environment. The intricate aspects of data access, movement, and loading are only part of the challenge. The need to navigate extraction and transformation is not limited to structured conventional relational data sets. Analysts increasingly want to import older mainframe data sets (in VSAM files or IMS structures, for example) and at the same time want to absorb meaningful representations of objects and concepts refined out of different types of unstructured data sources such as emails, texts, tweets, images, graphics, audio files, and videos, all accompanied by their corresponding metadata.
An additional challenge is meeting the response time expectations for loading data into the platform. Trying to squeeze massive data volumes through "data pipes" of limited bandwidth will degrade performance and may even impact data currency. This implies two challenges for any organization starting a big data program. The first is to catalog the numerous data source types expected to be incorporated into the analytical framework and to ensure that there are methods for universal data accessibility; the second is to understand the performance expectations and ensure that the tools and infrastructure can handle the volume transfers in a timely manner.
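As a rough illustration of these two concerns, the following Python sketch keeps a small catalog of assumed source types and loads records in bounded batches; the source names, batch size, and load_source helper are hypothetical placeholders rather than any specific integration tool.

```python
# A minimal, illustrative sketch of the two loading concerns discussed above:
# (1) cataloging heterogeneous sources and (2) moving data in bounded batches
# so a limited-bandwidth "data pipe" is never asked to swallow everything at once.

from typing import Iterable, Iterator, List

# Hypothetical source catalog: names, formats, and volumes are invented.
SOURCE_CATALOG = [
    {"name": "crm_relational", "format": "rows",    "expected_daily_volume_gb": 5},
    {"name": "mainframe_vsam", "format": "records", "expected_daily_volume_gb": 20},
    {"name": "social_feed",    "format": "json",    "expected_daily_volume_gb": 50},
]

def batched(records: Iterable[dict], batch_size: int) -> Iterator[List[dict]]:
    """Yield records in fixed-size batches to keep each transfer bounded."""
    batch: List[dict] = []
    for record in records:
        batch.append(record)
        if len(batch) >= batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

def load_source(records: Iterable[dict], batch_size: int = 10_000) -> int:
    """Push one source into the platform batch by batch; returns rows loaded."""
    loaded = 0
    for batch in batched(records, batch_size):
        # Placeholder for the real sink (HDFS put, bulk insert, message bus, ...).
        loaded += len(batch)
    return loaded

if __name__ == "__main__":
    fake_records = ({"id": i} for i in range(25_000))  # stand-in for a real extract
    print(load_source(fake_records))                   # -> 25000
```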

CHALLENGE #4: SYNCHRONIZATION ACROSS THE DATA SOURCES
Once you have figured out how to get data into the big data platform, you begin to realize that data copies migrated from different sources on different schedules and at different rates can rapidly get out of synchronization with the originating systems. There are different aspects of synchrony. From a data currency perspective, synchrony implies that the data coming from one source is not out of date with data coming from another source. From a semantics perspective, synchronization implies commonality of data concepts, definitions, metadata, and the like.
With conventional data marts and data warehouses, sequences of data extractions, transformations, and migrations all create situations in which information is at risk of becoming unsynchronized. But as data volumes explode and the speed at which updates are expected grows, ensuring the level of governance typically applied in conventional data management environments becomes much more difficult.
The inability to ensure synchrony for big data poses the risk of analyses that use inconsistent or potentially even invalid information. If inconsistent data in a conventional data warehouse poses a risk of forwarding faulty analytical results to downstream information consumers, allowing more rampant inconsistencies and asynchrony in a big data environment can have a much more disastrous effect.
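One simple way to guard against the currency aspect of this problem is a pre-analysis freshness check, sketched below in hypothetical Python; the source names, timestamps, and four-hour tolerance are made up for illustration.

```python
# Illustrative currency check: before running an analysis, compare each source's
# last-refresh watermark against a freshness threshold so that one stale feed
# does not silently skew results. Source names and thresholds are invented.

from datetime import datetime, timedelta, timezone

FRESHNESS_LIMIT = timedelta(hours=4)   # hypothetical business tolerance

source_watermarks = {
    "orders_feed":    datetime(2015, 2, 10, 9, 0, tzinfo=timezone.utc),
    "inventory_feed": datetime(2015, 2, 10, 2, 0, tzinfo=timezone.utc),
}

def stale_sources(watermarks, now, limit=FRESHNESS_LIMIT):
    """Return the sources whose last refresh is older than the allowed lag."""
    return [name for name, ts in watermarks.items() if now - ts > limit]

now = datetime(2015, 2, 10, 10, 0, tzinfo=timezone.utc)
late = stale_sources(source_watermarks, now)
if late:
    print("Hold the analysis; out-of-date sources:", late)
```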


CHALLENGE #5: GETTING USEFUL INFORMATION OUT OF THE BIG DATA PLATFORM


The most practical use cases for big data involve data availability: augmenting existing data storage as well as providing access to end users employing business intelligence tools for data discovery. These BI tools must not only be able to connect to one or more big data platforms, they must also provide transparency to data consumers to reduce or eliminate the need for custom coding. At the same time, as the number of data consumers grows, we can anticipate the need to support a rapidly expanding collection of simultaneous user accesses. That demand may spike at different times of the day or in reaction to different phases of business process cycles. Ensuring right-time data availability to the community of data consumers becomes a critical success factor.
This frames our fifth and final challenge: enabling a means of making data accessible to the different types of downstream applications in a way that is seamless and transparent to the consuming applications while elastically supporting demand.
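The sketch below illustrates one possible shape of such a provisioning layer: a single lookup function that hides which platform holds each data set and caches repeated reads to absorb demand spikes. It is a toy example; the routing table, dataset names, and fetch_from_platform placeholder are assumptions, not a reference to any particular BI product.

```python
# Toy data-access layer: BI consumers call one function without knowing which
# platform holds the data; a small cache absorbs bursts of repeated requests.

from functools import lru_cache

# Hypothetical routing table mapping datasets to the platform that serves them.
DATASET_ROUTING = {
    "daily_sales":  "hadoop_cluster",
    "customer_360": "nosql_store",
}

def fetch_from_platform(platform: str, dataset: str) -> list:
    # Placeholder for the real connector (Hive query, REST call, export job, ...).
    return [{"platform": platform, "dataset": dataset}]

@lru_cache(maxsize=128)
def get_dataset(dataset: str) -> tuple:
    """Resolve the dataset to its platform and return rows; cached for repeat reads."""
    platform = DATASET_ROUTING[dataset]
    return tuple(fetch_from_platform(platform, dataset))

# Repeated calls during a demand spike hit the cache instead of the platform.
rows = get_dataset("daily_sales")
```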
CONSIDERATIONS: WHAT RISKS DO THESE CHALLENGES REALLY POSE?
Considering the business impacts of these challenges suggests some serious risks to successfully deploying a big data program. In Table 1, we reflect on the impacts of our challenges and corresponding risks to success.

Table 1. Challenges, impacts, and risks

Challenge: Uncertainty of the market landscape
Impact: Difficulty in choosing technology components
Risk: Vendor lock-in; committing to a failing product or failing vendor

Challenge: Big data talent gap
Impact: Steep learning curve; extended time for design, development, and implementation
Risk: Delayed time to value

Challenge: Big data loading
Impact: Increased cycle time for analytical platform data population
Risk: Inability to actualize the program due to unmanageable data latencies

Challenge: Synchronization
Impact: Data that is inconsistent or out of date
Risk: Flawed decisions based on flawed data

Challenge: Big data accessibility
Impact: Increased complexity in syndicating data to end-user discovery tools
Risk: Inability to appropriately satisfy the growing community of data consumers

The Transforming Business Intelligence Landscape (2015 Update)

To compete in today's global economy, businesses and governments need agility and the ability to adapt quickly to change. And what about internal adoption to roll out enterprise-grade Business Intelligence (BI) applications?

BI change is ongoing; often, many things change concurrently.





One element that too often takes a back seat is the impact of changes on the organization's people. Prosci, an independent research company focused on organizational change management (OCM), has developed benchmarks that propose five areas in which change management needs to do better. They all involve the people side of change: better engage the sponsor; begin organizational change management early in the change process; get employees engaged in change activities; secure sufficient personnel resources; and better communicate with employees.

Five Plus One

Because BI is not a single application — and often not even a single platform — we recommend adding a sixth area: visibility into BI usage and performance management of BI itself, aka BI on BI. Forrester recommends keeping these six areas top of mind as your organization prepares for any kind of change.
Some strategic business events, like mergers, are high-risk initiatives involving major changes over two or more years; others, such as restructuring, must be implemented in six months. In the case of BI, some changes might need to happen within a few weeks or even days. All changes will lead to either achieving or failing to achieve a business outcome.

Triggers for Change

There are seven major categories of business and organizational change:
  1. People acquisitions
  2. Technology acquisitions
  3. Business process changes
  4. New technology implementations
  5. Organizational transformations
  6. Leadership changes
  7. Changes to business process outsourcing or technology sourcing
When organizations decide to implement change, detailed upfront planning puts the framework in place to make that change a success. Project and change management teams work in parallel but have many points of intersection. Project managers focus on aspects like tasks, timelines, and technology, while change managers look at which people will be affected by the change and plan ways to mitigate fear. With any change, no matter how small, do not neglect the people angle by focusing only on the project management aspects of the change. Prosci warns that ignoring the people side of change until employees greet a go-live date with outrage and resistance will result in teams having to go back to the drawing board and rework, redesign, re-evaluate, and revisit the entire effort.

Flex Your Business Muscle

In the modern world, one cannot leave technology to technologists. In BI, this is especially challenging and critical with the added complexity of increased business involvement. Unlike other enterprise technology applications, where business and technology partner, in BI the business owns many BI components and work streams. In the world of BI, technology is everyone's job. Therefore, pay particular attention to several ways in which project leaders differ from change leaders:
  • Project leaders focus on tasks; change leaders focus on people.
  • Project leaders and change leaders work together to integrate their plans.
  • Change leaders are "people persons" with credibility in the organization.
  • Team members have a wide range of competencies but add additional value with a specialty.
  • BI projects are iterative; BI change management is constant and ongoing.
Meticulous preparation for BI change is critical to success. This means creating an awareness of the need for and value of change management. Management often underestimates the effort and resources necessary to implement the change. The end result or business outcome of the change must be explicit and clearly communicated to employees, customers, and partners. Planning includes specific tasks and activities, as well as careful analysis of people management challenges and how to address them.
Also, consider that BI projects are different from many technology projects and therefore require special change management considerations. 

In our recently published report, we researched the following key areas of BI organizational change management:

  • Identifying who will help make the change
  • Securing a budget to fund and support ongoing change management activities
  • Reaching out to specialists (we reviewed the OCM capabilities of top management consulting firms)
  • Making a change management plan
  • Preparing a varied, ongoing communications plan
  • Developing learning, development, and incentive plans
  • Planning for measuring change management effectiveness

Forecast & Trends in Business Intelligence for 2013


A BIG (Business Intelligence Growth) Year :)



Defining Business Intelligence in 2013 as a Collaborative Experience and a Shared Exercise of Asking and Answering Insightful Questions About a Business
 
 

What a year 2012 was for business intelligence! The staid old world of databases is developing faster and faster, with startups addressing new data problems and established companies innovating on their platforms. Web-based analytics tools are connecting to web-based data. And everything's mobile.
 


With all the attention organizations are placing on innovating around data, the rate of change will only increase. So what should you expect to see?

 


Proliferation of Data Stores.

Once upon a time, an organization had different types of data: CRM, point of sale, email, and more. The rulers of that organization worked very hard and eventually got all their data into one fast data warehouse. 2013 is the year we will recognize this story as a fairy tale. The organization that has all its data in one place does not exist. Moreover, why would you want that? Big data could be in places like Teradata and Hadoop. Transactional data might be in Oracle or SQL Server. The right data stores for the right data and workloads will be seen as one of the hallmarks of a great IT organization, not a problem to be fixed.





Hadoop is Real.

Back in 2008 and 2009, Hadoop was a science project. By 2010 and 2011, some forward-thinking organizations started doing proofs of concept with Hadoop. In 2012, we saw the emergence of many production-scale Hadoop implementations, as well as a crop of companies trying to address pain points in working with Hadoop. In 2013, Hadoop will finally break into the mainstream for working with large or unstructured data. It is also becoming more "right-time" for a faster analytics experience.



Self-Reliance Is the New Self-Service.

Self-service BI is the idea that any business user can analyze the data they need to make a better decision. Self-reliance is the coming of age of that concept: it means that business users have access to the right data, that the data is in a place and format they can use, and that they have the solutions that enable self-service analytics. When all this happens, people become self-reliant with their business questions, and IT can focus on providing the secure data and solutions to get them there.




The Value of Text and Other Unstructured Data Is (Finally!) Recognized.

One of the subplots of the rise of Hadoop has been the rise of unstructured data. Emails, documents, web analytics, and customer feedback have existed for years, but most organizations struggled enough to understand their structured data that unstructured data was left alone. In 2011 and 2012 we saw more techniques emerge to help people deal with unstructured data, not least of which is a place to put it (Hadoop). With the explosion of social data like Twitter and Facebook posts, text analysis becomes even more important. Expect to see a lot of it in 2013.





Cloud BI Grows Up.

Cloud business intelligence as your primary BI? No way! Not in 2012, at least. There are cloud BI services, but with important limitations that have made it difficult to use the cloud as your primary analytics solution. In 2013, we expect to see the maturation of cloud BI, so that people can collaborate with data in the cloud, just as they collaborate in Salesforce.




Visual Analytics Wins Best Picture.

For years visual analytics has been the Best Documentary of business intelligence: impressive, but for the intellectuals and not the mass audience. But people are finally beginning to realize that visual analytics helps anyone explore, understand and communicate with data. It's the star of business analytics, not a handy tool for scientists.




Forecasting and Predictive Analytics Become Common.

Much like visual analytics, forecasting used to be seen as the domain of the scientist. But everyone wants to know the future. Forecasting tools are maturing to help businesses identify emerging trends and make better plans. We expect forecasting and predictive analyses to become much more common as people use them to get more value from their data.




Mobile BI Moves Up a Weight Class.

Last year we predicted that mobile BI would go mainstream—and it did. Now everyone from salespeople to insurance adjusters to shop floor managers use tablets to get data about their work right in the moment. To date, mobile BI has been lightweight, involving the consumption of reports with a bit of interactivity. But the tremendous value that people have seen in mobile BI is driving a trend toward more ability to ask and answer questions.





Collaboration Is Not a Feature, It's a Reality!

Business intelligence solutions have often talked up their collaboration features. In 2013, that will no longer be good enough. Collaboration must be at the root of any business intelligence implementation, because what is business intelligence but a shared experience of asking and answering questions about a business? In 2013, businesses will look for ways to involve people all around their organization in working together to understand and solve problems.



Pervasive Analytics: Finally…Pervasive.

As an industry, we’ve talked for years about terms like “pervasive BI” or “BI for the masses”. There’s a whole market for data that is outside of the market for “business intelligence.” When we talk more about data, and less about software categories like BI, we get to the crux of maximizing business value—and fast, easy-to-use visual analytics is the key that opens the door to organization-wide analytics adoption and collaboration. 


These are the trends we see in talking with customers about what they're doing today and where we are investing for the future. The good news is that investment is most often being driven by a desire to take good initiatives farther, not a sense of frustration with failed initiatives. Perhaps the new technology and investment of the last few years is finally starting to pay off. No matter what, you can expect lots of change in business intelligence in 2013.





Internet Governance In a Changing World | Ensuring an Open, Fair and Neutral Architecture

This is a comment in response to a post I read on TAP: Technology | Academics | Policy.

The original post read:


"The Internet's unique characteristics have made it remarkably resistant to traditional tools of state governance. This is both good and bad. Phil Weiser of Silicon Flatirons explains in the paper below. What do you think? "



Here's a link to the paper, "Internet Governance: The Role of Multistakeholder Organizations" by Joe Waz and Phil Weiser, part of The Silicon Flatirons Roundtable Series on Entrepreneurship, Innovation, and Public Policy.

Ref. http://siliconflatirons.com/documents/publications/report/InternetGovernanceRoleofMSHOrgs.pdf




Response,


       "As an observation among individuals and groups within the broader global community, this is not merely a case of inherent resistance, it is critically essential to have organizations that are presently integral to the net, and that actively participate and engage in diplomacy, society, commerce and the likes, to determine and to have a say when it comes to building consensus, scoping out the cultural aspects of the Internet, its evolving architecture and keeping pace with its exponential development. Such a sentiment among the broader community is what has, despite all opponents and obstacles, helped keep the net open, neutral and truly global to date.
           Skepticism is expected and is natural among those in governments across the world, some individuals in civil society, and in elements of industry, because there are no clear and obvious means by which they can 1)avoid isolation, 2)ensure inclusion/participation and 3) contribute to help make such a flexible governing body truly representative, accountable,and responsible.

P.S. ~ Thank you so much for posting this. I am definitely going to dig deeper, research, and keep track of their findings.

Best Regards, Jai Krishna Ponnappan :)
"


Suggested Reading (2012 Publication):



"Internet Architecture And Innovation" 
by Barbara van Schewick




Overview

Today, following housing bubbles, bank collapses, and high unemployment, the Internet remains the most reliable mechanism for fostering innovation and creating new wealth. The Internet's remarkable growth has been fueled by innovation. In this pathbreaking book, Barbara van Schewick argues that this explosion of innovation is not an accident but a consequence of the Internet's architecture, a consequence of technical choices regarding the Internet's inner structure that were made early in its history. The Internet's original architecture was based on four design principles: modularity, layering, and two versions of the celebrated but often misunderstood end-to-end arguments. But today, the Internet's architecture is changing in ways that deviate from those original design principles, removing the features that have fostered innovation and threatening the Internet's ability to spur economic growth, to improve democratic discourse, and to provide a decentralized environment for social and cultural interaction in which anyone can participate. If no one intervenes, network providers' interests will drive networks further away from the original design principles. If the Internet's value for society is to be preserved, van Schewick argues, policymakers will have to intervene and protect the features that were at the core of the Internet's success.

About the Author

      Barbara van Schewick is Associate Professor of Law and Helen L. Crocker Faculty Scholar at Stanford Law School, Director of Stanford Law School’s Center for Internet and Society, and Associate Professor (by courtesy) of Electrical Engineering in Stanford University's Department of Electrical Engineering.


Reviews

       “...Internet Architecture and Innovation is an important work: it supplies a key piece of the broadband puzzle in its consideration of broadband transport as a necessary input for other businesses…van Schewick’s fundamental premise rings true: only neutral networks promote competition and innovation.” ars technica


Endorsements

"This is a tour de force on the topic of the end-to-end principle in the design of the Internet." Daniel E. Atkins, W.K. Kellogg Professor of Community Information, Professor of Information and EECS, and Associate Vice-President for Research Cyberinfrastructure, University of Michigan


"This is an important book, one which for the first time ties together the many emerging threads that link the economic, technical, architectural, legal, and social frameworks of the birth and evolution of the Internet." David P. Reed, MIT Media Laboratory


"This isn't a flash in the pan piece. This book will be an evergreen in a wide range of academic and policy contexts more than an introduction to how technology and policy should be analyzed, it is, in my view, the very best example of that analysis." Lawrence Lessig, author of Code and Other Laws of Cyberspace






Catering to Your Niche Crowd ~ Boutique Publishing

           
This is in response to an interesting article on Forbes:
https://www.facebook.com/forbes



"..Some of the most interesting experiments in publishing are the ones that use an entirely different math. NSFW Corp.(http://www.nsfwcorp.com/) is one of them. Launched by former TechCrunch writer Paul Carr over the summer, it’s a general-interest news and humor site that doesn’t need, doesn’t expect and doesn’t particularly want a ton of readers.

Since its launch, it has amassed — or a-niched — more than 3,000 subscribers who pay $3 a month for access. If it can attract 30,000 subscribers, it will be at or near the breakeven point. “If we can get 50,000, none of us will ever have to do anything else again,” says Carr.

NSFW’s mix of gonzo politics, guerilla journalism and open-mic comedy isn’t for everyone. But that’s the point. It’s because it doesn’t try to be for everyone that it can be what it is. After all, Carr notes, if there’s one thing the Web has proven, it’s that there’s money to be made in catering to niche tastes. “It’s nearer to paying for porn than it is to paying for news,” he says of his site’s appeal.


Smallness also makes it easier to innovate, which NSFW does on a variety of levels."


My response,


            
There are quite a few select groups, underground networks, and circles that offer membership by invite only. Of course, once you identify and demarcate your niche crowd or select audience, things such as scalability and monetization by numbers go out the window, because the primary emphasis is often purely on delivering quality and value that is relevant, unique, and hard to find. P.S. ~ Not an easy model to follow, but worthy of consideration once you have matured in the business and tested the waters over the years.

~ Jai Krishna Ponnappan

Ref. https://www.facebook.com/photo.php?fbid=10151170951552509&set=a.10151047526737509.456571.30911162508&type=1&ref=nf

Content is King and So is Leveraging Big Data




" We Offer Content Management "




         

















               
This is a statement being offered by many, but is it really content management or merely storing documents that are linked to a database record?

Questions that immediately arise in my mind when I hear this statement are:

Do you provide audit logs reflecting what happens to the content?
What levels of security do you support? (Repository, folder, document, element)
How do you capture information? (Imaging, Office applications, Outlook)
What capture devices are supported? (Scanners, multi-function peripherals, cameras)
Is there an ability to use workflow to automate some processes?
What is the approach to search? (Keyword, Boolean, full-text, parametric)
Does it support versioning?

The term content management is being used more and more by suppliers of all types and sizes, and while there is some truth in their claims, the question is: what does content management mean to you in relation to addressing your business requirements? Peel this back again and the real question becomes: what are your business requirements in relation to content and records management, and does the application you are considering deliver what you need now and provide a way to grow in the future?

In my view, there are many good applications on the market today. Some are focused on content management, some are focused on a business application with bolt-on content management capabilities, and some combine a good degree of both. This does not mean they are bad or will not work for you; it means that you have to become more knowledgeable in order to make the right decisions and choices. The last thing you want to do is make an investment only to find out it does not provide what you thought it would.
Imagine that you make an investment to manage risk and, when an audit comes, you cannot defend your information due to a lack of audit logs that would have presented a historical view of the information in question. When you are asked to show who accessed it, when they accessed it, and what they did with it, could you answer these questions confidently? If you are brought to court and asked to provide all of the information related to the case, which includes electronic files, emails, and even metadata associated with the case, could you do so with a level of confidence that you have found everything, or would you have a high level of doubt and uncertainty?

In order to make the right choices, you need to identify and define your requirements, understand the technologies, and assess what is available to you. It could be that the functionality being offered is suitable to meet your needs, but wouldn't you feel better knowing it for sure?
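As a simple illustration of the audit-trail capability those questions probe for, the hypothetical Python sketch below records who touched a document, when, and how, and then reconstructs that history on demand; the event fields and in-memory store are simplifications, not a description of any specific ECM product.

```python
# Minimal illustration of an audit trail: record who touched a document, when,
# and what they did, so "who accessed it, when, and what did they do with it"
# can be answered later. The fields and store are simplified placeholders.

import json
from datetime import datetime, timezone

AUDIT_LOG = []   # in practice this would be an append-only, tamper-evident store

def record_event(user: str, document_id: str, action: str) -> dict:
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "document_id": document_id,
        "action": action,          # e.g. "viewed", "edited", "exported"
    }
    AUDIT_LOG.append(event)
    return event

record_event("jsmith", "DOC-2042", "viewed")
record_event("jsmith", "DOC-2042", "exported")

# Later, during an audit: reconstruct the history of one document.
history = [e for e in AUDIT_LOG if e["document_id"] == "DOC-2042"]
print(json.dumps(history, indent=2))
```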
If you are ready to move forward and are finding yourself stuck or unfocused and are not sure where to begin or what to do next, seek professional assistance and/or training to get you started. Be sure to investigate AIIM's Enterprise Content Management training program.


And be sure to read the AIIM Training Briefing on ECM (authored by yours truly).
What say you? Do you have a story to tell? What are your thoughts on this topic? Do you have a topic of interest you would like discussed in this forum? Let me know.

Bob Larrivee, Director and Industry Advisor – AIIM
Ref. Originally posted on AIIM at http://www.aiim.org/community/blogs/expert/We-Offer-Content-Management

An Afterthought and Response,

"Content is King 
and So is Leveraging Big Data"
       
Going a step further from this: "The question is what does content management mean to you in relation to addressing your business requirements? Peel this back again and the real question becomes, what are your business requirements in relation to content and records management and does the application you are considering deliver what you need now and provide a way to grow in the future?"

...I believe it is important to ask:

"Does your organization recognize and value the power of data/content and the systems/solutions that can leverage it to your advantage?"

Many organizations across several industries know that "content can be king" within their organizational setting, and that it can serve as a key area facilitating advantage, efficiency, and growth in their wider industry and marketplace. As a result, the understandable trend is a shift toward making this a vital, high-level strategic decision that can clearly separate them from organizations with less adaptable, less compatible, and poorly tailored and managed data/content management systems. Many of the latter are experiencing measurable and significant setbacks as a result.

Thank you for posting on this topic. I am quite sure many of us have come across critical real-life business scenarios and systems that serve as great case studies and examples that teach us valuable lessons worth sharing. Have a great day. Best Regards, Jai Krishna Ponnappan :)



Disaster Recovery Virtualization Using the Cloud


Disaster recovery is a necessary component of any organization's plans. Business data must be backed up, and key processes like billing, payroll, and procurement need to continue even if an organization's data center is disabled by a disaster. Over time, two distinct approaches to disaster recovery have emerged: dedicated and shared models. While effective, these approaches often force organizations to choose between cost and speed.

We live in a global economy that is balanced around and driven by a 24x7 culture. Nobody likes to think about it, but in order to survive and thrive, disaster recovery must be part of any organization's plans. Even with a flat IT budget, you need seamless failover and failback of critical business applications. The flow of information never stops, and commerce in our global business environment never sleeps. With the demands of an around-the-clock world, organizations need to start thinking in terms of application continuity rather than infrequent disasters, and disaster recovery service providers need to enable more seamless, nearly instantaneous failover and failback of critical business applications. Yet given the reality that most IT budgets are flat or even reduced, these services must be provided without incurring significant upfront or ongoing expenditures.

Cloud-based business resilience can provide an attractive alternative to traditional disaster recovery, offering both the more rapid recovery time associated with a dedicated infrastructure and the reduced costs that are consistent with a shared recovery model. With pay-as-you-go pricing and the ability to scale up as conditions change, cloud computing can help organizations meet the expectations of today's frenetic, fast-paced environment, where IT demands continue to increase but budgets do not. This white paper discusses traditional approaches to disaster recovery and describes how organizations can use cloud computing to help plan for both the mundane interruptions to service, such as cut power lines, server hardware failures, and security breaches, and the more infrequent disasters. The paper provides key considerations when planning the transition to cloud-based business resilience and when selecting your cloud partner.



A Qualitative Trade-Off Between Cost & Speed


When choosing a disaster recovery approach, organizations have traditionally relied on the level of service required, as measured by two recovery objectives:

• Recovery time objective (RTO): the amount of time between an outage and the restoration of operations
• Recovery point objective (RPO): the point in time to which data is restored, reflecting the amount of data that will ultimately be lost during the recovery process

In most traditional disaster recovery models, whether dedicated or shared, organizations are forced to make a tradeoff between cost and speed of recovery.
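A small worked example may help make the two recovery objectives concrete. The Python sketch below takes an assumed outage timeline and checks the achieved recovery time and data-loss window against illustrative targets; all timestamps and thresholds are invented.

```python
# Illustrative reading of RTO and RPO: given an outage timeline, compute the
# achieved recovery time and data-loss window and compare them with targets.
# All timestamps and targets are made up for this example.

from datetime import datetime, timedelta

rto_target = timedelta(hours=4)        # allowed time to restore operations
rpo_target = timedelta(hours=1)        # allowed window of lost data

outage_start       = datetime(2015, 3, 1, 2, 0)
last_usable_backup = datetime(2015, 3, 1, 1, 15)   # most recent restorable data
service_restored   = datetime(2015, 3, 1, 5, 30)

achieved_rto = service_restored - outage_start      # 3 h 30 min of downtime
achieved_rpo = outage_start - last_usable_backup    # 45 min of lost data

print("RTO met:", achieved_rto <= rto_target)   # True
print("RPO met:", achieved_rpo <= rpo_target)   # True
```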


In a dedicated model, the infrastructure is dedicated to a single organization. This type of disaster recovery can offer a faster time to recovery compared to other traditional models because the IT infrastructure is mirrored at the disaster recovery site and is ready to be called upon in the event of a disaster. While this model can reduce RTO because the hardware and software are pre-configured, it does not eliminate all delays. The process is still dependent on receiving a current data image, which involves transporting physical tapes and a data restoration process. This approach is also costly because the hardware sits idle when not being used for disaster recovery. Some organizations use the backup infrastructure for development and test to mitigate the cost, but that introduces additional risk into the equation. Finally, the data restoration process adds variability into the process.

In a shared disaster recovery model, the infrastructure is shared among multiple organizations. Shared disaster recovery is designed to be more cost effective, since the off-site backup infrastructure is shared between multiple organizations. After a disaster is declared, the hardware, operating system and application software at the disaster site must be configured from the ground up to match the IT site that has declared a disaster, and this process can take hours or even days.


Measuring level of service required by RPO and RTO 




 Traditional disaster recovery approaches include shared and dedicated models 


The pressure for continuous availability 


According to a CIO study, organizations are being challenged to keep up with the growing demands on their IT departments while keeping their operations up and running and making them as efficient as possible. Their users and customers are becoming more sophisticated users of technology. Research shows that usage of Internet-connected devices is growing about 42 percent annually, giving clients and employees the ability to quickly access huge amounts of storage. In spite of the pressure to do more, IT departments are spending a large percentage of their funds to maintain the infrastructure that they have today. They are also not getting many significant budget increases; budgets are essentially flat.

With dedicated and shared disaster recovery models, organizations have traditionally been forced to make tradeoffs between cost and speed. As the pressure to achieve continuous availability and reduce costs continues to increase, organizations can no longer accept tradeoffs. While disaster recovery was originally intended for critical batch "back-office" processes, many organizations are now dependent on real-time applications and their online presence as the primary interface to their customers. Any downtime reflects directly on their brand image, and interruption of key applications such as e-commerce, online banking, and customer self-service is viewed as unacceptable by customers. The cost of a minute of downtime may be thousands of dollars.


Thinking in terms of interruptions and not disasters 


Traditional disaster recovery methods also rely on “declaring a disaster” in order to leverage the backup infrastructure during events such as hurricanes, tsunamis, floods or fires. However, most application availability interruptions are due to more mundane everyday occurrences. While organizations need to plan for the worst, they also must plan for the more likely—cut power lines, server hardware failures and security breaches. While weather is the root cause of just over half of the disasters declared, note that almost 50 percent of the declarations are due to other causes. These statistics are from clients who actually declared a disaster. Think about all of the interruptions where a disaster was not declared. In an around-the-clock world, organizations must move beyond disaster recovery and think in terms of application continuity. You must plan for the recovery of critical business applications rather than infrequent, momentous disasters, and build resiliency plans accordingly.



 Time to recovery using a dedicated infrastructure 



Time to recovery using a shared infrastructure. The data restoration process must be completed as shown, resulting in an average of 48 to 72 hours to recovery. 




Types of Potential business interruptions



Cloud-based Business Resilience is a Welcome New Approach 


Cloud computing offers an attractive alternative to traditional disaster recovery. "The cloud" is inherently a shared infrastructure: a pooled set of resources with the infrastructure cost distributed across everyone who contracts for the cloud service. This shared nature makes the cloud an ideal model for disaster recovery. Even when we broaden the definition of disaster recovery to include more mundane service interruptions, the need for disaster recovery resources is sporadic. Since all of the organizations relying on the cloud for backup and recovery are very unlikely to need the infrastructure at the same time, costs can be reduced and the cloud can speed recovery time.


Cloud-based business resilience managed services are designed to provide a balance of economical shared physical recovery with the speed of dedicated infrastructure. Because the server images and data are continuously replicated, recovery time can be reduced dramatically to less than an hour, and, in many cases, to minutes—or even seconds. However, the costs are more consistent with shared recovery.




Cloud-based business resilience offers several other benefits over traditional disaster recovery models:


Speed to recovery using cloud computing


• More predictable monthly operating expenses can help you avoid the unexpected and hidden costs of do-it-yourself approaches.
• Reduced up-front capital expenditure requirements, because the disaster recovery infrastructure exists in the cloud.
• Cloud-based business resilience managed services can more easily scale up based on changing conditions.
• Portal access reduces the need to travel to the recovery site, which can help save time and money.

A cloud-based approach to business resilience

Virtualizing disaster recovery using cloud computing

While the cloud offers multiple benefits as a disaster recovery platform, there are several key considerations when planning for the transition to cloud-based business resilience and in selecting your cloud partner. These include:

• Portal access with failover and failback capability
• Support for disaster recovery testing
• Tiered service levels
• Support for mixed and virtualized server environments
• Global reach and local presence
• Migration from and coexistence with traditional disaster recovery

The next few sections describe these considerations in greater detail.

Facilitating improved control with portal access

Disaster recovery has traditionally been an insurance policy that organizations hope not to use. In contrast, cloud-based business resilience can actually increase IT's ability to provide service continuity for key business applications. Since the cloud-based business resilience service can be accessed through a web portal, IT management and administrators gain a dashboard view of their organization's infrastructure.

Without the need for a formal declaration and with the ability to fail over from the portal, IT can be much more responsive to the more mundane outages and interruptions.

Building confidence and refining disaster recovery plans with more frequent testing

One traditional challenge of disaster recovery is the lack of certainty that the planned solution will work when the time comes. Typically, organizations test their failover and recovery only once or twice per year on average, which is hardly sufficient given the pace of change experienced by most IT departments. This lost sense of control has caused some organizations to bring disaster recovery "in house," diverting critical IT focus from mainline application development. Cloud-based business resilience provides the opportunity for more control and for more frequent and granular testing of disaster recovery plans, even at the server or application level.

Supporting optimized application recovery times with tiered service levels

Cloud-based business resilience offers the opportunity for tiered service levels that enable you to differentiate applications based on their importance to the organization and the associated tolerance for downtime. The notion of a "server image" is an important part of traditional disaster recovery. As the complexity of IT departments has increased, including multiple server farms with possibly different operating systems and operating system (OS) levels, the ability to respond to a disaster or outage has become more complex. Organizations are often forced to recover on different hardware, which can take longer and increase the possibility of errors and data loss. Organizations are implementing virtualization technologies in their data centers to help remove some of the underlying complexity and optimize infrastructure utilization. The number of virtual machines installed has been growing exponentially over the past several years.
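One way to picture tiered service levels is as a simple mapping from applications to recovery tiers, each with its own RTO, RPO, and recovery method, as in the hypothetical Python sketch below; the tier definitions and application names are illustrative only.

```python
# Illustrative tiered-recovery configuration: each application is assigned a
# tier whose RTO/RPO targets and recovery method reflect its tolerance for
# downtime. Tier names, applications, and targets are hypothetical.

RECOVERY_TIERS = {
    "tier_1": {"rto_minutes": 15,   "rpo_minutes": 5,    "method": "continuous cloud replication"},
    "tier_2": {"rto_minutes": 240,  "rpo_minutes": 60,   "method": "scheduled cloud snapshots"},
    "tier_3": {"rto_minutes": 2880, "rpo_minutes": 1440, "method": "traditional shared recovery"},
}

APPLICATION_TIERS = {
    "online_banking": "tier_1",
    "billing":        "tier_2",
    "internal_wiki":  "tier_3",
}

def recovery_plan(app: str) -> dict:
    """Look up the recovery expectations for one application."""
    return {"application": app, **RECOVERY_TIERS[APPLICATION_TIERS[app]]}

print(recovery_plan("online_banking"))
```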

According to a recent survey of Chief Information Officers, 98 percent of respondents either had already implemented virtualization or had plans to implement it within the next 12 months. Cloud-based business resilience solutions must offer both physical-to-virtual (P2V) and virtual-to-virtual (V2V) recovery in order to support these types of environments. Cloud-based business resilience requires ongoing server replication, making network bandwidth an important consideration when adopting this approach. A global provider should offer the opportunity for a local presence, thereby reducing the distance that data must travel across the network.

While cloud-based business resilience offers many advantages for mission-critical and customer-facing applications, an efficient enterprise-wide disaster recovery plan will likely include a blend of traditional and cloud-based approaches. In a recent study, respondents indicated that minimizing data loss was the most important objective of a successful disaster recovery solution. With coordinated disaster recovery and data backup, data loss can be reduced and the reliability of data integrity improved.


Cloud computing offers a compelling opportunity to realize the recovery time of dedicated disaster recovery with the cost structure of shared disaster recovery. However, disaster recovery planning is not something that is taken lightly; security and resiliency of the cloud are critical considerations.



Posted by Jai Krishna Ponnappan